Optimization of 'stock_supply'

We have a client with the following volumes on a v6.0:

  • ~3400 purchasable products
  • ~1050 producible products
  • maximum BOM sub-levels: 5
  • needs calculation period: 6 months
  • ~150 sales orders per month
  • ~290 customer shipments
  • ~90 purchase orders per month

The ‘stock_supply’ cron takes about 4h30 to generate the requests. After some investigation, I realized that most of this time is spent in the ‘get_shortage’ method. We would need to run the generation of supply requests twice a day, which is impossible under these conditions.
Has anyone encountered this kind of problem before? Do you have any recommendations for reducing this time?

Indeed, get_shortage computes the stock quantity for each date. Over a long period this can generate a lot of calls to products_by_location (with child locations).
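
To illustrate the cost, here is a rough sketch of the pattern (simplified, not the actual Tryton code): one products_by_location call per date of the period, each recomputing the stock with child locations.

```python
import datetime as dt

from trytond.pool import Pool
from trytond.transaction import Transaction


def quantities_per_date(product_ids, location_ids, start, stop):
    "One expensive products_by_location call per date of the period."
    pool = Pool()
    Product = pool.get('product.product')
    quantities = {}
    date = start
    while date <= stop:
        # Recompute the stock (with child locations) up to `date`:
        # this repeats most of the work of the previous iterations.
        with Transaction().set_context(stock_date_end=date):
            quantities[date] = Product.products_by_location(
                location_ids, with_childs=True,
                grouping_filter=(product_ids,))
        date += dt.timedelta(days=1)
    return quantities
```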

I guess you mean the supply period. It seems huge to me if you can only purchase twice a year but need to compute purchase requests twice a day.
Your supply period should be about a few days if you plan to process purchase requests twice a day and receive shipments almost every day. The supply period defines the period over which we try to combine purchases.
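
To make that last sentence concrete, here is a small illustration (not Tryton's actual code) of how a supply period combines needs: needs whose dates fall within the supply period of each other end up in the same purchase.

```python
import datetime as dt


def combine_needs(need_dates, supply_period):
    """Group dates into purchase buckets no wider than supply_period."""
    orders = []
    for date in sorted(need_dates):
        if orders and date - orders[-1][0] <= supply_period:
            orders[-1].append(date)  # combined into the open purchase
        else:
            orders.append([date])    # starts a new purchase
    return orders


needs = [dt.date(2022, 1, 3), dt.date(2022, 1, 5), dt.date(2022, 2, 1)]
# With a 7-day supply period the first two needs become one purchase:
print(combine_needs(needs, dt.timedelta(days=7)))
# [[date(2022, 1, 3), date(2022, 1, 5)], [date(2022, 2, 1)]]
```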

Otherwise, the get_shortage implementation could probably be optimized to progressively increase the step between the dates checked, and then use a dichotomic search to find the date where the quantity goes below the minimum.

They need to anticipate purchase orders because of significant fluctuations in delivery times, and the materials are often expensive. From a cash-flow point of view, several small orders spread over the year with 6 months of anticipation are much more interesting than 2 orders per year.

Interesting. For the dichotomic search we would need a sorted list; were you thinking of sorting on the quantities?

That is not what your configuration does.
You must set the maximum lead time for each product supplier.
If you set a small supply period, it will generate many small orders every day.

The search is on the date.

I didn’t quite understand: the ‘get_shortage’ method returns the first date where the stock goes below the minimum quantity, and this can happen several times in the period. How can you find the first occurrence with a dichotomic search on the dates?

The idea would be to increase the padding progressively and, once a quantity below the minimum is found, use a dichotomic search to find the date of the smallest quantity inside that date range.
Of course, if the padding is too big and contains two local minima, the search may not find the right minimum. But since the padding increases progressively, this should only happen for far-away dates, which will be corrected when the supply is computed again from a closer date.
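
A minimal sketch of that idea, assuming a hypothetical quantity_at(date) callable and a minimum threshold (this is not the actual get_shortage code):

```python
import datetime as dt


def find_shortage(start, stop, minimum, quantity_at):
    """Return the first date in [start, stop] where quantity_at(date)
    is below minimum, or None.  quantity_at is a hypothetical callable
    standing in for the stock computation at a given date."""
    step = 1
    previous = date = start
    while date <= stop:
        if quantity_at(date) < minimum:
            # A shortage lies in [previous, date]: narrow it down with
            # a dichotomic search.  If the padding grew large enough to
            # contain two local minima, this may land on the wrong one,
            # as noted above.
            low, high = previous, date
            while low < high:
                mid = low + dt.timedelta(days=(high - low).days // 2)
                if quantity_at(mid) < minimum:
                    high = mid
                else:
                    low = mid + dt.timedelta(days=1)
            return low
        previous = date
        date += dt.timedelta(days=step)
        step *= 2  # increase the padding progressively
    return None
```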

It looks like the ideal solution may be to reuse part of the query from compute_quantities_query() in a query grouped by date, returning the total stock per day via a window function.
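
Something along these lines, as a hypothetical and much simplified sketch (the real compute_quantities_query() also handles move states, units and child locations): aggregate the move quantities per day, then turn them into a running total with a window function.

```python
# Hypothetical, simplified SQL; the table and column names loosely
# follow stock_move and ignore the filters the real query applies.
DAILY_STOCK_SQL = """
SELECT
    product,
    date,
    SUM(quantity) OVER (
        PARTITION BY product ORDER BY date
    ) AS quantity_on_date
FROM (
    SELECT
        product,
        effective_date AS date,
        SUM(CASE WHEN to_location = %(location)s
                 THEN internal_quantity
                 ELSE -internal_quantity END) AS quantity
    FROM stock_move
    WHERE to_location = %(location)s OR from_location = %(location)s
    GROUP BY product, effective_date
) AS per_day
ORDER BY product, date
"""
```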

Even without a window function, a single query that returns the quantities added or subtracted per day, instead of one query per day, may be worth it.
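
For instance (a sketch under the assumption that one query can return the (date, delta) rows), the running level per date can then be rebuilt in Python:

```python
import datetime as dt


def running_stock(start_quantity, daily_deltas):
    """daily_deltas: (date, delta) rows from a single query."""
    quantity = start_quantity
    levels = {}
    for date, delta in sorted(daily_deltas):
        quantity += delta
        levels[date] = quantity
    return levels


deltas = [(dt.date(2022, 1, 2), -5), (dt.date(2022, 1, 4), 20)]
print(running_stock(10, deltas))
# {date(2022, 1, 2): 5, date(2022, 1, 4): 25}
```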

Another issue we’ve found with customers that have many products is that the current implementation tries to compute requests for all products, regardless of whether they have moves (or order points) in the period being computed. This means that products_by_location is used to compute the stock for 1000 products at each iteration, but maybe only 10% of them have moves, so we make 10 times more PBL calls than required.

In the example given by @2cadz, it is probably the case that of the 1050 producible products at least 100 should not even be considered, which would reduce the number of PBL calls by 50%.
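
A hedged sketch of that filtering (the fields are from the standard stock.move model, but the exact domain the supply code would need is more involved):

```python
from trytond.pool import Pool


def products_with_future_moves(product_ids, start, stop):
    "Only these products need the full per-date shortage computation."
    pool = Pool()
    Move = pool.get('stock.move')
    moves = Move.search([
            ('product', 'in', product_ids),
            ('planned_date', '>=', start),
            ('planned_date', '<=', stop),
            ('state', 'in', ['draft', 'assigned']),
            ])
    return {move.product.id for move in moves}
```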

It might be doable by introducing a virtual grouping key ‘date’ which would use the proper SQL expression for the date.
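
Usage could then look roughly like this (a sketch; the exact grouping order and context keys should be checked against the merged change):

```python
from trytond.pool import Pool
from trytond.transaction import Transaction

pool = Pool()
Product = pool.get('product.product')
with Transaction().set_context(
        stock_date_start=start, stock_date_end=stop):
    # One call returns the quantities for the whole period,
    # keyed by (location, date, product).
    quantities = Product.products_by_location(
        location_ids, with_childs=True,
        grouping=('date', 'product'))
```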

The quantity must be computed for every product at least for the start date, because it may be below 0 without any move in the future. Products without an order point must also be computed, because there is an implicit order point at quantity 0 for all products.

So we will need two branches: one for products without future moves and another (the current code) for products with future moves.
That does not seem too complicated.
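
A sketch of that split, with hypothetical callables standing in for the real computations (note the follow-up below, which concluded the split is not needed once the single-query approach landed):

```python
def compute_shortages(products, start, stop,
                      has_future_moves, quantity_at, minimum_for,
                      get_shortage):
    """Sketch of the proposed split; the four callables are
    hypothetical stand-ins for the real computations."""
    shortages = {}
    for product in products:
        if not has_future_moves(product, start, stop):
            # Only the start date matters: without future moves the
            # quantity cannot change later in the period.
            if quantity_at(product, start) < minimum_for(product):
                shortages[product] = start
        else:
            # Current code path: scan the dates of the period.
            shortages[product] = get_shortage(product, start, stop)
    return shortages
```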

Here is the implementation: Allow grouping by date products by location (#11639) · Issues · Tryton / Tryton · GitLab

With Compute shortage using a single query (#11640) · Issues · Tryton / Tryton · GitLab, I do not think there will be any major gain from dividing the code path. Indeed, we can simply not check products whose quantity has not changed since the previous date.
Also, the query to find whether a product has moves over the period would be a newly added query to run, while the products_by_location query already returns this information.
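
In other words (a sketch assuming the single query from #11640 already yields the quantity per product per date):

```python
def dates_to_check(quantities_per_day):
    """quantities_per_day: {date: {product_id: quantity}}.
    Yield only the (date, product_id, quantity) triples where the
    quantity changed since the previous date."""
    previous = {}
    for date in sorted(quantities_per_day):
        for product_id, quantity in quantities_per_day[date].items():
            if previous.get(product_id) == quantity:
                continue  # unchanged: no new shortage can start here
            previous[product_id] = quantity
            yield date, product_id, quantity
```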

The optimization works perfectly: we went from ~4h30 of processing to ~1h for a period of 6 months, and 1h45 over a period of 3 years.
Thanks
