Cache problem when inventorying

When I try to inventory a location where the products_by_location method returns a dictionary with more than 2000 keys, the server is not able to complete the loop

for product in Product.browse([line[1] for line in pbl]):

in the complete_lines method. It’s due to the current record cache size, which is 2000 by default.

To solve the problem I could:

  • Open a transaction that modifies the context key ‘_record_cache_size’ and set the limit above the number of products_by_location entries (see the sketch after this list).

  • Change the Tryton config entry related to the cache size (the record entry in the [cache] section).
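
For the first option, a minimal sketch, assuming pbl is the dictionary returned by products_by_location and that this code already runs inside a transaction:

from trytond.transaction import Transaction

# Raise the record cache limit for this block only, so browse()
# can keep all the products in the cache at once.
with Transaction().set_context(_record_cache_size=len(pbl) + 1):
    for product in Product.browse([line[1] for line in pbl]):
        ...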

But the underlying problem stands. So perhaps, it would be needed a change in the code.
I tried using a search instead of a browse:

for product in Product.search([('id', 'in', [key[1] for key, value in pbl.items() if value])]):

Because there is no need to check the same product more than once (once per lot) for its type or whether it is consumable, and there is no need to check it at all when there is no stock of the product, as the current code does.
If this modification is done, it is also necessary to check whether the product id exists in the dictionaries there:

product2type.get(product_id, '') not in ['goods', 'assets'] or product2consumable.get(product_id, False)
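
Putting the two fragments together, a hypothetical sketch of the modified loop (product2type and product2consumable are assumed to map product ids to the type and consumable flag checked by complete_lines):

# Hypothetical sketch; the search already makes the ids unique.
product_ids = [key[1] for key, value in pbl.items() if value]
for product in Product.search([('id', 'in', product_ids)]):
    if (product2type.get(product.id, '') not in ['goods', 'assets']
            or product2consumable.get(product.id, False)):
        continue
    # ... build the inventory line(s) for this product ...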

It seems to solve the problem, but I’m not sure what the best way is to deal with this type of cache problem, or whether I should create an issue.

Why? What is happening?

I want to inventory a location where there have been many movements of different products (more than 2000), so the products_by_location method returns a dictionary with the same number of keys and values; when browse is executed with this list of ids as argument, it freezes due to the current record cache size.

Which Tryton series are you using?

I’m using Tryton 5.2.

What does it mean? For me, the cache may not be used (or be less efficient) because of duplicate ids, but I do not see how that could prevent the loop from continuing.

For me, the solution should be:

for product in Product.browse({line[1] for line in pbl}):

The best would be to submit a change.

I don’t understand it, but that is what is happening (I don’t know if it is a bug).

It solves the problem, but what is the difference between calling the browse method with a set instead of a list as argument (apart from the size, because of the repeated ids)?

I will.

Are you sure the loop stops? Is it not just very slow?
For me, what happens is that if you have duplicated ids in the browse list, the index found in ModelStorage.__getattr__ is always that of the first occurrence, so it has to loop from almost the beginning of the list to find new unread ids. Also, as there are duplicated ids, the list of unread ids very quickly becomes sparse and no longer ordered (the cache is stored in a dictionary), so duplicated ids further ahead are considered already read once their first occurrence has been read.
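
As a toy model of that scan (not trytond’s actual code), assuming the cache refill looks up the current id with list.index() and then walks forward collecting unread ids:

# Toy model, not trytond's actual code.
def unread_ids(ids, current_id, cache):
    # list.index() returns the FIRST occurrence of current_id, so
    # with many duplicates the scan restarts near the start of the
    # list on every cache refill.
    start = ids.index(current_id)
    return [id_ for id_ in ids[start:] if id_ not in cache]

ids = [10, 10, 10, 11, 11, 12]
print(unread_ids(ids, 10, {10}))  # [11, 11, 12]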

If it really stops, could you pause the process and look at the traceback to find where it is stuck?

It just makes the ids unique.
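
A toy illustration (the exact key shape is an assumption: with stock_lot activated, the products_by_location keys carry a lot id in addition to the location and product ids, so the same product id repeats):

pbl = {(3, 10, 1): 5.0, (3, 10, 2): 2.0, (3, 11, None): 1.0}
print([key[1] for key in pbl])  # [10, 10, 11] - duplicates
print({key[1] for key in pbl})  # {10, 11} - unique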

Indeed you are right, it just gets too slow.

I created an issue.

Did you have the stock_lot module activated? That would explain why you have duplicated entries in pbl for the same product.

Yes, I have it activated.
