Performance of loading One2Many fields

I have a custom report that reads data from the account.move.line model and it suffered from some performance issues.

My code was:

                                                                               
moves = Move.search([
        ('date', '>=', data['from_date']),
        ('date', '<=', data['to_date']),
        ('state', '=', 'posted'),
        ],
    order=[
        ('date', 'ASC'),
        ('number', 'ASC'),
        ])

records = []
for move in moves:
    for line in move.lines:
        records.append(...)  # dict with move and line data

This took 2 minutes 16 seconds on my laptop with nearly 15,000 records.

When I updated the same code to:

lines = MoveLine.search([
        ('move.date', '>=', data['from_date']),
        ('move.date', '<=', data['to_date']),
        ('move.state', '=', 'posted'),
        ],
    order=[
        ('move.date', 'ASC'),
        ('move.number', 'ASC'),
        ])

records = []
for line in lines:
    records.append(...)  # dict with move and line data

the execution time with the same set of records dropped to 11 seconds.

Is this expected?

Note that I’m testing on the 6.0 series.

Hi Sergi,

That is probably because the lines field on account.move has a context attribute, which forces Tryton to read the lines move by move.
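
For reference, this is roughly how such a One2Many is declared (a simplified sketch based on this thread, not copied from the actual account module; the exact context keys are assumptions):

from trytond.model import ModelSQL, fields
from trytond.pyson import Eval

class Move(ModelSQL):
    __name__ = 'account.move'
    # journal, period and date fields omitted for brevity
    lines = fields.One2Many('account.move.line', 'move', "Lines",
        context={
            'journal': Eval('journal'),
            'period': Eval('period'),
            'date': Eval('date'),
            })

Because the browse cache groups records per context, lines belonging to moves with different context values cannot be read in a single batch.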

I would be really glad if some sort of optimization was possible here.

I do not think this is really due to the context. Of course, the context reduces the number of lines read together, because the browse lists are per context.
But indeed the real bottleneck should be that in the first example you first fetch the moves, and then the ORM has to make one query per IN_MAX records to fetch the lines. So for 15,000 records that is at least 15 queries (compared to a single query, it is expected to be about 15 times slower).
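
A back-of-envelope sketch of that chunking (IN_MAX mimicked locally here; in trytond the splitting is done by trytond.tools.grouped_slice):

IN_MAX = 1000  # typical chunk size used by the ORM

def grouped_slice(ids, count=IN_MAX):
    # yield fixed-size chunks; each chunk becomes one SELECT ... WHERE id IN (...)
    for i in range(0, len(ids), count):
        yield ids[i:i + count]

move_ids = list(range(15000))
print(sum(1 for _ in grouped_slice(move_ids)))  # -> 15 queries instead of 1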

Now, about the context on Move.lines: I think it could be removed, as its main goal is to fill the default values for fields computed from the move. But only if on_change_move sets values on the get_move_field Function fields.
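
A hedged sketch of what that could look like on account.move.line (field names are taken from this thread; this is not the actual modules/account code):

from trytond.model import fields
from trytond.pool import PoolMeta

class MoveLine(metaclass=PoolMeta):
    __name__ = 'account.move.line'

    @fields.depends('move', '_parent_move.journal', '_parent_move.period',
        '_parent_move.date')
    def on_change_move(self):
        # fill the values that the get_move_field Function fields
        # would otherwise compute from the move
        if self.move:
            self.journal = self.move.journal
            self.period = self.move.period
            self.date = self.move.date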

@pokoli Is there a significant difference if you increase cache > record in your configuration (above the number of loaded records)?
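
For clarity, that is the record option of the [cache] section in trytond.conf; the value below is only an illustration (anything above the number of loaded records):

[cache]
record = 20000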

@ced Unless I’m mistaken, the fields journal, period and date are in the context, so that may mean hundreds of combinations for 15k records.

I expect that there are very few journals and that the date range is not bigger than a year. So it means at most 12 periods and 365 dates. As periods and dates are linked, the worst case is 365 times the number of journals.
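
A worked example of that bound, with illustrative numbers:

journals = 2   # assumed count, for illustration
dates = 365    # each date determines its period, so periods do not multiply
print(journals * dates)  # -> at most 730 distinct context combinations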

Indeed, our data was from the 1st of January to the 31st of August, so there were fewer dates.

Indeed, I just noticed that the program spends most of its time loading the first move.lines. I guess it is loading the move lines, and I can see that there are some queries reading the account_move_line table there.
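
A quick way to confirm that (a sketch, reusing moves from the first snippet):

import time

start = time.monotonic()
_ = moves[0].lines  # first access triggers the prefetching SELECTs
print('first lines access: %.1fs' % (time.monotonic() - start))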

On the other hand, the read method also reads in groups of IN_MAX records, so I guess there should not be any difference in the number of queries.

I have not tried.