Huge delay running on OpenBSD

This seems rational; however, I have always had a huge delay.

I am running trytond + tryton 5.6 on a gigabit Ethernet network.
The operating system is OpenBSD 6.6 (amd64). I have tried both PostgreSQL and SQLite with storage on an in-memory file system (searching for the bottleneck).
Everything is installed from PyPI (pip), using Python 3.7.

Server hardware is not too powerful, but not a Pi: 8-core 2.4 GHz Atom, 8 GB RAM.

I just tried to open party, with only one party in the database. It took 7 seconds.
Python does not seem to use much CPU time, nor does PostgreSQL.

I must be doing something wrong…

If neither trytond nor the database is using the CPU, I think it could be a network issue. Maybe you should try to disable Nagle's algorithm on the server with TCP_NODELAY.
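For reference, disabling Nagle's algorithm on a Python socket is done with the `TCP_NODELAY` option at the `IPPROTO_TCP` level (not `SOL_SOCKET`). A minimal sketch:

```python
import socket

# Create a TCP socket and disable Nagle's algorithm.
# Note: TCP_NODELAY lives at the IPPROTO_TCP level, not SOL_SOCKET.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Verify the option took effect (getsockopt returns non-zero when set).
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))
s.close()
```

With Nagle disabled, small writes are sent immediately instead of being coalesced, which matters for chatty request/response protocols like JSON-RPC.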

Right. I believe this would be set in the application that creates the TCP socket.

I am guessing this would be set in werkzeug. Since setsockopt() takes a (level, option, value) triple, I replaced the single SO_REUSEADDR call with two calls:
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

However I do not know if this is the right approach.
It seems to me that Tryton is working a little faster, but it might be a placebo effect.
I will make some more tests tomorrow…

The default werkzeug server is not designed for performance. It is OK for small setups, but it is better to use a dedicated WSGI server for better performance.
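As a sketch of what that could look like: recent trytond versions expose a WSGI application (I believe at `trytond.application:app`, which reads `TRYTOND_CONFIG` and `TRYTOND_DATABASE_NAMES` from the environment; check the docs for your version), so it can be served by gunicorn with a small configuration file. The values below are illustrative, not recommendations:

```python
# gunicorn.conf.py -- minimal gunicorn configuration (illustrative values;
# the trytond module path and env-var setup are assumptions to verify
# against your trytond version's documentation).
import multiprocessing

bind = "0.0.0.0:8000"
# A common starting-point formula for worker count.
workers = multiprocessing.cpu_count() * 2 + 1

# Started with something like:
#   TRYTOND_CONFIG=/etc/trytond.conf TRYTOND_DATABASE_NAMES=database \
#       gunicorn -c gunicorn.conf.py 'trytond.application:app'
```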

I created Issue 9373: Disable nagle on tcp socket - Tryton issue tracker

How do you start the server? Do you pass -d database to trytond, or use the TRYTOND_DATABASE_NAMES environment variable to pre-register the modules and classes of the database?

Here (on OpenBSD too), I am seeing such a delay (maybe not as significant) on the first load of a view. I might try to look into it to see if I can locate the origin.

Thank you so much Cedric, I think you are the most active project maintainer I have ever interacted with.

Now testing with PostgreSQL (on SSD, same machine as trytond), 2.4 GHz 8-core Atom; started Tryton with nice -n -20 trytond -d database -c conf.
I loaded the party view with 5 parties in the database by selecting Party from the menu and pressing Enter at the same moment I started a stopwatch, stopping it when the view had loaded. The times below are probably 0.1-0.2 s off because of my slow biological reaction time.
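Rather than a stopwatch, the elapsed time of a call can be measured in code with `time.perf_counter`. A minimal sketch (the workload here is a stand-in; substitute whatever client call you want to time):

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Stand-in workload; replace with a real client request.
result, elapsed = timed(sum, range(1_000_000))
print("took %.3f s" % elapsed)
```

This removes the human reaction time from the measurement entirely.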

Tested with tryton on a machine on the same ethernet segment.

Without patch: 4.09-4.13 s; second load: 1.76 s.
With TCP_NODELAY patch: 3.00-3.02 s; second load: 1.03-1.15 s.

I will also test over a higher-latency network connection when I get home, where the difference seemed to be bigger.
All other parameters were the same.

semarie: If it is possible to preload the modules that would be great.

PS: CPU load of python3.7 running trytond peaks at 30% as opposed to approx. 12% before the patch. A good sign, I guess.


Is it also happening if you use a WSGI server like µwsgi?

On a different machine (i5 4690k) I get high CPU usage when something is being processed - this is normal.

For example, even running import_countries from proteus on the Atom machine gives me max. 19% CPU. On the i5 I get >95%.

I will use a different machine for now. I will post if I figure something out.

It looks like Python cannot take advantage of the cores on the Atom machine.

FYI: I think I will not apply the TCP_NODELAY patch because I do not think it is the real problem.

I tested with gunicorn as WSGI server.

Yes, I still have such a delay. But for me, it was the fact that the client is fetching the view. I have it only on first access (for each session).

For example, still with gunicorn, on a new client session, I have a 3 s delay (1 record in party):

[2020-05-28 15:11:31 +0000] [30664] [DEBUG] POST /test/
30664 8327690698608 [2020-05-28 15:11:31,248] INFO trytond.protocols.dispatcher <class ''>.get_keyword(*('tree_open', ['', 51], {'client': '51c7a555-b1e1-4489-811f-e1a34a6cc444', 'language': 'fr', 'language_direction': 'ltr', 'groups': [1, 2]}), **{}) from admin@
[2020-05-28 15:11:31 +0000] [13129] [DEBUG] POST /test/
13129 8327462641168 [2020-05-28 15:11:31,610] INFO trytond.protocols.dispatcher <class ''>.fields_view_get(*(116, 'tree', {'client': '51c7a555-b1e1-4489-811f-e1a34a6cc444', 'language': 'fr', 'language_direction': 'ltr', 'groups': [1, 2], 'view_tree_width': True}), **{}) from admin@
[2020-05-28 15:11:31 +0000] [30664] [DEBUG] POST /test/
30664 8327690698608 [2020-05-28 15:11:31,769] INFO trytond.protocols.dispatcher <class ''>.get(*('', '[]', None, {'client': '51c7a555-b1e1-4489-811f-e1a34a6cc444', 'language': 'fr', 'language_direction': 'ltr', 'groups': [1, 2]}), **{}) from admin@
[2020-05-28 15:11:31 +0000] [6106] [DEBUG] POST /test/
6106 8330549218480 [2020-05-28 15:11:31,849] INFO trytond.protocols.dispatcher <class ''>.view_toolbar_get(*({'client': '51c7a555-b1e1-4489-811f-e1a34a6cc444', 'language': 'fr', 'language_direction': 'ltr', 'groups': [1, 2]},), **{}) from admin@
[2020-05-28 15:11:32 +0000] [13129] [DEBUG] POST /test/
13129 8327462641168 [2020-05-28 15:11:32,520] INFO trytond.protocols.dispatcher <class ''>.search(*([], 0, 1000, [['name', 'ASC'], ['id', None]], {'client': '51c7a555-b1e1-4489-811f-e1a34a6cc444', 'language': 'fr', 'language_direction': 'ltr', 'groups': [1, 2]}), **{}) from admin@
[2020-05-28 15:11:32 +0000] [13129] [DEBUG] POST /test/
13129 8327462641168 [2020-05-28 15:11:32,569] INFO trytond.protocols.dispatcher <class ''>.read(*([1], ['active', 'code', 'code_readonly', 'lang', 'name', 'replaced_by', 'tax_identifier', 'lang.rec_name', 'replaced_by.rec_name', 'tax_identifier.rec_name', 'rec_name', '_timestamp'], {'client': '51c7a555-b1e1-4489-811f-e1a34a6cc444', 'language': 'fr', 'language_direction': 'ltr', 'groups': [1, 2]}), **{}) from admin@
[2020-05-28 15:11:33 +0000] [6106] [DEBUG] POST /test/
6106 8330549218480 [2020-05-28 15:11:33,574] INFO trytond.protocols.dispatcher <class ''>.resources(*(1, {'client': '51c7a555-b1e1-4489-811f-e1a34a6cc444', 'language': 'fr', 'language_direction': 'ltr', 'groups': [1, 2]}), **{}) from admin@

Are you using desktop client or sao?

I think the delay is more noticeable on the desktop client because more of its requests are done synchronously, while on sao everything is async (due to JavaScript).

Nothing that I have proven, just a feeling. So it would be great if you could confirm it.

For example, in your trace the client does not show anything until it has the “resources” method reply. On sao the screen will already be loaded in this case.


This looks like loading the XML view on the OpenBSD filesystem is very slow.

It is the desktop client.

gunicorn also being written in Python, it would be good to try another WSGI server, maybe µwsgi, which is written in C.

Of course, because after the first access the client keeps the result of most queries done to open a form in its cache. So it goes from 9 → 3 queries.

I think it would be good to measure the time of each request. On the demo server, it takes 1 s to open parties in a fresh session, so that is about 100 ms per request, and 500 ms on the second opening. According to your logs, each request seems to take about 300 ms.
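One way to measure each request server-side is a small WSGI middleware that logs the elapsed time per request; a minimal, trytond-independent sketch (the demo app below is a stand-in, but any WSGI application can be wrapped the same way):

```python
import time

def timing_middleware(app, log=print):
    """Wrap a WSGI app so every request logs its wall-clock duration."""
    def wrapped(environ, start_response):
        start = time.perf_counter()
        try:
            return app(environ, start_response)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            log("%s %s took %.1f ms" % (
                environ.get("REQUEST_METHOD", "-"),
                environ.get("PATH_INFO", "-"),
                elapsed_ms))
    return wrapped

# Demo with a trivial stand-in WSGI app.
def demo_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

app = timing_middleware(demo_app)
body = app({"REQUEST_METHOD": "GET", "PATH_INFO": "/test/"},
           lambda status, headers: None)
```

Pointing the WSGI server at the wrapped app would then show which of the nine first-load requests dominates the 3 s.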
One thing that could be investigated is how the connection to the database is configured. Sometimes localhost is slow to be resolved. But it is even better to use a Unix socket with postgresql:///.
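If I am not mistaken, that is set through the `uri` option in the `[database]` section of trytond.conf; leaving the host part empty should make the driver connect over the local Unix socket instead of TCP (a sketch, to be checked against the configuration docs for your version):

```
[database]
# Empty host: connect via the local Unix socket instead of TCP.
uri = postgresql:///
```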

@semarie Does the TCP_NODELAY patch improve the speed for you?

First, a note about the host where I did the measurements: it isn't a performant host, and I only run trytond on it for testing purposes (only 2 GB of RAM, and I run desktop applications on it, like a web browser eating a lot of RAM). So I assume the limiting factors will be the database and RAM.

I don't see a significant difference with or without it.

It is already using a socket to connect to PostgreSQL.

uwsgi isn't available as a package on OpenBSD, and I don't have time to port it. I did a second test with gevent (based on libev); the timing is comparable.