Build Docker Images

Rationale

Since the retirement of Neso, there is no easy setup for trying Tryton in a local instance; only the demo server is available. Docker images could ease local testing of the software.

There is an extra benefit: these images can be used as base images by other people to customize their Tryton installations.

Proposition

Create an official image for each supported series. Each image should contain an initialized database with no modules installed. We can also create some variants:

  • Minimum: An image that only contains the trytond server.
  • Modules: An image that contains all the modules available for the series.
  • Demo: The Modules image with a demo database initialized (using the demo script).

We should also give these images visibility on the Tryton website.

Discussion

For testing purposes, it might be great to build images per Python version (python2 vs python3 vs pypy) and per database back-end (postgresql, sqlite, mysql).
It might also be great to provide an image of the trunk version, which could be used for testing the development version (especially during the feature-freeze timeframe).

Should we also install sao, if available for the series?

Implementation

https://bugs.tryton.org/issue6876

For me, there are many points that are not clear.
Which base image and package manager will be used? It is important to keep the images up to date, especially for security issues.
I guess there will be an image per series, so 4 images. When will they be updated and how will we keep track of images that need to be updated?
As far as I know, it is bad practice to put the server and the database in the same Docker image, so I guess it is better, for example, to only provide the PostgreSQL back-end.
About the modules, the problem with taking all of them is that some modules like ldap_authentication or webdav bring many dependencies, and the image would no longer be small in the Docker sense.

For the database, why not just use an SQLite database (to be self-sufficient) and activate PostgreSQL if a postgres container is linked?
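As an illustration, the entrypoint could detect a linked container through the environment variables Docker injects for legacy links; everything below (variable names, credentials, paths) is a hypothetical sketch, not an agreed interface:

```shell
#!/bin/sh
# Hypothetical docker-entrypoint.sh fragment: default to SQLite, switch
# to PostgreSQL when a linked "postgres" container is detected via the
# POSTGRES_PORT_5432_TCP_ADDR variable that legacy links inject.
choose_db_uri() {
    if [ -n "${POSTGRES_PORT_5432_TCP_ADDR:-}" ]; then
        # Credentials here are placeholders, not a proposed default.
        echo "postgresql://tryton:tryton@${POSTGRES_PORT_5432_TCP_ADDR}:5432/"
    else
        echo "sqlite://"
    fi
}

# Write the computed URI into a trytond configuration file.
printf '[database]\nuri = %s\n' "$(choose_db_uri)" > /tmp/trytond.conf
cat /tmp/trytond.conf
```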
Regarding image size, what really makes the image heavy is unoconv (it installs OpenOffice).

I agree that we should provide SQLite by default, but allow configuring the database URL and other parameters via environment variables, as done for example in the prestashop image.
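A minimal sketch of such environment-variable handling in a docker-entrypoint.sh; `TRYTOND_DATABASE_URI` and the other names are assumptions, not an existing interface:

```shell
# Hypothetical entrypoint fragment: parameters come from environment
# variables with safe defaults, in the spirit of the prestashop image.
: "${TRYTOND_DATABASE_URI:=sqlite://}"   # database URL, SQLite by default
: "${TRYTOND_DATABASE_NAME:=tryton}"     # database name

# Render the values into a configuration file for trytond.
cat > /tmp/trytond-env.conf <<EOF
[database]
uri = ${TRYTOND_DATABASE_URI}
EOF
echo "database: ${TRYTOND_DATABASE_NAME} at ${TRYTOND_DATABASE_URI}"
```

Running the container with `-e TRYTOND_DATABASE_URI=postgresql://…` would then override the default without rebuilding the image.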

We can make a list of modules to skip if desired. As long as it is documented, it won't cause any problem.

[quote=“ced, post:2, topic:225”]
I guess there will be an image per series, so 4 images. When will they be updated and how will we keep track of images that need to be updated?
[/quote]

It would be great if we could upload the images on every release. Maybe the release scripts could be modified to also build the Docker images.

I doubt it will be doable for two reasons:

  • I run OpenBSD to make the releases, so no Docker.
  • A release should not be blocked because something is broken on the Docker side.

Also, relying only on the releases means that PyPI will be the package manager of this system. I do not think that is wise because it is difficult to manage security updates with it, and dependencies are not very well managed (installing a new package can upgrade another one in a way that breaks an already installed package).

Understood, so we can manage it as a separate process after the release, so Docker does not prevent releasing things.

[quote=“ced, post:6, topic:225”]
Also, relying only on the releases means that PyPI will be the package manager of this system. I do not think that is wise because it is difficult to manage security updates with it, and dependencies are not very well managed (installing a new package can upgrade another one in a way that breaks an already installed package).
[/quote]

But PyPI is the only package manager that always contains all the releases, as distribution packages take some time to catch up with upstream.

To avoid installing new packages that may break an already installed package, I only see one solution:

  • When creating the image for a new branch, generate a requirements file with all the dependencies pinned, so when upgrading the modules we only bump the module versions and regenerate the image with the same dependency versions as the original build.
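A sketch of that pinning workflow, assuming pip is the installer (the file name is illustrative):

```shell
# At branch creation: record the exact version of every installed
# dependency as "name==version" lines into a series-specific pin file.
python3 -m pip freeze --all > /tmp/requirements-4.4.txt

# Later rebuilds would install exactly those versions first, e.g.:
#   pip install -r requirements-4.4.txt
# and then only bump the Tryton module versions.
head -n 3 /tmp/requirements-4.4.txt
```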

P.S: I can build the docker images if you want as I have docker on my system.

This is precisely why it will be wrong to use PyPI because it forces us to do the job of package maintainer. We do not have the resources to track updates on all the dependencies.

Why not run pip install -U from the docker-entrypoint to be sure to have up-to-date dependency versions on startup?
The target is to have an easy to use trytond server to discover the application

Because, as I said, this is not a reliable process. pip is a very poor package manager compared to older ones like dpkg, portage, rpm, etc.

I do not think it should be the only goal, especially because if it works well at the beginning people will keep it for production (we have already seen demo installations become production by usage).
If Tryton is going to publish docker images, it should be done correctly for long term usage.

I agree that if we publish good images they will be used in production environments, and we should keep this in mind. But if we follow the Docker philosophy, we should create several composable images that are upgraded when a new image is built. Maybe we can install all the required dependencies in a base image and build, on top of it, an image containing only the modules. This way, when the modules image is rebuilt, the dependencies image does not need to be rebuilt.
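A sketch of that layering, with hypothetical image names; the install mechanism shown here is only one of the options being discussed, the point is the split into two images:

```dockerfile
# --- Dockerfile.base: dependencies only, rebuilt rarely ---
# (image names and package choices are illustrative)
FROM debian:stretch
RUN apt-get update \
    && apt-get install -y --no-install-recommends python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*
# built and tagged e.g. as tryton/base:4.4

# --- Dockerfile.modules: only the Tryton modules, on top of the base ---
# Rebuilding this image reuses tryton/base:4.4 as-is, so the dependency
# layer is not rebuilt.
FROM tryton/base:4.4
RUN pip3 install "trytond>=4.4,<4.5"
```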

How do you install those dependencies?

How do you ensure that you are not missing a dependency?

We can install them via the system package manager or via PyPI. I don’t mind.

They can be extracted from setup.py.

But how do you know them if you do not install the Tryton modules?

It does not look like a reliable method. Also, those names may not match the names of the system packages.

Also, if there is a system package, it should always be used. Mixing package systems is bad practice.

Agreed. After reviewing the Debian packages I see they are quite up to date, so maybe we can use the latest Debian stable version and the tryton-modules-all metapackage to install all the modules and dependencies on the system.

After some research I found that @yangoon has them available on GitHub: mbehrle/docker-tryton-server (a Debian-based Docker image using debian.tryton.org).

That sounds logical to me.
But in the @yangoon image, I do not see how it connects to an external database, nor how it uses an external file-store.
Also, the image should provide a way to upgrade the database and a way to customize the configuration files.

I have started to work on providing a Docker image for 4.4 and later.
I chose:

  • to use debian:stretch as the base image because it is the distribution on which Drone runs the tests.
  • to use python3 because I think we should always provide the latest version of every dependency.
  • to use standard distribution packages as much as possible, except for packages that come from Tryton.
  • to use an external database, always PostgreSQL, to follow best practice.
  • to also provide sao, as it is served by trytond.
  • to run with a non-root user for security.
  • to provide a volume to store attachments.
  • to use the default configuration for everything except listening on an external interface. As trytond can load many configuration files, I think the best option is to use Docker configs.
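For the last point, a sketch of what such an extra configuration file could contain, assuming it is mounted through Docker configs (the path and values are illustrative; `[web] listen` is the trytond option for the listening address):

```
# Extra trytond configuration, mounted e.g. as /etc/trytond/docker.conf
# via `docker config` or a bind mount; it only overrides the listen
# address so the server is reachable from outside the container.
[web]
listen = 0.0.0.0:8000
```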

Will the base image be stretch with the distribution's own Python, or the Python image on Docker Hub based on stretch with the official Python [1]?

[1] https://github.com/docker-library/python/blob/cf179e4a7b442b29d85f521c2b172b89ef04beef/3.6/stretch/Dockerfile

What would be the point?

It was a question, not a proposal :slight_smile:

The second option allows changing the Python version in a ‘simple’ way while the rest of the image is a standard Debian system, but we understand that the first gives more control over the deployment.

We use the ‘official’ Python images from Docker Hub for our Django/Flask developments and have not encountered any issues.

We were just dockerizing Tryton 4.5 (trying it out) with 3.6-slim, but if the official image will be based on a pure Debian image we will change that, hence the question.