Taking care of infrastructure: code review

FWIW, KDE uses GitLab, and it was really easy to contribute something (for me as a non-developer). Easier than GitHub and the Tryton environment.

I identified a future problem with our infrastructure, I checked past proposals that did not happen, and I made a proposal with a clear plan of action and a budget estimate.
Now I do not see the point of making another proposal if it does not come with a clear plan of action and a budget, because otherwise they cannot be compared.
I did some more research and prototyping. I'm sure a review extension can be added and fully integrated into roundup without much work (not more than ~500 SLOC) and without adding extra maintenance to roundup.


Cost should not be the primary driver. Bolting something on here and fixing something there is most likely cheaper, but you end up in an unmaintainable environment, like many ERP customers who extended things 'just here and there' and now can't do upgrades any more. You know the game.


‘Not much work’ or ‘2 weeks of effort’, as written in the first post?

The current setup may fit your way of working perfectly, but it has a huge learning curve for new contributors. A standard platform would allow sharing responsibilities, and it would help us focus on the goal of producing the world's best open-source ERP system instead of fixing things that are no longer maintained upstream.

To sum it up in one sentence: hg is dead. Learn git.

I see no real proposal in your answer so I’m just ignoring it.

Why am I not really surprised by this answer?
My proposal is to think about a future-proof workflow for development. GitLab may be one option.
I have no experience setting up GitLab (or alternatives), but I trust that other community members do.

As a daily user of both Mercurial and Git, I have to admit that Mercurial is more intuitive than Git. So I will be against a change of version control system (it also has a cost with no benefit).

It may be possible to keep using the Mercurial repositories thanks to Heptapod.
The main problem is that it currently has no support for reviews spanning multiple projects, and they do not consider it a feature to be implemented soon.

So I only see two options here:

  • Use a single repository with heptapod.
  • Develop our custom review project following @ced's proposal.

I'm not a big fan of developing a custom solution because it will be too costly to reach the same level of features as GitLab/Heptapod (I'm thinking of web_ide). A standard solution can also be used by companies to develop their own modules following the same patterns and rules as the Tryton community, easing the process of getting familiar with the Tryton contribution process.

Having said that, I still do not have a clear feeling about which is the best solution for the project.

I wasn't aware that the lack of a tested proposal was a blocker.

I made a shell script that does such a conversion:

  • take care of all open branches (closed branches could easily be added)
  • keep all tags but rename them to avoid conflicts (“5.8.0” in “trytond/” is renamed to “trytond-5.8.0”)
  • keep the whole history (no commit rewrite)

There is one commit per subrepository to take care of the tag renames and to move the files under a per-project directory; all subrepositories are then merged (hg pull -f + hg merge).

Some additional work might be necessary: for example, the tags aren't 100% clean; some tags with the old names are still there, duplicating the new ones.
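
For illustration, here is a minimal sketch of that approach, assuming the subrepositories are checked out next to the new mono-repository. Repository names, paths and commit messages are only examples, and the real script also has to handle open branches and other edge cases:

    #!/bin/sh
    # Minimal sketch of the merge approach described above (illustrative
    # only: names and messages are examples, open branches are not handled).
    set -e
    hg init monorepo
    cd monorepo
    first=1
    for repo in trytond tryton proteus; do
        # Remember the current mono-repository head to merge against later.
        [ "$first" = 1 ] || main=$(hg log -r tip --template '{node}')
        # Pull the full, unrelated history of the subrepository.
        hg pull -f "../$repo"
        hg update -C tip
        # One commit per subrepository: prefix its tags to avoid conflicts
        # ("5.8.0" becomes "trytond-5.8.0", GNU sed) and move its files
        # under a per-project directory.
        [ -f .hgtags ] && sed -i "s/^\([0-9a-f]*\) /\1 $repo-/" .hgtags
        for f in $(hg files); do
            [ "$f" = .hgtags ] && continue
            mkdir -p "$repo/$(dirname "$f")"
            hg rename "$f" "$repo/$f"
        done
        hg commit -m "Move $repo under $repo/ and rename its tags"
        # Merge into the mono-repository; :union keeps both sides of
        # .hgtags, the only path present on both sides.
        if [ "$first" = 0 ]; then
            hg merge --tool :union "$main"
            hg commit -m "Merge $repo"
        fi
        first=0
    done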

This is a first step, but for me the complex and difficult tasks have always been:

  • making it work with CI (we cannot run all the tests for every commit; it already takes more than 8 hours)
  • designing new release scripts
  • the readthedocs build

But even with all of that solved by a mono-repository, there is still the problem of code review.

It shouldn't be very different from what we have now, but I don't know where to find how it is done currently.

It is about generating, from a changeset, the list of directories that contain a .drone.yml file.

It is mostly: changeset → list of files → for each file take dirname(file) → for each directory, check whether a .drone.yml exists: if yes, keep the directory; if not, push dirname(directory) onto the list to process (look at
)
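
A small sketch of that walk (assuming it runs at the repository root and receives the changeset id as its first argument; everything else is illustrative):

    #!/bin/sh
    # From a changeset, find for each modified file the nearest ancestor
    # directory that contains a .drone.yml.
    rev="$1"
    hg status --change "$rev" --no-status | while read -r file; do
        dir=$(dirname "$file")
        # Walk up until a .drone.yml is found or the repository root is reached.
        while [ "$dir" != "." ] && [ ! -f "$dir/.drone.yml" ]; do
            dir=$(dirname "$dir")
        done
        [ -f "$dir/.drone.yml" ] && echo "$dir"
    done | sort -u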

Yes, things would need to be done here.

Here, I don't know how it is done currently. Could you point me to some documentation on how it is done?

It is a completely new design.

Drone does not know the changeset inside the pipeline (and the image is not supposed to have hg).
Also, Drone only takes care of the .drone.yml at the top of the repository, which means a single .drone.yml must manage every build. This will be difficult for projects like trytond-gis, which does not work with the postgresql service but with postgis.

It uses the generic webhook of readthedocs, so a different hook is called by each repository on incoming changesets.
And again, readthedocs looks for a single configuration file to build the documentation.

I guess you see now why I said that moving to a hosted platform mainly means rebuilding the whole infrastructure. It will be much harder and riskier, and will take more time, than just replacing the defective tool.

PS: merging into a mono-repository will also lose part of the history recorded in tryton-env. This history is very useful for bisecting a bug; it will be lost for all changesets before the merge.

Indeed, with GitLab (it should be tested with Heptapod) it is possible to get the modified files by performing a diff with the previous changeset.

It is even possible to define CI jobs that run only when a given folder changes.
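
For example, inside a GitLab CI job the changed module directories could be derived with a plain diff (a sketch only: CI_COMMIT_BEFORE_SHA and CI_COMMIT_SHA are GitLab's predefined variables, and how well this maps onto Heptapod's Mercurial repositories would still need to be checked), while running jobs only for certain folders is what GitLab CI's rules:changes configuration provides:

    # List the files modified by the pushed commits and keep only the
    # top-level (module) directories they belong to.
    git diff --name-only "$CI_COMMIT_BEFORE_SHA" "$CI_COMMIT_SHA" \
        | cut -d/ -f1 | sort -u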

Indeed, the readthedocs build currently uses several repositories and submodules, which makes it complex. If we have a single repository, we may consider moving to a different model where all the documentation resides in a single doc folder and the docs are built only from there. This would have the benefit of a single configuration file, while currently we have one configuration per repository with more or less the same content.

Then it can be replaced by GitLab/Heptapod:

To be clear, moving all the infrastructure to GitLab cannot be considered until someone demonstrates a complete working migration of everything (repository (full history), build (limiting the resources), bugs (importing all existing bugs), review (styling bot),
).
But also how much the hosting will cost, and how much we will have to increase the computing power (yes, it will for sure be more than now, which is not great for the environment).
And Mercurial+GitLab can only be considered once it is stable according to their roadmap (which is unknown).

I foresee where this discussion is going, as usual: nowhere. It will die like all the others, with people arguing in the void and no action taken.
So we will have the same discussion again in 2 years, when it will be very complicated to keep Rietveld working.

Could you clarify what you mean here?

As far as I can see there is a Redmine issue importer, so we can use it as a base to import all issues from roundup.

I think we can simplify the styling bot by just running flake8 on CI. This will give the user the full line details in the build output. One drawback is that there won't be inline comments, but as long as the offending lines are shown in the CI output that is not an issue for me.

Indeed, we can avoid running the tests when flake8 does not pass, which forces the user to fix the style issues before getting a review (and the same goes for failing tests).
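
A minimal sketch of that gating as consecutive CI steps (the concrete commands are only examples, not Tryton's actual test invocation):

    # Run flake8 first; the (long) test suite only starts when the style
    # check passes, so style issues must be fixed before review.
    flake8 . || exit 1
    python -m unittest discover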

This may depend on whether we prefer to keep it running on our own servers or use the hosted service. This should also be discussed if we prefer going this way.

IIRC we started using Drone when it was not stable, and this forced us to migrate to new APIs because of incompatible changes (we even lost the build history). So as long as the workflow is suitable for us, I think we can consider doing the same for Mercurial+GitLab, which is probably more stable.

As Python 2 is no longer maintained (for 1 year already), I do not see what could change in 2 years to make things more complicated. Indeed, I see no plan from Google App Engine to deprecate Python 2.7.

I already explained: the build takes more than 8 hours.

Did you test it? Where can I see Tryton's bug tracker imported?

This would go against PEP 8, which we are supposed to follow.

Everything has a cost.

There were no other alternatives, and we did not replace something that already existed.

This is unrelated.

Still waiting for a full plan for such a big-bang migration.

This is my last comment on this thread. I will develop the code review tool I think we need anyway. Others can work on whatever they want.


Ok, so there is nothing else to discuss. Feel free to develop the code review tool and we will provide feedback about it and use it whenever it’s ready.

I can start a new thread, but that would make things much noisier. My comment is not a discussion but questions :grinning_face_with_smiling_eyes: After reading, two questions came up:

  1. Is there documentation (with a diagram?) of the current infrastructure? I have no idea how the current infrastructure is intertwined and can't get my head around it.
  2. Imagine a proposal that can fully replace the current infrastructure and everybody is happy, BUT it means migrating to Git. Is this a problem? (just a 'yes' or 'no' is OK for me)

No. But this is something that arose in some of the latest foundation discussions, so it is probably something to be improved.

Yes, we prefer to use mercurial.

Like the customer who wants to see the full ERP system and the migration upfront. We all know that this approach does not work.
One has to think about a migration approach: do we really need the full history (or can we leave it in the old system, static)? Do we need all bugs, or just the open ones?

Stable enough for ‘small’ projects like KDE


Yes, the same discussion, only the problem will be bigger by then.


I beg to differ.

We have been using Git internally for a fairly long time and publish our work on gitlab.com. Other community members basically did the same from the beginning, and quite a few others joined after the closing of Bitbucket's Mercurial services. Those are only some examples and the list is by far not exhaustive.

To be safe, we convert all the Mercurial repositories from tryton.org to Git ourselves (we don't want to rely on the GitHub mirror). Obviously this creates some overhead. Migrating the native Tryton repositories to Git would save us (and I think we are not alone) a lot of work. Additionally, contributing back would become a lot easier.

In short: we (read MBSolutions) prefer Git. Other community members should make their voices heard as well.

It seems that the fixation on Mercurial is going to be the showstopper for an open-ended evaluation. Without digging into the internals of the two VCSs and which one is easier, etc., it is a matter of fact that the market share of Git, according to this source, is about 80%, while that of Mercurial is just 2%. It is another matter of fact that the port of Mercurial to Python 3 was a long-lasting task and that it was on the verge of being pushed out of several important distributions due to the Python 2 removal. I think we should allow for an unemotional evaluation of the underlying VCS, also taking into account the future-proofness of the solution. If in two years we are at the same point again, we could probably have saved a lot of resources.


If you don't contribute because a project uses Mercurial instead of Git (or Subversion or Fossil or whatever your preferred VCS is), then quite frankly you're the one to blame.

I contribute to projects using Git because they use Git, I used bzr when a project used it, and, like many around here, I started with CVS and SVN. The tool used to manage the source has never been what stopped me from contributing, because in the end they all look the same to me.

You're right. But Mercurial won't disappear either, and if it does, we already have backups.