So I think we should move away from readthedocs instead of trying to build a complex setup using, for example, our GitHub mirror.
I think that, to avoid being held hostage by service providers, it will be simpler to just build and host the documentation ourselves.
We could use a daily or weekly cron job and push the content for each series to our CDN.
Indeed, we could use a single build from the top of the tryton repository.
For the other projects, we just need to build them and push the result to the CDN.
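As a rough illustration, the cron job could run something like the sketch below; the series list, the paths, the Sphinx source directory and the CDN target are all placeholders, and the real script would reuse our existing build tooling:

```python
#!/usr/bin/env python3
# Rough sketch of the cron job: build each series and push it to the CDN.
# The series list, the paths, the Sphinx source directory and the CDN target
# are all placeholders for this illustration.
import subprocess

SERIES = ['6.0', '6.2', '6.4']            # assumed maintained series
CHECKOUT = '/srv/doc-build/tryton'        # assumed local clone of the repository
OUTPUT = '/srv/doc-build/html'            # assumed build output directory
CDN_TARGET = 'cdn.example.invalid:/docs'  # placeholder for the real CDN origin

for series in SERIES:
    # Switch the clone to the branch of the series.
    subprocess.run(['hg', 'update', series], cwd=CHECKOUT, check=True)
    # Build the documentation tree with Sphinx ('doc' is a placeholder source dir).
    subprocess.run(
        ['sphinx-build', '-b', 'html', 'doc', f'{OUTPUT}/{series}'],
        cwd=CHECKOUT, check=True)
    # Push the result; rsync is only one possible transport to the CDN origin.
    subprocess.run(
        ['rsync', '-a', '--delete',
         f'{OUTPUT}/{series}/', f'{CDN_TARGET}/{series}/'],
        check=True)
```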
Hello,
You are receiving this email because your Read the Docs project is impacted by an upcoming deprecation.
We are discussing dropping support for the Bazaar, Mercurial and Subversion version control systems. This means we will only support Git to do the initial clone and build your project’s documentation. We would like your feedback on this change, to better understand if you can use Git for your project.
We’ve made this decision because 99% of our users use Git and we can’t cover the maintenance cost we were spending on Bazaar, Mercurial and SVN. Over time, these VCS tools have not gotten new features, and they add confusion for users because not all our features work with them.
Unfortunately, there is no workaround on the Read the Docs side that you can follow to keep building your documentation using these VCSs, but you could probably import your Subversion or Mercurial repository into GitHub or similar services to keep building on Read the Docs.
The projects that are impacted by this change are:
Please contact us or reply to this email if you have any questions, or think there is a strong reason we should continue to offer basic support for other VCS tools.
Thanks!
Keep documenting,
Read the Docs
But from my reading of the issue, they have already made their decision. Even if we managed to convince them to keep Mercurial, it would only be a matter of time before they tried to remove it again, because it is clear that they do not want to maintain an abstraction over the VCS but only support Git.
I think the availability of the documentation is a crucial point for successful onboarding in Tryton. That’s why I already asked in How to build documentation for a way to build the available documentation in order to package it.
With tryton-server-all-in-one (see Debian integration packages for Tryton) we built a highly integrated deployment with a production-ready database, but when it comes to selecting which of the available modules to activate, the user severely lacks information about the purpose of each module. It would be a great step forward to make the documentation directly available.
Another point is that with a global documentation build we could easily include a section for transversal documentation (so we could remove the Howto - Tryton Discussion category and have an MR workflow for it).
Indeed we can almost use .gitlab-scripts/generate-doc.sh as-is to generate the documentation website.
We just need to update the readthedocs folder to use proper paths (probably relative).
Another option would be to have a single build, but I think it is important to be able to build the documentation of just one module (and have links between modules using sphinx.ext.intersphinx).
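For example, with per-module builds, each module's conf.py could declare the other projects it links to with intersphinx; the mapping keys and URLs below are made up, only the option names are real Sphinx configuration:

```python
# Hypothetical excerpt of a module's conf.py; the mapping keys and URLs are
# made up, only the option names are real Sphinx configuration.
extensions = ['sphinx.ext.intersphinx']

intersphinx_mapping = {
    # name: (base URL of the published documentation, inventory file or None)
    'trytond_party': ('https://docs.example.invalid/modules-party/', None),
    'trytond_account': ('https://docs.example.invalid/modules-account/', None),
}
```

With such a mapping, cross references can then resolve against the inventory published by the other module's build.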
I looked at how Sphinx implements the search. It is based on two files: _static/searchtools.js, which contains the engine, and searchindex.js, which contains the data.
searchindex.js is loaded by calling Search.setIndex. The structure of the file is an object with the following keys (a small example follows the list):
docnames: a list of document names
filenames: a list of the corresponding file names
titles: a list of the corresponding titles
terms: an object mapping each term to a list of indexes into the previous lists
…
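Here is a tiny, made-up example of that data, shown as a Python dict for readability (the real file is JavaScript and contains more keys than the ones listed above):

```python
# Tiny, made-up example of the data passed to Search.setIndex, written as a
# Python dict for readability (the real file is JavaScript and has more keys).
index = {
    'docnames': ['index', 'usage'],           # document names
    'filenames': ['index.rst', 'usage.rst'],  # corresponding source files
    'titles': ['Account', 'Usage'],           # corresponding titles
    # each term maps to the indexes, in the lists above, of the documents
    # that contain it
    'terms': {'invoice': [0, 1], 'payment': [1]},
}
```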
So we could merge all the indexes of the Sphinx documentations, as long as we prefix each docname with the folder name of its package.
I made a simple test by editing the file and it works.
Now, should the merge be done in the browser or as a post-processing step of the build?
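If we go for the post-processing option, a minimal sketch could look like this (assuming each module is built in its own directory and that the index payload is valid JSON, which is the case for recent Sphinx versions; the other keys of the real index are ignored here):

```python
#!/usr/bin/env python3
# Sketch of the post-process option: merge the searchindex.js of several
# module builds into one global index. It only merges the keys discussed
# above (docnames, filenames, titles, terms) and ignores the others.
import json
import sys
from pathlib import Path

merged = {'docnames': [], 'filenames': [], 'titles': [], 'terms': {}}

# Each argument is the build directory of one module; its folder name is
# used as the prefix of the docnames, as discussed above.
for build_dir in map(Path, sys.argv[1:]):
    raw = (build_dir / 'searchindex.js').read_text()
    # The file wraps the data in Search.setIndex(...); assume the payload
    # between the parentheses is valid JSON.
    payload = raw[raw.index('(') + 1:raw.rindex(')')]
    index = json.loads(payload)

    offset = len(merged['docnames'])
    prefix = build_dir.name
    merged['docnames'] += [f'{prefix}/{name}' for name in index['docnames']]
    merged['filenames'] += [f'{prefix}/{name}' for name in index['filenames']]
    merged['titles'] += index['titles']

    for term, docs in index['terms'].items():
        # A value may be a single index or a list of indexes.
        docs = docs if isinstance(docs, list) else [docs]
        merged['terms'].setdefault(term, []).extend(d + offset for d in docs)

Path('searchindex.js').write_text('Search.setIndex(%s)' % json.dumps(merged))
```

Such a script could be run over all the build directories after generate-doc.sh to produce one combined searchindex.js for the whole site.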