Still, I don’t grasp what we gain by keeping doctest as is, especially if we add all those features.
After the patch is committed, I understand that the only features provided by doctest are:
Testing values without an explicit assert
Testing exceptions without an explicit assert
At least in theory, it makes the comments (the scenario) more prominent
The first two features are already covered by assertEqual and assertRaises respectively. The third can be just Python comments.
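To make the comparison concrete, here is a generic sketch (not taken from an actual Tryton scenario) of those two checks written as a doctest and as explicit unittest asserts:

import unittest


def post(total):
    """Illustrative helper that refuses to post a zero total.

    Doctest style: the expected value and traceback are the assertions.

    >>> post(100)
    'posted'
    >>> post(0)
    Traceback (most recent call last):
        ...
    ValueError: nothing to post
    """
    if not total:
        raise ValueError('nothing to post')
    return 'posted'


class PostTestCase(unittest.TestCase):
    "unittest style: the same checks with assertEqual and assertRaises."

    def test_value(self):
        self.assertEqual(post(100), 'posted')

    def test_exception(self):
        with self.assertRaises(ValueError):
            post(0)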
At the same time, if I look at the cost of keeping rst, I see:
We need additional scripts to convert those rst files into python code to get the checks that pyflakes or flake8 provide. Your patch proposes what the tryton project needs, but every company and individual has its own workflow. I wouldn’t mind about out-of-project needs if the benefits of doctest were huge, but I don’t think they are.
We do not have syntax highlighting. Again, the argument that this is just a tooling issue would be fine with me if the benefit of the feature were huge, but I don’t think it is. If one wants syntax highlighting, one needs to implement the tooling for heptapod, github, vim, vs code, etc., while we get it for free using standard python scripts.
It forces all tests to use the ">>> " prefix and, for multi-line statements, the "... " prefix. I don’t know about others, but I have to write those prefixes for every single line of code. Maybe tools can automate part of that, but why would I want to create those tools if python scripts would essentially do the same job? Also, if I multiply that extra tooling cost by all the developers here, is it really worth it?
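For illustration, even a small multi-line block needs a prompt or continuation prefix on every line; a minimal, self-contained sketch (the values are made up):

"""
>>> from decimal import Decimal
>>> total = Decimal('0')
>>> for amount in (Decimal('10'), Decimal('90')):
...     total += amount
>>> total
Decimal('100')
"""

if __name__ == '__main__':
    import doctest
    # Runs the examples in the module docstring above.
    doctest.testmod()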
The argument that you’re very productive with rst is ok but I really don’t get how you’d be less productive with plain python scripts.
So, for me (and the rest of our team, and it seems for others too) it makes both writing and reading tests more complicated and also requires extra tooling. What are the real benefits?
For me the value is precisely in what they do not provide.
They force you to write sequential steps (aka a scenario) with an automatic check on each line.
Also they provide better comparison than assert* (see the Decimal precision).
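A minimal sketch of that Decimal point, assuming nothing Tryton-specific: assertEqual accepts values that only differ in precision, while the doctest comparison of the repr does not:

import unittest
from decimal import Decimal


def total():
    """
    >>> total()
    Decimal('100.00')
    """
    # Returning Decimal('100.0') or Decimal('100') would fail the doctest
    # above, because the printed repr (exponent included) must match exactly.
    return Decimal('100.00')


class TotalTestCase(unittest.TestCase):

    def test_total(self):
        # Passes even though the precision differs, because
        # Decimal('100.0') == Decimal('100.00') is True.
        self.assertEqual(total(), Decimal('100.0'))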
And another goal is to have those scenarios be part of the documentation. It happens often on this forum that we point people to a scenario to understand how a module works.
That Tryton uses doctest does not force anything on anybody.
Ask your editor.
Why? He only needs to implement it for the tools he actually uses.
It is not free; it exists because someone developed it.
Only for Tryton’s scenarios.
Plain python scripts do not provide the tooling. You would need to use unittest TestCases (see the unit tests), which are not sequential like scenarios.
So if it means having to write the same check on each line as doctest does, I do not see the point.
You can use whatever you want for testing your code.
I strongly disagree. We have already provided support to people by telling them to read a scenario. That never happened with a TestCase.
FYI, we actually migrated away from rst tests recently.
Basically, all our scenarios are now python files using custom “assert_XXX” methods (yes, we had to write those ourselves, but it is not really difficult).
Migrating our ~350 scenarios was rather quick (though, as mentioned above, we already wrote python files that were then converted to rst, so it was mostly a matter of replacing the actual tests with assert_equal / assert_raises, which could be at least partially automated).
We still run them via the doctest mechanism, but the rst file is just one line: import trytond.modules.XXX.tests.scenario_..... We will probably take some time to remove this extra layer soon.
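For reference, a rough sketch of what such assert_equal / assert_raises helpers could look like (illustrative only, not necessarily the actual ones):

from contextlib import contextmanager


def assert_equal(value, expected):
    "Fail loudly, showing both sides, similar to unittest's assertEqual."
    if value != expected:
        raise AssertionError(f'{value!r} != {expected!r}')


@contextmanager
def assert_raises(exception):
    "Context-manager counterpart of unittest's assertRaises."
    try:
        yield
    except exception:
        return
    raise AssertionError(f'{exception.__name__} was not raised')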
We use unittest instead of doctest, so we do not have to implement our own assert methods. We wrote a script to do the migration, but it needs manual changes on some lines afterwards.
You can take a look at what the tests look like now:
We’d have preferred a plain script (with no classes and no methods), but that would have required adding a dependency on a new library and we didn’t want to do that.
For customer modules, which have lots of dependencies and whose initial setup for the scenarios is rather large, we’ve added a tools.py file with a setup() method:
from types import SimpleNamespace

from proteus import Model
from trytond.modules.company.tests.tools import create_company, get_company
from trytond.tests.tools import activate_modules


def setup():
    vars = SimpleNamespace()
    vars.config = activate_modules('custom_module')

    _ = create_company()
    vars.company = get_company()

    # Set employee
    User = Model.get('res.user')
    Party = Model.get('party.party')
    Employee = Model.get('company.employee')
    employee_party = Party(name="Employee")
    employee_party.save()
    vars.employee = Employee(party=employee_party)
    vars.employee.save()
    ...
    # Rest of the setup elided; the namespace is returned so the scenarios
    # can access the records created here.
    return vars
So in the tests we can reuse that setup process across several scenarios.
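For example, a scenario can then start from that shared namespace; a minimal sketch (the import path and record names are illustrative):

import unittest

from trytond.modules.custom_module.tests.tools import setup


class ScenarioTestCase(unittest.TestCase):

    def test_employee_scenario(self):
        vars = setup()
        # The records created by setup() are available on the namespace.
        self.assertEqual(vars.employee.party.name, "Employee")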
In the future, the idea is that the dump/restore operation that activate_modules() does will be handled by setup(), so we can cache that information too and make large tests run faster.
We also created a report, which can be made available to any model, that dumps the content of the current record as valid proteus code (it requires manual cleanup).
The idea is to be able to create the code for that setup method a little bit faster.
If you find any of that useful, we have no problem sharing it.
I don’t think I would be against such a feature implemented this way, but I would need it to support multiple setup functions (it would be great if it could also support functools.partial).
Also, the function part of the name should probably be separated from the module part with a different separator, to avoid name collisions (e.g. with --).
Unless you have something specific in mind, the problem with passing the setup function to activate_modules() is that there’s no easy way to access the records it creates: you have to use search() to find them.
What we’re doing is that setup() returns a SimpleNamespace object which you can use to access the records created.
If you need that, your setup function can fill such a namespace.
It would be easier to implement if functools.partial is supported, but it is still doable without it.
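For what it’s worth, a minimal sketch of that idea, assuming a hypothetical activate_modules() that accepts setup callables (the call shape is an assumption, not the current API):

import functools
from types import SimpleNamespace

from proteus import Model
from trytond.tests.tools import activate_modules


def setup_employee(ns):
    "Fills the given namespace with the records it creates."
    Party = Model.get('party.party')
    ns.employee_party = Party(name="Employee")
    ns.employee_party.save()


ns = SimpleNamespace()
# With functools.partial the namespace is pre-bound, so after activation the
# scenario can read ns.employee_party directly instead of using search().
config = activate_modules(
    'custom_module', functools.partial(setup_employee, ns))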
For someone who is fairly new to Tryton, I cannot overstate how helpful this form of documentation has been for me, to see how things work together in one easy-to-read document. To me, the setup methods proposed above obscure the flow a bit: I have to understand what those methods are doing behind the scenes. I guess I prefer them a little less DRY as a reader.
I agree that better tooling could improve the developer’s experience writing them. But perhaps thinking of them first as documentation would help with the motivation side of things? I would actually like to see more doctests, or at least doctests with more natural-language explanations and descriptions of the purpose of each test, its edge cases, and alternative ways to use the models or combine them with models from other modules, along the lines of Donald Knuth’s literate programming idea, directed at developers and systems administrators. Something more hands-on and specific than the current design.rst and reference.rst afford, and perhaps even more integrated with them?
I know I’m asking for the world here. But coming to an understanding of the relationships between modules and things—how things work together—is where I’ve struggled the most.
FWIW: I’ve found that this vim-rst plugin highlights python code in doctests.
For the record, after 15 years of writing and maintaining Tryton’s tests, it is clear that tests must be written in the dumbest way possible, otherwise they become another source of bugs.