The tests that will not be run because they are not explicitly written.
Sure, but I’m curious to understand at least one or two examples where that is possible.
In Tryton, most code never generates any output; if it does (for example, because a print statement was left in the code), the test will fail even though the output is not a symptom of a problem in the code.
So I can only think of the tests we already have, such as:
>>> from interlude import interact; interact(locals())
It throws the developer into an interactive Python console session for debugging/testing. The environment includes everything (variables, methods, imports, …) available in the doctest at this line. It works like the trytond-console, but on each test run it is set up freshly with a defined environment from the doctest, including an always clean test database.
In the console session you can just do things like:
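For example (a hypothetical session: invoice and Invoice stand in for whatever proteus objects the doctest has defined up to that line):

```
>>> sorted(locals())                          # list everything the scenario has defined so far
>>> invoice.state                             # inspect a field of a record (placeholder name)
>>> invoice.click('post')                     # try the next workflow step interactively
>>> Invoice.find([('state', '=', 'posted')])  # query the clean test database
```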
After finishing the console session, hit Ctrl-D and the test run continues.
Just copy-paste the fixes or findings from the console session 1:1 back into the scenario test. Dead simple, and no reformatting or line-wise copy-paste needed.
Nowadays everything needed is included in Python’s stdlib. Just putting a breakpoint() call in your doctest will start Pdb (the Python debugger). Enter the command interact and you also get an interactive Python console session. Exit the console session with Ctrl-D, then quit (q) the debugger or continue (c) the test run.
Debugging is easier: we can just manually edit the rst file down to a single import trytond.modules.<my_module>.tests.scenario_...., and the Python code will be executed as is (so breakpoints, prints, etc… will work as expected).
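Such a stripped-down rst file can be as small as this (a sketch; my_module and scenario_example are placeholder names for the module and scenario in question):

```
Debug wrapper
=============

    >>> import trytond.modules.my_module.tests.scenario_example
```

When the doctest runner executes the rst file, the import runs the scenario module as plain Python.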
We always use Tryton’s infrastructure since we did not want to diverge too much. So we have a wrapper to run the tests that starts by converting the scenario*.py files to scenario*.rst, which are then processed as usual.
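The core of such a py→rst conversion can be sketched as follows (an illustrative toy, not the actual wrapper; the real tool also handles comments, expected results and the special markers discussed below):

```python
def py_to_rst(source: str) -> str:
    """Toy sketch of a scenario*.py -> scenario*.rst converter.

    Top-level statements become indented `>>> ` doctest lines and
    indented continuation lines become `... ` lines.
    """
    out = []
    for line in source.splitlines():
        if not line.strip():
            out.append('')
        elif line.startswith((' ', '\t')):
            out.append('    ... ' + line)   # continuation of a block
        else:
            out.append('    >>> ' + line)   # new top-level statement
    return '\n'.join(out)


print(py_to_rst("total = 0\nfor x in (1, 2, 3):\n    total += x"))
```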
At first the converter interpreted special comments to identify expected returns:
# #Res# #'draft'
was turned into:
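The generated output is not shown in the thread, but given doctest semantics it would presumably be an expression followed by its expected result, something like (invoice.state is a placeholder expression):

```
>>> invoice.state
'draft'
```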
but a few years back we started a transition to assert_XXX tooling, since it provides better information in case of failure. For instance, assert_in prints the contents of the list and the value being checked when it fails, whereas:
>>> state in ['posted', 'paid']
will only tell us that we got False where True was expected.
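The difference in failure messages can be illustrated outside Tryton with the stdlib unittest helpers (a sketch; assertIn here is Python’s unittest method, standing in for the assert_XXX tooling):

```python
import unittest

t = unittest.TestCase()
state = 'draft'

# A bare boolean check reports almost nothing useful:
try:
    t.assertTrue(state in ['posted', 'paid'])
except AssertionError as e:
    bare_msg = str(e)   # "False is not true"

# A dedicated membership assertion shows the value and the container:
try:
    t.assertIn(state, ['posted', 'paid'])
except AssertionError as e:
    rich_msg = str(e)   # "'draft' not found in ['posted', 'paid']"

print(bare_msg)
print(rich_msg)
```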
We still use the conversion tool for now, but once (if…) all tests are converted to pure Python (i.e. only using assert_XXX), we may end up dropping the rst files.
Well, such a test should never be written. The state should be well defined, so you will have:
E.g. on sale and purchase it can be irritating to get different results depending on whether the [queue] worker is activated in trytond.conf, especially if there is actually no worker running. So e.g. the expected sale state can be either confirmed or processing.
The last two may be solved by adding to the globs a method similar to assertEqual which would raise an error showing the difference. The problem is that it would make the scenario no longer executable outside Tryton’s infrastructure.
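To illustrate the trade-off, here is a minimal sketch of injecting such a helper through the doctest globs (hypothetical names; Tryton’s actual test loader is more involved). A scenario run this way depends on the runner supplying assert_equal, so executing the same file as plain Python would fail with a NameError:

```python
import doctest


def assert_equal(a, b):
    """Raise an error that shows the difference, unlike a bare `a == b`."""
    if a != b:
        raise AssertionError(f'{a!r} != {b!r}')


SCENARIO = """
>>> state = 'draft'
>>> assert_equal(state, 'draft')
"""

# Hypothetical runner: make the helper available inside the doctest globs.
parser = doctest.DocTestParser()
test = parser.get_doctest(
    SCENARIO, {'assert_equal': assert_equal}, 'scenario', None, 0)
failed = doctest.DocTestRunner(verbose=False).run(test).failed
print(failed)   # 0: both examples passed with the injected helper
```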