Replace doctest for scenario tests

We’ve been discussing internally for quite some time that the fact that scenarios use doctests is a pain, for at least a couple of reasons:

  • Hard to read and write: most editors lack proper syntax highlighting for them, the >>> prefixes add unnecessary verbosity, and pyflakes cannot be used…
  • The checks compare strings only, which is often annoying with Decimal values, since Decimal("1.0") is different from Decimal("1.00").

That’s why we were thinking of a simple replacement: converting the rst files to py files and turning the prose into Python comments, which could probably be automated with a script.

The checks could be changed to use an assertEqual()-like function, with the automated script converting the values to str at first. Over time, the str() conversions could be removed.
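To illustrate the idea (a rough sketch only: assert_equal is a hypothetical helper name, the field is just an example, and the exact string handling would be up to the conversion script), a doctest check like:

>>> invoice.total_amount
Decimal('100.00')

could become, right after the automated conversion:

assert_equal(str(invoice.total_amount), '100.00')

and later, once reviewed by hand:

assert_equal(invoice.total_amount, Decimal('100.00'))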

Thoughts?

This is only a tooling issue. It is better to improve the tools.

This is a feature that is very good at pointing out errors.

For me this will result in less testing, because checks will need to be explicit instead of happening implicitly at each line.

Why do you think that? What kind of issues can the current behaviour find that an explicit system would not?

The tests that will not be done because they are not explicitly written.

There are tools like GitHub’s softwaredoug/flake8_doctest: a fork of flake8 that also runs doctests, intended primarily for working with syntastic in vim for realtime running of tests.

Sure, but I’m curious to understand at least one or two examples where that could happen.

In Tryton most code never generates any output, and if it does (for example, you leave a print statement in the code) it will make the test fail, even though that is not a symptom of a problem in the code.

So I can only think of the tests we already have such as:

>>> shipment.origins == sale.rec_name
True

or

>>> len(sale.shipments), len(sale.shipment_returns), len(sale.invoices)
(1, 0, 1)

Which I would call explicit.

I doubt such a test would be written in test-case style.

Anyway, for me the syntax is great for integration tests because you describe actions and then test the result.

Also, after more than 13 years of writing them, I personally have become very fluent, so changing will be a loss for me.

I like this tool a lot: collective/interlude on GitHub (“Interlude for Doctests provides an Interactive Console”).
Just install it and put the following line into your doctest:

>>> from interlude import interact; interact(locals())

It throws the developer into an interactive Python console session for debugging/testing. The environment includes everything (variables, methods, imports, …) that is available in the doctest at this line. It works like the trytond-console, but on each test run it is set up freshly with a defined environment from the doctest, including an always clean test database.
In the console session you can just do things like:

>>> invoice.click('post')

and check if all works as expected,

>>> invoice.number
INV00001
>>> invoice.move
<account.move,1>
>>> len(invoice.move.lines) == 3  # debit, credit, tax line
True

After finishing the console session, hit CTRL-D and the test-run continues.
Just copy-paste the fixes or findings 1:1 from the console session back to the scenario test. Dead simple, and no reformatting or line-wise copy-paste needed.


IIUC one would use interlude for interactive testing during development, and continue to use doctest for non-interactive testing (e.g. in a CI/CD pipeline). Do I understand correctly?

Nowadays everything needed is included in Python’s stdlib. Just putting

>>> breakpoint()

in your doctest will start Pdb (the Python debugger). Enter the command interact and you also get an interactive Python console session. Exit the console session with Ctrl-d, then quit (q) the debugger or continue (c) the test run.
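For illustration, a debugging round trip then looks roughly like this (abridged; invoice is just a placeholder object from the scenario, and the *interactive* banner comes from the standard console that pdb’s interact command starts):

(Pdb) interact
*interactive*
>>> invoice.click('post')
>>> invoice.state
'posted'
>>> # Ctrl-d here drops back to the debugger
(Pdb) c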


We’ve already taken a similar approach @ coopengo.

Internally we write python files with a few annotations that are then converted to rst files. We ditched the “input / output” approach for tests in favor of homemade assert_XXX utils.

For instance:

# ...A bunch of imports
from trytond.modules.coog_core.tests.tools import assert_eq, assert_raises

config = activate_modules([
    'account_payment_sepa_contract',
    'account_payment_clearing_cog',
    'account_payment_clearing_contract',
    'contract_insurance_invoice',
    'contract_process',
    'default_configuration'])

ChainRun = Model.get('batch.chain')
Group = Model.get('account.payment.group')
Payment = Model.get('account.payment')

# ... Code to create / process payments

group, = Group.find([])
payment, = group.payments
journal = payment.journal
journal.allow_group_deletion = True
journal.save()
assert_raises(UserError, payment.click, 'approve')
assert_raises(UserError, payment.click, 'submit')
Group.delete([group])

payments = Payment.find([])
assert_eq(len(payments), 0)

Debugging is easier: we can just manually edit the rst file down to a single import trytond.modules.<my_module>.tests.scenario_...., and the Python code will be executed as is (so breakpoints / prints / etc… will work as expected).

I think nobody has written an rst file in a long time :slight_smile:

Cool! Thanks for sharing.

What is the reason for converting them to rst?

Do you execute the test with the .py file manually and convert to .rst in order to use Tryton’s doctest infrastructure?

We always use Tryton’s infrastructure since we did not want to diverge too much. So we have a wrapper to run the tests that starts by converting the scenario*.py files to scenario*.rst, which are then processed as usual.

At first the converter interpreted special comments to identify expected returns:

invoice.state
# #Res# #'draft'

was turned into:

>>> invoice.state
'draft'

but we started a transition to assert_XXX tooling a few years back since it provides better information in case of failure. For instance, assert_in prints the contents of the list / the value being checked when the check fails, whereas:

>>> state in ['posted', 'paid']
True

will only tell us that it got False when True was expected.

We still use the conversion tool for now, but once (if…) all tests are converted to pure Python (i.e. only use assert_XXX), we may end up dropping the rst files.
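For the curious, here is a minimal sketch of what such a py-to-rst converter could look like (this is not the actual Coopengo tool, just an illustration of the # #Res# # convention described above; it ignores continuation lines and the rst literal-block markers):

import sys

RES_MARKER = '# #Res# #'

def convert(py_lines):
    """Turn annotated Python scenario lines into doctest-style rst lines."""
    rst_lines = []
    for line in py_lines:
        stripped = line.rstrip('\n')
        if stripped.startswith(RES_MARKER):
            # Expected value annotation: emit it as the doctest output line
            rst_lines.append('    ' + stripped[len(RES_MARKER):])
        elif not stripped or stripped.startswith('#'):
            # Blank lines and plain comments become prose
            rst_lines.append(stripped.lstrip('# '))
        else:
            # Any other statement becomes a doctest input line
            rst_lines.append('    >>> ' + stripped)
    return rst_lines

if __name__ == '__main__':
    for out in convert(sys.stdin.readlines()):
        print(out)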

Yes, that’s another annoying point of rst tests which I forgot to mention.

Well, such a test should never be written. The state should be well defined, so you will have:

>>> state
'posted'

You get the same problem with a badly written assert.

But I’m wondering if we should not activate ELLIPSIS by default for all doctests, so we can match strings containing variable parts.
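For example (illustrative only; sale is just a placeholder record), with ELLIPSIS active a value with a variable part, such as a creation timestamp, could be checked as:

>>> sale.create_date
datetime.datetime(...)

where the ... matches whatever date and time the test run produces.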

E.g. on sale and purchase it can be irritating to get different results depending on whether the [queue] worker is activated in trytond.conf, especially if there is actually no worker running. So e.g. the expected sale state can be either confirmed or processing.

Tests are not run with a worker.

Indeed, but rst basically limits us to the assertEqual() of unittest.

Anyway, any tooling we are using is executed as rst for now, so this is not really relevant to the issue at hand.

I have identified a few drawbacks when writing scenarios that I think we could fix/improve:

The last two may be solved by adding to globs a method similar to assertEqual, which would raise an error showing the difference. The problem is that it would make the scenario no longer executable outside the Tryton infrastructure.

Indeed we could just add those assert methods to trytond.tests.tools.
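As a rough sketch (hypothetical: these helpers do not currently exist in trytond.tests.tools), something along these lines would show the difference on failure:

def assert_equal(value, expected):
    """Raise an AssertionError showing both sides when they differ."""
    if value != expected:
        raise AssertionError('%r != %r' % (value, expected))


def assert_raises(exception, func, *args, **kwargs):
    """Check that calling func raises the given exception."""
    try:
        func(*args, **kwargs)
    except exception:
        return
    raise AssertionError('%r was not raised' % exception)

A scenario could then import them explicitly (or have them injected into globs by the test runner) and still run as a normal doctest outside the test infrastructure when imported.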