Django under the hood: testing - Ana Balica

Tags: django, djangocon

(One of my summaries of a talk at the 2016 django under the hood conference).

A quick history of django’s testing framework.

  • Ticket #2333 got added to django’s bug tracker before 1.0 was out: we want an integrated test framework (“Rails has it too”). A while later there was a test runner that looked at tests.py and models.py. models.py? Yes: at that time doctests were still very popular and models were commonly tested with doctests. tests.py was for the regular unit tests.

  • Django 1.1 added a built-in test client for basic GET/PUT. Also TransactionTestCase was added: the regular TestCase now wrapped every test in a database transaction that was rolled back at the end, which improved performance; TransactionTestCase kept the older, slower behaviour for tests that need to exercise real transactions.

  • 1.2 added a new class-based test runner. You could now also terminate the entire test run upon the first error (“failfast”).

  • 1.3 split the old test client into an actual client and a RequestFactory. Well, the client is in fact a subclass of RequestFactory, something Ana doesn’t like and would like to see refactored during the sprints.

    Doctests turned out not to be an ideal combination of tests and documentation: the testing was harder and the documentation wasn’t any clearer for it. So doctests were discouraged.

  • In 1.4, more TestCase classes were added. SimpleTestCase for tests without database access, for instance.

  • 1.5. Python 3 support landed in django. A full testing tutorial was added to the documentation, along with several new assertion methods.

  • 1.6. “patch” was added to the supported methods of the built-in client. Test discovery was improved and doctest discovery was removed.

  • 1.7 uses the new unittest. Earlier versions bundled a backport of the unittest2 library; now that old python versions have been dropped, the standard unittest library can be used directly, as it includes all unittest2 functionality.

  • 1.8. TestCase was changed again. Fixture loading was sped up.

  • 1.9. --parallel is added: running multiple tests in parallel, if the tests support it. If you use an older django version, you might use nose’s multiprocessing plugin.

  • 1.10. Nicest feature: you can tag your tests to group them and exclude/run them as a group:

    from django.test import tag

    @tag('slow')
    def test_something(self):
        ...  # run just these with: ./manage.py test --tag=slow
    

Running tests

Now on to what happens under the hood when you run the tests.

  • Set up the main test runner.
  • setup_test_environment(): sets up a locmem email backend and deactivates translations.
  • build_suite(): collects all the tests. The heavy lifting is done by python’s unittest framework; django adds some functionality on top, like tags.
  • setup_databases()
  • run_suite()
  • teardown_databases()
  • teardown_test_environment()
  • suite_result(): returns the test results.
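That sequence can be sketched as a schematic skeleton. The method names mirror django’s DiscoverRunner, but the bodies here are simplified stand-ins for illustration, not django’s real implementation:

```python
# Schematic sketch of a Django-style test runner lifecycle. Method names
# match django.test.runner.DiscoverRunner; bodies are simplified stand-ins.
class SketchTestRunner:
    def __init__(self):
        self.log = []  # records the lifecycle order

    def setup_test_environment(self):
        self.log.append("setup_test_environment")  # locmem email, no translations

    def build_suite(self, test_labels):
        self.log.append("build_suite")
        return list(test_labels)  # unittest does the real discovery

    def setup_databases(self):
        self.log.append("setup_databases")

    def run_suite(self, suite):
        self.log.append("run_suite")
        return {"run": len(suite), "failures": 0}  # stand-in for a result object

    def teardown_databases(self):
        self.log.append("teardown_databases")

    def teardown_test_environment(self):
        self.log.append("teardown_test_environment")

    def suite_result(self, result):
        self.log.append("suite_result")
        return result["failures"]

    def run_tests(self, test_labels):
        """Run the whole lifecycle and return the number of failures."""
        self.setup_test_environment()
        suite = self.build_suite(test_labels)
        self.setup_databases()
        result = self.run_suite(suite)
        self.teardown_databases()
        self.teardown_test_environment()
        return self.suite_result(result)
```

Calling run_tests() walks through the eight steps above in order.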

Test classes

  • SimpleTestCase. Very fast, it doesn’t interact with the database. It does have access to the test client.
  • TransactionTestCase. Slow. It hits the database and does the (necessary) transaction management to isolate the tests.
  • TestCase. Faster than TransactionTestCase: it wraps every test in a database transaction that is rolled back afterwards.
  • LiveServerTestCase: launches a live http server in a separate thread. Slow.
  • StaticLiveServerTestCase. Special version of the above that also serves static files.

(Note TODO for myself: investigate them further, I can probably speed up my tests by using a separate testcase!)

Quality

Django provides test functionality, but.... how do we write high quality tests? There are some tools to help us.

  • Use FactoryBoy. It replaces fixtures by easily creating model objects. It uses Faker to provide nice random data (person names, company names, email addresses).

  • Hypothesis: property based testing for python. This will run tests multiple times with random data to try and find corner cases. There is a django add-on for it.

  • Coverage testing. It is currently 76% for django. High coverage doesn’t mean high quality. She thinks it is a deceptive metric.

    (Personal note: I dislike this reasoning. Everybody (=several of my colleagues) tends to say “high coverage doesn’t mean a thing”. Put like that, coverage metrics get discouraged. But isn’t it obvious that a low coverage rate indicates bad quality tests, as most of the code isn’t tested? Yes, you cannot say tests are high quality just because the coverage is high, but you can say the tests aren’t good enough if the coverage is low. So why is code coverage bad as a metric?)

  • Read the django tutorial on testing! This is a good explanation. Django tries to improve your code quality by making it as easy as possible for you to write tests. https://docs.djangoproject.com/en/stable/intro/tutorial05/
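To give an idea of what Hypothesis does (assuming the hypothesis package is installed; the run-length encoder here is a made-up example function, not from the talk):

```python
from hypothesis import given, strategies as st


def run_length_encode(s):
    """Toy function under test: 'aaab' -> [('a', 3), ('b', 1)]."""
    encoded = []
    for char in s:
        if encoded and encoded[-1][0] == char:
            encoded[-1] = (char, encoded[-1][1] + 1)
        else:
            encoded.append((char, 1))
    return encoded


def run_length_decode(pairs):
    """Inverse of run_length_encode."""
    return "".join(char * count for char, count in pairs)


# Hypothesis generates many random strings (including corner cases like
# '' and odd unicode) and checks the round-trip property for each.
@given(st.text())
def test_roundtrip(s):
    assert run_length_decode(run_length_encode(s)) == s
```

Such a test runs under pytest or unittest; a @given-decorated function can also simply be called with no arguments to execute the property check.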

Improve test speed

Test speed is important. The quicker your tests, the more often you run them.

  • Use MD5PasswordHasher in testing. Django itself does it.

  • Consider in-memory sqlite3.

  • Use SimpleTestCase more often.

  • Use setUpTestData() instead of setUp() for data shared by all tests in a class: it runs once per class instead of once per test.

  • Be vigilant about what gets created in setUp().

  • Don’t save model objects if not needed. Is an unsaved in-memory instance enough? Then instead of Robot.objects.create(), use Robot().

  • Isolate the fast unit tests from the rest of the tests, for instance by using SimpleTestCase. You can then run those separately from your functional tests.

    For instance, you can run the fast tests very often during coding. When everything’s OK, you run all tests together.
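The first two tips can go in a dedicated settings module (the module and project names here are hypothetical; substitute your own):

```python
# test_settings.py -- hypothetical module name for test-only settings.
from myproject.settings import *  # noqa: F401,F403  (hypothetical project)

# MD5 is fast and insecure, which is fine for throwaway test users.
PASSWORD_HASHERS = ["django.contrib.auth.hashers.MD5PasswordHasher"]

# In-memory sqlite3: no disk I/O while running the test suite.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": ":memory:",
    }
}
```

You would then run the suite with something like ./manage.py test --settings=test_settings.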


Photo explanation: my son testing his strength by throwing a switch in an old signal box


About me

My name is Reinout van Rees and I work a lot with Python (programming language) and Django (website framework). I live in The Netherlands and I'm happily married to Annie van Rees-Kooiman.
