Automation for better behaviour

Tags: django, python, nelenschuurmans

Now… that’s a provocative title! In a sense, it is intended that way. Some behaviour is better than other behaviour. And automation that points out your good or bad behaviour helps you towards better behaviour, right?

Note 2015-09-07: I changed the wording and removed some of the more provocative terminology and phrasing. The original is in my git repo so if you want the real thing you can mail me :-)

Absolute values

I think there are absolutes you can refer to and compare yourself against. Lofty goals you can try to accomplish. Obvious truths (which can theoretically be wrong…) that are recognized by many.

Take programming in python. PEP 8, python’s official style guide, is recognized by most python programmers as the style guide they should adhere to. At least, nobody in my company complained when I adjusted/fixed their code to comply with PEP 8. And adding bin/pep8 to all of our software projects, to make it easy to check for compliance, didn’t raise any protests. Pyflakes’ utility is even clearer, as it often points at real errors or obvious omissions.
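To illustrate the difference with a made-up snippet (not from any real project): pyflakes flags the unused import and the misspelled variable below as real problems, while pep8 only complains about the style issue.

    import os  # pyflakes: 'os' imported but unused


    def add_numbers(a, b):
        total=a + b       # pep8: missing whitespace around operator
        print(totall)     # pyflakes: undefined name 'totall'
        return total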

For django projects, possible good things include:

  • Sentry integration for nicely-accessible error logging (there’s a small settings sketch after this list).

  • Using a recent and supported django version. So those 1.4 instances we still have at my workplace should go the way of the dodo.

  • Using proper releases instead of using the latest master git checkout.

  • Using migrations.

  • Tests.
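About that Sentry item: a minimal sketch of what the integration can look like in a Django settings.py, assuming the current sentry-sdk package and a placeholder DSN (older setups used the raven client instead, but the idea is the same):

    # settings.py -- the DSN below is a placeholder, use your own project's DSN
    import sentry_sdk
    from sentry_sdk.integrations.django import DjangoIntegration

    sentry_sdk.init(
        dsn="https://examplePublicKey@o0.ingest.sentry.io/0",
        integrations=[DjangoIntegration()],
    )

Once this is in place, unhandled exceptions from your views end up in Sentry’s web interface without any further logging code.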

Automation is central to good behaviour

My take on good behaviour is that you should either make it easy to do the good thing or make non-good behaviour visible.

As an example, take python releases. As a manager you can say “thou shalt make good releases”. Oh wow. An impressive display of power. It reminds me of a certain SF comic where, to teach them a lesson, an entire political assembly was threatened with obliteration from orbit. Needless to say, the strong words didn’t have a measurable effect.

You can say the same words at a programmer meeting, of course. “Let’s agree to make proper releases”. Yes. Right.

What do you have to do for a proper release?

  • Adjust the version in setup.py from 1.2.dev.0 to 1.2.

  • Record the release date in the changelog.

  • Tag the release.

  • Update the version number in setup.py to 1.3.dev.0.

  • Add a new header for 1.3 in the changelog.
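To make the setup.py steps concrete, here’s the relevant bit as a simplified sketch (the project name is made up):

    # setup.py shortly before releasing 1.2
    from setuptools import setup

    setup(
        name="myproject",        # made-up name
        version="1.2.dev.0",     # step 1: this becomes "1.2" for the release and the tag
        # ...
    )
    # Step 4: right after tagging, the version is bumped to "1.3.dev.0" again.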

Now… that’s quite a bit of work. If I’m honest, I trust about 40% of my colleagues to make that effort every time they release a package.

There is a better way. Those very same colleagues can be relied on to make perfect releases all the time if all they have to do is call bin/fullrelease and press ENTER a few times to do all of the above automatically. Thanks to zest.releaser.

Zest.releaser makes it easier and quicker to make good releases than it is to make bad/quick/sloppy releases by hand. Note that I didn’t write zest.releaser just for my current job: I needed it at previous jobs, too :-)

Further examples

Now… here are some further examples to get you thinking.

All of our projects are started with “nensskel”, a tool to create a skeleton for a new project (python lib, django app, django site). It uses “paste script”; many people now use “cookiecutter”, which serves the same purpose.

  • For all projects, a proper test setup is included. You can always run bin/test and your test case will run. You only have to fill it in.

  • bin/fullrelease, bin/pep8, bin/pyflakes: even if you haven’t installed those programs globally, they are right there in the project, so you have no excuse not to use them.

  • If you want to add documentation, sphinx is all set up for you. The docs/source/ directory is there and sphinx is automatically run every time you run buildout.

  • The README.rst has some easy do-this-do-that comments in there for when you’ve just started your project. Simple quick things like “add your name in the setup.py author field”. And “add a one-line summary to the setup.py and add that same one to the github.com description”.

    I cannot make it much easier, right?

    Now… quite a few projects still have this TODO list in their README.

    Just suggesting and not checking is not enough. A tool that checks for this specific “todo” comment in combination with github pull request integration would probably help.
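Such a check doesn’t have to be fancy. A hypothetical sketch (the filename and the exact marker text are assumptions about how the skeleton writes its TODOs), something jenkins or a pull request check could run:

    # check_readme_todos.py -- hypothetical helper, not an existing nensskel tool
    import sys

    TODO_MARKER = "TODO"  # assumption: the generated README marks leftover tasks like this


    def main(readme_path="README.rst"):
        with open(readme_path) as f:
            leftovers = [line.strip() for line in f if TODO_MARKER in line]
        if leftovers:
            print("Unfinished TODO items in %s:" % readme_path)
            for line in leftovers:
                print("  " + line)
            return 1  # non-zero exit code turns the check red
        return 0


    if __name__ == "__main__":
        sys.exit(main())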

Conclusion: you need automation to enable policy

You need automation to enable policy, but even that isn’t enough. I cannot possibly automatically write a one-line summary for a just-generated project. So I have to make do with a TODO note in the README and in the setup.py. Which gets disregarded more often than I like.

If even such simple things get disregarded, bigger things like “add a test” and “provide documentation” and “make sure there is a proper release script” will be hard to get right. I must admit to not always adding tests for functionality myself, though.

I’ll hereby torture myself with a quote. “Unit testing is for programmers what washing your hands is for doctors before an operation”. It is an essential part of your profession. If you go to the hospital, you don’t expect to have to ask your doctor to disinfect their hands before the operation. That’s expected. Likewise, you shouldn’t expect your clients to explicitly ask you for software tests: those should be there by default!

Again, I admit to not always adding tests. That’s bad. As a professional software developer I should make sure that at least 90% test coverage is considered normal at my company. In the cases where we measure it, coverage is probably around 50%. Which means “bad”. And that “probably” means we’re not even measuring it all the time. 90% should also be normal for my own code, and I don’t always attain that either.

Our company-wide policy should be to get our test coverage to at least 90%. But whether or not that’s our policy, we’ll never reach 90% if we don’t measure it.
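Measuring can itself be automated. A rough sketch using the coverage library’s Python API (the package name and the 90% threshold are assumptions; in practice you’d wire this into your test runner or CI setup):

    # run_coverage.py -- hypothetical wrapper around the test suite
    import sys
    import unittest

    import coverage

    cov = coverage.Coverage(source=["myproject"])  # "myproject" is a made-up package name
    cov.start()

    suite = unittest.defaultTestLoader.discover("myproject")
    unittest.TextTestRunner().run(suite)

    cov.stop()
    total = cov.report()  # prints the per-module table and returns the total percentage

    # A non-zero exit code below the agreed-on 90% turns the jenkins build red.
    sys.exit(0 if total >= 90 else 1)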

And that is the point I want to make. You need tools. You need automation. If you don’t measure your test coverage, any developer or management policy statement about it is effectively meaningless. If you have a jenkins instance that’s seriously neglected (70% of the projects are red), you effectively don’t have meaningful tests. Without a functioning jenkins instance (or travis-ci.org), you cannot properly say you’re delivering quality software.

Without tooling and automation to prove your policy, your policy statements are effectively worthless. And that’s quite a strong value statement :-)

 