Text-based acceptance testing using TextTest (Geoffrey Bache)

Tags: europython2009, europython

There are cases where unittest-style testing isn't the best choice. Text-based testing is a good alternative, and Geoffrey presents TextTest as a possible replacement, especially for acceptance testing. FIT tests are often used for acceptance testing, but according to Geoffrey that's basically just a nicer interface around regular unit tests.

Unit tests also mean you need a unittest framework for your language. Most common languages have one, but not all. Also, unix-style command line scripts are hard to unit test. And legacy systems: retrofitting an API later on just to test internals is hazardous. Geoffrey works for Jeppesen, which makes airline planning systems. There, the correct output isn't determinable beforehand (that is the whole point of a planning system), which doesn't play well with the common test setup where you need to know the answer in advance…

Text testing: use the textual information your system produces anyway, such as log files or command line output. It does mean you have to invest in that output: make your log files easy to read and rich in content.
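To make that concrete, here's a hypothetical stand-in (script name and output invented) for the kind of program TextTest tests: a command line script whose readable, deterministic text output is the thing you compare.

    #!/usr/bin/env python
    """plan.py: toy stand-in for a program under text-based test."""
    import sys

    def plan(flights):
        # One readable line per decision; sorted, so the output is
        # deterministic and easy to compare between runs.
        for flight in sorted(flights):
            print("assigned crew to flight %s" % flight)

    if __name__ == "__main__":
        plan(sys.argv[1:])

Running "python plan.py KL1234 KL0641" produces two stable lines of output; a text-based test saves them once as the expected result and diffs later runs against it.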

Text testing also means you can compare the output of various versions of your software. You do need some change management, as your software obviously changes.
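The comparison itself is ordinary text diffing. A minimal sketch with Python's standard library (the file names are made up):

    import sys
    import difflib

    # Output captured from two versions of the program under test.
    old = open("output.v1.txt").readlines()
    new = open("output.v2.txt").readlines()

    # Print a unified diff; an empty diff means behaviour is unchanged.
    sys.stdout.writelines(difflib.unified_diff(
        old, new, fromfile="output.v1.txt", tofile="output.v2.txt"))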

TextTest is open source, and his employer pays him to maintain the tool as so many internal projects use it. It starts everything under test via the command line. TextTest has a GUI to help you get started with test definitions. One of its features is the ability to filter out runtime-dependent text like timestamps, temporary file names and process IDs. And you can provide input for scripts (like pressing “y”).
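That filtering boils down to regular expressions that replace run-dependent details with stable placeholders before comparing. Here is just the idea in plain Python, not TextTest's actual configuration syntax (see its documentation for that); the patterns are illustrative:

    import re

    FILTERS = [
        (re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}"), "<timestamp>"),
        (re.compile(r"/tmp/\S+"), "<tempfile>"),
        (re.compile(r"pid \d+"), "pid <pid>"),
    ]

    def normalize(line):
        # Replace run-dependent text so two runs compare as equal when
        # they only differ in timestamps, temp files or process IDs.
        for pattern, replacement in FILTERS:
            line = pattern.sub(replacement, line)
        return line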

If several tests fail in exactly the same way, TextTest groups them in the report. Handy for the kind of tests TextTest runs, as external factors often influence a lot of tests in the same way.
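The grouping idea itself is simple: tests whose output changed in exactly the same way share one entry in the report. A sketch of that idea (not TextTest's internals):

    from collections import defaultdict

    def group_failures(failures):
        # ``failures`` maps test name -> normalized diff text. Tests
        # with identical diffs end up in one group, so a single
        # external change shows up once instead of fifty times.
        groups = defaultdict(list)
        for name, diff in sorted(failures.items()):
            groups[diff].append(name)
        return groups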

Some other features: there are various “data mining” tools for automatic log file interpretation. Performance testing is built-in. You can extend it with Python. And you can automatically store the reports per date in order to compare the output over time.

A common comment on text testing: there's a risk that subject experts get bored and accidentally accept output that TextTest flagged as wrong, so that the wrong result becomes the baseline from then on. a) Yes, you need discipline. b) The actual content that is tested isn't supremely important; the changes in the content are what is interesting. You record the change and thereby document that you're OK with something changing. That is the real information you want to have. View acceptance testing as “behaviour change management”, not “correctness assertion”.

Logging: also do debug-level logging. Developers often add temporary log or print statements to debug a problem. If you leave that debug-level logging in, you're building up knowledge about your system, and TextTest can read debug-level log files too. You're putting knowledge into the system instead of losing it after the programmer's debugging session.
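With Python's standard logging module that amounts to leaving the debug calls in and routing them to a file. A small sketch (logger name and message invented); note that the format deliberately omits timestamps, so the file stays comparable between runs:

    import logging

    logging.basicConfig(
        filename="debug.log",
        level=logging.DEBUG,
        # No timestamps in the format: keeps the log stable across runs.
        format="%(levelname)s %(name)s: %(message)s",
    )

    logger = logging.getLogger("planner")
    logger.debug("considering crew swap for flight KL1234")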

System testing is slow. Well, if it is important for you: throw more hardware at it. Parallelize. He runs a huge number of tests in just a minute.
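Since every test is just a command line run producing text, parallelizing is straightforward. A hedged sketch using the standard library (the directory layout and command are hypothetical):

    import subprocess
    from multiprocessing.dummy import Pool  # thread pool; the real work runs in subprocesses

    def run_test(test_dir):
        # Run the system under test in one test's directory and
        # capture its text output for comparison.
        result = subprocess.run(["python", "app.py"], cwd=test_dir,
                                capture_output=True, text=True)
        return test_dir, result.stdout

    if __name__ == "__main__":
        tests = ["tests/basic", "tests/overnight", "tests/crew_swap"]
        for name, output in Pool(8).map(run_test, tests):
            print(name, "->", len(output.splitlines()), "lines of output")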

You’ll still need unit tests (and/or doctests), but you'll probably need fewer of them.
