
Don't Write Test Cases

"Trying to improve software quality by increasing the amount of testing is like try[ing] to lose weight by weighing yourself more often."

by Ian Mallett

Introduction

Here's another ridiculous claim: test cases are stupid. They suck time and energy from a project, give a false sense of security, encourage laziness and inattention, and don't even improve correctness.

Note: this article was originally a single work, but has been split into two. A lot of the justification and elaboration now lives in my article about Test-Driven Development, which is the most common manifestation of unit testing.

The Basic Problem of Software Engineering

The most basic problem with a test case is that it ignores the real problem of software engineering. When teaching, I am often asked which programming language is easiest to use. The answer (briefly, mu) is that it's the wrong question. Languages aren't difficult to use because of syntax. They are difficult because they require you to think, and in this respect languages are roughly equal.

The hard part of writing software is collecting your ideas into a cogent whole. It's about organization. It's about conceptualizing something potentially very complicated and structuring it (in approximate order of importance) in a clear, simple, and efficient way. Every idiot with a compiler can write code, but it usually takes special training to even begin to grasp how to solve the above problem. It's amazing how often software engineers forget that this is what their degrees and certifications symbolize.

Tests: Bloat and Complexity

The problem with writing test cases is that they bloat and complicate the structure. A short and oversimplified maxim of the programming ancients is that more code is always worse. Perhaps it is better summed up by, allegedly, Bill Gates: "Measuring programming progress by lines of code is like measuring aircraft building progress by weight."

The simile is appealing. More code might mean you're making more progress, but it stands to reason that the less code you can get away with, the more likely your plane will fly. This makes a kind of intuitive sense. The best programmers hold entire modules in their minds as they work. If they must also hold the tests for all of that functionality, development time and the chance of error both compound geometrically. Design patterns exist to simplify design, but if you never get a chance to appreciate the underlying design because you're too busy updating tests, you run an increased risk of reinventing wheels or breaking something.

Test cases themselves are fraught with errors, and trying to operate on objects with internal state has led to whole testing frameworks that exist to work around the logistical nightmare of instantiating codependent objects just for testing purposes. The horror can escalate to extremes, as I show in my article about Test-Driven Development.

Production Costs and Laziness

Even if good design is not sacrificed, the fact that more work must be done to arrive at that "finished" design means that either the design will never be reached or at the very least that it will be approached far more slowly. The longer it takes to write something, the less likely it will ever be finished.

This hurts development costs, in the sense that more development effort must be expended and bringing people up to speed takes longer. It hurts time costs, in the sense that release cycles are longer. Even if the project does get finished, it was certainly not finished in the most efficient way possible.

All this contributes to stultification and stagnation. No one wants to work on a project that's just grinding through test case specification. More often than not, someone else got to design it. Someone else will do the implementation. You're just checking that someone in the future will do the work in a way someone else says is right. It's degrading and uninteresting.

Critical errors go unchecked because no one wanted to write a test case for a Can't-Happen bug. So now you're flying your airplane or shooting particle beams at a cancer, and someone dies because the programmers were too busy thinking about specifications and documentation and whether something makes a nice unit test to look carefully at the actual algorithm and remember that integers overflow if you add one to them too many times.
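That overflow takes only a few lines to demonstrate. In this sketch the counter and its width are invented for illustration; the point is that a fixed-width value silently wraps, exactly the Can't-Happen that happens:

```cpp
#include <cstdint>

// Hypothetical fixed-width counter. Adding one "too many times" does not
// fail loudly; the value silently wraps back to zero.
std::uint16_t increment(std::uint16_t counter) {
    // 65535 + 1 wraps to 0 (unsigned arithmetic is modulo 2^16 here).
    return static_cast<std::uint16_t>(counter + 1);
}
```

No test suite written against "reasonable" inputs will ever exercise the wrap; only a programmer who remembers the representation will.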

Effectiveness and a Substitute

I will concede that occasionally code will fail in a way that could have been caught easily with a well-written test. But this presupposes that every function be tested in this way. One might just as easily have made a mistake in writing the test, and ineffective test cases give developers a false sense of security. In my experience, tests are never written well; they don't catch the subtle errors that are the most problematic.

The most devious bugs are those that underlie algorithms, and unit testing cannot possibly hope to address these, since tests work only on canned examples. The stupid things people do test for (array-out-of-bounds, invalid dereferences, runtime type errors, and so on) are the sorts of things that take a programmer a few seconds to locate from a stack trace. Maybe you catch a few runtime bugs before they happen, but languages have better means.

This leads me to assertions. Dynamic assertions are available in just about every modern language, and many support compile-time assertions (like C++'s "static_assert"). These don't affect the finished product, they encode the programmer's intent directly, and they catch errors in the real world, not just on some expected data. They are far more effective and compact than test cases. Why not use them instead?
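Both kinds fit in a few lines. A minimal sketch, where wrap_index is a hypothetical helper invented for the example:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Compile-time assertion: checked before the program ever runs, at zero
// runtime cost. Fails the build, not the user.
static_assert(sizeof(std::uint32_t) == 4, "platform sanity check");

// Run-time assertion: encodes the programmer's intent (capacity must be
// nonzero) and fires on whatever real data flows through in practice,
// not on a canned example chosen in advance.
std::size_t wrap_index(std::size_t i, std::size_t capacity) {
    assert(capacity > 0 && "capacity must be nonzero");
    return i % capacity;
}
```

Compiling with NDEBUG defined strips the runtime assert entirely, which is why it doesn't affect the finished product.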

Maybe you think test cases can handle more complex data. This is not actually true. If an assert statement can't discover that input data is malformed, then your design needs refactoring. End of story. And, hey—if you don't use test cases, refactoring will be easy!
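As a sketch of what that refactoring might look like (the Record type and its invariant are hypothetical), validation of "complex" data concentrates into a single predicate, asserted at the boundary where the data enters:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical "complex" input data.
struct Record {
    std::string name;
    int age;
};

// One predicate captures what well-formed means for this type.
bool well_formed(const Record& r) {
    return !r.name.empty() && r.age >= 0 && r.age < 150;
}

int total_age(const std::vector<Record>& rs) {
    int total = 0;
    for (const Record& r : rs) {
        assert(well_formed(r));  // malformed data fails here, loudly
        total += r.age;
    }
    return total;
}
```

If the data is too tangled for a predicate like well_formed to exist, that is the design smell the paragraph above is pointing at.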

Conclusion

Of course everyone makes mistakes, even if they're rare. That's why good programmers test their code—as they go—against real data. You can even keep this data around to test your implementation later, but this is not unit testing.
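A minimal sketch of that practice, with an invented function and invented captured data: run the implementation on inputs saved from real use and check them against the recorded result.

```cpp
#include <cassert>
#include <numeric>
#include <vector>

// Hypothetical function under development.
double mean(const std::vector<double>& xs) {
    assert(!xs.empty());
    return std::accumulate(xs.begin(), xs.end(), 0.0) /
           static_cast<double>(xs.size());
}

// Inputs captured from a real run, kept around so the implementation can
// be re-checked later: a replay of real data, not a unit-test suite.
bool replay_saved_data() {
    const std::vector<double> saved = {2.0, 4.0, 6.0};  // from a real session
    return mean(saved) == 4.0;  // exact in IEEE double for these values
}
```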

But, the bottom line with all this is that software development doesn't work at all if the people doing it are incompetent. No amount of unit testing or code review or whatever buzzphrase du jour is dreamt up next can guarantee that a program will work as intended. Unit testing is a band-aid over a larger problem: writing good code in the first place.

Yes, edge and corner cases do happen, but that is why you need good programmers to think of them first. Unit tests discourage this sort of thought by giving false security in the form of a passed test suite. As with Test-Driven Development, unit testing in general leads to arrogance and shortsightedness, and to worse code.


Ian Mallett - 2018 - Creative Commons License