In 2019, Michael Lynch gave a talk at PyTexas titled “Why Good Developers Write Bad Tests”.
I was watching it recently and found myself nodding along to the points he was making, so I took some notes:
Be sure to also check out Michael’s blog post on the topic, which includes code snippets to demonstrate the points.
The reader should understand your test without reading any other code.
Applications are large and complex, so a well-organized application creates abstractions to convey concepts, which are then chunked together logically.
A consequence is that when you read application code, you might get a cursory understanding of what’s happening (particularly if methods are well named), but to get the full picture, you’ll very likely have to move between files.
This is the opposite of what good tests look like to Michael where the goal is to maximize obviousness, i.e., keep all of the information necessary to understand the test in one place to minimize cognitive load. The Arrange, Act, Assert approach works nicely here.
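A minimal sketch of the Arrange, Act, Assert layout (the function and names here are hypothetical, not taken from Michael’s talk): everything the test needs sits in one place, in three labeled steps.

```python
def add_to_cart(cart, item, price):
    """Toy production function: return a new cart with the item added."""
    items = dict(cart.get("items", {}))
    items[item] = price
    return {"items": items, "total": cart.get("total", 0) + price}


def test_add_to_cart_updates_total():
    # Arrange: all the state the test needs, visible right here
    cart = {"items": {}, "total": 0}

    # Act: the single behavior under test
    result = add_to_cart(cart, "apple", 3)

    # Assert: check the outcome against plain values
    assert result["total"] == 3
    assert result["items"] == {"apple": 3}


test_add_to_cart_updates_total()
```

The reader never has to leave the test to know what state existed, what happened, or what was expected.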
In production code, when code repeats itself multiple times, it’s an indication that there’s an opportunity to refactor. This is the DRY principle and it works well (though Sandi Metz makes a compelling argument to prefer duplication over the wrong abstraction).
Michael argues that testing is an altogether different beast. In testing, if the goal is to let a reader understand the test without reading other code, then repeating the setup, even when it’s redundant with another test, is helpful.
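To make the idea concrete, here is a hypothetical sketch (not from the talk) where two tests deliberately repeat their setup instead of sharing it:

```python
def apply_discount(price, percent):
    """Toy production function: price after a percentage discount."""
    return price * (100 - percent) / 100


def test_full_discount_makes_item_free():
    # The setup is repeated in each test rather than extracted into a
    # shared fixture -- slightly redundant, but each test reads on its own.
    price = 50
    assert apply_discount(price, 100) == 0


def test_zero_discount_leaves_price_unchanged():
    price = 50
    assert apply_discount(price, 0) == 50


test_full_discount_makes_item_free()
test_zero_discount_leaves_price_unchanged()
```

If the setup lived in a shared helper, a reader of either test would have to jump to the helper to know what `price` was.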
Improving your production code simplifies your test code.
If a class is difficult to instantiate in tests, so that a test requires a lot of setup that seems unrelated to what is actually being tested, instead of using a helper function to make testing easier, see if the production code can be refactored.
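As a hypothetical illustration of that refactoring (my own sketch, not Michael’s example): suppose a class originally pulled in collaborators it didn’t need for the behavior under test, and the fix is to extract the logic into a plain object.

```python
# Before (hypothetical): Account required a database and a mailer just to
# compute a balance, so every test needed unrelated setup:
#
#     account = Account(db=FakeDb(), mailer=FakeMailer(), owner="kate")
#
# After: the calculation lives in a plain object with no collaborators,
# so tests can instantiate it directly.
class Account:
    def __init__(self, owner, transactions=()):
        self.owner = owner
        self.transactions = list(transactions)

    def balance(self):
        """Sum of all transactions; no database or mailer involved."""
        return sum(self.transactions)


def test_balance_sums_transactions():
    account = Account(owner="kate", transactions=[10, -3])
    assert account.balance() == 7


test_balance_sums_transactions()
```

The test got simpler not because of a clever test helper, but because the production code improved.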
In the spirit of maximizing obviousness, if tests use helper methods, don’t bury the key value for the test. Instead, the helper method should take the key value as a parameter so that the reader can see how the numbers relate.
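A small hypothetical sketch of that rule: the helper fills in the boring fields, but the value the test actually cares about is passed in at the call site.

```python
def make_user(age):
    """Test helper: the value the test cares about (age) is a parameter,
    so the reader never has to open the helper to find it."""
    return {"name": "placeholder", "email": "placeholder@example.com", "age": age}


def is_adult(user):
    """Toy production function."""
    return user["age"] >= 18


def test_seventeen_year_old_is_not_an_adult():
    user = make_user(age=17)  # the key value is visible right here
    assert not is_adult(user)


test_seventeen_year_old_is_not_an_adult()
```

Had the helper hard-coded `age`, the relationship between the setup and the assertion would be buried.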
Name your tests so well that others can diagnose failures from the name alone.
Unlike production code, where methods must be called to be used (and their names therefore typed by engineers later), test methods are only called by the test framework. So err on the side of verbosity, so that when a test fails, it is clear what is failing.
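A sketch of what that verbosity buys (hypothetical function and names): the failure report for this test is self-diagnosing, because nobody will ever have to type the name by hand.

```python
def withdraw(balance, amount):
    """Toy production function: disallow overdrafts."""
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount


# A terse name like test_withdraw tells you nothing when it fails.
# This name alone says what broke and under what condition.
def test_withdraw_raises_value_error_when_amount_exceeds_balance():
    try:
        withdraw(balance=10, amount=20)
        assert False, "expected ValueError"
    except ValueError:
        pass


test_withdraw_raises_value_error_when_amount_exceeds_balance()
```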
In production code, magic numbers are eschewed for named constants which provide meaning. In test code, however, magic numbers are helpful. Not only do they simplify the code, but, critically, they avoid the need to copy calculations from production code to a test (which would make a problem in the calculation difficult to detect).
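A hypothetical sketch of the point: the test asserts against a hand-computed literal rather than re-running the production formula, so a bug in the formula can’t hide in both places at once.

```python
def monthly_price(annual_price):
    """Toy production function: annual price spread over 12 months."""
    return round(annual_price / 12, 2)


def test_monthly_price():
    # Hand-computed literal: 120 / 12 = 10.0. Writing the expectation
    # as annual_price / 12 would just duplicate the production code and
    # mask any bug in that calculation.
    assert monthly_price(120) == 10.0


test_monthly_price()
```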
I’ve been having a really difficult time writing tests of late, and I think in part it’s because I didn’t have a good grounding in testing practices. Michael’s talk clarified a few of the things I’ve been struggling with - particularly the differences between test code and production code - and for that I’m grateful.
Hi there and thanks for reading! My name's Stephen. I live in Chicago with my wife, Kate, and dog, Finn. Want more? See about and get in touch!