Fast feedback is everything

As software developers, we spend a lot of our time writing code. Whether we're implementing new features, fixing nasty bugs, or doing boring maintenance work, there's always some code we either create from scratch or try to modify for the better. When writing code, we need confidence in what we do. We need to know whether our changes work as intended. That's why we write tests after or, preferably, before touching any code. So, inevitably, a huge part of our daily work comes down to these two things: programming and testing.


Programming and testing are, of course, rather vague terms; what exactly they involve depends on how you approach software development. For example, when practicing test-driven development (TDD), you typically follow these steps:

  1. Write a test that defines the desired behavior.
  2. Run the test to see if it fails.
  3. Write some code to make the test pass.
  4. Run the test to see that it passes.
  5. Refactor the code (and re-run the test).

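With a tool like RSpec, for instance, one pass through that loop boils down to editing code and re-running the same command over and over (the spec file name below is just a placeholder):

    # 1./2. Write a test for the desired behavior, run it, watch it fail
    rspec spec/calculator_spec.rb

    # 3./4. Write just enough code to make the test pass, then run it again
    rspec spec/calculator_spec.rb

    # 5. Refactor, then re-run the test to make sure it still passes
    rspec spec/calculator_spec.rb

Three of the five steps are nothing more than re-running the same command -- that's the loop whose speed this post is about.
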
Even if you don't practice TDD (you totally should!), you're probably still going to:

  1. Write some code.
  2. Write a test after the fact.
  3. Run the test to see if it's successful.

In order to write an actual program, you have to run through these steps -- this loop -- again and again. The point is that no matter which development method you prefer, constantly jumping back and forth between programming and testing comes at a price: it doesn't just cost you time (especially with long-running tests); the context switching involved also drains your mental energy, which can ultimately destroy your productivity. That's why I believe the following statement is so important -- and why I never tire of repeating it whenever I get the chance:


When it comes to programming and testing, fast feedback is everything.


Here, fast feedback means that the time between changing code (or tests) and getting results from running the tests is reduced to a minimum. In other words, you end up with a fast edit-compile-test loop (the compile step is optional, of course). Or, as Joel Spolsky put it: "the faster the Edit-Compile-Test loop, the more productive you will be".


There are a couple of things you can do to shorten the feedback loop. The first technique that comes to mind is isolated testing, which involves eliminating (slow) external dependencies like databases. This topic has received a lot of attention lately and I won't go into it here (by the way, TDD isn't dead). Suffice it to say that you should absolutely invest in your tests and make them fast. Besides making the tests themselves faster, however, you can also optimize the way you run them.

 

Running tests the fast way

At first glance, the way you go about running tests might not seem to have a big impact on the edit-compile-test loop. If a test takes a minute to finish, does it really matter whether we can shave off a second or two by tweaking the running step? Yes, it does. Seconds add up over time, and each additional manual step requires a little more brain power -- the cost of context switching is significant.


I don't like wasting my time with work that can easily be avoided. If there's a way to minimize the cost of context switching, I'm more than happy to add it to my toolbox. By following these three steps, I managed to run tests faster and, more importantly, become more productive.

  1. Figure out how to execute individual tests. During development, don't run the entire test suite each time you change a bit of code. Aside from the fact that running all tests is often too slow, it's always better (and faster!) to first get feedback on local code changes before integrating with other code. Reducing the scope and testing a small subset of code in isolation is not only faster, it also helps you find bugs, and it's a must-have for TDD. (It goes without saying that at some point you or your continuous integration system should run all the tests.)
  2. Write a test runner. This is optional and depends on your test framework/setup. For example, RSpec already allows you to execute a specific test file or only a single test case in that file (see the command examples after this list). Unfortunately, it's not always that easy. Sometimes you need to execute additional setup/teardown tasks, or running tests at the package level is the best you can do. That's where a test runner comes in handy. In its most basic form, it's a shell script that takes a single argument -- the filename of the test you're currently working on -- and does everything required to run that test; a minimal sketch of such a script also follows this list. I usually store this script as script/test in every project that needs it.
  3. Run tests using a keyboard shortcut. For fast feedback, it's important not to leave your editor while hacking on code. Configure your editor of choice to execute tests when you press a key combination. At a minimum, set up a shortcut to run the test currently open in your editor by passing its filename directly to the respective testing tool (e.g. rspec) or to a custom test runner (see step #2 and the note after the sketch below). If possible, it's also useful to have a shortcut for running the test case under the cursor, which narrows the focus of your testing even further.
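
To make this concrete, the RSpec capability mentioned in step 2 -- running a specific file or a single test case -- comes down to a few command-line invocations (the spec file and line number here are made up):

    # Run a single spec file instead of the whole suite
    rspec spec/models/user_spec.rb

    # Run only the example (or group) defined at line 23 of that file
    rspec spec/models/user_spec.rb:23

    # Run all examples whose description matches a given string
    rspec spec/models/user_spec.rb -e "validates email"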

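As for step 2, here is a minimal sketch of what such a script/test runner might look like in an RSpec-based project; the use of Bundler and the placement of the setup step are assumptions that will differ from project to project:

    #!/bin/sh
    # script/test -- run a single test file, doing whatever setup the project needs.
    # Usage: script/test spec/path/to/some_spec.rb
    set -e

    test_file="$1"
    if [ -z "$test_file" ]; then
        echo "usage: $0 <test file>" >&2
        exit 1
    fi

    # Project-specific setup (e.g. preparing test fixtures) would go here.

    # Running RSpec through Bundler is an assumption; use whatever fits your project.
    bundle exec rspec "$test_file"

With such a script in place, step 3 boils down to teaching your editor to call it. In Vim, for example, a mapping that runs something along the lines of :!script/test % (Vim substitutes % with the name of the current file) is all it takes; other editors offer similar hooks.
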
Examples

Let me give you three concrete examples where I've successfully applied the steps described above. Note that all examples are related to infrastructure automation in one way or another, both because that's an area where fast feedback matters all the more and because it's what I do for a living. You will also see that I'm a fan of Vim, but it should be straightforward to achieve the same with other editors. Here we go.

  • rspec-puppet is a test framework that allows you to write RSpec tests for Puppet code. When I started working at Jimdo in 2013, where Puppet is used for most things, it wasn't possible to run individual tests by simply pointing rspec at a test file in our codebase. One reason is the unusual way test fixtures are handled in the Puppet world. To remedy this, I wrote a test runner script. Together with vim-spec-runner, a Vim plugin that automatically sets up keyboard shortcuts for running tests, we had everything in place to test our Puppet code at the press of a key.
  • I primarily developed chef-runner for use with Vim. Instead of jumping back and forth between editing a Chef recipe and running the painfully slow vagrant provision command, I wanted to be able to change code and get immediate feedback without having to leave the editor. chef-runner's ability to rapidly provision a machine with just a single Chef recipe -- the file currently open in Vim -- made this possible. There's no Vim plugin; the setup is as simple as sticking a one-liner in your .vimrc file.
  • chef-runner used to be a 100-line shell script before I decided to rewrite it in Go. Go comes with first-class testing support: the go test command runs tests (_test.go files) and reports the results. However, the tool itself can only run tests for one or more packages specified by their import paths; it cannot handle arbitrary _test.go files. In order to execute the package tests for the source file opened in Vim, I wrote a test runner script for Go -- a bare-bones sketch of the idea follows this list. (I later learned that the GoTest command of vim-go does something similar.)
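
To illustrate that last example, a bare-bones version of such a runner might look like the sketch below; the script name and the -v flag are my own choices here, not necessarily what the original script does:

    #!/bin/sh
    # Hypothetical runner: execute the package tests for a given Go source file.
    # Usage: script/gotest path/to/foo.go (or path/to/foo_test.go)
    set -e

    src_file="$1"
    if [ -z "$src_file" ]; then
        echo "usage: $0 <go source file>" >&2
        exit 1
    fi

    # go test operates on packages, not files, so run it from the file's directory.
    cd "$(dirname "$src_file")" && go test -v

Bound to a key in the editor, this gives you package-level feedback for whatever Go file you happen to be editing.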

Wrapping up

as an engineer, you should constantly work to make your feedback loops shorter in time and/or wider in scope — Kent Beck (@KentBeck) November 11, 2014

After reading this post and Kent's fitting tweet, I hope you'll agree that fast feedback plays an important role in software development. Optimizing the way we run tests is one effective way to shorten the feedback loop and, as a result, get things done.

 

Acknowledgement: The ideas I presented in this post were heavily inspired by the excellent Destroy All Software screencasts by Gary Bernhardt.


 "Fast feedback is everything" was originally published by Mathias Lafeldt on his blog, mlafeldt.github.io. You can find him on Twitter here.