Mine is two seconds.
Here at Power of Two Games, we write all our code with test-driven development. C++ tests use the fantastic UnitTest++ framework (no big surprise there 🙂 ) and we run all unit tests automatically as the last step of our build process. That means that every time we build anything, the tests for that library or program get executed. Every time. No exceptions.
None of that would be an issue except that TDD creates lots of unit tests and encourages almost constant incremental builds. In the past I’ve stressed how important it is to keep them fast, but this week I was able to really feel what a dramatic difference slow unit tests make.
For many months, our unit tests ran under one second. Each library has its own set of unit tests, and so does the game and each tool. So even though we were writing lots of unit tests, we weren’t always running them all at the same time. There was a definite snappiness to coding: write test, compile, fail, write code, compile, pass, move on. Go, go, go.
As the game project got larger, we started accumulating more unit tests and testing times started creeping up into the second range. At the time, I couldn’t quite put my finger on what had changed, but there was a definite change in feel. Things weren’t quite as snappy.
Then, a couple of weeks ago, we hit the two-second mark and things definitely changed.
> bin\SweetPea_d.exe -unittest
> Success: 1145 tests passed.
> Test time: 2.13 seconds.
As the tests were running, my mind would wander, thinking about what test I would write next, or whether the code I had just written would be cache-friendly, or whether we’d do Honey’s or Manhattan for lunch. I would be jerked out of my state of flow. It wasn’t just me, Charles felt it too. The difference in productivity was huge.
For those of you laughing at those times and dismissing them as insignificant, you need to keep in mind that our build times are very short:
- An incremental build with just a change in a cpp file takes under a second (including link times).
- A full build of *all* our runtime code, including libraries, is around 50 seconds on a dual-core machine.[1]
So adding 2 seconds to the incremental build time means going from “practically instant” to “OK, I’m waiting…”.
Of course, none of this has anything to do with unit tests in particular. It’s the total incremental build time (including compiling, linking, and running unit tests) that matters. An incremental build that takes 3 seconds with no unit tests is just as bad as one that is under one second with 3-second unit tests. They both interrupt the flow the same way.
I’ve often thought about how company size affects communication and “feel”. It’s close to a step function on a logarithmic scale: there’s a distinct jump from a company of 3 to a company of 10, to one of 35, to one of 100+.
I think something similar applies to build times:[2]
- 0 to 2 seconds. Programmer stays in the flow. Productivity is way high.
- 2 to 8 seconds. Flow is broken. Definite feel of things being a bit painful and sluggish.
- 8 to 30 seconds. Attention wanders to other parts of the code, email, etc. Multitasking kicks in and overall productivity plummets.
- 30+ seconds. Builds seen as batch processing. After each build is started, some other action is started. Maybe even physically get up, get a drink, start a conversation. Full brain reboot between coding tasks.
- 5+ minutes. Programmers bang their heads against the desk, actively consider the merits of different forms of suicide, or at the very least start polishing their resumes.
Having been spoiled by sub-second build times, neither of us wanted to put up with the broken flow of 2+ second builds, so we decided it was time to fix things.[3]
The good news is that our slow build times were solely due to running the unit tests. 2.13 seconds to run about 1,100 tests is really bad, even on a so-so desktop machine like the ones we can afford. That’s almost 2 ms per test! Remember that these are unit tests, not functional tests, so I expect to be able to run several thousand of them in under one second. Something was way off.
Things had gotten much worse in the last couple of weeks, so it wasn’t a slow, steady creep up. Chances are we had introduced something very slow recently, which meant it was probably easy to fix.
As a first pass, we turned on the option in UnitTest++ to fail any test that takes over a certain amount of time:

```cpp
// Fail any test that takes longer than 5 ms to run.
RunAllTests(reporter, UnitTest::Test::GetTestList(), NULL, 5);
```
That brought about tons of failing tests, most of them added recently and creating one particular object in their fixtures. Digging a bit deeper, that particular class had an array of 25,000 (yes, that many zeros) objects. Those objects were plain-old-data structures, but had a default constructor. Even though the constructor was simply initializing its members to zero, that was causing a significant slowdown when so many objects were created. To make things worse, some of the code I had written was iterating over the array resetting the entries or shifting them up or down, repeatedly calling the constructor. Ouch!
A few minutes and lots of code hacking later, the object didn’t call a single constructor anymore. Our test times went down by almost half. Victory!
We were already at 1 second, which is in the acceptable range, but since we were at it, maybe we could improve things a bit more. After all, 1 ms per test is still too slow.
UnitTest++ wasn’t reliably reporting tests that went over 1 ms: different runs produced very different results. Either the timings were too spread out, or other processes on the machine were interfering too much.
About a fifth or so of our tests use Havok. Not with mocks, but directly. They each initialize Havok, create a world, and deal with a simple rigid body, constraint, or something of the sort. I suspected all along that the Havok system initialization and shutdown for each test was probably causing quite a slow down, so I changed it to be initialized only once before we ran any tests.
And that did the trick. With that change we were down to 0.16 seconds. Back to snappy, happy land!
> bin\SweetPea_d.exe -unittest
> Success: 1145 tests passed.
> Test time: 0.16 seconds.
In release mode things are even better, but we do most development in debug mode, so that’s the really important one:
> bin\SweetPea.exe -unittest
> Success: 1145 tests passed.
> Test time: 0.06 seconds.
Perhaps sub-second build times are not realistic for all projects, but I think there’s a lot to be gained by keeping compile, link, and test times as short as possible. For tips on how to keep compile times down, check out these articles I wrote a while ago. If your unit tests are slow and you really can’t speed them up anymore, consider splitting them into two sets: one with the important or recent tests that runs with every build, and another, more comprehensive one that runs on the build server.
Even if you only cut down from 20 seconds to 10, you’ll be making a huge shift in how you’re able to work and iterate on your code.
[1] We’d love to keep detailed metrics with each build: executable size, number of lines of code, number of tests, etc. Unfortunately, when the rubber hits the road, that ends up in the “nice to have” category, and in a two-man startup we simply have no resources to spare.
[2] That’s a whopping statistical population of one, since I’m only talking about my own experience. Still, I suspect the same applies to many other programmers, with the function scaled by some constant.
[3] A perfect use of the agile mentality: fix things when you need them fixed, not before. But make sure you do fix them when you need them. Otherwise it’s not agile, it’s just lazy.