Mine is two seconds.
Here at Power of Two Games, we write all our code with test-driven development. C++ tests use the fantastic UnitTest++ framework (no big surprise there 🙂 ) and we run all unit tests automatically as the last step of our build process. That means that every time we build anything, the tests for that library or program get executed. Every time. No exceptions.
None of that would be an issue except that TDD creates lots of unit tests and encourages almost constant incremental builds. In the past I’ve stressed how important it is to keep them fast. But this week I was able to really feel what a dramatic difference slow unit tests make.
For many months, our unit tests ran under one second. Each library has its own set of unit tests, and so does the game and each tool. So even though we were writing lots of unit tests, we weren’t always running them all at the same time. There was a definite snappiness to coding: write test, compile, fail, write code, compile, pass, move on. Go, go, go.
As the game project got larger, we started accumulating more unit tests and testing times started creeping up into the second range. At the time, I couldn’t quite put my finger on what had changed, but there was a definite change in feel. Things weren’t quite as snappy.
Then, a couple of weeks ago, we hit the two-second mark and things definitely changed.
> bin\SweetPea_d.exe -unittest
> Success: 1145 tests passed.
> Test time: 2.13 seconds.
As the tests were running, my mind would wander, thinking about what test I would write next, or whether the code I had just written would be cache-friendly, or whether we’d do Honey’s or Manhattan for lunch. I would be jerked out of my state of flow. It wasn’t just me, Charles felt it too. The difference in productivity was huge.
The Zones
For those of you laughing at those times and dismissing them as insignificant, you need to keep in mind that our build times are very short:
- An incremental build with just a change in a cpp file takes under a second (including link times).
- A full build of *all* our runtime code, including libraries, is around 50 seconds on a dual-core machine [1].
So adding 2 seconds to the incremental build time takes it from “practically instant” to “OK, I’m waiting…”.
Of course, none of this has anything to do with unit tests in particular. It’s the total incremental build time (compiling, linking, and running unit tests) that matters. An incremental build that takes 3 seconds with no unit tests is just as bad as a sub-second build followed by 3 seconds of unit tests. They both interrupt the flow the same way.
I’ve often thought about how a company’s size affects communication and “feel”. It’s close to a step function on a logarithmic scale: there’s a distinct jump from a company of 3 to a company of 10, to one of 35, to one of 100+.
I think something similar applies to build times [2]:
- 0 to 2 seconds. Programmer stays in the flow. Productivity is way high.
- 2 to 8 seconds. Flow is broken. Definite feel of things being a bit painful and sluggish.
- 8 to 30 seconds. Attention wanders to other parts of the code, email, etc. Multitasking kicks in and overall productivity plummets.
- 30+ seconds. Builds seen as batch processing. After each build is started, some other action is started. Maybe even physically get up, get a drink, start a conversation. Full brain reboot between coding tasks.
- 5+ minutes. Programmers bang their heads against the desk, actively consider the merits of different forms of suicide, or at the very least start polishing their resumes.
The Fix
Having been spoiled by sub-second build times, neither one of us wanted to put up with the broken flow of 2+ second builds, so we decided it was time to fix things [3].
The good news is that our slow build times were solely due to running the unit tests. 2.13 seconds to run about 1,100 tests is really bad, even on a so-so desktop machine like the ones we can afford. That’s almost 2 ms per test! Remember that these are unit tests, not functional tests, so I expect to be able to run several thousand of them in under a second. Something was way off.
Things had gotten much worse in the last couple of weeks, so it wasn’t a slow, steady creep up. Chances are we had introduced something very slow recently, which meant it was probably easy to fix.
As a first pass, we turned on the option to fail any test that takes over a certain amount of time in UnitTest++:
RunAllTests(reporter, UnitTest::Test::GetTestList(), NULL, 5); // fail any test over 5ms
That brought out tons of failing tests, most of them added recently, all creating one particular object in their fixtures. Digging a bit deeper, that class contained an array of 25,000 (yes, twenty-five thousand) objects. Those objects were otherwise plain-old-data structures, but they had a default constructor. Even though the constructor simply initialized its members to zero, creating that many objects caused a significant slowdown. To make things worse, some of the code I had written was iterating over the array resetting the entries or shifting them up or down, repeatedly calling the constructor. Ouch!
A few minutes and lots of code hacking later, and the object didn’t call a single constructor anymore. Our test times went down by almost half. Victory!
That put us at about 1 second, which is in the acceptable range, but while we were at it, maybe we could improve things a bit more. After all, 1 ms per test is still too slow.
UnitTest++ wasn’t flagging tests over 1 ms very reliably: different runs produced very different results. Either the timings were too noisy at that resolution, or other processes on the machine were interfering too much.
About a fifth or so of our tests use Havok. Not with mocks, but directly. They each initialize Havok, create a world, and deal with a simple rigid body, constraint, or something of the sort. I suspected all along that the Havok system initialization and shutdown for each test was probably causing quite a slow down, so I changed it to be initialized only once before we ran any tests.
And that did the trick. With that change we were down to 0.16 seconds. Back to snappy, happy land!
> bin\SweetPea_d.exe -unittest
> Success: 1145 tests passed.
> Test time: 0.16 seconds.
In release mode things are even better, but we do most development in debug mode, so that’s the really important one:
> bin\SweetPea.exe -unittest
> Success: 1145 tests passed.
> Test time: 0.06 seconds.
Perhaps sub-second build times are not realistic for all projects, but I think there’s a lot to be gained by keeping compile, link, and test times as short as possible. For tips on how to keep compile times down, check out these articles I wrote a while ago. If your unit tests are slow and you really can’t speed them up any more, consider splitting them into two sets: a set of important or recent tests that runs with every build, and another, more comprehensive set that runs on the build server.
Even if you only cut down from 20 seconds to 10, you’ll make a huge difference in how you’re able to work and iterate on your code.
[1] We’d love to keep detailed metrics with each build: executable size, number of lines of code, number of tests, etc. Unfortunately, when the rubber hits the road, that ends up in the “nice to have” category, and we simply have no resources to spare at a two-man startup.
[2] For a whopping statistical population of one, since I’m only talking about my own experience. Still, I suspect the same applies to many other programmers, perhaps with the function scaled by some constant.
[3] Perfect use of agile mentality. Fix things when you need them fixed, not before. But make sure you do fix them when you need them. Otherwise it’s not agile, it’s just lazy.
Hi Noel,
For my game I currently have around 60,000 lines, including some tests. My build time for modifying just a single .cpp file is around 30 seconds: around 5 seconds for the library in question, 10–15 to link the test runner, and 10–15 for the game to link.
It is getting to be very frustrating – 2 secs would be a dream :). I am using CxxTest and Visual Studio 2005 on a Q6600 quad-core with 3 GB of RAM and a reasonably fast HD, and I am also using precompiled headers.
Do you have any general tips on getting the compile time down? I’ve always had a feeling it was slow, but I now realise it’s massively slow!
Thanks for your help
All the best,
Ash
Hi Ash,
My best advice on compile times is in this article: http://gamesfromwithin.com/?p=8
Basically it comes down to:
– Use include guards
– Make the first include of each cpp file its corresponding .h file
– Keep what you have in a header file to a minimum (forward declare instead of include, etc)
– Keep your logical dependencies to a minimum (for example, don’t include windows.h unless you absolutely have to).
Precompiled headers can be OK, but I’m not a huge fan of them. Maybe for some really heavy headers like windows or Havok if you use them in most of your code.
The times you’re quoting sound pretty high. Are you using STL? That’s a time sink for sure!
Hi Noel,
Thanks for the quick reply – I found the article soon after my post – sorry I should have done a more extensive search!
I tried to get the analysis perl script working, but came across a few problems. perlmod seems to get confused easily (much like myself 🙂 ) and I had a number of unmatched-bracket errors. I did get it running against a particular project, but it reported a lot of “useless” headers that were actually quite useful (essential to compile) :).
I’m not sure what I’m doing wrong, but I’ll hopefully have a chance to look at it after Christmas. I haven’t used doxygen before; I was creating the Doxyfile using the win32 wizard thingy.
I am using quite a lot of STL and Boost, I am also using Ogre.
I’ll have to take a serious look at ways to reduce it; it can’t continue at this speed. I am getting distracted while compiling, as you mention in the article, and it makes it really hard to keep the flow.
Thanks again!
All the best,
Ash
If you’re using quite a lot of STL and Boost, that’s your answer for long compile and link times 🙁 I’m not sure what you can do about it other than minimize their use and how far they extend into your code (which can be difficult once they’re in), and keep them away from new code you write, preferably in a separate library.
Personally, I haven’t found STL or Boost to be useful for game runtime in a console environment. On a PC I might consider it, but I would probably choose not to because of their effect on build times like you’re experiencing.