Writing Reusable Code

Some people asked what I meant by a “toolkit architecture” in the previous post about my middleware fears. It turns out I wrote about that in a previous Inner Product column that for some reason I never reposted here. By the time I wrote this (late 2008), I already wasn’t very concerned about writing reusable code, so the column focuses mostly on using other people’s code and how I wanted it to be architected.

Data-Oriented Design Now And In The Future

There has been a lot of recent discussion (and criticism) of Data-Oriented Design. I want to address some of the issues that have been raised, but before that, I’ll start with this reprint from my most recent Game Developer Magazine column. If you have any questions you’d like addressed, write a comment and I’ll try to answer everything I can.

 

Last year I wrote about the basics of Data-Oriented Design (see the September 2009 issue of Game Developer). In the time since that article, Data-Oriented Design has gained a lot of traction in game development, and many teams are now thinking in terms of data for their more performance-critical systems.

Start Pre-allocating And Stop Worrying

One of the more frequent questions I receive is what kind of memory allocation strategy I use in my games. The quick answer is none (at least frame to frame; I do some allocation at the beginning of each level on a stack-based allocator). This reprint of one of my Inner Product columns covers quite well how I feel about memory allocation.

We’ve all had things nagging us in the back of our minds. They’re nothing we have to worry about this very instant, just something we need to do sometime in the future. Maybe that’s changing those worn tires in the car, or making an appointment with the dentist about that tooth that has been bugging you on and off.

Dynamic memory allocation is something that falls in that category for most programmers. We all know we can’t just go on allocating memory willy-nilly whenever we need it, yet we put off dealing with it until the end of the project. By that time, deadlines are piling on the pressure and it’s usually too late to make significant changes. With a little bit of forethought and pre-planning, we can avoid those problems and be confident our game is not going to run out of memory in the most inopportune moment.

On-Demand Dynamic Memory Allocation

The easiest way to get started with memory management is to allocate memory dynamically whenever you need it. This is the approach many software engineering books consider ideal, and it’s often encouraged in Computer Science classes.

It’s certainly an easy approach to use. Need a new animation instance when the player is landing on a ledge? Allocate it. Need a new sound when we reach the goal? Just allocate another one!

On-demand dynamic memory allocation can help keep memory usage to a minimum, because you only allocate the memory that you need and no more. In practice it’s not quite as neat and tidy, because there can be a surprisingly large amount of overhead per allocation, which adds up if programmers become really allocation-happy.

It’s also a good way to shoot yourself in the foot.

Games don’t live inside a Computer Science textbook, so we have to deal with real world limitations, which make this approach cumbersome, clunky, and potentially disastrous. What can go wrong with on-demand dynamic memory allocation? Almost everything!

Limited Memory

Games, or any software for that matter, run on machines with limited amounts of memory. As long as you know what that limit is, and you keep extremely careful track of your memory usage, you can remain under the limit. However, since the game is allocating memory any time it needs it, there will most likely come a time when the game tries to allocate a new block but there is no memory left. What can you do then? Nothing easy I’m afraid. You can try to free an older memory block and allocate the new one there, or you can try to make your game tolerant to running out of memory. Both those solutions are very complex and difficult to implement correctly.

Even setting memory budgets and sticking to them can be very difficult. How can a designer know that a given particle system isn’t going to run out of memory? Are these AI units going to create too many pathfinding objects and crash the game? Hard to say until we run the game in all possible combinations. And even then, how do you know it isn’t going to crash five minutes later? Or ten? It’s almost impossible to know for certain.

If you insist on using this approach, at the very least you should tag all memory allocations so you have an idea of how memory is being used. You can tag each allocation either based on the system that initiated it (physics, textures, animation, sound, AI, etc.) or on the filename where it originated, which has the advantage that it can be automated and should still give you a good picture of overall memory usage.
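As a rough illustration, here’s a minimal sketch of filename-based tagging. TaggedAlloc, MemTag, and TAGGED_ALLOC are hypothetical names, and a real version would also track frees and per-file totals.

#include <stdlib.h>

// Hypothetical per-system tags; add one per subsystem you care about.
enum MemTag { MemTag_Physics, MemTag_Textures, MemTag_Animation, MemTag_Sound, MemTag_AI, MemTag_Count };

static size_t g_bytesUsed[MemTag_Count];

void* TaggedAlloc(size_t size, MemTag tag, const char* file, int line)
{
    g_bytesUsed[tag] += size;       // a real version would also record (file, line) and subtract on free
    (void)file; (void)line;
    return malloc(size);
}

// Wrapping the call in a macro automates the filename tag.
#define TAGGED_ALLOC(size, tag) TaggedAlloc((size), (tag), __FILE__, __LINE__)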

Memory Fragmentation

Even if you take great pains not to exceed the available memory, you might still run into trouble because of memory fragmentation. You might have enough memory for a new allocation, but in the form of many small memory blocks instead of one large contiguous block. Unless you provide your own memory allocation mechanism, fragmentation is very hard to track on your own, so you can’t even see it coming until an allocation fails.

Virtual Memory

Virtual memory could solve all those problems. In theory, if you run out of real memory, the operating system swaps out some older, unused pages to disk and makes room for the new memory you requested. In practice, it’s just a bad caching scheme: it can be triggered at the worst possible moment, and it doesn’t know anything about the data it’s swapping out or how your game uses it.

Games, unlike most other software, have a “soft realtime” requirement: The game needs to keep updating at an acceptable interactive rate, which is somewhere around 15 or more frames per second. That means that gamers are going to make a trip to the store to return your game if it pauses for a couple of seconds every few minutes to “make some room” for new memory. So relying on virtual memory isn’t a particularly attractive solution.

Additionally, lots of games run on platforms with fixed amounts of RAM and no virtual memory. So when memory runs out, things won’t get slow and chuggy; they’ll crash hard. When the memory is gone, it’s really gone.

Performance Problems

Some performance issues are relatively easy to track down and fix, usually the ones that occur every frame and happen in a single spot: some expensive operation, an O(n³) algorithm, and so on. Then there are the performance problems introduced by dynamic memory allocation, which can be really hard to track down.

Standard malloc returns memory pretty quickly and usually doesn’t even register on the profiler. Every so often, though, once the game has been running for a while and memory is pretty fragmented, it can spike and cause a significant delay for just a frame. Trying to track down those spikes has caused more than one programmer to age prematurely. You can avoid some of those problems by using your own memory manager, but don’t attempt to write a generic one yourself from scratch. Instead, start with one of the ones listed in the references.

Malloc spikes are not the only source of performance problems. Allocating many small blocks of memory can lead to bad cache coherence when the game code accesses them sequentially. This problem usually manifests itself as a general slowdown that can’t be narrowed down in the profiler. With today’s hardware, with its relatively slow memory and deep caches, good memory access patterns are more important than ever.

Keeping Track Of Memory

Another source of problems with dynamic memory allocation is bugs in the logic that keeps track of the allocated memory blocks. If we forget to free some of them, our program will leak memory and can eventually run out of it.

The flip side of memory leaks is invalid memory accesses. If we free a memory block and later access it as if it were still allocated, we’ll either get a memory access exception or manage to corrupt our own game.

Some techniques, such as reference counting and garbage collection, can help keep track of memory allocations, but they introduce their own complexities and overhead.

Introducing Pre-allocation

On the opposite corner of the boxing ring is the purely pre-allocated game. It excels at everything that the dynamically-allocated game is weak at, but it has a few weaknesses of its own. All in all, it’s probably a much safer approach for most games though.

The idea behind a pre-allocation memory strategy is to allocate everything once and never do any dynamic allocations in the middle of the game. Usually you grab as big a block of memory as you can, and then you carve it up to suit your game’s needs.

Some advantages are very clear: no performance penalties, knowing exactly how your memory is used, never running out of memory, and no memory fragmentation to worry about. There are some other more subtle advantages, such as being able to put data in contiguous areas of memory to get best cache coherency, or having blazingly-fast load times by loading a baked image of a level directly into memory.

The main drawback of pre-allocation is that it is more complex to implement than the dynamic allocation approach and it takes some planning ahead.

Know Your Data

For preallocation to work, you need to know ahead of time how much of every type of data you will need in the game. That can be a daunting proposition, especially to those used to a more dynamic approach. However, with a good data baking system (see last month’s Inner Product column), you can get a global view of each level and figure out how big things need to be.

There is one important design philosophy that needs to be adopted for preallocation to work: Everything in the game has to be bounded. That shouldn’t feel too restrictive; after all, the memory in your target platform is bounded, as well as every single resource. That means that everything that can create new objects, including high-level game constructs, should operate on a fixed number of them. This might seem like an implementation detail, but it often bubbles up to what’s exposed to game designers. A common example is an enemy spawner. Instead of designing a spawner with an emission rate, it should have a fixed number of enemies it can spawn (and potentially reuse them after they’re dead).
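As a rough sketch of that spawner idea (the class and field names here are my own illustration, not from the original column), the spawner owns a fixed, preallocated array of enemies and simply fails or recycles a slot when it runs out:

#include <stddef.h>

static const int kMaxEnemies = 32;

struct Enemy
{
    bool  active;
    float x, y;
    int   health;
};

class EnemySpawner
{
public:
    EnemySpawner()
    {
        for (int i = 0; i < kMaxEnemies; ++i)
            m_enemies[i].active = false;
    }

    // Returns NULL when every slot is in use; the game is designed to cope with that.
    Enemy* Spawn(float x, float y)
    {
        for (int i = 0; i < kMaxEnemies; ++i)
        {
            if (!m_enemies[i].active)
            {
                m_enemies[i].active = true;
                m_enemies[i].x = x;
                m_enemies[i].y = y;
                m_enemies[i].health = 100;
                return &m_enemies[i];
            }
        }
        return NULL;    // bounded: no dynamic allocation, ever
    }

    void Kill(Enemy& e) { e.active = false; }   // the slot becomes available for reuse

private:
    Enemy m_enemies[kMaxEnemies];   // all the memory is owned up front
};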

Potentially Wasted Space

If you allocate enough data for the worst case in your game, that can lead to a lot of unused memory most of the time. That’s only an issue if that unused memory is preventing you from adding more content to the game. We might initially balk at the idea of having 2,000 preallocated enemies when we’re only going to see 10 of them at once. But when each of those enemies only takes 256 bytes, the total overhead is 500 KB, which can easily be accommodated on most modern platforms.

Preallocation doesn’t have to be as draconian as it sounds though. You could relax this approach and commit to having each level preallocated and never having dynamic memory allocations while the game is running. That still allows you to dynamically allocate the memory needed for each level and keep wasted space to a minimum. Or you can take it even further and preallocate the contents of memory blocks that are streamed in memory. That way each block can be divided in the best way for that content and wasted space is kept to a minimum.

Reuse, Recycle

If you don’t want to preallocate every single object you’ll ever use, then you can create a smaller set, and reuse them as needed. This can be a bit tricky though. First of all, it needs to be very much specific to the type of object that is reused. So particles are easy to reuse (just drop the oldest one, or the ones not in view), but might be harder with enemy units or active projectiles. It’s going to take some game knowledge of those objects to decide which ones to reuse and how to do it.

It also means that systems need to be prepared to either fail an allocation (if your current set of objects is full and you don’t want to reuse an existing one), or they need to cope with an object disappearing from one frame to another. That’s a relatively easy problem to solve by using handles or other weak references instead of direct pointers.
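A minimal sketch of one common way to do that, pairing an index with a generation counter (the names and layout here are illustrative assumptions, not a specific engine’s API):

#include <stddef.h>

struct Projectile { float x, y, vx, vy; };   // stand-in for any pooled object type

struct Handle
{
    unsigned short index;        // slot in the preallocated array
    unsigned short generation;   // bumped every time the slot is recycled
};

struct Slot
{
    unsigned short generation;
    Projectile     object;
};

// Returns NULL if the slot has been recycled since the handle was handed out,
// so the caller finds out the object is gone instead of touching somebody else's data.
Projectile* Resolve(Slot* slots, Handle h)
{
    Slot& slot = slots[h.index];
    return (slot.generation == h.generation) ? &slot.object : NULL;
}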

Then there’s the issue that reusing an object isn’t as simple as constructing a new one. You really need to make sure that when you reuse it, there’s nothing left from the object it replaced. This is easy when your objects are just plain data in a table, but can be more complicated when they’re complex C++ classes tied together with pointers. In any case, you can’t apply the Resource Acquisition Is Initialization (RAII) pattern, but it doesn’t seem to be a pattern very well suited for games, and it’s a small price to pay for the simplicity that preallocation provides.

Specialized Heaps

Truth be told, a purely pre-allocated approach can be hard to pull off, especially with highly dynamic environments or games with user-created content. Specialized heaps are a combination of dynamic memory allocation and pre-allocation that takes the best of both worlds.

The idea behind specialized heaps is that the heaps themselves are pre-allocated, but they allow some form of specialized dynamic allocation within them. That way you avoid the problems of running out of memory, or memory fragmentation globally, but you still can perform some sort of dynamic allocation when needed.

One type of specialized heap is based on the object type. If you can guarantee that all objects allocated in that heap are going to be the same size, or at least a multiple of a certain size, memory management becomes much easier and less error prone, and a lot of the complexity of a general memory manager goes away.
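Here’s a minimal sketch of such a same-size-block heap, assuming every block is at least the size of a pointer and ignoring alignment and thread safety:

#include <stddef.h>

class FixedBlockHeap
{
public:
    FixedBlockHeap(void* memory, size_t blockSize, size_t blockCount)
        : m_freeList(NULL)
    {
        // Thread every block in the preallocated region onto the free list.
        char* block = static_cast<char*>(memory);
        for (size_t i = 0; i < blockCount; ++i, block += blockSize)
        {
            FreeNode* node = reinterpret_cast<FreeNode*>(block);
            node->next = m_freeList;
            m_freeList = node;
        }
    }

    void* Allocate()
    {
        if (!m_freeList)
            return NULL;              // heap exhausted: fail, don't fall back to malloc
        FreeNode* node = m_freeList;
        m_freeList = node->next;
        return node;
    }

    void Free(void* p)
    {
        FreeNode* node = static_cast<FreeNode*>(p);
        node->next = m_freeList;
        m_freeList = node;
    }

private:
    struct FreeNode { FreeNode* next; };
    FreeNode* m_freeList;
};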

My favorite approach for games is to create specialized heaps based on the lifetime of the objects allocated in them. These heaps use sequential allocators, always allocating memory from the beginning of a memory block. When the lifetime of the objects is up, the heap is reset and allocations can start from the beginning again. The use of a simple sequential allocator bypasses all the insidious problems of general memory management: fragmentation, compaction, leaks, etc. See the code in http://gdmag.com/resources/code.htm for an implementation of a SequentialAllocator class.
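I can’t reproduce the code from that link here, but a minimal sketch of a sequential (bump-pointer) allocator along those lines looks something like this; note how a marker is all the extra machinery the stack heap described below needs:

#include <stddef.h>

class SequentialAllocator
{
public:
    SequentialAllocator(void* memory, size_t sizeBytes)
        : m_start(static_cast<char*>(memory))
        , m_end(m_start + sizeBytes)
        , m_current(m_start)
    {}

    void* Allocate(size_t sizeBytes)
    {
        if (m_current + sizeBytes > m_end)
            return NULL;              // heap exhausted
        void* result = m_current;
        m_current += sizeBytes;       // just bump the pointer: no headers, no free list
        return result;
    }

    // Markers let a stack heap rewind to a previous point.
    char* Mark() const        { return m_current; }
    void  Reset(char* marker) { m_current = marker; }

    // Resetting to the start frees every object at once (frame or level heap).
    void  Reset()             { m_current = m_start; }

private:
    char* m_start;
    char* m_end;
    char* m_current;
};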

The heap types most often used in games are:

  • Level heap. Here you allocate all the assets and data for the level at load time. When the level is unloaded, all objects are destroyed at once. If your game makes heavy use of streaming, this can be a streaming block instead of a full level.
  • Frame heap. Any temporary objects that only need to last a frame or less get allocated here, and destroyed at the end of the frame.
  • Stack heap. This one is a bit different from the others. Like the other heaps, it uses a sequential allocator and objects are allocated from the beginning, but instead of destroying all objects at once, it only destroys objects up to the marker that is popped from the stack.

What About Tools?

You can take everything I’ve written here and (almost) completely ignore it for tools. I fall into the camp of programmers who consider the runtime a totally separate beast from the tools. The runtime can be lean, mean, and minimalistic, but I can relax and use whatever technique makes me more productive when writing tools. That means you can allocate memory any time you want, you can use complex libraries like the STL and Boost, and so on. Most tools are going to run on a beefy PC, and a few extra allocations here and there won’t make any difference.

Be careful with performance-sensitive tools though. Tools that build assets or compute complex lighting calculations might be a bottleneck in the build process. In that case, performance becomes crucial again and you might want to be a bit more careful about memory layout and cache coherency.

On the other hand, if the tool you’re writing is not performance sensitive, you should ask yourself if it really needs to be written in C++. Maybe C# or Python are better languages if all you’re doing is transforming XML files or verifying that a file format is correct. Trading performance for ease of development is almost always a win with development tools.

Next time you reach for a malloc inside your main loop, think about how it can fail. Maybe you can pre-allocate that memory and stop worrying about what’s going to happen the day you prepare the release candidate.

This article was originally printed in the February 2009 issue of Game Developer.

Mock Objects: Friends Or Foes?

In a previous article we covered all the details necessary to start using unit testing on a real-world project. That was enough knowledge to get started and tackle just about any codebase. Eventually you might have found yourself doing a lot of typing, writing redundant tests, or having a frustrating time interfacing with some libraries and still trying to write unit tests. Mock objects are the final piece in your toolkit that allow you to test like a pro in just about any codebase.

Testing Code

The purpose of writing unit tests is to verify the code does what it’s supposed to do. How exactly do we go about checking that? It depends on what the code under test does. There are three main things we can test for when writing unit tests:

  • Return values. This is the easiest thing to test. We call a function and verify that the return value is what we expect. It can be a simple boolean, or maybe it’s a number resulting from a complex calculation. Either way, it’s simple and easy to test. It doesn’t get any better than this.
  • Modified data. Some functions will modify data as a result of being called (for example, filling out a vertex buffer with particle data). Testing this data can be straightforward as long as the outputs are clearly defined. If the function changes data in some global location, then it can be more complicated to test it or even find all the possible places that can be changed. Whenever possible, pass the address of the data to be modified as an input parameter to the functions. That will make them easier to understand and test.
  • Object interaction. This is the hardest effect to test. Sometimes calling a function doesn’t return anything or modify any external data directly, and it instead interacts with other objects. We want to test that the interaction happened in the order we expected and with the parameters we expected.

Testing the first two cases is relatively simple, and there’s nothing you need to do beyond what a basic unit-testing framework provides. Call the function and verify values with a CHECK statement. Done. However, testing that an object “talks” with other objects in the correct way is much trickier. That’s what we’ll concentrate on for the rest of the article.

As a side note, when we talk about object interaction, it simply refers to parts of the code calling functions or sending messages to other parts of the code. It doesn’t necessarily imply real objects. Everything we cover here applies as well to plain functions calling other functions.

Before we go any further, let’s look at a simple example of object interaction. We have a game entity factory and we want to test that the function CreateGameEntity() finds the entity template in the dictionary and calls CreateMesh() once per each mesh.

TEST(CreateGameEntityCallsCreateMeshForEachMesh)
{
    EntityDictionary dict;
    MeshFactory meshFactory;
    GameEntityFactory gameFactory(dict, meshFactory);

    Entity* entity = gameFactory.CreateGameEntity(gameEntityUid);
    // How do we test it called the correct functions?
}

We can write a test like the one above, but after we call the function CreateGameEntity(), how do we test that the right functions were called in response? We can try testing for their results. For example, we could check that the returned entity has the correct number of meshes, but that relies on the mesh factory working correctly, which we’ve probably tested elsewhere, so we’d be testing the same things multiple times. It also means the test needs to physically create some meshes, which can be time consuming or require more resources than we want for a unit test. Remember that these are unit tests, so we really want to minimize the amount of code that is under test at any one time. Here we only want to test that the entity factory does the right thing, not that the dictionary or the mesh factory work.

Introducing Mocks

To test interactions between objects, we need something that sits between those objects and intercepts all the function calls we care about. At the same time, we want to make sure that the code under test doesn’t need to be changed just to be able to write tests, so this new object needs to look just like the objects the code expects to communicate with.

A mock object is an object that presents the same interface as some other object in the system, but whose only goal is to attach to the code under test and record function calls. This mock object can then be inspected by the test code to verify all the communication happened correctly.

TEST(CreateGameEntityCallsCreateMeshForEachMesh)
{
    MockEntityDictionary dict;
    MockMeshFactory meshFactory;
    GameEntityFactory gameFactory(dict, meshFactory);

    dict.meshCount = 3;

    Entity* entity = gameFactory.CreateGameEntity(gameEntityUid);

    CHECK_EQUAL(1, dict.getEntityInfoCallCount);
    CHECK_EQUAL(gameEntityUid, dict.lastEntityUidPassed);
    CHECK_EQUAL(3, meshFactory.createMeshCallCount);
}

This code shows how a mock object helps us test our game entity factory. Notice how there are no real MeshFactory or EntityDictionary objects. Those have been removed from the test completely and replaced with mock versions. Because those mock objects implement the same interface as the objects they’re standing for, the GameEntityFactory doesn’t know that it’s being tested and goes about business as usual.

Here are the mock objects themselves:

struct MockEntityDictionary : public IEntityDictionary
{
    MockEntityDictionary() 
        : meshCount(0)
        , lastEntityUidPassed(0)
        , getEntityInfoCallCount(0)
    {}

    void GetEntityInfo(EntityInfo& info, int uid)
    {
        lastEntityUidPassed = uid;
        info.meshCount = meshCount;
        ++getEntityInfoCallCount;
    }

    int meshCount;
    int lastEntityUidPassed;
    int getEntityInfoCallCount;
};


struct MockMeshFactory : public IMeshFactory
{
    MockMeshFactory() : createMeshCallCount(0)
    {}

    Mesh* CreateMesh()
    {
        ++createMeshCallCount;
        return NULL;
    }

    int createMeshCallCount;
};

Notice that they do no real work; they’re just there for bookkeeping purposes. They count how many times functions are called, record some of the parameters, and return whatever values you fed them ahead of time. The fact that we’re setting the meshCount in the dictionary to 3 is how we can then test that the mesh factory is called the correct number of times.

When developers talk about mock objects, they’ll often differentiate between mocks and fakes. Mocks are objects that stand in for a real object, and they are used to verify the interaction between objects. Fakes also stand in for real objects, but they’re there to remove dependencies or speed up tests. For example, you could have a fake object that stands in for the file system and provides data directly from memory, allowing tests to run very quickly and not depend on a particular file layout. All the techniques presented in this article apply both to mocks and fakes, it’s just how you use them that sets them apart from each other.
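For example, a fake file system might look something like this sketch (the IFileSystem interface and its Read signature are hypothetical, not from any particular engine):

#include <stddef.h>
#include <string.h>

struct IFileSystem
{
    virtual ~IFileSystem() {}
    virtual size_t Read(const char* path, void* buffer, size_t maxBytes) = 0;
};

struct FakeFileSystem : public IFileSystem
{
    const void* data;
    size_t      dataSize;

    FakeFileSystem(const void* d, size_t size) : data(d), dataSize(size) {}

    size_t Read(const char* /*path*/, void* buffer, size_t maxBytes)
    {
        // Ignore the path entirely and hand back the canned in-memory data,
        // so tests run instantly and don't depend on the file layout on disk.
        size_t bytes = (maxBytes < dataSize) ? maxBytes : dataSize;
        memcpy(buffer, data, bytes);
        return bytes;
    }
};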

Mocking Frameworks

The basics of mocking objects are as simple as what we’ve seen. Armed with that knowledge, you can go ahead and test all the object interactions in your code. However, I bet you’re going to get tired quickly of all that typing every time you create a new mock. The bigger and more complex the object, the more tedious the operation becomes. That’s where a mocking framework comes in.

A mocking framework lets you create mock objects in a more automated way, with less typing. Different frameworks use different syntax, but at their core they all have two parts:

  • A semi-automatic way of creating a mock object from an existing class or interface.
  • A way to set up the mock’s expectations. Expectations are the results you expect to happen as a result of the test: functions called on that object, the order of those calls, or the parameters passed to them.

Once the mock object has been created and its expectations set, you perform the rest of the unit test as usual. If the mock object didn’t receive the correct calls the way you specified in the expectations, the unit test is marked as failed. Otherwise the test passes and everything is good.

GoogleMock

GoogleMock is a free C++ mocking framework provided by Google. It takes a very straightforward implementation approach and offers a set of macros to easily create mocks for your classes and set up expectations. Because you need to create mocks by hand, there’s still a fair amount of typing involved for each mock, although they provide a Python script that can generate mocks automatically from C++ classes. It still relies on your classes inheriting from a virtual interface to hook the mock object up to your code.

This code shows the game entity factory test written with GoogleMock. Keep in mind that in addition to the test code, you still need to create the mock object through the macros provided in the framework.

TEST(CreateGameEntityCallsCreateMeshForEachMesh) 
{
    MockEntityDictionary dict;
    MockMeshFactory meshFactory;
    GameEntityFactory gameFactory(dict, meshFactory);

    EXPECT_CALL(dict, GetEntityInfo())
        .Times(1)
        .WillOnce(Return(EntityInfo(3)));

    EXPECT_CALL(meshFactory, CreateMesh())
        .Times(3);

    Entity* entity = gameFactory.CreateGameEntity(gameEntityUid);
}

MockItNow

This open-source C++ mocking framework written by Rory Driscoll takes a totally different approach from GoogleMock. Instead of requiring that all your mockable classes inherit from a virtual interface, it uses compiler support to insert some code before each call. This code can then call the mock and return to the test directly, without ever calling the real object.

From a technical point of view, it’s a very slick method of hooking up the mocks, but the main advantage of this approach is that it doesn’t force a virtual interface on classes that don’t need it. It also minimizes typing compared to GoogleMock. The only downside is that it’s a very platform-specific implementation, and the version available only supports Intel x86 processors, although it could be re-implemented for PowerPC architectures.

Problems With Mocks

There is no doubt that mocks are a very useful tool. They allow us to test object interactions in our unit tests without involving lots of different classes. In particular, mock frameworks make using mocks even simpler, saving typing and reducing the time we have to spend writing tests. What’s not to like about them?

The first problem with mocks is that they can add extra, unnecessary complexity to the code, just for the sake of testing. In particular, I’m referring to the need to have a virtual interface that the objects being mocked inherit from. This is a requirement if you’re writing mocks by hand or using GoogleMock (not so much with MockItNow), and the result is more complicated code: you need to instantiate the concrete type, but then you pass around references to the interface type in your code. It’s just ugly, and I really resent that using mocks is the only reason those interfaces are there. Obviously, if you need the interface anyway and you’re adding a mock to it afterwards, then there’s no extra complexity added.

If the complexity and ugliness argument doesn’t sway you, try this one: Every unnecessary interface is going to result in an extra indirection through a vtable with the corresponding performance hit. Do you really want to fill up your game code with interfaces just for the sake of testing? Probably not.

But in my mind, there’s another, bigger disadvantage to using mock frameworks. One of the main benefits of unit tests is that they encourage a modular design, with small, independent objects that can easily be used individually. In other words, unit tests tend to push the design away from object interactions and more towards returning values directly or modifying external data.

A mocking framework can make creating mocks so easy that nothing discourages programmers from creating a mock object any time they think of one. And when you have a good mocking framework, every object looks like a good mock candidate. At that point, your code design is going to start looking more like a tangled web of objects communicating in complex ways than like simple functions without any dependencies. You might have saved some typing time, but at what price?

When to Use Mock Frameworks

That doesn’t mean that you shouldn’t use a mocking framework though. A good mocking framework can be a lifesaving tool. Just be very, very careful how you use it.

The case where a mocking framework is most justified is when dealing with existing code that was not written with unit testing in mind: code that is tangled together and impossible to use in isolation. Sometimes that’s third-party libraries, and sometimes it’s even (yes, we can admit it) code that we wrote in the past, maybe under a tight deadline, or maybe before we cared much about unit tests. In any case, trying to write unit tests that interface with code not intended to be tested can be extremely challenging. So much so that a lot of people give up on unit tests completely because they don’t see a way of writing them without a lot of extra effort. A mocking framework can really help in that situation to isolate the new code you’re writing from the legacy code that was not intended for testing.

Another situation where a mocking framework is a big win is as training wheels to get started with unit tests in your codebase. There’s no need to wait until you start a brand new project with a brand new codebase (how often does that happen anyway?). Instead, you can start testing today and use a good mocking framework to help isolate your new code from the existing one. Once you get the ball rolling and write new, testable code, you’ll probably find you don’t need it as much.

Apart from that, my recommendation is to keep your favorite mocking framework ready in your toolbox, but only take it out when you absolutely need it. Otherwise, it’s a bit like using a jackhammer to drive a picture nail into the wall. Just because you can do it doesn’t mean it’s a good idea.

Keep in mind that these recommendations are aimed at using mock objects in C and C++. If you’re using other languages, especially more dynamic or modern ones, using mock objects is even simpler and comes with fewer of the drawbacks. In a lot of other languages, such as Lua, C#, or Python, your code doesn’t have to be modified in any way to insert a mock object. In that case you’re not introducing any extra complexity or performance penalties by using mocks, and none of the earlier objections apply. The only drawback left is the tendency to create complex designs that are heavily interconnected, instead of simple, standalone pieces of code. Use caution and your best judgement and you’ll make the best use of mocks.

This article was originally printed in the June 2009 issue of Game Developer.

Nitty Gritty Unit Testing

It’s one thing to see someone drive a car and have a theoretical understanding of what the pedals do and how to change gears. It’s is a completely different thing to be able to drive a car safely on the street. There are some activities that require many small details and some hands-on experience to be able to execute them successfully.

The good news is that unit testing is a lot simpler than driving a standard shift, but there are a lot of small details you need to get right to do it successfully. Even after reading about unit testing and being convinced of its benefits, programmers are often not sure how to get started. This month’s column is not going to try to convince you of the many benefits of unit testing (I hope you are already convinced), but rather, describe some very concrete tips to help you get started right away.

Goals of Unit Testing

There are many different reasons to write unit tests:

  • Correctness testing. Checking that the code behaves as designed.
  • Boundary testing. Checking that the code behaves correctly in odd or boundary situations.
  • Regression testing. Checking that the behavior of the code doesn’t change unintentionally over time.
  • Performance testing. Checking that the program meets certain minimum performance or memory constraints.
  • Platform testing. Checking that the code behaves the same across multiple platforms.
  • Design. Tests provide a way to advance the code design and architecture. This is usually referred to as Test-Driven Development (TDD).
  • Full game or tool testing. Technically this is a functional test, not a unit test anymore because it involves the whole program instead of a small subset of the code, but a lot of the same techniques apply.

Some developers use unit tests only for one of the reasons listed above, while others use many kinds of tests for a variety of different reasons. It’s important to recognize that because there are so many different uses for unit tests, no single solution is going to fit everybody. The ideal setup for some of those situations is going to be slightly different than for others. The basics are the same for all of them though.

When working with unit tests, these are our main goals:

  • Spend as little time as possible writing a new test.
  • Be notified of failing tests, and see at a glance which ones failed and why.
  • Trust our tests. Have them be consistent from run to run and robust in the face of bad code.

Testing Framework

Most of us have created one-off programs in the past to test some particularly complicated code. It’s usually a quick command line program that runs through a bunch of cases and asserts after each one that the results were correct. That’s the most bare-bones way of creating unit tests.

Unfortunately, it’s also a pain and it misses on most of the unit-testing goals described in the previous section: creating a new program just for that is a pain, we have to go out of our way to run the tests, and it usually gets out of date faster than the latest Internet meme. That is in part why a lot of programmers have an initial aversion to writing unit tests.

If you’re considering writing even a small unit test, you should use a unit-testing framework. A unit-testing framework removes all the busy work from writing unit tests and lets you spend your time on the logic of what to test. This doesn’t mean that the framework writes the tests for you. Be very wary of any tool that claims to do that! No, a unit-testing framework is simply a small library that provides all the glue for running unit tests and reporting the results. Sorry, you still need to use your brain and do (some of) the typing.

A quick search will reveal plenty of unit-testing frameworks to choose from for your language of choice, and most of them are free and open source so you can rely on them and modify them to suit your needs. For C/C++ and game development, I strongly recommend starting with UnitTest++. Charles Nicholson and I wrote that framework a few years ago specifically with games and consoles in mind. Many game teams have adopted it for their games and tools, and it has been used on lots of different game platforms including current and last generation consoles, Windows, Linux, and Mac PCs, and even on the iPhone. In most situations, it should be a straight drop-in to your project and you’re up and running.

If you end up using a different testing framework, or even if you roll your own, the techniques described here still apply, even if the syntax is slightly different.

Hello Tests

Writing your first test is easy as pie:

#include <UnitTest++.h>

TEST(MyFirstTest)
{
    int a = 4;
    CHECK(a == 4);
}

To run it you need to add the following line to your executable somewhere. We’ll talk more about the physical organization of tests in a moment.

int failedCount = UnitTest::RunAllTests();

Done! Easy, wasn’t it? When you compile and run the program you should see the following output:

Success: 1 test passed.
Test time: 0.00 seconds.

Let’s add a failing test:

TEST(MyFailingTest)
{
    int a = 5;
    CHECK(a == 4);
}

Now we get:

/fullpath/filename.cpp:17: error: Failure in MyFailingTest: a == 4
FAILURE: 1 out of 2 tests failed (1 failures).
Test time: 0.00 seconds.

That’s great, but if we’re going to diagnose the problem, we probably need to know the value of the variable a, and all the test is telling us is that it’s not 4. So instead, we can change the CHECK statement to the following:

TEST(MyFailingTest)
{
    int a = 5;
    CHECK_EQUAL(4, a);
}

And now the output will be

/fullpath/filename.cpp:17: error: Failure in MyFailingTest: Expected 4 but was 5
FAILURE: 1 out of 2 tests failed (1 failures).
Test time: 0.00 seconds.

Much better. Now we get both the error information and the value of the variable under test. Virtually all unit-testing frameworks include different types of CHECK statements to get more information when testing floats, arrays, or other data types. You can even make your own CHECK statement for your own common data types such as colors or lists.
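As a sketch of that idea for a hypothetical Color type: many frameworks, UnitTest++ included, compare values with operator== and print them with a stream operator when a check fails, so providing both is usually all it takes (the exact requirements vary by framework):

struct Color
{
    unsigned char r, g, b, a;
};

bool operator==(const Color& lhs, const Color& rhs)
{
    return lhs.r == rhs.r && lhs.g == rhs.g && lhs.b == rhs.b && lhs.a == rhs.a;
}

template <typename Stream>
Stream& operator<<(Stream& stream, const Color& c)
{
    // Printed when a CHECK_EQUAL on two Colors fails.
    stream << "(" << (int)c.r << ", " << (int)c.g << ", "
           << (int)c.b << ", " << (int)c.a << ")";
    return stream;
}

TEST(ColorsCompareNicely)
{
    Color expected = { 255, 0, 0, 255 };
    Color actual   = { 255, 0, 0, 255 };
    CHECK_EQUAL(expected, actual);
}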

As a bonus, if you’re using an IDE, double-clicking on the test failure message should bring you automatically to the failing test statement.

When To Run

When to run unit tests depends on what is being tested and how long it takes to do so. In general, the more frequently you run the tests, the better. The sooner you get feedback that something went wrong, the easier it will be to fix, ideally even before the change is checked in and spreads to the rest of the team. On the flip side, building and running a set of unit tests realistically takes a certain amount of time, so it’s important to find the right balance between feedback frequency and time spent waiting for tests.

At the very least, all tests should run once a day, during the nightly build process on your build server (you have a build server, don’t you? If not, run over and read this column in the August 2008 issue). It doesn’t matter how long they take or how many different projects you need to run. Just add them to the build script and hook their output into the build results.

On the other extreme, you can build your tests every time you build the project and execute them as a post-build step. That way, any time you make a change to a project, all tests will execute and you’ll see if anything went wrong. This is a great approach, but I wouldn’t recommend it if tests add more than a couple of seconds to the incremental build time; otherwise they’ll be slowing you down more than they help.

For most developers, some approach in the middle will make most sense. For example, take a small, fast subset of tests that are more likely to break, and run those with every build. Whenever any code is checked in to version control, the build server can run those tests plus a few more, slower ones. And finally, at night, you can bring out the big guns and run those really long, really thorough ones that take a few hours to complete. You can separate those tests into different projects, or, if your framework supports them, into different test suites, which allow you to decide which sets of tests to execute at runtime.
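For example, with UnitTest++ you can group tests into suites with the SUITE macro (the suite and test names below are made up); how you then select which suites run at each stage depends on your framework and build setup:

SUITE(FastMathTests)
{
    TEST(VectorAdd)
    {
        // Cheap test: fine to run on every incremental build.
        CHECK_EQUAL(4, 2 + 2);
    }
}

SUITE(SlowPathfindingTests)
{
    TEST(LongNavMeshBake)
    {
        // Expensive test: reserve it for the check-in or nightly runs.
        CHECK(true);
    }
}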

Reporting Results

If a unit test fails and nobody notices, is it really an error? Just running the tests isn’t good enough. We have to make sure that someone sees the failure and fixes the problem.

Most unit-testing frameworks will let you customize how you want the failure errors to be reported. By default they will probably be sent to stdout, but you can easily customize the framework to send them to debug log streams, save to a file, or upload them to a server.

Even more important than the actual error messages is detecting whether there were any failures. After running all the tests, there is usually some way to detect how many tests failed. The program that was running the tests can detect any failures, print an error message, and exit with an error code. That error code will propagate to the build server and trigger a build failure. Hopefully by now alarm sounds are going off across the office and someone is on his way to fix it.

Project Organization

When people start down the unit test path, they often struggle to figure out how to physically lay out the unit tests. In the end, it really doesn’t matter too much as long as it makes sense to you, the final build doesn’t contain any tests, and they’re still easy to build and run.

My personal preference is to keep unit tests separate from the rest of the code. Usually I end up creating one file of tests for every cpp file. So FirstPersonCameraController.h and .cpp have a corresponding TestFirstPersonCameraController.cpp. Since I use this convention regularly throughout all of my code, I have a custom IDE macro to toggle between a file and its corresponding test file. I also put all the tests in a separate subdirectory to keep them as physically separate as possible.

I prefer to break up my code into several static libraries, one for each major subsystem: graphics, networking, physics, animation, etc. Each of those libraries has a set of unit tests, but instead of compiling them into the library, I create a separate project that builds a simple executable program. That project contains all the unit tests and links against the library itself, and in its main entry point it just calls the function to run all unit tests and returns the number of failures. This keeps the tests separate from the library, but still very easy to build.
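The whole test project for a library can then be as small as this (a sketch, assuming UnitTest++):

#include <UnitTest++.h>

int main()
{
    // The exit code is the number of failed tests, so a non-zero value
    // propagates to the build server and flags the build as broken.
    return UnitTest::RunAllTests();
}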

If all your code is organized into libraries, and your game is just a collection of libraries linked together, that’s all you need. Most games and tools, however, have a fair amount of code that you might want to test in the project itself. Since the game is an executable, you can’t easily link against it from a different project like we did before. In this case, I build the unit tests into the game itself, and I optionally call them whenever a particular command-line parameter like -runtests is present. Just make sure to #ifdef out all the tests in the final build.
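Here’s a sketch of what that can look like; ENABLE_UNIT_TESTS and RunGame are hypothetical placeholders for your own build define and game entry point:

#include <string.h>
#ifdef ENABLE_UNIT_TESTS
#include <UnitTest++.h>
#endif

int RunGame();   // the game's normal startup path, defined elsewhere

int main(int argc, char** argv)
{
#ifdef ENABLE_UNIT_TESTS
    for (int i = 1; i < argc; ++i)
    {
        if (strcmp(argv[i], "-runtests") == 0)
            return UnitTest::RunAllTests();   // run the tests and exit without starting the game
    }
#endif
    return RunGame();
}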

Multiplatform Testing

Running the tests on the PC where you build the code is very straightforward. But unless you’re only creating games and tools for that platform, you will definitely want to run your tests on different platforms as well. Unit tests are an invaluable tool for catching slight platform inconsistencies caused by different compilers, architecture idiosyncrasies, or varying floating point rules.

Unfortunately, running unit tests on a different platform from your build machine is usually a bit more involved and not nearly as fast as doing it locally. You need to start by compiling the tests for the target platform. This is usually not a problem since you’re already building all your code for that platform, and hopefully your unit testing framework already supports it. Then you need to upload your executable with the tests and any data required to the target platform and run it there. Finally, you need to get the return code back to detect if there were any failures. This is surprisingly the trickiest part of the process with a lot of console development kits. If getting the exit code is not a possibility, you’ll need to get creative by parsing the output channel, or even waiting for a notification on a particular network port.

Some target platforms are more limited than others in both resources and C++ support. One of the features that makes UnitTest++ a good choice for games is that it requires minimal C++ features (no STL) and it can be trimmed down even further (no exceptions or streams).

For example, running unit tests on the PS3 SPUs was extremely useful, but it required stripping the framework down to the minimum amount of features. It was also tricky being able to fit the library code plus all the tests in the small amount of memory available. To get around that, we ended up changing the build rules for the SPUs so each test file created its own SPU executable (or module). We then wrote a simple main SPU program that would load each module separately, run its tests, keep track of all the stats, and finally report them.

Running a set of unit tests on the local machine can be an almost instant process, but running them on a remote machine is usually much slower, and can take up to 10 or 20 seconds just with the overhead of copying them and launching the program remotely. For this reason, you’ll want to run tests on other platforms less frequently.

No Leaking Allowed

Finally, if you’re going to have this all this unit testing code running on a regular basis, you might as well get as much information out of it as possible. I have found it invaluable to keep track of memory leaks around the unit test code.

You’ll have to hook into your own memory manager, or use the platform-specific memory tracking functions. The basic idea is to get a memory status before running the tests, and another one after all the tests execute. If there are any extra memory allocations, that’s probably a leak. In that case, you can report it as a failed build by returning the correct error code.

Watch out for static variables or singletons that allocate memory the first time they’re used. They might be reported as memory leaks even though it wasn’t what you were hoping to catch. In that case, you can explicitly initialize and destroy all singletons, or, even better, not use them at all, and keep your memory leak report clean.
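Putting it together, a sketch of a leak-checking test runner might look like this; GetBytesAllocated is a hypothetical hook into your own memory manager or the platform’s tracking functions:

#include <stdio.h>
#include <UnitTest++.h>

size_t GetBytesAllocated();   // hypothetical hook into your memory manager

int main()
{
    const size_t bytesBefore = GetBytesAllocated();

    int failures = UnitTest::RunAllTests();

    const size_t bytesAfter = GetBytesAllocated();
    if (bytesAfter != bytesBefore)
    {
        printf("Possible leak: %u bytes still allocated after the tests ran.\n",
               (unsigned)(bytesAfter - bytesBefore));
        ++failures;   // report the leak as a failed build through the exit code
    }
    return failures;
}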

You’re now armed with all you need to know to set up unit tests into your project and build pipeline. Grab a testing framework and get started today.

This article was originally printed in the May 2009 issue of Game Developer.