
The Curious Case of Casey and The Clearly Deterministic Contraptions

As we gear up for Casey’s Contraptions launch on May 19th, this is the first post in a series dealing with different aspects of the game. I’m planning on covering technical aspects like today’s, but also design and other parts of development.


For those of you who have been living under a rock and haven’t seen the Casey’s Contraptions video, go watch it now. I’ll wait. Or even better, here it is. You don’t even have to leave this page:

The core interaction loop of Casey’s Contraptions gameplay is placing some items, pressing the Play button, seeing the simulation, and repeating based on what you learned from watching it. We designed this loop to be very tight and without any penalty: Running/stopping the simulation is instant, there’s no limit on the number of times you run it, and you can even stop the simulation by tapping anywhere on the screen. Even if you painted yourself into a corner by creating a solution that needs a very specific placement of items on the screen, making small changes is very quick and painless.

There’s a very important, underlying assumption in that loop: Running the same simulation multiple times will result in the same behavior. Imagine how frustrating it would be to create a complex chain reaction, only to find out it works every other time you run it. That would truly deserve a ONE STAR WANT MY MONEY BACK!! review.

Determinism

That’s referred to as the game being deterministic: Given the same inputs, it produces the same outputs. It’s not the first time I’ve had to deal with determinism. Actually, it seems I’ve had to deal with it in one form or another in all my games over the last 10 years (except for Flower Garden). Most recently, it was a key component of gameplay for the unreleased game we were working on at Power of Two Games.

To make a game deterministic, you need to remove the obvious sources of “accidentally different” inputs. That usually means seeding (or removing) random number generators, and making sure the initial state is truly the same. It’s too easy to have some leftover state from the previous simulation that throws things off a little bit.

Another potential source of problems is uninitialized variables. If at some point the simulation depends on an uninitialized variable, different runs will produce different results based on whatever value happened to be in memory at that time.
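As a rough sketch of what that cleanup can look like (the struct and names here are made up for illustration, not the actual Casey’s Contraptions code), each simulation run starts from a freshly zeroed state and a random number generator seeded with a fixed value:

	#include <cstring>
	#include <random>

	struct SimulationState
	{
		float positionsX[64];   // hypothetical per-item data
		float positionsY[64];
		float angles[64];
		int   itemCount;
	};

	void BeginSimulationRun(SimulationState& state, std::mt19937& rng)
	{
		std::memset(&state, 0, sizeof(state));  // no uninitialized or leftover data
		rng.seed(12345u);                       // fixed seed: same "random" sequence every run
	}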

Once you fix all of those little things, that should be it, right? Same input should create the same output. Not really. We’re missing the most crucial input: The timer.

Casey’s Contraptions runs at 60 fps on an iPad 1, but the timestep isn’t set to a fixed 16.667 ms. Instead, I use a high-resolution timer to measure how much time has really elapsed since the last frame, and I advance the simulation by that much.

The problem is that the timer doesn’t always return the exact same amount. It’s not super precise, and it can be slightly affected by other things going on in the device. It won’t be off by more than 1/10th of a ms, but that’s enough to cause different results and make the simulation diverge.
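For reference, here’s a sketch of measuring the real frame time on iOS with mach_absolute_time(), one common high-resolution timer (illustrative only, not necessarily the exact timer code in the game):

	#include <mach/mach_time.h>
	#include <stdint.h>

	// Returns the elapsed time in seconds since the last call.
	// lastTime should be initialized with mach_absolute_time() before the first call.
	double SecondsSinceLastFrame(uint64_t& lastTime)
	{
		static mach_timebase_info_data_t timebase = {0, 0};
		if (timebase.denom == 0)
			mach_timebase_info(&timebase);

		const uint64_t now = mach_absolute_time();
		const uint64_t elapsedTicks = now - lastTime;
		lastTime = now;

		const double nanoseconds = (double)elapsedTicks * timebase.numer / timebase.denom;
		return nanoseconds * 1e-9;   // this dt varies slightly from frame to frame
	}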

Fixed timestep

One way to get around this problem is to hardcode the timestep to always be 16.667 ms, regardless of what it really was. That would fix the determinism problem, but it would add its own drawbacks. If the simulation can’t keep up with 60 fps, the game will appear to slow down, which is not the effect we want. Casey’s Contraptions includes a level editor to make your own free-form contraptions, and even though we have a cap on the maximum number of items you can add, it’s probably possible to add enough of them to start slowing down an iPad 1.

[Image: CaseyDollTruck]

Recording timesteps

A tempting solution (and one I’m ashamed I even briefly tried) is to record the exact timesteps during the first run of the simulation. Then, in subsequent runs when nothing has changed, we use the initially recorded stream of timesteps instead of the real ones from the clock.

Apart from being a clumsy-looking solution (I hate having two modes for doing the same thing), it only works when replaying the exact same input. The approach falls apart if you make a change to an item that doesn’t affect the simulation until several seconds have passed. At that point, you’re running a new simulation with new timesteps, and everything can diverge before your change even comes into play. Bad idea. Bad.

Fixed simulation timesteps

This is the correct solution and it’s what we’re doing in Casey’s Contraptions.

You probably already know that physics simulations are a lot more stable if you take small steps, always of the same size. So you really never want to advance your physics simulation by your frame time. Instead, you want a small, fixed simulation step (we’re running at 120Hz, but 200 or even 300Hz are not unheard of). Then, you run the physics simulation as many times as you can fit into your larger frame timestep. It’s also important to keep track of the leftover time you haven’t simulated, so you can apply it to the next frame.

The simulation loop code looks something like this:

	// dt is the measured frame time; accumulator persists across frames and
	// holds the leftover time that hasn't been simulated yet.
	const float timestep = 1.0f/120.0f;
	accumulator += dt;
	while (accumulator >= timestep)
	{
		// Do physics simulation by timestep
		accumulator -= timestep;
	}

Once we have this loop in place, our real frame time doesn’t matter anymore. The physics simulation will either run for a full step (or multiple full steps), or it won’t run at all. There are no half-way states. So it doesn’t matter if in the first run of the simulation the loop gets executed 3 times in the first frame and 2 times in the second, and in the next run it’s 2 times in the first and 3 in the second. The state of the world will be the same.

Glenn Fiedler writes about this kind of simulation loop in much more detail in his excellent article “Fix Your Timestep!”.

Jitterbug

There’s still a slight problem with the above loop. Nothing horrible; the simulation is deterministic at this point (assuming everything else is taken care of correctly). But you might notice some annoying jittering as things move around.

[Image: CaseysDollKick]

That’s because of how we’re sampling the simulation. We might render one frame after 3 steps of the physics simulation, but another one after only 2. So movement isn’t going to be very smooth.

To get around it, we need to interpolate the state of the world to match the time at which the frame is rendered. To do that, we simulate one timestep into the future, and then interpolate between the previous state and the future one by the fraction of a timestep left in the accumulator.

Something like this:

	const float timestep = 1.0f/120.0f;
	accumulator += dt;
	while (accumulator >= timestep)
	{
		prevState = gameState;   // remember the state before this step
		// Do physics simulation by timestep
		accumulator -= timestep;
	}

	// Fraction of the next timestep we haven't simulated yet
	const float t = accumulator/timestep;
	GamePhysicsUtils::LerpState(interpolatedState, prevState, gameState, t);


The interpolation just covers the positions and rotations of every item in the world. It’s only used for rendering, so there’s no need to worry about velocities or forces (unless those affect rendering in a very obvious way too).

This is when it comes in really handy to have the game state as a contiguous, relocatable block of memory. You can easily copy states around without having to worry about allocating any memory or fixing up any pointers. Yes, I really like my simple, pre-allocated data.
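To make that concrete, here’s a minimal sketch of what that kind of state and interpolation could look like. The layout, kMaxItems, and the LerpState body are assumptions for illustration, not the actual Casey’s Contraptions code:

	struct GameState
	{
		static const int kMaxItems = 60;   // hypothetical cap
		float posX[kMaxItems];
		float posY[kMaxItems];
		float angle[kMaxItems];
		int   itemCount;
	};

	namespace GamePhysicsUtils
	{
		inline float Lerp(float a, float b, float t) { return a + (b - a)*t; }

		void LerpState(GameState& out, const GameState& prev, const GameState& next, float t)
		{
			out.itemCount = next.itemCount;
			for (int i = 0; i < next.itemCount; ++i)
			{
				out.posX[i]  = Lerp(prev.posX[i],  next.posX[i],  t);
				out.posY[i]  = Lerp(prev.posY[i],  next.posY[i],  t);
				// Good enough for the tiny rotations of a 1/120s step; a more
				// robust version would wrap angles along the shortest path.
				out.angle[i] = Lerp(prev.angle[i], next.angle[i], t);
			}
		}
	}

Because a state like this is plain old data with fixed-size arrays, prevState = gameState is just a flat memory copy: no allocations, no pointer fixups.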

One added benefit of this approach: If you do it right, you can have extreme slow motion in your game and everything will still look right. Even if you only run one physics simulation step every 10 rendering frames, the interpolation will take care of making sure everything is super smooth.
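One way to get that slow motion with this loop (just a sketch of the idea, not necessarily how the game does it) is to scale the time you feed the accumulator rather than the timestep, so every simulation step stays identical:

	accumulator += dt * slowMotionScale;   // e.g. 0.1f for 10x slow motion, 1.0f for normal speed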

Odds and ends

A couple of gotchas to watch out for (that’s code speak for “I didn’t think of this and it bit me hard”).

If you have any game logic that affects the physical state of your game objects, it needs to be executed in the inner simulation loop. Anything that only deals with visual or audio stuff can go in the regular update loop and be executed once per frame.

A good example of this was the balloon code. Every frame, I iterate through all existing balloons and apply some upward force based on buoyancy and other factors. Initially that was done once per frame, after the physics simulation, but that’s obviously wrong because it leaves balloons (and everything else) in a different state depending on your frame delta time. I moved it into the inner simulation loop and everything became deterministic again.
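In sketch form (with made-up function names, not the actual balloon code), the physics-affecting logic lives inside the fixed-step loop, while purely audio/visual work stays per frame:

	while (accumulator >= timestep)
	{
		for (int i = 0; i < gameState.itemCount; ++i)
		{
			if (IsBalloon(gameState, i))
				ApplyBuoyancyForce(gameState, i, timestep);   // affects physics: must run every step
		}
		StepPhysics(gameState, timestep);
		accumulator -= timestep;
	}
	UpdateParticlesAndSounds(dt);   // doesn't affect physics: once per frame is fine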

[Image: CaseyBalloons]

This second gotcha sounds silly, but it was a big deal: Make sure you deal with the situation where the previous game state and the next game state have different numbers of items. For example, if a balloon is destroyed between the two states, the item count will be different. In my case I don’t free any memory, but I was initially assuming that I could interpolate based on the index of the items in both states. But since I had removed the balloon and shifted all the items behind it forward, the interpolation was totally wrong.

I solved that problem by marking items as invalid, and removing them only after the inner simulation loop was complete. That allowed me to still interpolate in a very straightforward way and not have to worry about missing items.
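Something along these lines (the isValid flag and RemoveInvalidItems are hypothetical names for illustration, not the actual code):

	// During a simulation step, a destroyed balloon is only flagged, so item
	// indices still line up between the previous and current states.
	gameState.isValid[balloonIndex] = false;

	// Interpolation then simply skips dead items.
	for (int i = 0; i < gameState.itemCount; ++i)
	{
		if (!gameState.isValid[i])
			continue;
		// interpolate item i between prevState and gameState as usual
	}

	// The actual removal (compacting the arrays) happens once, after the
	// simulation loop and the interpolation for this frame are done.
	RemoveInvalidItems(gameState);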

Taking it further

We’re relying on this determinism not only while you play a level of Casey’s Contraptions, but also to let you replay solutions from your friends. We simply load their initial layout and run the simulation locally.

[Pause]

Ah, yes, astute reader, you’re right to be puzzled about that. How do we make the simulation deterministic across platforms and versions? Good question. Stay tuned.

11 Comments

  1. In my game’s particular situation I am able to run the physics simulation (bullet) at a fixed 60 Hz almost all of the time but conveniently increase the simulation resolution when needed to prevent tunneling while still maintaining determinism. Before the physics simulation in each frame, I see if there are any objects moving fast enough to potentially tunnel (determined by how small the object is and how fast it is going). For each object I do a convex sweep query to see if that fast moving object is going to occupy any space during that frame already occupied with another object. I take the fastest/smallest object that will likely collide with something else and determine how much I need to increase the simulation resolution during that frame to ensure that the fast moving object doesn’t tunnel through the object(s) it is very likely to collide with. Usually when an object collides with another it significantly slows down and so the next frame’s physics simulation can usually be run at a much lower resolution or even back at the standard 60 Hz if it has slowed down enough. This has resulted in a smooth framerate most of the time even when there are fast moving objects flying around with only a few frames of brief slowdown when fast moving objects collide. It isn’t a perfect solution but it has worked really well for 99% of the scenarios in my game that I care about and maintains determinism. I have been writing a more detailed blog post on the technique which I will hopefully have done soon.

  2. A wonderfully informative post. Thank you for taking the time to write it 😀

  3. Just out of curiosity, can you give a little insight into what data your GameState struct holds? Thanks.

  4. I’m not usually one for commenting on articles, but this was great. I’ve just started designing an iOS game and I hope to be able to use determinism to keep multiplayer games in lockstep. I look forward to your next article and hope it covers the tricky problems of floating point determinism across iOS devices.

    Keep up the interesting articles!

    • As the article explains, it is a matter of interpolating between two physically simulated moments. You continue to simulate at 60Hz, but you sample the simulation at 200Hz with interpolation when it comes to rendering.

  5. Hey Noel, we are still staying tuned for that “cross platform determinism” at the end of your article! I really wonder how you do this. How can you ever control how the CPU, compiler, etc. will perform the operations on different hardware and/or compilers…?!

    • It depends on your platform / programming language. For example, in Java floating point math is guaranteed to be IEEE 754. And unless you use some randomness, there is just no source of non-determinism I can think of. In practice, nearly all C++ compilers will also use IEEE 754. You could check for that, to be sure. Or fall back to some software implementation, if necessary. With a performance penalty, of course.

      • The problem is that compilers will rearrange floating point operations as the code changes, or even as the compiler changes. And floating point operations will produce different results depending on the order of the instructions (for example, A*(B+C) vs A*B+A*C).
