
2004 GameTech Report: Game Tech Leadership Summit

The GameTech Leadership Summit was the second part of this year's Game Tech Seminars. Despite the somewhat confusing name, it was really an analysis of the tools and technology used in games today.

The first day is where the real meat of this second part was: a postmortem/analysis of the tech behind some of the most successful games today (Halo 2, Half Life 2, The Sims 2, and Stranger), each given by the tech lead (or someone close to that position) on the team. Each of the talks was packed with information and some of them are freely available online, so I'm not even going to try to summarize them here. Instead, this is going to be more of a highlight of the things that caught my eye or that I thought were particularly important.


Jay Stelly on Half Life 2


Nobody is going to argue that Half Life 2's development didn't have its share of problems: repeated delays, “Steam” issues, etc. Still, there is no doubt that Half Life 2 is a great game and that there's a lot to be learned from Valve's approach.

One driving goal of the Half Life 2 tech was to design around the workflow. The technology should adapt to the way content creators want to work, and it shouldn’t get in the way. This also means that the asset pipeline should let people iterate as much as possible with no dependencies on other people or parts of the game. It seems like an obvious statement, but I think a lot of companies are missing that. This is going to become extremely important for next generation consoles, and I predict that companies that don’t completely adopt this paradigm will fall by the wayside.

One of the ways they went about designing around the workflow was to create a layered system with different levels of abstractions to represent their concepts. I really think that’s the only way to go when building complex systems (otherwise we have the tall building with shaky foundations syndrome), but it just struck me how they were always reaching for a higher level of abstraction than I would have normally considered. Having that extra level of abstraction allows for better separation of content creation tasks.

For example, they could have left their sound system at the level of abstraction of sound emitters and listeners. Most games stop there. They went one level of abstraction higher and modeled sound environments, which include a collection of emitters and sound-processing parameters. Each sound environment can then be applied to particular locations in a level. This approach allows the sound designer and level designer to work in parallel, bringing their work together at any point they want while they're still iterating.
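To make the idea concrete, here's a minimal sketch of what such a layered sound setup could look like; all the names and parameters are hypothetical, not Valve's actual code.

```cpp
// Hypothetical sketch of the "sound environment" abstraction: a layer above
// raw emitters that bundles them with processing parameters, so sound design
// and level layout can be authored separately.
#include <string>
#include <vector>

struct SoundEmitter {
    std::string sample;        // which sound to play
    float       position[3];
    float       volume;
};

struct SoundEnvironment {
    std::string               name;             // e.g. "sewer_ambience" (made up)
    std::vector<SoundEmitter> emitters;         // the collection of emitters
    float                     reverbWetMix;     // sound-processing parameters
    float                     lowPassCutoffHz;  // tuned by the sound designer
};

// The level designer only references environments by name and places them in
// the world, so both people can iterate in parallel and merge whenever they want.
struct EnvironmentPlacement {
    std::string environmentName;
    float       boundsMin[3];
    float       boundsMax[3];
};
```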

But workflow doesn't apply just to designers and artists. Programmers also do some work, you know? So, not surprisingly, another driving point of the Half Life 2 technology was to improve the programming environment. They had 14 programmers on the team, which is starting to be a bit on the large side, so that was yet another motivation to go with a very layered system: different programmers could concentrate on specific areas and not interfere with each other's work. They also made the distinction between system code (engine) and leaf code (game-specific code), which helped organize things. Interestingly, they went with a DLL-based approach to reduce link times, which often stop fast programmer iteration dead in its tracks.
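As a rough illustration of the system/leaf split (with invented names, not Valve's actual interfaces): the engine side exposes a small, stable abstract interface, while the game-specific code lives in its own module behind the DLL boundary, so iterating on leaf code only relinks that module.

```cpp
#include <cstdio>

// Engine-side "system code": a stable interface the game module implements.
class IEntity {
public:
    virtual ~IEntity() = default;
    virtual void Think(float dt) = 0;   // called by the engine every tick
};

// Game-side "leaf code": in a DLL-based setup this would be compiled into the
// game DLL, so tweaking it doesn't touch the engine at all.
class ExplosiveBarrel final : public IEntity {
public:
    void Think(float dt) override {
        fuse_ -= dt;
        if (fuse_ <= 0.0f) std::puts("boom");
    }
private:
    float fuse_ = 3.0f;
};

// Factory with C linkage that the engine would resolve from the game module.
extern "C" IEntity* CreateEntity() { return new ExplosiveBarrel(); }
```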

One drawback of the extremely layered approach was that Half Life 2 was completely memory bound. Heavily abstracted and layered object-oriented systems usually have a memory access pattern that looks more random than lottery-winning numbers. That means kissing a lot of cache hits goodbye, and the program is going to be constantly stalling, waiting for data to come from main memory. This is a very serious problem, and since memory is going to keep getting slower relative to CPUs, it's something we're going to have to deal with.
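Here's a toy example (mine, not Valve's) of the difference in access patterns: the same update running over heap-scattered objects versus a contiguous array.

```cpp
// Toy illustration of why heavily layered, pointer-rich designs can become
// memory bound: the data access pattern, not the arithmetic, dominates.
#include <memory>
#include <vector>

struct Particle { float x, y, z, vx, vy, vz; };

// OO/abstracted style: each object lives wherever the heap put it, so the loop
// chases pointers all over memory and misses cache constantly.
void UpdateScattered(std::vector<std::unique_ptr<Particle>>& particles, float dt) {
    for (auto& p : particles) {
        p->x += p->vx * dt;
        p->y += p->vy * dt;
        p->z += p->vz * dt;
    }
}

// Contiguous storage: the same work streams linearly through memory, so the
// prefetcher can keep the CPU fed.
void UpdateContiguous(std::vector<Particle>& particles, float dt) {
    for (auto& p : particles) {
        p.x += p.vx * dt;
        p.y += p.vy * dt;
        p.z += p.vz * dt;
    }
}
```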

Program performance was never a major driving force behind programming languages, especially not nowadays, but it would be interesting to come up with a language that allowed for good modularization and design while at the same time being extremely cache-friendly. On the other hand, scratch that thought. We have much bigger problems to solve with programming languages before we worry exclusively about cache performance.

Interestingly, general scripting didn't work very well for Valve. Instead, in the future they plan to go down the path of keeping behavior in C++ code, with data that can be easily changed from text files. As long as adding new C++ functionality can be done very quickly, and there's good communication between programmers and designers, there's no reason why that can't work.
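A minimal sketch of that style, assuming a made-up key/value text format: the behavior stays in compiled C++, and only the tuning numbers come from a file that a designer can edit and reload without a rebuild.

```cpp
// Data-driven C++ behavior: logic in code, numbers in a plain text file.
#include <fstream>
#include <map>
#include <sstream>
#include <string>

std::map<std::string, float> LoadTuning(const std::string& path) {
    std::map<std::string, float> values;
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream row(line);
        std::string key;
        float value;
        if (row >> key >> value)       // e.g. "zombie_walk_speed 1.5"
            values[key] = value;
    }
    return values;
}

// The behavior itself is ordinary C++; only the tuning comes from data.
float ZombieSpeed(const std::map<std::string, float>& tuning) {
    auto it = tuning.find("zombie_walk_speed");
    return it != tuning.end() ? it->second : 1.0f;  // sensible default if missing
}
```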

Chris Butcher on Halo 2


Chris started right away with a very insightful observation: The Halo games are really a world simulation, where the player is just another entity moving around the world. On the other hand, the Half Life games are more of a player-based simulation, where everything happens for the benefit of the player.

The whole Halo engine is based around the concept of “tags,” which is something that Bungie has used in most of their past games. Tags are just a hierarchy of variable-length block arrays, and each block is made out of basic data types. Tags describe the properties of a game object, as well as how meshes are represented. The part that surprised me is that tags are actually defined in the C code itself as opposed to being an external data representation that then gets processed and generates some C code (which would have been a much cleaner way to separate tools and engine).
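To picture the general shape (this is an invented approximation, not Bungie's actual code or macros): a tag is a tree of fixed-size elements and variable-length child block arrays built from basic types, with the layout also described as data so generic tools can walk it.

```cpp
#include <cstddef>
#include <cstdint>

struct WeaponTrigger {                 // element type of a child block array
    float   roundsPerSecond;
    int32_t magazineSize;
};

struct WeaponTag {                     // a top-level tag built from basic types
    float          maxRange;
    float          damagePerRound;
    int32_t        triggerCount;       // variable-length block of child elements
    WeaponTrigger* triggers;
};

// Because the layout is spelled out as data too, one generic tag editor can
// display and modify any tag without knowing what the fields mean.
enum FieldType { kFloat, kInt32, kBlock };
struct FieldDesc { const char* name; FieldType type; size_t offset; };

static const FieldDesc kWeaponTagFields[] = {
    { "max range",        kFloat, offsetof(WeaponTag, maxRange)       },
    { "damage per round", kFloat, offsetof(WeaponTag, damagePerRound) },
    { "triggers",         kBlock, offsetof(WeaponTag, triggers)       },
};
```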

The tag approach has lots of advantages. It provides full introspection, and it can be used to easily load and save data. Also, because everything in the engine uses tags, a single generic tag editor can be used to change any of the data. They even go as far as seeing the level editor as a fancy graphical editor for the tag system.

Because the tag system is hierarchical in nature, Bungie creates their entities by using composition instead of inheritance. In my opinion, this is totally the right way to go. You want an enemy to have a weapon, not to inherit from a HasWeapons class.
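A tiny example of that style (names made up): the grunt owns a weapon component rather than inheriting from a HasWeapons base.

```cpp
// Composition over inheritance: entities are built by attaching components.
#include <memory>

struct Weapon { float damage = 10.0f; };
struct Health { float current = 100.0f; };

struct Entity {
    std::unique_ptr<Weapon> weapon;   // optional: not every entity is armed
    std::unique_ptr<Health> health;
};

Entity MakeGrunt() {
    Entity grunt;
    grunt.weapon = std::make_unique<Weapon>();
    grunt.health = std::make_unique<Health>();
    return grunt;
}
```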

One thing I still can’t really understand is why Bungie is using plain C for most of their game. Don’t get me wrong. They can clearly produce great games that way, but it just seems that with C++ they could do it more easily or faster. I understand that right now they can pretty much save raw memory and load it straight back into the game, which would be very difficult to do with C++. Still, if that’s the only benefit, I’m sure it would be relatively easy to set up a system to do transparent serialization in C++ as well and reap all the benefits of the language.
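For context, here's roughly why the plain-C approach makes that so cheap (a hedged toy example, not Bungie's code): if the state is plain-old-data with no pointers, vtables, or heap-owning members, saving is a single block write and loading is a single block read.

```cpp
#include <cstdio>

struct SavedPlayerState {      // POD: no constructors, vtables, or owned heap data
    float position[3];
    int   ammo;
    int   healthPoints;
};

bool SaveState(const char* path, const SavedPlayerState& state) {
    FILE* f = std::fopen(path, "wb");
    if (!f) return false;
    bool ok = std::fwrite(&state, sizeof(state), 1, f) == 1;
    std::fclose(f);
    return ok;
}

bool LoadState(const char* path, SavedPlayerState* state) {
    FILE* f = std::fopen(path, "rb");
    if (!f) return false;
    bool ok = std::fread(state, sizeof(*state), 1, f) == 1;
    std::fclose(f);
    return ok;
}
```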

As was the case for Half Life 2, they also spent a considerable amount of time trying to improve the artist workflow. They use resource hotloading, automatically detect changed files, etc. So for a lot of resources, their artists can just re-export an asset and see it live in the game within a few seconds.
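A bare-bones version of that kind of hotloading (invented names; a real engine would likely use OS file notifications or a background thread): poll the file's timestamp once per frame and reload when it changes.

```cpp
#include <filesystem>
#include <iostream>

namespace fs = std::filesystem;

class HotloadedAsset {
public:
    explicit HotloadedAsset(fs::path path) : path_(std::move(path)) {}

    // Call once per frame (or on a timer): reload if the file changed on disk.
    void Poll() {
        std::error_code ec;
        auto stamp = fs::last_write_time(path_, ec);
        if (!ec && stamp != lastStamp_) {
            lastStamp_ = stamp;
            Reload();
        }
    }

private:
    void Reload() {
        std::cout << "reloading " << path_ << "\n";   // re-parse the asset here
    }

    fs::path           path_;
    fs::file_time_type lastStamp_{};
};
```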

For memory allocation, they tried to minimize dynamic allocation at runtime (although they admit a few allocations are still out there), and all allocations are bounded, which really is the only way to deal with a fixed-memory system like a game console. Apparently Havok is one of the exceptions to the rule, since it doesn't make any hard guarantees on memory usage, so they had to treat it specially. Also, their world simulation is perfectly deterministic, which is great for reproducing bugs. Their world view (rendering, sound, etc.), however, is not deterministic.
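A minimal sketch of the bounded-allocation idea (not Bungie's allocator): carve fixed-size pools out of one block up front, assert instead of growing at runtime, and reset wholesale rather than freeing individual objects.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

class FixedPool {
public:
    FixedPool(void* memory, size_t elementSize, size_t elementCount)
        : base_(static_cast<uint8_t*>(memory)),
          elementSize_(elementSize), capacity_(elementCount) {}

    void* Allocate() {
        // Exceeding the budget is a bug in the budget, not a reason to grow.
        assert(used_ < capacity_ && "pool exhausted: budget was sized wrong");
        return base_ + (used_++) * elementSize_;
    }

    void Reset() { used_ = 0; }    // e.g. between levels; no per-object frees

private:
    uint8_t* base_;
    size_t   elementSize_;
    size_t   capacity_;
    size_t   used_ = 0;
};
```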

Charles Bloom on Stranger’s Wrath (presentation notes)


From the very beginning of the talk, Charles Bloom adopted the underdog role and played it to perfection. He made it very clear that Stranger's Wrath was a very different project from Half Life 2 or Halo 2. Whereas we had just been told that the teams at Valve and Bungie were full of talented and experienced programmers, Charles claimed that Oddworld had a much wider mix of talent, with many more weak links. In his words, not everybody can have the super programmers from Half Life 2 or Halo 2. That's a funny way to start, but it really drove home the point that Stranger was perhaps the project most people will be able to identify with.

So in that light, the Oddworld method relies much more on putting responsibility on the shoulders of the lead programmers. They're in charge of reviewing the code of other programmers and making sure everybody stays productive. The philosophy is that any code should be easily usable by everybody, regardless of their skill level. It also means that code needs to be robust, and should be hard to break or misuse.

This is a very interesting point. I totally agree with the idea of robust, defensive code, but for different reasons. I believe it's a good practice in general, even if your team is full of “super-programmers.” Later on down the line, it will be a lot easier to work with that code, modify it, or do anything else with it. Using the excuse of “the programmers on my team are really talented” to write brittle or badly encapsulated code is a really near-sighted attitude. I was glad to see that Charles later mentioned on his “rants” page (entry from 12-06-04) that he really thought code robustness was a good quality in general, independent of your team's composition or skills.

The same attitude applied not just to the code, but to the game engine and the tools themselves. They never wanted the game to go down and prevent people from working, so they always tried to cope in the best way possible with missing assets, incorrect data, etc. All great things to aim for in my opinion. I have applied similar rules in some of my past projects and they worked out great.
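One simple way to get that behavior, as a hypothetical sketch: when an asset is missing, log it and substitute an obvious placeholder so everybody else can keep working.

```cpp
#include <iostream>
#include <optional>
#include <string>

struct Texture { std::string name; };

// Stand-in for the real loader; here we pretend the file could not be found.
std::optional<Texture> TryLoadTexture(const std::string& path) {
    (void)path;
    return std::nullopt;
}

// Never take the game down over a missing asset: complain and keep going.
Texture LoadTextureOrPlaceholder(const std::string& path) {
    if (auto tex = TryLoadTexture(path))
        return *tex;
    std::cerr << "missing texture '" << path << "', using placeholder\n";
    return Texture{"checkerboard_placeholder"};   // obvious on screen, nothing breaks
}
```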

The differences between Halo 2 and Stranger didn't end there. Even though they're somewhat similar games, both Xbox exclusives, they're as far apart as you can get from a technology point of view. Halo 2 tried to be as minimalistic as possible: it hardly used any C++, they avoided dynamic memory allocations as much as possible, etc. Stranger, on the other hand, embraced C++ and a lot of its advanced features: they used dynamic memory allocation, the STL, smart pointers, and even exceptions. Yes, exceptions! (They did turn them off for release, but they made good use of them during development as part of their defensive code style.) So, in spite of all the differences, they still managed to make a game that rivals Halo 2 as far as technology and pushing the Xbox to its limits go.

Another area where Stranger is totally different from Halo 2 is how game entities are organized. In Halo 2 everything uses composition, which to me feels like the natural way of creating them. Stranger apparently makes heavy use of multiple inheritance to achieve the same effect. And not just multiple inheritance of abstract interfaces, but full-blown classes with their own implementation. I'm really surprised that the system worked well for them, and I'd definitely be curious to learn more about it.
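For contrast with the composition example above, here's the rough shape of the multiple-inheritance approach as I understood it (entirely made-up classes, not Oddworld's): the game object inherits implementation, not just interfaces, from several feature classes.

```cpp
#include <cstdio>

class Renderable {
public:
    void Draw() const { std::puts("draw mesh"); }
protected:
    int meshId = 0;
};

class Collidable {
public:
    void Collide() { std::puts("resolve collision"); }
protected:
    float radius = 1.0f;
};

class Audible {
public:
    void EmitSound() { std::puts("play footstep"); }
};

// The concrete game object gets all three behaviors through inheritance
// rather than by holding components.
class Outlaw : public Renderable, public Collidable, public Audible {
public:
    void Update() {
        Collide();
        EmitSound();
        Draw();
    }
};
```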

From the asset pipeline point of view, Oddworld also uses hotloading of resources to see changes in the game almost instantaneously. They also stressed the importance of build farms and of distributing a lot of that processing.

Interestingly, Charles admitted that most of their data was generated by manipulating plain text files. No fancy GUI tools (other than Maya for level editing and layout). Just plain text files. I think people sometimes forget how easy it is to work with text files, and get all wrapped up in making fancy tools that end up limiting what they can do. Clearly, text files are going to need very strong validation to deal with all the errors introduced by typing things by hand. The only thing I didn't like is that apparently at Oddworld that validation happens at load time. I would have preferred it to happen as soon as the file is saved, to give the creator feedback right away.

Andrew Willmott on The Sims 2


This talk was a bit different from the others. First of all, The Sims 2 is a very different game from the ones presented so far. It was also clear that the talk was going to be different just from the emphasis in the title, “Shipping The Sims 2.” Andrew didn't go into tech details as heavily, but gave an overall overview of the project's organization and statistics. This was by far the largest of the projects presented (it had 250+ people at the end!), but that's not so unusual for an EA project.

A lot of his talk was about the pressure they were under and the challenge of organizing so many people. Being the follow-up to the best-selling game The Sims couldn't have been easy, and throwing that many people at it at the wrong time could have made things close to impossible. The fact that the project slipped and Maxis was moved to a new location months before shipping certainly didn't help any!

One of the most interesting facts is that The Sims 2 used a visual scripting language called Edith, and at the end, they all hated it and wanted to move away from it. That has been my experience with visual scripting languages as well. The only time they might be OK is if they generate a text-based script that you can then manipulate just like any other text. Otherwise, there are just too many disadvantages: difficult to copy/paste, can't grep anything, hard to make changes across multiple scripts, hard to view history/differences in version control, etc. Andrew mentioned Lua as their likely scripting language of choice in the future.

Another interesting technical fact was that they made heavy use of scene graphs, but they were also very dissatisfied with them, and they mostly got in the way of the game programmers. At one point I was a fan of light scene graphs, but I've changed my mind in the last couple of years, and I now prefer to organize elements in much more specific containers (and keep the same element in multiple containers for different tasks).
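What I mean by specific containers, as a quick sketch with made-up names: each system keeps its own flat list, and the same entity can appear in several of them.

```cpp
#include <vector>

struct Entity { /* game object data */ };

struct World {
    std::vector<Entity*> visible;     // what the renderer walks
    std::vector<Entity*> animated;    // what the animation update walks
    std::vector<Entity*> audible;     // what the sound system walks
};

// The same entity can be registered with any combination of systems.
void Register(World& world, Entity& e, bool draws, bool animates, bool makesNoise) {
    if (draws)      world.visible.push_back(&e);
    if (animates)   world.animated.push_back(&e);
    if (makesNoise) world.audible.push_back(&e);
}
```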

As a really funny aside, The Sims 2 uses a technique I thought was long dead and buried: dirty rectangles. I'm not kidding. I thought I had seen the last of those when I started using page flipping in the VGA days, but apparently it was useful, given their problems rendering so many objects in view. My hat's off to them for even thinking of it!
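For anyone who missed the VGA days, the technique in a nutshell (my sketch, not their implementation): track the screen regions that changed this frame and repaint only those instead of the whole view.

```cpp
#include <vector>

struct Rect { int x, y, w, h; };

class DirtyRects {
public:
    void Mark(const Rect& r) { dirty_.push_back(r); }   // something changed here

    template <typename RedrawFn>
    void Flush(RedrawFn redraw) {
        for (const Rect& r : dirty_)   // repaint only what changed this frame
            redraw(r);
        dirty_.clear();                // everything is clean until the next change
    }

private:
    std::vector<Rect> dirty_;
};
```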

Like Stranger, The Sims 2 made heavy use of C++ as well, with dynamic memory allocations, the STL, etc. That's not all that surprising since it's a PC-only game, but it's still a good point of reference.

The rest

The last day was much weaker than the first one. I really wish the first day could have been extended to the full conference (either by looking at other games, or by digging deeper into some specific topics).

The one really worthwhile session that stood out from the others was Casey Muratori's talk on “Designing Reusable Code.” Coming from someone who has written two versions of Granny, it had some very interesting insights. It was clearly coming from the point of view of writing middleware (as opposed to writing reusable code to use in multiple games within the same company). The most important observation is that people will want to integrate your code in different ways. At first they want a quick and dirty integration just to get it working. Later they'll want some more control to fine-tune things. Finally, towards the end of the project, they'll want to take over a lot of aspects of the code, such as loading and memory allocation.

To support that pattern of usage, Casey suggests exposing several levels of interface: one high-level interface that accomplishes the basic functionality with a single call, a more detailed one that requires more work on the part of the user, all the way down to the detailed functions. I absolutely agree with the overall principle, but I think Casey was taking it to the extreme, to the point of suggesting that just about every function should be made public and that loading something should be a matter of streaming the data into memory without any other function calls required. That's just a bit too extreme for my taste. I also happen to value simplicity of interfaces very highly, so there's the big question of how to organize those interfaces to make sure they don't seem more complicated than they really are.
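Here's the kind of layering I took away from the talk, sketched for a hypothetical animation library; none of these functions are Granny's actual API.

```cpp
#include <cstddef>

struct AnimationSet;   // opaque to the user

// Level 1: quick and dirty. The library opens the file and allocates for you.
AnimationSet* LoadAnimationSet(const char* path);

// Level 2: more control. You hand the library a buffer you read and own yourself.
AnimationSet* LoadAnimationSetFromMemory(const void* data, size_t size);

// Level 3: full control. Query the size, provide the destination memory, and
// the library only fixes up pointers inside it.
size_t        GetAnimationSetMemoryRequirement(const void* data, size_t size);
AnimationSet* PlaceAnimationSet(void* destination, const void* data, size_t size);
```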

Casey also argued that the best way to develop an interface is to use it before you actually implement anything. Since that's exactly one of the things that test-driven development accomplishes, it was music to my ears. I was also shocked when we had a show of hands to see who was doing test-driven development, and about 10-15 people in the audience raised their hands. Yay! I'm glad to see I'm not alone.

Another session worth mentioning was Brian Sharp‘s talk on how to integrate a physics engine into a game. He brought up a bunch of really good points and anybody integrating a physics engine into their game for the first time would do well to track down the slides and read them carefully. Some of the points were obvious, but some were very interesting, such as how it affects the interactions between different programmers and designers, and what people might expect out of them.

The rest of the sessions ranged from simply OK to a couple of pretty bad ones that felt like sponsored sessions trying to sell a product. It was too bad that the conference ended on such a low note after starting with that amazing first day (and the really good first two days).

I really hope they organize a similar conference next year, but the key is going to be to come up with focused, relevant topics. Personally, I'd like to see one of the sessions concentrate on asset pipelines: both overviews of current projects and specific techniques and organizations (use of databases, concurrent editing, integration of source control, etc). All in all, it was a great experience, though. The small size of the conference and the great speakers and attendees made it really unique, and gave me the opportunity to finally meet a lot of people I had only interacted with through mailing lists before.
