The underlying current at this year’s GDC was the transition to the next generation of platforms. The change was in the air. It wasn’t just the topic of the talks, but it was also underlying the questions asked, the technologies shown in the expo, and even the conversations at lunch. It influenced every activity and message. If it wasn’t because of what was said, it was because of what was left out, or what was put on hold “for a few more months.”
E3 is just around the corner, so we can expect to finally get the official announcements of Microsoft’s next-generation console, and maybe even Sony’s. That will mark the official transition into the next generation.
And this is not just another console generation transition. This time it’s bigger. Much bigger. Certainly much bigger than the PSX to PS2 console transition we went through five years ago. Maybe even of the same caliber as the transition from 2D to 3D that we saw 10 years ago. The changes that are about to happen affect more than the hardware we develop for. They’re going to affect how we develop games, how we organize ourselves, how we sell games, what type of games we make, and even how we think about games. Winds of change indeed.
These are the major changes I see coming.
As the consoles become more powerful, we can create much more detailed environments, and players are going to expect it. Whereas one person could build a roughly shippable Doom level in a day, it now takes several weeks to create just the geometry that goes into a single room. Team sizes, especially on the content creation side, are going to balloon, and soon teams numbering in the hundreds are going to be commonplace.
Large volumes of content aren't just going to affect team sizes. We're already struggling to deal with the amount of content we generate today, so we'd better get ready for the demands of tomorrow. A smooth, fast, robust, and efficient asset pipeline is going to be of prime importance, probably more so than fancy tech and clever algorithms. The games that stand out from the crowd in visuals and polish are going to be the ones with asset pipelines good enough to let their content creators iterate over and over until the result is very close to what they envisioned. Tech-centered teams might have some impressive algorithms under the hood, but the resulting product is going to pale in comparison to their competitors'.
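To make that concrete: the heart of any asset pipeline is dependency checking, so that an asset is rebuilt only when its source has actually changed. Here's a minimal sketch in Python (the function names and the single source-to-output conversion step are my own simplification, not a description of any particular studio's pipeline):

```python
import os

def needs_rebuild(source: str, output: str) -> bool:
    # An asset needs rebuilding if its output is missing
    # or older than the source it was built from.
    if not os.path.exists(output):
        return True
    return os.path.getmtime(source) > os.path.getmtime(output)

def build_if_stale(source: str, output: str, convert) -> bool:
    # Run the (hypothetical) conversion step only when needed, so an
    # artist iterating on one asset never waits on all the others.
    if needs_rebuild(source, output):
        convert(source, output)
        return True
    return False
```

A real pipeline tracks many more dependencies per asset (exporters, converter versions, platform targets), but the principle is the same: minimize the time between an artist saving a file and seeing it in the game.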
The whole topic of asset pipelines is very dear to my heart. I previously wrote about the MechAssault asset pipeline, and I’ve changed my mind about several things since then. Adding the demands of next-generation content is going to make this topic even hotter.
From a technical point of view, we also have to consider the rebirth of procedural content. This is something we had considered in the previous console generation (and quickly dismissed), but this time it’s going to make much more of an impact. In a machine that can render a gorgeous scene that fully fits in memory, we’re going to need to have either an amazing streaming system, or great procedural content creation (or both!). Will Wright drove this point home in his talk about his new game, Spore.
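The appeal of procedural content is that it trades memory for computation: instead of storing every detail of a huge world, you regenerate it deterministically on demand from a seed. A toy sketch in Python (the coordinate-hashing scheme is purely an illustrative assumption, not how any shipping game does it):

```python
import hashlib

def height_at(seed: int, x: int, y: int) -> float:
    # Hash the coordinates together with the world seed: the same
    # inputs always yield the same height, so nothing is stored.
    digest = hashlib.sha256(f"{seed}:{x}:{y}".encode()).digest()
    return int.from_bytes(digest[:4], "big") / 0xFFFFFFFF  # value in [0, 1]
```

A real system would layer smooth noise and artist-authored rules on top, but the key property is the same: any part of the world can be recreated at any time, so only the seed and the player's changes need to live in memory.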
Alternatively, maybe we'll see a clearer distinction in the paths games take. Some games are trying to follow movies in the experience they offer: they are short (8-10 hours or less), full of new content, and give players totally new experiences throughout. These are what I call "interactive experiences." Then we have the "real" games, or toys. These are not a content showcase but a game, with some rules, some variations, and lots of interaction with the player. This is the category of Sim City, Amplitude, or even sports or driving games. Games like Grand Theft Auto fall into an intermediate category because they straddle both types (mostly a toy sandbox with a story bolted on top). The point is, the more difficult or labor-intensive content creation becomes, the more we're going to see this divide.
Next-generation games are going to need large numbers of people during production, but pre-production will still require only a small number. That's the case today, but this trend is going to be exaggerated even further.
Up until now, small companies with only one or two simultaneous projects just put up with it and didn't fully use everybody during pre-production. But will they be able to keep doing that when they need to staff up to 100 people per project during production? If they staff up, how are they going to stay in business between projects? Does everybody need to grow to 200-300 people, with multiple projects going at once, to smooth out the production spikes?
One alternative that has been bandied about for the last couple of years is the movie industry model. Game companies could do pre-production with a small number of people and then assemble a team just for the duration of one project. There are some among us (especially from the UK) who will say that that's already happening, just unofficially, with companies regularly laying people off after a game ships (see the recent Oddworld news after completing Stranger's Wrath).
Personally, I really dread this development model. Not only does it mean that we'll lack a stable paycheck (and probably put even more emphasis on crunch periods), but it also means that the industry is going to cluster around a few central locations. In the US that will almost certainly be Los Angeles, San Francisco, and maybe one other city. Clustering is necessary to have a readily available talent pool, but it's really a bummer for those of us who'd like to live elsewhere. It's not like there are many companies in remote locations now, but today you can still choose to work in places like Maryland, North Carolina, Western Massachusetts, or Texas. Tomorrow we might not have that luxury.
Outsourcing went from being something that other industries talked about a few years ago, to something that we now accept as fairly common for art resources or cinematics. Outsourcing is a great way of flattening the curve of necessary resources, so I fully expect it to become the norm. Anything that can be outsourced will be. And a few things that can’t be will also be outsourced. For now that means cinematics, static geometry, some animations, music and some sound, and maybe some level layout.
Is outsourcing going to affect programming? I don't think so, at least not yet. The closest thing we have to outsourced programming is middleware, which is also a trend that is going to keep growing. Especially if companies want to hit the ground running on a large production project, it makes total sense to use as much middleware as possible. Just look at how many developers are flocking to Unreal Engine 3. It's a very solid set of technology, and Epic managed to position itself in the right place at the right time (i.e., a console generation transition).
This is undoubtedly going to be the biggest architecture change we’ve seen in a long while: multiple processors.
We all know that writing threaded programs is difficult. But this goes beyond making your program thread-safe; it's going to be a whole lot more than that. We not only have to split the game into parallelizable parts, we also have to make sure those parts don't interfere with each other, and that they access main memory in a controlled, efficient way. The gap between CPU and memory speeds continues to widen, and adding multiple processors to the mix isn't going to help any. Games that ignore memory access patterns are going to crawl along on next-generation hardware.
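As a minimal illustration of that kind of decomposition, here is a sketch (in Python for brevity; the function names and chunking scheme are mine, purely for illustration) that updates entity positions in disjoint chunks, so each worker owns its own slice of the data and no locking is needed:

```python
from concurrent.futures import ThreadPoolExecutor

def integrate_chunk(positions, velocities, start, stop, dt):
    # Each worker owns a disjoint slice, so no locks are needed and
    # each thread walks its part of memory sequentially.
    for i in range(start, stop):
        positions[i] += velocities[i] * dt

def integrate_parallel(positions, velocities, dt, workers=4):
    n = len(positions)
    chunk = (n + workers - 1) // workers  # ceil(n / workers)
    # The pool's context exit waits for all submitted chunks to finish.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for start in range(0, n, chunk):
            pool.submit(integrate_chunk, positions, velocities,
                        start, min(start + chunk, n), dt)
```

In a real engine this would be native code running on dedicated cores, but the structure is the point: disjoint writes, sequential memory walks, no shared mutable state between workers.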
Sony’s PS3 is going to take things even further. It’s going to use the Cell processor, with multiple cores where each core has direct access only to a small amount of local memory rather than main RAM. Not exactly the everyday programming model of the hardware we’re used to.
To be able to take full advantage of the new hardware, we’re going to have to change how we think about our programs. Take a snapshot of your game engine architecture, tilt your head 90 degrees, and that’s more how it’s going to look now. Data-driven architectures and service-oriented systems are going to dominate this new landscape.
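One concrete version of that 90-degree tilt is moving from an array of entity objects to one array per component, so each system streams through only the data it actually touches. A hypothetical sketch (in Python for brevity; the class and method names are mine):

```python
class Entities:
    # Structure-of-arrays: one contiguous list per field, instead of a
    # list of per-entity objects. A system that only moves things reads
    # nothing but positions and velocities.
    def __init__(self):
        self.pos = []
        self.vel = []
        self.hp = []

    def spawn(self, pos, vel, hp):
        self.pos.append(pos)
        self.vel.append(vel)
        self.hp.append(hp)
        return len(self.pos) - 1  # the new entity's index

    def move_all(self, dt):
        # Streams straight through two arrays; the hp array is never touched.
        for i in range(len(self.pos)):
            self.pos[i] += self.vel[i] * dt
```

Laid out this way, each system's data access is predictable and contiguous, which is exactly what both caches and multiple processors want.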
Fortunately, this is not just a bizarre architectural decision that console manufacturers pulled out of a hat. They are simply leading the way toward future architectures. Multiprocessors are becoming the norm on PCs too, and both Intel and AMD have been hard at work for several years to take advantage of this trend. When you can't keep making a single processor faster, cheaper, and smaller, you need to start looking in different directions to continue improving performance. Herb Sutter has been talking about this for a while now, and he also sees it as the inevitable future. So, as a programmer, don't be afraid to dive in and get comfortable with this architecture. Multiprocessors are here to stay, and working with them is going to be a valuable skill in the future.
It seems that everybody is talking online this, online that. Especially Microsoft. Personally, I'm a bit scared (and skeptical) of this trend. Even though I enjoy the occasional multiplayer game (and I'm enjoying Guild Wars quite a bit right now), I certainly don't want my entire gaming experience to be multiplayer. I enjoy playing against the AI, or against somebody else sitting on my couch. I certainly don't want to wait ten or even five minutes to get everybody together before we can start a game, and I most certainly don't want to hear eight-year-olds yelling while I'm trying to unwind with a quick game in the evening. I don't want that in a PC game, and certainly not in a console game, which is supposed to provide a quick gaming experience.
I think the PC went through that phase about five years ago, when online gaming was new and exciting and every game had to have an online component, but it has reached a more mature position now. In the console realm, online play is still pretty new, so it keeps being pushed as a big marketing point. Xbox Live was apparently a big hit (certainly much bigger than I predicted back when it was announced), but it’s still only a tiny fraction of the market. Console manufacturers and publishers seem to think that broadband penetration is what’s preventing more people from jumping online. Have they considered that maybe most people don’t want to play online?
It always amazes me to see games whose multiplayer component is a completely different game from the single-player one. They are two totally different experiences duct-taped together in one package, and usually one (or both) suffers for it. Wouldn't it have been better to spend the resources on one of those areas and do it well? The Grand Theft Auto series, for example, got it right. Sure, they could have bolted on capture the flag, conquest modes, and so on, but that's not what the game is about.
So please, let’s not get drawn in by the hype and let’s keep the single player alive.
Times really are a-changing. The potential changes to the development model scare me. I hope things are not as grim as they seem right now and small game developers can stay afloat and not have to move to California to survive. On the other hand, I’m genuinely excited about the new technical challenges and figuring out how they’re going to affect the way we develop software. Bring on those multiprocessor machines!