The underlying current at this year’s GDC was the transition to the next generation of platforms. The change was in the air. It wasn’t just the topic of the talks; it also underlay the questions asked, the technologies shown on the expo floor, and even the conversations at lunch. It influenced every activity and message. If it wasn’t in what was said, it was in what was left out, or what was put on hold “for a few more months.”
E3 is just around the corner, so we can expect to finally get the official announcements of Microsoft’s next-generation console, and maybe even Sony’s. That will mark the official transition into the next generation.
And this is not just another console generation transition. This time it’s bigger. Much bigger. Certainly much bigger than the PSX to PS2 console transition we went through five years ago. Maybe even of the same caliber as the transition from 2D to 3D that we saw 10 years ago. The changes that are about to happen affect more than the hardware we develop for. They’re going to affect how we develop games, how we organize ourselves, how we sell games, what type of games we make, and even how we think about games. Winds of change indeed.
These are the major changes I see coming.
Content generation
As the consoles become more powerful, we can create much more detailed environments, and players are going to expect that. Whereas it once took someone a day to create a Doom 1 level in a roughly shippable state, it now takes several weeks to create just the geometry of a single room in one of those levels. Team sizes, especially on the content creation side, are going to balloon, and teams numbering in the hundreds will soon be commonplace.
Large volumes of content aren’t just going to affect team sizes. We’re already struggling to deal with the amount of content we’re generating today, so we better get ready for the demands of tomorrow. A smooth, fast, robust, and efficient asset pipeline is going to be of prime importance. Probably more so than fancy tech and clever algorithms. The games that are going to stand out from the crowd from the point of view of visuals and polish are the ones with really good asset pipelines that allowed their content creators to iterate over and over and come up with something very close to what they were thinking. Tech-centered teams might have some impressive algorithms under the hood, but the resulting product is going to pale in comparison to their competitors.
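To make the idea concrete, here’s a minimal sketch (in Python, with purely hypothetical names) of the dependency check at the heart of any such pipeline: an asset is recompiled only when its source is newer than its last output, which is what keeps an artist’s iteration loop short.

```python
import os

def needs_rebuild(source_path, output_path):
    """An asset is stale if its compiled output is missing or
    older than the source file the artist last touched."""
    if not os.path.exists(output_path):
        return True
    return os.path.getmtime(source_path) > os.path.getmtime(output_path)

def build_stale_assets(assets, compile_fn):
    """Walk the (source, output) pairs and recompile only what changed,
    so an artist's edit-and-check loop stays short."""
    rebuilt = []
    for source_path, output_path in assets:
        if needs_rebuild(source_path, output_path):
            compile_fn(source_path, output_path)
            rebuilt.append(output_path)
    return rebuilt
```

A real pipeline layers a lot on top of this (dependency graphs, distributed builds, error reporting), but fast iteration ultimately comes down to never rebuilding what hasn’t changed.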
The whole topic of asset pipelines is very dear to my heart. I previously wrote about the MechAssault asset pipeline, and I’ve changed my mind about several things since then. Adding the demands of next-generation content is going to make this topic even hotter.
From a technical point of view, we also have to consider the rebirth of procedural content. This is something we had considered in the previous console generation (and quickly dismissed), but this time it’s going to make much more of an impact. In a machine that can render a gorgeous scene that fully fits in memory, we’re going to need to have either an amazing streaming system, or great procedural content creation (or both!). Will Wright drove this point home in his talk about his new game, Spore.
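As a toy illustration of the memory-for-computation trade procedural content offers (nothing to do with Spore’s actual techniques), here’s classic midpoint displacement: a seed and a couple of parameters expand into an arbitrarily detailed terrain profile that never needs to be stored on disc or streamed.

```python
import random

def midpoint_displacement(iterations, roughness=0.5, seed=42):
    """Expand two endpoint heights into a detailed 1D terrain profile.
    Each pass splits every segment at its midpoint and nudges the new
    point by a random offset that shrinks as the detail gets finer."""
    rng = random.Random(seed)  # deterministic: same seed, same terrain
    heights = [0.0, 0.0]
    displacement = 1.0
    for _ in range(iterations):
        refined = []
        for a, b in zip(heights, heights[1:]):
            mid = (a + b) / 2 + rng.uniform(-displacement, displacement)
            refined.extend([a, mid])
        refined.append(heights[-1])
        heights = refined
        displacement *= roughness  # finer detail, smaller bumps
    return heights
```

Ten iterations turn a handful of bytes of parameters into a thousand height samples; that ratio is the whole appeal when memory is the scarce resource.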
Alternatively, maybe we’ll see a clearer distinction in the paths games take. Some games are trying to follow movies in the experience they offer: they are short (8-10 hours or less), full of new content, and give players totally new experiences throughout the game. These are what I call “interactive experiences.” Then we have the “real” games, or toys. These are not a content showcase but a game, with some rules, some variations, and lots of interaction with the player. This is the category of Sim City, Amplitude, or even sports or driving games. Games like Grand Theft Auto fall into an intermediate category because they straddle both types (mostly a toy sandbox with a story bolted on top). The point is, the more difficult or labor-intensive content creation becomes, the more we’re going to see this divide.
Development model
Next-generation games are going to need large numbers of people during production, but pre-production will still require only a small team. That’s already the case today, and this trend is only going to become more pronounced.
Up until now, small companies with only one or two projects at a time just put up with it and didn’t fully utilize everybody during pre-production. But will they be able to keep doing that when they need to staff up to 100 people per project during production? If they staff up, how are they going to stay in business between projects? Does everybody need to grow to 200-300 people to have multiple projects going at once and smooth out the production spikes?
One alternative that has been bandied about for the last couple of years is the movie industry model. Game companies could do pre-production with a small number of people, and then assemble a team just for the duration of one project. Some among us (especially from the UK) will say that this is already happening, just unofficially, with companies regularly laying people off after a game ships (see the recent Oddworld news after completing Stranger’s Wrath).
Personally, I really dread this development model. Not only does it mean that we’ll lack a stable paycheck (and probably see even more emphasis on crunch periods), but it also means that the industry is going to cluster around a few central locations. In the US that will most certainly be Los Angeles, San Francisco, and maybe one other city. That’s necessary to have a readily available talent pool, but it’s really a bummer for those of us who’d like to live elsewhere. It’s not that there are many companies in remote locations, but today you can still choose to work in places like Maryland, North Carolina, Western Massachusetts, or Texas. Tomorrow we might not have that luxury.
Outsourcing went from being something that other industries talked about a few years ago, to something that we now accept as fairly common for art resources or cinematics. Outsourcing is a great way of flattening the curve of necessary resources, so I fully expect it to become the norm. Anything that can be outsourced will be. And a few things that can’t be will also be outsourced. For now that means cinematics, static geometry, some animations, music and some sound, and maybe some level layout.
Is outsourcing going to affect programming? I don’t think so right now. The closest thing we have to outsourcing is middleware, which is also a trend that is going to continue to grow. Especially if companies want to hit the ground running for a large production project, it makes total sense to use as much middleware as possible. Just look at how many developers are flocking to the Unreal Engine 3. It’s a very good set of technology and Epic managed to position itself in the right place at the right time (i.e. console generation transition).
Multiprocessors
This is undoubtedly going to be the biggest architecture change we’ve seen in a long while: multiple processors.
We all know that writing threaded programs is difficult. But this goes beyond making your program thread-safe; it’s going to be a whole lot more than that. We not only have to split the game into parallelizable parts, we have to make sure those parts don’t interfere with each other and that they access main memory in a controlled, efficient way. The gap between CPU and memory speeds continues to widen, and adding multiple processors to the mix isn’t going to help any. Games that ignore memory access patterns are going to crawl along on next-generation hardware.
Sony’s PS3 is going to take things even further. It’s going to use a cell processor with multiple cores, but each of the cores only has access to a very small amount of local memory. Not exactly the everyday programming model of the hardware we’re used to.
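One common way to program that kind of hardware is to stream data through the small local memory in fixed-size chunks. A rough sketch of the shape of it, with plain Python copies standing in for the asynchronous DMA transfers a real SPE program would use:

```python
def process_in_chunks(data, local_store_size, transform):
    """Stream a large array through a small 'local store': copy a chunk
    in, transform it entirely within local memory, copy the result out.
    On real hardware the copies would be DMA transfers, double-buffered
    so the next chunk loads while the current one is being processed."""
    out = []
    for start in range(0, len(data), local_store_size):
        local = data[start:start + local_store_size]  # "DMA in"
        local = [transform(x) for x in local]         # compute on local data only
        out.extend(local)                             # "DMA out"
    return out
```

Note what this model forbids: the compute step never reaches back into main memory, which is exactly the constraint that makes the everyday pointer-chasing style of engine code such a poor fit.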
To be able to take full advantage of the new hardware, we’re going to have to change how we think about our programs. Take a snapshot of your game engine architecture, tilt your head 90 degrees, and that’s more how it’s going to look now. Data-driven architectures and service-oriented systems are going to dominate this new landscape.
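One hedged way to picture that 90-degree tilt, with purely illustrative names: instead of every object carrying its own update method, each system sweeps a flat array of one kind of data from start to end, which is the access pattern both parallel hardware and streaming memory reward.

```python
# Object-oriented layout: each entity updates itself, scattering reads
# and writes across the heap. Data-oriented layout: one system function
# sweeps a flat array of one component type from start to end.

def damage_system(healths, damage_events):
    """Apply queued (index, amount) damage events to a flat health array,
    clamping at zero. The system owns the whole pass over the data."""
    for index, amount in damage_events:
        healths[index] = max(0.0, healths[index] - amount)
    return healths

def cull_system(healths):
    """Report which entities died this frame, as indices into the array.
    A separate system, a separate sequential sweep."""
    return [i for i, h in enumerate(healths) if h == 0.0]
```

Each system is a self-contained service over a block of homogeneous data, so a frame becomes a pipeline of such sweeps rather than a tangle of objects calling each other.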
Fortunately this is not just a bizarre architectural decision that console manufacturers pulled out of a hat. They’re just leading the way as far as future architectures go. Multiprocessors are becoming the norm on PCs too, and both Intel and AMD have been hard at work for several years to take advantage of this trend. When you can no longer make your processors faster, cheaper, and smaller, you need to start looking in different directions to continue improving performance. Herb Sutter has been talking about this for a while now, and he also sees it as the inevitable future. So, as a programmer, don’t be afraid to dive in and get comfortable with this architecture. Multiprocessors are here to stay, and programming them is going to be a valuable skill in the future.
Online focus
It seems that everybody is talking online this, online that. Especially Microsoft. Personally, I’m a bit scared (and skeptical) of this trend. Even though I enjoy the occasional multiplayer game (and I’m enjoying Guild Wars quite a bit right now), I certainly don’t want my entire gaming experience to be multiplayer. I enjoy playing against the AI, or against somebody else sitting on my couch. I certainly don’t want to wait 10 or even 5 minutes to get everybody together before we can start a game, and I most certainly don’t want to hear eight-year-olds yelling while I’m trying to unwind with a quick game in the evening. I don’t want that in a PC game, and certainly not in a console game, which is supposed to provide a quick gaming experience.
I think the PC went through that phase about five years ago, when online gaming was new and exciting and every game had to have an online component, but it has reached a more mature position now. In the console realm, online play is still pretty new, so it keeps being pushed as a big marketing point. Xbox Live was apparently a big hit (certainly much bigger than I predicted back when it was announced), but it’s still only a tiny fraction of the market. Console manufacturers and publishers seem to think that broadband penetration is what’s preventing more people from jumping online. Have they considered that maybe most people don’t want to play online?
It always amazes me to see games whose multiplayer component is a completely different game from the single-player one. They are two totally different experiences duct-taped together in one package, and usually one (or both) suffers for it. Wouldn’t it have been better to spend those resources on one of the two and do it well? For example, the Grand Theft Auto series got it right. Sure, they could have done multiplayer with capture the flag, conquest modes, etc., but that’s not what the game is about.
So please, let’s not get drawn in by the hype and let’s keep the single player alive.
Conclusion
Times really are a-changing. The potential changes to the development model scare me. I hope things are not as grim as they seem right now and small game developers can stay afloat and not have to move to California to survive. On the other hand, I’m genuinely excited about the new technical challenges and figuring out how they’re going to affect the way we develop software. Bring on those multiprocessor machines!
“Anything that can be outsourced will be. And a few things that can’t be will also be outsourced. For now that means cinematics, static geometry, some animations, music and some sound, and maybe some level layout.”
I’m a bit dubious that custom outsourcing will take off much (i.e. paying someone outside the dev team to make a few models); rather, I’d expect pre-made ‘packs’ of content to be sold, somewhat like http://soundfx.com/librariessfx.htm, except for textures and models.
I also reckon that content generation programs will get more sophisticated and tuned to tasks that are needed for creating content specifically for games.
>And this is not just another console generation transition. This time it’s bigger. Much bigger
Actually I think that, at least from the player’s point of view, this transition is going to be much, much smaller. We’ve already seen that, as beautiful as Half Life 2 is, it’s really not that much of a new experience compared to Half Life 1. Five years passed, along with a generation of hardware, and we have the same game, just prettier. I’m sure there will be a few breakout games, but 95% of next-gen games are most likely going to be the same games as today with prettier graphics.
>The games that are going to stand out from the crowd from the point of view of visuals and polish are the ones with really good asset pipelines that allowed their content creators to iterate over and over and come up with something very close to what they were thinking
No, actually: games with great artists are going to have great art. A great artist with just a pencil will make great art. A bad artist with the best tools in the world will still make bad art. That’s not to say that the best pipeline won’t help, but if you actually compare the games you think have the best art with the games that have the best pipelines, there is very little correlation. Not that I don’t share your affinity for building the best pipeline to help my artists, but my point is: hire the best artists if you want the best art.
>Console manufacturers and publishers seem to think that broadband penetration is what’s preventing more people from jumping online. Have they considered that maybe most people don’t want to play online?
Korea has shown that it is possible that the problem really is broadband penetration. In Korea you are considered a geek or an outcast if you DON’T play online games. Among men, 80% of the population plays online games. It’s the topic of discussion at bars and social gatherings, to the point that if you are not playing, you won’t be able to take part in the average conversation. Whether other countries will follow that trend remains to be seen; few if any other countries have anything close to Korea’s ULTRABAND penetration. Note: what Korea and Japan have should NOT be called broadband, because it is 8 to 100 times faster than what most American providers call broadband. They are categorically different experiences, and it’s insulting and misleading to think that 1 to 3 meg connections in the U.S. for $30 to $60 a month are anything like the 24 to 100 meg connections in Japan and Korea for $20 to $40 a month.
As for multiprocessors, I’m still not convinced they are going to make much difference to the “game”. Sure, some library programmers are going to have to figure out how to use them to process polygons, draw effects, simulate physics, or run A.I., but those are library issues. The actual “game” programming will most likely stay about the same.
Next-gen consoles need to provide standardized load/save and other perks. I’ve done load/save interface code on three platforms for two titles now, and I have to say that I look upon the 3 hours I spent getting PSP’s to work much more fondly than the 3 weeks I spent getting XBox and PS2 compliant.
If manufacturers pick up on this and other things that could save us from tech requirement purgatory, I bet we’d all be in a better position to tackle the technical challenges of the new platforms. To Microsoft and Sony I say… “Please?”
“Actually I think in general, at least from the player’s point of view this transition is going to be much much smaller. ”
That’s very true. I was talking purely from a technical, developer point of view. Which I guess begs the question: should we bother taking full advantage of the hardware given how painful it’s going to be? I still think we have to. The games that do are going to stand out technically, and a lot of developers can’t afford to stay behind.
“No, actually, games with great artists are going to make games with great art.”
About that, we must have very different experiences. We’re not talking paper sketches anymore. In my experience, a good artist with a bad pipeline will make a model/texture/level that looks reasonably OK in the content-creation tool and totally bland in the game. An average artist with a great pipeline will be able to make a good-looking piece of art in the game. Guess which one the players are going to like best.
Maybe that’s even more important with designers. Nobody (that I’ve seen) can think of all the subtleties of a level and AI in their head and make it work in the first (or second, or tenth…) pass in the editor. They need to create it, play it, tweak it, play it, etc. The better the pipeline, the more they’ll be willing to try complex things too.
Of course, if you combine both talent and pipeline, then you’ll end up with amazing content in the game, which is what we all want.
“Which I guess begs the question: should we bother taking full advantage of the hardware given how painful it’s going to be? I still think we have to. The games that do are going to stand out technically, and a lot of developers can’t afford to stay behind.”
I think you are contradicting the point of your article with this comment. Normal everyday people can’t really tell the difference between the latest tech and the previous tech. All that matters anymore is art quality. [Well, that and gameplay, but we all know that doesn’t sell a game.] That means a rock’n art pipeline and rock’n artists (as your article states).
I am very much afraid that after all our procedural tricks are created and implemented, all we’ll have done is retained parity with current games with regard to the amount of content – and we’ll have broken our backs (and bankrupted our budget) to do it.
I disagree with your assertion that only the games that exploit every graphical advantage will succeed…the Grand Theft Auto series has proven that gamers WILL accept a game less graphically advanced if the result is more content (and boy does GTA:SA have content…)
Personally I think we should have figured out procedural content and streaming for 3D games years ago…I guess it’s just natural human laziness to put it off until you HAVE to do it. But I’m glad it’s happening. I just don’t want to see it only used to create a small amount of gorgeous content – I’d be much happier with a very large amount of merely pretty content.
Good article. Lots of food for thought. I definitely agree that content will become more and more important as machine capabilities increase and more developers use existing middle-ware. I hope to see more articles and thoughts on the content asset pipeline.
Like most people, I’m both scared and excited about the transition.
One of the things that shocked me when they presented the architecture of the Cell is how much the machine is geared towards number crunching.
I don’t believe we have yet reached the point of diminishing returns regarding graphics (and certainly not physics), but we’re getting there. Some people have said that players can’t tell the difference between two games even if one can draw twice the number of polygons of the other, and I think that’s true most of the time.
But the hardware, Nintendo being the exception, is encouraging exactly that kind of race: throw more polygons and more flashes, put in more spinning boxes and ragdolls. I’m racking my brain trying to find ways to build better AI and new player experiences, and I don’t think that having comparatively less memory, more latency issues, and more vector operations is going to help much.
Another thing that concerns me is that the new generation is going to put even more pressure on small studios. I work in Spain and it’s already difficult enough to build games for current generation. I hope that we can keep working on video games in our country.
Noel, in your online section I think you are missing another hugely important aspect of the online experience. Namely everything you can do with an online connection _besides_ multiplayer gameplay. Downloading new maps, vehicles, weapons, background music, up-to-date real-world team stats, weather reports, *cough*patches*cough*, etc. XBox seems to have recognized the value of this with their Live aware titles. The gameplay itself does not have to be multi-player to qualify as Live aware.
“Namely everything you can do with an online connection _besides_ multiplayer gameplay.”
Very true. But I’m afraid that in the first wave of mainstream online games (especially on the Xbox 2), developers are going to put too much emphasis on the online part to the detriment of the single player gameplay. I hope I’m wrong though, because I don’t have a network connection in my living room or plans to put one there any time soon. So any effort towards online-aware titles is going to be totally wasted for me (and, as it stands right now, 90% of the US console players?)
“For example, the Grand Theft Auto series got it right. Sure, they could have done multiplayer with capture the flag, conquest modes, etc, but that’s not what the game is about.”
I couldn’t agree with this more. Unfortunately…
Our small studio recently released a moderately well-received Xbox title. When we were searching for a publisher, and subsequently applying for approval from Microsoft for the title, we were told in no uncertain terms: as an unknown developer, if there isn’t a sizeable online component, forget it. You will not be approved.
So, we were forced to divide our single player team in half, and do a full online mode.
As a small developer, you want to do one thing, and do it really well. This is the opposite of what the product approval people at console manufacturers want: they want either high profile, high budget games, or lots of bullet points on the back of boxes. Preferably both.
Suddenly PC development isn’t sounding quite so bad 🙂