Presenting at GDC 2009 on iPhone Development

It seems like GDC was just the other day, but GDC 2009 is around the corner! And this year, I’m going to be giving a presentation titled iPhone Development: Exploring The New Frontier. I’m sorry about reverting to the clichéd format of having a colon in the presentation title; it was too hard to resist :-).

I’ll be sharing my experiences transitioning from traditional AAA console game development, with teams of 100+ people, multi-million-dollar budgets, and several years of development, to indie iPhone development. There are a surprising number of things that carry over from “big game development”, and quite a few that are totally different. I’ll go into what’s involved in making games for the iPhone, and what game developers can expect when making the transition.

Apart from my talk, I’m particularly excited about this year’s GDC. It seems that the amount of content on indie game development and iPhone game development has shot through the roof. On Monday and Tuesday we’re treated to not just one, but two great summits: The Independent Games Summit and GDC Mobile! Last year, the Independent Games Summit was the best part of the show. Meeting all the other indie game developers out there and hearing their experiences was great. This year I’m hoping for more of the same plus all the iPhone-specific content.

Of course, the main conference is packed with great content too, but I haven’t had time to go through all of it and pick the sessions I want to attend yet. Too busy wrapping up my current project.

So if you see me around the show, stop by and say hi. I’m always glad to meet fellow developers, and it’s always nice to put a face to a name for those of you I know only through Twitter or blogs.

San Diego iPhone Developers Gathering This Wednesday

We had so much fun the first time that we decided to do it again. Last time we had a great turnout, and a great time comparing ninja UIKit techniques, giving exclusive worldwide previews of apps in the works, and sharing voodoo spells to make your app show up on the front page of the App Store (amazing what a few beers will do).

So we’re back at it again. Anybody involved in iPhone development or interested in learning more about it is welcome to come.

When: Wednesday Feb 11th from 8PM until whenever.
Where: O’Sullivan’s Pub in Carlsbad (home of the Appy Entertainment Secret World Headquarters)

If you’re interested, you might also want to join the San Diego iPhone Developers group to keep up to date with any future announcements.

See you all there!

Last Objective-C Annoyance: Fixed!

I’ve gone on and on before about how much I like Objective-C and, especially, how well it plays with C++. Mixing the two is a pleasure, and it’s often hard to remember where one stops and the other one starts. So much so that sometimes I catch myself calling C++ member functions like this: [self myFunction:param]. Oops.

It’s true that Objective-C can be a bit more verbose than plain C++, but it more than makes up for it with named parameters (which hugely increase readability), and by not making you duplicate so many things between header and implementation files.
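As a quick illustration of the difference named parameters make, compare these two calls (the C++ version is a hypothetical equivalent of UIKit’s real insertSubview:atIndex: method):

// C++: what does the 2 mean at the call site?
view->insertSubview(mySubview, 2);

// Objective-C: the parameter names are part of the call
[view insertSubview:mySubview atIndex:2];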

Yet, even though I like Objective-C so much, there was one last, really annoying limitation that was preventing perfect interop between C++ and Objective-C. Something so simple, yet so fundamental, that it really got in the way. Something that wouldn’t let me write code as simple as this:

@interface MyView : UIView
{
    // ...
    SingleTouchEvent m_singleTouchEvent;
}
@end

SingleTouchEvent is a custom C++ class. There’s nothing wrong with that, right? There shouldn’t be. But if you try to compile it, you’ll get these warnings:

warning: type `SingleTouchEvent' has a user-defined constructor
warning: C++ constructors and destructors will not be invoked for Objective-C fields

Some of you might be saying: no big deal, you can dynamically allocate the object and have its constructor called as usual. That’s true, but I hate allocating objects on the heap just because of something like this. My preference is always to make them member variables if I can. It’s faster, more efficient, less error-prone, and easier to read. Needless to say, this was bugging the hell out of me.
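For reference, this is the kind of workaround I was trying to avoid (a sketch, using the same SingleTouchEvent class from the example above):

@interface MyView : UIView
{
    // A pointer instead of a direct member...
    SingleTouchEvent* m_singleTouchEvent;
}
@end

// ...which means heap allocation somewhere during setup:
m_singleTouchEvent = new SingleTouchEvent();
// ...and a matching delete in -dealloc:
delete m_singleTouchEvent;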

I looked all over the build options in case there was some setting I could toggle to have the C++ constructors called, but no luck. A bit more research later, lo and behold, it turns out there is a custom setting you can pass directly to gcc to do exactly that! It’s nowhere in the Xcode settings GUI, so you have to enter it by hand. Go to Project | Info, and in the Custom Settings section, enter the following: GCC_OBJC_CALL_CXX_CDTORS = YES

[Screenshot: the GCC_OBJC_CALL_CXX_CDTORS custom setting in the Xcode project info window]

With that setting enabled, Objective-C and C++ now play together better than ever!

Without any barriers to writing new code in Objective-C, I expect it will slowly (or maybe not so slowly) replace C++ as my main language of choice for future projects.

Remixing OpenGL and UIKit

Yesterday I wrote about OpenGL views, and how they can be integrated into a complex UI with multiple view controllers. There’s another interesting aspect of integrating OpenGL and UIKit, and that’s moving images back and forth between the two systems. In both cases, the key enabler is a Core Graphics bitmap context (CGContextRef).

From UIKit to OpenGL

This is definitely the most common way of sharing image data. Loading an image from disk in almost any format is extremely easy with Cocoa Touch. In fact, it couldn’t get any easier:

UIImage* image = [UIImage imageNamed:filename];

After years of using image libraries like DevIL, it’s quite a relief to be able to load a file with a single line of code and no chance of screwing anything up. That makes it a natural starting point for loading OpenGL textures. So far we have a UIImage. How do we get from there to a texture?

All we have to do is create a CGContext with the appropriate parameters, and draw the image onto it:

unsigned char* textureData = (unsigned char *)malloc(m_width * m_height * 4);
CGContextRef textureContext = CGBitmapContextCreate(textureData, m_width, m_height, 8, m_width * 4, 
            CGImageGetColorSpace(image.CGImage), kCGImageAlphaPremultipliedLast);
CGContextDrawImage(textureContext, CGRectMake(0.0, 0.0, (CGFloat)m_width, (CGFloat)m_height), image.CGImage);
CGContextRelease(textureContext);

At that point, we’ll have the image information, fully uncompressed in RGBA format in textureData. Creating an OpenGL texture from that is done like we always do:

glGenTextures(1, &m_handle);
glBindTexture(GL_TEXTURE_2D, m_handle);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, m_width, m_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
free(textureData);    // OpenGL has its own copy now, so we can free ours

And we’re done.

A word of advice: this is a quick way to load textures and get started, but it’s not what I would recommend for a shipping product. This code is doing a lot of work behind the scenes: loading the file, decompressing the image into an RGBA array, allocating memory, copying it over to OpenGL, and, if you set the option, generating mipmaps. All of this at load time. Ouch! If you have more than a handful of textures, that’s going to add up to a pretty noticeable delay while loading.

Instead, it would be much more efficient to perform all that work offline, converting the image into the final format you want to use with OpenGL. Then you can load that data directly into memory and call glTexImage2D on it directly. Not as good as having direct access to video memory and loading it there yourself, but that’s as good as it’s going to get on the iPhone. Fortunately, Apple provides a command-line tool called texturetool that does exactly that, including generating mipmaps.
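If I remember the flags right, an invocation along these lines compresses an image to 4-bit-per-pixel PVRTC and generates mipmaps (the file names are placeholders; check texturetool’s usage text for the exact options):

texturetool -m -e PVRTC --bits-per-pixel-4 -o texture.pvr texture.png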

Loading textures isn’t the only reason to transfer image data from UIKit to OpenGL: you can also use UIKit’s beautiful, full-featured font rendering in OpenGL applications. To do that, we render a string into the CGContext:

CGColorSpaceRef    colorSpace = CGColorSpaceCreateDeviceGray();
int sizeInBytes = height*width;   // one byte per pixel in the gray color space
void* data = malloc(sizeInBytes);
memset(data, 0, sizeInBytes);
CGContextRef context = CGBitmapContextCreate(data, width, height, 8, width, colorSpace, kCGImageAlphaNone);
CGColorSpaceRelease(colorSpace);
CGContextSetGrayFillColor(context, grayColor, 1.0);
// Flip the y axis to match OpenGL's coordinate system
CGContextTranslateCTM(context, 0.0, height);
CGContextScaleCTM(context, 1.0, -1.0);
UIGraphicsPushContext(context);
    // txt is the NSString to draw; destRect is a custom C++ rect type
    [txt drawInRect:CGRectMake(destRect.left, destRect.bottom, destRect.Width(), destRect.Height()) withFont:font 
                                lineBreakMode:UILineBreakModeWordWrap alignment:UITextAlignmentLeft];
UIGraphicsPopContext();

There are a couple of things that are a bit different here. First of all, notice that we’re using a gray color space: we’re rendering the text into a grayscale, alpha-only texture, so a full RGBA texture would be a waste. Then there’s all the transform stuff in the middle. The coordinate system for OpenGL is flipped with respect to UIKit, so we need to invert the y axis.

Finally, we can create a new texture from that data, or update an existing texture:

glBindTexture(GL_TEXTURE_2D, m_handle);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, m_width, m_height, GL_ALPHA, GL_UNSIGNED_BYTE, data);
CGContextRelease(context);   // done with the bitmap context...
free(data);                  // ...and with our pixel buffer

From OpenGL to UIKit

This is definitely the more uncommon way of transferring image data. You would use this when saving something rendered with OpenGL back to disk (like game screenshots), or even when using OpenGL-rendered images on UIKit user interface elements, like I’m doing in my project.

The process is very similar to the one we just went over, but backwards, with a CGContext as the middleman.

We start by rendering whatever image we want. You can render to the back buffer or to a different render target; I don’t know that it makes a difference in performance either way (these are not fast operations, so don’t try to do them every frame!). Then you capture the pixels from the image you just rendered into a plain array:

unsigned char buffer[width*(height+30)*4];
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

By the way, notice the total awesomeness of the array declaration with non-constant variables. Thank you C99 (and gcc). Of course, that might not be the best thing to do with large images, since you might overflow the stack, but that’s another issue.

Anyway, once you have those pixels, you need to go through the slightly convoluted CGContext route again to put them in a format that can be consumed directly by UIImage:

int imageSize = width * height * 4;
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, imageSize, NULL);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width, height, 8, 32, width*4, colorSpace,
                            kCGBitmapByteOrderDefault, ref, NULL, true, kCGRenderingIntentDefault);
uint32_t* pixels = (uint32_t *)malloc(imageSize);
CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width*4, CGImageGetColorSpace(iref), 
                            kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big);
CGContextTranslateCTM(context, 0.0, height);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextDrawImage(context, CGRectMake(0.0, 0.0, width, height), iref);    
CGImageRef outputRef = CGBitmapContextCreateImage(context);
UIImage* image = [[UIImage alloc] initWithCGImage:outputRef];
// Clean up all the intermediate objects
CGImageRelease(outputRef);
CGContextRelease(context);
CGImageRelease(iref);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(ref);
free(pixels);

Again, we do the same flip of the y axis to avoid having inverted images.

Here the trickiest part was getting the color spaces right. Apparently, even though it looks very flexible, the set of combinations actually supported by CGBitmapContextCreate is pretty limited. I kept getting errors because I was passing an alpha channel combination it didn’t like.

At this point, you’ll have the OpenGL image loaded into a UIImage, and you can do anything you want with it: slap it on a button, save it to disk, or whatever strikes your fancy.
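For instance, both of those are one-liners with standard UIKit calls (myButton here is a hypothetical UIButton):

[myButton setImage:image forState:UIControlStateNormal];   // slap it on a button
UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);      // or save it to the photo album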

Using Multiple OpenGL Views And UIKit

The iPhone includes OpenGL ES for graphics rendering (thank you Apple for not coming up with a custom API!). Specifically, it uses OpenGL ES 1.1 plus a few extensions. It’s all very familiar and standard, except for the actual setup, which requires some integration with the iPhone UI.

The now-gone CrashLander sample, in spite of some atrocious code, was the best example of how to get a simple OpenGL app running on the iPhone. It covered the basics: creating an OpenGL frame buffer, loading some textures, drawing some polys, and presenting the result to the screen. It was very useful because it was small and concise, and it made for a great starting point for any OpenGL-based app.

Unfortunately, because it was so basic, it didn’t show how OpenGL can play with the rest of the UIKit user interface. Instead, it took the approach of many games: taking control of the full screen and treating it as a single, exclusive resource. That’s fine as long as you’re making those kinds of games, but I found myself having to mix OpenGL and UIKit quite a bit in my current project. Eventually, I even had to use multiple OpenGL views in different parts of the app.

Single OpenGL View

To render with OpenGL into a view, you first have to override the class method +layerClass so the view creates the right type of layer. Specifically, we need a CAEAGLLayer:

+ (Class)layerClass
{
    return [CAEAGLLayer class];
}

You can create a view of that type in code, or you can lay it out in Interface Builder. I actually like Interface Builder quite a bit, so that’s how I end up creating all of mine.

Then, you need to create an OpenGL context:

    m_eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
    [EAGLContext setCurrentContext:m_eaglContext];

Finally, you need to create a valid frame buffer object from the CAEAGLLayer of the view you just created:

    glGenFramebuffersOES(1, &buffer.m_frameBufferHandle);
    glGenRenderbuffersOES(1, &buffer.m_colorBufferHandle);
    glGenRenderbuffersOES(1, &buffer.m_depthBufferHandle);

    glBindRenderbufferOES(GL_RENDERBUFFER_OES, buffer.m_colorBufferHandle);
    [m_eaglContext renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:drawable];   // 'drawable' is the view's CAEAGLLayer
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &buffer.m_width);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &buffer.m_height);

    glBindRenderbufferOES(GL_RENDERBUFFER_OES, buffer.m_depthBufferHandle);
    glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES, buffer.m_width, buffer.m_height);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, buffer.m_colorBufferHandle);

    glBindFramebufferOES(GL_FRAMEBUFFER_OES, buffer.m_frameBufferHandle);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, buffer.m_colorBufferHandle); 
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, buffer.m_depthBufferHandle);

It’s all pretty standard OpenGL frame buffer stuff. The renderbufferStorage:fromDrawable: method is the one that actually allocates the buffer storage for that particular view.

After that, you’re home free and you can use OpenGL the way you’ve always done. The only difference is that to present, you call this method instead:

    [m_eaglContext presentRenderbuffer:GL_RENDERBUFFER_OES];

Done.
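To put it all in context, a single frame ends up looking roughly like this (a sketch; drawScene is a hypothetical stand-in for your usual rendering code):

    glBindFramebufferOES(GL_FRAMEBUFFER_OES, buffer.m_frameBufferHandle);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawScene();    // all the usual OpenGL drawing goes here

    // The color renderbuffer should be bound when presenting
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, buffer.m_colorBufferHandle);
    [m_eaglContext presentRenderbuffer:GL_RENDERBUFFER_OES];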

Integrating OpenGL and UIKit

After you have the basic OpenGL app set up, you can treat it as a full-screen, exclusive resource and do all your output through it. That works fine for a lot of games, but plenty of games and apps benefit from using some UIKit functionality. After all, UIKit is a great UI API, and it seems a shame to reinvent it all if it’s not necessary.

The good news is that OpenGL rendering happens in a view, so you can do everything you can do with other views: animate them, do fancy transitions, add them to nav bars or tab bars, etc. The bad news is that, unless you’re careful, performance will suffer.

Apple has some best practices on what to do with OpenGL views, but it comes down to avoiding transforms on them (do those transforms in OpenGL instead), and avoiding putting other UIKit elements on top of them, especially transparent ones. That said, I seem to be able to get away with small buttons and such without affecting the frame rate, so that’s always a plus.

With that in mind, you can now make your application transition between the OpenGL view and other views. Maybe you have a table view with settings, some sort of UIKit dialog box, or a high-score table. You can safely do all of that with UIKit (and add your own graphics and custom rendering if you want to give it a totally different look).

One thing to watch out for: whenever your OpenGL view isn’t visible, make sure to stop doing frame updates (both simulation and rendering). Otherwise, the rest of the UI is going to be very unresponsive (and you might not find that out until you run it on the device, because the simulator is very fast). The UIViewController methods viewWillAppear and viewDidDisappear are perfect for finding out when to start and stop frame updates.
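A minimal sketch of what that looks like in the view controller (startAnimation and stopAnimation are hypothetical helpers that start and stop your frame update timer):

- (void)viewWillAppear:(BOOL)animated
{
    [super viewWillAppear:animated];
    [self startAnimation];   // resume simulation and rendering
}

- (void)viewDidDisappear:(BOOL)animated
{
    [super viewDidDisappear:animated];
    [self stopAnimation];    // stop all frame updates while hidden
}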

Multiple Views

Things get more complicated when you want to have multiple views with OpenGL rendering. I don’t mean multiple viewports on the same screen, but multiple views in the sense of multiple UIViews. My current project, for example, has different views connected with a UITabBarController. Some of them are custom UIViews and some of them are OpenGL views.

My first thought was to use a different OpenGL context for each view, but that would have complicated things because I wanted to share the same textures and other resources between them. I know there’s a sharegroup option for exactly that situation, but it was definitely not the way to go.

The best solution I found was to use multiple frame buffers, binding each of them to the correct OpenGL view. That way, when I switch views, I switch frame buffers and everything works correctly.

In the -viewDidLoad method of each view controller with an OpenGL view, I call the function listed above to create a new frame buffer bound to that view, which returns an index. Then, in the -viewWillAppear method, I bind the frame buffer corresponding to that index:

    const FrameBuffer& buffer = m_frameBuffers[bufferIndex];
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, buffer.m_frameBufferHandle);   
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, buffer.m_colorBufferHandle);
    glViewport(0, 0, buffer.m_width, buffer.m_height);   // cover the whole view

That’s it. Now you can switch between different OpenGL views and UIKit views without any problems. Just make sure to only render the visible views! That can be trickier than it sounds, because the viewWillAppear and viewDidDisappear methods only get called automatically for views that were laid out in Interface Builder. If you added them by hand, you need to call those methods yourself (why??!?!). So keep tabs on that, or everything will grind to a halt.
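If you’re adding views by hand, the manual bookkeeping might look something like this (oldController, newController, and containerView are hypothetical names):

[oldController viewWillDisappear:NO];
[oldController.view removeFromSuperview];
[oldController viewDidDisappear:NO];

[newController viewWillAppear:NO];
[containerView addSubview:newController.view];
[newController viewDidAppear:NO];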

And To Wrap It Up…

You thought I had forgotten about my promise to slowly unveil art from my current project, huh? Fear not, here comes the next teaser. This is an actual screenshot taken a minute ago. Incidentally, this is not one of the views using OpenGL. We’ll have to save those for another day 🙂

[Screenshot: teaser image from the current project]