Last Objective-C Annoyance: Fixed!

I’ve gone on and on before about how much I like Objective-C and especially how well it plays with C++. Mixing the two is a pleasure, and it’s often hard to remember where one stops and the other starts. So much so that sometimes I catch myself trying to call C++ member functions like this: [self myFunction:param]. Oops.

It is true that Objective-C can be a bit more verbose than plain C++, but it more than makes up for it with named parameters (which increase readability a huge amount), and not having to duplicate many things between header and implementation files.

Yet, even though I like Objective-C so much, there was one last, really annoying feature that was preventing perfect interop between C++ and Objective-C. Something so simple, yet so fundamental, that it really got in the way. Something that wouldn’t allow me to write something as simple as this:

@interface MyView : UIView
{
    // ...
    SingleTouchEvent m_singleTouchEvent;
}

SingleTouchEvent is a custom C++ class. There’s nothing wrong with that, right? There shouldn’t be. But if you try to compile it, you’ll get these warnings:

warning: type `SingleTouchEvent' has a user-defined constructor
warning: C++ constructors and destructors will not be invoked for Objective-C fields

Some of you might be saying, no big deal, you can dynamically allocate objects and have their constructors called as usual. That’s true, but I hate allocating objects on the heap just because of something like this. My preference is always to make them member variables if I can. It’s faster, more efficient, less error prone, and easier to read. Needless to say, this was bugging the hell out of me.
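For reference, that heap workaround looks something like this (a sketch; the init/dealloc pairing is the standard manual-retain-release pattern):

@interface MyView : UIView
{
    SingleTouchEvent* m_singleTouchEvent;   // a pointer, just to get the constructor called
}
@end

@implementation MyView
- (id)initWithFrame:(CGRect)frame
{
    if ((self = [super initWithFrame:frame]))
        m_singleTouchEvent = new SingleTouchEvent();   // constructor runs here
    return self;
}

- (void)dealloc
{
    delete m_singleTouchEvent;   // and the destructor has to be invoked by hand
    [super dealloc];
}
@end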

I looked all over the build options in case there was some setting I could toggle to enable C++ constructors to be called, but no luck. I did a bit more research, and lo and behold, it turns out there is a custom setting you can pass directly to gcc to do exactly that! It’s nowhere in the Xcode settings GUI, so you have to enter it by hand. Go to Project | Info, and in the Custom Settings section, enter the following: GCC_OBJC_CALL_CXX_CDTORS = YES

[Screenshot: the custom GCC_OBJC_CALL_CXX_CDTORS setting entered in Xcode]

With that setting enabled, Objective-C and C++ now play together better than ever!

Without any barriers to writing new code in Objective-C, I expect it will slowly (or maybe not so slowly) replace C++ as my main language of choice for future projects.

Remixing OpenGL and UIKit

Yesterday I wrote about OpenGL views, and how they can be integrated into a complex UI with multiple view controllers. There’s another interesting aspect of integrating OpenGL and UIKit, and that’s moving images back and forth between the two systems. In both directions, the key enabler is CGContext, the Core Graphics bitmap context.

From UIKit to OpenGL

This is definitely the most common way of sharing image data. Loading an image from disk in almost any format is extremely easy with Cocoa Touch. In fact, it couldn’t get any easier:

UIImage* image = [UIImage imageNamed:filename];

After being used to image libraries like DevIL, it’s quite a relief to be able to load a file with a single line of code and no chance of screwing anything up. So it makes a lot of sense to use this as a starting point for loading OpenGL textures. So far we have a UIImage. How do we get from there to a texture?

All we have to do is create a CGContext with the appropriate parameters, and draw the image onto it:

byte* textureData = (byte *)malloc(m_width * m_height * 4);
CGContextRef textureContext = CGBitmapContextCreate(textureData, m_width, m_height, 8, m_width * 4,
            CGImageGetColorSpace(image.CGImage), kCGImageAlphaPremultipliedLast);
CGContextDrawImage(textureContext, CGRectMake(0.0, 0.0, (CGFloat)m_width, (CGFloat)m_height), image.CGImage);
CGContextRelease(textureContext);

At that point, we’ll have the image data, fully uncompressed in RGBA format, in textureData. Creating an OpenGL texture from it is done the usual way:

glGenTextures(1, &m_handle);
glBindTexture(GL_TEXTURE_2D, m_handle);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, m_width, m_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
free(textureData);   // glTexImage2D makes its own copy, so the buffer can go

And we’re done.

A word of advice: This is a quick way to load textures and get started, but it’s not the ideal way I would recommend for a shipping product. This code is doing a lot of work behind the scenes: loading the file, decompressing the image into an RGBA array, allocating memory, copying it over to OpenGL, and, if you set the option, generating mipmaps. All of this at load time. Ouch! If you have more than a handful of textures, that’s going to be a pretty noticeable delay while loading.

Instead, it would be much more efficient to perform all that work offline and prepare the image in the final format you want to use with OpenGL. Then you can load that data into memory and call glTexImage2D on it directly. Not as good as having direct access to video memory and loading it there directly, but that’s as good as it’s going to get on the iPhone. Fortunately, Apple provides a command-line tool called texturetool that does exactly that, including generating mipmaps.
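For illustration, loading such a pre-baked file might look something like this (a sketch: texture.raw, its fixed RGBA layout, and the known dimensions are all assumptions, ignoring compressed PVRTC output for simplicity, with error checking omitted):

// Hypothetical loader for raw RGBA pixels prepared offline: no decoding,
// no CGContext work at load time, just a straight read and upload.
unsigned char* textureData = (unsigned char *)malloc(m_width * m_height * 4);
FILE* file = fopen("texture.raw", "rb");
fread(textureData, 1, m_width * m_height * 4, file);
fclose(file);

glGenTextures(1, &m_handle);
glBindTexture(GL_TEXTURE_2D, m_handle);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, m_width, m_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
free(textureData);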

Beyond loading textures, another use for moving image data from UIKit to OpenGL is bringing UIKit’s beautiful, full-featured font rendering into OpenGL applications. To do that, we render a string into a CGContext:

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
int sizeInBytes = height * width;
void* data = malloc(sizeInBytes);
memset(data, 0, sizeInBytes);
CGContextRef context = CGBitmapContextCreate(data, width, height, 8, width, colorSpace, kCGImageAlphaNone);
CGColorSpaceRelease(colorSpace);
CGContextSetGrayFillColor(context, grayColor, 1.0);
// Flip the y axis so the text comes out right-side up for OpenGL
CGContextTranslateCTM(context, 0.0, height);
CGContextScaleCTM(context, 1.0, -1.0);
UIGraphicsPushContext(context);
    [txt drawInRect:CGRectMake(destRect.left, destRect.bottom, destRect.Width(), destRect.Height()) withFont:font
                                lineBreakMode:UILineBreakModeWordWrap alignment:UITextAlignmentLeft];
UIGraphicsPopContext();
CGContextRelease(context);

There are a couple of things that are a bit different about this. First of all, notice that we’re using a gray color space. That’s because we’re rendering the text into a grayscale, alpha-only texture; a full RGBA texture would be a waste. Then there’s all the transform stuff thrown in the middle. That’s because the coordinate system in OpenGL is flipped with respect to UIKit, so we need to invert the y axis.

Finally, we can create a new texture from that data, or update an existing texture:

glBindTexture(GL_TEXTURE_2D, m_handle);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, m_width, m_height, GL_ALPHA, GL_UNSIGNED_BYTE, data);
free(data);   // the texture has its own copy of the pixels now
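For the glTexSubImage2D update with GL_ALPHA to work, the texture has to have been created as an alpha-only texture in the first place, along these lines:

glGenTextures(1, &m_handle);
glBindTexture(GL_TEXTURE_2D, m_handle);
// Allocate alpha-only storage; passing NULL leaves the contents undefined
// until the glTexSubImage2D update fills them in.
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, m_width, m_height, 0, GL_ALPHA, GL_UNSIGNED_BYTE, NULL);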

From OpenGL to UIKit

This is definitely the less common direction for transferring image data. You would use it when saving something rendered with OpenGL back to disk (like game screenshots), or when using OpenGL-rendered images in UIKit user interface elements, like I’m doing in my project.

The process is very similar to the one we just went over, but backwards, with a CGContext as the middleman.

We start by rendering whatever image we want. You can do this in the back buffer or in a different render target; I don’t know that it makes a difference in performance either way (these are not fast operations, so don’t try to do them every frame!). Then you capture the pixels from the image you just rendered into a plain array:

unsigned char buffer[width * (height + 30) * 4];
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

By the way, notice the total awesomeness of the array declaration with non-constant variables. Thank you C99 (and gcc). Of course, that might not be the best thing to do with large images, since you might overflow the stack, but that’s another issue.
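If the stack is a concern, the heap version is only slightly less pretty:

unsigned char* buffer = (unsigned char *)malloc(width * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// ... build the UIImage as below, then free(buffer) once nothing references it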

Anyway, once you have those pixels, you need to go through the slightly convoluted CGContext dance again to put them in a format that UIImage can consume directly:

CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, width * height * 4, NULL);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorSpace,
                            kCGBitmapByteOrderDefault, provider, NULL, true, kCGRenderingIntentDefault);

size_t imageSize = width * height * 4;
uint32_t* pixels = (uint32_t *)malloc(imageSize);
CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * 4, colorSpace,
                            kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big);
CGContextTranslateCTM(context, 0.0, height);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextDrawImage(context, CGRectMake(0.0, 0.0, width, height), iref);
CGImageRef outputRef = CGBitmapContextCreateImage(context);
UIImage* image = [[UIImage alloc] initWithCGImage:outputRef];

CGImageRelease(outputRef);
CGContextRelease(context);
CGImageRelease(iref);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
free(pixels);

Again, we do the same flip of the y axis to avoid having inverted images.

Here the trickiest part was getting the color spaces correct. Apparently, even though it looks very flexible, the actual combinations supported in CGBitmapContextCreate are pretty limited. I kept getting errors because I kept passing an alpha channel combination that it didn’t like.

At this point, you’ll have the OpenGL image loaded in a UIImage, and you can do anything you want with it: slap it on a button, save it to disk, or anything else that strikes your fancy.

Using Multiple OpenGL Views And UIKit

The iPhone includes OpenGL ES for graphics rendering (thank you Apple for not coming up with a custom API!). Specifically, it uses OpenGL ES 1.1 plus a few extensions. It’s all very familiar and standard, except for the actual setup, which requires some integration with the iPhone UI.

The now-gone CrashLander sample, in spite of some atrocious code, was the best example of how to get a simple OpenGL app running on the iPhone. It covered the basics: creating an OpenGL frame buffer, loading some textures, drawing some polys, and presenting the result to the screen. It was very useful because it was small and concise, and it made for a great starting point for any OpenGL-based app.

Unfortunately, because it was so basic, it didn’t show how OpenGL can play with the rest of the UIKit user interface. Instead, it took the approach of many games of taking over the full screen and treating it as a single resource. That’s fine as long as you’re making that kind of game, but I found myself having to mix OpenGL and UIKit quite a bit in my current project. And eventually, I even had to use multiple OpenGL views in different parts of the app.

Single OpenGL View

To render with OpenGL into a view, you first have to override the class method +layerClass so the view creates the right type of layer. Specifically, we need a CAEAGLLayer:

+ (Class)layerClass
{
    return [CAEAGLLayer class];
}

You can create a view of that type in code, or you can lay it out in Interface Builder. I actually like Interface Builder quite a bit, so that’s how I end up creating all of mine.
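If you’d rather skip Interface Builder, creating one in code is the usual view allocation (EAGLView and window here are stand-ins: a hypothetical UIView subclass overriding +layerClass as above, and whatever superview you’re adding it to):

EAGLView* glView = [[EAGLView alloc] initWithFrame:[[UIScreen mainScreen] applicationFrame]];
[window addSubview:glView];   // the superview retains the view
[glView release];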

Then, you need to create an OpenGL context:

    m_eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
    [EAGLContext setCurrentContext:m_eaglContext];

Finally, you need to create a valid frame buffer object from the CAEAGLLayer from the view you just created:

    glGenFramebuffersOES(1, &buffer.m_frameBufferHandle);
    glGenRenderbuffersOES(1, &buffer.m_colorBufferHandle);
    glGenRenderbuffersOES(1, &buffer.m_depthBufferHandle);

    glBindRenderbufferOES(GL_RENDERBUFFER_OES, buffer.m_colorBufferHandle);
    [m_eaglContext renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:drawable];
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &buffer.m_width);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &buffer.m_height);

    glBindRenderbufferOES(GL_RENDERBUFFER_OES, buffer.m_depthBufferHandle);
    glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES, buffer.m_width, buffer.m_height);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, buffer.m_colorBufferHandle);

    glBindFramebufferOES(GL_FRAMEBUFFER_OES, buffer.m_frameBufferHandle);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, buffer.m_colorBufferHandle); 
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, buffer.m_depthBufferHandle);

    // Sanity check: the framebuffer should be complete at this point
    if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES)
        NSLog(@"Failed to create a complete framebuffer");

It’s all pretty standard OpenGL frame buffer stuff. The renderbufferStorage:fromDrawable: method is the one that actually allocates buffer storage for that particular view.

After that, you’re home free and you can use OpenGL the way you’ve always done. The only difference is that to present, you call this method instead:

    glBindRenderbufferOES(GL_RENDERBUFFER_OES, buffer.m_colorBufferHandle);   // present whichever renderbuffer is bound
    [m_eaglContext presentRenderbuffer:GL_RENDERBUFFER_OES];

Done.

Integrating OpenGL and UIKit

After you have the basic OpenGL app set up, you can treat it as a full-screen, exclusive resource and do all your output through it. That works fine for a lot of games, but there are a lot of games and apps that benefit from using some amount of UIKit functionality. After all, UIKit is a great UI API, and it seems like a shame to reinvent it all if it’s not necessary.

The good news is that OpenGL rendering happens in a view, so you can do everything you can do with other views: animate them, do fancy transitions, add them to nav bars or tab bars, etc. The bad news is that, unless you’re careful, performance will suffer.

Apple has some best practices for OpenGL views, but it comes down to avoiding transforms on them (do your transforms in OpenGL instead), and avoiding putting other UIKit elements on top of them, especially transparent ones. That said, I seem to be able to get away with small buttons and such without affecting the frame rate, so that’s always a plus.

With that in mind, you can now make your application transition between the OpenGL view and other views. Maybe you have a table view with settings, or some sort of UIKit dialog box, or a high-score table. You can safely do all of that with UIKit (and add your own graphics and custom rendering if you want to give it a totally different look).

One thing to watch out for: Whenever your OpenGL view isn’t visible, make sure to stop doing frame updates (for simulation and rendering). Otherwise, the rest of the UI is going to be very unresponsive (and you might not find that out until you run it on the device, because the simulator is very fast). The UIViewController viewWillAppear and viewDidDisappear methods are perfect for finding out when you should start and stop frame updates.
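A minimal sketch of that, assuming an NSTimer-driven render loop with a hypothetical renderFrame method and m_renderTimer ivar:

- (void)viewWillAppear:(BOOL)animated
{
    [super viewWillAppear:animated];
    // Only burn CPU on simulation and rendering while we're on screen
    m_renderTimer = [NSTimer scheduledTimerWithTimeInterval:(1.0 / 30.0)
                                                     target:self
                                                   selector:@selector(renderFrame)
                                                   userInfo:nil
                                                    repeats:YES];
}

- (void)viewDidDisappear:(BOOL)animated
{
    [super viewDidDisappear:animated];
    [m_renderTimer invalidate];
    m_renderTimer = nil;
}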

Multiple Views

Things get more complicated when you want multiple views with OpenGL rendering. I don’t mean multiple viewports on the same screen, but multiple views in the sense of multiple UIViews. My current project, for example, has different views connected with a UITabBarController. Some of them are custom UIViews and some of them are OpenGL views.

My first thought was to try having a different OpenGL context for each view, but that would complicate things because I wanted to share the same textures and other resources between them. I know there’s the sharegroup option for that, but it was definitely not the way to go.

The best solution I found was to use multiple frame buffers, binding each of them to the correct OpenGL view. That way, when I switch views, I switch frame buffers and everything works correctly.

In the -viewDidLoad method for each view controller with an OpenGL view, I call the function listed above to create a new frame buffer bound to that view, and it returns an index. Then, in the -viewWillAppear method, I set the frame buffer corresponding to that index:

    const FrameBuffer& buffer = m_frameBuffers[bufferIndex];
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, buffer.m_frameBufferHandle);   
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, buffer.m_colorBufferHandle);
    glViewport(0, 0, buffer.m_width, buffer.m_height);

That’s it. Now you can switch between different OpenGL views and UIKit views without any problems. Just make sure to render only the visible views! That can be trickier than it sounds, because the viewWillAppear and viewDidDisappear methods only get called automatically for views that were laid out in Interface Builder. If you added them by hand, you need to call those methods yourself (why??!?!). So keep tabs on that, or everything will grind to a halt.
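If you’re in that situation, the forwarding looks something like this (m_glViewController is a stand-in for whatever view controller you added by hand):

// Showing the view by hand: UIKit won't call these for us
[m_glViewController viewWillAppear:NO];
[self.view addSubview:m_glViewController.view];

// Hiding it again
[m_glViewController.view removeFromSuperview];
[m_glViewController viewDidDisappear:NO];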

And To Wrap It Up…

You thought I had forgotten about my promise to slowly unveil art from my current project, huh? Fear not, here comes the next teaser. This is an actual screenshot taken a minute ago. Incidentally, this is not one of the views using OpenGL. We’ll have to save those for another day 🙂

[Screenshot: teaser art from the current project]

Tea Time! 1.1 Update Gets Its Way

Last Friday, I decided to fix a bug in Tea Time!. Not so much a bug in Tea Time! itself, actually, but a bug in Apple’s UIPickerView control that showed up when subclassing it. It turns out that it was possible to scroll the picker wheel just so, and one of the rows would come up blank (see screenshot). As far as I could tell, UIPickerView was unhappy that I was pre-allocating all the views I was going to show in the picker and handing them out whenever they were requested. So I had to allocate them on the fly, or reuse the one passed in through reusingView:

- (UIView *)pickerView:(UIPickerView *)pickerView
        viewForRow:(NSInteger)row forComponent:(NSInteger)component
        reusingView:(UIView *)view;
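In practice, that means honoring the reuse pattern, something like this (a sketch; the helper methods are hypothetical):

- (UIView *)pickerView:(UIPickerView *)pickerView
        viewForRow:(NSInteger)row forComponent:(NSInteger)component
        reusingView:(UIView *)view
{
    if (view == nil)
        view = [self createViewForComponent:component];        // hypothetical factory
    [self configureView:view forRow:row component:component];  // hypothetical setup
    return view;
}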

Once I did that, the bug mysteriously went away. Ah, the joys of dealing with code you have no source for.

So being the tinkerer that I am, I couldn’t just stop there, and I spent an hour fixing something else that was annoying me to no end: When you start Tea Time! there’s always a “click” sound that the picker plays because I’m selecting the last used tea configuration during initialization.

I looked high and low for a way to fix that, but there seemed to be no way to do it. Nothing in the published SDK documentation that I could see. But digging through the header file for UIPickerView, I saw this:

@protocol UIPickerViewDataSource, UIPickerViewDelegate;

UIKIT_EXTERN_CLASS @interface UIPickerView : UIView <NSCoding>
{
 //... snip....
  @package
    struct {
        unsigned int needsLayout:1;
        unsigned int delegateRespondsToNumberOfComponentsInPickerView:1;
        unsigned int delegateRespondsToNumberOfRowsInComponent:1;
        unsigned int delegateRespondsToDidSelectRow:1;
        unsigned int delegateRespondsToViewForRow:1;
        unsigned int delegateRespondsToTitleForRow:1;
        unsigned int delegateRespondsToWidthForComponent:1;
        unsigned int delegateRespondsToRowHeightForComponent:1;
        unsigned int showsSelectionBar:1;
        unsigned int allowsMultipleSelection:1;
        unsigned int allowSelectingCells:1;
        unsigned int soundsDisabled:1;
    } _pickerViewFlags;
}

See that last flag? soundsDisabled. Sounds promising, doesn’t it? Unfortunately _pickerViewFlags has @package protection, which means I can’t get to it. Or so the theory goes.

Objective-C is surprisingly “good” about letting you get under the hood and bypass protection levels. So I tried accessing those flags with NSObject’s setValue:forKey: method, but the stubborn picker refused to give me access and threw an exception instead. Now it was really starting to annoy me. After all, I’m supposed to be working on my other project, not hacking this little app.
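The attempt amounted to something like this (a sketch; since the bit lives inside a plain struct there’s no real KVC key for it, which is presumably why it throws, whatever key you guess):

// Throws an exception instead of flipping the flag; bit fields in a
// struct ivar aren't KVC-compliant
[m_picker setValue:[NSNumber numberWithBool:YES] forKey:@"soundsDisabled"];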

I couldn’t stop though. It’s not in my nature.

Oh yeah, Objective-C is refusing to give me access to those flags? Fine. I’ll resort to the hackiest of hacks. But I will turn that damn bit off! An object is really just a block of memory, and each member variable is located at a fixed offset from the start of the object. See where this is going? Are you horrified enough? Yup. In the debugger I saw that _pickerViewFlags was exactly 16 bytes from the start of the UIPickerView. So I grabbed the pointer to the picker, moved forward to the _pickerViewFlags variable, and stomped on the soundsDisabled bit. Mwahahahaha! That will teach you, annoying picker!
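The stomping itself was something like this (purely illustrative: the 16-byte offset came from the debugger, and the bit position depends entirely on how the compiler packed the bit fields, so both numbers are guesses):

// Evil: reach into the object's memory and set the soundsDisabled bit
// Offset and bit index are compiler- and SDK-version-specific guesses
unsigned int* flags = (unsigned int *)((char *)m_picker + 16);
*flags |= (1u << 11);   // soundsDisabled is the 12th bit field declared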

Compile, run, and…. “click”. WTF!?!?!? I went into the debugger and checked that I was changing the right bits. Everything was fine. Except that, somehow, the picker was ignoring them. Defeated, I gave up in disgust.

It was that evening when I saw a tweet from @jeff_lamarche about using an undocumented function to turn off the sounds in the picker. Nice timing! He pointed me to his blog post about how to get a listing of all the undocumented SDK functions and how to access them from your programs. I couldn’t believe that there was a function to do exactly what I wanted, but Apple wasn’t exposing it. And for something so simple too!!

It was as simple as

@interface CustomPicker(hidden)
- (void)setSoundsEnabled:(BOOL)isEnabled;
@end

followed by

    [m_picker setSoundsEnabled:NO];
    [m_picker selectRow:settings.m_teaType inComponent:0 animated:NO];
    [m_picker selectRow:settings.m_teaStrength inComponent:1 animated:NO];
    [m_picker selectRow:settings.m_teaFormat inComponent:2 animated:NO];
    [m_picker setSoundsEnabled:YES];

Finally! I had defeated UIPickerView!

Now, the catch is that you’re not supposed to use undocumented SDK functions. Apparently that’s grounds for getting your app rejected by Apple and having to resubmit. But I figured I had nothing to lose, and it might be an interesting learning experience. After all, how exactly does Apple check for undocumented functionality? Do they scan the submitted executable against a set of forbidden entry points? I can’t imagine it’s a manual process. But on the other hand, there are reports of other apps getting onto the App Store using undocumented SDK features.

I figured, what the heck, it was a tiny little thing that wasn’t going to hurt anyone. Might as well try it.

I went ahead and submitted the update last Friday, and today (Tuesday) I got the official email with the approval of my new version. Not a bad turnaround time, since I imagine they don’t work weekends. It even made it to the App Store just as I was writing this. Shweet!

I’m still left wondering how Apple monitors undocumented SDK usage. Did they not bother with my app since it was so small? Is it really a manual process? (I feel bad for the person in charge of that just thinking about it.) Did they catch it, realize it was totally harmless, and let it through? Only Apple knows, I’m afraid. They really should make their process more transparent, and more consistent and fair for everybody.

But hey, I’m not complaining. No more annoying click at startup 🙂

Speaking at iPhone Conference And A Teaser

I missed the iPhone Tech Talk World Tour last fall. At the time it wasn’t a big deal because I was still soaking in all the information from the iPhone SDK docs. Now, on the other hand, I’m at the point where I’m getting into more advanced stuff, either not covered in the docs or just plain tricky. For example, I spent all day today trying to coerce my app into sending an email with an image attachment. In the end I either kind of succeeded, or I bypassed the problem, depending on how you look at it. But that’s another story for another day.

So last week, I was excited to learn there’s a new iPhone development conference called 360|iDev (not the greatest conference name; it sounds too much like an Xbox development conference). It looks like a very hands-on, for-developers-by-developers kind of conference, instead of half marketing, half development like Apple’s offerings (yes, we’re already on board, you don’t have to sell us on how cool the iPhone is. We know. We’ve bet our lives and savings on it, actually). The lineup of speakers looks really promising (including Urban Tycoon author Mike Huntington).

What excites me most, though, is meeting the other attendees. It looks like it’s not going to be a huge conference (hundreds instead of thousands or tens of thousands like GDC), and we’re all staying at the same hotel, so there should be plenty of opportunities to meet people, chat, trade tips, and compare horror stories over beers.

I actually liked the idea so much that I decided to jump in and submit a session proposal, and the organizers very kindly accepted it (which is great, because otherwise I might not have been able to afford going :-)). I’ll be talking about my experience going from a AAA console development environment to a single-person iPhone development team: the obvious differences, the not-so-obvious similarities, and how a single person can really deliver a top-notch iPhone game.

I encourage everybody to have a look at the 360|iDev site and register if you haven’t already. It’s not very expensive (especially if you buy the ticket sooner rather than later), and it’s going to be a blast. It’s in San Jose in early March, a couple of weeks before GDC, so no conflicts for game developers there. Also, if you’re interested in speaking, check out their call for speakers while they’re still accepting new submissions. I hope to see some of you there!

On a totally different note, a couple of weeks ago, Evan at Veiled Games (Up There), one of my favorite indie iPhone developers, promised to post a daily teaser image from their upcoming project. This was soon picked up by Gavin at Antair Games, who is letting us peek under the covers of what’s coming up after Sneezies.

I figured it would be fun to follow in their footsteps, but don’t hold your breath for a daily update. Maybe weekly if you’re lucky. Maybe.

So here’s this week’s teaser. Draw whatever conclusions you want from it 🙂

[Teaser image: gnome]