
Virtual Memory Paging Is The Lazy Man’s Caching Scheme

One of the unintended side effects of my previous post on the horrible memory situation on the iPhone was that some people pointed out it's possible to hook up a disk storage back end to the iPhone's virtual memory system. That's quite ironic, because I think of it as compounding an already dismal situation rather than a solution or even a stopgap measure.

As far as I’m concerned, virtual memory paging is the lazy man’s caching scheme. There, I’ve said it. Now let me qualify it a bit and justify why I feel that way.

A lot of applications, and especially games, can't keep in memory every bit of data they need for their execution. There's just too much data and not enough RAM to keep all the levels, all the textures, all the characters, and all the movies. Even the thought of keeping everything in memory is ridiculous. Instead, games are architected to strike a balance between memory usage and responsive interfaces. Usually that means loading levels, or parts of levels, on demand, and keeping them in memory for as long as they might be needed. Other times it means loading different levels of detail for textures, meshes, and animations depending on the player's position. The point is, nobody knows better than the game itself what needs to be in memory, what can be unloaded, and when the best time to do it is.
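
That kind of game-driven streaming can be sketched roughly like this (a minimal illustration with names of my own invention, not any real engine's API): the level is pre-partitioned into sections, and crossing a section boundary tells the game exactly what to load and what to evict, at a moment the game chose.

```cpp
#include <set>
#include <vector>

// Illustrative sketch: a build tool or designer pre-partitions the level
// into sections, and each section lists which other sections must stay
// resident while the player is inside it.
struct Section {
    std::vector<int> neighborsNeeded;  // sections to keep loaded from here
};

class LevelStreamer {
public:
    explicit LevelStreamer(std::vector<Section> sections)
        : sections_(std::move(sections)) {}

    // Called when the player crosses into a new section. The game, not the
    // OS pager, decides what stays in memory and when eviction happens.
    void OnPlayerEntered(int section) {
        std::set<int> wanted(sections_[section].neighborsNeeded.begin(),
                             sections_[section].neighborsNeeded.end());
        wanted.insert(section);
        for (auto it = resident_.begin(); it != resident_.end();) {
            if (!wanted.count(*it))
                it = resident_.erase(it);  // unload at a time of our choosing
            else
                ++it;
        }
        for (int s : wanted)
            resident_.insert(s);           // load if not already resident
    }

    bool IsResident(int section) const { return resident_.count(section) != 0; }

private:
    std::vector<Section> sections_;
    std::set<int> resident_;
};
```

The eviction decisions here are trivial bookkeeping; the point is only that they happen at section boundaries, a moment the game knows is safe, instead of mid-frame at the pager's whim.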

A different approach would be to stop thinking of memory as a scarce resource to be managed and simply use as much of it as we need. If we ever go over the amount of available physical memory, the virtual memory system kicks in and pages memory out to disk to make room for the memory we've requested. But therein lies the problem: The virtual memory system is a lower-level system that knows absolutely nothing about our game or application. It can only make guesses about which memory is OK to evict and when it's a good time to do it. If you're unlucky, it will choose to page out memory when you need performance the most, and it might evict a page that you're about to use in a few milliseconds. If that happens, you can kiss your performance bye-bye.

Because of that, virtual memory paging will always do a much worse job than we could have done ourselves. But hey, it takes no effort or thinking on the part of the programmers, hence the “lazy man’s caching scheme” I was referring to earlier.

However, virtual memory paging is sometimes a necessary evil. The assumption running through the previous three paragraphs is that we, as programmers of the game, can always manage the physical memory so our game runs efficiently on it. That is true for certain platforms, usually with fixed hardware specs, and ones with minimal or no background processes running. Game consoles are perfect examples of this: We know how much RAM we have to start, we know how much the system needs, how much video memory there is, and we can deal with the rest ourselves. We’ll decide what to load and when to load it.

On something like a modern PC (whether it's Windows, Mac, Linux, or any other flavor), we have none of those guarantees. We don't know how much memory we're going to encounter when we run our game, and, what's even worse, we have no idea how that available memory is going to change during the execution of the program [1]. In a situation like that, we can plan for some minimum memory requirements, but we're going to need virtual memory paging to bail us out of tricky low-memory situations. It might sound like a good idea, but that's one of the main reasons why PC games are often choppy, with inconsistent frame rates (buggy drivers being the other main reason).

Finally coming back to the iPhone, how does virtual memory paging fit there? There's no doubt that it would help, because it would reduce the number of crashes due to programs unexpectedly running out of memory. But at the same time, it would cause most games and apps to be choppy and unresponsive, especially when they're first launched. Yet the iPhone is mostly a fixed-spec platform with minimal background processes [2], so we can always do better than the dumb virtual memory system. Maybe that's not a big deal if you're writing an app that interfaces with a web site and writes data to a database, but it's crucial for writing responsive games, which are near real-time apps.

On the iPhone there should be no need for virtual memory paging (not to mention that it would probably drain the battery a lot faster). Instead of solving the memory problem by throwing more and more complex systems at it, what we really need is a guaranteed amount of memory for our app to run in; we can take care of the rest ourselves. But Apple needs to take the first step and set that memory aside.

We’re waiting, Apple.

[1] We could go totally hard core and allocate the memory we need and then mark it as not swappable by the virtual memory system. That wouldn’t do any good because if other processes start requesting memory and they run out, the virtual memory system will swap out other pages (and possibly cause other processes to thrash) and kill our performance as well. So things are pretty much out of our hands as soon as virtual memory paging comes into play.
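The "mark it as not swappable" part of that footnote maps to the POSIX mlock() call. A minimal sketch, assuming a POSIX system and with function names of my own invention:

```cpp
#include <sys/mman.h>
#include <cstdlib>

// Hypothetical sketch of footnote [1]: allocate a pool and ask the kernel
// to pin it with POSIX mlock() so the pager can't evict it. mlock() can
// fail (e.g. when RLIMIT_MEMLOCK is low), so we degrade gracefully: the
// pool stays usable, it's just evictable again.
void* AllocatePinned(size_t bytes, bool* pinned) {
    void* mem = std::malloc(bytes);
    *pinned = (mem != nullptr) && (mlock(mem, bytes) == 0);
    return mem;
}

void FreePinned(void* mem, size_t bytes, bool pinned) {
    if (pinned)
        munlock(mem, bytes);  // release the pin before freeing
    std::free(mem);
}
```

As the footnote says, though, even a successful lock only protects our pages; other processes can still thrash the pager and drag our performance down with them.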

[2] There are a lot of background processes, but they should be relatively well-behaved. None of them should be pulling in massive amounts of data while the game is running.

  1. Since it was me pointing this out, I feel like I should comment. I didn’t suggest at all that the virtual memory pager was the solution for making responsive games.
    I mentioned it purely because in your previous post you stated “Because it has a fixed amount of RAM, the iPhone doesn’t provide virtual memory swapping.” So I thought I’d point out that it does in fact provide this. And for some applications it can actually be extremely useful.

  2. Hi Hendrik,
    This post wasn’t intended to bash your suggestion, or to imply you had put it forward as the perfect solution. It was more that other people saw your comment and they ran away with it and thought it was the best thing ever. And I just couldn’t hold back 🙂 Personally, I didn’t know you could add the disk backend like you suggested, and I think it’s very cool from a tech point of view. Just a horrible solution to a bad situation 🙂 So, really, thanks for bringing that up and sharing it in the first place.

  3. I disagree (a little bit). Your game doesn’t know everything that can be swapped out. You most likely can’t determine whether a page containing code can be swapped out or not. There’s plenty of code in any application that is only ever executed once. Can you swap that out as well as the OS can? 🙂
    The OS can make a pretty good guess based on how long it’s been since it was last used. Any game has quite a few pages that could be swapped out safely by the OS, and the developers have virtually no chance of identifying and exploiting this.

    Apart from that, I don’t see the problem in using it as a fallback. Of course you should do everything to make your game run in the amount of available memory (and if it starts paging more than a page or two, the game will probably be pretty much unplayable). But you seem to assume that the OS will swap out basically random data, and do it all the time. As long as there’s enough memory for everyone, why would it? I’d expect it to just not use the backing file most of the time. But having the backing file available might allow you to shut down more gracefully at least, if you start getting low memory warnings.

  4. Hi Noel,

    Nice post. That somehow reminded me of your article in Game Programming Gems 4, “The Beauty of Weak References and Null Objects”. Although that article is not directly about caching schemes, it can be thought of as a precursor, on top of which a caching scheme can be built. Ever since I read that article, I've wondered how it can be *efficiently* used as part of a caching system. Now that I've come across this post, I want to use it as an excuse to ask your opinion on the subject and clear things up.

    You see, a cache requires a replacement policy that decides which resources get discarded. By definition, an LRU policy discards the least recently used items first, and as a result it needs to keep track of what was used when. Given the design discussed in that article, I assume (and assumptions are wrong at times) that this step is implemented either by 1) time-stamping resources or 2) moving them around in an LRU list, both on each and every de-reference, and that last bit is exactly where I start questioning the efficiency of this system. Neither approach is free, and they're certainly not something you want to do on every single de-reference. All in all, I seriously hope my assumptions are incorrect and there's some obvious solution I've been neglecting, but in case there isn't, how should this issue be tackled? What approaches do you use to lower the cost? Do we really need to time-stamp the resources or move them around in an LRU list on every single de-reference? Am I missing something?

    Any help is greatly appreciated. 🙂

  5. Ash,

    Glad that you liked this article and the GPG one 🙂

    Since the game knows all about its contents, it can usually do much better than LRU. Levels can be partitioned into sections that make sense (maybe automatically or by a designer). That can be as simple as full load areas, or different, smaller areas, each one loaded at a different LOD level.

    But the point is that, having full knowledge of the data and what the player can do/see, you can make smart, high-level decisions about what data to keep and what data to evict. If you don’t, maybe you need to build a bit more structure into your level creation to allow for those type of optimizations.

  6. I also read and liked your book when it was published back in 2003, so that makes it three. 🙂

    Back to the discussion, I’m not convinced that an LRU system is not a good idea. Pardon my ignorance but the whole point of having a cache is to reduce load times by storing the resources in a faster medium (RAM) and avoiding full reloads from the slower medium (HDD, DVD, Blu-ray). If all resources in a sector are to be evicted before a new sector is reloaded, the cache would be reduced in functionality to a simple buffer. What if the resource that we have just evicted is required in the next sector?

    Now before I go any further, let me say that I’m an indie developer like you. The difference is that I’ve never been part of a team, have never worked on a shipping title and lack your vast experience. 🙂 So I have to learn the most in my limited encounters with people who know what they’re talking about. So any help is greatly appreciated.

    For the sake of discussion, consider an open-world, free-roam style game where the player is free to move around a vast terrain. The way I see it, with an LRU cache, some amount of memory is set aside and resources are loaded as they are encountered. But resources are not evicted until the cache runs out of budget, at which point the least recently used resources are evicted. The cache can also have several layers. For instance, the HDD can be used as a second-tier cache, so the least recently used resources in RAM can be manually swapped out to the HDD, the rationale being that loading preprocessed, ready-to-use data takes less time than parsing and extracting it from whatever format our resources are originally stored in. This is where the GPG article comes in handy. The additional level of indirection allows us to easily swap resources out and in without worrying about dangling pointers. The same system can also be designed around handles.

    Now everything's great and all up to this point; the only problem is that we eventually have to unload some resources, even from the second-tier cache, to make room for newly loaded ones, right? The problem is that there's a distinction between resources that are in the cache but not referenced by anything else (so they're only there to boost load times, if and only if the same resource is required in the near future) and resources that are there and are actually referenced. Unloading the first type is good. Unloading the second type is not, because we have no idea how to load them back. Consider a vertex buffer, for instance. The data inside a vertex buffer is the result of a lot of processing and is not usually available as a single file (like a texture). It's usually stored as part of a mesh, and who knows how much processing it has gone through. Maybe we combined a couple of smaller buffers into a bigger one. Maybe we rearranged its contents for better cache coherence. The point is that (unlike a texture) it's not easy to retrieve a single vertex buffer once it's unloaded. The moral of the story is that being able to make that distinction is important, and to do so we need some sort of reference counting. This is also where that article in GPG shines.

    So to sum it up, not only does the design in the GPG article allow us to swap the resources in and out of memory, but it also enables us to make the distinction between the resources that are in cache but are not referenced and those that are there and are actually used. This is great.

    But this whole system revolves around one fact: we need to be able to associate a time-stamp with those resources so we can evict them when they're old. This is where my previous post kicks in. Updating that time-stamp on every single de-reference is too expensive. So is moving the resources around in an LRU list.

    So what do you think?

    I’d be looking forward to hearing from you.

  7. Ash, I’m a big fan of simple solutions. What you’re advocating seems like a pretty good system for a very general resource caching system. If possible, I would rather keep it extremely simple. If you know how much memory you have available, and you know your level layout when you start, you don’t need to do any of that at runtime (my rule is, always precompute everything you can).

    If you’re dealing with systems with unknown amounts of memory, and user-created content that you can’t even pre-process before running, then you might need something as complex as that.

    If you have to have something like that, you probably want to use spatial information instead of time information to decide what to evict (so again, the game having knowledge of where things are wins hands down over a plain, blind caching system). You can do much faster spatial queries than temporal usage ones (probably).
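
On the per-dereference cost worry specifically, a common compromise (sketched below with illustrative names, not anyone's actual implementation) is to stamp each resource with a global frame counter on access, which is a single integer store, and only scan for the oldest stamp when the cache actually exceeds its budget:

```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <unordered_map>

// Sketch: per-frame stamping instead of moving entries in an LRU list.
// Dereferencing writes one integer; the eviction scan only runs when the
// cache goes over budget.
struct CachedResource {
    size_t bytes = 0;
    uint64_t lastUsedFrame = 0;
};

class ResourceCache {
public:
    explicit ResourceCache(size_t budget) : budget_(budget) {}

    void BeginFrame() { ++frame_; }

    // Dereference: a hash lookup plus one store, no list shuffling.
    CachedResource* Touch(const std::string& id) {
        auto it = entries_.find(id);
        if (it == entries_.end()) return nullptr;
        it->second.lastUsedFrame = frame_;
        return &it->second;
    }

    // Sketch only: assumes id is not already cached.
    void Insert(const std::string& id, size_t bytes) {
        used_ += bytes;
        entries_[id] = {bytes, frame_};
        while (used_ > budget_ && !entries_.empty())
            EvictOldest();
    }

    bool Contains(const std::string& id) const { return entries_.count(id) != 0; }

private:
    void EvictOldest() {
        auto victim = entries_.begin();
        for (auto it = entries_.begin(); it != entries_.end(); ++it)
            if (it->second.lastUsedFrame < victim->second.lastUsedFrame)
                victim = it;
        used_ -= victim->second.bytes;
        entries_.erase(victim);
    }

    std::unordered_map<std::string, CachedResource> entries_;
    size_t budget_;
    size_t used_ = 0;
    uint64_t frame_ = 0;
};
```

The linear eviction scan is the trade-off for O(1) dereferences; if it ever shows up in profiles, it can be replaced by a heap or by bucketing entries per frame, and a spatial query as suggested above can replace the stamp entirely.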
