Munch's Oddysee was a wild, painful birth; we were shipping for the Xbox launch and crunching like mad at the end. We had lots of problems to deal with, like trying to rip as much of the fucking awful NetImmerse engine out as possible to get our frame rate up (literally 6 months before ship we had areas of the game running at 1 fps) (scene graphs FTL). And then there were the DMusic bugs. Generally I found the Xbox tech support guys great to deal with - the graphics and general-functionality guys were really awesome, really helpful, etc. - and eventually we trapped the DMusic exception in the kernel debugger and it got fixed.
Anyhoo... in the rush to ship and compromise, one of the things we left out was the ability to clean up our memory use. We just leaked and fragmented and couldn't release stuff right, so we could load up a level, but we couldn't do a level transition. Horrible problem you have to fix, right? Nah, we just rebooted the Xbox. When you do a level transition in Munch, you might notice the loading screen pops up and sits totally still for a few seconds - and the screen flashes briefly - before it starts moving. That's your Xbox rebooting, giving us a fresh memory slate to play with. (shortly after launch the Xbox guys forbade developers from using this trick, but quite a few games in the early days shipped with this embarrassing delay in their level load).
For Stranger's Wrath we used my "Fixed Restoring" Small Allocator, which is a standard kind of page-based allocator. We were somewhat crazy and used a hefty number of allocations; our levels were totally data driven and objects could be different types and sizes depending on designer prefs, so we didn't want to lock into a memory layout. The different levels in the game generally all get near 64 MB, but they use that memory very differently. We wanted to be able to use things like linked lists of little nodes and STL hashes and not worry about our allocator. So the Small Allocator provides a small block allocator with (near) zero size overhead (in the limit of a large # of allocations). That is, there are pages of some fixed size, say 16 KB, and each page is assigned to one size of allocation. Allocations of that size come out of that page just by incrementing a pointer, so they're very fast, and there's zero header size (there is some size overhead because a page isn't always fully used).
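Roughly, the core of it looks something like this (a minimal sketch, not the shipped code - the names and page header layout are mine, and freeing / "restoring" pages is omitted) :

#include <cstddef>
#include <cstdint>
#include <cstdlib>

static const size_t c_pageSize = 16 * 1024; // each page serves exactly one allocation size

struct Page
{
    Page *  next;     // pages of the same size class are chained
    size_t  itemSize; // the one allocation size this page serves
    size_t  used;     // bytes handed out so far
    // item memory follows the header, up to c_pageSize total
};

struct SizeClass
{
    size_t itemSize;
    Page * current;  // page we're currently bumping out of
};

static void * AllocSmall(SizeClass & sc)
{
    Page * p = sc.current;
    size_t avail = c_pageSize - sizeof(Page);
    if ( p == NULL || p->used + sc.itemSize > avail )
    {
        // current page is full (or absent) : grab a fresh page for this size
        p = (Page *) malloc(c_pageSize);
        p->next = sc.current;
        p->itemSize = sc.itemSize;
        p->used = 0;
        sc.current = p;
    }
    // zero per-allocation header : just bump a pointer within the page
    void * ret = (uint8_t *)(p + 1) + p->used;
    p->used += sc.itemSize;
    return ret;
}

The fast path is just a compare and a pointer bump; the only waste is the unused tail of each page, plus the size rounding - which is where the trouble below comes from.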
That was all good, except that near the end, when all the content started coming together, I realized that some of our levels didn't quite fit no matter how hard we squeezed - after hard crunching they were around 65 MB and we needed to get down to 63 or so - and also the Fixed Restoring Allocator was wasting some space due to the page granularity. I was assigning pages to each size of allocation, rounding each requested size up to the next multiple of 8, 16, or 32. Pages would be given out as needed for each size bucket, so if a size bucket was never used, pages for that size would never be allocated.
This kind of scheme is pretty standard, but it can have a lot of waste. Say you allocate a lot of 17-byte items - each one gets rounded up to 24 (or 32), and you're wasting 7 bytes per item. Another case: you allocate a 201-byte item - but only once in the entire game! You don't need to give it a whole page; just let it allocate from the 256-byte item page.
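Just to put numbers on that waste (the figures here are hypothetical, but the arithmetic is the point) :

#include <cstdio>

int main()
{
    const int pageSize   = 16 * 1024;
    const int itemSize   = 17;     // what the game asks for
    const int bucketSize = 24;     // what it gets rounded up to
    const int count      = 100000; // how many times it's allocated

    // rounding waste : paid on every single allocation
    int roundWaste = count * (bucketSize - itemSize);        // 700,000 bytes

    // page-granularity waste : tails of pages that can't fit
    // another item (plus slack in the last, partly-filled page)
    int itemsPerPage = pageSize / bucketSize;                // 682
    int pages        = (count + itemsPerPage - 1) / itemsPerPage;
    int pageWaste    = pages * pageSize - count * bucketSize;

    printf("rounding waste : %d bytes\npage waste : %d bytes\n", roundWaste, pageWaste);
    return 0;
}

That's around 700K lost to rounding alone - real memory when you're trying to scrape from 65 MB down to 63.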
Now in a normal scenario you would just try to use a better general purpose allocator, but being a game dev near shipping you can do funny things. I ran our levels and looked at the # of allocations of each size and generated a big table. Then I just hard-coded the Fixed Restoring allocator to make pages for exactly the sizes of allocations that we do a lot of. So "Stranger's Wrath" shipped with allocations of these sizes :
static const int c_smallBlockSizes[] = { 8,16,24,32,48,64,80,96,116,128,136,156,192,224,256,272 };

(The 116, 136, 156, and 272 are our primary GameObject and renderable types.) (Yes, we could've also just switched those objects to custom pool allocators, but that would've been a much bigger code change - not something I wanted to do very close to shipping, when we're trying to be in code lockdown and get to zero bugs.)
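Mapping a request onto one of those buckets is then just "smallest shipped size that fits" (a sketch - the lookup and the fallback path here are my assumption, not the shipped code) :

#include <cstddef>

static const int c_numSmallBlockSizes = sizeof(c_smallBlockSizes)/sizeof(c_smallBlockSizes[0]);

// returns the bucket index for 'size', or -1 if it's too big for the
// small allocator and should go to the general-purpose heap instead
// (linear scan for clarity; with 16 entries a precomputed table is easy too)
static int SmallBlockIndex(size_t size)
{
    for (int i = 0; i < c_numSmallBlockSizes; i++)
        if ( size <= (size_t) c_smallBlockSizes[i] )
            return i;
    return -1; // > 272 bytes : not a small block
}

So the one-off 201-byte allocation lands in the 224 bucket and shares pages with everything else that rounds to 224, instead of claiming a bucket (and a page) of its own.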
I remember Sean doing something very similar when we were running out of memory in Terra Nova back in 1996.
Funny, we ended up rebooting between levels in MechAssault 1 as well (we had similar problems with wild dynamic allocations).
Yep, to make Terra Nova run in 8M (or was it 4M?) I made the standard malloc detect a certain set of fixed sizes and allocate them each from their own small heap.
The actual issue was really to prevent fragmentation, though; after running for a while on a level we started thrashing from not being able to load a particular enemy model, which was 64K, and we had more than 64K free but it was fragmented to hell and back.
Er, but the main part was that it wasn't all small blocks; it was a hard-coded list of "problem" sizes that was even sparser and more random than the SW one.
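Something like this kind of size routing (a sketch of the idea Sean describes - the sizes, budgets, and names here are made up, and freeing and recycling are omitted) :

#include <cstddef>
#include <cstdlib>

// a few known "problem" sizes each get their own arena so they can't
// fragment the main heap; everything else falls through to regular malloc
static const size_t c_problemSizes[]  = { 201, 1480, 65536 }; // made-up sizes
static const int    c_numProblemSizes = sizeof(c_problemSizes)/sizeof(c_problemSizes[0]);
static const size_t c_arenaBytes      = 256 * 1024;           // budget per problem size

struct Arena { unsigned char * base; size_t used; };
static Arena s_arenas[c_numProblemSizes]; // zero-initialized statics

static void * ArenaAlloc(Arena & a, size_t size)
{
    if ( a.base == NULL )
        a.base = (unsigned char *) malloc(c_arenaBytes);
    if ( a.used + size > c_arenaBytes )
        return NULL; // over budget; caller falls back to the general heap
    void * ret = a.base + a.used;
    a.used += size;
    return ret;
}

void * GameMalloc(size_t size)
{
    for (int i = 0; i < c_numProblemSizes; i++)
        if ( size == c_problemSizes[i] )
            if ( void * p = ArenaAlloc(s_arenas[i], size) )
                return p;
    return malloc(size); // everything else : general-purpose heap
}

Freeing needs the mirror-image routing (check the pointer against each arena's range before handing it to free()), and a real version would recycle freed slots within each arena; both are omitted here.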