Pursuant to that, the new idea is to make Oodle a handful of related pieces. You can use one or more of the pieces, and each is easy to plug in at the last minute.
1. Async File IO ; the new idea with this is that it's cross platform, all nice and async, does the LZ decompression on a thread, can do DVD packaging and DVD emulation, can handle the PS3/Xenon console data transfers - but it just looks like regular files to you. This is less ambitious than the old system ; it no longer directly provides things like paging data in & out, or hot-loading artist changes; you could of course still do those things but it leaves it more up to the client to do that.
The idea is that if you just write your game on the PC using lazy loose file loads, boom you pop in Oodle and hardly touch the code at all, and you automatically get your files packed up nice and tight into like an XBLA downloadable pack, or a DVD for PS3, or whatever, and it's all fast and good (there's a little sketch of what I mean right after this list). Oh, and it also integrates nicely with Bink and Miles and Granny so that you can do things like play a Bink video while loading a level, and the data streamers share the bandwidth and schedule seeks correctly.
2. Texture goodies. We'll provide the most awesome threaded JPEG decoders, and probably better custom lossy and lossless texture compressors that are specifically designed for modern games (with features like good alpha-channel support, and support for various bit depths and strange formats like X16Y16, etc.). Maybe some nice DXTC realtime encoding stuff and quality offline encoding stuff. Maybe also a whole custom texture cache thing, so you can tell Oodle to use 32 MB for textures and it does all the paging and decompression and such.
3. Threading / Async utilities. You get the threaded work manager, the thread profiler, and we'll probably do "the most awesome" multithreaded allocator. We'll try to give each of these a specific function that addresses a concrete problem people hit shipping a game. eg. a customer is near ship, their allocator is too slow and taking too much memory, and they don't fit in the 256 MB of the console. Boom, plug in Oodle and you can ship your game.
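To make the drop-in idea in (1) concrete, here's a tiny sketch. None of these OodleFile_* names are the real API; they're made-up stand-ins (backed by plain stdio here) just to show the shape: the game-side code doesn't change whether it's hitting loose files on a dev PC or a packed, compressed archive on disc.

```cpp
// Hypothetical stdio-look-alike; real backing could be a pak entry + async read + LZ decode on a thread.
#include <cstdio>
#include <cstddef>

struct OodleFile { std::FILE * fp; };   // invented handle, stubbed with stdio for this sketch

static OodleFile * OodleFile_Open(const char * name)
{
    std::FILE * fp = std::fopen(name, "rb");     // stand-in : loose file on a dev PC
    if ( ! fp ) return nullptr;
    return new OodleFile{ fp };
}

static size_t OodleFile_Read(OodleFile * f, void * dst, size_t bytes)
{
    return std::fread(dst, 1, bytes, f->fp);     // stand-in : real version reads + decompresses async
}

static void OodleFile_Close(OodleFile * f)
{
    std::fclose(f->fp);
    delete f;
}

// Game code written against this never changes when the backing store does :
static bool LoadWholeFile(const char * name, void * dst, size_t bytes)
{
    OodleFile * f = OodleFile_Open(name);
    if ( ! f ) return false;
    size_t got = OodleFile_Read(f, dst, bytes);
    OodleFile_Close(f);
    return got == bytes;
}
```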
Anyway, that's just the idea, it remains to be worked out a bit. One thing I'm definitely doing is the low level IO is now going through a page cache.
As I'm writing it I've been realizing that the page cache is a super awesome paradigm for games in general these days. Basically the page cache is just like an OS virtual memory manager. There's a certain limited amount of contiguous physical memory. You divide it into pages, and dynamically assign the pages to the content that's wanted at the time.
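The data structure itself is nothing fancy; here's a rough sketch (my own illustration, not actual Oodle code, and the 64 KB page size is just an assumed number): one contiguous allocation carved into fixed-size pages, each page tagged with who owns it right now and how badly they want to keep it.

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

static const size_t kPageSize = 64 * 1024;   // assumption : 64 KB pages

struct Page
{
    void *   mem;        // points into the one contiguous pool
    uint32_t owner;      // id of the file / asset currently using this page, 0 = free
    int      priority;   // how badly the current owner wants to keep it
};

struct PageCache
{
    void *            pool;    // the one contiguous "physical memory" block
    std::vector<Page> pages;

    explicit PageCache(size_t totalBytes)
    {
        pool = std::malloc(totalBytes);
        size_t count = totalBytes / kPageSize;
        pages.resize(count);
        for (size_t i = 0; i < count; i++)
            pages[i] = Page{ (char *)pool + i * kPageSize, 0, 0 };
    }
    ~PageCache() { std::free(pool); }
};
```

Then you make one of these with, say, 32 MB, and everything that wants memory pulls its pages from it.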
Now, the page cache can be used just for file IO, and then it's a lot like memory mapped files. The client can use the stdio look-alike interface, and if they do, the page cache just automatically does the "right thing", prefetching pages ahead as they read sequentially through a file, etc.
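The read path looks something like this (again an illustrative sketch with stand-in functions, not the real internals): touch the pages covering the range being read, then speculatively queue the next page so a sequential reader never stalls.

```cpp
#include <cstdint>

static const uint64_t kPageSize = 64 * 1024;   // assumed page size, as before

// Stand-ins for the cache internals; the real thing would be doing async disk reads.
static void EnsurePageResident(uint64_t pageIndex) { (void)pageIndex; }
static void QueuePrefetch(uint64_t pageIndex)      { (void)pageIndex; }

// Read path behind the stdio-look-alike : caller just sees a normal read.
static void CacheRead(uint64_t filePos, void * dst, uint64_t bytes)
{
    if (bytes == 0) return;
    uint64_t first = filePos / kPageSize;
    uint64_t last  = (filePos + bytes - 1) / kPageSize;
    for (uint64_t p = first; p <= last; p++)
        EnsurePageResident(p);     // may block on the disk the first time through
    QueuePrefetch(last + 1);       // speculative, so a sequential reader stays ahead of the disk
    (void)dst;                     // copying out of the resident pages would go here
}
```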
But since we're doing this all custom and we're in a video game environment where people are willing to get more manual and closer to the metal, we can do a lot more. For example, you can tell me whether a file should sequential prefetch or not. You can manually prefetch at specific spots in the file that you expect to jump to. You can prefetch whole other files that you haven't opened yet. And perhaps most importantly, you can assign priorities to the various pages, so that when you are in a low memory situation (as you always are in games), the pages will be used for the most important thing. For example you can prefetch the whole next file that you expect to need, but you would do that at very low priority so it only uses pages if they aren't needed for anything more urgent.
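As an API that might look something like this; every name and knob here is invented for illustration, it's just the kind of manual control I mean.

```cpp
#include <cstdint>

enum Priority { Prio_Idle = 0, Prio_Low, Prio_Normal, Prio_High };

struct FileHandle;   // opaque

// Stand-in declarations, not real Oodle entry points :
void SetSequentialPrefetch(FileHandle * f, bool enabled);                    // per-file on/off
void PrefetchRange(FileHandle * f, uint64_t pos, uint64_t len, Priority p);  // a spot you expect to jump to
void PrefetchFile(const char * name, Priority p);                            // a file you haven't even opened yet

void ExampleLoadTuning(FileHandle * current)
{
    SetSequentialPrefetch(current, true);                     // we read this one front to back
    PrefetchRange(current, 4 << 20, 1 << 20, Prio_Normal);    // we know we'll seek to +4 MB soon
    PrefetchFile("levels/level2.pak", Prio_Idle);             // whole next level, only if pages are spare
}
```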
The next awesome thing I realized about the page cache is that - hey, that can just be the base for the whole game memory allocator. So maybe you give 32 MB to the page cache. That can be used for file IO, or video playback - or maybe you want to use it to pop up your in game "pause menu" GUI. Or say you want to stream in a compressed file - you map pages to read in the packed bits, and then you map pages as you decompress; as you decompress you toss the pages with the packed bits and leave the uncompressed pages in the cache.
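A sketch of that streaming-decompress flow, assuming hypothetical AcquirePage/ReleasePage calls and stub stand-ins for the disk read and the LZ decode: packed bits flow through transient pages, uncompressed output stays resident, and the packed pages go back to the pool as soon as each chunk is decoded.

```cpp
#include <cstddef>
#include <cstring>

static const size_t kPageSize = 64 * 1024;

// Stand-ins (the Acquire/Release names are invented) :
static void * AcquirePage(int priority)  { (void)priority; return new char[kPageSize]; }
static void   ReleasePage(void * page)   { delete[] (char *)page; }
static size_t ReadPackedChunk(void * dst, size_t maxBytes)            { (void)dst; (void)maxBytes; return 0; }    // would be a disk read
static size_t DecompressChunk(const void * src, size_t n, void * dst) { std::memcpy(dst, src, n); return n; }     // would be the LZ decoder

static int StreamDecompressFile(void ** outPages, int maxOutPages)
{
    int outCount = 0;
    for (;;)
    {
        void * packed = AcquirePage(/*priority*/ 1);
        size_t packedLen = ReadPackedChunk(packed, kPageSize);
        if (packedLen == 0) { ReleasePage(packed); break; }   // end of file

        void * out = AcquirePage(/*priority*/ 2);              // this is the data we actually want to keep
        DecompressChunk(packed, packedLen, out);
        ReleasePage(packed);                                   // the packed bits are garbage now

        if (outCount < maxOutPages) outPages[outCount++] = out;
        else ReleasePage(out);
    }
    return outCount;
}
```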
Say you have some threaded JPEG decompress or something - it grabs a page to decompress to. The other cool thing is because all this is manual - it can grab the page with a certain priority level. If no page is available, it can do various things depending on game knowledge to decide how to respond to that. For example if it's just decompressing a high res version of a texture you already have in low res, it can just abort the decompress. If it's a low priority prefetch, it can just go to sleep and wait for a page to become available. If it's high priority, like you need this texture right now, that can cause other pages that are less important to get dropped.
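In code the fallback policy might look like this; TryAcquirePage and EvictLowerPriorityPage are invented names, and the point is exactly that the caller has the game knowledge, so the policy lives at the call site.

```cpp
#include <chrono>
#include <thread>

enum PagePriority { Page_Idle = 0, Page_Low, Page_Normal, Page_Critical };

// Stand-in declarations; a real cache would implement these :
void * TryAcquirePage(PagePriority p);           // nullptr if nothing is free at this priority
bool   EvictLowerPriorityPage(PagePriority p);   // push out something less important, if any exists

void * GetPageForDecode(PagePriority p, bool workIsOptional)
{
    for (;;)
    {
        if (void * page = TryAcquirePage(p))
            return page;

        if (workIsOptional)
            return nullptr;                       // e.g. high-res mip of a texture we already have : just abort

        if (p == Page_Critical && EvictLowerPriorityPage(p))
            continue;                             // needed right now : push someone less important out

        // low priority prefetch : just wait for a page to come free
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
}
```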
Pages could also cache procedurally generated textures, data from a network, optional sounds, etc. etc.
I think about it this way - in the future of multicore and async and threading and whatnot, you might have 100 jobs to run per frame. Some of those jobs need large temp work memory. You can't just statically assign memory to various purposes; you want it to go to whichever jobs need it at the moment.
5 comments:
I wonder if you could end up with a priority inversion problem due to an interaction between your job scheduler and page priorities. If, for example, you have a low priority job occupying pages with high priority, it could end up stalling a high priority job that is waiting for low priority pages.
Yes, completely, I worry a lot about getting into problems like that, as I'm basically writing more and more of my own OS.
I basically have my own thread scheduler, my own mutexing stuff, and now my own virtual memory manager.
There's a lot of complicated and subtle stuff that goes into those things to avoid priority inversion and thundering herds, and so on.
Part of the risk of exposing more low level control to the game is that it makes it more possible for the game to fuck things up in that way.
Windows for example has lots of very clever stuff to prevent broken code from hurting the system too bad. I know people like Jeff and Casey complain about how the Windows Scheduler is too heavy handed about not doing exactly what you tell it to do, but personally I think it does a pretty amazing job of mostly doing what you ask for but also preventing fuckups.
(for example it does things like priority boost low-priority threads that are time-starved if the high-priority threads are not making progress because they are spinning waiting for the low priority thread to do something that it never gets time to do)
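A toy illustration of that kind of anti-starvation boost (this is not Windows' actual algorithm, and the threshold and boost amount are made-up numbers; it just shows the shape of the idea):

```cpp
#include <cstdint>
#include <vector>

struct ThreadInfo
{
    int      basePriority;    // what the game asked for
    uint64_t lastRanTick;     // last time this thread actually got the CPU
};

// Pick the next ready thread; a thread that hasn't run in a while gets a temporary bump
// so a spinning high-priority thread can't starve it forever.
int PickNextThread(const std::vector<ThreadInfo> & threads, uint64_t nowTick)
{
    const uint64_t starvationTicks = 100;         // made-up tuning number
    int best = -1, bestPrio = -1;
    for (int i = 0; i < (int)threads.size(); i++)
    {
        int prio = threads[i].basePriority;
        if (nowTick - threads[i].lastRanTick > starvationTicks)
            prio += 10;                           // temporary boost so the starved thread gets a turn
        if (prio > bestPrio) { bestPrio = prio; best = i; }
    }
    return best;
}
```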
pardon my ignorance, but what's Oodle?
Oodle's my new middleware product for RAD.
If you're a game developer with requests or comments on what you'd like to see in Oodle, please email me.
no i'm not a game developer, just a follower who landed here from molly rocket forums :)