
03-05-12 - Oodle Handle Table

A few months ago I created a new core structure for Oodle which has worked out really well and tied the whole concept together. I'm going to make a few notes about it here.

The foundation is a lock-free weak reference handle table. One of the goals of Oodle library design was that I didn't want to force a coding structure on the client. eg. if it was me I would use a PagingResource base class with virtual OnLoad functions or whatever, and probably reference counting. But I understand many people don't want to do that, and one of the nice things about RAD products is that they fit into *your* engine structure, they don't force an engine structure on you. So initially I was trying to write the Oodle systems so that each system is very independent and doesn't force any overall concept on you. But that was a bit of a mess, and bug prone; for example, object lifetime management was left up to the client, eg. if you fired an async task off, you could delete it before it was done, which could be a crash if the SPU was still using that memory.

The weak reference handle table fixes those bugs in a relatively low overhead way (and I mean "low overhead" both in terms of CPU time and, more importantly, in terms of coding structure or intellectual burden).

Handles can be locked (with a mutex per handle (*)) which gives you access to any payload associated with them. It also prevents deletion. Simple status checks, however, don't require taking the lock. So other people can monitor your handle while you work on it without blocking.
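To make that concrete, here's a rough sketch of the pattern in illustrative C++, using std::atomic and std::mutex as stand-ins (this is not the actual Oodle internals; the real per-handle mutex is the bit-packed one described in the footnote below) :

    #include <atomic>
    #include <cstdint>
    #include <mutex>

    enum { NOT_DONE = 0, DONE = 1 };

    // a handle slot whose status can be polled lock-free, and whose
    // payload is guarded by a per-handle mutex that also excludes
    // deletion while held
    struct HandleSlot
    {
        std::atomic<uint32_t> status;   // NOT_DONE or DONE
        std::mutex            lock;     // stand-in for Oodle's bit-packed mutex
        void *                payload;
    };

    // cheap monitor : no lock taken, safe to call from any thread
    inline bool IsDone(const HandleSlot & h)
    {
        return h.status.load(std::memory_order_acquire) == DONE;
    }

    // worker : lock to get at the payload; deletion is excluded while held
    inline void WorkOn(HandleSlot & h)
    {
        std::lock_guard<std::mutex> g(h.lock);
        // ... mutate h.payload ...
        h.status.store(DONE, std::memory_order_release);
    }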

(* = not an OS mutex of course, but my own; it's one of the mutexes described in the previous series. The mutex in Oodle uses only a few bits per handle, plus more data per thread which gives you the actual OS waits; game workloads typically involve something like 1000's of objects but only 10's of threads, so it's much better to use a per-thread "waitset" type of model)
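The waitset idea, very roughly (conceptual C++ for illustration; the real thing keeps only a few bits on the handle side rather than a table, but the shape is the same : the OS-waitable object lives per thread, not per handle) :

    #include <condition_variable>
    #include <map>
    #include <mutex>

    // one of these per thread : the only OS-waitable object in the system
    struct ThreadWait
    {
        std::mutex m;
        std::condition_variable cv;
        bool fired = false;

        void Block() { std::unique_lock<std::mutex> g(m); cv.wait(g, [this]{ return fired; }); fired = false; }
        void Wake()  { std::lock_guard<std::mutex>  g(m); fired = true; cv.notify_one(); }
        void Reset() { std::lock_guard<std::mutex>  g(m); fired = false; }
    };

    // maps a handle to the threads currently waiting on it
    struct WaitSet
    {
        std::mutex lock;
        std::multimap<const void *, ThreadWait *> waiters;

        void AddWaiter(const void * h, ThreadWait * t)
        {
            std::lock_guard<std::mutex> g(lock);
            waiters.emplace(h, t);
        }
        void RemoveWaiter(const void * h, ThreadWait * t)
        {
            std::lock_guard<std::mutex> g(lock);
            auto range = waiters.equal_range(h);
            for (auto it = range.first; it != range.second; ++it)
                if (it->second == t) { waiters.erase(it); return; }
        }
        void NotifyAll(const void * h)
        {
            std::lock_guard<std::mutex> g(lock);
            auto range = waiters.equal_range(h);
            for (auto it = range.first; it != range.second; ++it)
                it->second->Wake();
            waiters.erase(range.first, range.second);
        }
    };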

One of the crucial things has been that the state of handles can basically only make one state transition, from "not done" to "done". Once they are done, they can never go back to not done; if you decide you have more followup work you create a new handle. This is as opposed to a normal Event or something which can flip states over and over. The nice thing about the single state transition is it makes waiting on that event much simpler and much easier to do race-free. There's no issue like "eventcount" where you have to do a prepare_wait / commit_wait.
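Using the HandleSlot and WaitSet sketches above (again hypothetical code, not Oodle's), the whole wait protocol is just check / register / re-check, precisely because DONE can never revert :

    // waiter side : no prepare_wait / commit_wait needed
    void WaitDone(HandleSlot & h, WaitSet & ws, ThreadWait & me)
    {
        if ( IsDone(h) )
            return;                    // fast path : one atomic load, no lock

        ws.AddWaiter(&h, &me);         // publish ourselves as a waiter
        if ( IsDone(h) )               // completion may have happened before
        {                              //  we registered; no notify is coming
            ws.RemoveWaiter(&h, &me);
            me.Reset();                // discard a possible stale wake
            return;
        }
        me.Block();                    // completer sets DONE, then notifies
    }

    // completer side : publish the status first, then touch the waitset
    void SignalDone(HandleSlot & h, WaitSet & ws)
    {
        h.status.store(DONE, std::memory_order_release);
        ws.NotifyAll(&h);
    }

The ordering is the whole trick : because the completer stores DONE before walking the waitset, a waiter that registered and then re-checked can never miss its wake.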

The state of the handle is in the same word as the GUID which tells you if the handle is alive. So you only have to do one atomic op and it tells you if the weak reference is alive, and if so what the status is.
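Something like this, where the packing layout is my guess for illustration (not the actual Oodle layout) :

    #include <atomic>
    #include <cstdint>

    enum { STATUS_BITS = 2, STATUS_MASK = (1u << STATUS_BITS) - 1 };

    struct Slot
    {
        // (guid << STATUS_BITS) | status , all in one atomic word
        std::atomic<uint64_t> guid_and_status;
    };

    // one atomic load both validates the weak ref and reads the status
    inline bool WeakRef_Check(const Slot & s, uint64_t myGuid, uint32_t * pStatus)
    {
        uint64_t v = s.guid_and_status.load(std::memory_order_acquire);
        if ( (v >> STATUS_BITS) != myGuid )
            return false;               // slot was freed & reused; ref is dead
        *pStatus = (uint32_t)(v & STATUS_MASK);
        return true;
    }

The GUID changing when a slot is freed and reused is what makes the reference weak : a stale handle just sees a mismatched GUID and knows it's dead, no locks involved.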

The other big thing is that "forward dependencies" (that I call "permits") are built in to the handle table, so all the systems automatically get dependencies and can wait on each other. (as described previously here : Worker Thread system with reverse dependencies )

You can make any kind of handle, eg. some async SPU job, and then mark it as depending on an IO, so the SPU job will get kicked when the IO is done.
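The permit mechanism is conceptually just a count of unfinished prerequisites per handle; a hedged sketch (names and layout are illustrative, and KickJob is a hypothetical hand-off to the worker system) :

    #include <atomic>
    #include <cstddef>
    #include <vector>

    struct Job;
    void KickJob(Job * j);  // hypothetical : hands a now-runnable job to the workers

    struct Job
    {
        std::atomic<int>   permitCount; // prerequisites not yet completed
        std::vector<Job *> permits;     // jobs this one permits (filled in
                                        //  before the job is fired)
    };

    // called once, when a job makes its single NOT_DONE -> DONE transition
    void OnJobDone(Job & j)
    {
        for ( size_t i = 0; i < j.permits.size(); ++i )
        {
            Job * dep = j.permits[i];
            // drop one prerequisite; whoever hits zero is now runnable
            if ( dep->permitCount.fetch_sub(1, std::memory_order_acq_rel) == 1 )
                KickJob(dep);
        }
    }

So in the read-then-decompress example, the decompress job is created with permitCount = 1 and sits in the read's permits list; when the read completes it decrements that count to zero and the decompress gets kicked.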

A little example image :

The row of pink is the IO thread doing a bunch of reads. There are pre-made SPU jobs to decompress each chunk, and you can see they fire as each read completes. Then another thread (the brown below the 4 SPUs) waits on each SPU handle and processes completion for them.

This all comes from :


for each chunk :
    OodleAsyncHandle h1 = fire read
    OodleAsyncHandle h2 = fire SPU decompress , depends on h1
    add h2 to list

wait for all handles in list


Aside :

There's only one disadvantage I haven't figured out yet. If something depends on an SPU job, then when the SPU work item completes, it needs to check whether it "permits" anything else, and if so mark those handles as ready to run.

The problem is that it can't do that work from the SPU. (actually it probably could do some of it from the SPU, by chasing pointers and doing DMA's for each chunk, and using the SPU atomic cache line stuff; but in the end it runs into the problem that it needs to make some system call on the PPU).

So when an SPU job completes I need to send back to the PPU a message that says "hey, this handle is done; see if it permits anything and do the processing for that". The problem is there seems to be no good mechanism for that on the PS3, which kind of blows my mind. I mean, I can just tack my messages onto a lock-free queue from the SPU, that part is fine; but then I need a way to signal the PPU (or run an interrupt on the PPU) to get the PPU to process that queue (and the threads that need the processing might be asleep and need to get woken), and that is where it's not great.
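For what it's worth, the queue half really is the easy part. A conceptual single-producer sketch (plain C++ just to show the shape; the real SPU side would do this with atomic cache line ops and DMA, and the signal at the end is exactly the missing piece) :

    #include <atomic>
    #include <cstdint>

    struct CompletionQueue
    {
        enum { SIZE = 256 };                // power of two
        std::atomic<uint32_t> head;         // consumer position (PPU)
        std::atomic<uint32_t> tail;         // producer position (one SPU)
        uint32_t items[SIZE];               // indices of finished handles
    };

    // SPU side : append a finished handle index
    inline bool PushCompletion(CompletionQueue & q, uint32_t handleIndex)
    {
        uint32_t t = q.tail.load(std::memory_order_relaxed);
        if ( t - q.head.load(std::memory_order_acquire) >= CompletionQueue::SIZE )
            return false;                   // full; caller must retry
        q.items[t & (CompletionQueue::SIZE - 1)] = handleIndex;
        q.tail.store(t + 1, std::memory_order_release);
        // ... and now somehow wake the PPU; that's the open problem
        return true;
    }

    // PPU side : drain and run the permits processing for each handle
    inline void DrainCompletions(CompletionQueue & q)
    {
        uint32_t h = q.head.load(std::memory_order_relaxed);
        while ( h != q.tail.load(std::memory_order_acquire) )
        {
            uint32_t handleIndex = q.items[h & (CompletionQueue::SIZE - 1)];
            (void)handleIndex;              // -> mark done, wake waiters, kick permits
            q.head.store(++h, std::memory_order_release);
        }
    }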

3 comments:

Wes said...

Sounds like a pleasant-to-use API (my favorite kind). Good stuff.

Arseny Kapoulkine said...

You can use event flags from SPU - it does not require any specific threading environment, AFAIK (although not sure if it works on raw). Do they have any issues?

cbloom said...

Yeah, event flags from SPU to PPU (that can wake a PPU thread) seem to have this huge latency problem. I'm investigating this issue a bit more, so maybe a followup will come.
