3/29/2012

03-29-12 - Computer Notes to Self

1. If your TEMP env var is set to anything other than "C:\Users\you\AppData\Local\Temp" , some stupid apps (eg. Windows Installer) may fail. The failure can show up in weird ways, such as "access denied" type errors.

2. Some dumb apps (Windows Installer again, for example) can fail when run from a subst'ed drive.

3. Windows crash dumps don't get written unless you have enough virtual memory (ie. a big enough page file). They claim 16M is enough.

4. Once in a while I run Procmon and filter only for writes to see if there is any fucking rogue service that's thrashing my disk (such as Indexing or Superfetch or any of that bloody rot). This time I found that IpHlpSvc was logging tons of shite. You can disable it thusly :

regedit :
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Tracing\IpHlpSvc
set EnableFileTracing to 0
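
Or the same thing as a .reg file you can just run :

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Tracing\IpHlpSvc]
"EnableFileTracing"=dword:00000000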

5. The basic process for examining a crash dump is this :


Run WinDbg

Set symbol search path to :

"SRV*c:\windows\symbols*http://msdl.microsoft.com/download/symbols"

(if you set the path after loading the .dmp, then use the command ".reload" to make it re-resolve symbols)

Load the .dmp, probably from "c:\windows\minidump"

command :

!analyze -v

command "lmv" will list drivers with info

6. Windows comes with a "driver verifier" (verifier.exe). It's pretty cool. If you enable all the checks on all your drivers, it will make your computer too slow to be usable. What I do is enable it for all the non-Microsoft drivers, and that seems to be fast enough to stand. What it does is sort of stress the drivers, so that when one of them does something bad you get a blue screen and a crash dump, rather than just a hard freeze with no ability to debug. It also enables lots of memory corruption and overrun checks on the drivers (it seems to force a debug allocator on them which puts guard pages around allocs; you may wind up with BSODs due to memory trashes even on a system that is apparently stable).
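
You can also drive it from the command line; something like this (these are standard verifier.exe flags, but check "verifier /?" on your machine; the driver names here are just placeholders) :

verifier /standard /driver suspect1.sys suspect2.sys
verifier /querysettings
verifier /reset

/standard turns on the standard set of checks for the named drivers, /querysettings shows what's currently enabled, and /reset turns it all off again.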

7. I wanted to reduce the number of drivers I had to examine to just the ones I actually use, and was somewhat surprised to find almost a hundred drivers installed on my machine but disabled. The biggest culprit is USB; every time you plug something in, it installs a custom driver which you then keep forever. You can get rid of them thusly :

SET DEVMGR_SHOW_NONPRESENT_DEVICES=1

open Device Manager (launch it from that same console, so it inherits the env var)
Menu -> View -> Show hidden devices

now you should see lots of crud ghosted out.
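
eg. from a cmd prompt (devmgmt.msc is the Device Manager snap-in; START launches it with the env var inherited) :

SET DEVMGR_SHOW_NONPRESENT_DEVICES=1
START devmgmt.msc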

3/27/2012

03-27-12 - DXT is not enough

I've been sent this GDC talk a few times now, so I will write some notes about it. (BTW thanks to all who sent it; I don't really follow game dev news much, so I rely on you all to tell me when there's something interesting I should know about).

There's nothing wrong with the general subject matter of the talk, and the points are along the right track in a vague sort of way, but there's just absolutely no effort put into it. I put more effort into the average blog post. If you aren't going to actually do the testing against other methods, and measure on a real (public) data set to see if your ideas are actually good, then just don't do a talk.

Anyhoo, a quick summary of the talk in plain text :

JPEG and then realtime DXTC is kind of bonkers (I agree). To make DXTC smaller, he applies Zip, then pre-huffman, then pre-delta of colors, then rearranging the colors & indices into two separate blocks, then "codebooks", and finally 8x8 and even 16x16 blocks.

There are a lot of problems with this talk. The first is the assumption that you are using Zip on the back end. (BTW Zip is not a type of LZW at all). Zip is ancient and has a tiny window; there's no reason to use Zip. If you just use a better back end, most of what he does next is irrelevant. Essentially a lot of what he does (such as the codebooks and the pre-huffman) is just ways of extending Zip, effectively making the sliding window larger.

Second, whenever you are doing these things you need to consider the memory use and processor use tradeoffs.

For example, reorganizing the DXTC data to separate the colors and the indices does in fact help. (I do it in Oodle, optionally). But that doesn't make it a clear win. It actually takes a huge amount of CPU; just swizzling memory around like that can be slower than a very advanced LZ decompressor (unless you are lucky enough to be on a PC, with its amazing big cache, amazing fast memory, and amazing out-of-order processor that can hide cache misses). So you have to weigh the speed cost of doing that reorg against other ways you could use the CPU time to improve compression (eg. running LZMA or LZX or whatever instead of Zip). Even on a PC, the reorg will ruin large block write-combining. For me, reorg took decoding from 500 MB/s to 300 MB/s or so, and the gain is only a few percent, not obviously worth it. (my back end is much better than Zip, so the gains are much smaller, or not there at all).
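
To be concrete, the reorg in question is just this sort of thing; a minimal sketch, assuming standard 8-byte DXT1 blocks (two 565 endpoint colors, then 4 bytes of 2-bit selectors) :

#include <stdint.h>
#include <string.h>

// split a stream of 8-byte DXT1 blocks into all-colors followed by
// all-indices, so the back end compressor sees two homogeneous streams :
void dxt1_reorg( const uint8_t * blocks, size_t numBlocks, uint8_t * out )
{
    uint8_t * colors  = out;                   // 4 bytes per block : c0,c1 endpoints
    uint8_t * indices = out + 4 * numBlocks;   // 4 bytes per block : 2-bit selectors
    for ( size_t i = 0; i < numBlocks; i++ )
    {
        memcpy( colors  + 4*i , blocks + 8*i     , 4 );
        memcpy( indices + 4*i , blocks + 8*i + 4 , 4 );
    }
}

Even this trivial pass touches every byte of the data again, which is where the speed hit comes from.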

The only real idea in the talk is going to 8x8 blocks. That is in fact a valid thing to do, but here the rigor of the talk completely falls apart, and instead we get "look at this blurry slide in a terrible lighting environment, can you see the loss?". Errmmm, okay. To be fair it's no worse than the typical GDC graphics talk where you get "look at this picture of my global illumination technique, can you see any artifacts?" ; ermm, well you have chosen the scene to show me, and I'm a hundred feet away looking at a slide, and I can't zoom in or examine the areas I think look funny, so of course it should look good, but in fact, yes I do see lighting artifacts!

Any time you start introducing loss you have to ask : how does this loss I'm injecting compare to other ways I could reduce bitrate and increase loss? An easy one to check is just to halve the resolution of your image (in both dimensions). That's a 4:1 compression, and quite often looks just fine visually (eg. on smooth data it is one of the best possible ways to create loss). And of course since we're in this domain you need to compare against JPEG-DXTC.

CRUNCH addresses this subject much better, even doing some actual tests, and it has some much more interesting ideas.

See my previous writings on DXTC in general.

Now some actual rigor :

DXT1 is a 4 bpp (bits per pixel) format. Additional lossless compression can get you below 4 bpp, but getting to 1 bpp is unrealistic. Here I will show the results for compressors of increasing compression : zip, rrlzhlw, and lzma. The "reorg" here is just separating colors and indices; other reorgs do not help if the back end compressor is rrlzhlw or better.

(all values are compressed size in bits per pixel ; "gain" is the total bpp saved by reorg for each back end)

              zip      rrlzhlw  lzma     reorg+zip  reorg+rrlzhlw  reorg+lzma
kodim01.bmp   3.187    2.962    2.786    2.980      2.794          2.683
kodim02.bmp   2.984    2.738    2.574    2.703      2.575          2.484
kodim03.bmp   2.768    2.534    2.373    2.494      2.344          2.254
kodim04.bmp   3.167    2.931    2.751    2.913      2.727          2.625
kodim05.bmp   3.463    3.272    3.155    3.238      3.108          2.999
kodim06.bmp   3.039    2.827    2.626    2.755      2.635          2.514
kodim07.bmp   2.862    2.622    2.489    2.634      2.469          2.366
kodim08.bmp   3.416    3.197    3.073    3.211      3.041          2.936
kodim09.bmp   2.919    2.701    2.497    2.658      2.525          2.400
kodim10.bmp   3.074    2.838    2.644    2.803      2.638          2.525
kodim11.bmp   3.001    2.827    2.655    2.791      2.668          2.542
kodim12.bmp   2.860    2.645    2.446    2.583      2.451          2.343
kodim13.bmp   3.517    3.331    3.182    3.299      3.159          3.042
kodim14.bmp   3.296    3.104    2.940    3.078      2.922          2.803
kodim15.bmp   3.067    2.835    2.675    2.798      2.632          2.547
kodim16.bmp   2.779    2.565    2.362    2.543      2.401          2.276
kodim17.bmp   3.077    2.849    2.659    2.788      2.653          2.544
kodim18.bmp   3.495    3.315    3.181    3.255      3.106          3.025
kodim19.bmp   3.090    2.878    2.685    2.827      2.698          2.571
kodim20.bmp   2.667    2.486    2.302    2.406      2.308          2.220
kodim21.bmp   3.087    2.893    2.700    2.804      2.712          2.582
kodim22.bmp   3.390    3.213    3.046    3.168      3.005          2.901
kodim23.bmp   3.221    2.985    2.826    2.949      2.758          2.646
kodim24.bmp   3.212    2.986    2.860    3.009      2.826          2.724
clegg.bmp     2.987    2.750    2.598    2.712      2.576          2.459
FRYMIRE.bmp   1.502    1.318    1.224    1.417      1.300          1.209
LENA.bmp      3.524    3.332    3.209    3.304      3.136          3.062
MONARCH.bmp   3.280    3.055    2.916    2.999      2.835          2.741
PEPPERS.bmp   3.381    3.200    3.073    3.131      2.962          2.881
SAIL.bmp      3.425    3.234    3.123    3.197      3.047          2.967
SERRANO.bmp   1.601    1.390    1.289    1.484      1.352          1.260
TULIPS.bmp    3.511    3.270    3.164    3.227      3.061          2.974
total        97.849   91.083   86.083   90.158     85.424         82.105
gain                                     7.691      5.659          3.978

What you should be able to see : reorg+zip is roughly the same as rrlzhlw (without reorg), and reorg+rrlzhlw is about the same as lzma (without reorg). Note that reorg is *slow* ; rrlzhlw without reorg decodes quite a bit faster than zip with reorg, so speed is not a good reason to prefer reorg+zip. (I suppose simplicity of coding is one advantage it has). The gain from reorging decreases as you go to better back ends.

I should also point out that doing something like reorg+lzma is kind of silly. If you really want the maximum compression of DXTC textures, then surely a domain-specific context-based arithmetic coder will do better, and be faster too. (see for example "Lossless Compression of Already Compressed Textures" , Ström and Wennersten ; not a great paper, just the very obvious application of normal compression techniques to ETC (a format similar to DXTC) texture data).

In the next post I'll ramble a bit about future possibilities.

3/12/2012

03-12-12 - Comparing Compressors

It's always hard to compare compressors fairly in a way that's easily understood by the layman. I think the basic LZH compressors in Oodle are very good, but do they compress as much as LZMA ? No. Are they as fast as LZO? No. So if I really make a fair comparison chart that lists lots of other compressors, I will be neither the fastest nor the highest ratio.

(The only truly fair way to test, as always, is to test in the actual target application, with the actual target data. Other than that, it's easiest to compare "trumps", eg. if compressor A has the same speed as B, but more compaction on all files, then it is just 100% better and we can remove B from consideration)

I wrote before about the total time method of comparing compressors. Basically you assume the disk has some given speed D. Then the total time is the time to load the compressed file (eg. compressed size/D) plus the time to do the decompression.

"Total time" is not really the right metric for various reasons; it assumes that one CPU is fully available to compression and not needed for anything else. It assumes single threaded operation only. But the nice thing about it is you can vary D and see how the best compressor changes with D.

In particular, for two compressors you can solve for the disk speed at which their total time is equal :


D = disk speed
C = decompressor speed
R = compression ratio (compressed size / raw size) (eg. unitless and less than 1)

disk speed where two compressors are equal
(set R1/D + 1/C1 = R2/D + 1/C2 , ie. equal total time per raw byte, and solve for D) :

D = C1 * C2 * (R1 - R2) / (C1 - C2)

at lower disk speeds, the one with more compression is preferred, and at higher disk speeds the faster one with lower compression is preferred.

The other thing you can do is show "effective speed" instead of total time. If you imagine the client just gets back the raw file at the end, they don't know if you just loaded the raw file or if you loaded the compressed file and then decompressed it. Your effective speed is :


D = disk speed
C = decompressor speed
R = compression ratio (compressed size / raw size) (eg. unitless and less than 1)

S = 1 / ( R/D + 1/C )

So for example, if your compressor is "none", then R = 1.0 and C = infinity, so S = D ; your effective speed is just the disk speed.

If we have two compressors that have a different ratio/speed tradeoff, we can compare them in this way. I was going to compare my stuff to Zip/Zlib, but I can't. On the PC I'm both faster than Zip and get more compression, so there is no tradeoff. (*1) (*2)

(*1 = this is not anything big to brag about, Zip is ancient and any good modern compressor should be able to beat it on both speed and ratio. Also Zlib is not very well optimized; my code is also not particularly optimized for the PC, I optimize for the consoles because they are so much slower. It's kind of ironic that some of the most pervasive code in the world is not particularly great).

(*2 = actually there are more dimensions to the "Pareto space"; we usually show it as a curve in 2d, but there's also memory use, and Zip is quite low in memory use (which is why it's so easy to beat - all you have to do is increase the window size and you gain both ratio and speed (you gain speed because you get more long matches)); a full tradeoff analysis would be a manifold in 3d with axes of ratio, speed, and memory use)

Anyhoo, on my x64 laptop running single threaded and using the timing technique here I get :


zlib9 : 24,700,820 -> 13,115,250 = 1.883 to 1 , rate = 231.44 M/s

lzhlw : 24,700,820 -> 10,171,779 = 2.428 to 1 , rate = 256.23 M/s

rrlzh : 24,700,820 -> 11,648,928 = 2.120 to 1 , rate = 273.00 M/s

so we can at least compare rrlzh (the faster/simpler of my LZH's) with lzhlw (my LZH with Large Window).
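
We can also just plug those numbers into the formulas above; a quick sketch in C (I'm taking "rate" to be raw output bytes per second) :

#include <stdio.h>

// S = 1 / ( R/D + 1/C ) : effective speed the client sees
static double effective_speed( double R, double D, double C )
{
    return 1.0 / ( R/D + 1.0/C );
}

// D = C1*C2*(R1-R2)/(C1-C2) : disk speed where total times are equal
static double crossover_D( double R1, double C1, double R2, double C2 )
{
    return C1 * C2 * (R1 - R2) / (C1 - C2);
}

int main(void)
{
    double R1 = 10171779.0 / 24700820.0, C1 = 256.23; // lzhlw
    double R2 = 11648928.0 / 24700820.0, C2 = 273.00; // rrlzh

    // note : with these rounded rates this lands near 250 MB/sec,
    // in the ballpark of the ~280 quoted below :
    printf("crossover D = %.1f MB/s\n", crossover_D(R1,C1,R2,C2));

    for ( double D = 10.0; D <= 320.0; D *= 2.0 )
        printf("D = %6.1f : S(lzhlw) = %6.1f , S(rrlzh) = %6.1f\n",
            D, effective_speed(R1,D,C1), effective_speed(R2,D,C2));
    return 0;
}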

The nice thing to do is to compute the effective speed S for various possible disk speeds D, and make a chart :

On the left is effective speed vs. disk speed; on the right is a log-log plot of the same thing. The blue 45 degree line is the "none" compressor, eg. just read the uncompressed file at disk speed. The axes are MB/sec, and here (as is most common for me) I use M = millions, not megas (1024*1024) (but the numbers I was showing at GDC were megas, which makes everything seem a little slower).

We see that on the PC, lzhlw is the better choice at any reasonable disk speed. They are equal somewhere around D = 280 MB/sec, but it's kind of irrelevant because at that point they are worse than just loading uncompressed.

The gap between lines in a log-log plot is the *ratio* of the original numbers; eg. the speedup multiplier for LZH over RAW is maximum at the lowest speeds (1 MB/sec = 0 on the log-log chart) and decreases as the disk gets faster.

3/08/2012

03-08-12 - Oodle Coroutines

I wrote a year ago ( 03-11-11 | Worklets , IO , and Coroutines ) about coroutines for doing tasks with external delays. (a year ago! wtf)

This is a small followup to say that yes, in fact, coroutines are awesome for this. I never bothered to try to preserve local variables across yields; it's not worth it. You can just store the data you want to save across a "yield" in a struct. In Oodle, whenever you are doing a threaded task you always have a Work context, so it's easy to just stuff your data in there.

I've always been a big fan of coroutines for game scripting languages. You can do things like :


Walk to TV
Turn on TV
if exists Couch
  Walk to Couch

etc. You just write it like linear imperative code, but in fact some of those things take time and yield out of the coroutine.

So obviously the same thing is great for IO or SPU tasks or whatever. You can write :


Vec3 * array = malloc(...);

io_read( array , ... );  //! yields

Mat3 m = camera.view * ...;

spu_transform(array, m); //! yields

object->m_array = array;

and it just looks like nice linear code, but actually you lose execution there at the ! marks and you will only proceed after that operation finishes.

To actually implement the coroutines I have to use macros, which is a bit ugly, but not intolerable. I use the C switch method as previously described; normally I auto-generate the labels for the switch with __COUNTER__ so you can just write :


 .. code ..

YIELD

 .. code ..

and the YIELD macro does something like :

  N = __COUNTER__;
  work->next = N;
  return eCoroutine_Refire;
  }
case N:
  {

(note the braces, which mean that variables declared in one yield chunk are not visible to the next; this means that the coroutine's failure to maintain local variables is a compile error and thus not surprising).
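
For concreteness, here's a minimal self-contained version of the scheme (just a sketch; the real Oodle macros differ in the details). The two-level macro expansion is needed so that __COUNTER__ is expanded exactly once per YIELD and the same value is used both for the store and the case label :

enum ECoroutineStatus { eCoroutine_Done, eCoroutine_Refire };

struct Work { int next; /* + whatever state you keep across yields */ };

#define COROUTINE_START   switch(work->next) { case 0: {
#define COROUTINE_END     } } return eCoroutine_Done;

#define YIELD_IMPL(N)     work->next = (N); return eCoroutine_Refire; } case (N): {
#define YIELD_EXPAND(ctr) YIELD_IMPL(ctr + 1) // +1 so we can't collide with case 0
#define YIELD             YIELD_EXPAND(__COUNTER__)

ECoroutineStatus WorkFunc( Work * work )
{
    COROUTINE_START

      // .. code ..

    YIELD

      // .. resumes here on the next call ..

    COROUTINE_END
}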

The exception is if you want to jump around to different blocks of the coroutine, then you need to manually specify a label and you can jump to that label.

Note that yielding without a dependency is kind of pointless; these coroutines are not yielding to share the CPU, they are yielding because they need to wait on some async handle to finish. So generally when you yield it's because you have some handle (or handles) to async tasks.

The way a yielding call like "spu_transform(array, m);" in the previous example has to be implemented is by starting an async spu task, and then setting the handle as a dependency. It would be something like :


#define spu_transform(args) \
  Handle h = start_spu_transform(args); \
  Work_SetDeps(work,h); \
  YIELD

The coroutine yield will then stop running the work, and the work now has a dependency, so it won't resume until the dependency is done. eg. it waits for the spu task to complete.

I use coroutines basically every time I have to do some processing on a file. For one thing, to minimize memory use I need to stream the file through a double-buffer. For another, you often need to open the file before you start processing, and that needs to be part of the async operation chain as well. So a typical processing coroutine looks something like :


int WorkFunc( Work * work )
{
COROUTINE_START

  Handle h = ioq_open_file( work->filename );

COROUTINE_YIELD_TO(h)

  if open failed -> abort

  get file size

  h = start read chunk 0
  chunkI = 1; // the next chunk is 1

COROUTINE_YIELD_TO(h)

  // read of chunkI^1 just finished

  // start the next read :  
  h = ioq_read( chunk[chunkI] );
  chunkI ^= 1;

  // process the chunk we just received :
  process( chunk[chunkI] );

  if done
  {
    ioq_close_file();
    return
  }

YIELD_REPEAT
}

where "YIELD_REPEAT" means resume at the same label so you repeat the current block.

The last block of the coroutine runs over and over, ping-ponging on the double buffer, and yields if the next IO is not done yet when the processing of each block is done.

3/06/2012

03-06-12 - The Worker Wake and Semaphore Delay Issue

Let's say you have a pool of worker threads, and some of them may be asleep. How should you wake them up?

The straightforward way is to use a semaphore which counts the work items, and wait the worker threads on the semaphore. Workers will go to sleep when there is no work for them, and wake up when new work is pushed.

But this is rather too aggressive about waking workers. If you push N items (N less than the number of worker threads) it will wake N workers. But by the time some of those workers wake there may be nothing for them to do.

Let's look at a few specific issues.

First of all, when you're making a bunch of work items, you might want to delay inc'ing the semaphore until you have made all the items, rather than inc'ing it each time. eg. instead of :


1 : incremental push :

push item A
inc sem
push item B 
inc sem
...

instead do

2 : batch push :

push item A
push item B
inc sem twice

There are a few differences. The only disadvantage of batch pushing is that the work doesn't start getting done until all the pushes are done. If you're creating a lot of jobs and there's a decent amount of processing to get them started, this adds latency.

But what actually happens with incremental push? One possibility is like this :


bad incremental push :

push item A
inc sem

sem inc causes work thread to wake up
pusher thread loses execution

worker pops item A
worker does item A
worker sees empty queue and goes to sleep

pusher thread wakes up

push item B 
inc sem
...

That's very bad. The possible slight loss of efficiency from batch push is worth it to avoid this kind of bad execution flow.

There's a related issue when you are creating work items from a worker thread itself. Say a work item does some computation and also creates another work item :


Worker pops item A
does some stuff
push new work item B
inc sem
do some other stuff
item A done

Is this okay? Typically, no.

The problem is if the other worker threads are asleep, that inc sem might wake one up. Then the original worker finishes item A and sees nothing else to do and goes to sleep. It would have been much better if the worker just stayed awake and did item B itself.

We can fix this pretty easily. For work items pushed on worker threads, I typically use "batch push" (that is, delayed semaphore increment) with an extra wrinkle - I delay it up until my own thread tries to do a semaphore decrement.

That is, the way a worker thread runs is something like :


decrement semaphore (wait if count <= 0)
pop item
do work item (may create new work)

decrement semaphore (wait if count <= 0)
pop item
do work item ...

instead we do :

decrement semaphore (wait if count <= 0)
pop item
do work item (may create new work)

 push new work items but don't post the semaphore
 instead set N = number of incs to sem that are delayed

decrement semaphore AND add N (*)
pop item
do work item ...

The key operation is at (*) , where I post the sem for any work items I made, and also try to dec one.

The gain can be seen from a special case - if I made one work item, then the operation at (*) is a nop - I had one increment to the sem delayed, and I want to take one for myself, so I just take the one I had delayed. (if I made two, I post one and take one for myself). etc.
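
In code, the op at (*) is something like this (a sketch; "Semaphore" and these function names are stand-ins for whatever counting semaphore you actually use) :

// post the delayed increments, but keep one for ourself as our decrement,
// so we never wake another thread just to hand it our own work :
void Semaphore_PostDelayedAndDecOne( Semaphore * sem, int delayedIncs )
{
    if ( delayedIncs > 0 )
    {
        if ( delayedIncs > 1 )
            Semaphore_Post( sem, delayedIncs - 1 ); // wake up to N-1 other workers
        // we consumed one delayed inc as our own dec : no wait needed,
        // we know there's an item for us
    }
    else
    {
        Semaphore_Wait( sem ); // nothing delayed : normal decrement, may sleep
    }
}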

There is one extra little optimization you can do for the edge case - if you have some threads creating work items and some threads doing them, there is a sort of "performance race" between them. You really want them to be running alongside each other, the creators feeding the poppers, neither running too far ahead. If the poppers run slightly faster than the creators, you can fall off a huge performance cliff when the poppers see no work available and go into an OS sleep. Now, obviously you use a spin in your semaphore, but an enhanced version is something like this :


delayed/batched work creation :

push various items
...
inc sem N times


work popper :

spin { try pop queue }
if got an item , try dec sem (non-blocking ; just accounts for the item you took)
if didn't get an item , dec sem (may wait) , then pop

In words, the work popper can "shortcut" the delayed sem inc. That is, the pusher has created a delay between the queue push and the sem inc, but the delay only applies to thread wakeups!! (this is the important point). The delay does not apply to the work being available to already running worker threads.

That is, if the pusher is using delay sem incs, and the popper is using sem shortcuts - then an active pusher makes work available to active workers as soon as possible. The thing that is delayed is only thread wakeup, so that we can avoid waking threads that don't need to wake up, or threads that will steal the execution context from the pusher, etc.

Here's an example of how things can go wrong if you aren't careful about these things :

Each work item is creating a followup work item, but that wakes up the other worker thread, who quickly grabs the item, and I go back to sleep.

(you might ask why the work item is creating a followup instead of just doing more work; it's because the followup is dependent on an IO; in this particular case the IO is running faster than the computation, so the IO dependency for each item is already done, and they can be run immediately)

With delayed push & early pop it's all cleaned up.

03-06-12 - Oodle Handle Table - WFMO

The other big advantage of a unified handle system is that you can do things like a true WFMO (wait for multiple objects).

Often you have the case that you have launched several asynchronous operations (let's call them A,B, and C), and you wish to do something more, but only after all three are done.

You can always do this manually by just waiting on all three :


Wait(A);
Wait(B);
Wait(C);
* <- now all three are done

(note : the invariant at * is only true if the state transition from "not done" to "done" is one-way; this is an example of what I mentioned last time, that reasoning and coding about these state transitions is much easier if they're one-way; if they were not, you would have absolutely no idea what the state of the objects was at *).

Obviously the disadvantage of this is that your thread may have to wake up and go to sleep several times before it reaches *. You can minimize this by waiting first on the item you expect to finish last, but that only works in some cases.

The overhead of these extra thread sleeps/wakes is enormous; the cost of a thread transition is on the order of 5000-50,000 clocks, whereas the cost of an OS threading call like signalling an event or locking a mutex is on the order of 50-500 clocks. So it's worth really working on these.

So to fix it we use a true WFMO call like WFMO(A,B,C).

What WFMO does is essentially just use the forward permits system that the unified handle table uses for all dependencies. It creates a pseudo-handle H which does nothing and has no data :


WFMO(A,B,C) :

create handle H
set H depends on {A,B,C}
  sets A to permit H , etc.
  sets H needs 3 permits to run
Wait(H);
free handle H

The result is just a single thread sleep, and then when your thread wakes you know all ABC are done and there is no need to poll their status and possibly sleep again.
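
In code, using the permit system directly, it might look something like this (purely a sketch; all of these function names are made up, not the actual Oodle API) :

void WFMO3( Handle a, Handle b, Handle c )
{
    Handle h = Handle_CreateNop();    // pseudo-handle : no work body, no data
    Handle_SetNeededPermits( h, 3 );  // h doesn't complete until 3 permits arrive
    Handle_PermitOnDone( a, h );      // each of a,b,c permits h when it finishes
    Handle_PermitOnDone( b, h );
    Handle_PermitOnDone( c, h );
    Handle_Wait( h );                 // one thread sleep ; wakes when all three are done
    Handle_Free( h );
}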

3/05/2012

03-05-12 - Oodle Handle Table

A few months ago I created a new core structure for Oodle which has worked out really well and tied the whole concept together. I'm going to make a few notes about it here.

The foundation is a lock-free weak-reference handle table. One of the goals of Oodle library design was that I didn't want to force a coding structure on the client. eg. if it was me, I would use a PagingResource base class with virtual OnLoad functions or whatever, and probably reference counting. But I understand many people don't want to do that, and one of the nice things about RAD products is that they fit into *your* engine structure, they don't force an engine structure on you. So initially I was trying to write the Oodle systems so that each system is very independent and doesn't force any overall concept on you. But that was a bit of a mess, and bug-prone; for example, object lifetime management was left up to the client, eg. if you fired an async task off, you could delete it before it was done, which could be a crash if the SPU was still using that memory.

The weak reference handle table fixes those bugs in a relatively low overhead way (and I mean "low overhead" both in terms of CPU time, but more importantly in terms of coding structure or intellectual burden).

Handles can be locked (with a mutex per handle (*)) which gives you access to any payload associated with them. It also prevents deletion. Simple status checks, however, don't require taking the lock. So other people can monitor your handle while you work on it without blocking.

(* = not an OS mutex of course, but my own; it's one of the mutexes described in the previous series. The mutex in Oodle uses only a few bits per handle, plus more data per thread which gives you the actual OS waits; game workloads typically involve something like 1000's of objects but only 10's of threads, so it's much better to use a per-thread "waitset" type of model)

One of the crucial things has been that the state of handles can basically only make one state transition, from "not done" to "done". Once they are done, they can never go back to not done; if you decide you have more followup work you create a new handle. This is as opposed to a normal Event or something which can flip states over and over. The nice thing about the single state transition is it makes waiting on that event much simpler and much easier to do race-free. There's no issue like "eventcount" where you have to do a prepare_wait / commit_wait.

The state of the handle is in the same word as the GUID which tells you if the handle is alive. So you only have to do one atomic op and it tells you if the weak reference is alive, and if so what the status is.
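
For illustration, the liveness/status check might look something like this (just a sketch; assuming the slot packs a generation count and the done bit into one 32-bit word, which is surely not the exact Oodle layout) :

#include <atomic>
#include <stdint.h>

// slot word : low bit = done status , high 31 bits = generation count
struct HandleSlot { std::atomic<uint32_t> genStatus; };

// weak reference : index into the table + the generation we expect
struct Handle { uint32_t index; uint32_t gen; };

struct HandleCheck { bool alive; bool done; };

// one atomic load gives both liveness and status :
HandleCheck Handle_Check( const HandleSlot * table, Handle h )
{
    uint32_t v = table[h.index].genStatus.load( std::memory_order_acquire );
    HandleCheck r;
    r.alive = ( (v >> 1) == h.gen );  // gen mismatch => handle was freed (maybe reused)
    r.done  = r.alive && ( (v & 1) != 0 );
    return r;
}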

The other big thing is that "forward dependencies" (which I call "permits") are built into the handle table, so all the systems automatically get dependencies and can wait on each other. (as described previously here : Worker Thread system with reverse dependencies )

You can make any kind of handle, eg. some async SPU job, and then mark it as depending on an IO, so the SPU job will get kicked when the IO is done.

A little example image :

The row of pink is the IO thread doing a bunch of reads. There are pre-made SPU jobs to decompress each chunk, and you can see they fire as each read completes. Then another thread (the brown below the 4 SPUs) waits on each SPU handle and processes completion for them.

This all comes from


(for each chunk:)
OodleAsyncHandle h1 = fire read
OodleAsyncHandle h2 = fire SPU decompress , depends on h1

add h2 to list
wait for all handles in list


Aside :

There's only one disadvantage I haven't figured out yet. If something depends on an SPU job, then when the SPU work item completes it needs to check if it "permits" anything else, and if so mark them as ready to run.

The problem is that it can't do that work from the SPU. (actually it probably could do some of it from the SPU, by chasing pointers and doing DMA's for each chunk, and using the SPU atomic cache line stuff; but in the end it will run into a problem that it needs to call some system call on the PPU).

So when an SPU job completes I need to send back to the PPU a message that "hey this handle is done see if it permits anything and do the processing for that". The problem is there seems to be no good mechanism for that on the PS3, which kind of blows my mind. I mean, I can just tack my messages onto a lock-free queue from the SPU, that part is fine, but then I need a way to signal the PPU (or run an interrupt on the PPU) to get the PPU to process that queue (and the threads that need the processing might be asleep and need to get woken), and that is where it's not great.

3/03/2012

03-03-12 - Stranger's Wrath HD

I just noticed that Stranger's Wrath HD for PS3 (on PSN) finally came out a few months ago. So I bought it and started playing through.

It seems fine from what I've seen so far. No major bugs or framerate problems or anything like that. It supposedly has some "HD" up-resing, but I can't see any, it looks just like the XBox, and that's not entirely bad. (actually the only difference I notice is that the menus seem to have been redone and they use a horrible janky-looking font; I don't think that we had such terrible looking text on the XBox). (maybe the content is up-res'ed and I just can't tell because the screen resolution is also higher?)

Anyhoo, I'm pleased to say it holds up decently still. Some of the realtime art is fantastic, for such an old game it still looks really good, mainly due to all the use of "decorators". The first 30 minutes of play are pretty terrible, but it gets better.

Every time Stranger jumps and does the Al Pacino hoo-ah I bust out laughing. WTF were we thinking? And lots of his monologues and his vocal pacing is just super bizarre. That's a good thing, bizarre is good.

In unrelated news, I tried Rage on PS3 ; holy texture popping batman! I do like the art style; the lighting is like super blown out, really flat looking; it looks cool, it reminds me of Anime. Actually the best thing about Rage is the menus; the menus are superb, really fast, responsive, circular, simple, no loading or unnecessary animation or whatever; every game developer should copy the Rage menus.
