9/30/2010

09-30-10 - Coder News

Ignacio wrote a nice article on GPU radiosity via hemicubes. I always wanted to try that; I'm jealous.

I suspect that you could be faster & higher quality now by doing GPU ray tracing instead of render-to-texture. The big speed advantage of GPU ray tracing would come from the fact that you can generate the rays that sample into your lightmaps for lots of lightmap texels and run them all in one big batch. Quality comes from the fact that you can use all the fancy raytracing techniques for sampling, monte carlo methods, etc. and it's rather easier to do variable-resolution sampling (eg. start with 50 rays per lightmap texel and add more if there's high variance). In fact, you could set up all the rays to sample into your lightmap texels for your whole world, and then run N bounces by just firing the whole gigantic batch N times. Of course the disadvantage to this is that you have to implement your whole renderer twice, once as a raytracer and once for normal rendering; avoiding that is the whole point of using the hemicube render-to-texture method.

Was talking to Dave and randomly mentioned that I thought the new Rvalue references were not that interesting because you can basically do everything they give you using RVO and swaps. But then I did some more reading and have changed my mind. Rvalue references are awesome!

Want Speed? Pass by Value. - C++Next (Dave Abrahams' series on value semantics)
C++ Rvalue References Explained (good intro)
A Brief Introduction to Rvalue References (Howard E. Hinnant, Bjarne Stroustrup, and Bronek Kozicki)
Rvalue References C++0x Features in VC10, Part 2 - Visual C++ Team Blog - Site Home - MSDN Blogs
rvalue reference (the proposal document)
InformIT C++ Reference Guide The rvalue Reference Proposal, Part I

Basically it lets you write templates that know an object is a temporary, so you can mutate it or move from it without making a copy. I think one problem we sometimes have as coders is that we look at our existing code and think "bah, I don't need that", but that's only because we have been so conditioned not to write code in that way, because we haven't had the right tools. C++0x makes it possible to do things in templates that you always wished you could do. That doesn't necessarily mean doing lots more complicated things, it means doing the little things and getting them exactly right. "auto", "decltype", "[[base_check]]", etc. all look pretty handy.
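
For concreteness, here's a minimal sketch of the kind of thing this buys you (my own illustration, not code from the articles above) - a move constructor that steals a temporary's buffer instead of deep copying it :

#include <string.h> // memcpy

class Buffer
{
public:
    explicit Buffer(size_t size) : m_ptr(new char[size]), m_size(size) { }
    ~Buffer() { delete [] m_ptr; }

    // copy constructor : deep copy, expensive
    Buffer(const Buffer & rhs) : m_ptr(new char[rhs.m_size]), m_size(rhs.m_size)
    {
        memcpy(m_ptr, rhs.m_ptr, m_size);
    }

    // move constructor : rhs is known to be a temporary, so just steal its guts
    Buffer(Buffer && rhs) : m_ptr(rhs.m_ptr), m_size(rhs.m_size)
    {
        rhs.m_ptr = 0;
        rhs.m_size = 0;
    }

private:
    char * m_ptr;
    size_t m_size;
};

Buffer MakeBuffer() { return Buffer(65536); }

Buffer b = MakeBuffer(); // binds the move constructor (or RVO elides it entirely) - never a deep copy

RVO already covers the simple return-by-value cases (which is the point of the "Want Speed? Pass by Value" series), but the move constructor also kicks in where RVO can't help, like sticking temporaries into containers.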

9/28/2010

09-28-10 - Branchless LZ77 Decoder

It occurred to me that if you do an LZ77 where the literal/match flag is sent via a run length of literals, then a run of literals is the same as a match, but the source is the next few bytes of the compressed buffer, rather than some previous location in the decompressed buffer.

That is, you're decompressing from the "comp" buffer into the "dest" buffer. A "match" is just a copy from the "dest" buffer, and literals are just a copy from the "comp" buffer.

So, let's say we do a byte-wise LZ77 , use one bit to flag literal or match, then 7 bits for the length. Our branchless decoder is something like :


{
    U8 control = *comp++;
    int source = control >> 7;              // 1 = literal run, 0 = match
    int length = (control & 127) + 1;
    U8 * lit_ptr = comp;
    U8 * mat_ptr = dest - *((U16 *)comp);   // offset read is harmless garbage for a literal run
    U8 * copy_from_ptr = select( lit_ptr, mat_ptr, source );
    memcpy( dest, copy_from_ptr, length );
    dest += length;
    comp += select( length, 2, source );    // literal run consumed length bytes, match consumed a 2-byte offset
}

Where "select(a,b,c)" = c ? ( a : b ); or something like that. (sometimes better implemented with a negative and; on the PC you might use cmov, on the spu you might use sel, etc.)

While this should be very fast, compression will not be awesome because the division for literals and matches is not ideal and 7 bits of length is a lot for literals but not enough for matches, and offsets are always 16 bits which is too little. We can do a slightly better version using a 256 entry lookup table for the control :


{
    U8 control = *comp++;
    int source = source_table[control];     // 1 = literal run, 0 = match
    int length = length_table[control];
    U8 * lit_ptr = comp;
    U8 * mat_ptr = dest - *((U16 *)comp);
    U8 * copy_from_ptr = select( lit_ptr, mat_ptr, source );
    memcpy( dest, copy_from_ptr, length );
    dest += length;
    comp += select( length, 2, source );    // as before : advance past the literal run or the 2-byte offset
}

For example, with the table you could let the match lengths be larger and sparse. But it would probably be better to just have a branch that reads more bytes for long lengths.

Adding things like optionally larger offsets starts to make the branchless code complex enough that eating one branch is better. If you're willing to do a lot of variable shifts it's certainly possible; for example you could grab one control byte and look it up in various tables. The tables tell you some # of bits for match length, some # for match offset, and some # for literal run length (they add up to a multiple of 8 and use some portion of the control byte as well). Unfortunately variable shifting is untenably slow on many important platforms.

BTW one useful trick for reducing branches in your LZ decoder is to put the EOF check out in some rare case, rather than as your primary loop condition, and change your primary loop to be an unconditional branch. On PC's this doesn't change much, but on some architectures an unconditional branch is much cheaper than a conditional one, even if it's predictable. That is, instead of :


while ( ! eof )
{
   .. do one decode step ..
   if ( mode 1 )
     ..
   if ( mode 2 )
    ...
}

You do :

for(;;)
{
   .. do one decode step ..
   if ( mode 1 )
     ..
   if ( mode 2 )
   {
     if ( rare case )
     {
        if ( eof )
          break;
     }
   }
}

Also, obviously you don't actually use "memcpy", but whatever you use for the copy has an internal branch on the length. And of course we have turned our branch into a tight data dependency. On some platforms that's not much better than a branch, but on many it is much better. Unfortunately unrolling doesn't help the data dependency much, because of the fact that LZ can copy from its own output, so you have to wait for the copy to be done before you can start the next one (there's also an inherent LHS (load-hit-store) here, though that stall is the least of our worries).
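
For reference, the copy is typically something like a simple forward byte copier (a sketch; real versions copy in larger granules and overrun a bit) - the loop condition is the internal branch referred to above, and going forward byte by byte is what makes overlapping matches (offset less than length) replicate correctly :

// forward byte copy : safe when the match source overlaps the output
static inline void lz_copy( U8 * to, const U8 * from, int length )
{
    for(int i=0; i < length; i++)
        to[i] = from[i];
}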

9/21/2010

09-21-10 - Waiting on Thread Events Part 2

So we've seen the problem, let's look at some solutions.

First of all let me try to convince you that various hacky solutions won't work. Say you're using Semaphores. Instead of doing Up() to signal the event, you could do Increment(1000). That will cause the Down() to just run through immediately until the count is pulled back down, so the next bunch of tests will succeed. This might in fact make the race never happen in real life, but the race is still there.

1. Put a Mutex around the Wait. The mutex just ensures that only one thread at a time is actually in the wait on the event. In particular we do :


WaitOnMessage_4( GUID )
{

    {
        MutexInScope(ReceiveMutex);
        pop everything on the receive FIFO
        mark everything I received as done
        if ( the GUID I wanted is done )
            return;
    }

    for(;;)
    {
        MutexInScope( WaiterMutex );

        {
            MutexInScope(ReceiveMutex);
            pop everything on the receive FIFO
            mark everything I received as done
            if ( the GUID I wanted is done )
                return;
        }

        Wait( ReceiveEvent );
    }
}

note that the WaiterMutex is never taken by the Worker thread, only by "main" threads. Also note that it must be held around the whole check for the GUID being done or you have another race. Also we have the separate ReceiveMutex for the pop/mark phase so that when we are in the wait we don't block other threads from checking status.

Now when multiple threads try to wait, the first will get in the mutex and actually wait on the Event, the second will try to lock the mutex and stall out there and wait that way.

The problem with this is that threads that can proceed don't always get to. It's not broken, you will make progress, but it's nowhere near optimal. Consider if N threads come in and wait on different GUIDs. Now the worker finishes something, so that wakes up a Wait and he goes around the loop which releases the Mutex - if you just so happened to wake up the one guy who could proceed, then it's great, he returns, but if you woke up the wrong guy, he will take the mutex and go back to sleep. (also note when the mutex unlocks it might immediately switch thread execution to one of the people waiting to lock it, because of the Windows fair scheduling heuristics). So you only have a 1/N chance of making optimal progress, and in the worst case you might make all N threads wait until all N items are done. That's bad. So let's look at other solutions :

2. Put the Event/Semaphore on each message instead of associated with the whole Receive queue. You wait on message->event , and the worker signals each message's event when it's done.

This also works fine. It is only race-free if GUIDs can only be used from one thread, if multiple threads can wait on the same GUID you are back to the same old problem. This also requires allocating a Semaphore per message, which may or may not be a big deal depending on how fine grained your work is.

On the plus side, this lets the Wait() be on the exact item you want being done, not just anything being done, so your thread doesn't wake up unnecessarily to check if its item was done.


WaitOnMessage_5A( GUID )
{
    Event ev;

    {
        MutexInScope(ReceiveMutex);
        pop everything on the receive FIFO
        mark everything I received as done
        if ( the GUID I wanted is done )
            return;

        ev = message(GUID)->event
    }

    Wait( ev );
}

we're glossing over the tricky part of this one which is the maintenance of the event on each message. Let's say we know that GUIDs are only touched by one thread at a time. Then we can do the event destruction/recycling when we receive a done message. That way we know that event is always safe for us to wait on outside of the mutex because the only person who will ever Clear or delete that event is us.

I think you can make this safe for multiple threads thusly :


WaitOnMessage_5B( GUID )
{
    Semaphore ev;

    {
        MutexInScope(ReceiveMutex);
        pop everything on the receive FIFO
        mark everything I received as done
        if ( the GUID I wanted is done )
            return;

        message(GUID)->users ++;
        ev = message(GUID)->semaphore;
    }

    Down( ev );

    {
        MutexInScope(ReceiveMutex);
        pop everything on the receive FIFO
        mark everything I received as done
        ASSERT( the GUID I wanted is done );

        message(GUID)->users --;
        if ( message(GUID)->users == 0 )
            delete message(GUID)->semaphore;
    }
}

and you make the worker signal the event by doing Up(1000); you don't have to worry about that causing weird side effects, since each event is only ever signalled once, and the maximum number of possible waits on each semaphore is the number of threads. Since the semaphore accesses are protected by the mutex, I think you could also do lazy semaphore creation, eg. don't make them on messages by default, just make them when somebody tries to wait on that message.

3. Make the Event per thread. You either put the ReceiveEvent in the TLS, or you just make it a local on the stack or somewhere for each thread that wants to wait. Then we just use WaitOnMessage_3 , but the ReceiveEvent is now for the thread. The tricky bit is we need to let the worker thread know what events it needs to signal.

The easiest/hackiest way is just to have the worker thread always signal N events that you determine at startup. This way will in fact work fine for many games. A slightly cleaner way is to have a list of the events that need signalling. But now you need to protect access to that with a mutex, something like :


WaitOnMessage_6( GUID , LocalEvent )
{
    {
    MutexInScope(EventListMutex)
        add LocalEvent to EventList
    }

    for(;;)
    {
        {
        MutexInScope(ReceiveMutex);
        pop everything on the receive FIFO
        mark everything I received as done
        if ( the GUID I wanted is done )
            return;
        }

        Wait( LocalEvent );
    }

    {
    MutexInScope(EventListMutex)
        remove LocalEvent from EventList
    }

}

and worker does :

    do work
    push message to receive FIFO
    
    {
    MutexInScope(EventListMutex)
        signal everything in EventList
    }

one nice thing about this is that you can push GUID with LocalEvent and then only signal events that want to hear about that specific message.

Note with the code written as-is it works but has very nasty thread thrashing. When the worker signals the events in the list, it immediately wakes up the waiting threads, which then try to lock the EventListMutex and immediately go back to sleep, which wakes the worker back up again. That's pretty shitty so a slightly better version is if the worker does :


    do work
    push message to receive FIFO
    
    TempList;
    {
    MutexInScope(EventListMutex)
        copy everything in EventList to TempList
    }
    -*-
    signal everything in TempList

this fixes our thread thrashing, but it now gives us the usual lock-free type of restrictions - events in TempList are no longer protected by the Mutex, so that memory can't be reclaimed until we are sure that the worker is no longer touching them. (in practice the easiest way to do this is just to use recycling pooled allocators which don't empty their pool until application exit). Note that if an event is recycled and gets a different id, this might signal it, but that's not a correctness error because extra signals are benign. (extra signals just cause a wasteful spin around the loop to check if your GUID is done, no big deal, not enough signals means infinite wait, which is a big deal).

NOTE : you might think there's a race at -*- ; the apparent problem is that the worker has got the event list in TempList, and it gets swapped out, then on some main thread I add myself to EventList, run through and go to sleep in the Wait. Then worker wakes up at -*- and signals TempList - and I'm not in the list to signal! Oh crap! But this can't happen because if that was the work item I needed, it was already put in the receive FIFO and I should have seen it and returned before going into the Wait.

We can also of course get rid of the Mutex on EventList completely by doing the usual lockfree gymnastics; instead make it a message passing thing :


WaitOnMessage_6B( GUID , LocalEvent )
{
    Send Message {Add,LocalEvent,GUID}

    for(;;)
    {
        {
        MutexInScope(ReceiveMutex);
        pop everything on the receive FIFO
        mark everything I received as done
        if ( the GUID I wanted is done )
            break;
        }

        Wait( LocalEvent );
    }

    Send Message {Remove,LocalEvent,GUID}
}

and worker does :

    pop message from "send FIFO" to get work
    do work
    push message to "receive FIFO"

    pop all event messages
        process Add/Remove commands 

    signal everything in EventList

but this isn't tested and I have very little confidence in it. I would want to run it through Relacy before using it in real code.

4. Just don't do it. Sometimes domain-specific solutions are best. In particular there's a very simple solution I can use for the current problem I'm having.

The thing that made me hit this issue is that I made my work-stealing Worklet system able to wait on IO's. In this case the "worker" is the IO service thread and the messages are IO requests. So now the main thread can wait on IO and so can the Worklets. It's important that they really go to a proper sleep if they are blocked on IO.

But I can solve it in a simple way in that case. The Worklets already have a way to go sleep when they have no work to do. So all I do is if the Worklets only have work which depends on an uncompleted IO, they push their Work back onto the main work dispatch list and go to sleep on the "I have no work" event. Then, whenever the main thread receives any IO wakeup event, it immediately goes and checks the Worklet dispatch list and sees if anything was waiting on an IO completion. If it was, then it hands out the work and wakes up some Worklets to do it.

This solution is more direct and also has the nice property of not waking up unnecessary threads and so on.

09-21-10 - Waiting on Thread Events

A small note on a common threading pattern.

You have some worker thread which takes messages and does something or other and then puts out results. You implement the message passing in some way or other, either with a mutex or a lock-free FIFO or whatever, it doesn't matter for our purposes here.

So your main thread makes little work messages, sends them to the worker, he does them, sends them back. The tricky bit comes when the main thread wants to wait on a certain message being done. Let's draw the structure :


   main      ---- send --->     worker
  thread    <--- receive ---    thread

First assume our messages have some kind of GUID. If the pointers are never recycled it could just be the pointer, but generally that's a bad idea; a pointer + a counter would be fine. So the main thread says Wait on this GUID being done. The first simple implementation would be a spin loop :


WaitOnMessage_1( GUID )
{

    for(;;)
    {
        pop everything on the receive FIFO
        mark everything I received as done
    
        is the GUID I wanted in the done set?
           if so -> return

        yield processor for a while
    }

}

and that works just fine. But now you want to be a little nicer and not just spin, you want to make your thread actually go to sleep.

Well you can do it in Win32 using Events. (on most other OS'es you would use Semaphores, but it's identical here). What you do is you have the worker thread set an event when it pushes a message, and now we can wait on it. We'll call this ReceiveEvent, and our new WaitOnMessage is :


WaitOnMessage_2( GUID )
{

    for(;;)
    {
        Clear( ReceiveEvent );

        pop everything on the receive FIFO
        mark everything I received as done
    
        is the GUID I wanted in the done set?
           if so -> return

        Wait( ReceiveEvent );
    }

}

that is, we clear the receive event so it's unsignalled, we see if our GUID is done, if not we wait until the worker has done something, then we check again. We aren't sleeping on our specific work item, but we are sleeping on the worker doing something.

In the Worker thread it does :


    pop from "send FIFO"

    do work

    push to "receive FIFO"
    
    Signal( ReceiveEvent );

note the order of the push and the Signal (Signal after push) is important or you could have a deadlock due to a race because of the Clear. While the code as-is works fine, the Clear() is considered dangerous and is actually unnecessary - if you remove it, the worst that will happen is you will run through the Wait() one time that you don't need to, which is not a big deal. Also Clear can be a little messy to implement for Semaphores in a cross-platform way. In Semaphore speak, Wait() is Down() , Signal() is Up(), and Clear() is trylock().
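
For example, a sketch of that mapping onto Win32 semaphores (just to pin down the correspondence; error handling omitted) :

#include <windows.h>
#include <limits.h>

static HANDLE s_sem = CreateSemaphore( NULL, 0, LONG_MAX, NULL );

static void Wait()   { WaitForSingleObject( s_sem, INFINITE ); }  // Down
static void Signal() { ReleaseSemaphore( s_sem, 1, NULL ); }      // Up
static void Clear()  { WaitForSingleObject( s_sem, 0 ); }         // trylock : eat one pending count, if any

Note that this Clear only consumes one pending count rather than resetting to zero, which is part of why Clear is messy to emulate exactly with semaphores.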

Okay, so this is all fine and we're happy with ourselves, until one day we try to WaitOnMessage() from two different threads. I've been talking as if we have one main thread and one worker thread, but the basic FIFO queues work fine for N "main" threads, and we may well want to have multiple threads generating and waiting on work. So what's the problem ?

First of all, there's a race in the status check, so we put a Mutex around it :


WaitOnMessage_3A( GUID )
{

    for(;;)
    {
        Mutex 
        {
        pop everything on the receive FIFO
        mark everything I received as done
        if ( the GUID I wanted is done )
            break;
        }

        Wait( ReceiveEvent );
    }

}

you need that because one thread could come in and pop the queue but not process it, and then another thread comes in and sees its GUID is not done and goes to sleep incorrectly. We need the pop and the flagging done to act like they are atomic.

Now you might be inclined to make this safe by just putting the Mutex around the whole function. In fact that works, but it means you are holding the mutex while you go into your Wait sleep. Then if another thread comes in to check status, it will be forced to sleep too - even if its GUID is already done. So that's very bad, we don't want to block status checking while we are asleep.

Why is the _3A version still broken ? Consider this case : we have two threads, thread 1 and 2, they make their own work and send it off then each call WaitOnMessage which is :


WaitOnMessage_3( GUID )
{

    for(;;)
    {
        Mutex 
        {
        pop everything on the receive FIFO
        mark everything I received as done
        if ( the GUID I wanted is done )
            break;
        }

        [A]

        Wait( ReceiveEvent );

        [B]
    }

}

Thread 1 runs through to [A] and then swaps out. Thread 2 runs through to [B], which waits on the event, the worker thread then does all the work, sets the event, then thread 2 gets the signal and clears the event. Now thread 1 runs again at [A] and immediately goes into an infinite Wait.

D'oh !

I think this is long enough just describing the problem, so I'll get at some solutions in the next post.

9/20/2010

09-20-10 - A small followup on Fast Means

I wrote previously about various ways of tracking a local mean over time.

I read about a new way that is somewhat interesting : Probabilistic Sliding Windows. I saw this in Ryabko / Fionov where they call it "Imaginary Sliding Windows", but I like Probabilistic better.

This is for the case where you want to have a large window but don't want to store the whole thing. Recall a normal sliding window mean update is :


U8 window[WINDOW_SIZE];
int window_i = 0;
int accum = 0;

// add in head and remove tail :
accum += x_t - window[window_i];
window[window_i] = x_t;
window_i = (window_i + 1) & (WINDOW_SIZE-1);

mean = (accum/WINDOW_SIZE);

// WINDOW_SIZE is power of 2 for speed of course

that's all well and good except when you want WINDOW_SIZE to be 16k or something. In that case you can use a histogram-based probabilistic sliding window. You keep a histogram and an accumulator :

int count[256] = { 0 };
int accum = 0;

at each step you add the new symbol and randomly remove something that you have :


// randomly remove something :
r = random_from_histogram( count );
count[r] --;
accum -= r;

//add new :
count[x_t] ++;
accum += x_t;

It's very simple and obvious - instead of knowing which symbol leaves the sliding window at the tail, we generate one randomly from the histogram that we have of symbols in the window.

Now, random_from_histogram is a bit of a problem. If you just do a flat random draw :


// randomly remove something :
for(;;)
{
    int r = randmod(256);
    if ( count[r] > 0 )
    {
        count[r] --;
        accum -= r;
        break;
    }
}

then you will screw up the statistics in a funny way; you will draw unlikely symbols too often, so you will skew towards more likely symbols. Maybe you could compute exactly what that skewing is and compensate for it somehow. To do an unbiased draw you basically have to do an arithmetic decode: you generate a random in [0,WINDOW_SIZE) (the total of the counts) and look it up in the cumulative counts of the histogram.
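
Something like this, using the randmod from above (a sketch; window_size is the total of the counts, ie. the number of symbols currently in the window) :

// unbiased weighted draw : picks r with probability count[r] / window_size
// (essentially a one-symbol arithmetic decode against the counts)
int random_from_histogram( const int * count, int window_size )
{
    int target = randmod( window_size );    // uniform in [0, window_size)
    int r = 0;
    while ( target >= count[r] )
    {
        target -= count[r];
        r++;
    }
    return r;
}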

Obviously this method is not fast and is probably useless for compression, but it is an interesting idea. More generally, probabilistic approximate statistics updates are an interesting area. In fact we do this already quite a lot in all modern compressors (for example there's the issue of hash table collisions). I know some of the PAQs also do probabilistic state machine updates for frequency counts.

There's also a whole field of research on this topic that I'd never seen before. You can find it by searching for "probabilistic quantiles" or "approximate quantiles". See for example : "Approximate Counts and Quantiles over Sliding Windows" or "On Probabilistic Properties of Conditional Medians and Quantiles" (PDF) . This stuff could be interesting for compression, because tracking things like the median of a sliding window or the N most common symbols in a window are pretty useful things for compressors.

For example, what I would really like is a fast probabilistic way of keeping the top 8 most common symbols in the last 16k, and semi-accurately counting the frequency of each and the count of "not top 8".

9/16/2010

09-16-10 - Modern Arithmetic Coding from 1987

I was just reading through my old paper "New Techniques in Context Modeling and Arithmetic Encoding" because I wanted to read what I'd written about PPMCB (since I'm playing with similar things now and PPMCB is very similar to the way modern context mixers do this - lossy hashing and all that). (thank god I wrote some things down or I would have lost everything; I only wish I'd written more, so many little heuristics and experimental knowledge has been lost).

Anyway, in there I found this little nugget that I had completely forgotten about. I wrote :

    
    Step 3 uses a method introduced by F. Rubin [3], and popularized by G.V. Cormack's DMC
    implementation [1].  It is:
    
    while ( R < 256 )
       {
       output(L>>16);
       R <<= 8;
       L = (L << 8) & 0xFFFFFF;
       if ( (L + R) > 0xFFFFFF ) R = 0xFFFFFF - L;
       }
    
    This method is approximate, and therefore produces slightly longer output than the canonical
    CACM-87 style normalization [5].  However, the output is byte-aligned, and the normalization loop is
    almost always only performed once; these properties make this method extremely fast.
    
    The line:
    if ( (L + R) > 0xFFFFFF ) R = 0xFFFFFF - L;
    is where the extra output comes from.  It can be seen that R is shrunk to less than it need be, so
    extra normalizations are needed, and more bytes are output.  However, this only happens 1/65536th of
    the time, so the excess output is negligible.
    
    
Well what do you fucking know. This is a "carryless range coder" (Rant on New Arithmetic Coders) which was apparently rediscovered 20+ years later by the russians. ( see also (Followup on the "Russian Range Coder") ). The method of reducing range to avoid the carry is due to Rubin :

F. Rubin, "Arithmetic Stream Coding Using Fixed Precision Registers", IEEE Trans. Information Theory IT-25 (6) (1979), p. 672 - 675

And I found it in Cormack's DMC which amazingly is still available at waterloo : dmc.c

Cormack in 1987 wrote in the dmc.c code :

    
    while ((max-min) < 256)
    {
        if(bit)max--;
        putchar(min >> 16);
        outbytes++;
        min = (min << 8) & 0xffff00;
        max = (max << 8) & 0xffff00;
        if (min >= max) max = 0x1000000;
    }
    
    
which is a little uglier than my version above, but equivalent (he has a lot of ugly +1/-1 stuff cuz he didn't get that quite right).

And actually in the Data Compression Using Dynamic Markov Modelling paper that goes with the dmc.c code, they describe the arithmetic coder due to Guazzo and it is in fact an "fpaq0p" style carryless arithmetic coder (it avoids carries by letting range get very small and only works on binary alphabets). I don't have the original paper due to Guazzo so I can't confirm that attribution. The one in the paper does bit by bit output, but as I've stressed before that doesn't characterize the coder, and Cormack then did bytewise output in the implementation.

Anyway, I had completely forgotten about this stuff, and it changes my previous attribution of byte-wise arithmetic coding to Schindler '98 ; apparently I knew about it in '95 and Cormack did it in '87. (The only difference between the Schindler '98 coder and the NTiCM '95 coder is that I was still doing (Sl*L)/R while Schindler moved to Sl*(L/R) which is a small but significant change).

Apparently the "russian range coder" is actually a "Rubin arithmetic coder" (eg. avoid carries by shrinking range), and the "fpaq0p binary carryless range coder" is actually a "Guazzo arithmetic coder" (avoid carries by being binary and ensuring only range >= 2).

09-16-10 - A small followup on Cumulative Probability Trees

See the original note. Then :

I mentioned the naive tree where each level has buckets of 2^L sums. For clarity, the contents of that tree are :

C0 C1 C2 C3 C4 C5 C6 C7 C8 C9 C10 C11 C12 C13 C14 C15
C0-C1 C2-C3 C4-C5 C6-C7 C8-C9 C10-C11 C12-C13 C14-C15
C0-C3 C4-C7 C8-C11 C12-C15
C0-C7 C8-C15
C0-C15

I said before :


But querying a cumprob is a bit messy. You can't just go up the tree and add, because you may already be counted in a parent. So you have to do something like :

sum = 0;

if ( i&1 ) sum += Tree[0][i-1];
i>>=1;
if ( i&1 ) sum += Tree[1][i-1];
i>>=1;
if ( i&1 ) sum += Tree[1][i-1];
..

This is O(logN) but rather uglier than we'd like. 

A few notes on this. It's easier to write the query if our array is 1 based. Then the query is equivalent to taking a value from each row where the bit of "i" is on. That is, line up the bits of "i" with the bottom bit at the top row.

That is :


sum = 0;

if ( i&1 ) sum += Tree[0][i];
if ( i&2 ) sum += Tree[1][i>>1];
if ( i&4 ) sum += Tree[2][i>>2];
...

Some SIMD instruction sets have the ability to take an N-bit mask and turn it into an N-channel mask. In that case you could compute Tree[n][i>>n] in each channel and then use "i" to mask all N, then do a horizontal sum.

Also the similarity to Fenwick trees and his way of walking the tree can be made more obvious. In particular, we obviously only have to do the sums where i has bits that are on. On modern CPU's the fastest way is just to always do 8 sums, but in the old days it was beneficial to do a minimum walk. The way you do that is by only walking to the bits that are on, something like :


sum = 0;
while ( i )
{
  int k = BitScan(i);
  sum += Tree[k][i>>k];
  i -= (1 << k);
}

we can write this more like a Fenwick query :

sum = 0;
while ( i )
{
  int b = i & -i; // isolate the bottom bit of i
  int k = BitScan(b); // b -> log2(b)
  sum += Tree[k][i>>k];
  i = i & (i-1); // same as i -= b; turns off bottom bit
}

but then we recall that we made the FenwickTree structure by taking these entries and merging them. In particular, one way to build a Fenwick Tree is to take entries from the simple table and do "if my bottom bit is zero, do i>>=1 and step to the next level of the table".

What that means is when we find the bottom bit pos is at bit 'k' and we look up at (i>>k) - that is the entry that will just be in slot "i" in the Fenwick structure :


sum = 0;
while ( i )
{
  sum += FenwickTree[i];
  i = i & (i-1); // turn off bottom bit
}

so the relationship and simplification is clear.

Fast decoding would involve a branchless binary search. I leave that as an exercise for the reader.

Not that any of this is actually useful. If you're just doing order-0, "deferred summation" is better; if you're doing high order, then special structures for small/sparse contexts are better; and if you're doing low orders escaped from high order it doesn't really work, because you have to handle exclusion.

A better way to do excluded order-0 would be to have a 256-bit flag chunk for excluded or not excluded, then you just keep all the character counts in an array, and you sum them on demand with SIMD using the binary-tree summing method. At each step you sum adjacent pairs, so in step one you take the 256 counts and sum neighbors and output 128. Repeat. One tricky thing is that you have to output two sums - the sum above the symbol and the sum below the symbol (both excluded by the flags). And the decoder is a bit tricky, but maybe you can just do that sum tree and then binary search down it. Unfortunately the ability to do horizontal pair adds (or the way to swizzle into doing that) and the ability to do byte arithmetic is one of the areas where the SIMD instruction sets on the various platforms differ greatly, so you'd have to roll custom code for every case.
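
Here's a scalar sketch of the two sums you need, leaving out the SIMD pairwise summing (which is the whole point for speed, but platform specific) - just to make the outputs concrete :

// scalar sketch : cumulative counts for symbol 'sym' with exclusions applied
// excluded[i] != 0 means symbol i has already been coded from a higher order
void excluded_cumprobs( const int * count, const U8 * excluded, int sym,
                        int * pLow, int * pFreq, int * pTotal )
{
    int low = 0, freq = 0, high = 0;
    for(int i=0; i < 256; i++)
    {
        int c = excluded[i] ? 0 : count[i];
        if      ( i <  sym ) low  += c;
        else if ( i == sym ) freq  = c;
        else                 high += c;
    }
    *pLow = low; *pFreq = freq; *pTotal = low + freq + high;
    // arithmetic code sym as the interval [low, low+freq) out of the total
}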

9/14/2010

09-14-10 - Challenges in Data Compression 2.5 - More on Sparsity

Sparsity & Temporality show up as big issues in lots of funny ways.

For example, they are the whole reason that we have to do funny hand-tweaks of compressors.

eg. exactly what you use as your context for various parts of the compressor will make a huge difference. In PPM the most sensitive part is the SEE ; in CM it looks like the most sensitive part is the contexts that are used for the mixer itself (or the APM or SSE) (not the contexts that are looked up to get statistics to mix).

In these sensitive parts of the coder, you obviously want to use as much context as possible, but if you use too much your statistics become too sparse and you start making big coding mistakes.

This is why these parts of the model have to be carefully hand tweaked; eg. use a few bits from here, compact it into a log2-ish scale, bucketize these things. You want only 10-16 bits of context or something but you want as much good information in that context as possible.

The problem is that the ideal formation of these contexts depends on the file of course, so it should be figured out adaptively.

There are various possibilities for hacky solutions to this. For example something that hasn't been explored very much in general in text compression is severely asymmetric coders. We do this in video coding for example, where the encoder spends a long time figuring things out then transmits simple commands to the decoder. So for example the encoder could do some big processing to try to figure out the ideal compaction of statistics and send it to the decoder. (* maybe some of these do exist)

If sparsity wasn't an issue, you would just throw every bit of information you have at the model. But in fact we have tons of information that we don't use because we aren't good enough at detecting what information is useful and merging up information from various sources, and so on.

For example an obvious one is : in each context we generally store only something like the number of times each character has occurred. We might do something like scale the counts so that more recent characters count more. eg. you effectively do {all counts} *= 0.9 and then add in the count of the new character as ++. But actually we have way more information than that. We have the exact time that each character occurred (time = position in the file). And, for each time we've used the context in the past, we know what was predicted from it and whether that was right. All of that information should be useful to improve coding, but it's just too much because it makes secondary statistics too sparse.

BTW it might pop into your head that this can be attacked using the very old-school approaches to sparsity that were used in Rissanen's "Context" or DMC for example. Their approach was to use a small context, then as you see more data you split the context, so you get richer contexts over time. That does not work, because it is too conservative about not coding from sparse contexts; as I mentioned before, you cannot tell whether a sparse context is good or not from information seen in that context, you need information from an outside source, and what Context/DMC do is exactly the wrong thing - they try to use the counts within a context to decide whether it should be split or not.

09-14-10 - A small note on structured data

We have a very highly structured file at work. A while ago I discovered that I can improve compression of primitive compressors by transposing the data as if it were a matrix with rows of the structure size.

Apparently this is previously known :


J. Abel, "Record Preprocessing for Data Compression", 
Proceedings of the IEEE Data Compression Conference 2004, 
Snowbird, Utah, Storer, J.A. and Cohn, M. Eds. 521, 2004.

A small note :

He suggests finding the period of the data by looking at the most frequent character repeat distance. That is, for each byte find the last time it occurred, count that distance in a histogram, find the peak of the histogram (ignoring distances under some minimum), and that's your guess of the record length. That works pretty well, but some small improvements are definitely possible.
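
A sketch of that approach (MIN_PERIOD and MAX_PERIOD here are just assumed bounds, not anything from the paper) :

// guess the record length from the most common character repeat distance
int guess_record_length( const U8 * buf, int size )
{
    const int MIN_PERIOD = 3, MAX_PERIOD = 1024;

    int last_pos[256];
    for(int i=0; i < 256; i++) last_pos[i] = -1;

    int histo[MAX_PERIOD+1] = { 0 };

    for(int i=0; i < size; i++)
    {
        int c = buf[i];
        if ( last_pos[c] >= 0 )
        {
            int d = i - last_pos[c];
            if ( d <= MAX_PERIOD ) histo[d]++;
        }
        last_pos[c] = i;
    }

    // peak of the histogram, ignoring distances under the minimum :
    int best = MIN_PERIOD;
    for(int d = MIN_PERIOD; d <= MAX_PERIOD; d++)
        if ( histo[d] > histo[best] ) best = d;

    return best;
}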

Let's look at some distance histograms. First of all, on non-record-structured data (here we have "book1") our expectation is that correlation roughly goes down with distance, and in fact it does :

[chart : repeat distance histogram for book1 - the maximum is at distance 4]

On record-structured data, the peaks are readily apparent. This file ("struct72") is made of 72-byte structures, hence the peaks out at 72 and 144. But we also see strong 4-8-12 correlations, as there are clearly 4-byte words in the structs :

[chart : repeat distance histogram for struct72 - peaks at multiples of 4 and at 72 and 144]

The digram distance histogram makes the structure even more obvious; if you ignore the peak at 1 (which is not so much due to "structure" as just strong order-1 correlation), the peak at 72 is very strong :

[chart : digram distance histogram for struct72 - strong peak at 72]

When you actually run an LZ77 on the file (with min match len of 3 and optimal parse) the pattern is even stronger; here are the 16 most used offsets on one chunk :


rank :   offset :    count
 0 :       72 :      983
 1 :      144 :      565
 2 :      216 :      282
 3 :      288 :      204
 4 :      432 :      107
 5 :      360 :      106
 6 :      720 :       90
 7 :      504 :       88
 8 :      792 :       78
 9 :      648 :       77
10 :      864 :       76
11 :      576 :       73
12 :     1008 :       64
13 :     1080 :       54
14 :     1152 :       51
15 :     1368 :       49

Every single one is a perfect multiple of 72.

A slightly more robust way to find structure than Jurgen's approach is to use the auto-correlation of the histogram of distances. This is a well known technique from audio pitch detection. You take the "signal", which here is our histogram of distance occurrences, and find the intensity of auto-correlation for each translation of the signal (this can be done using the Fourier transform). You will then get strong peaks only at the fundamental modes. In particular, in our example "struct72" file you would get a peak at 72, and also a strong peak at 4, because in the Fourier transform the smaller peaks at 8,12,16,20, etc. will all add onto the peak at 4. That is, it's correctly handling "harmonics". It will also detect cases where a harmonic happened to be a stronger peak than the fundamental mode. That is, the peak at 144 might have been stronger than the one at 72, in which case you would incorrectly think the fundamental record length was 144.
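
A sketch of that autocorrelation step on the distance histogram, done naively in O(N^2) rather than with the Fourier transform :

// autocorrelation of the distance histogram ; the harmonics at 2p, 3p, ...
// fold their energy onto the fundamental period p
void histo_autocorrelation( const int * histo, int max_period, double * autocor )
{
    for(int p = 1; p <= max_period; p++)
    {
        double sum = 0;
        for(int d = 1; d + p <= max_period; d++)
            sum += (double) histo[d] * histo[d+p];
        autocor[p] = sum;
    }
    autocor[0] = 0; // ignore the trivial zero-lag term
}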

Transposing obviously helps with compressors that do not have handling of structured data, but it hurts compressors that inherently handle structured data themselves.

Here are some compressors on the struct72 file :


struct72                                 3,471,552

struct72.zip                             2,290,695
transpos.zip                             1,784,239

struct72.bz2                             2,136,460
transpos.bz2                             1,973,406

struct72.pmd                             1,903,783
transpos.pmd                             1,864,028

struct72.pmm                             1,493,556
transpos.pmm                             1,670,661

struct72.lzx                             1,475,776
transpos.lzx                             1,701,360

struct72.paq8o                           1,323,437
transpos.paq8o                           1,595,652

struct72.7z                              1,262,013
transpos.7z                              1,642,304

Compressors that don't handle structured data well (Zip,Bzip,PPMd) are helped by transposing. Compressors that do handle structured data specifically (LZX,PAQ,7Zip) are hurt quite a lot by transposing. LZMA is the best compressor I've ever seen for this kind of record-structured data. It's amazing that it beats PAQ considering it's an order of magnitude faster. It's also impressive how good LZX is on this type of data.

BTW I'm sure PAQ could easily be adapted to manually take a specification in which the structure of the file is specified by the user. In particular for simple record type data, you could even have a system where you give it the C struct, like :


struct Record
{
  float x,y,z;
  int   i,j,k;
  ...
};

and it parses the C struct specification to find what fields are where and builds a custom model for that.

ADDENDUM :

It's actually much clearer with mutual information.


I(X,Y) = H(X) + H(Y) - H(XY)

In particular, the mutual information for offset D is :


I(D) = 2*H( order0 ) - H( bigrams separated by distance D )

This is much slower to compute than the character repeat period, but gives much cleaner data.
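
A sketch of computing I(D) directly from those definitions (H() here is the usual entropy in bits of a histogram) :

#include <math.h>
#include <string.h>

// entropy in bits of a histogram with 'total' samples
static double H( const int * histo, int num, int total )
{
    double h = 0;
    for(int i=0; i < num; i++)
    {
        if ( histo[i] == 0 ) continue;
        double p = (double) histo[i] / total;
        h -= p * log(p) / log(2.0);
    }
    return h;
}

// I(D) = 2*H( order0 ) - H( bigrams separated by distance D )
double mutual_information_at_distance( const U8 * buf, int size, int D )
{
    static int o0[256];
    static int bigram[256*256];
    memset(o0,0,sizeof(o0));
    memset(bigram,0,sizeof(bigram));

    for(int i=0; i < size; i++)   o0[ buf[i] ]++;
    for(int i=0; i+D < size; i++) bigram[ buf[i]*256 + buf[i+D] ]++;

    return 2.0 * H(o0,256,size) - H(bigram,256*256,size-D);
}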

On "struct72" : (note : the earlier graphs showed [1,144] , here I'm showing [1,256] so you can see peaks at 72,144 and 216).

[chart : mutual information vs offset for struct72 - peaks at 72, 144 and 216]

Here's "book1" for comparison :

[chart : mutual information vs offset for book1]

(left column is actually in bits for these)

and hey, charts are fun, here's book1 after BWT :

[chart : mutual information vs offset for book1 after BWT]

09-14-10 - Threaded Stdio

Many people know that fgetc is slow now because we have to link with multithreaded libs, and so it does a mutex and all that. Yes, that is true, but the solution is also trivial, you just use something like this :

#define macrogetc(_stream)     (--(_stream)->_cnt >= 0 ? ((char)*(_stream)->_ptr++) : _filbuf(_stream))

when you know that your FILE is only being used by one thread.
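
eg. usage is just something like this (a sketch - it relies on the old MSVC CRT FILE layout, and note the (char) cast in the macro means you can't reliably compare the result against EOF for binary data, so read a known byte count instead) :

// byte-at-a-time read loop with no per-byte mutex, for a FILE owned by this thread
FILE * fp = fopen("data.bin","rb");   // "data.bin" and num_bytes are hypothetical
if ( fp )
{
    for(int i=0; i < num_bytes; i++)
    {
        U8 c = (U8) macrogetc(fp);
        process_byte(c);              // process_byte : whatever you do with the data
    }
    fclose(fp);
}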

In general there's a nasty issue with multithreaded API design. When you have some object that you know might be accessed from various threads, should you serialize inside every access to that object? Or should you force the client to serialize on the outside?

If you serialize on the inside, you can have severe performance penalties, like with getc.

If you serialize on the outside, you make it very prone to bugs because it's easy to access without protection.

One solution is to introduce an extra "FileLock" object. So the "File" itself has no accessors except "Lock" which gives you a FileLock, and then the "getc" is on the FileLock. That way somebody can grab a FileLock and then locally do fast un-serialized access through that structure. eg:


instead of :

c = File.getc();

you have :

FileLock fl = File.lock();

c = fl.getc();
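
A minimal sketch of that shape of API (names and details here are made up for illustration, using a Win32 critical section and the macrogetc from above) :

#include <windows.h>
#include <stdio.h>

struct File
{
    FILE *           fp;
    CRITICAL_SECTION cs;    // InitializeCriticalSection(&cs) when the File is created
};

struct FileLock
{
    explicit FileLock( File & f ) : file(f) { EnterCriticalSection( &file.cs ); }
    ~FileLock() { LeaveCriticalSection( &file.cs ); }

    // cheap un-serialized getc while the lock is held
    int getc() { return macrogetc( file.fp ); }

    File & file;
};

// usage :
//   FileLock fl( file );
//   int c = fl.getc();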

Another good solution would be to remove the buffering from the stdio FILE and introduce a "FileView" object which has a position and a buffer. Then you can have multiple FileViews per FILE which are in different threads, but the FileViews themselves must be used only from one thread at a time. Accesses to the FILE are serialized, accesses to the FileView are not. eg :


eg. FILE * fp is shared.

Thread1 has FileView f1(fp);
Thread2 has FileView f2(fp);

FileView contains the buffer

you do :

c = f1.getc();

it can get a char from the buffer without synchronization, but buffer refill locks.

(of course this is a mess with readwrite files and such; in general I hate dealing with mutable shared data, I like the model that data is either read-only and shared, or writeable and exclusively locked by one thread).

(The FileView approach is basically what I've done for Oodle; the low-level async file handles are thread-safe, but the buffering object that wraps it is not. You can have as many buffering objects as you want on the same file on different threads).

Anyway, in practice, macrogetc is a perfectly good solution 99% of the time.

Stdio is generally very fast. You basically can't beat it except in the trivial way (trivial = use a larger buffer). The only things you can do to beat it are :

1. Double-buffer the streaming buffer filling and use async IO to fill the buffer (stdio uses synchronous IO and only fills when the buffer is empty).

2. Have an accessor like Peek(bytes) to the buffer that just gives you a pointer directly into the buffer and ensures it has bytes amount of data for you to read. This eliminates the branch to check fill on each byte input, and eliminates the memcpy from fread.

(BTW in theory you could avoid another memcpy, because Windows is doing IO into the disk cache pages, so you could just lock those and get a pointer and process from them directly. But they don't let you do this for obvious security reasons. Also if you knew in advance that your file was in the disk cache (and you were never going to use it again so you don't want it to get into the cache) you could do uncached IO, which is microscopically faster for data not in the cache, because it avoids that memcpy and page allocation. But again they don't let you ask "is this in the cache" and it's not worth sweating about).

Anyway, the point is you can't really beat stdio for byte-at-a-time streaming input, so don't bother. (on Windows, where the disk cache is pretty good, eg. it does sequential prefetching for you).

9/12/2010

09-12-10 - The deficiency of Windows' multi-processor scheduler

Windows' scheduler is generally pretty good, but there appears to be a bit of shittiness with its behavior on multicore systems. I'm going to go over everything I know about it and then we'll hit the bad point at the end.

Windows as I'm sure you know is a strictly exclusive high priority scheduler. That means if there are higher priority threads that can run, lower priority threads will get ZERO cpu time, not just less cpu time. (* this is not quite true because of the "balanced set manager" , see later).

Windows' scheduler is generally "fair", and most of its lists are FIFO. The scheduler is preemptive and happens immediately upon events that might change the scheduling conditions. That is, there's no delay or no master scheduler code that has to run. For example, when you call SetThreadPriority() , if that affects scheduling the changes will happen immediately (eg. if you make the current thread lower than some other thread that can run, you will stop running right then, not at the end of your quantum). Changing processor affinitiy, Waits, etc. all cause immediate reschedule. Waits, Locks, etc. always put your thread onto a FIFO list.

The core of the scheduler is a list of threads for each priority, on each processor. There are 32 priorities, 16 fixed "realtime" priorities, and 16 "dynamic" priorities (for normal apps). Priority boosts (see later) only affect the dynamic priority group.

When there are no higher priority threads that can run, your thread will run for a quantum (or a few, depending on quantum boosts and other things). When quantum is done you might get switched for another thread of same priority. Quantum elapsed decision is now based on CPU cycles elapsed (TSC), not the system timer interrupt (change was made in Vista or sumfin), and is based on actual amount of CPU cycles your thread got, so if your cycles are being stolen you might run for more seconds than a "quantum" might indicate.

Default quantum is an ungodly 10-15 millis. But amusingly enough, there's almost always some app on your system that has done timeBeginPeriod(1), which globally sets the system quantum down to 1 milli, so I find that I almost always see my system quantum at 1 milli. (this is a pretty gross thing BTW that apps can change the scheduler like that, but without it the quantum is crazy long). (really instead of people doing timeBeginPeriod(1) what they should have done is make the threads that need more responsiveness be very high priority and then go to sleep when not getting input or events).

So that's the basics and it all sounds okay. But then Windows has a mess of hacks designed to fix problems. Basically these are problems with people writing shitty application code that doesn't thread right, and they're covering everyone's mistakes, and really they do a hell of a good job with it. The thing they do is temporary priority boosts and temporary quantum boosts.

Threads in the foreground process always get a longer quantum (something like 60 millis on machines with default quantum length!). Priority boosts affect a thread temporarily, each quantum the priority boost wears off one step, and the thread gets extra quantums until the priority boost is gone. So a boost of +8 gives you 8 extra quantums to run (and you run at higher priority). You get priority boosts for :


 IO completion (+variable)

 wakeup from Wait (+1)
   special boost for foreground process is not disableable

 priority inversion (boosts owner of locked resource that other thread is waiting on)
   this is done periodically in the check for deadlock
   boosts to 14

 GUI thread wakeup (+2) eg. from WaitMsg

 CPU starvation ("balanced set manager")
    temporary kick to 15

boost for IO completion is :
    disk is 1
    network 2
    key/mouse input 6
    sound 8
(boost amount comes from the driver at completion time)

Look at, for example, the GUI thread wakeup boost. What really should have been happening there is you should have made a GUI message processing thread that was super minimal and very high priority. It should have just done WaitMsg to get GUI input and then responded to it as quickly as possible, maybe queued some processing on a normal priority thread, then gone back to sleep. The priority boost mechanism is basically emulating this for you.

A particularly nasty example is the priority inversion boost. When a low priority thread is holding a mutex and a high priority thread tries to lock it, the high priority thread goes to sleep, but the low priority thread might never run if there are medium priority threads that want to run, so the high priority thread will be stuck forever. To fix this, Windows checks for this case in its deadlock checker. All of the "INFINITE" waits in Windows are not actually infinite - they wait for 5 seconds or so (the delay is settable in the registry), after which time Windows checks them for being stalled out and might give you a "process deadlocked" warning; in this check it looks for the priority inversion case and if it sees it, it gives the low priority thread the big boost to 14. This has the weird side effect of making the low priority thread suddenly get a bunch of quanta and high priority.

The other weird case is the "balanced set manager". This thing is really outside of the normal scheduler; it sort of sits on the side and checks every so often for threads that aren't running at all and gives them a temporary kick to 15. This kick is different than the others in that it doesn't decay (it would get 15 quanta which is a lot), it just runs a few quanta at 15 then goes back to its normal priority.

You can use SetThreadPriorityBoost to disable some of these (such as IO completion) but not all (the special foreground process stuff for example is not disabled by this, and probably not the balanced set or priority inversion stuff either, is my guess).

I'm mostly okay with this boosting shit, but it does mean that actually accounting for what priority your thread has exactly and how long you expect it to run is almost impossible in windows. Say you make some low-priority worker thread at priority 6 and your main thread at priority 8. Is your main thread actually higher priority than the background worker? Well, no, not if he got boosted for any of various reasons, he might actually be at higher priority right now.

Okay, that's the summary of the normal scheduler, now let's look at the multi-processor deficiency.

You can understand what's going on if you see what their motivation was. Basically they wanted scheduling on each individual processor to still be as fast as possible. To do this, each processor gets its own scheduling data; that is there are the 32 priority lists of threads on each processor. When a processor has to do some scheduling, it tries to just work on its own list and schedule on itself. In particular, simple quantum expiration thread switches can happen just on the local processor without taking a system-wide lock. (* this was changed for Vista/Win7 or sumpin; old versions of the SMP scheduler took a system-wide spinlock to dispatch; see references at bottom).

(mutex/event waits take the system-wide dispatch lock because there may be threads from other processors on the wait lists for those resources)

But generally Windows does not move threads around between processors, and this is what can create the problems. In particular, there is absolutely zero code to try to distribute the work load evenly. That's up to you, or luck. Even just based on priorities it might not run the highest priority threads. Let's look at some details :

Each thread has the idea of an "ideal" processor. This is set at thread creation in a global round-robin and a processor round-robin. This is obviously a hacky attempt to balance load which is sometimes good enough. In particular if you create a thread per core, this will give you an "ideal" processor on each core, so that's what you want. It also does assign them "right" for hyperthreading, that is to minimize thread overlap, eg. it will assign core0-hyperthread0 then core1-hyperthread0 then core0-hyperthread1 then core1-hyperthread1. You can also change it manually with SetThreadIdealProcessor, but it seems to me it mostly does the right thing so there's not much need for that. (note this is different than Affinity, which forces you to run only on those processors; you don't have to run on your ideal proc, but we will see some problems later).

When the scheduler is invoked and wants to run a thread - there's a very big difference between the cases when there exist any idle processors or not. Let's look at the two cases :

If there are any idle processors : it pretty much does exactly what you would want. It tries to make use of the idle processors. It prefers whole idle cores over just idle hyperthreads. Then from among the set of good choices it will prefer your "ideal", then the ones it last ran on and the one the scheduler is running on right now. Pretty reasonable.

If there are not any idle processors : this is the bad stuff. The thread is only scheduled against its "ideal" processor. Which means it just gets stuffed on the "ideal" processor's list of threads and then scheduled on that processor by normal single-proc rules.

This has a lot of bad consequences. Say proc 0 is running something at priority 10. Proc 1-5 are all running something at priority 2. You are priority 6 and you want to run, but your ideal proc happened to be 0. Well, tough shit, you get stuck on proc 0's list and you don't run even though there's lots of priority 2 work getting done.

This can happen just by bad luck, when you happen to run into other apps threads that have the same ideal proc as you. But it can also happen due to Affinity masks. If you or somebody else has got a thread locked to some proc via the Affinity mask, and your "ideal" processor is set to try to schedule you there, you will run into them over and over.

The other issue is that even when threads are redistributed, it only happens at thread ready time. Say you have 5 processors that are idle, on the other processor there is some high priority thread gobbling the CPU. There can be threads waiting to run on that processor just sitting there. They will not get moved over to the idle processors until somebody kicks a scheduling event that tries to run them (eg. the high priority guy goes to sleep or one of the boosting mechanisms kicks in). This is a transient effect and you should never have long term scenarios where one processor is overloaded and other processors are idle, but it can happen for a few quanta.

References :

Windows Administration Inside the Windows Vista Kernel Part 1
Windows Administration Inside the Windows Vista Kernel Part 2
Sysinternals Freeware - Information for Windows NT and Windows 2000 - Publications
Inside the Windows NT Scheduler, Part 1 (access locked)
Available here : "Inside the Windows NT Scheduler" .doc version, but make sure you read this errata

Stupid annoying audio-visual formatted information. (god damnit, just use plain text people) :

Mark Russinovich Inside Windows 7 Going Deep Channel 9
Dave Probert Inside Windows 7 - User Mode Scheduler (UMS) Going Deep Channel 9
Arun Kishan Inside Windows 7 - Farewell to the Windows Kernel Dispatcher Lock Going Deep Channel 9

(I haven't actually watched these, so if there's something good in them, please let me know, in text form).

Also "Windows Internals" book, 5th edition.

ADDENDUM : The real interesting question is "can we do anything about it?". In particular, can you detect cases where you are badly load-balanced and kick Windows in some way to adjust things? Obviously you can do some things to load-balance within your own app, but when other threads are taking time from you it becomes trickier.

09-12-10 - PPM vs CM

Let me do a quick sketch of how PPM works vs how CM works to try to highlight the actual difference, sort of like I did for Huffman to Arithmetic , but I won't do as good of a job.

PPM :

make a few contexts of previous characters
  order4 = * ((U32 *)(ptr-4));
  etc.. 

look up observed counts from each context :

counts_o4 = lookup_stats( order4 );
counts_o2 = lookup_stats( order2 );
counts_o1 = lookup_stats( order1 );
counts_o0 = lookup_stats( order0 );

estimate escape probability from counts at each context :

esc_o4 = estimate_escape( order4 );
esc_o2 = estimate_escape( order2 );
...

code in order from most likely best to least :

if ( arithmetic_code( (1-esc_o4) * counts_o4 ) ) return; else arithmetic_code( esc_o4 );
exclude counts_o4
if ( arithmetic_code( (1-esc_o2) * counts_o2 ) ) return; else arithmetic_code( esc_o2 );
exclude counts_o2
...

update counts :
counts_o4 += sym;
...
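
For concreteness, estimate_escape could be as simple as the classic "PPMC" style estimate. This is just one possible choice, not what any particular modern PPM does, and "Context" here is a hypothetical stats struct :

// PPMC-style escape estimate :
// P(escape) = (# distinct symbols seen) / (total count + # distinct symbols)
float estimate_escape_ppmc( const Context * c )
{
    int distinct = c->numDistinctSymbols;   // symbols with nonzero count in this context
    int total    = c->totalCount;           // sum of all counts in this context
    return (float) distinct / (float)( total + distinct );
}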

Now let's do context mixing :

CM :

make a few contexts of previous characters
  order4 = * ((U32 *)(ptr-4));
  etc.. 

look up observed counts from each context :

counts_o4 = lookup_stats( order4 );
counts_o2 = lookup_stats( order2 );
counts_o1 = lookup_stats( order1 );
counts_o0 = lookup_stats( order0 );

estimate weights from counts at each context :

w_o4 = estimate_weight( order4 );
w_o2 = estimate_weight( order2 );
...

make blended counts :
counts = w_o4 * counts_o4 + w_o2 * counts_o2 + ...

now code :
arithmetic_code( counts );
...

update counts :
counts_o4 += sym;
...
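
For concreteness, estimate_weight could be something naive like this. Modern CM actually does logistic mixing with adaptively learned weights, but this shows the shape of the idea ("Context" is the same hypothetical stats struct as above) :

// naive weight estimate : trust a context more when it has more evidence
// and a more peaked distribution ; the constant 4 is arbitrary
float estimate_weight_naive( const Context * c )
{
    if ( c->totalCount == 0 ) return 0.f;
    float peak = (float) c->maxSymbolCount / (float) c->totalCount;
    float conf = (float) c->totalCount / (float)( c->totalCount + 4 );
    return peak * conf;
}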

It should be clear we can put them together :

make a few contexts of previous characters
  order4 = * ((U32 *)(ptr-4));
  etc.. 

look up observed counts from each context :

counts_o4 = lookup_stats( order4 );
counts_o2 = lookup_stats( order2 );
counts_o1 = lookup_stats( order1 );
counts_o0 = lookup_stats( order0 );
if ( CM )
{
    estimate weights from counts at each context :

    w_o4 = estimate_weight( order4 );
    w_o2 = estimate_weight( order2 );
    ...

    make blended counts :
    counts = w_o4 * counts_o4 + w_o2 * counts_o2 + ...

    // now code :
    arithmetic_code( counts );
    ...
}
else PPM
{
    estimate escape probability from counts at each context :

    esc_o4 = estimate_escape( order4 );
    esc_o2 = estimate_escape( order2 );
    ...

    code in order from most likely best to least :

    if ( arithmetic_code( (1-esc_o4) * counts_o4 ) ) return; else arithmetic_code( esc_o4 );
    exclude counts_o4
    if ( arithmetic_code( (1-esc_o2) * counts_o2 ) ) return; else arithmetic_code( esc_o2 );
    exclude counts_o2
    ... 
}

update counts :
counts_o4 += sym;
...

In particular if we do our PPM in a rather inefficient way we can make them very similar :
make a few contexts of previous characters
  order4 = * ((U32 *)(ptr-4));
  etc.. 

look up observed counts from each context :

counts_o4 = lookup_stats( order4 );
counts_o2 = lookup_stats( order2 );
counts_o1 = lookup_stats( order1 );
counts_o0 = lookup_stats( order0 );

accumulate into blended_counts :
blended_counts = 0;
if ( CM )
{
    estimate weights from counts at each context :

    w_o4 = estimate_weight( order4 );
    w_o2 = estimate_weight( order2 );
    ...


}
else PPM
{
    estimate escape probability from counts at each context :

    esc_o4 = estimate_escape( order4 );
    esc_o2 = estimate_escape( order2 );
    ...

    do exclude :
    exclude counts_o4 from counts_o2
    exclude counts_o4,counts_o2 from counts_o1
    ...

    make weights :

    w_o4 = (1 - esc_o4);
    w_o2 = esc_o4 * (1 - esc_o2);
    ...

}

make blended counts :
blended_counts += w_o4 * counts_o4;
blended_counts += w_o2 * counts_o2;
...

arithmetic_code( blended_counts );

update counts :
counts_o4 += sym;
...
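
To make the equivalence concrete with numbers : if esc_o4 = 0.5 and esc_o2 = 0.25 , the implied weights are

w_o4 = (1 - 0.5)                 = 0.5
w_o2 = 0.5 * (1 - 0.25)          = 0.375
w_o1 = 0.5 * 0.25 * (1 - esc_o1)   etc.

so escaping is just a product-form choice of weights (applied to the excluded counts), rather than weights estimated directly.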

Note that I haven't mentioned whether we are doing binary alphabet or large alphabet or any other practical issues, because it doesn't affect the algorithm in a theoretical way.

While I'm at it, let me take the chance to mark up the PPM pseudocode with where "modern" PPM differs from "classical" PPM : (by "modern" I mean 2002/PPMii and by "classical" I mean 1995/"before PPMZ").

PPM :

make a few contexts of previous characters
  order4 = * ((U32 *)(ptr-4));
  etc.. 
also make non-continuous contexts like
skip contexts : AxBx
contexts containing only a few top bits from each byte
contexts involving a word dictionary
contexts involving current position in the stream 

look up observed counts from each context :

counts_o4 = lookup_stats( order4 );
counts_o2 = lookup_stats( order2 );
counts_o1 = lookup_stats( order1 );
counts_o0 = lookup_stats( order0 );

possibly rescale counts using a "SEE" like operator
eg. use counts as an observation which you then model to predict coding probabilities

estimate escape probability from counts at each context :

esc_o4 = estimate_escape( order4 );
esc_o2 = estimate_escape( order2 );
...
secondary estimate escape using something like PPMZ
also not just using current context but also other contexts and side information

code in order from most likely best to least :

use LOE to choose best order to start from, not necessarily the largest context
also don't skip down through the full set, rather choose a reduced set

if ( arithmetic_code( (1-esc_o4) * counts_o4 ) ) return; else arithmetic_code( esc_o4 );
exclude counts_o4
if ( arithmetic_code( (1-esc_o2) * counts_o2 ) ) return; else arithmetic_code( esc_o2 );
exclude counts_o2
...

update counts :
counts_o4 += sym;
...
do "partial exclusion" like PPMii, do full update down to coded context
  and then reduced update to parents to percolate out information a bit
do "inheritance" like PPMii - novel contexts updated from parents
do "fuzzy updates" - don't just update your context but also neighbors
  which are judged to be similar in some way


And that's all folks.

09-12-10 - Context Weighting vs Escaping

The defining characteristic of PPM (by modern understanding) is that you select a context, try to code from it, and if you can't you escape to another context. By contrast, context weighting selects multiple contexts and blends the probabilities. These are actually not as different as they seem, because escaping is just a particular way of weighting - the weights are products of escape probabilities. In particular :

    context 0 gives probabilities P0 (0 is the deeper context)
    context 1 gives probabilities P1 (1 is the parent)
    how do I combine them ?

    escaping :

    P = (1 - E) * P0 + E * (P1 - 0)

    P1 - 0 = Probabilities from 1 with chars from 0 excluded

    weighting :

    P = (1 - W) * P0 + W * P1

    with no exclusion in P1

In particular, the only difference is the exclusion. Specifically, the probabilities of non-shared symbols are the same, but the probabilities of symbols that occur in both contexts are different. The flaw with escaping is probably that it gives too low a weight to symbols that occur in both contexts. More generally you should probably be considering something like :

    I = intersection of contexts 0 and 1

    P = a * (P0 - I) + b * (P1 - I) + c * PI

that is, some weighting for the unique parts of each context and some weighting for the intersection.
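
In code, over a byte alphabet, that might look something like this sketch. How you actually choose a,b,c is the whole question, and treating PI as the average of the two is just an arbitrary choice here :

// sketch : blend two probability arrays, treating intersection symbols separately
void blend_with_intersection( const float * P0, const float * P1,
                              float a, float b, float c,
                              float * P )
{
    for( int s = 0; s < 256; s++ )
    {
        bool in0 = ( P0[s] > 0.f );
        bool in1 = ( P1[s] > 0.f );
        if ( in0 && in1 )  P[s] = c * 0.5f * ( P0[s] + P1[s] );  // intersection
        else if ( in0 )    P[s] = a * P0[s];                     // unique to the deeper context
        else               P[s] = b * P1[s];                     // unique to the parent
    }
    // renormalize P before coding
}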

Note that PPMii is sort of trying to compensate for this because when it creates context 0, it seeds it with counts from context 1 in the overlap.

Which obviously raises the question : rather than the context initialization of PPMii, why not just look in your parent and take some of the counts for shared symbols?

(note that there are practical issues about how your compute your weight amount or escape probability, and how exactly you mix, but those methods could be applied to either PPM or CW, so they aren't a fundamental difference).

09-12-10 - Challenges in Data Compression 1 - Finite State Correlations

One of the classic shortcomings of all known data compressors is that they can only model "finite context" information and not "finite state" data. It's a little obtuse to make this really formally rigorous, but you could say that structured data is data which can be generated by a small "finite state" machine, but cannot be generated by a small "finite context" machine. (or I should say, transmitting the finite state machine to generate the data is much smaller than transmitting the finite context machine to generate the data, along with selections of probabilistic transitions in each machine).

For example, maybe you have some data where after each occurrence of 011 it becomes more likely to repeat. To model that with an FSM you only need a state for 011, which loops back to itself and increases P. To model it with finite contexts you need an 011 state, an 011011 , 011011011 , etc. But you might also have correlations like : every 72 bytes there is a dword which is equal to the dword at -72 bytes xor'ed with the dword at -68 bytes, plus a random number which is usually small.

The point is not that these correlations are impossible to model using finite contexts, but the correct contexts to use at each spot might be infinitely large.

Furthermore, you can obviously model FSM's by hard-coding them into your compressor. That is, you assume a certain structure and make a model of that FSM, and then context code from that hard-coded FSM. What we can't do is learn new FSM's from the data.

For example, say you have data that consists of a random dword, followed by some unknown number of 0's, and then that same dword repeated, like

 
DEADBEEF0000000000000000DEADBEEF 
you can model this perfectly with an FCM if you create a special context where you cut out the run of zeros. So you make a context like
DEADBEEF00
and then if you keep seeing zeros you leave the context alone, if it's not a zero you just use normal FCM (which will predict DEADBEEF). What you've done here is to hard-code the finite state structure of the data into your compressor so that you can model it with finite contexts.
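
The hard-coding might look something like this toy context hash (a sketch; it ignores buffer-start checks, and any run of zeros hashes to the same single token) :

// toy sketch : hash the preceding bytes, but collapse any run of zeros into one token,
// so DEADBEEF followed by 7 zeros and DEADBEEF followed by 16 zeros share a context
U32 make_zero_run_context( const U8 * ptr, int numBytes )
{
    U32 h = 0;
    for( int got = 0; got < numBytes; got++ )
    {
        U8 b = *--ptr;                     // walk backwards from the current position
        if ( b == 0 )
        {
            while ( ptr[-1] == 0 ) ptr--;  // skip the rest of the zero run
        }
        h = h * 0x9E3779B1 + b;            // any decent hash step
    }
    return h;
}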

In real life we actually do have this kind of weird "finite state" correlation in a lot of data. One common example is "structured data". "Structured data" is data where there is a strong position-based pattern. That is, maybe a sequence of 32 bit floats, so there's strong correlation to (pos&3), or maybe a bunch of N-byte C structures with different types of junk in them.

Note that in this sense, the trivial mode changes of something like HTML or XML or even English text is not really "structured data" in our sense, even though they obviously have structure, because that structure is largely visible through finite contexts. That is, the state transitions of the structure are given to us in actual bytes in the data, so we can find the structure with only finite context modeling. (obviously English text does have a lot of small-scale and large-scale finite-state structure in grammars and context and so on).

General handling of structured data is the big unsolved problem of data compression. There are various heuristic tricks floating around to try to capture a bit of it. Basically they come down to hard coding a specific kind of structure and then using blending or switching to benefit from that structure model when it applies. In particular, 4 or 8 byte aligned patterns are the most common and easy to model structure, so people build specific models for that. But nobody is doing general adaptive structure detection and modeling.

09-12-10 - Challenges in Data Compression 2 - Sparse Contexts

A "sparse" context is one with very little information in it. Exactly what defines "sparse" depends on the situation, but roughly it means it hasn't been seen enough times that you can trust the evidence in it. Obviously a large-alphabet context needs a lot more information to become confident than a binary context, etc.

In the most extreme case, a sparse context might only have 1 previous occurrence. Let's take the binary case for clarity, so we have {n0=1,n1=0} or {n0=0,n1=1} in a single context.

There are two issues with a sparse context :

1. What should our actual estimate for P0 (the probability of a zero) be? There's a lot of theory about ideal estimators and you can go read papers on the unbiased estimator or the KT estimator or whatever that will give you formulas like


P0 = (n0 + k) / (n0 + n1 + 2k)

k = 0 or 1/2 or whatever depending on the flavor of the month

but this stuff is all useless in the sparse case. At {n0=1,n1=0} the actual probability of a 0 might well be 99% (or it could be 10%) and these estimators won't help you with that. (that estimator family in code is sketched below, after point 2.)

2. Making an estimate of the confidence you have in the sparse context. That is, deciding whether to code from it at all, or how to weight it vs other contexts. Again the important thing is that the statistics in the context itself do not really help you solve this problem.
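
Just for reference, the estimator family from point 1 in code :

// k = 1/2 gives the KT estimator, k = 1 gives Laplace
float estimate_p0( int n0, int n1, float k )
{
    return ( n0 + k ) / ( n0 + n1 + 2.f*k );
}

// at {n0=1,n1=0} with k = 1/2 this says P0 = 1.5/2 = 0.75 ,
// no matter whether this particular context is really 99% or 10% - which is the point.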

Now let me take a second to convince you how important this problem is.

In classical literature this issue is dismissed because it is only a matter of "initialization" and the early phase, and over infinite time all your contexts become rich, so asymptotically it doesn't affect ratio at all. Not only is that dismissive of practical finite size data, but it's actually wrong even for very large data. In fact sparse contexts are not just an issue for the early "learning" phase of a compressor.

The reason is that a good compressor should be in the "learning" phase *always*. Basically, to be optimal you want to be as close to the edge of sparse contexts as possible without stepping over the cliff into your statistics becoming unreliable. There are two reasons why.

1. Model growth and the edges of the model. The larger your context, or the more context information you can use, the better your prediction will be. As you get more data, the small contexts might become non-sparse, but that just means that you should be pushing out more into longer contexts. You can think of the contexts as tree structured (even if they actually aren't). Around the root you will have dense information, the further out you go to the leaves the sparser they will be.

For example, PPM* taught us that using the longest context possible is helpful. Quite often this context has only occurred once. As you get further into the file, your longest possible context simply gets longer, it doesn't get less sparse.

2. Locality and transients. Real files are not stationary and it behooves you to kill old information. Furthermore, even in dense contexts there is a subset of the most recent information which is crucial and sparse.

For example, in some context you've seen 50 A's and 20 B's. Now you see two C's. Suddenly you are in a sparse situation again. What you have is an insufficient sample situation. Should I still use those 50 A's I saw, or am I now 99% likely to make more C's ? I don't have enough statistics in this context to tell, so I'm in a sparse situation again.

There are various attempts at addressing this, but like all the problems in this series there's been no definitive attack on it.

9/06/2010

09-06-10 - WeightedDecayRand

I'm sure I've talked to people about this many times but I don't think I've ever written it up. Anyway, here's one from the files of game developer arcana. I'm sure this is common knowledge, but one of those things that's too trivial to write an article about.

When you use random numbers in games, you almost never actually want random numbers. Rather you want something that *feels* sort of random to the player. In particular the biggest difference is that real random numbers can be very bunchy, which almost always feels bad to the player.

Common cases are like random monster spawns or loot drops. It's really fucking annoying when you're trying to cross some terrain in an Ultima type game and you get unlucky with the random number generator and it wants to spawn a random encounter on every single fucking tile. Or if you're killing some mobs who are supposed to drop a certain loot 1 in 10 times, and you kill 100 of them and they don't fucking drop it because you keep getting unlucky in rand().

What we want is not actually rand. If you design a monster to drop loot roughly 1 in 10 times, you want the player to get it after 5 - 20 kills. You don't really want them to get it on first kill, nor do you want them to sit around forever not getting it.

In all cases, to do this what we need is a random generator which has *state* associated with the event. You can't use a stateless random number generator. So for example with loot drops, the state remembers that it has dropped nothing N times, and that can start affecting the probability of the next drop. Basically what you want to do for something like that is start with a very low probability to drop loot (maybe even 0) and then have it ratchet up after each non-drop. Once the loot is dropped, you reset the probability back to 0. So it's still random, but a more controllable experience for the designer.
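
e.g. a loot-drop ratchet in that spirit might be (toy sketch, made-up numbers; frandunit() is the same [0,1) rand used below) :

// toy loot-drop ratchet : zero chance at first, climbs on every failed kill, resets on a drop
struct LootDropper
{
    float m_chance;   // current drop chance, starts at 0
    float m_step;     // eg. 0.05 : reaches certainty after 20 failed kills

    bool OnKill()
    {
        if ( frandunit() < m_chance )
        {
            m_chance = 0.f;      // dropped ; reset
            return true;
        }
        m_chance += m_step;      // no drop ; more likely next time
        return false;
    }
};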

Now, this state generally should decay in some way. That is, if you kill mobs and get no loot and then go away and do a bunch of stuff, when you come back it should be back to baseline - it doesn't remember what happened a long time ago. And then the next important issue is how does that state decay? Is it by time or by discrete events? Whether time or event driven makes the most sense depends on the usage.

The simplest version of a stateful semi-random generator is simply one that forbids repeats. Something like :


int Draw()
{
    for(;;)
    {
        int i = RandMod(m_num);
        if ( i != m_last )
        {
            m_last = i;
            return i;
        }
    }
}

which is an okay solution in some cases. But more generally you want to not just forbid repeats, rather allow them but make them less likely. And for N-ary events you don't just care about the last one, but all the last ones.

A simple binary stateful semirandom generator is like this :


int Draw()
{
    Tick();

    float p = frandunit() * ( m_p0 + m_p1 );
    if ( p < m_p0 )
    {
        m_p0 -= m_repeatPenalty;
        m_p0 = MAX(m_p0,0.f);
        return 0;
    }
    else
    {
        m_p1 -= m_repeatPenalty;
        m_p1 = MAX(m_p1,0.f);       
        return 1;
    }
}

You should be able to see that this is a simple weighted coin flip where we penalize repeats by some repeat parameter. Here we have introduced the idea of a Tick() - the tick is some function that pushes the probabilities back towards 50/50 , either by time evolution or by a fixed step.

More generally you want N-ary with various parameters. Here's some code :


//-----------------------------------------------------------

class WeightedDecayRand
{
public:

    // redrawPenalty in [0,1] - 0 is a true rand, 1 forbids repeats
    // restoreSpeed is how much chance of redraw is restored per second or per draw
    explicit WeightedDecayRand(int num,float redrawPenalty,float restoreSpeed);
    ~WeightedDecayRand();

    int Draw();
    void Tick(float dt);
    
    int DrawFixedDecay();
    int DrawTimeDecay(double curTime);

private:
    int     m_num;
    float * m_weights;  
    float   m_redrawPenalty;
    float   m_restoreSpeed;
    float   m_weightSum;
    double  m_lastTime;

    FORBID_CLASS_STANDARDS(WeightedDecayRand);
};

//-----------------------------------------------------------

WeightedDecayRand::WeightedDecayRand(int num,float redrawPenalty,float restoreSpeed) :
    m_num(num),
    m_redrawPenalty(redrawPenalty),
    m_restoreSpeed(restoreSpeed),
    m_lastTime(0)   // so the first DrawTimeDecay doesn't tick by garbage
{
    m_weights = new float [num];
    for(int i=0;i < num;i++)
    {
        m_weights[i] = 1.f;
    }
    m_weightSum = (float) num;
}

WeightedDecayRand::~WeightedDecayRand()
{
    delete [] m_weights;
}

int WeightedDecayRand::Draw()
{
    float p = frandunit() * m_weightSum;
    for(int i=0;;i++)
    {
        if ( i == m_num-1 || p < m_weights[i] )
        {
            // accepted !
            m_weightSum -= m_weights[i];
            m_weights[i] -= m_redrawPenalty;
            m_weights[i] = MAX(m_weights[i],0.f);
            m_weightSum += m_weights[i];
            return i;
        }
        else
        {
            p -= m_weights[i];
        }
    }
}

void WeightedDecayRand::Tick(float dt)
{
    m_weightSum = 0;
    for(int i=0;i < m_num;i++)
    {
        if ( m_weights[i] < 1.f )
        {
            m_weights[i] += dt * m_restoreSpeed;
            m_weights[i] = MIN(m_weights[i],1.f);
        }
        m_weightSum += m_weights[i];
    }
}

int WeightedDecayRand::DrawFixedDecay()
{
    int ret = Draw();
    
    Tick( 1.f );
    
    return ret;
};

int WeightedDecayRand::DrawTimeDecay(double curTime)
{
    if ( curTime != m_lastTime )
    {
        Tick( (float)(curTime - m_lastTime) );
        m_lastTime = curTime;
    }

    int ret = Draw();
        
    return ret;
};

//-----------------------------------------------------------
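
Usage is something like this (example numbers only) :

// eg. 4 kinds of random encounter ; heavily discourage immediate repeats,
// and let the penalty wear off at 0.5 per second of game time
WeightedDecayRand encounterRand( 4, 0.9f, 0.5f );

// per opportunity :
int which = encounterRand.DrawTimeDecay( currentGameTimeSeconds );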

09-06-10 - Cross Platform SIMD

I did a little SIMD "Lookup3" (Bob Jenkins' hash), and as a side effect, it made me realize that you can almost get away with cross platform SIMD these days. All the platforms do 16-byte SIMD, so that's nice and standard. The capabilities and best ways of doing things aren't exactly the same, but it's pretty close, and you can mostly cover the differences. Obviously to get super maximum optimal you would want to special case per platform, but even then having a base cross-platform SIMD implementation to start from would let you get all your platforms going easier and identify the key spots to do platform specific work.

Certainly for "trivial SIMDifications" this works very well. Trivial SIMDification is when you have a scalar op that you are doing a lot of, and you change that to doing 4 of them at a time in parallel with SIMD. That is, you never do horizontal ops or anything else funny, just a direct substitution of scalar ops to 4-at-a-time vector ops. This works very uniformly on all platforms.

Basically you have something like :

U32 x,y,z;

    x += y;
    y -= z;
    x ^= y;
and all you do is change the data type :
simdU32 x,y,z;

    x += y;
    y -= z;
    x ^= y;
and now you are doing four of them at once.

The biggest issue I'm not sure about is how to define the data type.

From a machine point of view, the SIMD register doesn't have a type, so you might be inclined to just expose a typeless "qword" and then put the type information in the operator. eg. have a generic qword and then something like AddQwordF32() or AddQwordU16() . But this is sort of a silly argument. *All* registers are typeless, and yet we find data types in languages to be a convenient way of generating the right instructions and checking program correctness. So it seems like the ideal thing is to really have something like a qwordF32 type, etc for each way to use it.

The problem is how you actually do that. I'm a little scared that anything more than a typedef might lead to bad code generation. The simple typedef method is :


#if SPU
typedef qword simdval;
#elif XENON
typedef __vector4 simdval;
#elif SSE
typedef __m128 simdval;
#endif

But the problem is if you want to make them have their type information, like say :

typedef __vector4 simdF32;
typedef __vector4 simdU32;

then when you make an "operator +" for F32 and one for U32 - the compiler can't tell them apart. (if for no good reason you don't like operator +, you can pretend that says "Add"). The problem is the typedef is not really a first class type, it's just an alias, so it can't be used to select the right function call.

Of course one solution is to put the type in the function name, like AddF32,AddU32,etc. but I think that is generally bad code design because it ties operation to data, which should be as independent as possible, and it just creates unnecessary friction in the non-simd to simd port.

If you actually make them a proper type, like :


struct simdF32 { __m128  m; };  // (SSE flavor shown for the example)
struct simdU32 { __m128i m; };

then you can do overloading to get the right operation from the data type, eg :

RADFORCEINLINE simdU32 operator + ( const simdU32 lhs, const simdU32 rhs )
{
    simdU32 ret; ret.m = _mm_add_epi32(lhs.m,rhs.m); return ret;
}

RADFORCEINLINE simdF32 operator + ( const simdF32 lhs, const simdF32 rhs )
{
    simdF32 ret; ret.m = _mm_add_ps(lhs.m,rhs.m); return ret;
}

The problem is that there is some reason to believe that anything but the fundamental type is not handled as well by the compiler. That is, qword,__vector4, etc. get special very good handling by the compiler, and anything else, even a struct which consists of nothing but that item, gets handled worse. I haven't actually seen this myself, but there are various stories around the net indicating this might be true.

I think the typedef way is just too weak to actually be useful, I have to go the struct way and hope that modern compilers can handle it. Fortunately GCC has the very good "vector" keyword thingy, so I don't have to do anything fancy there, and MSVC is now generally very good at handling mini objects (as long as everything inlines).

Another minor annoying issue is how to support reinterprets. eg. I have this simdU32 and I want to use it as a simdU16 with no conversion. You can't use the standard C method of value at/address of, because that might go through memory which is a big disaster.
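
On SSE the cast intrinsics handle this without touching memory; a sketch for that case, using the struct types above :

// reinterpret with no conversion and no trip through memory (SSE flavor)
RADFORCEINLINE simdU32 ReinterpretAsU32( const simdF32 v )
{
    simdU32 ret;
    ret.m = _mm_castps_si128( v.m );   // free : just retypes the register
    return ret;
}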

And the last little issue is whether to provide conversion to a typeless qword. One argument for that would be that things like Load() and Store() could be implemented just from qword, and then you could fill all your data types from that if they have conversion. But if you allow implicit conversion to/from typeless, then all your types can be directly made into each other. That usually confuses the hell out of function overloading among other woes.

9/04/2010

09-04-10 - Holy fuck balls

MSVC 2005 x64 turns this :

static RADFORCEINLINE void rrMemCpySmall(void * RADRESTRICT to,const void * RADRESTRICT from, SINTa size)
{
    U8 * RADRESTRICT pto = (U8 * RADRESTRICT) to;
    const U8 * RADRESTRICT pfm = (const U8 * RADRESTRICT) from;
    for(SINTa i=0;i < size;i++) pto[i] = pfm[i];
}

into

call memcpy

god damn you, you motherfuckers. The whole reason I wrote it out by hand is because I know that size is "small" ( < 32 or so) so I don't want the function call overhead and the rollup/rolldown of a full memcpy. I just want you to generate rep movsb. If I wanted memcpy I'd fucking call memcpy god dammit. Friggle fracking frackam jackam.
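
(If you really want the string-move instruction out of MSVC, the __movsb intrinsic from <intrin.h> does compile to rep movsb; whether that actually beats the call for your sizes is a separate question. A sketch :)

#include <intrin.h>

static RADFORCEINLINE void rrMemCpySmall(void * RADRESTRICT to,const void * RADRESTRICT from, SINTa size)
{
    __movsb( (unsigned char *)to, (const unsigned char *)from, (size_t)size );
}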


In other news, I know it's "cool" to run with /W4 on MSVC or -Wall on GCC (or even -Wextra), but I actually think it's counterproductive. The difference between /W3 and /W4 is almost entirely warnings that I don't give a flying fuck about, shit like "variable is unused". Okay fine, it's not used, fucking get rid of it and don't tell me about it.

Shit like "variable initialized but not used" , "conditional is constant", "unused static function removed" is completely benign and I don't want to hear about it.

I've always been peer-pressured into running with max warning settings because it's the "right" thing to do, but actually I think it's just a waste of my time.


MoveFileTransacted is pretty good. There's other transactional NTFS shit but really this is the only one you need. You can write your changes to a temp file (on your ram drive, for example) then use MoveFileTransacted to put them back on the real file, and it's nice and safe.
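
A minimal sketch of that pattern (error handling omitted; CreateTransaction/CommitTransaction come from KtmW32) :

#include <windows.h>
#include <ktmw32.h>   // link with ktmw32.lib

// write your changes to tempFile first (eg. on the ram drive), then swap it in transactionally
void CommitTempOverReal( const wchar_t * tempFile, const wchar_t * realFile )
{
    HANDLE txn = CreateTransaction( NULL, NULL, 0, 0, 0, 0, NULL );
    MoveFileTransactedW( tempFile, realFile, NULL, NULL,
                         MOVEFILE_REPLACE_EXISTING | MOVEFILE_COPY_ALLOWED, txn );
    CommitTransaction( txn );
    CloseHandle( txn );
}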

BTW while I'm ranting about ram drives; how the fuck is that not in the OS? And my god people are getting away with charging $80 for them. AND even those expensive ones don't support the only feature I actually want - dynamic sizing. It should be the fucking size of the data on it, not some static predetermined size.

9/03/2010

09-03-10 - Page file on Ramdrive = 3LIT3

One of the awesome new "tweaks" that computer buffoons are doing is making ramdisks and putting their windows swap file on the ram disk. Yes, brilliant, that speeds up your swap file alright. So awesome.

(BTW yes I do know that it actually makes sense in one context : some of the fancy ram drives can access > 3 GB RAM in 32 bit windows, in which case it gives you a way to effectively access your excess RAM in an easy way on the old OS; but these geniuses are on Win7-64).

(And of course the right thing to do is to disable page file completely; sadly Windows has not kept up with this modern era of large RAMs and the page file heuristics are not well suited to use as an emergency fallback).


What I have copied from the tweakers is putting my firefox shite on a ram disk :

SuperSpeed Firefox By Putting Profile and SQLite Database in RAMDisk Raymond.CC Blog
Guide Move 99% of All Firefox Writes off your SSD

I can get away with a 128M ram disk and it massively reduces the amount of writes to my SSD. Reducing writes to the SSD is nice for my paranoia (it also speeds up firefox startup and browsing), but the really strong argument for doing this is that I've caught SQLite eating its own ass a few times. I really hate it when I'm just sitting there doing nothing and all of a sudden my fan kicks in and my CPU shoots up, I'm like "WTF is going on god dammit". Well, the last few times that's happened, it's been the SQL service fucking around with its files for no god damn good reason. I'm not sure if putting the firefox SQL DB files on the ramdisk actually fixes this, but it makes it cheaper when it does happen anyway.

Also, don't actually use the above linked methods. Instead do this :

Do "about:config" and add "browser.cache.disk.parent_directory" . Obviously also check browser.cache.disk.enable and browser.cache.disk.capacity ; capacity is in KB BTW.

Run "firefox -ProfileManager". Make a new profile and call it "ramprofile" or whatever. Change the dir of that profile to somewhere on the ramdisk. Look at your other profile (probably "default") and see what dir it's in. Make "ramprofile" the default startup profile.

Quit firefox and copy the files from your default profile dir to your ramprofile dir. Run firefox and you are good to go.

Do not set your ramdisk to save & restore itself (as some of the fancy ramdisks can do these days). Instead put the profile dir copy operation in your machine startup bat. It's somewhat faster to zip up your default profile dir and have your machine startup bat copy the zip over to the ramdisk and unzip it there.

There's another advantage of this, which is that your whole Firefox profile is now temporary for the session, unless you explicitly copy it back to your hard disk. That's nice. If you pick up tracking cookies or whatever while browsing, or some malicious script changes your home page or whatever, they go away when you reboot.
