7/09/2011

07-09-11 - LockFree - Thomasson's simple MPMC

Warming back up into this stuff, here's some very simple algos to study.

first fastsemaphore :


class fastsemaphore
{
    rl::atomic<long> m_count;
    rl::rl_sem_t m_waitset;

public:
    fastsemaphore(long count = 0)
    :   m_count(count)
    {
        RL_ASSERT(count > -1);
        sem_init(&m_waitset, 0, 0);
    }

    ~fastsemaphore()
    {
        sem_destroy(&m_waitset);
    }

public:
    void post()
    {
        if (m_count($).fetch_add(1) < 0)
        {
            sem_post(&m_waitset);
        }
    }

    void wait()
    {
        if (m_count($).fetch_add(-1) < 1)
        {
            // loop because sem_wait returns non-zero for spurious failure
            while (sem_wait(&m_waitset));
        }
    }
};

Most code I post will be in Relacy notation, which is just modified C++0x. Note that C++0x atomics without explicit memory ordering specifications (such as the fetch_adds here) default to memory_order_seq_cst (sequential consistency).

Basically your typical OS "semaphore" is a very heavy kernel-space object (on Win32 for example, semaphores are cross-process). Just doing P or V on it even when you don't modify wait states is very expensive. This is just a user-space wrapper which only calls to the kernel semaphore if it is at an edge transition that will cause a thread to either go to sleep or wake up.

So this is a simple thing that's nice to have. Note that m_count can go negative: when it's negative, its magnitude is the number of threads that are sleeping on that semaphore. (between post and wakeup a thread can be sleeping but no longer counted, so more precisely: threads not counted in minus m_count are either running or pending-running).


Now we can look at Thomasson's very simple MPMC bounded blocking queue :


template<typename T, std::size_t T_depth>
class mpmcq
{
    rl::atomic<T*> m_slots[T_depth];
    rl::atomic<std::size_t> m_push_idx;
    rl::atomic<std::size_t> m_pop_idx;
    fastsemaphore m_push_sem;
    fastsemaphore m_pop_sem;

public:
    mpmcq() 
    :   m_push_idx(T_depth), 
        m_pop_idx(0),
        m_push_sem(T_depth),
        m_pop_sem(0)
    {
        for (std::size_t i = 0; i < T_depth; ++i)
        {
            m_slots[i]($).store(NULL);
        }
    }

public:
    void push(T* ptr)
    {
        m_push_sem.wait();

        std::size_t idx = m_push_idx($).fetch_add(1) & (T_depth - 1);

        rl::backoff backoff;

        while (m_slots[idx]($).load())
        {
            backoff.yield($);
        }

        RL_ASSERT(! m_slots[idx]($).load());

        m_slots[idx]($).store(ptr);

        m_pop_sem.post();
    }


    T* pop()
    {
        m_pop_sem.wait();

        std::size_t idx = m_pop_idx($).fetch_add(1) & (T_depth - 1);

        T* ptr;
        rl::backoff backoff;

        while ( (ptr = m_slots[idx]($).load()) == NULL )
        {
            backoff.yield($);
        }
        
        m_slots[idx]($).store(NULL);

        m_push_sem.post();

        return ptr;
    }
};

First let's understand what's going on here. It's just an array of slots with a reader index and writer index that loop around (the masking with T_depth - 1 assumes T_depth is a power of two). "pop_sem" counts the number of filled slots - so the popper waits on that semaphore to see filled slots be non-zero. "push_sem" counts the number of available slots - so the pusher waits on that being greater than zero to be able to fill a slot.

So the producer and consumer both nicely go to sleep and wake each other when they should. Also because we use "fastsemaphore" they have reasonably low overhead when they are in the non-sleeping case.

Now, why is the weird backoff logic there? It's because of the "M" (for multiple) in MPMC. If this was an SPSC queue then it could be much simpler :


    void push(T* ptr)
    {
        m_push_sem.wait();

        std::size_t idx = m_push_idx($).fetch_add(1) & (T_depth - 1);

        RL_ASSERT(! m_slots[idx]($).load());

        m_slots[idx]($).store(ptr);

        m_pop_sem.post();
    }


    T* pop()
    {
        m_pop_sem.wait();

        std::size_t idx = m_pop_idx($).fetch_add(1) & (T_depth - 1);

        /* (*1) */

        T* ptr = m_slots[idx]($).exchange(NULL);
        RL_ASSERT( ptr != NULL );

        m_push_sem.post();

        return ptr;
    }

which should be pretty obviously correct for SPSC.

But, now consider you have multiple consumers and the queue is completely full.

Consumer 1 gets a pop_idx = 2. But then at (*1) it swaps out and doesn't run any more.

Consumer 2 gets a pop_idx = 3 and runs through and posts to the push semaphore.

Now a producer runs and gets push_idx = 2. It believes there is an empty slot it can write to, but it looks in slot 2 and there's still something there (because consumer 1 hasn't cleared its slot yet). So, it has to do the backoff-yield loop to give consumer 1 some CPU time to let it run.

So the MPMC with backoff-yield works, but it's not great. As long as the queue is near empty it works reasonably well, but when it's full it acts like a mutex-based queue: one consumer being swapped out can block all your pushers from running. And because it's just a busy wait, the normal OS hacks that rescue you from this (like Windows priority boosts) don't apply. (This kind of thing is exactly why the Windows scheduler has so many hacks, and why, despite your whining, you really do want it to be like that.)

7/08/2011

07-08-11 - Who ordered Event Count -

Say you have something like a lock-free queue , and you wish to go into a proper OS-level thread sleep when it's empty (eg. not just busy wait).

Obviously you try something like :


popper :

item = queue.pop();
if ( item == NULL )
{
  Wait(handle);
}


pusher :

was_empty = queue.push();
if ( was_empty )
{
  Signal(handle);
}

where we have extended queue.push to atomically check if the queue was empty and return a bool (this is easy to do for most lock-free queue implementations).

This doesn't work. The problem is that between the pop() and the Wait(), somebody could push something on the queue. That means the queue is non-empty at that point, but you go to sleep anyway, and nobody will ever wake you up.

Now, one obvious solution is to put a Mutex around the whole operation and use a "Condition Variable" (which is just a way of associating a sleep/wakeup with a mutex). That way the mutex prevents the state of the queue from changing while you decide to go to sleep. But we don't want to do that today because the whole point of our lock-free queue is that it's lock-free, and a mutex would spoil that. (actually I suppose the classic solution to this problem would be to use a counting semaphore and inc it on each push and dec it on each pop, but that has even more overhead if it's a kernel semaphore). Basically we want these specific fast paths and slow paths :


fast path :
  when popper can get an item without sleeping
  when pusher adds an item and there are no waiters

slow path :
  when popper has no work to do and goes to sleep
  when pusher needs to wake a popper

we're okay with the sleep/wakeup part being slow because that involves thread transitions anyway, so it's always slow and complicated. But in the other case where a core is just sending a message to another running core it should be mutex-free.

So, the obvious answer is to do a kind of double-check, something like :


popper :

item = queue.pop(); 
if ( item ) return item;  // *1
atomic_waiters ++;   // *2
item = queue.pop();  // *3
if ( item ) { atomic_waiters--; return item; }
if ( item == NULL )
{
  Wait(handle);
}
atomic_waiters--;


pusher :

queue.push();
if ( atomic_waiters > 0 )
{
  Signal(handle);
}

this gets the basics right. First popper has a fast path at *1 - if it gets an item, it's done. It then registers the fact that it's going to wait to a shared variable at *2. Then it double checks at *3. The double check is not an optimization to avoid sleeping your thread, it's crucial for correctness. The issue is that the popper could swap out between *1 and *2, and the pusher could then run completely, and it will see waiters == 0 and not do a signal. So the double-check at *3 catches this.

There's a performance bug with this code as written - if the queue goes empty then you do lots of pushes, all those pushes send the signal. You might be tempted to fix that by moving the "atomic_waiters--" line to the pusher side (before the signal), but that creates a race. You could fix that, but then you spot a bigger problem :

This code doesn't work at all if you have multiple pushers or poppers. The problem is "lost wakeups". Basically if there are multiple poppers going into wait at the same time, the pusher may think it's done the wakeups it needs to, but it hasn't, and a popper goes into a wait forever.

To fix this you need a real proper "event count". What a proper event_count does is register the waiter at a certain point in time. The usage is like this :


popper :

item = queue.pop(); 
if ( item ) return item;
count = event_count.get_event_count();
item = queue.pop();
if ( item ) { event_count.cancel_wait(count); return item; }
if ( item == NULL )
{
  event_count.wait_on_count(count);
}


pusher :

queue.push();
event_count.signal();

Now, as before, get_event_count() registers in an atomically visible variable that I want to wait on something (most people call this function prepare_wait()), but it also records the current "event count" to identify the wait (this is just the number of pushes, or the number of signals if you like). Then wait_on_count() only actually does the wait if the event_count is still on the same count as when I called get_event_count() - if the internal counter has advanced, the wait is not done. signal() is a fast nop if there are no waiters, and increments the internal count.

This eliminates the lost wakeup problem, because if the "event_count" has advanced (and signaled some other popper, and won't signal again) then you will simply not go into the Wait.

Basically it's exactly like a Windows Event in that you can wait and signal, but with the added feature that you can record a place in time on the event, and then only do the Wait if that time has not advanced between your recording and the call to Wait.

It turns out that event_count and condition variables are closely related; in particular, one can very easily be implemented in terms of the other. (I should note that the exact semantics of pthread cond_var are *not* easy to implement in this way, but a "condition variable" in the broader sense need not comply with their exact specs).

Maybe in the future I'll get into how to implement event_count and condition_var.

BTW eventcount is the elegant solution to the problem of Waiting on Thread Events discussed previously.


ADDENDUM : if you like, eventcount is a way of doing Windows' PulseEvent correctly.

"Event" on Win32 is basically a "Gate" ; it's either open or closed. SetEvent makes it open. When you Wait() on the Event, if the gate is open you walk through, if it's closed you stop. The normal way to use it is with an auto-reset event, which means when you walk through the gate you close it behind yourself (atomically).

The idea of the Win32 PulseEvent API is to briefly open the gate and let through someone who was previously waiting on the gate, and then close it again. Unfortunately, PulseEvent is horribly broken by design and almost always causes a race, which leads most people to recommend against ever using PulseEvent (or manual reset events). (I'm sure it is possible to write correct code using PulseEvent, for example the race may be benign and just be a performance bug, but it is wise to follow this advice and not use it).

For example the standard Queue code using PulseEvent :

popper :
  node = queue.pop();
  if ( ! node ) Wait(event);

pusher :
  queue.push(node);
  PulseEvent(event);

is a totally broken race (if the popper is between getting a null node and entering the wait, it doesn't see the event), and most PulseEvent code is similarly broken.

eventcount's Signal is just like PulseEvent - it only signals people who were previously waiting, it doesn't change the eventcount into a signalled state. But it doesn't suffer from the race of PulseEvent because it has a consistent way of defining the moment in "time" when the event fires.

07-08-11 - Event Count and Condition Variable

If you have either event_count or condition_variable, it's pretty straightforward to get the other from the one you have.

eventcount from condition_variable :

by Chris M Thomasson :


class eventcount {
public:
  typedef unsigned long key_type;

private:
  mutable rl::atomic<key_type> m_count;
  rl::mutex m_mutex;
  rl::condition_variable m_cond;

  void prv_signal(key_type key) {
    if (key & 1) {
      m_mutex.lock($);
      while (! m_count($).compare_exchange_weak(key, (key + 2) & ~1,
        rl::memory_order_seq_cst));
      m_mutex.unlock($);
      m_cond.notify_all($);
    }
  }

public:
  eventcount() {
    m_count($).store(0, rl::memory_order_relaxed);
  }

public:
  key_type get() const {   // aka prepare_wait
    return m_count($).fetch_or(1, rl::memory_order_acquire);
  }

  void signal() {  // aka notify_one
    prv_signal(m_count($).fetch_add(0, rl::memory_order_seq_cst));
  }

  void signal_relaxed() {
    prv_signal(m_count($).load(rl::memory_order_relaxed));
  }

  void wait(key_type cmp) {
    m_mutex.lock($);
    if ((m_count($).load(rl::memory_order_seq_cst) & ~1) == (cmp & ~1)) {
      m_cond.wait(m_mutex, $);
    }
    m_mutex.unlock($);
  }
};

and condition variable from event count :
by Dmitry V'jukov :

class condition_variable
{
    eventcount ec_;

public:
    void wait(mutex& mtx)
    {
        int count = ec_.prepare_wait();
        mtx.unlock();
        ec_.wait(count);
        mtx.lock();
    }

    void signal()
    {
        ec_.notify_one();
    }

    void broadcast()
    {
        ec_.notify_all();
    }
};
(note this is a simplified condition variable without all the POSIX compliance crud).

In C++0x you have condition_variable at the stdlib level, so that is probably the best approach for the future. Unfortunately that future is still far away. On Pthreads you also have condition_variable (though a rather more complex one). Unfortunately, on Win32 (pre-Vista) you don't have condition_variable at all, so you have to build one of these from OS primitives.

(BTW there are various sources for good condition_var implementations for Win32, such as boost::thread and Win32 pthreads by Alex Terekhov).

ADDENDUM : really eventcount is the more primitive of the two; it's sort of a mistake that C++0x has provided "condition_var" as a primitive. They are not trying to provide a full set of OS-level thread control types (eg. they don't provide semaphore, event, what have you) - they are trying to provide the minimal basic set, and they chose condition_var. They should have done mutex and eventcount, as you can build everything from that.

(actually there's something perhaps even more primitive than eventcount, which is "waitset", which can be easily used to build any of the basic blocking thread control devices).

7/06/2011

07-06-11 - Who ordered Condition Variables -

I'm getting back into some low level threading stuff for a week or two and I'll try to write about it because it's very confusing and I always forget the basics.

Condition Variables are explained very strangely around the net. You'll find sites that say they "let you wait on a variable being set to certain value" (not true), "let you avoid polling" (not true), or "the mutex is a left-over from pre-pthreads implementations" (not true).

What a "Condition Variable" really is is a way to receive a Signal and enter a Mutex at the same time ("atomically" if you like).

Why do we want this?

The typical case is we are waiting on some state. When the state is not true, we want our thread to go into a real sleep. So we use an OS Wait() on the thread that's waiting for that state, and an OS Signal() when the state is set to wake the thread. (Wait and Signal might be "Event" on win32, or a semaphore in pthreads, or a futex, etc). Basically :


Waiting thread :

if ( state I want is not set )
{
  Wait(handle);
  // state I want should be set now
  // **


Signalling thread :

if ( I changed state to the one wanted )
  Signal(handle)

simple enough. The problem is, the invariant at (**) is not true. The state you want is NOT necessarily set there, because there's a race. Immediately after you receive the signal, you could be swapped out, and someone else could change the state, and then it would not be what you wanted.

So the obvious thing is to put a mutex around "state". To be less abstract I'll talk about a one item queue (aka a mailbox).


Waiting thread :

Lock mutex;
if ( mailbox empty )
{
  atomically { Unlock(mutex);   Wait(handle); (**) Lock(mutex) }
  // mailbox must be full now
}

Signaling thread :

Lock mutex
if ( mailbox empty )
{
  mailbox = some stuff;
  Signal(handle); Unlock(mutex);
}

now when the mailbox filler signals the handle, the waiter immediately wakes up and tries to lock the mutex and can't; the filler then unlocks and the waiter can run, and it is guaranteed to have an item.

Note that the line marked {atomically} *must* be atomic, otherwise you have a race at (**) just like before.

And this :

  atomically { Unlock(mutex);   Wait(handle); (**) Lock(mutex) }
is exactly what pthread_cond_wait() is.

Personally, rather than introducing a new synchronization data type, I would have preferred to just get a function that does unlock_wait_lock(). The "cond_var" in pthreads has nothing to do with a "condition"; it's just a waitable handle (and associated mutex and other wiring) in Windows lingo.

There's one more point worth talking about. In the thread that filled the mailbox, we did :


lock mutex
set state
signal
unlock

That's good because it guarantees that the receiver gets the state it wants and is race free. It's bad because it causes some unnecessary "thread thrashing".

(thread thrashing is a lot like input latency that I mentioned a while ago; it's something you just want to constantly watch and be vigilant about tracking and removing. Any time a thread wakes up and does nothing and goes back to sleep, you are "thrashing" and just wasting massive amounts of CPU. You want to minimize the number of useless thread wakeups)

The alternative is :


lock mutex
set state
unlock
(**)
signal

now, there's a race at (**) where the state can get changed before you signal, so the invariant in the receiver is no longer true.

In most cases, however, this is not actually bad, and this form is actually preferred because of its increased efficiency. You have to change the receiver to a "double-check" type of pattern. Something like :


Waiting thread :

Lock mutex;
if ( mailbox empty )
{
  retry:
  cond_wait(mutex,handle); //  atomically { Unlock(mutex); Wait(handle); Lock(mutex) }
  // mailbox may or may not be full now
  if ( mailbox empty ) goto retry;
  // now do work
}

Signaling thread :

Lock mutex
if ( mailbox empty )
{
  mailbox = some stuff;
  Unlock(mutex);
  // intentional race here
  Signal(handle);
}

In general I believe it's a safer design to treat the signal as meaning "wake up and check this condition" instead of "wake up and this condition is definitely set". Then you engineer to minimize the number of wakeups when the condition is not set.

BTW a better design of the primitive would have allowed the signalling thread to do


atomically { Unlock(mutex); Signal(handle); }

which would be the ideal thing. Unfortunately a normal cond_var is not expressed this way. However, apparently on some modern UNIXes cond_var actually *acts* this way. What you do is signal inside the mutex, but the other thread isn't actually woken up until you unlock the mutex. Unfortunately this is a hidden optimization (they should have just provided unlock_and_signal() as one call) and you can't rely on it if you're cross-platform.

Some links on this topic :
Usenet - Condition variables signal with or without mutex locked
condvars signal with mutex locked or not - Loïc OnStage
A word of caution when juggling pthread_cond_signal/pthread_mutex_unlock - comp.programming.threads - Google Groups

6/28/2011

06-28-11 - String Extraction

So I wrote a little exe to extract static strings from code. It's very simple.

StringExtractor just scans some dirs of code and looks for a tag which encloses a string with parens. eg. :

    MAKE_STRING( BuildHuff )
it takes all the strings it finds in that way and makes a table of indexes and contents, like :


enum EStringExtractor
{
    eSE_Null = 0,
    eSE_BuildHuff = 1,
    eSE_DecodeOneQ = 2,
    eSE_DecodeOneQ_memcpy = 3,
    eSE_DecodeOneQ_memset = 4,
    eSE_DecodeOneQuantum = 5, ...


const char * c_strings_extracted[] = 
{
    0,
    "BuildHuff",
    "DecodeOneQ",
    "DecodeOneQ_memcpy",
    "DecodeOneQ_memset",
    "DecodeOneQuantum", ...

it outputs this to a generated .C and .H file, which you can then include in your project.

The key then is what MAKE_STRING means. There are various ways to set it up, depending on whether you are replacing an old system that uses char * everywhere or not. Basically you want to make a header that's something like :


#if DO_STRINGS_RAW

#define MAKE_STRING( label )   (string_type)( #label )
#define GET_STRING(  index )   (const char *)( index )

#else

#include "code_gen_strings.h"

#define MAKE_STRING( label )  (string_type)( eSE_ ## label )
#define GET_STRING(  index )  c_strings_extracted[ (int)( index ) ]

#endif

(string_type can either be const char * to replace an old system, or if you're doing this from scratch it's cleaner to make it a typedef).

If DO_STRINGS_RAW is on, you run with the strings in the code as normal. With DO_STRINGS_RAW off, all static strings in the code are replaced with indexes and the table lookup is used.

It's important to me that the code gen doesn't actually touch any of the original source files, it just makes a file on the side (I hate code gen that modifies source because it doesn't play nice with editors); it's also important to me that you can set DO_STRINGS_RAW and build just fine without the code gen (I hate code gen that is a necessary step in the build).

Now, why would you do this? Well, for one thing it's just cleaner to get all static strings in one place so you can see what they are, rather than having them scattered all over. But some real practical benefits :

You can make builds that don't have the string table; eg. for SPU or other low-memory console situations, you can run the string extraction to turn strings into indices, but then just don't link the table in. Now they can send back indices to the host and you can do the mapping there.

You can load the string table from a file rather than building it in. This makes it optional and also allows localization etc. (not a great way to do this though).

For final builds, if you are using these strings just for debug info, you can easily get rid of all of them in one place just by #defining MAKE_STRING and GET_STRING to nulls.

Anyhoo, here's the EXE :

stringextractor.zip (84k)

(stringextractor is also the first cblib app that uses my new standardized command line interface; all cblib apps in the future will have a common set of -- options; also almost all cblib apps now take either files or dirs on the command line and if you give them dirs they iterate on contents).

(stringextractor also importantly uses a helper to not change the output file if the contents don't change; this means that it doesn't mess up the modtime of the generated file and cause rebuilds that aren't necessary).

Obviously one disadvantage is you can't have spaces or other non-C-compatible characters in the string. But I guess you could fix this by using printf style codes and do printf when you generate the table.

6/24/2011

06-24-11 - Regression

Oodle now can run batch files and generate this :

                      test_cdep.done  test_huff.done  test_ioqueue.done  test_lzp.done   test_oodlelz.done
r:\test_done_xenon
r:\test_done_ps3      pass            pass            pass               pass : 128.51   pass : 450.50
r:\test_done_win32    pass            pass            fail               pass : 341.94   pass : 692.58

                      test_cdep.done  test_huff.done  test_ioqueue.done  test_lzp.done   test_oodlelz.done
r:\test_done_xenon
r:\test_done_ps3      pass            pass            pass               pass : 128.03   pass : 450.73
r:\test_done_win32    pass            pass            pass               pass : 335.55   pass : 686.90

Yay!

Something that's important for me is doing constant runs of the speeds of the optimized bits on all the platforms, because it's so easy to break the optimization with an innocuous check-in, and then you're left trying to find what slowed you down.

Two niggles continue to annoy me :

1. Damn Xenon doesn't have a command line interface (by which I mean you can't actually interact with running programs from a console; you can start programs; the specific problem is that you can't tell if a program is done or still running or crashed from a console). I have my own hacky work-around for this which is functional but not ideal. (I know I could write my own nice xenon-runner with the Dm API's but I haven't bitten that off yet).

2. Damn PS3TM doesn't provide "force connect" from the command line. They provide most of the commands as command line switches, but not the one that I actually want. Because of this I frequently have problems connecting to the PS3 during the regression run, and I have to open up the damn GUI and do it by hand. This is in fact the only step that I can't automate and that's annoying. I mean, why do they even fucking bother with providing the "connect" and "disconnect" options? They never fucking work, the only thing that works is "force disconnect". Don't give me options that just don't work dammit.

(and the whole idea of PS3TM playing nice and not disconnecting other users is useless because it doesn't disconnect idle people, so when someone else is "connected" that usually means they were debugging two days ago and just never disconnected)

(there is a similar problem with the Xenon (similar in the sense that it can't be automated); it likes to get itself into a state where it needs to be cold-booted by physically turning it off; I'm not sure why the "cold boot" xbreboot is not the same as physically power cycling it, so that's mildly annoying too).

6/23/2011

06-23-11 - Map File Graphviz

What I want :

Something that parses the .map and obj's and creates a graph of the size of the executable. Graph nodes should be the size they take in the exe, and connections should be dependencies.

I have all this code for generating graphviz/dotty because I do it for my allocation grapher, but I don't know a good way to get the dependencies in the exe. Getting the sizes of things in the MAP is relatively easy.

To be clear, what you want to see is something like :

s_log_buf is 1024 bytes
s_log_buf is used by Log() in Log.cpp
Log() is called from X and Y and Z
...
just looking at the .map file is not enough, you want to know why a certain symbol got dragged in. (this happens a lot; for example some CRT function like strcpy suddenly shows up in your map and you're like "where the fuck did that come from?")

Basically I need the static call-graph or link tables , a list of the dependencies from each function. The main place I need this is on the SPU, because it's a constant battle to keep your image size minimal to fit in the 256k.

I guess I can get it from "objdump" pretty easily, but that only provides dependency info at the obj level, not at the function level, which is what I really want.

Any better solutions?

6/17/2011

06-17-11 - C casting is the devil

C-style casting is super dangerous, as you know, but how can you do better?

There are various situations where I need to cast just to make the compiler happy that aren't actually operational casts. That is, if I was writing ASM there would be no cast there. For example something like :

U16 * p;
p[0] = 1;
U8 * p2 = (U8 *)p;
p2[1] = 7;
is a cast that changes the behavior of the pointer (eg. "operational"). But, something like :
U16 * p;
*p = 1;
U8 * p2 = (U8 *)p;
p2 += step;
p = (U16 *) p2;
*p = 2;
is not really a functional cast, but I have to do it because I want to increment the pointer by some step in bytes, and there's no way to express that in C without a cast.

Any time I see a C-style cast in code I think "that's a bug waiting to happen" and I want to avoid it. So let's look at some ways to do that.

1. Well, since we did this as an example already, we can hide those casts with something like ByteStepPointer :


template<typename T>
T * ByteStepPointer(T * ptr, ptrdiff_t step)
{
    return (T *)( ((intptr_t)ptr) + step );
}

our goal here is to hide the nasty dangerous casts from the code we write every day, and bundle it into little utility functions where it's clear what the purpose of the cast is. So now we can write out example as :
U16 * p;
*p = 1;
p = ByteStepPointer(p,step);
*p = 2;
which is much prettier and also much safer.

2. The fact that "void *" in C++ doesn't cast to arbitrary pointers the way it does in C is really fucking annoying. It means there is no "generic memory location" type. I've been experimenting with making the casts in and out of void explicit :


template<typename T>
T * CastVoid(void * ptr)
{
    return (T *)( ptr );
}

template<typename T>
void * VoidCast(T * ptr)
{
    return (void *)( ptr );
}

but it sucks that it's so verbose. In C++0x you can do this neater because you can template specialize based on the left-hand-side. So in current C++ you have to write
Actor * a = CastVoid<Actor>( memory );
but in 0x you will be able to write just
Actor * a = CastVoid( memory );

There are a few cases where you need this, one is to call basic utils like malloc or memset - it's not useful to make the cast clear in this case because the fact that I'm calling memset is clear enough that I'm treating this pointer as untyped memory; another is if you have some generic "void *" payload in a node or message.

Again you don't want just a plain C-style cast here, for example something like :

Actor * a = (Actor *) node->data;
is a bug waiting to happen if you change "data" to an int (among other things).

3. A common annoying case is having to cast signed/unsigned. It should be obvious that when I write :

U32 set = blah;
U32 mask = set & (-set);
that I want the "-" operator to act as (~set + 1) on the bits and I don't care that it's unsigned, but C won't let you do that. (see previous rants about how what I really want in this scenario is a "#pragma requires(twos_complement)" ; warning me about the sign is fucking useless for portability because it just makes me cast, if you want to make a real portable language you have to be able to express capabilities of the platform and constraints of the algorithm).

So, usually what you want is a cast that gives you the signed type of the same register size, and that doesn't exist. So I made my own :


static inline S8  Signed(U8 x)  { return (S8) x; }
static inline S16 Signed(U16 x) { return (S16) x; }
static inline S32 Signed(U32 x) { return (S32) x; }
static inline S64 Signed(U64 x) { return (S64) x; }

static inline U8  Unsigned(S8 x)  { return (U8) x; }
static inline U16 Unsigned(S16 x) { return (U16) x; }
static inline U32 Unsigned(S32 x) { return (U32) x; }
static inline U64 Unsigned(S64 x) { return (U64) x; }

So for example, this code :
mask = set & (-(S32)set);
is a bug waiting to happen if you switch to 64-bit sets. But this :
mask = set & (-Signed(set));
is robust. (well, robust if you include a compiler assert that you're 2's complement)

4. Probably the most common case is where you "know" a value is small and need to put it in a smaller type. eg.

int x = 7;
U8 small = (U8) x;
But all integer-size-change casts are super unsafe, because you can later change the code such that x doesn't fit in "small" anymore.

(often you were just wrong or lazy about "knowing" that the value fit in the smaller type. One of the most common cases for this right now is putting file sizes and memory sizes into 32-bit ints. Lots of people get annoying compiler warnings about that and think "oh, I know this is less than 2 GB so I'll just C-style cast". Oh no, that is a huge maintenance nightmare. In two years you try to run on a larger file and suddenly you have bugs all over and you can't find them because you used C-style casts. Start checking your casts!).

You can do this with a template thusly :


// check_cast just does a static_cast and makes sure you didn't wreck the value
template <typename t_to, typename t_fm>
t_to check_cast( const t_fm & from )
{
    t_to to = static_cast<t_to>(from);
    ASSERT( static_cast<t_fm>(to) == from );
    return to;
}

but it is so common that I find the template a bit excessively verbose (again C++0x with LHS specialization would help, you could then write just :
small = check( x );

small = clamp( x );
which is much nicer).

To do clamp casts with a template is difficult. You can use std::numeric_limits to get the ranges of the dest type :

template <typename t_to, typename t_fm>
t_to clamp_cast( const t_fm & from )
{
    t_to lo = std::numeric_limits<t_to>::min();
    t_to hi = std::numeric_limits<t_to>::max();
    if ( from < lo ) return lo; // !
    if ( from > hi ) return hi; // !
    t_to to = static_cast<t_to>(from);
    ASSERT( static_cast<t_fm>(to) == from ); 
    return to;
}
however, the compares inherent in clamping (at !) are problematic; for example if you're trying to clamp_cast from signed to unsigned you may get sign-compare warnings there (you can also get the unsigned-compare-against-zero warning when lo is 0). (? is there a nice solution to this ? you want to do the compare in the larger range of the two types, so you could make some template helpers that do the compare in the wider of the two types, but that seems a right mess).

Rather than try to fix all that I just use non-template versions for our basic types :


static inline U8 S32ToU8Clamp(S32 i)    { return (U8) CLAMP(i,0,0xFF); }
static inline U8 S32ToU8Check(S32 i)    { ASSERT( i == (S32)S32ToU8Clamp(i) ); return (U8)i; }

static inline U16 S32ToU16Clamp(S32 i)  { return (U16) CLAMP(i,0,0xFFFF); }
static inline U16 S32ToU16Check(S32 i)  { ASSERT( i == (S32)S32ToU16Clamp(i) ); return (U16)i; }

static inline U32 S64ToU32Clamp(S64 i)  { return (U32) CLAMP(i,0,0xFFFFFFFFUL); }
static inline U32 S64ToU32Check(S64 i)  { ASSERT( i == (S64)S64ToU32Clamp(i) ); return (U32)i; }

static inline U8 U32ToU8Clamp(U32 i)    { return (U8) CLAMP(i,0,0xFF); }
static inline U8 U32ToU8Check(U32 i)    { ASSERT( i == (U32)U32ToU8Clamp(i) ); return (U8)i; }

static inline U16 U32ToU16Clamp(U32 i)  { return (U16) CLAMP(i,0,0xFFFF); }
static inline U16 U32ToU16Check(U32 i)  { ASSERT( i == (U32)U32ToU16Clamp(i) ); return (U16)i; }

static inline U32 U64ToU32Clamp(U64 i)  { return (U32) CLAMP(i,0,0xFFFFFFFFUL); }
static inline U32 U64ToU32Check(U64 i)  { ASSERT( i == (U64)U64ToU32Clamp(i) ); return (U32)i; }

static inline S32 U64ToS32Check(U64 i)  { S32 ret = (S32)i; ASSERT( (U64)ret == i ); return ret; }
static inline S32 S64ToS32Check(S64 i)  { S32 ret = (S32)i; ASSERT( (S64)ret == i ); return ret; }

which is sort of marginally okay. Maybe it would be nicer if I left off the type it was casting from in the name.

6/16/2011

06-16-11 - Optimal Halve for Doubling Filter

I've touched on this topic several times in the past. I'm going to wrap up a loose end.

Say you have some given linear doubling filter (linear in the operator sense, not that it's a line). You wish to halve your image in the best way such that the round trip has minimum error.

For a given discrete doubling filter (non-interpolating) find the optimal halving filter that minimizes L2 error. I did it numerically, not analytically, and measured the actual error of down->up vs. original on a large test set.

I generated halving filters for half-widths of 3, 4, and 5. Wider filters always produce lower fit error, but also more ringing, so you may not want the widest halving filter.


upfilter :  linear  :
const float c_filter[4] = { 0.12500, 0.37500, 0.37500, 0.12500 };

 downFilter : 
const float c_filter[6] = { -0.15431, 0.00162, 0.65269, 0.65269, 0.00162, -0.15431 };
fit err = 17549.328

 downFilter : 
const float c_filter[8] = { 0.05429, -0.21038, -0.01115, 0.66724, 0.66724, -0.01115, -0.21038, 0.05429 };
fit err = 17238.310

 downFilter : 
const float c_filter[10] = { 0.05159, 0.00138, -0.21656, -0.00044, 0.66402, 0.66402, -0.00044, -0.21656, 0.00138, 0.05159 };
fit err = 16959.596

upfilter :  mitchell1  :
const float c_filter[8] = { -0.00738, -0.01172, 0.12804, 0.39106, 0.39106, 0.12804, -0.01172, -0.00738 };

 downFilter : 
const float c_filter[6] = { -0.13475, 0.02119, 0.61356, 0.61356, 0.02119, -0.13475 };
fit err = 17496.548

 downFilter : 
const float c_filter[8] = { 0.05595, -0.19268, 0.00985, 0.62688, 0.62688, 0.00985, -0.19268, 0.05595 };
fit err = 17131.069

 downFilter : 
const float c_filter[10] = { 0.05239, 0.00209, -0.19664, 0.01838, 0.62379, 0.62379, 0.01838, -0.19664, 0.00209, 0.05239 };
fit err = 16811.168

upfilter :  lanczos4  :
const float c_filter[8] = { -0.00886, -0.04194, 0.11650, 0.43430, 0.43430, 0.11650, -0.04194, -0.00886 };

 downFilter : 
const float c_filter[6] = { -0.09637, 0.05186, 0.54451, 0.54451, 0.05186, -0.09637 };
fit err = 17332.452

 downFilter : 
const float c_filter[8] = { 0.04290, -0.14122, 0.04980, 0.54852, 0.54852, 0.04980, -0.14122, 0.04290 };
fit err = 17054.006

 downFilter : 
const float c_filter[10] = { 0.03596, 0.00584, -0.13995, 0.05130, 0.54685, 0.54685, 0.05130, -0.13995, 0.00584, 0.03596 };
fit err = 16863.054

upfilter :  lanczos5  :
const float c_filter[10] = { 0.00551, -0.02384, -0.05777, 0.12982, 0.44628, 0.44628, 0.12982, -0.05777, -0.02384, 0.00551 };

 downFilter : 
const float c_filter[6] = { -0.08614, 0.07057, 0.51557, 0.51557, 0.07057, -0.08614 };
fit err = 17323.692

 downFilter : 
const float c_filter[8] = { 0.05112, -0.13959, 0.06782, 0.52065, 0.52065, 0.06782, -0.13959, 0.05112 };
fit err = 16899.712

 downFilter : 
const float c_filter[10] = { 0.04554, 0.00403, -0.13655, 0.06840, 0.51857, 0.51857, 0.06840, -0.13655, 0.00403, 0.04554 };
fit err = 16566.352

------------------------------

6/14/2011

06-14-11 - ProcessSuicide

The god damn lagarith DLL has some crash in its shutdown, so any time I play an AVI with an app that uses lagarith, it hangs on exit.

(this is one of the reasons that I need to write my own lossless video format; another reason is that lagarith can't play back at 30 fps even on ridiculously fast modern machines; and the other standard codec, HuffYUV, frequently crashes for me and is very hard to make support RGB correctly)

Anyhoo, I started using this to shut down my app, which doesn't have the stupid "wait forever for hung DLL's to unload" problem :


void ProcessSuicide()
{
    DWORD myPID = GetCurrentProcessId();

    lprintf("ProcessSuicide PID : %d\n",myPID);

    HANDLE hProcess = OpenProcess (PROCESS_ALL_ACCESS, FALSE, myPID); 
        
    if ( hProcess == NULL ) // OpenProcess returns NULL on failure, not INVALID_HANDLE_VALUE
    {
        lprintf("Couldn't open my own process!\n");
        // ?? should probably do something else here, but never happens
        return;
    }
        
    TerminateProcess(hProcess,0);
    
    // ... ?? ... should not get here
    ASSERT(false);
    
    CloseHandle (hProcess);
}

At first I thought this was a horrible hack, but I've been using it for months now and it doesn't cause any problems, so I'm sort of tempted to call it not a hack but rather just a nice way to quit your app in Windows and not ever get that stupid thing where an app hangs in shutdown (which is a common problem for big apps like MSDev and Firefox).

06-14-11 - How to do input for video games

1. Read all input in one spot. Do not scatter input reading all over the game. Read it into global state which then applies for the time slice of the current frame. The rest of the game code can then ask "is this key down" or "was this pressed" and it just checks the cached state, not the hardware.

2. Respond to input immediately. Generally what that means is you should have a linear sequence of events that is something like this :

Poll input
Do actions triggered by input (eg. fire bullets)
Do time evolution of player-action objects (eg. move bullets)
Do environment responses (eg. did bullets hit monsters?)
Render frame
(* see later)

3. On a PC you have to deal with the issue of losing focus, or pausing and resuming. This is pretty easy to get correct if you obeyed #1 - read all your input in one spot, it just zeros the input state while you are out of focus. The best way to resume is when you regain focus you immediately query all your input channels to wipe any "new key down" flags, but just discard all the results. I find a lot of badly written apps that either lose the first real key press, or incorrectly respond to previous app's keys when they didn't have focus.

( For example I have keys like ctrl-alt-q that toggle focus around for me, and badly written apps will respond to that "q" as if it were for them, because they just ask for the global "new key down" state and they see a Q that wasn't there the last time they checked. )

4. Use a remapping/abstraction layer. Don't put actual physical button/keys all around your app. Even if you are sure that you don't want to provide remapping, do it anyway, because it's useful for you as a developer. That is, in your player shooting code don't write

  if ( NewButtonDown(X) ) ...
instead write
  if ( NewButtonDown(BUTTON_SHOOT) ) ...
and have a layer that remaps BUTTON_SHOOT to a physical key. The remap can also do things like taps vs holds, combos, sequences, etc. so all that is hidden from the higher level and you are free to easily change it at a later date.

This is obvious for real games, but it's true even for test apps, because you can use the remapping layer to log your key operations and provide help and such.

(*) extra note on frame order processing.

I believe there are two okay frame sequences and I'm not sure there's a strong argument in one way or the other :


Method 1 :

Time evolve all non-player game objects
Prepare draw buffers for non-player game objects
Get input
Player responds to input
Player-actions interact with world
Prepare draw buffers for player & stuff just made
Kick render buffers

Method 2 :

Get input
Player responds to input
Player-actions interact with world
Time evolve all non-player game objects
Prepare draw buffers for player & stuff just made
Prepare draw buffers for non-player game objects
Kick render buffers

The advantage of Method 1 is that the time between "get input" and "kick render" is absolutely minimized (it's reduced by the amount of time that it takes you to process the non-player world), so if you press a button that makes an explosion, you see it as soon as possible. The disadvantage is that the monsters you are shooting have moved before you do input. But, there's actually a bunch of latency between "kick render" and getting to your eye anyway, so the monsters are *always* ahead of where you think they are, so I think Method 1 is preferable. Another disadvantage of Method 1 is that the monsters essentially "get the jump on you" eg. if they are swinging a club at you, they get to do that before your "block" button reaction is processed. This could be fixed by doing something like :

Method 3 :

Time evolve all non-player game objects (except interactions with player)
Prepare draw buffers for non-player game objects
Get input
Player responds to input
Player-actions interact with world
Non-player objects interact with player
Prepare draw buffers for player & stuff just made
Kick render buffers

this is very intentionally not "fair" between the player and the rest of the world, we want the player to basically win the initiative roll all the time.

Some game devs have this silly idea that all the physics needs to be time-evolved in one atomic step which is absurd. You can of course time evolve all the non-player stuff first to get that done with, and then evolve the player next.

06-14-11 - A simple allocator

You want to be able to allocate slots, free slots, and iterate on the allocated slot indexes. In particular :


int AllocateSlot( allocator );
void FreeSlot( allocator , int slot );
int GetNextSlot( iterator );

Say you can limit the maximum number of allocations to 32 or 64, then obviously you should use bit operations. But you also want to avoid variable shifts. What do you do ?

Something like this :


static int BottomBitIndex( register U32 val )
{
    ASSERT( val != 0 );
    #ifdef _MSC_VER
    unsigned long b = 0;
    _BitScanForward( &b, val );
    return (int)b;
    #elif defined(__GNUC__)
    return __builtin_ctz(val); // ctz , not clz
    #else
    #error need bottom bit index
    #endif
}

int __forceinline AllocateSlot( U32 & set )
{
    U32 inverse = ~set;
    ASSERT( inverse != 0 ); // no slots free!
    int index = BottomBitIndex(inverse);
    U32 mask = inverse & (-inverse);
    ASSERT( mask == (1UL<<index) );
    set |= mask;
    return index;
}

void __forceinline FreeSlot( U32 & set, int slot )
{
    ASSERT( set & (1UL<<slot) );
    set ^= (1UL<<slot);
}

int __forceinline GetNextSlot( U32 & set )
{
    ASSERT( set != 0 );
    int slot = BottomBitIndex(set);
    // turn off bottom bit :
    set = set & (set-1);
    return slot;
}

/*

// example iteration :

    U32 cur = set;
    while(cur)
    {
        int i = GetNextSlot(cur);
        lprintfvar(i);
    }

*/

However, this uses the bottom bit index, which is not as portably fast as using the top bit index (aka count leading zeros). (there are some platforms/gcc versions where builtin_ctz does not exist at all, and others where it exists but is not fast because there's no direct instruction set correspondence).

So, the straightforward version that uses shifts and clz is probably better in practice.

ADDENDUM : Duh, version of same using only TopBitIndex and no variable shifts :


U32 __forceinline AllocateSlotMask( U32 & set )
{
    ASSERT( (set+1) != 0 ); // no slots free!
    U32 mask = (~set) & (set+1); // find lowest off bit
    set |= mask;
    return mask;
}

void __forceinline FreeSlotMask( U32 & set, U32 mask )
{
    ASSERT( set & mask );
    set ^= mask;
}

U32 __forceinline GetNextSlotMask( U32 & set )
{
    ASSERT( set != 0 ); // iteration over when set == 0
    U32 mask = set & (-set); // lowest on bit
    set ^= mask;
    return mask;
}

int __forceinline MaskToSlot( U32 mask )
{
    int slot = TopBitIndex(mask);
    ASSERT( mask == (1UL<<slot) );
    return slot;
}

(note the forceinline is important : passing actual references around is catastrophic on many platforms (due to load-hit-store), so we need these to get compiled like macros).

6/11/2011

06-11-11 - God damn YUV

So I've noticed for the last year or so that x264 videos I was making as test/reference all had weirdly shifted brightness values. I couldn't figure out why exactly and forgot about it.

Yesterday I finally adapted my Y4M converter (which does AVI <-> Yuv4MPEG with RGB <-> YUV color conversion and up/down sample, and uses various good methods of YUV, such as out of gamut chroma spill, lsqr optimized conversion, etc.). I added support for the "rec601" (JPEG) and "bt709" (HDTV) versions of YUV (and by "YUV" I mean YCbCr in gamma-encoded space), with both 0-255 and 16-235 range support. I figured I would stress test it by trying to use it in place of ffmpeg in my h264 pipeline for the Y4M conversion. And I found the old brightness problem.

It turns out that when I make an x264 encode and then play it back through DirectShow (with ffdshow), the player is using the "BT 709" yuv matrix (in 16-235 range) (*). When I use MPlayer to play it back and write out frames, it's using the "rec 601" yuv matrix (in 16-235 range).

(*
this appears to be because there's nothing specified in the stream and ffdshow will pick the matrix based on the resolution of the video - so that will super fuck you, depending on the size of the video you need to pick a different matrix (it's trying to do the right thing for HDTV vs SDTV standard video). Their heuristic is :

width > 1024 or height >= 600: BT.709
width <=1024 and height < 600: BT.601
*)

(in theory x264 doesn't do anything to the YUV planes - I provide it y4m, and it just works on yuv as bytes that it doesn't know anything about; the problem is the decoders which are doing their own thing).

The way I'm doing it now is I make the Y4M myself in rec601 space, let x264 encode it, then extract frames with mplayer (which seems to always use 601 regardless of resolution). If there was a way to get the Y4M directly out of x264 that would make it much easier because I could just do my own yuv->rgb (the only way I've found to do this is to use ffmpeg raw output).

Unfortunately Y4M itself doesn't seem to have any standardized tag to indicate what kind of yuv data is in the container. I've made up my own ; I write an xtag that contains :


yuv=pc.601
yuv=pc.709
yuv=bt.601
yuv=bt.709

where "bt" implies 16-235 luma (16-240 chroma) and "pc" implies 0-255 (fullrange).

x264 has a bunch of --colormatrix options to tag the color space in the H264 stream, but apparently many players don't respect it, so the recommended practice is to use the color space that matches your resolution (eg. 709 for HD and 601 for SD). (the --colormatrix options you want are bt709 and bt470bg , I believe).

Some notes by other people :


TV capture "SD" mpeg2 720x576i -> same res in mpe4, so use --colormatrix bt601 --fullrange ?
TV capture "HD" mpeg2 1440x1080i -> same res in mpe4, so use --colormatrix bt709 --fullrange ?

look at table E-3 (Colour Primaries) in the H.264 spec:

bt470bg = bt601 625 = bt1358 625 = bt1700 625 (PAL/SECAM)
smpte170m = bt601 525 = bt1358 525 = bt1700 NTSC

(yes, PAL and NTSC have different bt601 matrices here)

yup there's only:
--colormatrix <string> Specify color matrix setting ["undef"]
- undef, bt709, fcc, bt470bg, smpte170m, smpte240m, GBR, YCgCo

ADDENDUM : the color matrix change in bt.709 doesn't make sense to me. While in theory the phosphors of HDTVs match 709 better than 601, that is actually pretty irrelevant, since YCbCr is run in gamma-corrected space, and we do the chroma sub-sample, and so on ( see Mag of nonconst luminance error - Charles Poynton ). The actual practical effect of the 709 new matrix is that we're watching lots of videos with badly shifted brightness and saturation (because they used the 601 matrix and the format/codec/player aren't in agreement about what matrix should be used). In reality, it just made video quality much much worse.

(I also don't understand the 16-235 range that was used in MPEG. Yeah yeah, NTSC needs the top and bottom of the signal for special codes, fine, but why does that have to be hard-coded into the digital signal? The special region at top and bottom is an *analog* thing. The video could have been full range 0-255 in the digital encoding, and then in the DAC output you just squish it into the middle 7/8 of the signal band. Maybe there's something going on that I don't understand, but it just seems like terrible software engineering design to take the weird quirk of one system (NTSC analog output) and push that quirk back up the pipeline to affect something (digital encoding format) that it doesn't need to).

6/08/2011

06-08-11 - Tech Todos

Publicly getting my thoughts together :

1. Oodle. Just finish it! God damn it.

2. JPEG decoder. I got really close to having this done, need to finish it. The main thing left that I want to do is work on the edge-adaptive-bilateral filter a bit more; currently it's a bit too strong on the non-artifact areas, I think I can make it more selective about only working on the ringing and blockiness. The other things I want are chroma-from-luma support and a special mode for text/graphics.

3. Byte-wise LZ with optimal parse. This has been on my list for a long time. I'm not really super motivated though. But it goes along with -

4. LZ string matcher test. Hash tables, hash->list, hash->bintree, hash->MMC, suffix trees, suffix arrays, patricia tries, etc. ? Would be nice to make a thorough test bed for this. (would also help the Oodle LZ encoder which is currently a bit slow due to me not spending any time on the string matcher).

5. Cuckoo hash / cache aware hash ; I did a bunch of hash testing a while ago and want to add this to my tests. I'm very curious about it, but this is kind of pointless.

6. Image doubler / denoiser / etc ; mmm meh I've lost my motivation for this. It's a big project and I have too many other things to do.

7. Make an indy game. Sometimes I get the craving to do something interactive/artistic. I miss being able to play with my work. (I also get the craving to make a "demo" which would be fun and is rather less pain in the butt than making a full game). Anyhoo, probably not gonna happen, since there's just not enough time for this.

ADDENDUM : some I forgot :

8. Finish my video codec ; I still want to redo my back end coder which was really never intended for video; maybe support multiple sizes of blocks; try some more perceptual metrics for encoder decision making; various other ideas.

9. New lossy image codec ; I have an unfinished one that I did for RAD, but I think I should just scrap it and do the new one. I'm interested in directional DCT. Also simple highly asymmetric schemes, such as static classes that are encoder-optimized (instead of adaptive models; adaptive models are very bad for LHS). More generally, I have some ideas about trying to make a codec that is more explicitly perceptual, it might be terrible in rmse, but look better to the human eye; one part of that is using the imdiff metrics I trained earlier, another part is block classification (smooth,edge,detail) and special coders per class.

6/07/2011

06-07-11 - How to read an LZ compressed file

An example of the kind of shite I'm doing in Oodle these days.

You have an LZ-compressed file on disk that you want to get decompressed into memory as fast as possible. How do you do this ?

Well, first of all, you make your compressor write in independent chunks so that the decompressor can run on multiple chunks at the same time with threads. But to start you need to know where the chunks are in the file, so the first step is :


1. Fire an async read of the first 64k of the file to get the header.

the header will tell you where all the independent chunks are. (aside : in final Oodle there may also be an option to agglomerate all the headers of all the files, so you may already have this first 64k in memory).

So after that async read is finished, you want to fire a bunch of decomps on the chunks, so the way to do this is :


2. Make a "Worklet" (async function callback) which parses the header ; set the Worklet to run when the IO op #1
finishes.

I used to do this by having the WorkMgr get a signal from the IO thread (which still happens) but I now also have a mechanism to just run Worklets directly on the IO thread, which is preferable for Worklets that are super trivial like this one.

Now, if the file is small you could just have your Worklet #2 read the rest of the file and then fire async works on each one, but if the file is large that means you are waiting a long time for the IO before you start any decomp work, so that's not ideal, instead what we do is :


3. In Worklet #2, after parsing header, fire an async IO for each independent compressed chunk.  For each chunk, create
a decompression Worklet which is dependent on the IO of that chunk (and also neighbors, since due to IO
sector alignment the compression boundaries and IO boundaries are not quite the same).

So what this will do is start a bunch of IO's that then retire one by one, as each one retires it starts up the decomp task for that chunk. This means you start decompressing almost immediately and for large files you keep the CPU and IO busy the whole time.

Finally the main thread needs a way to wait for this all to be done. But the handles to the actual decompression async tasks don't exist until async task #2 runs, so the main thread can't wait on them directly. Instead :


4. At the time of initial firing (#1), create an abstract waitable handle and set it to "pending" state; then
pass this handle through your async chain.  Task #2 should set it to needing "N to go", since it's the first
point that knows the count, and then the actual async decompresses in #3 should decrement that counter.  So
the main thread can wait on it being "0 to go".

You can think of this as a semaphore, though in practice I don't use a semaphore because there are some OS's where that's not possible (sadly).

What the client sees is just :


AsyncHandle h = OodleLZ_StartDecompress( fileName );

Async_IsPending(h); ?

Async_Block(h);

void * OodleLZ_GetFinishedDecompress( h );

if they just want to wait on the whole thing being done. But if you're going to parse the decompressed file, it's more efficient to only wait on the first chunk being decompressed, then parse that chunk, then wait on the next chunk, etc. So you need an alternate API that hands back a bunch of handles, and then a streaming File API that does the waiting for you.

6/04/2011

06-04-11 - Keep Case

I've been meaning to do this for a long time and finally got off my ass.

TR (text replace) and zren (rename) in ChukSH now support "keep case".

Keep case is pretty much what you always want when you do text replacement (especially in source code), and everybody should copy me. For example when I do a find-replace from "lzp1f" -> "lzp1g" what I want is :


lzp1f -> lzp1g  (lower->lower)
LZP1F -> LZP1G  (upper->upper)
Lzp1f -> Lzp1g  (first cap->first cap)
lZp1F -> lzp1g  (mixed -> target as given)

The kernel that does this is matchpat in cblib which will handle rename masks like : "poop*face" -> "shit*butt" with keep case option or not.

In a mixed-wild-literal renaming spec like that, the "keep case" applies only to the literal parts. That is, "poop -> shit" and "face -> butt" will be applied with keep-case independently, and the "*" part will just get copied.

eg :


Poop3your3FACE -> Shit3your3BUTT

Also, because keep-case is applied to an entire chunk of literals, it can behave somewhat unexpectedly on file renames. For example if you rename

src\lzp* -> src\zzh*

the keep-case will apply to the whole chunk "src\lzp" , so if you have a file like "src\LZP" that will be considered "mixed case" not "all upper". Sometimes my intuition expects the rename to work on the file part, not the full path. (todo : add an option to separate the case-keeping units by path delims)

The way I handle "mixed case" is I leave it up to the user to provide the mixed case version they want. It's pretty impossible to get it right automatically. So the replacement text should be provided in the ideal mixed case capitalization. eg. to change "HelpSystem" to "QueryManager" you need to give me "QueryManager" as the target string, capitalized that way. All mixed case source occurrences of "HelpSystem" will be changed to the same output, eg.


helpsystem -> querymanager
HELPSYSTEM -> QUERYMANAGER
Helpsystem -> Querymanager
HelpSystem -> QueryManager
HelpsYstem -> QueryManager
heLpsYsTem -> QueryManager
HeLPSYsteM -> QueryManager

you get it.

The code is trivial of course, but here it is for your copy-pasting pleasure. I want this in my dev-studio find/replace-in-files please !


// strcpy "putString" to "into"
//  but change its case to match the case in src
// putString should be mixed case , the way you want it to be if src is mixed case
void strcpyKeepCase(
        char * into,
        const char * putString,
        const char * src,
        int srcLen);

void strcpyKeepCase(
        char * into,
        const char * putString,
        const char * src,
        int srcLen)
{   
    // okay, I have a match
    // what type of case is "src"
    //  all lower
    //  all upper
    //  first upper
    //  mixed
    
    int numLower = 0;
    int numUpper = 0;
    
    for(int i=0;i<srcLen;i++)
    {
        ASSERT( src[i] != 0 );
        if ( isalpha(src[i]) )
        {
            if ( isupper(src[i]) ) numUpper++;
            else numLower++;
        }
    }
    
    // non-alpha :
    if ( numLower+numUpper == 0 )
    {
        strcpy(into,putString);
    }
    else if ( numLower == 0 )
    {
        // all upper :
        while( *putString )
        {
            *into++ = toupper( *putString ); putString++;
        }
        *into = 0;
    }
    else if ( numUpper == 0 )
    {
        // all lower :
        while( *putString )
        {
            *into++ = tolower( *putString ); putString++;
        }
        *into = 0;
    }
    else if ( numUpper == 1 && isalpha(src[0]) && isupper(src[0]) )
    {
        // first upper then low
        
        if( *putString ) //&& isalpha(*putString) )
        {
            *into++ = toupper( *putString ); putString++;
        }
        while( *putString )
        {
            *into++ = tolower( *putString ); putString++;
        }
        *into = 0;
    }
    else
    {
    
        // just copy putString - it should be mixed 
        strcpy(into,putString);
    }
}


ADDENDUM : on a roll with knocking off stuff I've been meaning to do for a while ...

ChukSH now also contains "fixhtmlpre.exe" which fixes any less-than signs that are found within a PRE chunk.

Hmm .. something lingering annoying going on here. Does blogger convert and-l-t into less thans?

ADDENDUM : yes it does. Oh my god the web is so fucked. I've been doing a bit of reading and it appears this is a common and atrocious hack. Basically the problem is that people use XML for the markup of the data transfer packets. Then they want to sent XML within those packets. So you have to form some shit like :


<data> I want to send <B> this </B> god help me </data>

but putting the less-thans inside the data packet is illegal XML (it's supposed to be plain text), so instead they send

<data> I want to send &-l-tB> this &-l-t/B> god help me </data>

but they want the receiver to see a less-than, not the characters &-l-t , so the receiver parses those codes back into less-than and then treats the data received as its own hunk of XML with internal markups.

Basically people use it as a way to send codes that the current parser will ignore, but the next parser will see. There are lots of pages about how this is against compliance standards but nobody cares and it seems to be widespread.

So anyway, the conclusion is : just changing less thans to &-l-t works fine if you are just posting html (eg. for rants.html it works fine) but for sending to Blogger (or probably any other modern XML-based app) it doesn't.

The method I use now which seems to work on Blogger is I convert less thans to


<code><</code>

How is there not a fucking "literal" tag ? (There is one called XMP but it's deprecated and causes line breaks, and it's really not just a literal tag around a bunch of characters, it's a browser format mode change)

6/03/2011

06-03-11 - Amalgamate

So, here you go : amalgamate code & exe (105k)

The help is this :


amalgamate built May 19 2011, 18:01:28
args: amalgamate
HELP :
usage : amalgamate [-options] <to> <from1> [from2...]
options:
-q  : quiet
-v  : verbose
-c  : add source file's local dir to search paths
-p  : use pragma once [else treat all as once]
-r  : recurse from dirs [list but don't recurse]
-xS : extension filter for includes S=c;cpp;h;inl or whatever
-eS : extension filter for enum of from dir
-iS : add S to include path to amalgamate

from names can be files or dirs
use -i only for include dirs you want amalgamated (not system dirs)

What it does : files that are specified in the list of froms (and match the extension filter for enum of from dir), or are found via #include (and match the extension filter for includes), are concatted in order to the output file. #includes are only taken if they are in one of the -I listed search dirs.

-p (use pragma once) is important for me - some of my #includes I need to occur multiple times, and some not. Amalgamate tells the difference by looking for "pragma once" in the file. eg. stuff like :

#define XX stuff
#include "use_XX.inc"
#define XX stuff2
#include "use_XX.inc"
needs to include the .inc both times. But most headers should only be included once (and those have #pragma once in them).
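The -p test can be as crude as a substring search; a hypothetical sketch of the rule (not amalgamate's actual code) :

```cpp
#include <string>

// under -p : a file is amalgamated only once iff its text contains
// "pragma once" ; files without it (like use_XX.inc above) get
// pasted in at every #include site
bool include_only_once(const std::string & file_contents)
{
    return file_contents.find("pragma once") != std::string::npos;
}
```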

So for example I made a cblib.h thusly :


amalgamate cblib.h c:\src\cblib c:\src\cblib\LF c:\src\cblib\external -Ic:\src -p -xh;inc;inl -eh

which seems to work. As another test I made an amalgamated version of the test app for rmse_hvs_t that I gave to Ratcliff. This was made with :

amalgamate amalgamate_rmse_hvs_t.cpp main_rmse_hvs_t.cpp rmse_hvs_t.cpp -I. -v -Ic:\src -p

and the output is here : amalgamate_rmse_hvs_t.zip (83k)


But for anything large (like cblib.cpp) this way of sticking files together just doesn't work. It should be obvious why now that we're thinking about it - C definitions last until end of file (or "translation unit" if you like), and many files have definitions or symbols of the same name that are not the same thing - sometimes just by accidental collision, but often quite intentionally!

The accidental ones are things like using "#define XX" in lots of files ; you can fix those by always using your file name in front of definitions that you want to only be in your file scope (or by being careful to #undef) - also local namespacing variables and etc. etc. So you can deal with that.

But non-coincidental collisions are quite common as well. For example I have things like :

replace_malloc.h :
  #define malloc my_malloc

replace_malloc.c :
  void * my_malloc(size_t size) { return malloc(size); }

It's very important that replace_malloc.c doesn't include replace_malloc.h , but when you amalgamate it might (depending on order).

Another nasty one is the common case where you are supposed to do some #define before including something. eg. something like :

#define CB_HUFFMAN_UNROLL_COUNT 16
#include "Huffman.h"
that kind of thing is destroyed by amalgamate (only the first include will have effect, and later people who wanted different numbers don't get what they expected). Even windows.h with the WINNT_VER and LEAN_AND_MEAN gets hosed by this.

You can also get very nasty bugs just by tacking C files together. For example in plain C you could have :

file1 : 
static int x;
file 2 :
int x = 7;
and in C that is not an error (strictly, mixing internal and external linkage like this is undefined behavior, but compilers typically accept it silently), but now two separate variables have become one when amalgamated. I'm sure there are tons of other evil hidden ways this can fuck you.

So I think it's basically a no-go for anything but tiny code bases, or if you very carefully write your code for amalgamation from the beginning (and always test the amalgamated build, since it can pick up hidden bugs).

5/31/2011

05-31-11 - STB style code

I wrote a couple of LZP1 implementations (see previous) in "STB style" , that is, plain C, ANSI, single headers you can just include and use. It's sort of wonderfully simple and easy to use. Certainly I understand the benefit - if I'm grabbing somebody else's code to put in my project, I want it to be STB style, I don't want some huge damn library.

(for example I actually use the James Howse "lsqr.c" which is one file, I also use "divsufsort.c" which is a delightful single file, those are beautiful little pieces of code that do something difficult very well, but I would never use some beast like the GNU Triangulated Surface lib, or OpenCV or any of those big bloated libs)

But I just struggle to write code that way. Like even with something as simple as the LZP's , okay fine you write an ANSI version and it works. But it's not fast and it's not very friendly.

I want to add prefetching. Well, I have a module "mem.h" that does platform-independent prefetching, so I want to include that. I also want fast memsets and memcpys that I already wrote, so do I just copy all that code in? Yuck.

Then I want to support streaming in and out. Well I already have "CircularBuffer.h" that does that for me. Sure I could just rewrite that code again from scratch, but this is going backwards in programming style and efficiency, I'm duplicating and rewriting code and that makes unsafe buggy code.

And of course I want my assert. And if I'm going to actually make an EXE that's fast I want my async IO.

I just don't see how you can write good code this way. I can't do it; it totally goes against my style, and I find it very difficult and painful. I wish I could, it would make the code that I give away much more useful to the world.

At RAD we're trying to write code in a sort of hierarchy of levels. Something like :


very low level : includes absolutely nothing (not even stdlib)
low level : includes only low level (or lower) (can use stdlib)
              low level stuff should run on all platforms
medium level : includes only medium level (or lower)
               may run only on newer platforms
high level : do whatever you want (may be PC only)

This makes a lot of sense and serves us well, but I just have so much trouble with it.

Like, where do I put my assert? I like my assert to do some nice things for me, like log to file, check if a debugger is present and int 3 only if it is (otherwise do an interactive dialog). So that's got to be at least "medium level" - so now I'm writing some low level code and I can't use my assert!

Today I'm trying to make a low level logging facility that I can call from threads and it will stick the string into a lock-free queue to be flushed later. Well, I've already got a bunch of nice lockfree queues and stuff ready to go, that are safe and assert and have unit tests - but those live in my medium level lib, so I can't use them in the low level code that I want to log.

What happens to me is I wind up promoting all my code to the lowest level so that it can be accessible to the place that I want it.

I've always sort of struggled with separated libs in general. I know it's a nice idea in theory to build your game out of a few independent (or hierarchical) libs, but in practice I've always found that it creates more friction than it helps. I find it much easier to just throw all my code in a big bag and let each bit of code call any other bit of code.

5/20/2011

05-20-11 - LZP1 Variants

LZP = String match compression using some predictive context to reduce the set of strings to match

LZP1 = variant of LZP without any entropy coding

I've just done a bunch of LZP1 variants and I want to quickly describe them for my reference. In general LZP works thusly :


Make some context from previous bytes
Use context to look in a table to see a set of previously seen pointers in that context
  (often only one, but maybe more)

Encode a flag for whether any match, which one, and the length
If no match, send a literal

Typically the context is made by hashing some previous bytes, usually with some kind of shift-xor hash. As always, larger hashes generally mean more compression at the cost of more memory. I usually use a 15 bit hash, which means 64k memory use if the table stores 16 bit offsets rather than pointers.
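As an illustration, here's the kind of shift-xor context hash meant above, folding the previous 3 bytes down to a 15-bit table index (the exact shifts are made up for the sketch; every LZP picks its own) :

```cpp
#include <cstdint>

const int kLzpHashBits = 15; // 15-bit hash -> 32k entries

// hash the 3 bytes before 'ptr' down to a kLzpHashBits-bit table index
uint32_t lzp_context_hash(const uint8_t * ptr)
{
    uint32_t h = (uint32_t(ptr[-1]) << 11) ^ (uint32_t(ptr[-2]) << 5) ^ uint32_t(ptr[-3]);
    // fold the high bits back down and mask to table size
    return (h ^ (h >> kLzpHashBits)) & ((1u << kLzpHashBits) - 1);
}
```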

Because there's no entropy coding in LZP1, literals are always sent in 8 bits.

Generally in LZP the hash table of strings is only updated at literal/match decision points - not for all bytes inside the match. This helps speed and doesn't hurt compression much at all.

Most LZP variants benefit slightly from "lazy parsing" (that is, when you find a match in the encoder, see if it's better to instead send a literal and take the match at the next byte) , but this hurts encoder speed.

LZP1a : Match/Literal flag is 1 bit (eight of them are sent in a byte). Single match option only. 4 bit match length, if match length is >= 16 then send full bytes for additional match length. This is the variant of LZP1 that I did for Clariion/Data General for the Pentium Pro.

LZP1b : Match/Literal is encoded as 0 = LL, 10 = LM, 11 = M (this is the ideal encoding if literals are twice as likely as matches) ; match length is encoded as 2 bits, then if it's >= 4 , 3 more bits, then 5 more bits, then 8 bits (and after that 8 more bits as needed). This variant of LZP1 was the one published back in 1995.
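To make that chained length code concrete, here's a sketch of counting its bit cost. I'm assuming a field that comes out all-ones means "read the next field and add", which is the usual way these codes chain; the exact escape thresholds in the real LZP1b may differ :

```cpp
// bit cost of a match length under chained field sizes 2,3,5,8,8,8,...
int lzp1b_length_bits(int len)
{
    static const int fields[] = { 2, 3, 5, 8 };
    int bits = 0;
    for (int f : fields)
    {
        bits += f;
        int field_max = (1 << f) - 1; // all-ones = "keep reading"
        if (len < field_max) return bits;
        len -= field_max;
    }
    for (;;) // "and after that 8 more bits as needed"
    {
        bits += 8;
        if (len < 255) return bits;
        len -= 255;
    }
}
```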

LZP1c : Hash table index is made from 10 bits of backwards hash and 5 bits of forward hash (on the byte to be compressed). Match/Literal is a single bit. If a match is made, a full byte is sent, containing the 5 bits of forward hash and 3 bits of length (4 bits of forward hash and 4 bits of length is another option, but is generally slightly worse). As usual if match length exceeds 3 bits, another 8 bits is sent. (this is a bit like LZRW3, except that we use some backward context to reduce the size of the forward hash that needs to be sent).

LZP1d : string table contains 2 pointers per hash (basically a hash with two "ways"). Encoder selects the best match of the two and sends a 4 bit match nibble consisting of 1 selection bit and 3 bits of length. Match flag is one bit. Hash way is the bottom bit of the position, except that when a match is made the matched-from pointer is not replaced. More hash "ways" provide more compression at the cost of more memory use and more encoder time (most LZP's are symmetric, encoder and decoder time is the same, but this one has a slower encoder) (nowadays this is called ROLZ).

LZP1e : literal/match is sent as run len; 4 bit nibble is divided as 0-4 = literal run length, 5-15 = match length. (literal run length can be zero, but match length is always >= 1, so if match length >= 11 additional bytes are sent). This variant benefits a lot from "Literal after match" - after a match a literal is always written without flagging it.

LZP1f is the same as LZP1c.

LZP1g : like LZP1a except maximum match length is 1, so you only flag literal/match, you don't send a length. This is "Predictor" or "Finnish" from the ancient days. Hash table stores chars instead of pointers or offsets.

Obviously there are a lot of ways that these could all be modified to get more compression (*), but it's rather pointless to go down that path because then you should just use entropy coding.

(* a few ways : combine the forward hash of lzp1c with the "ways" of lzp1d ; if the first hash fails to match escape down to a lower order hash (such as maybe just order-1 plus 2 bits of position) before outputting a literal ; output literals in 7 bits instead of 8 by using something like an MTF code ; write match lengths and flags with a tuned variable-bit code like lzp1b's ; etc. )


Side note : while writing this I stumbled on LZ4 . LZ4 is almost exactly "LZRW1". It uses a hash table (hashing the bytes to match, not the previous bytes like LZP does) to find matches, then sends the offset (it's a normal LZ77, not an LZP). It encodes as 4 bit literal run lens and 4 bit match lengths.

There is some weird/complex stuff in the LZ4 literal run len code which is designed to prevent it from getting super slow on random data - basically if it is sending tons of literals (more than 128) it starts stepping by multiple bytes in the encoder rather than stepping one byte at a time. If you never/rarely compress random data then it's probably better to remove all that because it does add a lot of complexity.

REVISED : Yann has clarified LZ4 is BSD so you can use it. Also, the code is PC only because he makes heavy use of unaligned dword access. It's a nice little simple coder, and the speed/compression tradeoff is good. It only works well on reasonably large data chunks though (at least 64k). If you don't care so much about encode time then something that spends more time on finding good matches would be a better choice. (like LZ4-HC, but it seems the LZ4-HC code is not in the free distribution).

He has a clever way of handling the decoder string copy issue where you can have overlap when the offset is less than the length :


    U32     dec[4]={0, 3, 2, 3};

    // copy repeated sequence
    cpy = op + length;
    if (op-ref < 4)
    {
        *op++ = *ref++;
        *op++ = *ref++;
        *op++ = *ref++;
        *op++ = *ref++;
        ref -= dec[op-ref];
    }
    while(op < cpy) { *(U32*)op=*(U32*)ref; op+=4; ref+=4; }
    op=cpy;     // correction

This is something I realized as well when doing my LZH decoder optimization for SPU : basically a string copy with length > offset is really a repeating pattern, repeating with period "offset". So offset=1 is AAAA, offset=2 is ABAB, offset=3 is ABCABC. What that means is once you have copied the pattern a few times the slow way (one byte at a time), then you can step back your source pointer by any multiple of the offset that you want. Your goal is to step it back enough so that the separation between dest and source is bigger than your copy quantum size. Though I should note that there are several faster ways to handle this issue (the key points are these : 1. you're already eating a branch to identify the overlap case, you may as well have custom code for it, and 2. the single repeating char situation (AAAA) is by far more likely than any other).
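Here's a byte-exact sketch of that idea (mine, not LZ4's actual code) : prime the pattern one byte at a time, then step the source back by a multiple of the period so the dst/src gap fits a 4-byte copy :

```cpp
#include <cstdint>
#include <cstring>

// copy 'length' bytes from 'offset' bytes behind dst ;
// handles the overlapped case (offset < copy quantum)
void overlap_copy(uint8_t * dst, int offset, int length)
{
    const uint8_t * src = dst - offset;
    uint8_t * end = dst + length;
    if (offset < 4)
    {
        // replicate the pattern one byte at a time
        for (int i = 0; i < 4 && dst < end; i++)
            *dst++ = *src++;
        // step src back by a multiple of the period ; the bytes there
        // are identical because the pattern repeats every 'offset' bytes
        int k = (4 + offset - 1) / offset - 1; // smallest k with (k+1)*offset >= 4
        src -= k * offset;
    }
    while (dst + 4 <= end) { std::memcpy(dst, src, 4); dst += 4; src += 4; }
    while (dst < end) *dst++ = *src++; // byte tail (unlike LZ4, no overrun)
}
```

Unlike LZ4's version this never writes past the end, at the cost of the tail loop; LZ4 instead allows the dword loop to overrun and then corrects op afterwards.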

ADDENDUM : I just found the LZ4 guy's blog (Yann Collet, who also did the fast "LZP2"), there's some good stuff on there. One I like is his compressor ranking . He does the right thing ( I wrote about here ) which is to measure the total time to encode,transmit,decode, over a limited channel. Then you look at various channel speeds and you can see in what domain a compressor might be best. But he does it with nice graphs which is totally the win.

5/13/2011

05-13-11 - Avoiding Thread Switches

A very common threading model is to have a thread for each type of task. eg. maybe you have a Physics Thread, Ray Cast thread, AI decision thread, Render Thread, an IO thread, Prefetcher thread, etc. Each one services requests to do a specific type of task. This is good for instruction cache (if the threads get big batches of things to work on).

While this is conceptually simple (and can be easier to code if you use TLS, but that is an illusion, it's not actually simpler than fully reentrant code in the long term), if the tasks have dependencies on each other, it can create very complex flow with lots of thread switches. eg. thread A does something, thread B waits on that task, when it finishes thread B wakes up and does something, then thread A and C can go, etc. Lots of switching.

"Worklets" or mini work items which have dependencies and a work function pointer can make this a lot better. Basically rather than thread-switching away to do the work that depended on you, you do it immediately on your thread.

I started thinking about this situation :

A very simple IO task goes something like this :


Prefetcher thread :

  issue open file A

IO thread :

  execute open file A

Prefetcher thread :

  get size of file A
  malloc buffer of size
  issue read file A into buffer
  issue close file A

IO thread :

  do read on file A
  do close file A

Prefetcher thread :

  register file A to prefetched list

lots of thread switching back and forth as they finish tasks that the next one is waiting on.

The obvious/hacky solution is to create larger IO thread work items, eg. instead of just having "open" and "read" you could make a single operation that does "open, malloc, read, close" to avoid so much thread switching.

But that's really just a band-aid for a general problem. And if you keep doing that you wind up turning all your different systems into "IO thread work items". (eg. you wind up creating a single work item that's "open, read, decompress, parse animation tree, instantiate character"). Yay you've reduced the thread switching by ruining task granularity.

The real solution is to be able to run any type of item on the thread and to immediately execute them. Instead of putting your thread to sleep and waking up another one that can now do work, you just grab his work and do it. So you might have something like :


Prefetcher thread :

  queue work items to prefetch file A
  work items depend on IO so I can't do anything and go to sleep

IO thread :

  execute open file A

  [check for pending prefetcher work items]
  do work item :

  get size of file A
  malloc buffer of size
  issue read file A into buffer
  issue close file A

  do IO thread work :

  do read on file A
  do close file A

  [check for pending prefetcher work items]
  do work item :

  register file A to prefetched list

so we stay on the IO thread and just pop off prefetcher work items that depended on us and were waiting for us to be able to run.
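In code, the core of that scheme is just draining a queue of dependent work items in place; an illustrative sketch (all names are mine, and a real version would key the queue per-dependency and make it lock-free or locked) :

```cpp
#include <functional>
#include <queue>

// a mini work item that was blocked waiting on the task just finished
struct worklet { std::function<void()> work; };

std::queue<worklet> g_pending;

// called by whichever thread completes the task the worklets depend on ;
// runs them right here instead of waking the thread that queued them
void on_task_complete()
{
    while (!g_pending.empty())
    {
        worklet w = std::move(g_pending.front());
        g_pending.pop();
        w.work(); // executes on the completing thread, no switch
    }
}
```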

More generally if you want to be super optimal there are complicated issues to consider :

i-cache thrashing vs. d-cache thrashing :

If we imagine the simple conceptual model that we have a data packet (or packets) and we want to do various types of work on it, you could prefer to follow one data packet through its chain of work, doing different types of work (thrashing i-cache) but working on the same data item, or you could try to do lots of the same type of work (good for i-cache) on lots of different data items.

Certainly in some cases (SPU and GPU) it is much better to keep i-cache coherent, do lots of the same type of work. But this brings up another issue :

Throughput vs. latency :

You can generally optimize for throughput (getting lots of items through with a minimum average time), or latency (minimizing the time for any one item to get from "issued" to "done"). To minimize latency you would prefer the "data coherent" model - that is, for a given data item, do all the tasks on it. For maximum throughput you generally prefer "task coherent" - that is, do all the data items for each type of task, then move on to the next task. This can however create huge latency before a single item gets out.

ADDENDUM :

Let me say this in another way.

Say thread A is doing some task and when it finishes it will fire some Event (in Windows parlance). You want to do something when that Event fires.

One way to do this is to put your thread to sleep waiting on that Event. Then when the event fires, the kernel will check a list of threads waiting on that event and run them.

But sometimes what you would rather do is to enqueue a function pointer onto that Event. Then you'd like the Kernel to check for any functions to run when the Event is fired and run them immediately on the context of the firing thread.

I don't know of a way to do this in general on normal OS's.
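You can of course build it in user space for your own events; a rough sketch (not any OS facility) of an event that runs enqueued continuations on the context of the firing thread :

```cpp
#include <functional>
#include <mutex>
#include <vector>

class callback_event
{
    std::mutex m_mutex;
    bool m_fired = false;
    std::vector<std::function<void()>> m_continuations;

public:
    // if the event already fired, run fn now on the calling thread ;
    // otherwise queue it for whoever fires the event
    void then(std::function<void()> fn)
    {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            if (!m_fired) { m_continuations.push_back(std::move(fn)); return; }
        }
        fn();
    }

    void fire()
    {
        std::vector<std::function<void()>> run;
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_fired = true;
            run.swap(m_continuations);
        }
        for (auto & fn : run) fn(); // executed on the firing thread
    }
};
```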

Almost every OS, however, recognizes the value of this type of model, and provides it for the special case of IO, with some kind of IO completion callback mechanism. (for example, Windows has APC's, but you cannot control when an APC will be run, except for the special case of running on IO completion; QueueUserAPC will cause them to just be run as soon as possible).

However, I've always found that writing IO code using IO completion callbacks is a huge pain in the ass, and is very unpopular for that reason.
