8/31/2010

08-31-10 - LOOP

Every once in a while I think this is a good idea :

#define LOOP(var,count) for(int var=0;(var)<(int)(count);var++)
#define LOOPBACK(var,count) for(int var=(int)(count)-1;(var)>=0;var--)

but then I never wind up using it, and I find the code that does use it looks really ugly. Maybe if I got in the habit that ugliness would go away.
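For example, typical usage looks something like this (my illustration, with hypothetical names) :

LOOP(i,numItems)
{
    LOOPBACK(j,items[i].numFields)
    {
        Process( items[i].field[j] );
    }
}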

Certainly adding a "loop" keyword would've been a good idea in C99. Instead we have GCC trying to optimize out signed int wrapping, and many compilers now specifically look for the for(;;) construct and special-case handling it as if it was a loop() keyword.


In other minor news, I'm running with the "NoScript" addon now. I've used FlashBlock for a long time to much happiness, so this is just the next level. It does break 90% of web sites out there, but it has also made me shockingly aware of how many random sites are trying to run scripts that are highly questionable (mainly data-mining and ad-serving).


People sometimes ask me about laptops because I wrote about them before :

cbloom rants 05-07-10 - New Lappy
cbloom rants 04-15-10 - Laptop Part 2
cbloom rants 04-12-10 - Laptop search
cbloom rants 01-20-09 - Laptops Part 3

A small update : I'm still very happy with the Dell Precision 6500. They've updated it with some newer GPU options, so you can now get it with an ATI 7820 , which is a full DX11 part and close to as fast as you can get mobile. Other than that all my advice remains the same - get USB 3 of course, quad i7, LED backlighting, install your own SSD and RAM and do a clean windows install. Do NOT get RAID disks, and do NOT install any Intel or Dell drivers. I have no DPC latency problems or any shite like that. The only thing about it that's slightly less than perfect is that the temperature / fan control logic is not quite right.

It looks like there's a problem with the NV GTX 280 / Dual chip / Powermizer stuff. See :

My Solution for Dell XPS M1530 DPC Latency
Dell, DPC Latency, and You - Direct2Dell - Direct2Dell - Dell Community
Dell Latitude DPC Latency Issues

So I recommend staying away from that whole family. The new Vaio Z looks really awesome for a small thin/light, however there is a bit of a nasty problem. They only offer it with SSD's in RAID, and they are custom non-replaceable SSD's, and it appears that the reason for the 4 SSD's in RAID is that they are using cheapo low end flash. There are lots of early reports of problems with this setup, and the fact that you have to run RAID and can't change the drives is a big problem. Also, last time I checked Windows still can't send through TRIM to RAID, so that is borked, but theoretically that will be addressed some day.

8/27/2010

08-27-10 - Cumulative Probability Trees

I've been thinking about Cumulative Probability Trees today; if you prefer, partial-sum trees. Basically you have a count for each index, C[i], and you want to be able to quickly get the sum of all counts between 0 and i, which is S[i]. You want to be able to query and update both in <= logN time. So for example just using S[i] as an array makes query O(1), but then update is O(N), no good.

The standard solution is Fenwick Trees . They are compact (take no more room than the C[i] table itself). They are O(log(N)) fast. In code :


F[i] contains a partial sum of some binary range
the size of the binary range is equal to the lowest bit that is on in i
if i&1  - it contains C[i]
if i&3 == 2 - contains Sum of 2 ending with i (eg. C[i]+C[i-1] )
if i&7 == 4 - contains Sum of 4 ending with i
if i&0xF == 8 - contains Sum of 8 ending with i
(follow the link above to see pictures). To get C[i] from F[i] you basically have to get the partial sums S[i] and S[i-1] and subtract them (you can terminate early when you can tell that the rest of their sum is a shared walk). The logN walk to get S from F is very clever :

sum = 0;
while (i > 0)
  {
    sum += F[i];
    i = i & (i-1);
  }

The i & (i-1) step turns off the bottom bit of i, which is the magic of the Fenwick Tree structure being the same as the structure of binary integers. (This is the same as i -= i & -i : in two's complement, i & -i isolates the lowest set bit, so subtracting it off is exactly turning that bit off.)
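(For completeness, the standard update walk is the mirror image : instead of clearing the bottom bit you add the isolated bottom bit, which steps you up to the next node whose range covers i. Something like :)

// add delta to C[i] , with i in [1,N] :
while ( i <= N )
{
    F[i] += delta;
    i += i & (-i);
}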

If you put F[0] = 0 (F starts indexing at 1), then you can make that query walk branchless if you want :


sum = 0;
UNROLL8( sum += F[i]; i = i & (i-1); );

(for an 8-bit symbol, eg 256 elements in tree).
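(UNROLL8 isn't anything special; something like this works :)

#define UNROLL4( op )   do { op; op; op; op; } while(0)
#define UNROLL8( op )   do { UNROLL4( op ); UNROLL4( op ); } while(0)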

You can't beat this. The only sucky thing is that just querying a single probability is also O(logN). There are some cases where you want to query probability more often than you do anything else.

One solution to that is to just store the C[i] array as well. That doesn't hurt your update time much, and gives you O(1) query for count, but it also doubles your memory use (2*N ints needed instead of N).

One option is to keep C[i], and throw away the bottom level of the Fenwick tree (the odd indexes that just store C[i]). Now your memory use is (3/2)*N ; it's just as fast but a little ugly.

But I was thinking what if we start over. We have the C[i], what if we just build a tree on it?

The most obvious thing is to build a binary partial sum tree. At level 0 you have the C[i], at level 1 you have the sum of pairs, at level 2 you have the sum of quartets, etc :


showing the index that has the sum of that slot :

0:01234567
1:00112233
2:00001111
3:00000000

So update is very simple :

Tree[0][i] ++;
Tree[1][i>>1] ++;
Tree[2][i>>2] ++;
...

But querying a cumprob is a bit messy. You can't just go up the tree and add, because you may already be counted in a parent. So you have to do something like :

sum = 0;

if ( i&1 ) sum += Tree[0][i-1];
i>>=1;
if ( i&1 ) sum += Tree[1][i-1];
i>>=1;
if ( i&1 ) sum += Tree[2][i-1];
..

This is O(logN) but rather uglier than we'd like.
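To make the update/query pair concrete, here's a minimal sketch of this simple binary partial-sum tree (my own illustration) :

// Tree[L][j] = sum of C[] over the j'th block of 2^L entries
enum { N = 256, LEVELS = 9 };    // 8-bit symbols ; top level just holds the total
static int Tree[LEVELS][N];      // (real code would size level L as N>>L)

void Update(int i, int delta)
{
    for (int L = 0; L < LEVELS; L++)
        Tree[L][i >> L] += delta;
}

int CumProb(int i)   // sum of C[0..i-1] , i in [0,N]
{
    int sum = 0;
    for (int L = 0; L < LEVELS; L++)
    {
        if ( (i >> L) & 1 )
            sum += Tree[L][(i >> L) - 1];
    }
    return sum;
}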

So what if we instead design our tree to be good for query? By construction, we say that our query for cumprob will be this :


sum = Tree[0][i];
sum += Tree[1][i>>1];
sum += Tree[2][i>>2];
...

That is, at each level of the tree, the index (shifted down) contains the amount that should be added on to get the partial sum that precedes you. That is, if i is >= 64 , then Tree[6][1] will contain the sum from [0..63] and we will add that on.

In particular, at level L, if (i>>L) is odd , it should contain the sum of the previous 2^L items. So how do we do the update for this?


Tree[0][i] ++;
i >>= 1;
if ( i&1 ) Tree[1][i] ++;
i >>= 1;
if ( i&1 ) Tree[2][i] ++;
...

or

Tree[0][i] ++;
if ( i&2 ) Tree[1][i>>1] ++;
if ( i&4 ) Tree[2][i>>2] ++;
...

or

Tree[0][i] ++;
i >>= 1;
Tree[1][i] += i&1;
i >>= 1;
Tree[2][i] += i&1;
...

this is exactly complementary to the query in the last type of tree; we've basically swapped our update and query.

Now if you draw what the sums look like for this tree you get :

These are the table indexes :

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
0 1 2 3 4 5 6 7
0 1 2 3
0 1
0

These are the table contents :

C0 C0-C1 C2 C2-C3 C4 C4-C5 C6 C6-C7 C8 C8-C9 C10 C10-C11 C12 C12-C13 C14 C14-C15
0 C0-C1 0 C4-C5 0 C8-C9 0 C12-C13
0 C0-C3 0 C8-C11
0 C0-C7
0
it should be obvious that you can take a vertical sum anywhere and that will give you the partial sum to the beginning. eg. if I take a vertical sum in the 13th slot I get (C12-C13)+(0)+(C8-C11)+(C0-C7) = (C0-C13) , which is what I want.

Now what if we slide everything left so we don't have all those zeros in the front, and we'll go ahead and stick the total sum in the top :

C0 C0-C1 C2 C2-C3 C4 C4-C5 C6 C6-C7 C8 C8-C9 C10 C10-C11 C12 C12-C13 C14 C14-C15
C0-C1 0 C4-C5 0 C8-C9 0 C12-C13 0
C0-C3 0 C8-C11 0
C0-C7 0
C0-C15

Now, for each range, we stuff the value into the top slot that it covers, starting with the largest range. So (C0-C7) goes in slot 7 , (C0-C3) goes in slot 3, etc :

C0 -v- C2 -v- C4 -v- C6 -v- C8 -v- C10 -v- C12 -v- C14 -v-
C0-C1 -v- C4-C5 -v- C8-C9 -v- C12-C13 -v-
C0-C3 -v- C8-C11 -v-
C0-C7 -v-
C0-C15

(the -v- is supposed to be a down-arrow meaning you get those cell contents from the next row down). Well, this is exactly a Fenwick Tree.

I'm not sure what the conclusion is yet, but I thought that was like "duh, cool" when I saw it.

8/25/2010

08-25-10 - ProfileParser

I realized a long time ago that the work I did for my AllocParser could basically be applied as-is to parsing profiles. It's the same thing, you have a counter (bytes or clocks), you want to see who counted up that counter, get hierarchical info, etc.

Also Sean and I talked long ago about how to do the lowest impact possible manually-instrumented profiler with full information. Basically you just record the trace and don't do any processing on it during recording. All you have to do is :


Profiler_Push(label)  *ptr++ = PUSH(#label); *ptr++ = rdtsc();
Profiler_Pop( label)  *ptr++ = POP( #label); *ptr++ = rdtsc();

#define PUSH(str)  (U64)(str)
#define POP(str)  -(S64)(str)

where ptr points into some big global array of U64's , and we will later use the stringized label as a unique id to merge traces. Once your app is done sampling, you have this big array of pushes and pops, and you can then parse that to figure out all the hierarchical timing information. In practice you would want to use this with a scoper class to do the push/pop for you, like :

class rrScopeProfiler { rrScopeProfiler() { Push; } ~rrScopeProfiler() { Pop; } };

#define PROFILE(label)  rrScopeProfiler RR_STRING_JOIN(label,_scoper) ( #label );

Very nice! (obviously the pop marker doesn't have to provide the label, but it's handy for consistency checking).

(if you want to get fancy, the profiler push/pop array should really be write-combining memory, and the stores should be movnti's on x64 (*), that way the profiler push/pop wouldn't even pollute your cache, which makes it very low impact indeed)

(* BTW movnti is exposed as _mm_stream_si64 ; unfortunately, this does not exist in VC2005, you need 2008; the fact that they took away inline asm and then failed to expose all the instructions in intrinsics was a huge "fuck you" to all low-level developers; it was terrible in the early days, they've caught up a bit more with each rev ) (note that movntq is not the same, movnti comes from normal registers).
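(For reference, the non-temporal store path could look something like this - a sketch assuming MSVC on x64; the names are mine :)

#include <intrin.h>      // __rdtsc
#include <emmintrin.h>   // _mm_stream_si64

static __int64 * s_profPtr;   // points into a big (ideally write-combining) trace buffer

static inline void Profiler_Record( __int64 tag )
{
    // movnti : the trace never touches the cache
    _mm_stream_si64( s_profPtr++, tag );
    _mm_stream_si64( s_profPtr++, (__int64) __rdtsc() );
}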

So I did this, and made my AllocParser able to parse that kind of input and turn it into AllocRecords. (the normal mode of AllocParser is to handle stack traces of memory allocations).

So now I can make tabview output for either top-down or bottom-up hierarchies, and also graphviz output like : lz_profile.svg .

There are a few minor things to clean up : I'd like to be able to show either seconds or clocks, I'd like to be able to divide clocks by the total # of bytes to get a clocks-per-byte, and if a node only has a self time, I'd like to delete the redundant +self node.

Another option is to show the hierarchy in a totally different graphviz style, using the boxes-within-boxes method. I tried that before for allocs and it was hideous, but it might be okay for timing. If I do that, then I could combine this with a global timeline to show thread profiles over time, and then use the arrows to connect dependencies.

Then I have to buy that old plotter I've been talking about ...

8/24/2010

08-24-10 - Free Trail Maps

Don't support those Green Trails cocksuckers who charge $400 for their maps. (if you do have the Green Trails maps, I encourage you to post them for public use on the net; you fuckers have abused your right for me to respect your copyright; in fact I vaguely considered buying a set just to scan it and post it, but they're not even worth that).

It's most appalling because all the maps are based on the USGS data, which is paid for by our fucking tax dollars. Fortunately there are some perfectly good free map sites :

libremap.org : Libre Map Project - Free Maps and GIS data
digital-topo-maps.com : Free Printable Topo Maps - Instant Access to Topographic Maps
ACME Mapper 2.0

Of these, digital-topo-maps.com is the easiest to browse around cuz it just uses Google maps (actually, it seems to be like 10X faster than Google's own interface, so it's actually just a nice way to browse normal maps too).

Libre Maps is the most useful for hiking with because it has nice printer-ready pages in high quality.

Also, I sometimes forget that Google still has the Terrain maps because they are hidden under More... now. I'm sure somebody has done trail overlays for Google Terrain, but I haven't found it.

08-24-10 - AutoPrintf Release

Okay, I think AutoPrintf is close enough to final to release it. The nicer, more readable version is in : cblib.zip , but I understand you all like little autonomous files, so I grabbed the shit it needs and crammed it together in an ugly mess here :

apf.h
apf.cpp
apf.inc

(among other advantages, the cblib version doesn't pull vector.h or windows.h into the apf headers, both of which I consider to be very severe sins)

See older posts for a description of how it works and earlier not-good way of doing it and initial announcement .

The basic way it works is :

  • non-basic types are converted into basic types by the template using autoArgConvert; base types are passed through and others get ToString() called on them.

  • "%a" in the format string is turned into the type of the variable it points at (eg %d , %s, whatever). So you can of course use "%7a" or anything else that printf can handle.

  • I just call normal printf to do the actual printing.

that's it, very simple!

Here's my main.cpp for an example of usage :

#include <float.h>
#include <stdio.h>
#include <string.h>

//#include "autoprintf.h"
#include "apf.h"

#include <windows.h>

struct Vec3 { float x,y,z; };

namespace apf 
{
inline const String ToString( const Vec3 & v )
{
    return StringPrintf("{%.3f,%.3f,%.3f}",v.x,v.y,v.z);
}

/*
inline size_t autoArgConvert(const HWND arg)
{
    return (size_t)arg;
}
*/

inline const String ToString( const HWND v )
{
    return StringPrintf("[%08X]",v);
}
};
    
int main(int argc,const char * argv[])
{   
    //MakeAutoPrintfINL();
    
    //autoprintf("test bad %d",3.f);
    
    autoprintf("hello %-7a",(size_t)400,"|\n");
    
    //*
    //autoprintf("100%");
    autoprintf("percent %% %s","100%"," stupid","!\n");
    autoprintf("hello ","world %.1f",3.f,"\n");
    autoprintf("hello ",7," world\n");
    autoprintf("hello %03d\n",7);
    autoprintf("hello %d",3," world %.1f\n",3.f);
    autoprintf("hello ",(size_t)400,"\n");
    autoprintf("hello ",L"unicode is balls"," \n");
    autoprintf("hello %a ",L"unicode is balls"," \n");
    //autoprintf("hello %S ",L"unicode is balls"," \n");
    autoprintf("hello ",apf::String("world")," \n");
//  autoprintf("hello ",LogString()); // compile error
    
    autoprintf("top bit ",(1UL<<31)," \n");
    autoprintf("top bit %d",(1UL<<31)," \n");
    autoprintf("top bit %a",(1UL<<31)," \n");
    autoprintf("top bit %a\n",(size_t)(1UL<<31));
    
    HANDLE h1 = (HANDLE) 77;
    HWND h2 = (HWND) 77;
    autoprintf("HANDLE %a\n",h1);
    autoprintf("HWND %a\n",h2);
    
    char temp[1024];
    autosnprintf(temp,1023,"hello %a %a %a",4.f,7,apf::String("world"));
    printf("%s\n",temp);
    
    Vec3 v = { 3, 7, 1.5f };
    autoprintf("vector ",v," \n");
    autoprintf("vector %a is cool %a\n",v,(size_t)100);
    /**/
        
    return 0;
}

The normal way to make user types autoconvert is to add a ToString() call for your type, but you could also use autoArgConvert. If you use autoArgConvert, then you will wind up going through a normal %d or whatever.

One nice thing is that this autoprintf is actually even safer than my old safeprintf. If you mismatch primitive types, (eg. you put a %d in your format string but pass a float), it will check it using the same old safeprintf method (that is, a runtime failure). But if you put a std::string in the list when you meant to put a char *, you will get a compile error now, which is much nicer.

Everything in cblib now uses this (I made Log.h be an autoprintf) and I haven't noticed a significant hit to compile time or exe size since the templates are all now deterministic and non-recursive.

Yes it does a lot of dynamic allocation. Get with the fucking 20th century. And it's fucking printf. Printf is slow. I don't want to hear one word about it.

8/23/2010

08-23-10 - AutoPrintf v2

Okay, v2 works and confirmed doesn't give massive exe size bloat or compiler time death the way v1 does.

At the heart of v2 is a "fixed" way of doing varargs. The problem with varargs in C is that you don't get the types of the variables passed in, or the number of them. Well there's no need to groan about that because it's actually really trivial to fix. You just make a bunch of functions like :


template < typename T1, typename T2, typename T3 >
inline String autoToStringSub( T1 arg1, T2 arg2, T3 arg3)
{
    return autoToStringFunc( 3,
            safeprintf_type(arg1), safeprintf_type(arg2), safeprintf_type(arg3), 
            arg1, arg2, arg3, 0 );
}

for various numbers of args. Here autoToStringFunc(int nArgs, ...) is the basic vararg guy who will do all the work, and we just want to help him out a bit. This kind of adapter could be used very generally to make enhanced varargs functions. Here I only care about the "printf_type" of the variable, but more generally you could use type_info there. (you could also easily make abstract Objects to encapsulate the args and pass through an array of Objects, so that the called function wouldn't have to be a stupid C vararg function at all, but then it's harder to pass through to the old C funcs that still want varargs).

On top of this we have a type-change adapter :


template < typename T1, typename T2 >
inline String autoToString( T1 arg1, T2 arg2)
{
    return autoToStringSub( 
        autoprintf_StringToChar( autoArgConvert(arg1) ), 
        autoprintf_StringToChar( autoArgConvert(arg2) ));
}

autoToString calls down to autoToStringSub, and uses autoArgConvert. autoArgConvert is a template that passes through basic types and calls ToString() on other types. ToString is a template that knows the basic types, and the client can extend it by adding ToString for their own types. If they don't, it will be a compile error. StringToChar is a helper that turns a String into a char * and passes through anything else. We have to do it in that double-call way so that the String can get allocated and stick around as a temporary until our whole call is done.

The next piece is how to implement autoToStringFunc() , which takes "enhanced varargs". We need to figure out which pieces are format strings and do various types of printfs (including supporting %a for auto-typed printf). The only tricky part of this is how to step around in the varargs. Here is the only place we have to use a little bit of undefined behavior. First of all, think of the va_list as a pointer to a linked list. Calling va_arg essentially advances the pointer one step. That's fine and standard. But I assume that I can then take that pointer and pass it on as a va_list which is the remaining args (see note *).

The key way we deal with the varargs is with functions like this :


static inline void SkipVAArg(ESafePrintfType argtype, va_list & vl)
{
    switch(argtype)
    {
    case safeprintf_charptr:    { va_arg(vl,char *); return; }
    case safeprintf_wcharptr:   { va_arg(vl,wchar_t *); return; }
    case safeprintf_int32:      { va_arg(vl,int); return; }
    case safeprintf_int64:      { va_arg(vl,__int64); return; }
    case safeprintf_float:      { va_arg(vl,double); return; }
    case safeprintf_ptrint:     { va_arg(vl,int*); return; }
    case safeprintf_ptrvoid:    { va_arg(vl,void*); return; }
    default:
        // BAD
        safeprintf_throwsyntaxerror("SkipVAArg","unknown arg type");
        return;
    }
}

And the remainder is easy!

* : actually it looks like this is okay by the standard, I just have to call va_end after each function call then SkipArgs back to where I was. I believe this is pointless busywork, but you can add it if you want to be fully standard compliant.
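To make the trick concrete, here's a tiny standalone toy (not the autoprintf code) that grabs one arg by hand and then hands the remaining va_list to a v-style printf :

#include <stdarg.h>
#include <stdio.h>

static void print_after_skipping_one( const char * fmt, ... )
{
    va_list vl;
    va_start( vl, fmt );
    int skipped = va_arg( vl, int );    // step over one arg by hand
    printf( "(skipped %d) ", skipped );
    vprintf( fmt, vl );                 // vl now refers to the remaining args
    va_end( vl );
}

// print_after_skipping_one( "%s %.1f\n", 7, "hello", 3.f );
//  -> prints : (skipped 7) hello 3.0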

8/22/2010

08-22-10 - AutoPrintf v1

Well autoprintf v1 appears to be all working. The core element is a bunch of functions like this :

template < typename T1, typename T2, typename T3, typename T4 >
inline String autoToString( T1 arg1, T2 arg2, T3 arg3, T4 arg4 )
{
    return ToString(arg1) + autoToString( arg2,arg3,arg4);
}


template < typename T2, typename T3 >
inline String autoToString( const char *fmt, T2 arg2, T3 arg3 )
{
    autoFormatInfo fmtInfo = GetAutoFormatInfo(fmt);
    if ( fmtInfo.autoArgI )
    {
        String newFmt = ChangeAtoS(fmt,fmtInfo);
        if ( 0 ) ;
        else if ( fmtInfo.autoArgI == 1 ) return autoToString(newFmt.CStr(), ToString(arg2).CStr(),arg3);
        else if ( fmtInfo.autoArgI == 2 ) return autoToString(newFmt.CStr(), arg2,ToString(arg3).CStr());
        else return autoPrintf_BadAutoArgI(fmt,fmtInfo);
    }

         if ( fmtInfo.numPercents == 0 )    return ToString(fmt) + autoToString(arg2,arg3);
    else if ( fmtInfo.numPercents == 1 )    return StringPrintf(fmt,arg2) + autoToString(arg3);
    else if ( fmtInfo.numPercents == 2 )    return StringPrintf(fmt,arg2,arg3);
    else return autoPrintf_TooManyPercents(fmt,fmtInfo);
};

you have an autoToString that takes various numbers of template args. If the first arg is NOT a char *, it calls ToString on it then repeats on the remaining args. Any time the first arg is a char *, it uses the other specialization which looks in fmt to see if it's a printf format string, then splits the args based on how many percents there are. I also added the ability to use "%a" to mean auto-typed args, which is what the first part of the function is doing.

That's all dandy, but you should be able to see that for large numbers of args, it generates a massive amount of code.

The real problem is that even though the format string is usually a compile-time constant, I can't parse it at compile time, so I generate code for each arg being %a or not being %a, and for each possible number of percents. The result is something like 2^N codegens for N args. That's bad.

So, I know how to fix this, so I don't think I'll publish v1. I have a method for v2 that moves most of the work out of the template. It's much simpler actually, and it's a very obvious idea, all you have to do is make a template like :


autoprintf(T1 a1, T2 a2, T3 a3)
{
    autoPrintfSub( autoType(a1), autoArg(a1) ,autoType(a2), autoArg(a2) , .. )
}

where autoType is a template that gives you the type info of the arg, and autoArg does conversions on non-basic types for you, and then autoPrintfSub can be a normal varargs non-template function and take care of all the hard work.

... yep new style looks like it will work. It requires a lot more fudging with varargs, the old style didn't need any of that. And I'm now using undefined behavior, though I think it always works in all real-world cases. In particular, in v2 I'm now relying on the fact that I can do :


  va_start(vl,fmt);
  va_arg(vl,type); // .. a few times, to grab some args from vl
  vsnprintf(buf,size,restFmt,vl); // then pass the rest straight through

that is, I rely on the fact that va_arg advances me one step in the va_list, and that I then still have a valid va_list for remaining args which I can pass on. This is not allowed by the standard technically but I've never seen a case where it doesn't work (unless GCC decided to get pedantic and forcibly make it fail for no good reason).

8/21/2010

08-21-10 - autoprintf

AH HAHA HA ! I think I finally have the one true printf solution. I can now do :

    autoprintf("hello world\n");
    autoprintf("hello ",7," world\n");
    autoprintf("hello %03d\n",7);
    autoprintf("hello ","world %.1f",3.f,"\n");
    autoprintf("hello %d",3," world %.1f\n",3.f);
    autoprintf("hello ",(size_t)400,"\n");
    autoprintf("hello ",L"unicode is balls"," \n");
    autoprintf("hello ",String("world")," \n");

In particular, all of the following things work :
  • Regular old printf stuff works just like always.

  • .. except that it does type validation against the format string using my old "safeprintf" system

  • You can put args anywhere on the line, not just at the end. This is handy when you get really long printfs going, and makes it easier to copy-paste and rearrange. Similarly you can put format strings anywhere on the line, which is handy when you have a long printf set up and you want to just add something on the end.

  • Does automatic type deduction and output of types that are not explicitly qualified in the format string. In particular a handy one is (size_t) which is not properly supported in a cross-platform way. Any type that doesn't have a % format thingy provided by the caller gets an automatic one.

  • Supports non-basic types in a clean template overriding way. So things like that line that pass a String() as an argument - you will get a compile error if you haven't made a compatible template for that type, or if you have you get nice autoconversion. Basically all the advantages of the C++ iostream << dealy but without the horribleness of having to do your printing that way.

I'm gonna clean up the code a bit and try to extricate it from cblib (meh or maybe not) and I'll post it in a few days.

It does pretty much everything I've always wanted from a printf. There is one thing missing, which is formatting for arbitrary types. Currently you can only format the basic types, and the non-basic types go through a different system. eg. you can either do :

autoprintf("hello %5d ",anInt); 
or
autoprintf("hello ",(size_t)anInt); 
but you can't yet do
autoprintf("hello %5",(size_t)anInt); 
(note that the type specifier is left off, only format specifiers are on the %). I know how to make this work, but it makes the implementation a lot more complicated, so I might punt on it.

The more complicated version is to be able to pass through the format spec into the templated converter. For example, you might have a ToString() for your Vec3 type which makes output like ("{%f,%f,%f}",x,y,z) . With the current system you can do :

Vec3 v;
autoprintf("v = ",v);
and it will call your ToString, but it would be groovy if you could do :
Vec3 v;
autoprintf("v = %.1",v);
as well and have that format apply to the conversion for the type. But that's probably more complication than I want to get into.

Another thing that might be nice is to have an explicit "%a" or something for auto-typed, so you can use them at the end like normal printf args. eg :

autoprintf("hello %d %a %f\n", 3, String("what"), 7.5f );

08-21-10 - Adler32

Sean pointed out that I should try Adler32 for corruption checking. For reference I did some study of file hashes before and I'll use the same test set now, so you can compare to that. I'm using the Adler32 from zlib which looks like a decent implementation.

Testing on 10M arrays of average length 192 (random in [128,256]).


count : 10000000
totalBytes : 1920164768
clocks per byte :
burtle               : 1.658665
crc32                : 10.429893
adler32              : 1.396631
murmur               : 1.110712
FNV                  : 2.520380

So Adler is in fact decently fast, not as fast as Murmur but a bit faster than Burtle. (everything is crazy fast on my x64 lappy; the old post was on my work machine, everything is 2-3X faster on this beast; it's insane how much Core i7 can do per clock).

BTW I wasn't going to add Murmur and FNV to this test - I didn't test them before because they are really not "corruption detection" hashes, they are hashes for hash tables; in particular they don't really try to specifically guarantee that one-bit flips will change the hash, or whatever it is that CRC's guarantee. But after I saw how non-robust Adler was I figured I should add them to the test, and we will see that they do belong...

Now when I count collisions in the same way as before, a problem is evident :


collisions :
rand32               : 11530
burtle               : 0
crc32                : 11774
adler32              : 1969609
murmur               : 11697
FNV                  : 11703

note that as before, rand32 gives you a baseline on how many collisions a perfect 32 bit hash should give you - those collisions are just due to running into the limited space of the 32 bit word. Burtle here is a 64 bit hash and never collides. (I think I screwed up my CRC a little bit, it's colliding more than it should. But anyhoo). Adler does *terribly*. But that's actually a known problem for short sequences.
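For reference, the core of Adler32 is just two running sums (a from-memory sketch, not zlib's unrolled version). The low 16 bits are the plain byte sum, which for short buffers is range-limited and strongly concentrated around its mean, so the hash is far from uniformly distributed - hence all those collisions :

#include <stdint.h>
#include <stddef.h>

uint32_t adler32_sketch( const uint8_t * buf, size_t len )
{
    uint32_t A = 1, B = 0;
    for ( size_t i = 0; i < len; i++ )
    {
        A = ( A + buf[i] ) % 65521;   // largest prime < 2^16
        B = ( B + A ) % 65521;
    }
    return ( B << 16 ) | A;
}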

How does it do on longer sequences ? On arrays of random length between 2k and 4k (average 3k) :


num hashes : 10000000
totalBytes : 30722620564
clocks per byte :
burtle               : 1.644675
crc32                : 11.638417
adler32              : 1.346784
murmur               : 1.027105
FNV                  : 2.999243
collisions :
rand32               : 11530
burtle               : 0
crc32                : 11586
adler32              : 12335
murmur               : 11781
FNV                  : 11653

it's better, but still the worst of the group.

BTW I should note that the adler32 implementation does unrolling and rollup/rolldown and all that kind of stuff and none of the other ones do. So its speed advantage is a bit unfair. All these sort of informal speed surveys should be taken with a grain of salt, since to really fairly compare them I would have to spend a few weeks on each one making sure I got it as fast as possible, and of course testing on various platforms. In particular FNV and Murmur use multiplies, which is a no-go on some platforms, but you could probably use shift and add to replace the multiplies, and you'd get something like Bob's "One at a Time" hash.

So I figured I'd test on what is more like my real usage scenario.

In the RAD LZH , I compress 16k data quanta, and check the CRC of each compressed chunk before decompressing. So compressed chunks are between 0 and 16k bytes. Since they are compressed they are near random bits. Corruption will take various forms, either complete stompage with random shite, or some bit flips, or tail or head stomps. Complete stompage has been tested in the above runs (it's the same as checking the collision rate for two unrelated sequences), so I tested incremental stomps.

I made random arrays between 256 and 16k bytes long. I then found the hash of that array, did some randomized incremental stomping, and took the hash after the changes. If the hashes were the same, it counts as a collision. The results are :


numTests : 13068402
burtle               : 0 : 0.00000%
crc32                : 0 : 0.00000%
adler32              : 3 : 0.00002%
murmur               : 0 : 0.00000%
FNV                  : 0 : 0.00000%

Adler32 is the only one that fails to detect these incremental stomps. Granted the failure rate is pretty low (3/13068402) but that's not secure. Also, the hashes which are completely not designed for this (Murmur and FNV) do better. (BTW you might think the Adler32 failures are all on very short arrays; not quite, it does fail on a 256 byte case, then twice at 3840 bytes).

ADDENDUM : Ok I tested Fletcher32 too.


cycles :
rand32               : 0.015727
burtle               : 1.364066
crc32                : 4.527377
adler32              : 1.107550
fletcher32           : 0.697941
murmur               : 0.976026
FNV                  : 2.439253

large buffers :
num hashes : 10000000
totalBytes : 15361310411
rand32               : 11530
burtle64             : 0
crc32                : 11710
adler32              : 12891
fletcher32           : 11645
murmur               : 11792
FNV                  : 11642

small buffers :
num hashes : 10000000
totalBytes : 1920164768
rand32               : 11530
burtle64             : 0
crc32                : 11487
adler32              : 24377
fletcher32           : 11793
murmur               : 11673
FNV                  : 11599

difficult small buffers :
num hashes : 10000000
totalBytes : 1920164768
rand32               : 11530
burtle64             : 0
burtle32             : 11689
crc32                : 11774
adler32              : 1969609
fletcher32           : 11909
murmur               : 11665
FNV                  : 11703

Conclusion : Adler32 is very bad and unsafe. Fletcher32 looks perfectly solid and is very fast.

ADDENDUM 2 : a bit more testing. I re-ran the test of munging the array with incremental small changes of various types again. Running on lengths from 256 up to N, I get :


munge pattern 1 :
length : 6400
numTests             : 25069753
rand32               : 0
burtle64             : 0
burtle32             : 0
crc32                : 0
adler32              : 14
fletcher32           : 22
murmur               : 0
FNV                  : 0

munge pattern 2 :
length : 4096
numTests             : 31322697
rand32               : 0
burtle64             : 0
burtle32             : 0
crc32                : 0
adler32              : 9
fletcher32           : 713
murmur               : 0
FNV                  : 0

So I strike my conclusion that Fletcher is okay. Fletcher and Adler are both bad.

ADDENDUM 3 : Meh, it depends what kind of "corruption" you expect. The run above in which Fletcher does very badly includes some "munges" which tend to fill the array with lots of zeros, and Fletcher handles that case poorly.

If you look at really true random noise type errors, and you always start your array full of random bits, and then you make random bit flips or random byte changes (between 1 and 7 of them), and then refill the array with rand, they perform as expected over a very large number of runs :


numTests : 27987536
rand32               : 3 : 0.00001%
burtle64             : 2 : 0.00001%
burtle32             : 2 : 0.00001%
crc32                : 1 : 0.00000%
adler32              : 1 : 0.00000%
fletcher32           : 2 : 0.00001%
murmur               : 2 : 0.00001%
FNV                  : 1 : 0.00000%

8/20/2010

08-20-10 - Deobfuscating LZMA

I've been trying to figure out LZMA for a while, if anyone can help please chime in.

LZMA is very good, and also very obscure. While the code is free and published, it's completely opaque and the algorithm is not actually described anywhere. In particular, it's very good on structured data - even better than PPM. And, superficially, it looks very much like any other LZ+arithmetic+optimal parse, which there were many of before LZMA, and yet it trounces them all.

So, what's going on in LZMA? First, a description of the basic coder. Most of LZMA is very similar to LZX - LZX uses a forward optimal parse, log2-ish encoded lengths and offsets, and the most recent 3 offsets in an MTF/LRU which are coded as special "repeat match" codes. (LZX was made public when Microsoft made it part of CAB and published a full spec ).

LZMA can code a literal, a match, or a recent offset match - one of the three most recent offsets (like LZX). This is pretty standard. It also has two coding modes that are unusual : "Delta Literal" coding, and the 0th most recent offset match can code a single character match.

Everything it codes is context-coded binary arithmetic coded. Literals are coded as their bits; the initial context is the previous character and a few bits of position, and as literal bits are coded they are shifted into the context for future bits (top to bottom). This is pretty standard.

Using a few bits of position as part of the context lets it have different statistics at each byte position in a dword (or whatever). This is very useful for coding structured data such as arrays of floats. This idea has been around for a long time, but older coders don't do it and it certainly is part of the advantage on array/structured data. The bottom bits of position are also used as part of the context for the match flag, and also the "is last match 0 long" flag. Other match-related coding events don't use it.

In theory you should figure out what the local repetition period is and use that; LZMA doesn't make any effort to do that and just always uses N bits of position (I think N=2 is a typical good value).
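As a concrete (simplified) sketch of the literal context formation - the exact bit layout here is illustrative, not LZMA's actual packing :

// context for the next literal : previous byte plus N bits of position
const int kPosBits = 2;
int posState = pos & ( (1 << kPosBits) - 1 );
int ctx = ( prevByte << kPosBits ) | posState;   // selects a set of bit models
// the literal's 8 bits are then coded top to bottom, shifting each coded
// bit into the model index used for the next bit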

Lengths and Offsets are coded in what seems to be mostly a pretty standard log2-ish type coding (like Zip and others). Offsets are coded as basically the position of their MSB and then the remaining bits. The MSB is context-coded with the length of the match as context; this captures length-offset correlation. Then, the bottom 4 bits of the offset are sent, binary arithmetic coded on each other in reverse order (bottom bit first). This lets you capture things like a fixed structure in offsets (eg. all offsets are multiples of 4 or 8). The bits between the MSB and the bottom 4 are sent without compression.

The binary match flags are context coded using the "state" , which is the position in an internal finite state machine. It is :


LZMA state machine :

Literal :

  state < 7 :
    normal literal
  state >= 7 :
    delta literal

  state [0-3]   -> state = 0
  state [4-9]   -> state -= 3  ([1-6])
  state [10-11] -> state -= 6  ([4-5])


Match :

  rep0
   len 1 : 
     state ->   < 7 ? 9 : 11
   len > 1 :
     state ->   < 7 ? 8 : 11

  rep12
     state ->   < 7 ? 8 : 11

  normal  
     state ->   < 7 ? 7 : 10


// or from Igor Pavlov's code :

static const int kLiteralNextStates[kNumStates] = {0, 0, 0, 0, 1, 2, 3, 4,  5,  6,   4, 5};
static const int kMatchNextStates[kNumStates]   = {7, 7, 7, 7, 7, 7, 7, 10, 10, 10, 10, 10};
static const int kRepNextStates[kNumStates]     = {8, 8, 8, 8, 8, 8, 8, 11, 11, 11, 11, 11};
static const int kShortRepNextStates[kNumStates]= {9, 9, 9, 9, 9, 9, 9, 11, 11, 11, 11, 11};

Okay, this is the first funny unique thing. State basically tells you what the last few coding operations have been. As you send matches, state gets larger, as you send literals, state gets smaller. In particular, after any literal encoding state is < 7, and after any match encoding it is >= 7. Then above and below that it tells you something about how many literals or matches you've recently encoded. For example :


initial state = 5
code a normal match -> 7
code a rep match -> 11
code a literal -> 5
code a literal -> 2
code a literal -> 0

Now it's unclear to me whether this funny state machine thing is really a huge win as a context; presumably it is tweaked out to be an advantage, but other coders have used the previous match flags as context for the match flag coding (eg. "was the last thing a match" is one bit; take the last three of those and you get 8 states of previous context), which seems to me to have about the same effect.

There is one funny and clever thing here though, and that's the "delta literal". Any time you code a literal immediately after a match, the state will be >= 7 so you will code a delta literal. After that state will fall below 7 so you will code normal literals. What is a delta literal ?

Delta literals are coded as :


    char literal = *ptr;
    char * lastPosPtr = ptr - lastOffset;
    char delta = literal ^ *lastPosPtr;

that is, the character is xor'ed with the character at the last coded match offset away from current pointer (not at the last coded pos, the last offset, that's important for structured data).

When I first saw this I thought "oh it's predicting that the char at the last offset is similar, so the xor makes equal values zero" , but that's not the case at all. For one thing, xor is not a great way to handle correlated values, subtract mod 256 would be better. For another, these characters are in fact guaranteed to *NOT* match. If they did match, then the preceding match would have just been one longer. And that's what's cool about this.

Immediately after a match, you are in a funny position : you have a long preceding context which matches some other long context in the file (where the match was). From PPM* and LZP we know that long contexts are very strong predictors - but we also know that we have failed to match that character! If we just use the normal literal coder, we expect the most likely character to be the one that we just failed to match, so that would be a big waste of code space. So instead, we use this delta literal coder which will let us statistically exclude the zero.

Okay, I think that's it for how the coding works. A few more tidbits :

The match finder in 7zip appears to be a pretty standard hash-to-binary-tree. It uses a hash to find the most recent occurrence of the first few chars of the current string, that points to a node in the binary tree, and then it walks the binary tree to find all matches. The details of this are a little bit opaque to me, but I believe it walks backwards in order, and it only finds longer matches as it walks back. That is, it starts at the lowest offset occurrence of the substring and finds the match length for that, then it steps to the next later one along the binary tree, and finds a match *if longer*. So it doesn't find all offsets, it presumes that larger offsets are only interesting if their matches are longer. (I'm a little unclear on this so this could be wrong).

One thing I can't figure out is how the binary tree is maintained with the sliding window.

ADDENDUM : I just found it described in "Handbook of Data Compression" by David Salomon, Giovanni Motta, and David Bryant. My description above of the binary tree was basically right. It is built in the "greedy" way : new nodes are added at the top of the tree, which means that when you are searching down the tree, you will always see the lowest offset possible for a given match length first, so you only need to consider longer lengths. Also since older nodes are always deeper in the tree, you can slide the window by just killing nodes and don't have to worry about fixing the tree. Of course the disadvantage is the tree can be arbitrarily unbalanced, but that's not a catastrophe, it's never worse than just a straight linked list, which is the alternative.

The big piece I'm missing is how the optimal parse works. It's a forward optimal parse which explores a limited branch space (similar to previous work that was done in Quantum and LZX). When it saves state in the optimal parse tree, it only updates the FSM "state" variable and the last 3 offsets, it doesn't update the whole context-arithmetic state. At each position it appears to consider the cost of either a literal, a match, or a "lazy" match (that's a literal and then the following match), but I haven't figured out the details yet. It seems to optimal parse in 4k chunks, maybe it updates the arithmetic state on those boundaries. I also see there are lots of heuristics to speed up the optimal parse, assumptions about certain coding decisions being cheaper than others without really testing them, hard-coded things like (if offset > (1 << 14) && length < 7) which surely helps. If anyone has figured it out, please help me out.


ADDENDUM : here's an illustration of how the special LZMA modes help on structured data. Say you have a file of structs; the structs are 72 bytes long. Within each struct are a bunch of uint32, floats, stuff like that. Within something like a float, you will have a byte which is often very correlated, and some bytes that are near random. So we might have something like :


[00,00,40,00] [7F 00 3F 71] ... 72-8 bytes ... [00,00,40,00] [7E 00 4C 2F]
... history ...                                * start here

we will encode :

00,00,40,00 : 
  4 byte match at offset 72
  (offset 72 is probably offset0 so this is a rep0 match)

7E :
  delta literal
  encode 7E ^ 7F = 1

00 :
  one byte match to offset 72 (rep0)

4C :
  delta literal
  encode 4C ^ 3F = 0x73

2F :
  regular literal

Also because of the position and state-based coding, if certain literals occur often in the same spot in the pattern, that will be captured very well.

Note that this is not really the "holy grail" of compression which is a compressor that figures out the state-structure of the data and uses that, but it is much closer than anything in the past. (eg. it doesn't actually figure out that the first dword of the structure is a float, and you could easily confuse it, if your struct was 73 bytes long for example, the positions would no longer work in simple bottom-bits cycles).

8/19/2010

08-19-10 - Fuzz Testing

I'm "fuzz testing" my LZ decoder now, by which I mean making it never crash no matter how the data is corrupted.

The goal is to make this work without taking any speed hit. There are lots of little tricks to make this happen. For example, the LZ decode match copier is allowed to trash up to 8 bytes past where it thinks the end is. This lets me do a lot fewer bounds checks in the decode. To prevent actual trashing then, I just make the encoder never emit a match within 8 bytes of the end of a chunk. Similarly, the Huffman decoder can be made to always output a symbol in finite number of steps (never infinite loop or access a table out of bounds). You can do this just by doing some checks when you build your decode tables, then you don't have to do any checks in the actual decode loop.
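For example, the match copy can be a dumb 8-byte-at-a-time loop with no per-byte bounds checks (a sketch of the idea, not the actual code; small overlapping offsets still need a byte-wise path) :

#include <stdint.h>
#include <string.h>

// may write up to 7 bytes past 'end' - safe because the encoder never
// emits a match within 8 bytes of the end of a chunk
static inline void CopyMatch8( uint8_t * to, const uint8_t * from, int len )
{
    uint8_t * end = to + len;
    while ( to < end )
    {
        memcpy( to, from, 8 );
        to += 8; from += 8;
    }
}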

So, how do we make sure that it actually works? To prove that it is 100% fuzz resilient, you would have to generate every possible bit stream of every possible length and try decoding them all. Obviously that is not possible, so we can only try our best to find bad cases. I have a couple of strategies for that.

Random stomps. I just stomp on the compressed data in some random way and then run the decoder and see what happens (it should fail but not crash). I have a test loop set up to do this on a bunch of different files and a bunch of different stomp methods.

Just stomping in random bytes turns out not to be a very good way to find failures - that type of corruption is actually one of the easiest to handle because it's so severe. So I have a few different stomp modes : insert random byte, insert 00, insert FF, flip one bit, and the same for shorts, dwords, qwords. Often jamming in a big string of 00 or FF will find cases that any single byte insert won't. I randomize the location of the stomp but prefer very early positions, since stomping in the headers is the hardest to handle. I randomize the number of stomps.

One useful thing I do is log each stomp in the form of C code before I do it. For example I'll print something like :


compBuf[ 906 ] ^= 1 << 3;
compBuf[ 61  ] ^= 1 << 3;
compBuf[ 461 ] ^= 1 << 4;

then if that does cause a crash, I can just copy-paste that code to have a repro case. I was writing out the stomped buffers to disk to have repro cases, but that is an unnecessary slowdown; I'm currently running 10,000+ stomp tests.
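The driver loop for this is nothing fancy; roughly (names and knobs here are mine, not the real test code; only the bit-flip stomp mode is shown) :

for (int test = 0; test < numTests; test++)
{
    memcpy( compBuf, origComp, compLen );     // fresh copy of the compressed data
    int numStomps = 1 + ( rand() % 3 );
    for (int s = 0; s < numStomps; s++)
    {
        // prefer early positions - header corruption is the hardest to handle
        double r = (double) rand() / RAND_MAX;
        int pos = (int)( (compLen - 1) * r * r * r );
        int bit = rand() & 7;
        printf( "compBuf[ %d ] ^= 1 << %d;\n", pos, bit );   // log the repro code first
        compBuf[pos] ^= 1 << bit;
    }
    // should return a failure, never crash :
    Decompress( decBuf, rawLen, compBuf, compLen );
}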

(note to self : to do this, run main_lz -t -z -r1000)

Okay, so that's all very nice, but you can still easily miss failure cases. What I really want is something that gives me code coverage to tell that I've handled corrupted data in all the places where I read data. So I stole an idea from relacy :

Each place I get a byte (or bits) from the compressed stream, I replace :


U8 byte = *ptr++;

with

U8 byte = *ptr++;
FUZZ(byte);

// I wanted to do this but couldn't figure out how to make it work :
// U8 byte = FUZZ( *ptr++ );

(and similar for getting bits). Now, what the FUZZ macros do is this :

The first time they are encountered, they register their location with the FuzzManager. Each one is given a unique Id and starts out as a disabled potential fuzz location.

I then start making passes to try to fuzz at all possible locations. To do this, each fuzz location is enabled one by one, then I rerun the decompressor and see if that location was in fact hit. If a fuzz location is enabled, then the FUZZ macro munges the value and returns it (using all the munge modes above), and if it's disabled it just passes the byte through untouched.

Once I try all single-munges, I go back and try all dual munges. Again in theory you should try all possible multi-fuzz sequences, but that's intractable for anything but trivial cases, and also it would be very odd to have a problem that only shows up after many fuzzes.

As you make passes, you can encounter new code spots, and those register new locations that have to be covered.

Again, a nice thing I do is before each pass I log C code that will reproduce the action of that pass, so that if there is a problem you can directly reproduce it. In this case, it looks like :


Fuzz : 1/36
rrFuzz_StartPass_Select(".\compress\rrLZHDecompress.cpprrLZHDecoder_DecodeSome",351010,3,1,0x28502CBB);

In order to have reproducibility, I use FILE/LINE to identify the fuzz location, not an index, since the index can change from run to run based on the code path taken. Also, note that I don't actually use FILE/LINE because I have FUZZ in macros and templates - I use __FUNCDNAME__ so that two versions of a template get different tags, and I use __COUNTER__ so that macros which cause multiple fuzzes to occur at the same original code line get different location numbers. eg. this works :

#define A()  do { U8 t = *ptr++; FUZZ(t); } while(0)
#define B()  A(); A();

template < int i > void func() { B(); }

void main()
{
    func< 0 >();
    func< 1 >();
}

// there should be 4 separate unique FUZZ locations registered :

/*

I log :

rrFuzz_Register(".\main_lz.cpp|??$func@$0A@@@YAXXZ",1318000) = 0;
rrFuzz_Register(".\main_lz.cpp|??$func@$0A@@@YAXXZ",1318001) = 1;
rrFuzz_Register(".\main_lz.cpp|??$func@$00@@YAXXZ",1318000) = 2;
rrFuzz_Register(".\main_lz.cpp|??$func@$00@@YAXXZ",1318001) = 3;

*/

As usual I'm not sure how to get the same thing in GCC. (maybe __PRETTY_FUNCTION__ works? dunno).

The actual FUZZ macro is something like this :


#define FUZZ_ID     __FILE__ "|" __FUNCDNAME__ , __LINE__*1000 + __COUNTER__

#define FUZZ( word )    do { static int s_fuzzIndex = rrFuzz_Register(FUZZ_ID); if ( rrFuzz_IsEnabled(s_fuzzIndex) ) { word = rrFuzz_Munge(word); } } while(0)

The only imperfection at the moment is that FUZZ uses a static to register a location, which means that locations that are never visited at all never get registered, and then I can't check to see if they were hit or not. It would be nice to find a solution for that. I would like it to call Register() in _cinit, not on first encounter.
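(For reference, the registry side can be dead simple - this is my guess at roughly what the rrFuzz_* guts look like, not the real code :)

#include <vector>

struct FuzzLoc { const char * tag; int id; bool enabled; bool visited; };
static std::vector<FuzzLoc> s_fuzzLocs;

int rrFuzz_Register( const char * tag, int id )
{
    FuzzLoc loc = { tag, id, false, false };
    s_fuzzLocs.push_back( loc );
    return (int) s_fuzzLocs.size() - 1;
}

bool rrFuzz_IsEnabled( int index )
{
    s_fuzzLocs[index].visited = true;    // coverage : this location was reached this pass
    return s_fuzzLocs[index].enabled;    // the test driver enables one location per pass
}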

Anyway, this kind of system is pretty handy for any code coverage / regression type of thing.

(note to self : to do this, define DO_FUZZ_TEST and run main_lz -t -r1000)

ADDENDUM : another practical tip that's pretty useful. For something small and complex like your headers, or your Huffman tree, or whatever, you might have a ton of consistency checks to do to make sure they're really okay. In that case, it's usually actually faster to just go ahead and run a CRC (*) check on them to make sure they aren't corrupted, then skip most of the validation checks.

On the primary byte stream we don't want to do that because it's too slow, but for headers the simplicity is worth it.

(*) not actually a CRC because doing byte-by-byte table lookups is crazy slow on some game platforms. There are other robust hashes that are faster. I believe Bob Jenkins' Lookup3 is probably the best and fastest, since we have platforms that can't do multiplies fast (ridiculous but true), so many of the hashes that are fast on x86 like Murmur2 are slow on consoles.

8/16/2010

08-16-10 - Range Coder Revisited .. oh wait, nevermind

Hmm. I just wrote a long article on this and then as I was searching around for reference material, I discovered that I already covered the topic in full detail :

cbloom rants 10-05-08 - 5 - Rant on New Arithmetic Coders
cbloom rants 10-06-08 - 7 - Followup on the Russian Range Coder
cbloom rants 10-08-08 - 3 - Arithmetic coders throw away accuracy in lots of little places

So, WTF I'm going insane. Anyway, here are some more links :

encode.ru : How fast should be a range coder
ctxmodel.net : Context Modelling
CiteSeerX : Arithmetic coding , Langdon 79
Sachin Garg : 64-bit Range Coding and Arithmetic Coding

One random thing I should note is that if you have 64 bit registers, you can let range go between 2^32 and 2^64 , and output 32 bits at a time.

ADDENDUM : another random thing that occurs to me : if you're doing an fpaq0p-style sloppy binary arith coder where range is allowed to get quite small, you can actually do a few encodes or decodes in a row without checking for renormalization. What you would have to do is first do a *proper* renorm check that handles the underflow from straddling the middle case (which it normally doesn't handle) so that you are sure you have >= 24 bits in your range. Then, you can do several binary arithmetic codes, as long as the total probability shift is <= 24. For example, you could do two codes with 12 bits of probability precision, or 3 with 8 bits. Then you check renorm again. Probably the most sensible is doing two 12-bit precision codes, so you are able to do a renorm check once per two codes rather than every code. Of course then you do have to handle carries.

8/12/2010

08-12-10 - The Lost Huffman Paper

"On the Implementation of Minimum Redundancy Prefix Codes" by Moffat and Turpin , 1997 , is a brilliant work that outlines very clearly the best ways to write a huffman decoder. It's a completely modern, practical work, and basically nobody has added anything beyond this technique in the last 13 years.

However, the importance of it was missed when it came out. For many years afterwards people continued to publish "improvements" to Huffman decoding (such as Sub-linear Decoding of Huffman Codes Almost In-Place (1998) ) which are just pure useless shit (I don't mean to single out that paper, there were probably hundreds of shitty foolish pointless papers on "efficient huffman" written after Moffat/Turpin).

Most people in the implementation community also missed this paper (eg. zlib, JPEG, etc. people who make important use of huffman decodes have missed these techniques).

I missed it too. Recently we did a lot of work on Huffman decoding at RAD, and after trying many techniques and lots of brainstorming, we came up with what we thought was a brilliant idea :

Store the code in your variable bit input word left-justified in a register. The Huffman codes are numerically arranged such that for codes of any given length, leaves (symbols) are lower values than nodes (branches). Then, the code for the first branch of each codelen can be left-justified in a word, and your entire Huffman decode consists of :


while ( bits >= huff_branchCodeLeftAligned[codeLen] )
  codeLen++;

return ( (bits>>(WORD_SIZE-codeLen)) - baseCode[ codeLen ] );

(this returns a symbol in "canonical order" - that is most probable is 0 ; if your symbols are not in order from most to least probable, you need an additional table lookup to reorder them).

This is really incredibly fucking hot. Of course it's obvious that it can be improved in various ways - you can use a fast table to skip the first few steps, you can use a nextCodeLen table to skip blanks in the codeLen sequence, and you can use a binary search instead of linear search. For known-at-compile-time huffman trees you could even optimize the binary search for the probability distribution of the codes and generate machine code for the decoder directly.

All of those ideas are in the Moffat+Turpin paper.
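For concreteness, the two tables fall straight out of the canonical code construction. Given count[len] = the number of symbols of each code length, something like this builds them (my reconstruction from the description above, not the actual RAD code) :

// firstCode  = numeric value of the first (lowest) code of this length
// symbolBase = canonical index of the first symbol of this length
U64 firstCode  = 0;
int symbolBase = 0;
for (int len = 1; len <= MAX_CODELEN; len++)
{
    baseCode[len] = (U32)( firstCode - symbolBase );
    // first *branch* code of this length, left-justified in the bit word :
    huff_branchCodeLeftAligned[len] = ( firstCode + count[len] ) << (WORD_SIZE - len);
    symbolBase += count[len];
    firstCode  = ( firstCode + count[len] ) << 1;
}
// (when the code space is exactly full at a length this wants to be 2^WORD_SIZE,
//  which is unrepresentable - that's the all-1's edge case discussed at the end)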


ADDENDUM : this post happened to get linked up, so I thought I'd flesh it out and fill in some of the blanks I'm implying above, since I'm sure you blog readers aren't actually going and reading the Moffat/Turpin paper like you should.

Here are some other posts on Huffman codes :

cbloom rants 08-11-10 - Huffman - Arithmetic Equivalence
cbloom rants 08-10-10 - Transmission of Huffman Trees
cbloom rants 07-02-10 - Length-Limitted Huffman Codes
cbloom rants 05-22-09 - A little more Huffman
cbloom rants 05-20-09 - Some Huffman notes
cbloom rants 05-18-09 - Lagrange Space-Speed Optimization (and 05-25-09 - Using DOT graphviz for some Huffman space-speed SVG's)

In particular : cbloom rants 05-22-09 - A little more Huffman describes a 1 bit at a time Huffman decoder with the code values right-justified in the word.

And cbloom rants 05-18-09 - Lagrange Space-Speed Optimization describes first Dmitry Shkarin's standard table-walking Huffman decoder and then a generalization of it; both use N-bit reads of right-justified code words and table stepping.

In all cases a practical Huffman decoder should use an initial table lookup to accelerate the first N bit step. (N usually 8-12 depending on application). The reality is that what you do after that is not super important because it is rare (the majority of Huffman codes are short). Because of this, there are advantages to using a right-justified "upside down" huffman code word, with the MSB at the bottom (I believe Zip does this) because it means you can get the first N bits by doing just an AND with a constant (eg. get the "top" 8 bits by doing &0xFF).

There are two key efficiency issues for Huffman decoder implementation : 1. Reducing branches, and reducing the dependency-chain that leads to branches. That is, doing math is not bad, but doing math to decide to branch is bad. 2. Avoiding variable shifts, because variable shifts are catastrophically slow on some important platforms.

Finally, let's look at a real implementation of the Moffat/Turpin Huffman decoder. The variable bit input word is stored in a word left-justified with top bits at the left. The first branch code at each code length is also left-aligned.

We start with table-accelerating the first N bits :


if ( bits < huff_branchCodeLeftAligned[TABLE_N_BITS] )
{
    U32 peek = bits >> (WORD_SIZE - TABLE_N_BITS);
    Consume( table[peek].codeLen );
    return table[peek].symbol;
}

In practice you might use two tables, and Consume() is an unavoidable variable shift.
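
For reference, Consume() in this left-justified scheme might look something like this (just a sketch ; "bitCount" is my assumed name for the number of valid bits at the top of the word, this is not the real RAD code) :

void Consume( int codeLen )
{
    bits <<= codeLen;       // the unavoidable variable shift
    bitCount -= codeLen;
    while ( bitCount <= WORD_SIZE-8 )   // refill a byte at a time
    {
        bits |= ((U32)*byteStream++) << (WORD_SIZE-8-bitCount);
        bitCount += 8;
    }
}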

Next you have to handle the cases of code lens > TABLE_N_BITS. In that case rather than the loop in the pseudo-code above, you would actually unroll :


if ( bits < huff_branchCodeLeftAligned[TABLE_N_BITS+1] )
{
    return symbolUnsort[ (bits>>(WORD_SIZE-(TABLE_N_BITS+1))) - baseCode[ (TABLE_N_BITS+1) ] ];
}
if ( bits < huff_branchCodeLeftAligned[TABLE_N_BITS+2] )
{
    return symbolUnsort[ (bits>>(WORD_SIZE-(TABLE_N_BITS+2))) - baseCode[ (TABLE_N_BITS+2) ] ];
}
...

this does a branch on each code length, but avoids variable shifts. In some extreme cases (platforms with very slow variable shift, and huffman trees with very low minimum code lengths), this unrolled branching version can even be faster than the peek table, in which case you would simply set TABLE_N_BITS to zero.

In some cases, certain code lengths might not occur at all, and you can avoid checking them by having an additional table of which codelengths actually occur (in practice this rarely helps). This would look like :


if ( bits < huff_branchCodeLeftAligned[11] )
{
    return symbolUnsort[ (bits>>(WORD_SIZE-codeLenTable[11])) - baseCode[ 11 ] ];
}

where the 11 is not a codelen but the 11th code len which actually occurs, and you have the extra codeLenTable[] lookup.

Obviously you could just unroll directly starting at codelen=1 , and obviously this is also just a search. You are just trying to find where bits lies in the sorted huff_branchCodeLeftAligned table. So instead of just a linear search you could binary search. However note that lower code lens are much more likely, so you don't want to just binary search at the beginning. And the binary search makes the branches much less predictable, so it's not always a win. However, as Moffat/Turpin describes, in some cases, for example if you have a hard-coded Huffman tree, the huff_branchCodeLeftAligned can be constants and you can optimize the binary tree branch structure, so you can do codegen to make an optimal decoder, like :


if ( bits < 0xA0123000 )
{
  if ( bits < 0x40123000 )
  {
    // decode codeLen = 4 
  }
  else
  {
    // decode codeLen = 5
  }
}
else
  ...

There's one final issue. Because the bit word is left aligned in the machine word, we can't make any branchCode value for "all bits are branches". In particular, with this compare :


if ( bits < huff_branchCodeLeftAligned[11] )
{

when bits is all 1's (0xFFF...) we can't make a branchCodeLeftAligned that returns true. There are a few solutions for this, one is to use <= branchCodeMinusOne , but then you have to make sure that you start with codeLen >= minCodeLen , because below that branchCode is zero and you have a problem. Another solution is to make sure bits is never full; that is, if you have a 32 bit word, then only allow 31 bits (or less) in your variable bit register. The final solution is the one I prefer :

The actual case of bits = all 1's in a 32 bit register should only occur 1 in 4 billion times, so we don't have to handle it fast, we just have to handle it correctly. So I suggest you do the unrolled checks for decode, and then after you have checked all codelens up to maximum allowed (24 or 32 or whatever your max codelen is), if bits was ~0 , it will have not decoded, so you can do :


if ( bits < huff_branchCodeLeftAligned[21] ) .. return decoded 21 bit code
if ( bits < huff_branchCodeLeftAligned[22] ) .. return decoded 22 bit code
if ( bits < huff_branchCodeLeftAligned[23] ) .. return decoded 23 bit code
if ( bits < huff_branchCodeLeftAligned[24] ) .. return decoded 24 bit code
// 24 is my max allowed code len

// failed to do any decode ! must be the bad case
assert( bits == (~0) );
// huff code must be maxCodeLen (not necessarily 24 - the maximum actually used in this huffman table)
// and symbol must be the last ( least likely one )
// return decoded maxCodeLen code;
return symbolUnsort[ numSymbols-1 ];

Finally note that in most cases it is slightly more efficient to use a "semi-huffman" code rather than a true huffman code. The semi-huffman code I propose is huffman up to codelen = 16 or so, and then simply flat after that. In most cases this affects compression ratio microscopically (because the long code lengths are very rare) but can reduce complexity a lot. How does it reduce complexity?

1. You don't have to do the proper length-limited huffman tree construction. Instead you just build a normal unlimited huffman code length set, and then anything with code length >= 16 is flagged as part of the semi-huffman set.

2. When you transmit your code lengths, you don't have to send the lengths > 16, you just send lengths in [0,16] (0 means doesn't occur).

3. Your unrolled decoder only has to go up to 16 branches (if you table-accelerate, you do 8 bits by table then 8 more branches).

4. Then in the final case instead of just handling the ~0 case you handle all the "long" symbols with a flat code.

8/11/2010

08-11-10 - Huffman - Arithmetic Equivalence

Huffman to Arithmetic coder transformation.

This is something well known by "practitioners of the art" but I've never seen it displayed explicitly, so here we go. We're talking about arbitrary-alphabet decoding here obviously, not binary, and static probability models mostly.

Let's start with our Huffman decoder. (a bit of review here or here or here ). For simplicity and symmetry, we will use a Huffman decoder that can handle code lengths up to 16, and we will use a table-accelerated decoder. The decoder looks like this :


// look at next 16 bits (but don't consume them)
U32 peek = BitInput_Peek(16);

// use peek to look in decode tables :
int sym = huffTable_symbol[peek];

// use symbol to get actual code length :
int bits = symbol_codeLength[sym];

// and remove that code length from the bit stream :
BitInput_Consume(bits);

this is very standard (more normally the huffTable would only accelerate the first 8-12 bits of decode, and you would then fall back to some other method for codes longer than that). Let's expand out what Peek and Consume do exactly. For symmetry to the arithcoder I'm going to keep my bit buffer right-aligned in a big-endian word.

int bits_bitLen = // # of bits in word
U32 bits_code = // current bits in word

BitInput_Peek(16) :
{
  ASSERT( bits_bitLen >= 16 );
  return bits_code >> (bits_bitLen - 16);
}

BitInput_Consume(bits) :
{
  bits_bitLen -= bits;
  bits_code &= (1 << bits_bitLen)-1;
  while ( bits_bitLen < 16 )
  {
    bits_code <<= 8;
    bits_code |= *byteStream++;
    bits_bitLen += 8;
  }
}
it should be obvious what these do; _Peek grabs the top 16 bits of code for you to snoop. Consume removes the top "bits" from code, and then streams in bytes to refill the bits while we are under count. (to repeat again, this is not how you should actually implement bit streaming, it's slower than necessary).

Okay, now let's look at an Arithmetic decoder. (a bit of review here or here and here ). First let's start with the totally generic case. Arithmetic Decoding consists of getting the probability target, finding what symbol that corresponds to, then removing that symbol's probability range from the stream. This is :


AC_range = size of current arithmetic interval
AC_code  = value in range specified

Arithmetic_Peek(cumulativeProbabilityTotal) :
{
  r = AC_range / cumulativeProbabilityTotal;
  target = AC_code / r;
  return target;
}

Arithmetic_Consume(cumulativeProbabilityLow, probability, cumulativeProbabilityTotal)
{
  AC_range /= cumulativeProbabilityTotal;
  AC_code  -= cumulativeProbabilityLow * AC_range;
  AC_range *= probability;

  while ( AC_range < minRange )
  {
    AC_code <<= 8;
    AC_range <<= 8;
    AC_code |= *byteStream++;
  }
}

Okay it's not actually obvious that this is a correct arithmetic decoder (the details are quite subtle) but it is; and in fact this is just about the fastest arithmetic decoder in the world (the only thing you would do differently in real code is share the divide by cumulativeProbabilityTotal so it's only done once).
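
That shared-divide version would look something like this (same pseudo-code names as above, just a sketch) :

r = AC_range / cumulativeProbabilityTotal;   // the one divide, shared
target = AC_code / r;

// ... map target to sym , cumulativeProbabilityLow , probability ...

AC_code  -= cumulativeProbabilityLow * r;
AC_range  = probability * r;

while ( AC_range < minRange )
{
  AC_code <<= 8;
  AC_range <<= 8;
  AC_code |= *byteStream++;
}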

Now, the problem of taking the Peek target and finding what symbol that specifies is actually the slowest part, there are various solutions, Fenwick trees, Deferred Summation, etc. For now we are talking about *static* coding, so we will use a table lookup.

To decode with a table we need a table from [0,cumulativeProbabilityTotal] which can map a probability target into a symbol. So when we get a value from _Peek we look it up in a table to get the symbol, cumulativeProbabilityLow, and probability.

To speed things up, we can use cumulativeProbabilityTotal = a power of two to turn the divide into a shift. We choose cumulativeProbabilityTotal = 2^16. (the longest code we can write with our arithmetic coder then has code length -log2(1/cumulativeProbabilityTotal) = 16 bits).

So now our static table-based arithmetic decode is :


Arithmetic_Peek() :
{
  r = AC_range >> 16;
  target = AC_code / r;
}

int sym = arithTable_symbol[target];

int cumProbLow  = cumProbTable[sym];
int cumProbHigh = cumProbTable[sym+1];

Arithmetic_Consume()
{
  AC_range >>= 16;
  AC_code  -= cumProbLow * AC_range;
  AC_range *= (cumProbHigh - cumProbLow);

  while ( AC_range < minRange )
  {
    AC_code <<= 8;
    AC_range <<= 8;
    AC_code |= *byteStream++;
  }
}

Okay, not bad, and we still allow arbitrary probabilities within the [0,cumulativeProbabilityTotal] range, so this is more general than the Huffman decoder. But we still have a divide which is very slow. So if we want to get rid of that, we have to constrain a bit more :

Make each symbol probability a power of 2, so (cumProbHigh - cumProbLow) is always a power of 2 (< cumulativeProbabilityTotal). We will then store the log2 of that probability range. Let's do that explicitly :


Arithmetic_Peek() :
{
  r = AC_range >> 16;
  target = AC_code / r;
}

int sym = arithTable_symbol[target];

int cumProbLow  = cumProbTable[sym];
int cumProbLog2 = log2Probability[sym];

Arithmetic_Consume()
{
  AC_range >>= 16;
  AC_code  -= cumProbLow * AC_range;
  AC_range <<= cumProbLog2;

  while ( AC_range < minRange )
  {
    AC_code  <<= 8;
    AC_range <<= 8;
    AC_code |= *byteStream++;
  }
}

Now the key thing is that since we only ever >> shift down AC_range or << to shift it up, if it starts as a power of 2, it stays a power of 2. So we will replace AC_range with its log2 :

Arithmetic_Peek() :
{
  r = AC_log2Range - 16;
  target = AC_code >> r;
}

int sym = arithTable_symbol[target];

int cumProbLow  = cumProbTable[sym];
int cumProbLog2 = log2Probability[sym];

Arithmetic_Consume()
{
  AC_code  -= cumProbLow << (AC_log2Range - 16);
  AC_log2Range += (cumProbLog2 - 16);

  while ( AC_log2Range < min_log2Range )
  {
    AC_code  <<= 8;
    AC_log2Range += 8;
    AC_code |= *byteStream++;
  }
}

we only need a tiny bit more now. First observe that an arithmetic symbol of log2Probability is written in (16 - log2Probability) bits, so let's call that "codeLen". And we'll rename AC_log2Range to AC_bitlen :


Arithmetic_Peek() :
{
  peek = AC_code >> (AC_bitlen - 16);
}

int sym = arithTable_symbol[peek];

int codeLen = sym_codeLen[sym];
int cumProbLow  = sym_cumProbTable[sym];

Arithmetic_Consume()
{
  AC_code   -= cumProbLow << (AC_bitlen - 16);
  AC_bitlen -= codeLen;

  while ( AC_bitlen < 16 )
  {
    AC_code  <<= 8;
    AC_bitlen += 8;
    AC_code |= *byteStream++;
  }
}

let's compare this to our Huffman decoder (just copying down from the top of the post and reorganizing a bit) :

BitInput_Peek() :
{
  peek = bits_code >> (bits_bitLen - 16);
}

// use peek to look in decode tables :
int sym = huffTable_symbol[peek];

// use symbol to get actual code length :
int codeLen = sym_codeLen[sym];

BitInput_Consume() :
{
  bits_bitLen -= codeLen;
  bits_code &= (1 << bits_bitLen)-1;

  while ( bits_bitLen < 16 )
  {
    bits_code <<= 8;
    bits_bitLen += 8;
    bits_code |= *byteStream++;
  }
}

you should be able to see the equivalence.

There's only a small difference left. To remove the consumed bits, the arithmetic coder does :


  int cumProbLow  = sym_cumProbTable[sym];

  AC_code   -= cumProbLow << (AC_bitlen - 16);

while the Huffman coder does :

  bits_code &= (1 << bits_bitLen)-1;

which is obviously simpler. Note that the Huffman remove can also be written as (here bits_bitLen means its value before the subtract of codeLen) :

  code = peek >> (16 - codeLen);

  bits_code -= code << (bits_bitLen - codeLen);

What's happening here - peek is 16 bits long, it's a window in the next 16 bits of "bits_code". First we make "code" which is the top "codeLen" of "peek". "code" is our actual Huffman code for this symbol. Then we know the top bits of bits_code are equal to code, so to turn them off, rather than masking we can subtract. The equivalent cumProbLow is code<<(16-codeLen). This is the equivalence of the Huffman code to taking the arithmetic probability range [0,65536] and dividing it in half at each tree branch.

The arithmetic coder had to look up cumProbLow in a table because it is still actually a bit more general than the Huffman decoder. In particular our arithmetic decoder can still handle probabilities like [1,2,4,1] (with cumProbTot = 8). Because of that the cumProbLows don't hit the nice bit boundaries. If you require that your arithmetic probabilities are always sorted [1,1,2,4], then since they are powers of two and sum to a power of two, each partial power of two must be present, so the cumProbLows must all hit bit boundaries like the huffman codes, and the equivalence is complete.
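
To make that concrete with the [1,1,2,4] example (cumProbTot = 8) :

cumProbLow = {0,1,2,4} , codeLens = {3,3,2,1} , canonical codes = {000,001,01,1}

code<<(3-codeLen) = {0,1,2,4} - exactly the cumProbLows

scaled to the 16 bit space : {0,0x2000,0x4000,0x8000} - all on bit boundaries

(whereas the unsorted [1,2,4,1] gives cumProbLows {0,1,3,7} , and the 3 for the 2-bit symbol is not on a 2-bit boundary).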

So, you should now see clearly that a Huffman and Arithmetic coder are not completely different things. They are a continuum on the same scale. If you start with a fully general Arithmetic coder it is flexible, but slow. You then constrain it in various ways step by step, it gets faster and less general, and eventually you get to a Huffman coder. But those are not the only coders in the continuum, you also have things like "Arithmetic coder with fixed power of two probability total but non-power-of-2 symbol probabilities" which is somewhere in between in space and speed.


BTW not directly on topic, but I found this in my email and figure it should be in public :



Well, Adaptive Huffman is awful, nobody does it.  So you have a few options :


Static Huffman  -
    very fast
    code lengths must be transmitted
    can use table-based decode

Arithmetic with static probabilities scaled with total = a power of 2
    very fast
    can use table-based decode
    must transmit probabilities
    decode must do a divide

Arithmetic semi-adaptive
    "Deferred Summation"
    doesn't transmit probabilities

Arithmetic fully adaptive
    must use Fenwick tree or something like that
    much slower, coder time no longer dominates
      (symbol search in tree time dominates)

Arithmetic by binary decomposition
    can use fast binary arithmetic coder
    speed depends on how many binary events it takes to code symbols on average


It just depends on your situation so much. With something like image or
audio coding you want to do special-cased things like turn amplitudes
into log2+remainder, use a binary coder for the zero, perhaps do
zero-run coding, etc. stuff to avoid doing the fully general case of a
multisymbol large alphabet coder.


08-11-10 - ROLZ and Links

I found this little tutorial on Fenwick Trees a little while ago. Peter's original paper is a better way to learn it, but the graphics on this page are really nice; you can really grok the structure of the tree best when you see it visually.

I also found this : Anatomy of ROLZ data archiver , which is the only actual algorithm description I've ever found of ROLZ , since Ilia doesn't write up his work. (there's also a brief description at the Russian Wikipedia ).

Anyway, it's pretty obvious how you would do ROLZ, but there are a few unexpected cool things on the "Anatomy of ROLZ data archiver" page.

1. The way he keeps the lists of offsets for each context by just stepping back through the history of the file already processed is pretty cool. It means there's no actual separate [context][maxoffsets] table at all, the offsets themselves are just pointers back through a linked list. It also means that you can do sliding-window trivially.

2. In the BALZnoROLZ.txt file he has Ilia Muraviev's binary probability updater :


//This is predictor of Ilya Muraviev
class TPredictor {
private:
    unsigned short p1, p2;
public:
    TPredictor(): p1(1 << 15), p2(1 << 15) {} 
    ~TPredictor() {}
    int P() {
        return (p1 + p2); 
    }
    void Update(int bit) { 
        if (bit) {
            p1 += (unsigned short)(~p1) >> 3; 
            p2 += (unsigned short)(~p2) >> 6; 
        }
        else {
            p1 -= p1 >> 3; 
            p2 -= p2 >> 6; 
        }
    }
};

First of all, let's back up a second, what is this? It's a probability update for binary arithmetic coding. A very standard way to do fast probability updates for binary arithmetic coding is to do :


#define PROB_ONE    (1<<14) // or whatever
#define PROB_UPD_SHIFT  (6) // or something

prob = PROB_ONE >> 1; // start at 1/2

if ( bit )
 prob += (PROB_ONE - prob) >> PROB_UPD_SHIFT;
else
 prob -= prob >> PROB_UPD_SHIFT;

what this is doing is when you get a zero bit :

prob *= (1 - 2^-PROB_UPD_SHIFT);

that's equivalent to a normal counting probability update if you put :

n1 = prob*N
n0 = N - n1

when I get a zero bit n0++ and N++

prob = n1 / N

so update is 

prob := prob*N / (N+1)

or prob *= N / (N+1)

so

N/(N+1) = (1 - 2^-PROB_UPD_SHIFT)

which means

N = (2^PROB_UPD_SHIFT - 1)

then you keep prob and reset N; that is, this update is equivalent to pretending you have such an n0 and N and you increment them and compute the new probability, but then you don't actually store N, so the next update will have the same weight (if N increased then each update has a smaller effect than the last). This is an IIR filter that acts a bit like a moving average of the last N. The larger N is, the bigger window we are effectively using. A small N adapts very quickly.
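
For example with PROB_UPD_SHIFT = 6 :

N/(N+1) = 1 - 2^-6 = 63/64  ->  N = 63

so the shift-by-6 update acts like a counting update that always thinks it has seen about 63 previous events (and the shift-by-3 one, about 7).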

So Ilia's probability update is a 2^3-1 and 2^6-1 window size, and then averaged. That's a very simple and neat idea that never occurred to me - use two simple probability estimators, one that adapts very fast and one that adapts more slowly, and just blend them.

8/10/2010

08-10-10 - Transmission of Huffman Trees

Transmission of Huffman Trees is one of those peripheral problems of compression that has never been properly addressed. There's not really any research literature on it, because in the N -> infinity case it disappears.

Of course in practice, it can be quite important, particularly because we don't actually just send one huffman tree per file. All serious compressors that use huffman resend the tree every so often. For example, to compress bytes what you might do is extend your alphabet to [0..256] inclusive, where 256 means "end of block" , when you decode a 256, you either are at the end of file, or you read another huffman tree and start on the next block. (I wrote about how the encoder might make these block split decisions here ).
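
So the decoder's outer loop looks roughly like this (a sketch with hypothetical function names, just to show the block structure) :

HuffReadTree(&huff);            // tree for the first block

for(;;)
{
    int sym = HuffDecode(&huff);
    if ( sym == 256 )           // end of block
    {
        if ( BitInput_EOF() ) break;   // end of file
        HuffReadTree(&huff);           // else read the next block's tree
        continue;
    }
    *out++ = (U8) sym;
}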

So how might you send a Huffman tree?

For background, you obviously do not want to actually send the codes. The Huffman code value should be implied by the symbol identity and the code length. The so-called "canonical" codes are created by assigning codes in numerical counting up order to symbols of the same length in their alphabetical order. You also don't need to send the character counts and have the decoder make its own tree, you send the tree directly in the form of code lengths.

So in order to send a canonical tree, you just have to send the code lens. Now, not all symbols in the alphabet may occur at all in the block. Those technically have a code length of "infinite" but most people store them as code length 0 which is invalid for characters that do occur. So you have to code :


which symbols occur at all
which code lengths occur
which symbols that do occur have which code length

Now I'll go over the standard ways of doing this and some ideas for the future.

The most common way is to make the code lengths into an array indexed by symbol and transmit that array. Code lengths are typically in [1,31] (or even less [1,16] , and by far most common is [4,12]), and you use 0 to indicate "symbol does not occur". So you have an array like :


{ 0 , 0 , 0 , 4 , 5 , 7 , 6, 0 , 12 , 5 , 0, 0, 0 ... }

1. Huffman the huffman tree ! This code length array is just another array of symbols to compress - you can of course just run your huffman encoder on that array. In a typical case you might have a symbol count of 256 or 512 or so, so you have to compress that many bytes, and then your "huffman of huffmans" will have a symbol count of only 16 or so, so you can then send the tree for the secondary huffman in a simpler scheme.

2. Delta from neighbor. The code lens tend to be "clumpy" , that is , they have correlation with their neighbors. The typical way to model this is to subtract each (non-zero) code len from the last (non-zero) code len, thus turning them into deltas from neighbors. You can then take these signed deltas and "fold up" the negatives to make them unsigned (a sketch of the folding is below, after this list) and then use one of the other schemes for transmitting them (such as huffman of huffmans). (actually delta from an FIR or IIR filter of previous is better).

3. Runlens for zeros. The zeros (does not occur) in particular tend to be clumpy, so most people send them with a runlen encoder.

4. Runlens of "same". LZX has a special flag to send a bunch of codelens in a row with the same value.

5. Golomb or other "variable length coding" scheme. The advantage of this over Huffman-of-huffmans is that it can be adaptive, by adjusting the golomb parameter as you go. (see for example on how to estimate golomb parameters). The other advantage is you don't have to send a tree for the tree.

6. Adaptive Arithmetic Code the tree! Of course if you can Huffman or Golomb code the tree you can arithmetic code it. This actually is not insane; the reason you're using Huffman over Arithmetic is for speed, but the Huffman will be used on 32k symbols or so, and the arithmetic coder will only be used on the 256-512 or so Huffman code lengths. I don't like this just because it brings in a bunch more code that I then have to maintain and port to all the platforms, but it is appealing because it's much easier to write an adaptive arithmetic coder that is efficient than any of these other schemes.

BTW That's a general point that I think is worth stressing : often you can come up with some kind of clever heuristic bit packing compression scheme that is close to optimal. The real win of adaptive arithmetic coding is not the slight gain in efficiency, it's the fact that it is *so* much easier to compress anything you throw at it. It's much more systematic and scientific, you have tools, you make models, you estimate probabilities and compress them. You don't have to sit around fiddling with "oh I'll combine these two symbols, then I'll write a run length, and this code will mean switch to a different coding", etc.
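
(back in #2 I mentioned folding the signed deltas up to unsigned ; one common mapping, just as a sketch :)

// 0,-1,+1,-2,+2,... -> 0,1,2,3,4,...
unsigned folded = ( delta >= 0 ) ? (unsigned)( delta*2 ) : (unsigned)( -delta*2 - 1 );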

Okay, that's all standard review stuff, now let's talk about some new ideas.

One issue that I've been facing is that coding the huffman tree in this way is not actually very nice for the decoder to be able to very quickly construct trees. (I wind up seeing the build tree time show up in my profiles, even though I only build the tree 5-10 times per 256k symbols). The issue is that it's in the wrong order. To build the canonical huffman code, what you need is the symbols in order of codelen, from lowest codelen to highest, and with the symbols sorted by id within each codelen. That is, something like :


codeLen 4 : symbols 7, 33, 48
codeLen 5 : symbols 1, 6, 8, 40 , 44
codeLen 7 : symbols 3, 5, 22
...

obviously you can generate this from the list of codelens per symbol, but it requires a reshuffle which takes time.
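
(the reshuffle in question is basically a counting sort ; something like this sketch, with made-up names :)

int numOfLen[MAX_CODELEN+1] = { 0 };
for(int s=0;s<numSymbols;s++) numOfLen[ codeLenOfSym[s] ] ++;

int start[MAX_CODELEN+2];
start[0] = 0;
for(int l=0;l<=MAX_CODELEN;l++) start[l+1] = start[l] + numOfLen[l];

// symbols of equal codeLen come out in increasing symbol id :
// (codeLen 0 = "does not occur" lands at the front and is skipped)
for(int s=0;s<numSymbols;s++)
    symbolsSortedByLen[ start[ codeLenOfSym[s] ] ++ ] = s;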

So, maybe we could send the tree directly in this way?

One approach is through counting combinations / enumeration . For each codeLen, you send the # of symbols that have that codeLen. Then you have to select the symbols which have that codelen. If there are M symbols of that codeLen and N remaining unclaimed symbols, the number of ways is N!/(M!*(N-M)!) , and the number of bits needed to send the combination index is log2 of that. Note in this scheme you should also send the positions of the "not present" codeLen=0 group, but you can skip sending the entire group that's largest. You should also send the groups in order of smallest to largest (actually in order of size or *complement* size - a group that's nearly full is as good as a group that's nearly empty).

I think this is an okay way to send huffman trees, but there are two problems : 1. it's slow to decode a combination index, and 2. it doesn't take advantage of modelling clumpiness.

Another similar approach is binary group flagging. For each codeLen, you want to specify which remaining symbols are of that codelen or not of that codelen. This is just a bunch of binary off/on flags. You could send them with a binary RLE coder, or the elegant thing would be Group Testing. Again the problem is you would have to make many passes over the stream and each time you would have to exclude already done ones.

(ADDENDUM : a better way to do this which takes more advantage of "clumpiness" is like this : first code a binary event for each symbol to indicate codelen >= 1 (vs. codeLen < 1). Then, on the subset that is >= 1, code an event for is it >= 2, and so on. This is the same number of binary flags as the above method, but when the clumpiness assumption is true this will give you flags that are very well grouped together, so they will code well with a method that exploits coherent binary flags (such as runlengths)).

Note that there's another level of redundancy that's not being exploited by any of these coders. In particular, we know that the tree must be a "prefix code" , that is satisfy Kraft, (Sum 2^-L = 1). This constrains the code lengths. (this is most extreme for the case of small trees; for example with a two symbol tree the code lengths are completely specified by this; on a three symbol tree you only have one free choice - which one gets the length 1, etc).

Another idea is to use MTF for the codelengths instead of delta from previous. I think that this would be better, but it's also slower.

Finally when you're sending multiple trees in a file you should be able to get some win by making the tree relative to the previous one, but I've found this is not trivial.

I've tried about 10 different schemes for huffman tree coding, but I'm not going to have time to really solve this issue, so it will remain neglected as it always has.

08-10-10 - One Really nice thing about the PS3

Is that PS3RUN takes command line args and my app just gets argc/argv. And I get stdin/stdout that just works, and direct access to the host disk through app_home/. That is all hella nice.

It means I can make command line test apps for regression and profiling and just run them and pass in file name arguments for testing on.

08-10-10 - HeapAllocAligned

How the fuck is there no HeapAllocAligned on Win32 ?

and yes I know I can use clib or do it myself or whatever, but still, WTF ?

08-10-10 - A small note on memset16

On the SPU I'm now making heavy use of a primitive op I call "memset16" ; by this I don't mean that it must be 16-byte aligned, but rather that it memsets 16-byte patterns, not individual bytes :

void rrSPU_MemSet16(void * ptr,qword pattern,int count)
{
    qword * RADRESTRICT p = (qword *) ptr;
    char * end = ((char *)ptr + count );

    while ( (char *)p < end )
    {
        *p++ = pattern;
    }
}

(and yes I know this could be faster, this is the simple version for readability).

The interesting thing has been that taking a 16-byte pattern as input actually makes it way more useful than normal byte memset. I can now also memset shorts and words, floats, doubles, and vectors! So this is now the way I do any array assignments when a chunk of consecutive elements are the same. eg. instead of doing :


float fval = param;
float array[16];

for(int i=0;i<16;i++) array[i] = fval;

you do :

float array[16];
qword pattern = si_from_float(fval);

rrSPU_MemSet16(array,pattern,sizeof(array));

In fact it's been so handy that I'd like to have it on other platforms, at least up to U32.
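
(on a normal platform the plain U32 version would just be something like this trivial sketch :)

void MemSet32(void * ptr, U32 pattern, int count)
{
    U32 * p = (U32 *) ptr;
    U32 * end = (U32 *)( (char *)ptr + count );

    while ( p < end )
    {
        *p++ = pattern;
    }
}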

8/06/2010

08-06-10 - The Casey Method

The "Casey Method" of dealing with boring coding tasks is to instead create a whole new code structure to do them in. eg. need to write some boring GUI code? Write a new system for making GUIs.

When I was young I would reinvent the wheel for everything stupidly. Then I went through a maturation phase where I cut back on that and tried to use existing solutions and be sensible. But now I'm starting to see that a certain judicious amount of "Casey Method" is a good idea for certain programmers.

For example, if I had to write a game level editor tool in C# the straightforward way, it would probably be the most efficient way, but that's irrelevant, because I would probably kill myself before I finished, or just quit the job, so the actual time to finish would be infinite. On the other hand, if I invented a system to write the tool for me, and it was fun and challenging, it might take longer in theory, and not be the most efficient/sensible/responsible/mature way to go, but it would keep me engaged enough that I would actually do it.

08-06-10 - Visual Studio File Associations

This is my annoyance of the moment. I rarely double-click CPP/H files, but once in a while it's handy.

On my work machine, I currently have VC 2003,2005,2008, and 2010 installed. Which one should the CPP/H file open in?

The right answer is obviously : whichever one is currently running.

And of course there's no way to do that.

Almost every day recently I've had a moment where I am working away in VC ver. A , and I double click some CPP file, and it doesn't pop up, and I'm like "WTF", and then I hear my disk grinding, and I'm like "oh nooes!" , and then the splash box pops up announcing VC ver. B ; thanks a lot guys, I'm really glad you started a new instance of your gargantuan hog of a program so that I could view one file.

Actually if I don't have a DevStudio currently running, then I'd like CPP/H to just open in notepad. Maybe I have to write my own tool to open CPP/H files. But even setting the file associations to point at my own tool is a nightmare. Maybe I have to write my own tool to set file associations. Grumble.

(for the record : I have 2008 for Xenon, 2005 is where I do most of my work, I keep 2003 to be able to build some old code that I haven't ported to 2005 templates yet, and 2010 to check it out for the future).

08-06-10 - More SPU

1. CWD instruction aligns the position you give it to the size of the type (byte,word,dword). God dammit. I don't see this documented anywhere. CWD is supposed to generate a shuffle key for word insertion at a position (regA). In fact it generates a shuffle key for insertion at position ( regA & ~3 ). That means I can't use it to do arbitrary four byte moves.

2. Shufb is not mod 32. Presumably because it has those special 0 and FF selections. This is not a huge deal because you can fix it by doing AND splats(31) , but that is enough slowdown that it ruins shufb as a fast path for doing byte indexing. (people use rotqby instead because that is mod 16).

A related problem for Shufb is that there's no Byte Add. If you had byte add, and shufb was mod 32, then you could grab a 16-byte portion of two quads just by adding an offset to the shuffle mask.

In order to deal with this, you have to first mod your offset down to [0,15] so that you won't overflow, then you have to see if your original offset before modding had the 16 bit set, and if so, swap the reg order you pass to shuffle. If you had a byte add and shuffle was mod 32, you wouldn't have to do any of that and it would be way faster to do grabs of arbitrary qwords from two neighboring qwords.

(also there's no fast way to generate the typical shuffle mask {010203 ...}. )

3. You can make splats a few different ways; one is to rely on the "spu_splats" builtin, which figures out how to spread the value (usually using FSMB or some such variant). But you can also write it directly using something like :

(vec_int4) { -1,-1,-1,-1 }

which the compiler will either turn into loading a constant or some code to generate it, depending on what it thinks is faster.

Some important constants are hard to generate, the most common being (vec_uchar16){0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15} , so you might want to load that into a register and then pass it around if you need it much. (if you are using the stdlib memcpy at all, you can also get it from __Shuffles).

4. Something that I always want with data compression and bit manipulation is a "shift right with 1 bits coming in". Another one is bit extract and bit insert , grab N bits from position M. Also just "mask with N bits" would always be nice, so I don't have to do ((1 << N)-1) (or whatever) to generate masks. Obviously the SPU is not intended for bit manipulation. I'm sure getting Blu-Ray working on it was a nightmare.
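
(for reference, in plain C those ops are trivial, they're just not single instructions on the SPU ; a sketch :)

// shift right with 1 bits coming in (n in [0,31]) :
U32 shr_ones = ( x >> n ) | ~( ~0U >> n );

// grab N bits from position M :
U32 field = ( x >> M ) & ( (1U << N) - 1 );

// mask with N bits :
U32 mask = (1U << N) - 1;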

5. If you build with -fpic (to make your ELF relocatable), then all your static addresses are relative to r126, which means if you just reference some static in your inner loop it will do a ton of math all the time (the add r126 is not a big deal, but just fetching the address is a lot of work) ; as usual if you just copy it out to a local :


   combinedDecodeTable = s_combinedDecodeTable;

the compiler will "optimize" out the copy and will do all the math to compute s_combinedDecodeTable in your inner loop. So you have to use the "volatile trick" :

    volatile UINTa v_hack;
    v_hack = (UINTa) s_combinedDecodeTable;
    const CombinedDecodeTable * RADRESTRICT const combinedDecodeTable = (const CombinedDecodeTable * RADRESTRICT const) v_hack;

sigh.

6. memcpy seems to be treated differently than __builtin_memcpy by the compiler. They both call the same code, but sometimes calling memcpy() doesn't inline, but __builtin_memcpy does. I don't know WTF is up with that (and yes I have -fbuiltins set and all that shit, but maybe there's some GCC juju I'm missing). Also, it is a decent SPU-optimized memcpy, but the compiler doesn't have a true "builtin" knowledge of memcpy for SPU the way it does on some platforms - eg. it doesn't know that for types that are inherently 16-byte aligned it can avoid the rollup/rolldown, it just always calls memcpy. So if you're doing a lot of memcpys of 16-byte aligned stuff you should probably have your own.

7. This one is a fucking nightmare : I put in some "scheduling barriers" (asm volatile empty) to help me look at the disassembly for optimization (it makes it much easier to trace through because shit isn't spread all over). After debugging a bit, I forgot to take out the scheduling barriers and ran my speed test again -

it was faster.

Right now my code is 5-10% faster with a few scheduling barriers manually inserted (19 clocks per symbol vs 21). That's fucking bananas, it means the compiler's scheduler is fucking up somehow. It sucks because there's no way for me to place them in a systematic way, I have to randomly try moving them around and see what's fastest.

8. Lots of little things are painfully important on the SPU. For example, obviously most of your functions should be inline, but sometimes you have to make them forceinline (and it's a HUGE difference if you don't because once a val goes through the stack it really fucks you), but then sometimes it's faster if it's not inline. Another nasty one is manual scheduling. Obviously the compiler does a lot of scheduling for you (though it fucks up as noted previously), but in many cases it can't figure out that two ops are actually commutative. eg. say you have something like A() which does shared++ , and B() also does shared++ , then the compiler can't tell that the sequence {A B} could be reordered to {B A} , so you have to manually try both ways. Another case is :


A()

if ( x )
{
    B()
    .. more ..
}
else
{
    B()
    .. more ..
}

and again the compiler can't figure out that the B() could be hoisted out of the branches (more normally you would probably write it with B outside the branch, and the compiler won't bring it in). It might be faster inside the branches or outside, so you have to try both. Branch order reorganization can also help a lot. For example :

if ( x )
    if ( y )
        .. xy case
    else
        .. xNy case
else
    if ( y )
        .. Nxy case
    else
        .. NxNy case

obviously you could instead do the if(y) first then the if(x). Again the compiler can't do this for you and it can make a big difference.

This kind of shit affects every platform of course, but on a nice decent humane architecture like a Core iX/x86 it's < 1%. On SPU it's 5-10% which is nothing to sneeze at and forces you to do this annoying shit.

9. One thing that's annoying in general about the SPU (and even to some extent the PPU) is that some code patterns can be *SO* slow that your non-inner-loop parts of the code dominate.

On the wonderful PC/x86 , you basically can just find your hot spots and optimize those little routines, and ignore all your glue and startup code, cuz it will be reasonably fast. Not true on the SPU. I had some setup code that built some acceleration tables that's only called once, and then an inner loop that's called two hundred thousand times. The setup code takes 10% of the total time. On the PC it doesn't even show up as a blip. (On the Xenon you can get this same sort of thing if your setup code happens to do signed-16-bit loads, variable shifts, or load-hit-stores). On the SPU my setup code that was so bad was doing lots of byte reads & writes, and lots of branches.

The point is on the PC/x86 you can find 10% of your code base that you have to worry about for speed, and the other 90% you just make work. On the SPU you have to be aware of speed on almost all of your code. Blurg that blows so bad.

10. More generally, I found it just crucial to have my SPU ELF reloader running all the time. Every time I touch the code in any way, I hit "build" and then look over and see my speed, because my test app is just running on the PS3 all the time reloading the SPU ELF and running the test. Any time you do the most innocuous thing - you move some code around, change some variable types - check the speed, because it may have made a surprisingly big difference. (*)

(* = there is a disadvantage to this style of optimization, because it does lead you into local minima; that is, I wind up being a greedy perturbation search in the code space; sometimes in order to find real minima you have to take a big step backwards).

11. For testing, be careful about what args you pass to system_init ; make sure you aren't sharing your SPUs with the management processes. If you see unreliable timing, something is wrong. (5,1) seems to work okay.