12/23/2015

Oodle Results Update

Major improvements coming in Oodle 2.1.2

Fabian's BitKnit is coming to Oodle. BitKnit is a pretty unique LZ; it makes clever use of the properties of RANS to hit a space-speed tradeoff point that nothing else does. It gets close to LZMA compression levels (sometimes more, sometimes less) while being more like zlib speed.

LZNA and LZNIB are also much improved. The bit streams are the same, but we found some little tweaks in the encoders & decoders that make a significant difference. (5-10%, but that's a lot in compression, and they were already world-beating, so the margin is just bigger now). The biggest improvement came from some subtle issues in the parsers.

As usual, I'm trying to be as fair as possible to the competition. Everything is run single threaded. LZMA and LZHAM are run at max compression with context bits at their best setting. Compressors like zlib that are just not worth considering are excluded; I've tried to include the strongest competition I know of right now. This is my test of "slowies", that is, all compressors set at high (not max) compression levels. ("oohc" is Oodle Optimal1; my compression actually goes up quite a bit at higher levels, but I consider anything that encodes below 2 mb/s to be just too slow to consider.)

The raw data : ("game test set")


by ratio:
oohcLZNA    :  2.88:1 ,    5.3 enc mb/s ,  135.0 dec mb/s
lzma        :  2.82:1 ,    2.9 enc mb/s ,   43.0 dec mb/s
oohcBitKnit :  2.76:1 ,    6.4 enc mb/s ,  273.3 dec mb/s
lzham       :  2.59:1 ,    1.8 enc mb/s ,  162.9 dec mb/s
oohcLZHLW   :  2.38:1 ,    4.2 enc mb/s ,  456.3 dec mb/s
zstdhc9     :  2.11:1 ,   29.5 enc mb/s ,  558.0 dec mb/s
oohcLZNIB   :  2.04:1 ,   11.5 enc mb/s , 1316.4 dec mb/s

by encode speed:
zstdhc9     :  2.11:1 ,   29.5 enc mb/s ,  558.0 dec mb/s
oohcLZNIB   :  2.04:1 ,   11.5 enc mb/s , 1316.4 dec mb/s
oohcBitKnit :  2.76:1 ,    6.4 enc mb/s ,  273.3 dec mb/s
oohcLZNA    :  2.88:1 ,    5.3 enc mb/s ,  135.0 dec mb/s
oohcLZHLW   :  2.38:1 ,    4.2 enc mb/s ,  456.3 dec mb/s
lzma        :  2.82:1 ,    2.9 enc mb/s ,   43.0 dec mb/s
lzham       :  2.59:1 ,    1.8 enc mb/s ,  162.9 dec mb/s

by decode speed:
oohcLZNIB   :  2.04:1 ,   11.5 enc mb/s , 1316.4 dec mb/s
zstdhc9     :  2.11:1 ,   29.5 enc mb/s ,  558.0 dec mb/s
oohcLZHLW   :  2.38:1 ,    4.2 enc mb/s ,  456.3 dec mb/s
oohcBitKnit :  2.76:1 ,    6.4 enc mb/s ,  273.3 dec mb/s
lzham       :  2.59:1 ,    1.8 enc mb/s ,  162.9 dec mb/s
oohcLZNA    :  2.88:1 ,    5.3 enc mb/s ,  135.0 dec mb/s
lzma        :  2.82:1 ,    2.9 enc mb/s ,   43.0 dec mb/s

-----------------------------------------------------------------
Log opened : Fri Dec 18 17:56:44 2015

total : oohcLZNIB   : 167,495,105 ->81,928,287 =  3.913 bpb =  2.044 to 1 
total : encode           : 14.521 seconds, 3.39 b/kc, rate= 11.53 M/s
total : decode           : 0.127 seconds, 386.85 b/kc, rate= 1316.44 M/s
total : encode+decode    : 14.648 seconds, 3.36 b/kc, rate= 11.43 M/s
total : oohcLZHLW   : 167,495,105 ->70,449,624 =  3.365 bpb =  2.378 to 1 
total : encode           : 40.294 seconds, 1.22 b/kc, rate= 4.16 M/s
total : decode           : 0.367 seconds, 134.10 b/kc, rate= 456.33 M/s
total : encode+decode    : 40.661 seconds, 1.21 b/kc, rate= 4.12 M/s
total : oohcLZNA    : 167,495,105 ->58,242,995 =  2.782 bpb =  2.876 to 1 
total : encode           : 31.867 seconds, 1.54 b/kc, rate= 5.26 M/s
total : decode           : 1.240 seconds, 39.68 b/kc, rate= 135.04 M/s
total : encode+decode    : 33.107 seconds, 1.49 b/kc, rate= 5.06 M/s
total : oohcBitKnit : 167,495,105 ->60,763,350 =  2.902 bpb =  2.757 to 1 
total : encode           : 26.102 seconds, 1.89 b/kc, rate= 6.42 M/s
total : decode           : 0.613 seconds, 80.33 b/kc, rate= 273.35 M/s
total : encode+decode    : 26.714 seconds, 1.84 b/kc, rate= 6.27 M/s
total : zstdhc9     : 167,495,105 ->79,540,333 =  3.799 bpb =  2.106 to 1 
total : encode           : 5.671 seconds, 8.68 b/kc, rate= 29.53 M/s
total : decode           : 0.300 seconds, 163.98 b/kc, rate= 558.04 M/s
total : encode+decode    : 5.971 seconds, 8.24 b/kc, rate= 28.05 M/s
total : lzham       : 167,495,105 ->64,682,721 =  3.089 bpb =  2.589 to 1 
total : encode           : 93.182 seconds, 0.53 b/kc, rate= 1.80 M/s
total : decode           : 1.028 seconds, 47.86 b/kc, rate= 162.86 M/s
total : encode+decode    : 94.211 seconds, 0.52 b/kc, rate= 1.78 M/s
total : lzma        : 167,495,105 ->59,300,023 =  2.832 bpb =  2.825 to 1 
total : encode           : 57.712 seconds, 0.85 b/kc, rate= 2.90 M/s
total : decode           : 3.898 seconds, 12.63 b/kc, rate= 42.97 M/s
total : encode+decode    : 61.610 seconds, 0.80 b/kc, rate= 2.72 M/s
-------------------------------------------------------

11/13/2015

Flipped encodemod

A while ago I wrote a series on Encoding Values in Bytes in which I talk about the "EncodeMod" varint encoding.

EncodeMod is just the idea that you send each token (byte, word, nibble, whatever) with two ranges; in one range the values are terminal (no more tokens), while in the other range it means "this is part of the value" but more tokens follow. You can then optimize the division point for a wide range of applications.

In my original pseudo-code I was writing the ranges with the "more tokens follow" range at the bottom, and the terminal values at the top. That is :


Specifically for the case of byte tokens and pow2 mod

mod = 1<<bits

in each non-terminal token we send "bits" bits of the value that doesn't currently fit

upper = 256 - mod

"upper" is the number of terminal values we can send in the current token

I was writing

[0,mod) = bits of value + more tokens follow
[mod,256) = terminal value

Fabian spotted that the code is slightly simpler if you switch the ranges. Use the low range [0,upper) for terminal values and [upper,256) for non-terminal values. The range sizes are the same, so you get the same encoded lengths.

(BTW it also occurred to me when learning about ANS that EncodeMod is reminiscent of simple ANS. You're trying to send a bit - "do more bytes follow". You're putting that bit in a token, and you have some extra information you can send with that bit - so just put some of your value in there. The number of slots for bit=0 and 1 should correspond to the probability of each event.)

The switched encodemod is :


U8 *encmod(U8 *to, int val, int bits)
{
    const int upper = 256 - (1<<bits); // in binary this is 11111000 or similar : (8-bits) ones then (bits) zeros
    while (val >= upper)
    {
        *to++ = (U8) (upper | val);
        val = (val - upper) >> bits;
    }

    *to++ = (U8) val;
    return to;
}


const U8 *decmod(int *outval, const U8 *from, int bits)
{
    const int upper = 256 - (1<<bits);
    int shift = 0;
    int val = 0;

    for (;;)
    {
        int byte = *from++;
        val += byte << shift;
        if (byte < upper)
            break;
        shift += bits;
    }

    *outval = val;
    return from;
}
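
(A quick round-trip check of the pair - just a sketch, the loop bounds and test values are mine, not from the post; needs <assert.h> :)

U8 buf[16];
for (int val = 0; val < 100000; val++)
{
    U8 *enc_end = encmod(buf, val, 3);        // bits = 3 -> upper = 248
    int decoded;
    const U8 *dec_end = decmod(&decoded, buf, 3);
    assert( decoded == val && dec_end == enc_end );
}
// eg. with bits = 3 : val = 250 encodes as { 250, 0 } :
//  the first byte 250 is >= 248 so more bytes follow ; the trailing 0 < 248 terminates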

The simplification of the encoder here :

    *to++ = (U8) (upper | val);
    val = (val - upper) >> bits;

written in long-hand is :

    low = val & ((1<<bits)-1);
    *to++ = upper + low;  // (same as upper | low, same as upper | val)
    val -= upper;
    val >>= bits;

or

    val -= upper;
    low = val & ((1<<bits)-1);
    *to++ = upper + low;  // (same as upper | low, same as upper | val)
    val >>= bits;

and the val -= upper can be done early or late : since upper is a multiple of (1<<bits) (its low "bits" bits are zero) and val >= upper, the subtraction doesn't touch "low".

Basically, by using "upper" like this, the mask of the low bits and the add of upper are done in one op.

10/17/2015

Huffman Performance

I'm following Yann Collet's nice blog series on Huffman. I thought I'd have my own look.

Background : 64-bit mode. 12-bit lookahead table, and 12-bit codelen limit, so there's no out-of-table case to handle.

Here's conditional bit buffer refill, 32-bits refilled at a time, aligned refill. Always >= 32 bits in buffer so you can do two decode ops per refill :


        loop
        {
            uint64 peek; int cl,sym;
            
            peek = decode_bits >> (64 - CODELEN_LIMIT);
            cl = codelens[peek];
            sym = symbols[peek];
            decode_bits <<= cl; thirtytwo_minus_decode_bitcount += cl;
            *decodeptr++ = (uint8)sym;
            
            peek = decode_bits >> (64 - CODELEN_LIMIT);
            cl = codelens[peek];
            sym = symbols[peek];
            decode_bits <<= cl; thirtytwo_minus_decode_bitcount += cl;
            *decodeptr++ = (uint8)sym;
            
            if ( thirtytwo_minus_decode_bitcount > 0 )
            {
                uint64 next = _byteswap_ulong(*decode_in++);
                decode_bits |= next << thirtytwo_minus_decode_bitcount;
                thirtytwo_minus_decode_bitcount -= 32;
            }
        }

325 mb/s.

(note that removing the bswap to have a little-endian u32 stream does almost nothing for performance, less than 1 mb/s)

The next option is : branchless refill, unaligned 64-bit refill. You always have >= 56 bits in buffer, now you can do 4 decode ops per refill :

        loop
        {
            // refill :
            uint64 next = _byteswap_uint64(*((uint64 *)decode_in));
            bits |= next >> bitcount;
            int bytes_consumed = (64 - bitcount)>>3;
            decode_in += bytes_consumed;
            bitcount += bytes_consumed<<3;
        
            uint64 peek; int cl; int sym;
            
            #define DECONE() \
            peek = bits >> (64 - CODELEN_LIMIT); \
            cl = codelens[peek]; sym = symbols[peek]; \
            bits <<= cl; bitcount -= cl; \
            *decodeptr++ = (uint8) sym;
            
            DECONE();
            DECONE();
            DECONE();
            DECONE();
            
            #undef DECONE
        }
373 mb/s

These so far have both been "traditional Huffman" decoders. That is, they use the next 12 bits from the bit buffer to look up the Huffman decode table, and they stream bits into that bit buffer.

There's another option, which is "ANS style" decoding. To do "ANS style" you keep the 12-bit "peek" as a separate variable, and you stream bits from the bit buffer into the peek variable. Then you don't need to do any masking or shifting to extract the peek.

The naive "ANS style" decode looks like this :


        loop
        {
            // refill bits :
            uint64 next = _byteswap_uint64(*((uint64 *)decode_in));
            bits |= next >> bitcount;
            int bytes_consumed = (64 - bitcount)>>3;
            decode_in += bytes_consumed;
            bitcount += bytes_consumed<<3;
        
            int cl; int sym;
            
            #define DECONE() \
            cl = codelens[state]; sym = symbols[state]; \
            state = ((state << cl) | (bits >> (64 - cl))) & ((1 << CODELEN_LIMIT)-1); \
            bits <<= cl; bitcount -= cl; \
            *decodeptr++ = (uint8) sym;
            
            DECONE();
            DECONE();
            DECONE();
            DECONE();
            
            #undef DECONE
        }

332 mb/s

But we can use an analogy to the "next_state" of ANS. In ANS, the next_state is a complex thing with certain rules (as we covered in the past). With Huffman it's just this bit of math :


    next_state[state] = (state << cl) & ((1 << CODELEN_LIMIT)-1);
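
(concretely, building that table is just a loop over all 12-bit states, reusing the codelens[] table from above - a sketch :)

    for (int state = 0; state < (1 << CODELEN_LIMIT); state++)
        next_state_table[state] = (state << codelens[state]) & ((1 << CODELEN_LIMIT)-1);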

So we can build that table, and use a "fully ANS" decoder :


        loop
        {
            // refill bits :
            uint64 next = _byteswap_uint64(*((uint64 *)decode_in));
            bits |= next >> bitcount;
            int bytes_consumed = (64 - bitcount)>>3;
            decode_in += bytes_consumed;
            bitcount += bytes_consumed<<3;
        
            int cl; int sym;
            
            #define DECONE() \
            cl = codelens[state]; sym = symbols[state]; \
            state = next_state_table[state] | (bits >> (64 - cl)); \
            bits <<= cl; bitcount -= cl; \
            *decodeptr++ = (uint8) sym;
            
            DECONE();
            DECONE();
            DECONE();
            DECONE();
            
            #undef DECONE
        }

415 mb/s

Fastest! It seems the fastest Huffman decoder is a TANS decoder. (*1)

(*1 = well, on this machine anyway; these are all so close that architecture and exact usage matters massively; in particular we're relying heavily on fast unaligned reads, and doing four unrolled decodes in a row isn't always useful)

Note that this is a complete TANS decoder save one small detail - in TANS the "codelen" (previously called "numbits" in my TANS code) can be 0. The part where you do :


(bits >> (64 - cl))

can't be used if cl can be 0 (shifting a 64-bit value by 64 bits is undefined). In TANS you either have to check for zero, or you have to use the method of

((bits >> 1) >> (63 - cl))

which makes TANS a tiny bit slower - 370 mb/s for TANS on the same file on my machine.

(all times reported are non-interleaved, and without table build time; Huffman is definitely faster to build tables, and faster to decode packed/transmitted codelens as well)

NOTE : earlier version of this post had a mistake in bitcount update and worse timings.


Some tiny caveats :

1. The TANS way means you can't (easily) mix different peek amounts. Say you're doing an LZ, you might want an 11-bit peek for literals, but for the 4 bottom bits you only need an 8-bit peek. The TANS state has the # of bits to peek baked in, so you can't just use that. With the normal bit-buffer style Huffman decoders you can peek any # of bits you want. (though you could just do the multi-state interleave thing here, keeping with the TANS style).

2. Doing Huffman decodes without a strict codelen limit the TANS way is much uglier. With the bits-at-top bitbuffer method there are nice ways to do that.

3. Getting raw bits the TANS way is a bit uglier. Say you want to grab 16 raw bits; you could get 12 from the "state" and then 4 more from the bit buffer. Or just get 16 directly from the bit buffer which means they need to be sent after the next 12 bits of Huffman in a weird TANS interleave style. This is solvable but ugly.

4. For the rare special case of an 8 or 16-bit peek-ahead, you can do even faster than the TANS style by using a normal bit buffer with the next bits at bottom. (either little endian or big-endian but rotated around). This lets you grab the peek just by using "al" on x86.
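
A sketch of what that looks like (my code, not from the post; assumes a bits-at-bottom buffer and an 8-bit table) :

    // bits-at-bottom buffer with an 8-bit codelen limit :
    // the peek is just the low byte of the bit buffer ("al" on x86)
    peek = (uint8) bits;
    cl = codelens[peek]; sym = symbols[peek];
    bits >>= cl; bitcount -= cl;   // consume from the bottom
    *decodeptr++ = (uint8) sym;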

9/19/2015

Library Writing Realizations

Some learnings about library writing, N years on.

X. People will just copy-paste your example code.

This is obvious but is something to keep in mind. Example code should never be sketches. It should be production ready. People will not read the comments. I had lots of spots in example code where I would write comments like "this is just a sketch and not ready for production; production code needs to check error returns and handle failures and be endian-independent" etc., and of course people just copy-pasted it and didn't change it. That's not their fault, that's my fault. Example code is one of the main ways people get into your library.

X. People will not read the docs.

Docs are almost useless. Nobody reads them. They'll read a one page quick start, and then they want to just start digging in writing code. Keep the intros very minimal and very focused on getting things working.

Also be aware that if you feel you need to write a lot of docs about something, that's a sign that maybe things are too complicated.

X. Peripheral helper features should be cut.

Cut cut cut. People don't need them. I don't care how nice they are, how proud of them you are. Pare down mercilessly. More features just confuse and crud things up. This is like what a good writer should do. Figure out what your one core function really is and cut down to that.

If you feel that you really need to include your cute helpers, put them off on the side, or put them in example code. Or even just keep them in your pocket at home so that when someone asks about "how I do this" you can email them out that code.

But really just cut them. Being broad is not good. You want to be very narrow. Solve one clearly defined problem and solve it well. Nobody wants a kitchen sink library.

X. Simplicity is better.

Make everything as simple as possible. Fewer arguments on your functions. Remove extra functions. Cut everywhere. If you sacrifice a tiny bit of possible efficiency, or lose some rare functionality, that's fine. Cut cut cut.

For example, to plug in an allocator for Oodle used to require 7 function pointers : { Malloc, Free, MallocAligned, FreeSized, MallocPage, FreePage, PageSize }. (FreeSized for efficiency, and the Page stuff because async IO needs page alignment). It's now down to just 2 : { MallocAligned, Free }. Yes it's a tiny bit slower but who cares. (and the runtime can work without any provided allocators)

X. Micro-efficiency is not important.

Yes, being fast and lean is good, but not when it makes things too complex or difficult to use. There's a danger of a kind of mental-masturbation that us RAD-type guys can get caught in. Yes, your big stream processing stuff needs to be competitive (eg. Oodle's LZ decompress, or Bink's frame decode time). But making your Init() call take 100 clocks instead of 10,000 clocks is irrelevant to everyone but you. And if it requires funny crap from the user, then it's actually making things worse, not better. Having things just work reliably and safely and easily is more important than micro-efficiency.

For example, one mistake I made in Oodle is that the compressed streams are headerless; they don't contain the compressed or decompressed size. The reason I did that is because often the game already has that information from its own headers, so if I store it again it's redundant and costs a few bytes. But that was foolish - to save a few bytes of compressed size I sacrifice error checking, robustness, and convenience for people who don't want to write their own header. It's micro-efficiency that costs too much.

Another one I realized is a mistake : to do actual async writes on Windows, you need to call SetFileValidData on the newly enlarged file region. That requires admin privileges. It's too much trouble, and nobody really cares. It's not worth the mess. So in Oodle2 I just don't do that, and writes are no longer async. (everyone else who thinks they're doing async writes isn't actually, and nobody else actually checks on their threading the way I do, so it just makes me more like everyone else).

X. It should just work.

Fragile is bad. Any API's that have to go in some complicated sequence, do this, then this, then this. That's bad. (eg. JPEGlib and PNGlib). Things should just work as simply as possible without requirements. Operations should be single function calls when possible. Like if you take pointers in and out, don't require them to be aligned in a certain way or padded or allocated with your own allocators. Make it work with any buffer the user provides. If you have options, make things work reasonably with just default options so the user can ignore all the option setup if they want. Don't require Inits before your operations.

In Oodle2 , you just call Decompress(pointer,size,pointer) and it should Just Work. Things like error handling and allocators now just fall back to reasonable light weight defaults if you don't set up anything explicitly.

X. Special case stuff should be external (and callbacks are bad).

Anything that's unique to a few users, or that people will want to be different should be out of the library. Make it possible to do that stuff through client-side code. As much as possible, avoid callbacks to make this work, try to do it through imperative sequential code.

eg. if they want to do some incremental post-processing of data in place, it should be possible via : { decode a bit, process some, decode a bit , process some } on the client side. Don't do it with a callback that does decode_it_all( process_per_bit_callback ).

Don't crud up the library feature set trying to please everyone. Some of these things can go in example code, or in your "back pocket code" that you send out as needed.

X. You are writing the library for evaluators and new users.

When you're designing the library, the main person to think about is evaluators and new users. Things need to be easy and clear and just work for them.

People who actually license or become long-term users are not a problem. I don't mean this in a cruel way, we don't devalue them and just care about sales. What I mean is, once you have a relationship with them as a client, then you can talk to them, help them figure out how to use things, show them solutions. You can send them sample code or even modify the library for them.

But evaluators won't talk to you. If things don't just work for them, they will be frustrated. If things are not performant or have problems, they will think the library sucks. So the library needs to work well for them with no help from you. And they often won't read the docs or even use your examples. So it needs to go well if they just start blindly calling your APIs.

(this is a general principle for all software; also all GUI design, and hell just design in general. Interfaces should be designed for the novice to get into it easy, not for the expert to be efficient once they master it. People can learn to use almost any interface well (*) once they are used to it, so you don't have to worry about them.)

(* = as long as it's low latency, stateless, race free, reliable, predictable, which nobody in the fucking world seems to understand any more. A certain sequence of physical actions that you develop muscle memory for should always produce the same result, regardless of timing, without looking at the device or screen to make sure it's keeping up. Everyone who fails this (eg. everyone) should be fucking fired and then shot. But this is a bit off topic.)

X. Make the default log & check errors. But make the default reasonably fast.

This is sort of related to the evaluator issue. The defaults of the library need to be targeted at evaluators and new users. Advanced users can change the defaults if they want; eg. to ship they will turn off logging & error checking. But that should not be how you ship, or evaluators will trigger lots of errors and get failures with no messages. So you need to do some amount of error checking & logging so that evaluators can figure things out. *But* they will also measure performance without changing the settings, so your default settings must also be fast.

X. Make easy stuff easy. It's okay if complicated stuff is hard.

Kind of self explanatory. The API should be designed so that very simple uses require tiny bits of code. It's okay if something complicated and rare is a pain in the ass, you don't need to design for that; just make it possible somehow, and if you have to help out the rare person who wants to do a weird thing, that's fine. Specifically, don't try to make very flexible general APIs that can do everything & the kitchen sink. It's okay to have a super simple API that covers 99% of users, and then a more complex path for the rare cases.

7/26/2015

The Wait on Workers Problem

I'd like to open source my Oodle threading stuff. There's some cool stuff. Some day. Sigh.

This is an internal email I sent on 05-13-2015 :

Cliff notes : there's a good reason why OS'es use thread pools and fibers to solve this problem.

There's this problem that I call the "wait on workers problem". You have some worker threads. Worker threads pop pending work from a queue, do it, then post a completion event. You can't ever call Wait (Wait checks a condition, and if not set, puts the thread to sleep pending that condition) on them, because it could possibly deadlock you (no progress possible) since they could all go to sleep in waits, with work still pending and noone to do it. The most obvious example is just to imagine you only have 1 worker thread. Your worker thread does something like : { stuff spawn work2 Wait(work2); more stuff } Oh crap, work2 never runs because the Wait put me to sleep and there's no worker to do it. In Oodle the solution I use is that you should never do a real Wait on a worker, instead you have to "Yield". What Yield does is change your current work item back to Pending, but with the specified handle as a condition to being run. Then it returns back to the work dispatcher loop. So the above example becomes : [worker thread dispatch loop pops Work1] Work1: { stuff spawnm work2 Yield(work2); } [Work1 is put back on the pending list, with work2 as a condition] [worker thread dispatch loop pops Work2] Work2 Work2 posts completion [worker thread dispatch loop pops Work1] { more stuff } So. The Yield solution works to an extent, but it runs into problems. 1. I only have "shallow yield" (non-stack-saving yield), so the worker must manually save its state or stack variables to be able to resume. I don't have "deep yield" that can yield from deep within a series of calls, that would save the execution location and stack. This can be a major problem in practice. It means you can only yield from the top level, you can't ever be down inside some function calls and logic and decide you need to yield. It means all your threading branching has to be very linear and mapped out at the top level of the work function. It works great for simple linear processing like do an IO then yield on it, then process the results of the IO. It doesn't work great for more complicated general parallelism. 2. Because Yield is different from Wait, you can't share code, and you can still easily accidentally break the system by calling Wait. For example if you have a function like DoStuffInParallel , if you run that on a non-worker thread, it can launch some work items then Wait on them. You can't do that from a worker. You must rewrite it for being run from a worker to launch items then return a handle to yield on them (don't yield internally). It creates an ugly and difficult heterogeneity between worker threads and non-worker threads. So, we'd like to fix this. What we'd like is essentially "deep yield" and we want it to just be like an OS Wait, so that functions can be used on worker threads or non-worker threads without changing them. So my first naive idea was : "Wait on Workers" can be solved by making Wait a dispatch. Any time you call Wait, the system checks - am I a worker thread, and if so, instead of actually going into an OS wait, it pops and runs any runnable work. After completing each work item, it rechecks the wait condition and if it's set, stops dispatching and returns to the Wait-caller. If there is no runnable work, you go into an OS wait on either the original wait condition OR runnable work available. 
So the original example becomes : { stuff spawn work2 Wait(work2); [Wait sees we're a worker and runs the work dispatcher] [work dispatcher pops work2] { Work2 } [work dispatcher return sees work1 now runnable and returns] more stuff } Essentially this is using the actual stack to do stack-saving. Rather than trying to save the stack and instruction pointer, you just use the fact that they are saved by a normal function call & return. This method has minor disadvantages in that it can require a very large amount of stack if you go very deep. But the real problem is it can easily deadlock. It only works for tree-structured work, and Waits that are only on work items. If you have non-tree wait cycles, or waits on non-work-items, it can deadlock. Here's one example : Work1 : { stuff1 Wait on IO stuff2 } Work2 : { stuff1 Wait on Work1 stuff2 } with current Oodle system, you can make work like this, and it will complete. (*) In any system, if Work1 and Work2 get separate threads, they will complete. But in a Dispatch-on-Wait system, if the Wait on IO in Work1 runs Work2, it will deadlock. (* = the Oodle system ensures completability by only giving you a waitable handle to a work item when that work is enqueued to run. So it's impossible to make loops. But you can make something like the above by doing h1 = Run(Work1) Work2.handle = h1; Run(Work2); *) Once you're started Work2 on your thread, you're hosed, you can't recover from that, because you already have Work1 in progress. Dispatch-on-Wait really only works for a very limited work pattern : you only Wait on work that you made yourself. None of the work you make yourself can Wait on anything but work they make themselves. Really it only allows you to run tree-structured child work, not general threading. So, one option is use Dispatch-on-Wait but with a rule that if you're on a worker you can only use it for tree-strcutured-child-work. If you need to do more general waits, you still do the coroutine Yield. Or you can try to solve the general problem. In hindsight the solution is obvious, since it's what the serious OS people do : thread pools. You want to have 4 workers running on a 4 core system. You actually have a thread pool of 32 worker threads (or whatever) and try to keep at least 4 running at all times. Any time you Wait on a worker, you first Wake a thread from the pool, then put your thread to sleep. Any time a worker completes a work item it checks how many worker threads are awake, and if it's too many it goes to sleep. This is just a way of using the thread system to do the stack-saving and instruction-pointer saving that you need for "deep yield". The Wait() is essentially doing that deep return back up to the Worker dispatch loop, but it does it by sleeping the current thread and waking another that can start from the dispatch loop. This just magically fixes all the problems. You can wait on arbitrary things, you can deep-wait anywhere, you don't get deadlocks. The only disadvantage is the overhead of the thread switch. If you really want the micro-efficiency, you could still provide a "WaitOnChildWork" that runs the work dispatch loop, which is to be used only for the tree-structured work case. This lets you avoid the thread pool work and is a reasonably common case.
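
A sketch of that pool-backed Wait, with made-up names (wake_one_pool_thread, os_wait, etc. are hypothetical, not the actual Oodle calls) :

void worker_Wait(os_handle h)
{
    // keep the target number of workers running while this thread blocks :
    wake_one_pool_thread();   // wakes a sleeping pool thread at its dispatch loop
    os_wait(h);               // this thread sleeps on the arbitrary condition
    // on wake there may now be too many workers awake ; the dispatch loop
    // checks the awake count after each completed work item and sleeps the extras
}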

6/04/2015

LZNA encode speed addendum

Filling in a gap in the previous post : cbloom rants 05-09-15 - Oodle LZNA

The encode speeds on lzt99 :


single-threaded :

==============

LZNA :

-z5 (Optimal1) :
24,700,820 -> 9,207,584 =  2.982 bpb =  2.683 to 1
encode           : 10.809 seconds, 1.32 b/kc, rate= 2.29 mb/s
decode           : 0.318 seconds, 44.87 b/kc, rate= 77.58 mb/s

-z6 (Optimal2) :
24,700,820 -> 9,154,343 =  2.965 bpb =  2.698 to 1
encode           : 14.727 seconds, 0.97 b/kc, rate= 1.68 mb/s
decode           : 0.313 seconds, 45.68 b/kc, rate= 78.99 mb/s

-z7 (Optimal3) :
24,700,820 -> 9,069,473 =  2.937 bpb =  2.724 to 1
encode           : 20.473 seconds, 0.70 b/kc, rate= 1.21 mb/s
decode           : 0.317 seconds, 45.06 b/kc, rate= 77.92 mb/s

=========

LZMA :

lzmahigh : 24,700,820 -> 9,329,982 =  3.022 bpb =  2.647 to 1
encode           : 11.373 seconds, 1.26 b/kc, rate= 2.17 M/s
decode           : 0.767 seconds, 18.62 b/kc, rate= 32.19 M/s

=========

LZHAM BETTER :

lzham : 24,700,820 ->10,140,761 =  3.284 bpb =  2.436 to 1
encode           : 16.732 seconds, 0.85 b/kc, rate= 1.48 M/s
decode           : 0.242 seconds, 59.09 b/kc, rate= 102.17 M/s

LZHAM UBER :

lzham : 24,700,820 ->10,097,341 =  3.270 bpb =  2.446 to 1
encode           : 18.877 seconds, 0.76 b/kc, rate= 1.31 M/s
decode           : 0.239 seconds, 59.73 b/kc, rate= 103.27 M/s

LZHAM UBER + EXTREME :

lzham : 24,700,820 -> 9,938,002 =  3.219 bpb =  2.485 to 1
encode           : 185.204 seconds, 0.08 b/kc, rate= 133.37 k/s
decode           : 0.245 seconds, 58.28 b/kc, rate= 100.77 M/s

===============

LZNA -z5 threaded :
24,700,820 -> 9,211,090 =  2.983 bpb =  2.682 to 1
encode only      : 8.523 seconds, 1.68 b/kc, rate= 2.90 mb/s
decode only      : 0.325 seconds, 43.96 b/kc, rate= 76.01 mb/s

LZMA threaded :

lzmahigh : 24,700,820 -> 9,329,925 =  3.022 bpb =  2.647 to 1
encode           : 7.991 seconds, 1.79 b/kc, rate= 3.09 M/s
decode           : 0.775 seconds, 18.42 b/kc, rate= 31.85 M/s

LZHAM BETTER threaded :

lzham : 24,700,820 ->10,198,307 =  3.303 bpb =  2.422 to 1
encode           : 7.678 seconds, 1.86 b/kc, rate= 3.22 M/s
decode           : 0.242 seconds, 58.96 b/kc, rate= 101.94 M/s

I incorrectly said in the original version of the LZNA post (now corrected) that "LZHAM UBER is too slow". It's actually the "EXTREME" option that's too slow.

Also, as I noted last time, LZHAM is the best threaded of the three, so even though BETTER is slower than LZNA -z5 or LZMA in single-threaded encode speed, it's faster threaded. (Oodle's encoder threading is very simplistic (chunking) and really needs a larger file to get full parallelism; it doesn't use all cores here; LZHAM is much more micro-threaded so can get good parallelism even on small files).

5/25/2015

05-25-15 - The Anti-Patent Patent Pool

The idea of the Anti-Patent Patent Pool is to destroy the system using the system.

The Anti-Patent Patent Pool is an independent patent licensing organization. (Hence APPP)

One option would be to just allow anyone to use those patents free of charge.

A more aggressive option would be a viral licensing model. (like the GPL, which has completely failed, so hey, maybe not). The idea of the viral licensing model is like this :

Anyone who owns no patents may use any patent in the APPP for free (*) (if you currently own patents, you may donate them to the APPP).

If you wish to own patents, then you must pay a fee to license from the APPP. That fee is used to fund the APPP's activities, the most expensive being legal defense of its own patents, and legal attacks on other patents that it deems to be illegal or too broad.

(* = we'd have to be aggressive about going after companies that make a subsidiary to use APPP patents while still owning patents in the parent corporation)

The tipping point for the APPP would be to get a few patents that are important enough that major players need to either join the APPP (donate all their patents) or pay a large license.

The APPP provides a way for people who want their work to be free to ensure that it is free. In the current system this is hard to do without owning a patent, and owning a patent and enforcing it is hard to do without money.

The APPP pro-actively watches all patent submissions and objects to ones that cover prior art, are obvious and trivial, or excessively broad. It greatly reduces the issuance of junk patents, and fights ones that are mistakenly issued. (the APPP maintains a public list of patents that it believes to be junk, which it will help you fight if you choose to use the covered algorithms). (Obviously some of these activities have to be phased in over time as the APPP gets more money).

The APPP provides a way for small companies and individuals that cannot afford the lawyers to defend their work to be protected. When some evil behemoth tries to stop you from using algorithms that you believe you have a legal right to, rather than fight it yourself, you simply donate your work to the APPP and they fight for you.

Anyone who simply wants to ensure that they can use their own inventions could use the APPP.

Once the APPP has enough money, we would employ a staff of patent writers. They would take idea donations from the groundswell of developers, open-source coders, hobbyists. Describe your idea, the patent writer would make it all formal and go through the whole process. This would let us tap into where the ideas are really happening, all the millions of coders that don't have the time or money to pursue getting patents on their own.

In the current system, if you just want to keep your idea free, you have to constantly keep an eye on all patent submissions to make sure no one is slipping in and patenting it. It's ridiculous. Really the only safe thing to do is to go ahead and patent it yourself and then donate it to the APPP. (the problem is if you let them get the patent, even if it's bogus it may be expensive to fight, and what's worse is it creates a situation where your idea has a nasty asterisk on it - oh, there's this patent that covers this idea, but we believe that patent to be invalid so we claim this idea is still public domain. That's a nasty situation that will scare off lots of users.)

Some previous posts :

cbloom rants 02-10-09 - How to fight patents
cbloom rants 12-07-10 - Patents
cbloom rants 04-27-11 - Things we need
cbloom rants 05-19-11 - Nathan Myhrvold


Some notes :

1. I am not interested in debating whether patents are good or not. I am interested in providing a mechanism for those of us who hate patents to pursue our software and algorithm development in a reasonable way.

2. If you are thinking about the patent or not argument, I encourage you to think not of some ideal theoretical argument, but rather the realities of the situation. I see this on both sides of the fence; those who are pro-patent because it "protects inventors" but choose to ignore the reality of the ridiculous patent system, and those on the anti-patent side who believe patents are evil and they won't touch them, even though that may be the best way to keep free ideas free.

3. I believe part of the problem with the anti-patent movement is that we are all too fixated on details of our idealism. Everybody has slightly different ideas of how it should be, so the movement fractures and can't agree on a unified thrust. We need to compromise. We need to coordinate. We need to just settle on something that is a reasonable solution; perhaps not the ideal that you would want, but some change is better than no change. (of course the other part of the problem is we are mostly selfish and lazy)

4. Basically I think that something like the "defensive patent license" is a good idea as a way to make sure your own inventions stay free. It's the safest way (as opposed to not patenting), and in the long run it's the least work and maintenance. Instead of constantly fighting and keeping aware of attempts to patent your idea, you just patent it yourself, do the work up front and then know it's safe long term. But it doesn't go far enough. Once you have that patent you can use it as a wedge to open up more ideas that should be free. That patent is leverage, against all the other evil. That's where the APPP comes in. Just making your one idea free is not enough, because on the other side there is massive machinery that's constantly trying to patent every trivial idea they can think of.

5. What we need is for the APPP to get enough money so that it can be stuffing a deluge of trivial patents down the patent office's throat, to head off all the crap coming from "Intellectual Ventures" and its many brothers. We need to be getting at least as many patents as them and making them all free under the APPP.


Some links :

en.swpat.org - The Software Patents Wiki
Patent Absurdity - How software patents broke the system
Home defensivepatentlicense
FOSS Patents U.S. patent reform movement lacks strategic leadership, fails to leverage the Internet
PUBPAT Home

5/21/2015

05-21-15 - Software Patents are Fucking Awesome

Awesome. It was inevitable I suppose :

"System and method for compressing data using asymmetric numeral systems with probability distributions"

By these tards :

Storleap

Someone in the UK go over and punch them in the balls.

For those not aware of the background, ANS is probably the biggest invention in data compression in the last 20 years. Its inventor (Jarek Duda) has explicitly tried to publish it openly and make it patent-free, because he's awesome.

In the next 10 years I'm sure we will get patents for "using ANS with string-matching data compression", "using ANS with block mocomp data compression", "using ANS as a replacement for Huffman coding", "deferred summation with ANS", etc. etc. Lots of brilliant inventions like that. Really stimulating for innovation.

(as has happened over and over in data compression, and software in general in the past; hey let's take two obvious previously existing things; LZ string matching + Huffman = patent. LZ + hash table = patent. JPEG + arithmetic = patent. Mocomp + Huffman = patent. etc. etc.)

(often glossed over in the famous Stac-Microsoft suit story is the question of WHAT THE FUCK the LZS patent was supposed to be for? What was the invention there exactly? Doing LZ with a certain fixed bit encoding? Umm, yeah, like everyone does?)

Our patent system is working great. It obviously protects and motivates the real inventors, and doesn't just act as a way for the richest companies to lock in semi-monopolies of technologies they didn't even invent. Nope.

Recently at RAD we've made a few innovations related to ANS that are mostly in the vein of small improvements or clever usages, things that I wouldn't even imagine to patent, but of course that's wrong.

I've also noticed in general a lot of these vaporware companies in the UK. We saw one at RAD a few years ago that claimed to use "multi-dimensional curve interpolation for data compression" or some crackpot nonsense. There was another one that used alternate numeral systems (not ANS, but p-adic or some such) for god knows what. A few years ago there were lots of fractal-image-compression and other fractal-nonsense startups that did ... nothing. (this was before the VC "pivot" ; hey we have a bunch of fractal image patents, let's make a text messaging app)

They generally get some PhD's from Cambridge or whatever to be founders. They bring a bunch of "industry luminaries" on the board. They patent a bunch of nonsense. And then ...

... profit? There's a step missing where they actually ever make anything that works. But I guess sometimes they get bought for their vapor, or they manage to get a bullshit patent that's overly-general on something they didn't actually invent, and then they're golden.

I wonder if these places are getting college-backed "incubation" incentives? Pretty fucking gross up and down and all around. Everyone involved is scum.

(In general, universities getting patents and incubating startups is fucking disgusting. You take public funding and student's tuition, and you use that to lock up ideas for private profit. Fucking rotten, you scum.)


On a more practical note, if anyone knows the process for objecting to a patent in the UK, chime in.

Also, shame on us all for not doing more to fight the system. All our work should be going in the Anti-Patent Patent Pool.


Under the current first-to-file systems, apparently we are supposed to sit around all day reading every patent that's been filed to see if it covers something that we have already invented or is "well known" / public domain / prior art.

It's really a system that's designed around patents. It assumes that all inventions are patented. It doesn't really work well with a prior invention that's just not patented.

Which makes something like the APPP even more important. We need a way to patent all the free ideas just as a way to keep them legally free and not have to worry about all the fuckers who will rush in and try to patent our inventions as soon as we stop looking.

05-21-15 - LZ-Sub

LZ-Sub decoder :

delta_literal = get_sub_literal();

if ( delta_literal != 0 )
{
    *ptr++ = delta_literal + ptr[-lastOffset];
}
else // delta_literal == 0
{
    if ( ! get_offset_flag() )
    {
        *ptr++ = ptr[-lastOffset];
    }
    else if ( get_lastoffset_flag() )
    {
        int lo_index = get_lo_index();
        lastOffset = last_offsets[lo_index];
        // do MTF or whatever using lo_index
        
        *ptr++ = ptr[-lastOffset];
        // extra 0 delta literal implied :
        *ptr++ = ptr[-lastOffset];
    }
    else
    {
        lastOffset = get_offset();
        // put offset in last_offsets set
        
        *ptr++ = ptr[-lastOffset];
        *ptr++ = ptr[-lastOffset];
        // some automatic zero deltas follow for larger offsets
        if ( lastOffset > 128 )
        {
            *ptr++ = ptr[-lastOffset];
            if ( lastOffset > 16384 )
            {
                *ptr++ = ptr[-lastOffset];
            }
        }   
    }

    // each single zero is followed by a zero runlen
    //  (this is just a speed optimization)
    int zrl = get_zero_runlen();
    while(zrl--)
        *ptr++ = ptr[-lastOffset];
}
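
(The encode side of a sub literal is just the mirror image - a sketch; put_sub_literal is the hypothetical counterpart of get_sub_literal :)

delta_literal = *ptr - ptr[-lastOffset];   // == 0 everywhere inside the "match" region
put_sub_literal(delta_literal);
ptr++;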

This is basically LZMA. (sub literals instead of bitwise-LAM, but structurally the same) (also I've reversed the implied structure here; zero delta -> offset flag here, whereas in normal LZ you do offset flag -> zero delta)

This is what a modern LZ is. You're sending deltas from the prediction. The prediction is the source of the match. In the "match" range, the delta is zero.

The thing about modern LZ's (LZMA, etc.) is that the literals-after-match (LAMs) are very important too. These are the deltas after the zero run range. You can't really think of the match as just applying to the zero-run range. It applies until you send the next offset.

You can also of course do a simpler & more general variant :

Generalized-LZ-Sub decoder :


if ( get_offset_flag() )
{
    // also lastoffset LRU and so on not shown here
    lastOffset = get_offset();
}

delta_literal = get_sub_literal();

*ptr++ = delta_literal + ptr[-lastOffset];

Generalized-LZ-Sub just sends deltas from prediction. Matches are a bunch of zeros. I've removed the acceleration of sending zero's as a runlen for simplicity, but you could still do that.

The main difference is that you can send offsets anywhere, not just at certain spots where there are a bunch of zero deltas generated (aka "min match lengths").

This could be useful. For example when coding images/video/sound , there is often not an exact match that gives you a bunch of exact zero deltas, but there might be a very good match that gives you a bunch of small deltas. It would be worth sending that offset to get the small deltas, but normal LZ can't do it.

Generalized-LZ-Sub could also give you literal-before-match. That is, instead of sending the offset at the run of zero deltas, you could send it slightly *before* that, where the deltas are not zero but are small.

(when compressing text, "sub" should be replaced with some kind of smart lexicographical distance; for each character precompute a list of its most likely substitution character in order of probability.)

LZ is a bit like a BWT, but instead of the contexts being inferred by the prefix sort, you transmit them explicitly by sending offsets to prior strings. Weird.

5/16/2015

05-16-15 - Threading Primitive - monitored semaphore

A monitored semaphore allows two-sided waiting :

The consumer side decs the semaphore, and waits on the count being positive.

The producer side incs the semaphore, and can wait on the count being a certain negative value (some number of waiting consumers).

Monitored semaphore solves a specific common problem :

In a worker thread system, you may need to wait on all work being done. This is hard to do in a race-free way using normal primitives. Typical ad-hoc solutions may miss work that is pushed during the wait-for-all-done phase. This is hard to enforce, ugly, and makes bugs. (it's particularly bad when work items may spawn new work items).

I've heard of many ad-hoc hacky ways of dealing with this. There's no need to muck around with that, because there's a simple and efficient way to just get it right.

The monitored semaphore also provides a race-free way to snapshot the state of the work system - how many work items are available, how many workers are sleeping. This allows you to wait on the joint condition - all workers are sleeping AND there is no work available. Any check of those two using separate primitives is likely a race.

The implementation is similar to the fastsemaphore I posted before.

"fastsemaphore" wraps some kind of underlying semaphore which actually provides the OS waits. The underlying semaphore is only used when the count goes negative. When count is positive, pops are done with simple atomic ops to avoid OS calls. eg. we only do an OS call when there's a possibility it will put our thread to sleep or wake a thread.

"fastsemaphore_monitored" uses the same kind of atomic variable wrapping an underlying semaphore, but adds an eventcount for the waiter side to be triggered when enough workers are waiting. (see "who ordered event count?")

Usage is like this :


To push a work item :

push item on your queue (MPMC FIFO or whatever)
fastsemaphore_monitored.post();

To pop a work item :

fastsemaphore_monitored.wait();
pop item from queue

To flush all work :

fastsemaphore_monitored.wait_for_waiters(num_worker_threads);

NOTE : in my implementation, post & wait can be called from any thread, but wait_for_waiters must be called from only one thread. This assumes you either have a "main thread" that does that wait, or that you wrap that call with a mutex.

template <typename t_base_sem>
class fastsemaphore_monitored
{
    atomic<S32> m_state;
    eventcount m_waiters_ec;
    t_base_sem m_sem;

    enum { FSM_COUNT_SHIFT = 8 };
    enum { FSM_COUNT_MASK = 0xFFFFFF00UL };
    enum { FSM_COUNT_MAX = ((U32)FSM_COUNT_MASK>>FSM_COUNT_SHIFT) };
    enum { FSM_WAIT_FOR_SHIFT = 0 };
    enum { FSM_WAIT_FOR_MASK = 0xFF };
    enum { FSM_WAIT_FOR_MAX = (FSM_WAIT_FOR_MASK>>FSM_WAIT_FOR_SHIFT) };

public:
    fastsemaphore_monitored(S32 count = 0)
    :   m_state(count<<FSM_COUNT_SHIFT)
    {
        RL_ASSERT(count >= 0);
    }

    ~fastsemaphore_monitored()
    {
    }

public:

    inline S32 state_fetch_add_count(S32 inc)
    {
        S32 prev = m_state($).fetch_add(inc<<FSM_COUNT_SHIFT,mo_acq_rel);
        S32 count = ( prev >> FSM_COUNT_SHIFT );
        RR_ASSERT( count < 0 || ( (U32)count < (FSM_COUNT_MAX-2) ) );
        return count;
    }

    // warning : wait_for_waiters can only be called from one thread!
    void wait_for_waiters(S32 wait_for_count)
    {
        RL_ASSERT( wait_for_count > 0 && wait_for_count < FSM_WAIT_FOR_MAX );
        
        S32 state = m_state($).load(mo_acquire);
        
        for(;;)
        {
            S32 cur_count = state >> FSM_COUNT_SHIFT;

            if ( (-cur_count) == wait_for_count )
                break; // got it
        
            S32 new_state = (cur_count<<FSM_COUNT_SHIFT) | (wait_for_count << FSM_WAIT_FOR_SHIFT);
            
            S32 ec = m_waiters_ec.prepare_wait();
            
            // double check and signal what we're waiting for :
            if ( ! m_state.compare_exchange_strong(state,new_state,mo_acq_rel) )
                continue; // retry ; state was reloaded
            
            m_waiters_ec.wait(ec);
            
            state = m_state($).load(mo_acquire);
        }
        
        // now turn off the mask :
        
        for(;;)
        {
            S32 new_state = state & FSM_COUNT_MASK;
            if ( state == new_state ) return;
        
            if ( m_state.compare_exchange_strong(state,new_state,mo_acq_rel) )
                return; 
                
            // retry ; state was reloaded
        }
    }

    void post()
    {
        if ( state_fetch_add_count(1) < 0 )
        {
            m_sem.post();
        }
    }

    void wait_no_spin()
    {
        S32 prev_state = m_state($).fetch_add((-1)<<FSM_COUNT_SHIFT,mo_acq_rel);
        S32 prev_count = prev_state>>FSM_COUNT_SHIFT;
        if ( prev_count <= 0 )
        {
            S32 waiters = (-prev_count) + 1;
            RR_ASSERT( waiters >= 1 );
            S32 wait_for = prev_state & FSM_WAIT_FOR_MASK;
            if ( waiters == wait_for )
            {
                RR_ASSERT( wait_for >= 1 );
                m_waiters_ec.notify_all();
            }
            
            m_sem.wait();
        }
    }
    
    void post(S32 n)
    {
        RR_ASSERT( n > 0 );
        for(S32 i=0;i<n;i++)
            post();
    }
       
    bool try_wait()
    {
        // see if we can dec count before preparing the wait
        S32 state = m_state($).load(mo_acquire);
        for(;;)
        {
            if ( state < (1<<FSM_COUNT_SHIFT) ) return false;
            // dec count and leave the rest the same :
            //S32 new_state = ((c-1)<<FSM_COUNT_SHIFT) | (state & FSM_WAIT_FOR_MASK);
            S32 new_state = state - (1<<FSM_COUNT_SHIFT);
            RR_ASSERT( (new_state>>FSM_COUNT_SHIFT) >= 0 );
            if ( m_state($).compare_exchange_strong(state,new_state,mo_acq_rel) )
                return true;
            // state was reloaded
            // loop
            // backoff here optional
        }
    }
     
       
    S32 try_wait_all()
    {
        // see if we can dec count before preparing the wait
        S32 state = m_state($).load(mo_acquire);
        for(;;)
        {
            S32 count = state >> FSM_COUNT_SHIFT;
            if ( count <= 0 ) return 0;
            // swap count to zero and leave the rest the same :
            S32 new_state = state & FSM_WAIT_FOR_MASK;
            if ( m_state($).compare_exchange_strong(state,new_state,mo_acq_rel) )
                return count;
            // state was reloaded
            // loop
            // backoff here optional
        }
    }
           
    void wait()
    {
        int spin_count = rrGetSpinCount();
        while(spin_count--)
        {
            if ( try_wait() ) 
                return;
        }
        
        wait_no_spin();
    }

};

05-16-15 - LZ literals after match

Some vague rambling about LAMs.

LAMs are weird.

LAM0 , the first literal after a match, has the strong exclusion property (assuming maximum match lengths). LAM0 is strictly != lolit. (lolit = literal at last offset).

LAM1, the next literal after that (ie. the second literal after the end of the match), has the exact opposite - a VERY strong prediction of LAM1 == lolit. This prediction continues but weakens as you go to LAM2, LAM3, etc.

In Oodle LZNA (and in many other coders), I send a flag for (LAM == lolit) as a separate event. That means in the actual literal coding path you still have LAM1 != lolit. (the LAM == lolit flag should be context-coded using the distance from the end of the match).

In all cases, even though you know LAM != lolit, lolit is still a very strong predictor for LAM. Most likely LAM is *similar* to lolit.

LAM is both an exclude AND a predictor!

What similar means depends on the file type. In text it means something like vowels stay vowels, punctuation stays punctuation. lolit -> LAM is sort of like substituting one character change. In binary, it often means that they are numerically close. This means that the delta |LAM - lolit| is never zero, but is often small.
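
In code, the delta in question is just the sub literal from the LZ-Sub post above (a sketch) :

U8 lolit = ptr[-lastOffset];       // literal at last offset = the prediction
U8 sub   = (U8)( *ptr - lolit );   // the LAM sub that gets modeled ; != 0 at LAM0, usually small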

One of the interesting things about the delta is that it gives you a data-adaptive stride for a delta filter.

On some files, you can get huge compression wins by running the right delta filter. But the ideal delta distance is data-dependent (*). The sort of magic thing that works out is that the LZ match offsets will naturally pick up the structure & word sizes. In a file of 32-byte structs made of DWORDs, you'll get offsets of 4,8,12,32,etc. The match offset gives you that deduced stride, and forming the LAM sub is just a way of doing a delta with it. On DWORD or F32 data, you tend to get a lot of offset=4, so the LAM sub tends to just be doing a delta from the previous word (note of course this is a bytewise delta, not a proper dword delta).

(* = this is a huge thing that someone needs to work on; automatic detection of delta filters for arbitrary data; deltas could be byte,word,dword, other, from immediate neighbors or from struct/row strides, etc. In a compression world where we are fighting over 1% gains, this can be a 10-20% jump.)

Experimentally we have observed that LAMs are very rapidly changing. They benefit greatly from very quickly adapting models. They like geometric adaptation rates (more recent events are much more important). They cannot be modeled with large contexts (without very sophisticated handling of sparsity and fast adaptation); they need small contexts to get lots of events and statistical density. They seem to benefit greatly from modeling in groups (eg. bitwise or nibblewise or other), so that events on one symbol also affect other probabilities for faster group learning. Many of these observations are similar for post-BWT data. LAM sub literals do seem to behave like post-BWT data to some extent, and similar principles of modeling apply.

So, for example, just coding an 8-bit symbol using the 8-bit lolit as context is a no-go. In theory this would give you full modeling of the effects of lolit on the current symbol. In practice it dilutes your statistics way too much. (in theory you could do some kind of one-count boosts other counts thing (or a secondary coding table ala PPMZ SEE), but in practice that's a mess). Also as noted previously, if you have the full 8-bit context, then whether you code symbol raw or xor or sub is irrelevant, but if you do not have the full context then it does change things.

Related posts :

cbloom rants 08-20-10 - Deobfuscating LZMA
cbloom rants 09-14-10 - A small note on structured data
cbloom rants 03-10-13 - Two LZ Notes
cbloom rants 06-12-14 - Some LZMA Notes
cbloom rants 06-16-14 - Rep0 Exclusion in LZMA-like coders
cbloom rants 03-15-15 - LZ Literal Correlation Images

5/13/2015

05-13-15 - Skewed Pareto Chart

It's hard to see just the decomp speed in the normal Pareto Chart. It gets squished down over at the far-right Y-intercept.

The obvious fix is just to magnify the right side. This is a linear scaling of the data; *1 on the far left, *10 on the far right :

The far-left is still proportional to the compression ratio, the far right is proportional to the decompression speed. The compressor lines are still speedups vs. memcpy, but the memcpy baseline is now sloped.

I'm not really sure how I feel about the warped chart vs unwarped.

The Pareto curves are in fact sigmoids (tanh's).


speedup = 1 / (1/compression_ratio + disk_speed / decompress_speed)

speedup = 1 / (1/compression_ratio + exp( log_disk_speed ) / decompress_speed)

(here they're warped sigmoids because of the magnification; the ones back in the LZNA post are true sigmoids)
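
In code form the plotted quantity is just (a sketch; disk_speed and decompress_speed in the same units) :

double speedup( double compression_ratio, double disk_speed, double decompress_speed )
{
    // time to get 1 raw byte : load 1/ratio compressed bytes from disk, then decompress
    double time = 1.0 / (compression_ratio * disk_speed) + 1.0 / decompress_speed;
    // the memcpy baseline just loads the raw byte
    return (1.0 / disk_speed) / time;
}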

I believe (but have not proven) that a principle of the Pareto Frontier is that the maximum of all compressors should also be a sigmoid.


max_speedup(disk_speed) = MAX{c}( speedup[compressor c](disk_speed) );

One of the nice things about these charts is it makes it easy to see where some compressors are not as good as possible. If we fit a sigmoid over the top of all the curves :

We can easily see that LZHLW and LZNIB are not touching the curve. They're not as good as they should be in space/speed. Even though nothing beats them at the moment (that I know of), they are algorithmically short of what's possible.

There are two things that constrain compressors from being better in a space/speed way. There's 1. what is our current best known algorithm, and then there's 2. what is possible given knowledge of all possible algorithms. #2 is the absolute limit, and eventually it runs into a thermodynamic limit: in a certain amount of cpu time (cpu bit flips, which increase entropy), how much entropy can you take out of a given data stream? You can't beat that limit no matter how good your algorithm is. So our goal in compression is always to just find improvements in the algorithms to edge closer to that eventual limit.

Anyway. I think I know how to fix them, and hopefully they'll be up at the gray line soon.

5/11/2015

05-11-15 - ANS Minimal Flush

A detail for the record :

ANS (TANS or RANS) in the straightforward implementation writes a large minimum number of bytes.

To be concrete I'll consider a particular extremely bad case : 64-bit RANS with 32-bit renormalization.

The standard coder is :


initialize encoder (at end of stream) :

x = 1<<31

renormalize so x stays in the range x >= (1<<31) and x < (1<<63)

flush encoder (at the beginning of the stream) :

output all 8 bytes of x

decoder initializes by reading 8 bytes of x

decoder renormalizes via :

if ( x < (1<<31) )
{
  x <<= 32;  x |= get32(ptr); ptr += 4;
}

decoder terminates and can assert that x == 1<<31

this coder outputs a minimum of 8 bytes, which means it wastes up to 7 bytes on low-entropy data (assuming 1 byte minimum output and that the 1 byte required to byte-align output is not "waste").

In contrast, it's well known how to do minimal flush of arithmetic coders. When the arithmetic coder reaches the end, it has a "low" and "range" specifying an interval. "low" might be 64-bits, but you don't need to output them all, you only need to output enough such that the decoder will get something in the correct interval between "low" and "low+range".

Historically people often did arithmetic coder minimum flush assuming that the decoder would read zero-valued bytes after EOF. I no longer do that. I prefer to do a minimum flush such that decoder will get something in the correct interval no matter what byte follows EOF. This allows the decoder to just read past the end of your buffer with no extra work. (the arithmetic coder reads some # of bytes past EOF because it reads enough to fill "low" with bits, even though the top bits are all that are needed at the end of the stream).

The arithmetic coder minimum flush outputs a number of bytes proportional to log2(1/range) , which is the number of bits of information that are currently held pending in the arithmetic coder state, which is good. The excess is at most 1 byte.

So, to make ANS as clean as arithmetic coding we need a minimal flush. There are two sources of the waste in the normal ANS procedure outlined above.

One is the initial value of x (at the end of the stream). By setting x to (1<<31), the low end of the renormalization interval, we have essentially filled it with bits it has to flush. (the number of pending bits in x is log2(x)). But those bits don't contain anything useful (except a value we can check at the end of decoding). One way to remove that waste is to stuff some other value in the initial state which contains bits you care about. Any value you initialize x with, you get back at the end of decoding, so then those bits aren't "wasted". But it can be annoying to find something useful to put in there, since you don't get that value out until the end of decoding.

The other source of waste is the final flush of x (at the beginning of the stream). This one is obvious - the # of pending bits stored in x at any time is log2(x). Clearly we should be flushing the final value of x in a # of bits proportional to log2(x).

So to do ANS minimal flush, here's one way :


initialize encoder (at end of stream) :

x = 0

renormalize so x stays in the range x < (1<<63)

flush encoder (at the beginning of the stream) :

output # of bytes with bits set in x, and those bytes

decoder initializes by reading variable # of bytes of x

decoder renormalizes via :

if ( x < (1<<31) )
{
  if ( ptr < ptrend )
  {
    x <<= 32;  x |= get32(ptr); ptr += 4;
  }
}

decoder terminates and can assert that x == 0

This ANS variant will output only 1 byte on very-low-entropy data.
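
As a sketch, the variable-length flush of x is just (my own shape for it, not necessarily what Oodle does; the byte order and count encoding are whatever your decoder's init/refill expects) :

#include <stdint.h>

uint8_t * rans_minimal_flush( uint64_t x, uint8_t * to )
{
    int nbytes = 0;
    for ( uint64_t t = x; t; t >>= 8 ) nbytes++;    // # of significant bytes in x
    *to++ = (uint8_t) nbytes;                       // 1 byte minimum, even when x == 0
    for ( int i = nbytes-1; i >= 0; i-- )
        *to++ = (uint8_t)( x >> (i*8) );            // high byte first
    return to;
}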

There are now two phases of the coder. In the beginning of encoding (at the ending of the stream), x is allowed to be way below the renormalization range. During this phase, encoding just puts information into x, and the value of x grows. (note that x can actually stay 0 and never hold any bits if your data consists entirely of the bottom symbol in RANS). Once x grows up into the renormalization interval, you enter the next phase, where bits of x are pushed to the output to keep x in the renormalization interval. Decoding, in the first phase you read bytes from the stream to fill x with bits and keep it in the renormalization interval. Once the decoder read pointer hits the end, you switch to the second phase, and now x is allowed to shrink below the renormalization minimum and you can continue to decode the remaining information held in it.

This appears to add an extra branch to the decoder renormalization, but that can be removed by duplicating your decoder into "not near the end" and "near the end" variants.

The #sigbit output of x at the head is just the right thing and should always be done in all variants of ANS.

Checking ptr vs. ptrend and starting with x = 0 is the variant that I call "minimal ANS".

Unfortunately "minimal ANS" doesn't play well with the ILP multi-state interleaved ANS. To do interleaved ANS like this you would need an EOF marker for each state. That's possible in theory (and could be done compactly in theory) but is a pain in the butt in practice.

5/09/2015

05-09-15 - Oodle LZNA

Oodle 1.45 has a new compressor called LZNA. (LZ-nibbled-ANS)

LZNA is a high compression LZ (usually a bit more than 7z/LZMA) with better decode speed. Around 2.5X faster to decode than LZMA.

Anyone who needs LZMA-level compression and higher decode speeds should consider LZNA. Currently LZNA requires SSE2 to be fast, so it only runs full speed on modern platforms with x86 chips.

LZNA gets its speed from two primary changes. 1. It uses RANS instead of arithmetic coding. 2. It uses nibble-wise coding instead of bit-wise coding, so it can do 4x fewer coding operations in some cases. The magic sauce that makes these possible is Ryg's realization about mixing cumulative probability distributions. That lets you do the bitwise-style shift update of probabilities (keeping a power of two total), but on larger alphabets.
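
To illustrate the idea, here's a sketch of a shift-update nibble model (not LZNA's actual model; a real one also has to keep every symbol's frequency nonzero, and does the update with SIMD) :

#include <stdint.h>

#define NIB_CDF_BITS   15
#define NIB_CDF_TOTAL  (1<<NIB_CDF_BITS)
#define NIB_RATE       5                            // geometric adaptation rate

typedef struct { uint16_t cdf[17]; } NibModel;      // cdf[0] = 0 , cdf[16] = NIB_CDF_TOTAL

void nib_update( NibModel * m, int sym )            // sym in [0,15]
{
    // blend the CDF toward "all mass on sym" ; the total stays exactly a power of two,
    // so the rANS encode/decode divide stays a shift
    for ( int i = 1;       i <= sym; i++ ) m->cdf[i] -= m->cdf[i] >> NIB_RATE;
    for ( int i = sym + 1; i <  16;  i++ ) m->cdf[i] += (uint16_t)( (NIB_CDF_TOTAL - m->cdf[i]) >> NIB_RATE );
}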

LZNA usually beats LZMA compression on binary, slightly worse on text. LZNA is closer to LZHAM decompress speeds.


Some results :


lzt99

LZNA -z6 : 24,700,820 -> 9,154,248 =  2.965 bpb =  2.698 to 1
decode only      : 0.327 seconds, 43.75 b/kc, rate= 75.65 mb/s

LZMA : 24,700,820 -> 9,329,925 =  3.021 bpb =  2.647 to 1
decode           : 0.838 seconds, 58.67 clocks, rate= 29.47 M/s

LZHAM : 24,700,820 ->10,140,761 =  3.284 bpb =  2.435 to 1
decode           : 0.264 seconds, 18.44 clocks, rate= 93.74 M/s

(note on settings : LZHAM is run at BETTER because UBER is too slow. LZHAM BETTER is comparable to Oodle's -z6 ; UBER is similar to my -z7. LZMA is run at the best compression setting I can find; -m9 and lc=0,lp=2,pb=2 for binary data; with LZHAM I don't see a way to set the context bits. This is the new LZHAM 1.0, slightly different than my previous tests of LZHAM. All 64-bit, big dictionaries.).


baby_robot_shell

LZNA -z6 : 58,788,904 ->12,933,907 =  1.760 bpb =  4.545 to 1
decode only      : 0.677 seconds, 50.22 b/kc, rate= 86.84 mb/s

LZMA : 58,788,904 ->13,525,659 =  1.840 bpb =  4.346 to 1
decode           : 1.384 seconds, 40.70 clocks, rate= 42.49 M/s

LZHAM : 58,788,904 ->15,594,877 =  2.122 bpb =  3.769 to 1
decode           : 0.582 seconds, 17.12 clocks, rate= 100.97 M/s

I'm not showing encode speeds because they're all running different amounts of threading. It would be complicated to show fairly. LZHAM is the most aggressively threaded, and also the slowest without threading.


My "game testset" total sizes, from most compression to least :


Oodle LZNA -z8 :            57,176,229
Oodle LZNA -z5 :            58,318,469

LZMA -mx9 d26:lc0:lp2:pb3 : 58,884,562
LZMA -mx9 :                 59,987,629

LZHAM -mx9 :                62,621,098

Oodle LZHLW -z6 :           68,199,739

zip -9 :                    88,436,013

raw :                       167,495,105


Here's the new Pareto chart for Oodle. See the previous post on these charts.

This is load+decomp speedup relative to memcpy : (lzt99)

The left-side Y-intercept is the compression ratio. The right-side Y-intercept is the decompression speed. In between you can see the zones where each compressor is the best tradeoff.

With LZMA and LZHAM : (changed colors)

lzt99 is bad for LZHAM, perhaps because it's heterogeneous and LZHAM assumes pretty stable data. (LZHAM usually beats LZHLW for compression ratio). Here's a different example :

load+decomp speedup relative to memcpy : (baby_robot_shell)

3/25/2015

03-25-15 - My Chameleon

I did my own implementation of the Chameleon compression algorithm. (the original distribution is via the density project)

This is the core of Chameleon's encoder :

    cur = *fm32++; h = CHAMELEON_HASH(cur); flags <<= 1;            // next source dword, its hash, make room for a new flag bit
    if ( c->hash[h] == cur ) { flags ++; *to16++ = (uint16) h; }    // table hit : flag=1, send just the 16-bit hash index
    else { c->hash[h] = cur; *((uint32 *)to16) = cur; to16 += 2; }  // miss : flag=0, update the table, send the raw 32 bits

This is the decoder :

    if ( (int16)flags < 0 ) { cur = c->hash[ *fm16++ ]; }           // top flag bit set : look the dword up by its 16-bit hash index
    else { cur = *((const uint32 *)fm16); fm16 += 2; c->hash[ CHAMELEON_HASH(cur) ] = cur; }  // raw dword : read it and update the table
    flags <<= 1; *to32++ = cur;                                     // consume the flag bit, output the dword
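
(CHAMELEON_HASH is a 32-bit-to-16-bit hash used as the table index. As a purely hypothetical example - the constant and shift in my header and in Density may differ - a Knuth-style multiplicative hash would look like :)

// hypothetical example only, not necessarily the macro in Chameleon.h
#define CHAMELEON_HASH_BITS 16
#define CHAMELEON_HASH(u32) ( (uint16) ( ((u32) * 2654435761u) >> (32 - CHAMELEON_HASH_BITS) ) )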

I thought it deserved a super-simple STB-style header-only implementation :

Chameleon.h

My Chameleon.h is not portable or safe or any of that jizzle. Maybe it will be someday. (Update : now builds on GCC & clang. Tested on PS4. Still not Endian-invariant.)


// Usage :

#define CHAMELEON_IMPL
#include "Chameleon.h"

Chameleon c;

Chameleon_Reset(&c);

size_t comp_buf_size = CHAMELEON_MAXIMUM_OUTPUT_SIZE(in_size);

void * comp_buf = malloc(comp_buf_size);

size_t comp_len = Chameleon_Encode(&c, comp_buf, in_buf, in_size );

Chameleon_Reset(&c);

void * out_buf = malloc(in_size);

Chameleon_Decode(&c, out_buf, in_size, comp_buf );

int cmp = memcmp(in_buf,out_buf,in_size);
assert( cmp == 0 );


ADD : Chameleon2 SIMD prototype now posted : (NOTE : this is not good, do not use)

Chameleon2.h - experimental SIMD wide Chameleon
both Chameleons in a zip

The SIMD encoder is not fast. Even on SSE4 it only barely beats scalar Chameleon. So this is a dead end. Maybe some day when we get fast hardware scatter/gather it will be good (*).

(* = though use of hardware scatter here is always going to be treacherous, because hashes may be repeated, and the order in which collisions resolve must be consistent)

03-25-15 - Density - Chameleon

Casey pointed me at Density .

Density contains 3 algorithms, from super fast to slower : Chameleon, Cheetah, Lion.

They all attain speed primarily by working on U32 quanta of input, rather than bytes. They're sort of LZPish type things that work on U32's, which is a reasonable way to get speed in this modern world. (Cheetah and Lion are really similar to the old LZP1/LZP2 with bit flags for different predictors, or to some of the LZRW's that output forward hashes; the main difference is working on U32 quanta and no match lengths)

The compression ratio is very poor. The highest compression option (Lion) is around LZ4-fast territory, not as good as LZ4-hc. But, are they Pareto? Is it a good space-speed tradeoff?

Well, I can't build Density (I use MSVC) so I can't test their implementation for space-speed.

Compressed sizes :


lzt99 :
uncompressed       24,700,820

density :
c0 Chameleon       19,530,262
c1 Cheetah         17,482,048
c2 Lion            16,627,513

lz4 -1             16,193,125
lz4 -9             14,825,016

Oodle -1 (LZB)     16,944,829
Oodle -2 (LZB)     16,409,913

Oodle LZNIB        12,375,347

(lz4 -9 is not competitive for encode time, it's just to show the level of compression you could get at very fast decode speeds if you don't care about encode time ; LZNIB is an even more extreme case of the same thing - slow to encode, but decode time comparable to Chameleon).

To check speed I did my own implementation of Chameleon (which I believe to be faster than Density's, so it's a fair test). See the next post to get my implementation.

The results are :

comp_len = 19492042
Chameleon_Encode_Time : seconds:0.0274 ticks per: 1.919 mbps : 901.12
Chameleon_Decode_Time : seconds:0.0293 ticks per: 2.050 mbps : 843.31

round trip time = 0.05670

I get a somewhat smaller file size than Density's version, for some unknown reason.

Let's compare to Oodle's LZB (an LZ4ish) :


Oodle -1 :

24,700,820 ->16,944,829 =  5.488 bpb =  1.458 to 1
encode           : 0.061 seconds, 232.40 b/kc, rate= 401.85 mb/s
decode           : 0.013 seconds, 1071.15 b/kc, rate= 1852.17 mb/s

round trip time = 0.074

Oodle -2 :

24,700,820 ->16,409,913 =  5.315 bpb =  1.505 to 1 
encode           : 0.070 seconds, 203.89 b/kc, rate= 352.55 mb/s
decode           : 0.014 seconds, 1008.76 b/kc, rate= 1744.34 mb/s

round trip time = 0.084

lzt99 is a collection of typical game data files.

We can test on enwik8 (text/html) too :


Chameleon :

enwik8 :
Chameleon_Encode_Time : seconds:0.1077 ticks per: 1.862 mbps : 928.36
Chameleon_Decode_Time : seconds:0.0676 ticks per: 1.169 mbps : 1479.08
comp_len = 61524068

Oodle -1 :

enwik8 : 
100,000,000 ->57,267,299 =  4.581 bpb =  1.746 to 1 
encode           : 0.481 seconds, 120.17 b/kc, rate= 207.79 mb/s
decode           : 0.083 seconds, 697.58 b/kc, rate= 1206.19 mb/s

here Chameleon is much more compelling. It's competitive for size & decode speed, not just encode speed.

Commentary :

Any time you're storing files on disk, this is not the right algorithm. You want something more asymmetric (slow compress, fast decompress).

I'm not sure if Cheetah and Lion are Pareto for round trip time. I'd have to test speed on a wider set of sample data.

When do you actually want a compressor that's this fast and gets so little compression? I'm not sure.

3/15/2015

03-15-15 - LZ Literal Correlation Images

I made some pictures.

I'm showing literal correlation by making an image of the histogram. That is, given an 8-bit predictor, you tally each event :


int histo[256][256]

histo[predicted][value] ++

then I scale the histo so the max is at 255 and make it into an image.

Most of the images that I show are in log scale, otherwise all the detail is too dark, dominated by a few peaks. I also sometimes remove the predicted=value line, so that the off axis detail is more visible.
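
Roughly, the image generation is something like this (a sketch of the process, not the exact code; my actual log scaling maps an octave to a fixed number of pixel levels, this version just normalizes the max to 255) :

#include <math.h>

void histo_to_image( const int histo[256][256], unsigned char image[256][256],
                     int log_scale, int remove_diagonal )
{
    // y = prediction , x = current value
    double maxv = 0;
    for ( int y = 0; y < 256; y++ )
    for ( int x = 0; x < 256; x++ )
    {
        if ( remove_diagonal && x == y ) { image[y][x] = 0; continue; }
        double v = log_scale ? log( 1.0 + histo[y][x] ) : (double) histo[y][x];
        if ( v > maxv ) maxv = v;
    }
    for ( int y = 0; y < 256; y++ )
    for ( int x = 0; x < 256; x++ )
    {
        if ( remove_diagonal && x == y ) continue;
        double v = log_scale ? log( 1.0 + histo[y][x] ) : (double) histo[y][x];
        image[y][x] = (unsigned char) ( maxv > 0 ? v * 255.0 / maxv + 0.5 : 0 );
    }
}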

Let's stop a moment and look at what we can see in these images.

This is a literal histo of "lzt99" , using predicted = lolit (last offset literal; the rep0len1 literal). This is in log scale, with the diagonal removed :

In my images y = prediction and x = current value. x=0, y=0 is in the upper left instead of the lower left where it should be because fucking bitmaps are annoying (everyone is fired, left handed coordinate systems my ass).

The order-0 probability is the vertical line sum for each x. So any vertical lines indicate just strong order-0 correlations.

Most files are a mix of different probability sources, which makes these images look like a sum of different contributing factors.

The most obvious factor here is the diagonal line at x=y. That's just a strong value=predicted generator.

The red blob is a cluster of events around x and y = 0. This indicates a probability event that's related to |x+y| being small. That is, the sum, or length, or something tends to be small.

The green shows a square of probabilities. A square indicates that for a certain range of y's, all x's are equally likely. In this case the range is 48-58. So if y is in 48-58, then any x in 48-58 is equally likely.

There are similar weaker squarish patterns all along the diagonal. Surprisingly these are *not* actually at the binary 8/16 points you might expect. They're actually in steps of 6 & 10.

The blue blobs are at x/y = 64/192. There's a funny very specific strong asymmetric pattern in these. When y = 191 , it predicts x=63,62,61,60 - but NOT 64,65,66. Then at y=192, predict x=64,65,66, but not 63.

In addition to the blue blobs, there are weak dots at all the 32 multiples. This indicates that when y= any multiple of 32, there's a generating event for x = any multiple of 32. (Note that in log scale, these dots look more important than they really are.). There are also some weak order-0 generators at x=32 and so on.

There's some just general light gray background - that's just uncompressible random data (as seen by this model anyway).


Here's a bunch of images : (click for hi res)

[image grid : one row per file/context, with raw / sub / xor columns, each shown in log / logND / linND variants :]
Fez LO
Fez O1
lzt24 LO
lzt24 O1
lzt99 LO
lzt99 O1
enwik7 LO
enwik7 O1

details :

LO means y axis (predictor) is last-offset-literal , in an LZ match parse. Only the literals coded by the LZ are shown.

O1 means y axis is order1 (previous byte). I didn't generate the O1 from the LZ match parse, so it's showing *all* bytes in the file, not just the literals from the LZ parse.

"log" is just log-scale of the histo. An octave (halving of probability) is 16 pixel levels.

"logND" is log without the x=y diagonal. An octave is 32 pixel levels.

"linND" is linear, without the x=y diagonal.

"raw" means the x axis is just the value. "xor" means the x axis is value^predicted. "sub" means the x axis is (value-predicted+127).

Note that raw/xor/sub are just permutations of the values along a horizontal axis, they don't change the values.
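
ie. the x coordinate is computed as (with value & predicted as bytes) :

x_raw = value;
x_xor = value ^ predicted;
x_sub = (uint8) ( value - predicted + 127 );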


Discussion :

The goal of a de-correlating transform is to create vertical lines. Vertical lines are order-0 probability peaks and can be coded without using the predictor as context at all.

If you use an order-0 coder, then any detail which is not in a vertical line is an opportunity for compression that you are passing up.

"Fez" is obvious pure delta data. "sub" is almost a perfect model for it.

"lzt24" has two (three?) primary probability sources. One is almost pure "sub" x is near y data.

The other sources, however, do not do very well under sub. They are pure order-0 peaks at x=64 and 192 (vertical lines in the "raw" image), and also those strange blobs of correlation at (x/y = 64 and 192). The problem is "sub" turns those vertical lines into diagonal lines, effectively smearing them all over the probability spectrum.

A compact but full model for the lzt24 literals would be like this :


is y (predictor) near 64 or 192 ?

if so -> strongly predict x = 64 or 192

else -> predict x = y or x = 64 or 192 (weaker)

lzt99, being more heterogeneous, has various sources.

"xor" takes squares to squares. This works pretty well on text.

In general, the LO correlation is easier to model than O1.

The lzt99 O1 histo in particular has lots of funny stuff. There are bunch of non-diagonal lines, indicating things like x=y/4 patterns, which is odd.
