Oodle 1.40 got the new LZA compressor. LZA is a very high compression arithmetic-coded LZ. The goal of LZA is as much compression as possible while retaining somewhat reasonable (or at least tolerable) decode speeds. My belief is that LZA should be used for internet distribution, but not for runtime loading.
The charts :
compression ratio : (raw/comp ratio; higher is better)
lzmamax  : 2.665 to 1
lzmafast : 2.314 to 1
zlib9    : 1.883 to 1
zlib5    : 1.871 to 1
lz4hc    : 1.667 to 1
lz4fast  : 1.464 to 1
encode speed : (mb/s)
lzmamax  : 5.55
lzmafast : 11.08
zlib9    : 4.86
zlib5    : 25.23
lz4hc    : 32.32
lz4fast  : 606.37
decode speed : (mb/s)
lzmamax  : 42.17
lzmafast : 40.22
zlib9    : 308.93
zlib5    : 302.53
lz4hc    : 2363.75
lz4fast  : 2288.58
While working on LZA I found some encoder speed wins that I ported back to LZHLW (mainly in the Fast and VeryFast modes). A big one is an early out for last offsets: when I get a last-offset match longer than N, I just take it and don't even look for non-last-offset matches. This is done in the non-Optimal modes, and surprisingly it hurts compression almost not at all while helping speed a lot.
Four of the compressors are now in pretty good shape (LZA,LZHLW,LZNIB, and LZB16). There are a few minor issues to fix someday (someday = never unless the need arises) :
LZA decoder should be a little faster (currently lags LZMA a tiny bit).
LZA Optimal1 would be better with a semi-greedy match finder like MMC (LZMA is much faster to encode than me at the same compression level; perhaps a different optimal parse scheme is needed too).
LZA Optimal2 should seed with multi-parse.
LZHLW Optimal could be faster.
LZNIB Normal needs much better match selection heuristics; the ones I have are really just not right.
LZNIB Optimal should be faster; needs a better way to do threshold-match-finding.
LZB16 Optimal should be faster; needs a better 64k-sliding-window match finder.
The LZH and LZBLW compressors are a bit neglected, and you can see they still have some anomalies in the space-speed tradeoff curve; for example, the Normal encode speed for LZBLW is so bad that you may as well just use Optimal. They're put aside until there's a reason to fix them.
If another game developer tells me that "zlib is a great compromise and you probably can't beat it by much"
I'm going to murder them. For the record :
zlib -9 :
4.86 MB/sec to encode
308.93 MB/sec to decode
1.883 to 1 compression
LZHLW Optimal1 :
4.67 MB/sec to encode
391.28 MB/sec to decode
2.352 to 1 compression
Come on! The encoder is slow, the decoder is slow, and it compresses poorly.
LZMA in very high compression settings is a good tradeoff. In its low compression fast modes, it's very poor. zlib has the same flaw - they just don't have good encoders for fast compression modes.
LZ4 I have no issues with; in its designed zone it offers excellent tradeoffs.
In most cases the encoder implementations are :

VeryFast :
  cache table match finder

Fast :
  cache table match finder
  hash with ways
  very simple heuristic decisions

Normal :
  varies a lot for the different compressors
  generally something like a hash-link match finder
  or a cache table with more ways
  more lazy eval
  more careful "is match better" heuristics

Optimal :
  exact match finder (SuffixTrie or similar)
  cost-based match decision, not heuristic
  backward exact parse of LZB16
  all others have "last offset" so require an approximate forward parse
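To illustrate the contrast between cost-based match decisions and the simple heuristics of the faster modes, here's a toy sketch. This is my own model, not Oodle's; the bit-price function is entirely made up for the example:

```c
#include <stddef.h>

/* Toy contrast between heuristic and cost-based match selection (my
   model, not Oodle's). The heuristic just takes the longest match;
   the cost-based chooser prices each candidate in estimated bits and
   keeps the cheapest per byte covered. */

typedef struct { size_t len; size_t offset; } Match;

/* crude bit-price model: roughly log2(offset) bits for the offset */
static unsigned offset_bits(size_t offset)
{
    unsigned b = 1;
    while (offset > 1) { offset >>= 1; b++; }
    return b;
}

static Match pick_heuristic(const Match *c, int n)
{
    Match best = c[0];
    for (int i = 1; i < n; i++)
        if (c[i].len > best.len) best = c[i];
    return best;
}

static Match pick_cost_based(const Match *c, int n)
{
    /* assume a flat 8-bit match header plus the offset price */
    Match best = c[0];
    double best_cpb = (8.0 + offset_bits(c[0].offset)) / (double)c[0].len;
    for (int i = 1; i < n; i++) {
        double cpb = (8.0 + offset_bits(c[i].offset)) / (double)c[i].len;
        if (cpb < best_cpb) { best_cpb = cpb; best = c[i]; }
    }
    return best;
}
```

Given a long match at a huge offset and a slightly shorter match at a tiny offset, the heuristic takes the long one while the cost-based chooser can prefer the cheap one; a real optimal parser does this with actual codec prices across the whole parse.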
I'm mostly ripping out my Hash->Link match finders and replacing them with N-way cache tables. While the cache table is slightly worse for compression, it's a big speed win, which makes it better on the space-speed tradeoff spectrum.
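A minimal sketch of what an N-way cache table match finder can look like; this is my own toy code and naming, not Oodle's. Each hash bucket caches a few recent positions, lookup checks every way, and insertion evicts round-robin, so unlike hash-link chains there is nothing to walk and the per-position cost is strictly bounded:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Toy N-way cache-table match finder (my naming, not Oodle's).
   Bounded cost per position, at the price of sometimes forgetting a
   better match that a full hash-link chain would have kept. */

#define HASH_BITS 12
#define WAYS 4

typedef struct {
    uint32_t pos[1 << HASH_BITS][WAYS]; /* cached positions per bucket */
    uint8_t  next_way[1 << HASH_BITS];  /* round-robin eviction cursor */
} CacheTable;

/* hash the next 4 bytes; Knuth-style multiplicative hash */
static uint32_t hash4(const uint8_t *p)
{
    uint32_t x;
    memcpy(&x, p, 4);
    return (x * 2654435761u) >> (32 - HASH_BITS);
}

/* Return the longest match among the cached ways, then insert the
   current position.  Position 0 is sacrificed as the "empty" marker
   in this sketch. */
static size_t find_and_insert(CacheTable *t, const uint8_t *base,
                              uint32_t pos, uint32_t end,
                              uint32_t *best_off)
{
    uint32_t h = hash4(base + pos);
    size_t best_len = 0;
    *best_off = 0;
    for (int w = 0; w < WAYS; w++) {
        uint32_t cand = t->pos[h][w];
        if (cand == 0 || cand >= pos) continue;
        size_t len = 0;
        while (pos + len < end && base[cand + len] == base[pos + len])
            len++;
        if (len > best_len) { best_len = len; *best_off = pos - cand; }
    }
    t->pos[h][t->next_way[h]] = pos;
    t->next_way[h] = (uint8_t)((t->next_way[h] + 1) % WAYS);
    return best_len;
}
```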
I don't have a good solution for windowed optimal parse match finding (such as LZB16-Optimal). I'm currently using overlapped suffix arrays, but that's not awesome. Sliding window SuffixTrie is an engineering nightmare but would probably be good for that. MMC is a pretty good compromise in practice, though it's not exact and does have degenerate case breakdowns.
LZB16's encode speed is very sensitive to the hash table size.
24,700,820 ->16,944,823 = 5.488 bpb = 1.458 to 1
encode : 0.045 seconds, 161.75 b/kc, rate= 550.51 mb/s
decode : 0.009 seconds, 849.04 b/kc, rate= 2889.66 mb/s
24,700,820 ->16,682,108 = 5.403 bpb = 1.481 to 1
encode : 0.049 seconds, 148.08 b/kc, rate= 503.97 mb/s
decode : 0.009 seconds, 827.85 b/kc, rate= 2817.56 mb/s
24,700,820 ->16,491,675 = 5.341 bpb = 1.498 to 1
encode : 0.055 seconds, 133.07 b/kc, rate= 452.89 mb/s
decode : 0.009 seconds, 812.73 b/kc, rate= 2766.10 mb/s
24,700,820 ->16,409,957 = 5.315 bpb = 1.505 to 1
encode : 0.064 seconds, 113.23 b/kc, rate= 385.37 mb/s
decode : 0.009 seconds, 802.46 b/kc, rate= 2731.13 mb/s
If you accidentally set it too big you get a huge drop-off in speed.
(The charts above show -h13; -h12 is more comparable to lz4fast, which was built with HASH_LOG=12.)
I stole an idea from LZ4 that helped the encoder speed a lot. (lz4fast is very good!)
Instead of doing the basic loop like :
if ( match )
instead do :
while( ! match )
This lets you make a tight loop just for outputting literals. It makes it clearer to you as a programmer what's happening in that loop, so you can save work and simplify things. It winds up being a lot faster.
(I've been doing the same thing in my decoders forever, but hadn't done it in the encoders.)
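The restructuring can be sketched like this; it's a toy illustration of the control flow, not Oodle's or LZ4's actual encoder, and the match test is a stand-in:

```c
#include <stddef.h>
#include <stdint.h>

/* Toy illustration of the loop restructuring. "try_match" stands in
   for whatever match finder is in use; here it just detects a 4-byte
   repeat at offset 4 so the control flow can be shown. */

static int try_match(const uint8_t *p, const uint8_t *start)
{
    return p - start >= 4 &&
           p[0] == p[-4] && p[1] == p[-3] &&
           p[2] == p[-2] && p[3] == p[-1];
}

/* Returns the number of (literal-run, match) pairs found.  The point
   is the shape: a tight inner loop that does nothing but scan
   literals, instead of testing "if ( match )" once per byte inside a
   loop that also handles match emission. */
static int encode_shape(const uint8_t *buf, size_t size)
{
    if (size < 5) return 0;
    const uint8_t *p = buf, *end = buf + size - 4;
    int pairs = 0;
    while (p < end) {
        const uint8_t *lit_start = p;
        /* tight literal loop: scan forward until a match is found */
        while (p < end && !try_match(p, buf))
            p++;
        if (p >= end) break;
        (void)lit_start;  /* a real encoder emits p - lit_start literals here */
        p += 4;           /* ... then emits the match and skips over it */
        pairs++;
    }
    return pairs;
}
```

Because the inner loop touches only the literal-scanning state, the compiler can keep it in registers and you can specialize it (unrolling, wide copies) without worrying about the match-emission path.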
My LZB16 is very slightly more complex to encode than LZ4, because I do some things that let me have a faster decoder. For example my normal matches are all no-overlap, and I hide the overlap matches in the excess-match-length branch.
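Why no-overlap matches help the decoder can be shown with a toy copy routine; again my own sketch, not Oodle's code:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Toy sketch of why no-overlap matches make the decoder faster (not
   Oodle's code).  When offset >= len the source and destination
   ranges are disjoint, so the common path can use a fast bulk copy;
   an overlapping match (offset < len) must propagate bytes in order,
   so it is pushed into the rare "excess" branch. */

static void copy_match(uint8_t *dst, size_t offset, size_t len)
{
    const uint8_t *src = dst - offset;
    if (offset >= len) {
        /* common case: ranges disjoint, bulk copy is safe */
        memcpy(dst, src, len);
    } else {
        /* rare overlap case: byte-by-byte repeats the pattern */
        for (size_t i = 0; i < len; i++)
            dst[i] = src[i];
    }
}
```

If the format guarantees that normal matches never overlap, the decoder's hot path is just the memcpy-style case with no per-match branch on overlap; the overlap handling rides along in the excess-length branch that is already rare.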