"Rep matches" are a little weird. They help a lot, but the reason why they help depends on the file you are compressing. (rep match = repeat match, gap match, aka "last offset")
On text files, they work as interrupted matches, or "gap matches". They let you generate something like :
stand on the floor
stand in the door
[stand ][i][n the ][d][oor]
[off 19, len 6][1 lit][rep len 6][1 lit][off 18, len 3]
that is, you have a long match of [stand on the ] but with a gap at the 'o'.
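For concreteness, here is a minimal sketch of what a decoder does with a rep match, assuming a single last offset (the function and names here are hypothetical, not any particular codec's format) :

#include <stddef.h>

static size_t last_offset = 1;

/* Copy a match into the output at *pos. A "rep match" reuses
   last_offset instead of reading a new offset from the stream,
   which is what makes the gap coding above so cheap. */
void decode_match(unsigned char *out, size_t *pos, int is_rep,
                  size_t new_offset, size_t len)
{
    size_t offset = is_rep ? last_offset : new_offset;
    /* byte-by-byte so overlapping matches (offset < len) work */
    for (size_t i = 0; i < len; i++)
        out[*pos + i] = out[*pos + i - offset];
    *pos += len;
    last_offset = offset;
}

In the example above, the middle [rep len 6] reuses offset 19 from the first match without spending any bits to send it again.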
Now, something I observed was that more than one last offset continues to help. On text the main benefit from having two last offsets is that it lets
you use a match for the gap. When the gap is not just one character but a word, you might want to use a match to put that word in, in which case
the continuation after the gap is no longer the first last-offset, it's the second one. eg.
cope
how to work with animals
how to cope with animals
[how to ][cope][ with animals]
[off 25 ][off 37][off 25 (rep2)]
You could imagine alternative coding structures that don't require keeping some number of "last offsets". (oddly, the last offset maintenance can be a large part of decode time, because maintaining an MTF list is something that CPUs do incredibly poorly). For example you could code with a scheme where you just send the entire long match, and then any time you send a long match you send a flag for "are there any gaps", and if so you then code some gaps inside the match.
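Here is roughly what that maintenance looks like on the decoder side, as a sketch assuming three rep slots with a plain move-to-front update (LZMA-style coders do something similar; the names are mine) :

#include <stddef.h>

#define NUM_REPS 3

static size_t reps[NUM_REPS] = { 1, 2, 3 };

/* rep_index >= 0 : a rep match, reuse reps[rep_index] and move it
   to the front. rep_index < 0 : a normal match, push new_offset in
   at the front and drop the oldest slot. The serial, dependent
   shuffle is the MTF cost mentioned above. */
size_t use_offset(int rep_index, size_t new_offset)
{
    size_t offset = (rep_index >= 0) ? reps[rep_index] : new_offset;
    int top = (rep_index >= 0) ? rep_index : (NUM_REPS - 1);
    for (int i = top; i > 0; i--)
        reps[i] = reps[i - 1];
    reps[0] = offset;
    return offset;
}

In the "cope" example, the final [ with animals] match is a rep with rep_index = 1 : offset 25 is the second last offset (the post's "rep2") because the cope match (offset 37) just moved to the front.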
The funny thing is, on binary files "last offsets" do something else which can be more important. They become the most common offsets. In particular, on highly structured binary data, they will generally be some factor of the structure size. eg. on a file that has a struct size of 36, and that struct has dwords and such in it, the last offsets will generally be things like 4,8,16,36, or 72. They provide a sort of dictionary of the most common offsets so that you can code those smaller. You are still getting the gap-match effect, but the common-offset benefit is much bigger on these files.
(aside : word-replacing transform on text really helps LZ (and everything) by removing the length variance of tokens. In particular for LZ77, word length variance breaks rep matches. There are lots of common occurrences of a single replaced word in a phrase, like : "I want some stuff" -> "I want the stuff". You can't get a rep match here on [ stuff] because the offset changed, because the substituted word was a different length. If you do WRT first, then gap matches get these.)
Note 2 : on offset structure.
I've had it in the back of my head for quite some time now to do an LZ compressor specifically designed for structured data.
One idea I had was to use "2d" match offsets. That is, send a {dx,dy} where dx is within the record and dy is different records. Like imagine the data is in a table, dy is going back rows, dx is an offset on the row. You probably want to mod dx around the row so its range is always the same, and special case dy=0 (matches within your own record).
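A sketch of that conversion (record_size here is an assumed, known property of the data, and the helper names are mine); note that the division form works for any record size, not just powers of two :

#include <stddef.h>

typedef struct { size_t dy, dx; } Offset2D;

/* dy = how many whole records back, dx = offset modded around the
   record so its range is always [0, record_size). dy == 0 means a
   match within your own record. */
Offset2D offset_to_2d(size_t offset, size_t record_size)
{
    Offset2D o;
    o.dy = offset / record_size;
    o.dx = offset % record_size;
    return o;
}

size_t offset_from_2d(Offset2D o, size_t record_size)
{
    return o.dy * record_size + o.dx;
}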
It occurred to me that the standard way of sending LZ offsets these days actually already does this. The normal way that good LZs send offsets is to break the offset into low and high parts :
low  = offset & 0x7F;
high = offset >> 7;
or similar, then you send "high" using some kind of "NoSB" scheme (Number of Significant Bits is entropy coded, and the bits themselves
are sent raw), and you send "low" with an order-0 entropy coder.
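Put together, the scheme looks something like this (entropy_encode and put_raw_bits are stand-ins for whatever order-0 coder and raw-bit output you have; a sketch, not a real bitstream) :

#include <stdint.h>

void entropy_encode(int context, int symbol);  /* hypothetical order-0 adaptive coder */
void put_raw_bits(uint32_t bits, int count);   /* hypothetical raw bit output */

void send_offset(uint32_t offset)
{
    uint32_t low  = offset & 0x7F;  /* position within a 128-wide "row" */
    uint32_t high = offset >> 7;    /* how many "rows" back */

    /* NoSB : entropy code the number of significant bits of high,
       then send the bits below the implicit top 1 bit raw. */
    int nbits = 0;
    for (uint32_t h = high; h; h >>= 1)
        nbits++;
    entropy_encode(0, nbits);
    if (nbits > 1)
        put_raw_bits(high & ((1u << (nbits - 1)) - 1), nbits - 1);

    /* order-0 entropy code the low 7 bits */
    entropy_encode(1, (int)low);
}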
But this is just a 2d structured record offset for a particular power-of-2 record size. It's why when I've experimented with 2d offsets I haven't seen huge wins - because I'm already doing it.
There is some win to be had from custom 2d-offsets (vs. the standard low/high bits scheme) when the record size is not a power of two.
Hey, I never thought about sending lower bits compressed using "entropy-0"; I thought they were all noise.
Oh yeah, helps a lot on binary, not so much on text.
I used to mainly test on text in the long long ago so didn't see it myself either until I started looking into why LZX and Quantum were beating me.