Background : 64-bit mode, a 12-bit lookahead table, and a 12-bit codelen limit, so there's no out-of-table case to handle.
The first option is conditional bit buffer refill : 32 bits refilled at a time, aligned refill.
You always have >= 32 bits in the buffer, so you can do two decode ops per refill :
loop
{
    uint64 peek; int cl,sym;

    // bits are kept at the top of decode_bits; peek the next CODELEN_LIMIT bits :
    peek = decode_bits >> (64 - CODELEN_LIMIT);
    cl = codelens[peek];
    sym = symbols[peek];
    decode_bits <<= cl; thirtytwo_minus_decode_bitcount += cl;
    *decodeptr++ = (uint8)sym;

    peek = decode_bits >> (64 - CODELEN_LIMIT);
    cl = codelens[peek];
    sym = symbols[peek];
    decode_bits <<= cl; thirtytwo_minus_decode_bitcount += cl;
    *decodeptr++ = (uint8)sym;

    // conditional refill : put 32 more bits in when fewer than 32 remain
    if ( thirtytwo_minus_decode_bitcount > 0 )
    {
        uint64 next = _byteswap_ulong(*decode_in++);
        decode_bits |= next << thirtytwo_minus_decode_bitcount;
        thirtytwo_minus_decode_bitcount -= 32;
    }
}
325 mb/s.
(note that removing the bswap to have a little-endian u32 stream does almost nothing for performance, less than 1 mb/s)
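For reference, with a little-endian u32 stream the refill line would just become something like this (a sketch; assumes the encoder writes its 32-bit words in little-endian byte order, and everything else stays the same) :

    decode_bits |= ((uint64)(*decode_in++)) << thirtytwo_minus_decode_bitcount;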
The next option is : branchless refill, unaligned 64-bit refill. The refill only consumes whole bytes, so it always tops the buffer up to >= 56 bits, which means you can now do 4 decode ops per refill :
loop
{
    // refill :
    uint64 next = _byteswap_uint64(*((uint64 *)decode_in));
    bits |= next >> bitcount;
    int bytes_consumed = (64 - bitcount)>>3;
    decode_in += bytes_consumed;
    bitcount += bytes_consumed<<3;

    uint64 peek; int cl; int sym;

    #define DECONE() \
        peek = bits >> (64 - CODELEN_LIMIT); \
        cl = codelens[peek]; sym = symbols[peek]; \
        bits <<= cl; bitcount -= cl; \
        *decodeptr++ = (uint8) sym;

    DECONE();
    DECONE();
    DECONE();
    DECONE();

    #undef DECONE
}
373 mb/s
These so far have both been "traditional Huffman" decoders. That is, they use the next 12 bits from the bit buffer to look up the Huffman decode table, and they stream bits into that bit buffer.
There's another option, which is "ANS style" decoding. To do "ANS style" you keep the 12-bit "peek" as a separate variable, and you stream bits from the bit buffer into the peek variable. Then you don't need to do any masking or shifting to extract the peek.
The naive "ANS style" decode looks like this :
loop
{
    // refill bits :
    uint64 next = _byteswap_uint64(*((uint64 *)decode_in));
    bits |= next >> bitcount;
    int bytes_consumed = (64 - bitcount)>>3;
    decode_in += bytes_consumed;
    bitcount += bytes_consumed<<3;

    int cl; int sym;

    #define DECONE() \
        cl = codelens[state]; sym = symbols[state]; \
        state = ((state << cl) | (bits >> (64 - cl))) & ((1 << CODELEN_LIMIT)-1); \
        bits <<= cl; bitcount -= cl; \
        *decodeptr++ = (uint8) sym;

    DECONE();
    DECONE();
    DECONE();
    DECONE();

    #undef DECONE
}
332 mb/s
But we can use an analogy to the "next_state" of ANS. In ANS, the next_state is a complex thing with
certain rules (as we covered in the past). With Huffman it's just this bit of math :
next_state_table[state] = (state << codelens[state]) & ((1 << CODELEN_LIMIT)-1);
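Building that table is just one pass over the 4096 entries; something like this (a sketch - assumes codelens[] is already filled out for every table index, and that next_state_table is an array of uint16) :

    for (int state = 0; state < (1 << CODELEN_LIMIT); state++)
    {
        int cl = codelens[state];
        // shift out the cl bits consumed by this decode; the low cl bits
        // are left zero, to be filled from the bit buffer at decode time :
        next_state_table[state] = (uint16) ((state << cl) & ((1 << CODELEN_LIMIT)-1));
    }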
With that table built, we can use a "fully ANS" decoder :
loop
{
    // refill bits :
    uint64 next = _byteswap_uint64(*((uint64 *)decode_in));
    bits |= next >> bitcount;
    int bytes_consumed = (64 - bitcount)>>3;
    decode_in += bytes_consumed;
    bitcount += bytes_consumed<<3;

    int cl; int sym;

    #define DECONE() \
        cl = codelens[state]; sym = symbols[state]; \
        state = next_state_table[state] | (bits >> (64 - cl)); \
        bits <<= cl; bitcount -= cl; \
        *decodeptr++ = (uint8) sym;

    DECONE();
    DECONE();
    DECONE();
    DECONE();

    #undef DECONE
}
415 mb/s
Fastest! It seems the fastest Huffman decoder is a TANS decoder. (*1)
(*1 = well, on this machine anyway; these are all so close that architecture and exact usage matters massively; in particular we're relying heavily on fast unaligned reads, and doing four unrolled decodes in a row isn't always useful)
Note that this is a complete TANS decoder save one small detail - in TANS the "codelen" (previously called "numbits" in my TANS code) can be 0. The part where you do :

  (bits >> (64 - cl))

can't be used if cl can be 0, because a shift by 64 is undefined (and on x86 the shift count wraps, so you'd get all of bits instead of none). In TANS you either have to check for zero, or you have to use the method of :

  ((bits >> 1) >> (63 - cl))

which splits the shift so each shift count stays in [0,63], and makes TANS a tiny bit slower - 370 mb/s on the same file on my machine.
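In decode-step form that looks something like this (just a sketch of the shape; for real TANS the next_state_table comes from the TANS construction, not from the Huffman shift above) :

    #define DECONE() \
        cl = codelens[state]; sym = symbols[state]; \
        /* split shift keeps each shift count in [0,63] even when cl == 0 : */ \
        state = next_state_table[state] | ((bits >> 1) >> (63 - cl)); \
        bits <<= cl; bitcount -= cl; \
        *decodeptr++ = (uint8) sym;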
(all times reported are non-interleaved, and without table build time; Huffman is definitely faster to build tables, and faster to decode packed/transmitted codelens as well)
NOTE : an earlier version of this post had a mistake in the bitcount update and worse timings.
Some tiny caveats :
1. The TANS way means you can't (easily) mix different peek amounts. Say you're doing an LZ : you might want an 11-bit peek for literals, but for the 4 bottom bits you only need an 8-bit peek. The TANS state has the # of bits to peek baked in, so you can't just use that. With the normal bit-buffer style Huffman decoders you can peek any # of bits you want (though you could just do the multi-state interleave thing here, keeping with the TANS style).
2. Doing Huffman decodes without a strict codelen limit the TANS way is much uglier. With the bits-at-top bitbuffer method there are nice ways to do that.
3. Getting raw bits the TANS way is a bit uglier. Say you want to grab 16 raw bits; you could get 12 from the "state" and then 4 more from the bit buffer. Or just get 16 directly from the bit buffer, which means they need to be sent after the next 12 bits of Huffman in a weird TANS interleave style. This is solvable but ugly.
4. For the rare special case of an 8 or 16-bit peek-ahead, you can go even faster than the TANS style by using a normal bit buffer with the next bits at the bottom (either little-endian, or big-endian but rotated around). This lets you grab the peek just by using "al" on x86.
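eg. for an 8-bit peek the decode step could look something like this (a sketch; assumes codes are packed so the next one sits in the low bits of the buffer, with the refill ORing fresh bits in at the top) :

    // bits-at-bottom buffer : the next code is in the low bits
    peek = bits & 0xFF;     // 8-bit peek ; the compiler can read this straight out of "al"
    cl = codelens[peek]; sym = symbols[peek];
    bits >>= cl; bitcount -= cl;
    *decodeptr++ = (uint8) sym;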