6/07/2018

New in Oodle 2.6.3 : HyperFast Encode Speeds

Oodle 2.6.3 now has faster encode levels ("hyperfast"), for uses where encode speed is crucial.

Previously the fastest Oodle encode level was "SuperFast" (level 1). The new "HyperFast" levels are below that (levels -1 to -4). The HyperFast levels sacrifice some compression ratio to maximize encode speed.

An example of the performance of the new levels (on lzt99, x64, Core i7-3770) :

Higher CompressionLevels are to the right in the bar charts above; they get higher compression ratios at the cost of lower encode speed. The charts show three HyperFast levels (-1 to -3) and four normal levels (1 to 4).

In the loglog plot, up = higher compression ratio, right = faster encode.

lzt99      : Kraken-z-3  : 1.711 to 1 :  416.89 MB/s
lzt99      : Kraken-z-2  : 1.877 to 1 :  333.28 MB/s
lzt99      : Kraken-z-1  : 2.103 to 1 :  280.09 MB/s
lzt99      : Kraken-z1   : 2.268 to 1 :  167.01 MB/s
lzt99      : Kraken-z2   : 2.320 to 1 :  120.39 MB/s
lzt99      : Kraken-z3   : 2.390 to 1 :   38.85 MB/s
lzt99      : Kraken-z4   : 2.434 to 1 :   24.98 MB/s

lzt99      : Mermaid-z-3 : 1.660 to 1 :  438.89 MB/s
lzt99      : Mermaid-z-2 : 1.793 to 1 :  353.82 MB/s
lzt99      : Mermaid-z-1 : 2.011 to 1 :  277.35 MB/s
lzt99      : Mermaid-z1  : 2.041 to 1 :  261.38 MB/s
lzt99      : Mermaid-z2  : 2.118 to 1 :  172.77 MB/s
lzt99      : Mermaid-z3  : 2.194 to 1 :   97.11 MB/s
lzt99      : Mermaid-z4  : 2.207 to 1 :   40.88 MB/s

lzt99      : Selkie-z-3  : 1.447 to 1 :  627.76 MB/s
lzt99      : Selkie-z-2  : 1.526 to 1 :  466.57 MB/s
lzt99      : Selkie-z-1  : 1.678 to 1 :  370.34 MB/s
lzt99      : Selkie-z1   : 1.698 to 1 :  340.68 MB/s
lzt99      : Selkie-z2   : 1.748 to 1 :  204.76 MB/s
lzt99      : Selkie-z3   : 1.833 to 1 :  107.29 MB/s
lzt99      : Selkie-z4   : 1.863 to 1 :   43.65 MB/s

A quick guide to the Oodle CompressionLevels :


-4 to -1 : HyperFast levels

    when you want maximum encode speed
    these sacrifice compression ratio to reduce encode time

0 : no compression (memcpy pass through)

1 to 4 : SuperFast, VeryFast, Fast, Normal

    these are the "normal" compression levels
    encode times are ballpark comparable to zlib

5 to 8 : optimal levels

    increasing compression ratio & encode time
    levels above 6 can be slow to encode
    these are useful for distribution, when you want the best possible bitstream

Note that the CompressionLevel is a dial for encode speed vs. compression ratio. It does not have a consistent correlation to decode speed. That is, all of these compression levels get roughly the same excellent decode speed.
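
For reference, the level is just an argument to the compress call. A minimal sketch, assuming the function and enum names in the public Oodle SDK headers (eg. OodleLZ_CompressionLevel_HyperFast1 is level -1; verify against your SDK version) :

#include "oodle2.h"

// Sketch : compress a buffer with Kraken at HyperFast level -1.
// compBuf should be sized with OodleLZ_GetCompressedBufferSizeNeeded
// (check the exact signature in your SDK version).
OO_SINTa compress_hyperfast(const void * rawBuf, OO_SINTa rawLen, void * compBuf)
{
    return OodleLZ_Compress(
        OodleLZ_Compressor_Kraken,
        rawBuf, rawLen, compBuf,
        OodleLZ_CompressionLevel_HyperFast1, // -2/-3/-4 = HyperFast2/3/4
        NULL,          // default OodleLZ_CompressOptions
        NULL, NULL,    // no dictionary base, no long-range matcher
        NULL, 0);      // no scratch memory
}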

Comparing to Oodle 2.6.0 on Silesia :


Oodle 2.6.0 :
Kraken 1 "SuperFast"   :  3.12:1 ,  147.2 enc MB/s ,  920.9 dec MB/s
Kraken 2 "VeryFast"    :  3.26:1 ,  107.8 enc MB/s ,  945.0 dec MB/s
Kraken 3 "Fast"        :  3.50:1 ,   47.1 enc MB/s , 1043.3 dec MB/s

Oodle 2.6.3 :
Kraken -2 "HyperFast2" :  2.92:1 ,  300.4 enc MB/s , 1092.5 dec MB/s
Kraken -1 "HyperFast1" :  3.08:1 ,  231.3 enc MB/s ,  996.2 dec MB/s
Kraken 1 "SuperFast"   :  3.29:1 ,  164.6 enc MB/s ,  885.0 dec MB/s
Kraken 2 "VeryFast"    :  3.40:1 ,  109.5 enc MB/s ,  967.3 dec MB/s
Kraken 3 "Fast"        :  3.61:1 ,   45.8 enc MB/s ,  987.5 dec MB/s

Note that in Oodle 2.6.3 the normal levels (1-3) have also improved (much higher compression ratios).


Oodle is an SDK for high performance lossless data compression. For more about Oodle, or licensing inquiries, visit the RAD Game Tools web site. This is my personal blog where I post supplemental material about Oodle.

6/06/2018

The Perils of Holistic Profiling

I have found that many evaluators of Oodle are now timing their entire process. That is, they profile the compressor by measuring the effect on total load time, or on entire game frame time (as opposed to profiling just the compression operation, perhaps in situ or perhaps in a test bench).

I believe what's happened is that many people have read about the dangers of artificial benchmarks. (for example there are some famous papers on the perils of profiling malloc with synthetic workloads, or on how profiling threading primitives in isolation is pretty useless).

While those warnings do raise important issues, the right response is not to switch to timing whole operations.

For example, while timing mallocs with bad synthetic data loads is not useful (and perhaps even harmful), similarly timing an entire application run to determine whether a malloc is better or not can also be misleading.

Basically I think the wrong lesson has been learned and people are oversimplifying. They have taken one bad practice (timing operations by running them in a synthetic test bench over and over) and replaced it with another bad practice (timing the whole application).

The reality of profiling is far more complex and difficult. There is no one right answer. There is not a simple prescription of how to do it. Like any scientific measurement of a complex dynamic system, it requires care and study. It requires looking at the specific situation and coming up with the right measurement process. It requires secondary measurements to validate your primary measurements, to make sure you are testing what you think you are.

Now, one of the appealing things about whole-process timing is that in one very specific case, it is the right thing to do.

IF the thing you care about is whole-process time, and the process is always run the same way, and you do the timing on the system the process actually runs on, in the same application state and environment, AND, crucially, you only make one change to the process - then whole-process timing is right.

Let's first talk about the last issue, which is the "single change" problem.

Quite often a good change can appear to do nothing (or even be negative) for whole process time on its own. By looking at just the whole process time to evaluate the change, you miss a very positive step. Only if another step is taken will the value of that first step be shown.

A common case of this is if your process has other limiting factors that need to be fixed.

For example on the macroscopic level, if your game is totally GPU bound, then anything you do to CPU time will not show up at all if you are only measuring whole frame time. So you might profile a CPU optimization and see no benefit to frame time. You can miss big improvements this way, because they will only show up if you also fix what's causing the process to be GPU bound.

Similarly at a more microscopic level, it's common to have a major limiting factor in a sequence of code. For example you might have a memory read that typically misses cache, or an unpredictable branch. Any improvements you make to the arithmetic instructions in that area may be invisible, because the processor winds up stalling on a very slow cache line fill from memory. If you are timing your optimization work "in situ" to be "realistic" you can completely miss good changes because they are hidden by other bad code.

Another common example, maybe you convert some scalar code to SIMD. You think it should be faster, but you time it in app and it doesn't seem to be. Maybe you're bound in other places. Maybe you're suffering added latency from round-tripping from scalar to SIMD back to scalar. Maybe your data needs to be reformatted to be stored in SIMD friendly ways. Maybe the surrounding code needs to be also converted to SIMD so that they can hand off more smoothly. There may in fact be a big win there that you aren't seeing.

This is a general problem: greedy optimization, evaluating steps one by one, can be very misleading when measuring whole-process time. Sometimes individual steps are better evaluated by measuring just those steps in isolation, because whole-process time obscures them. Sometimes you have to take a step that you believe to be good even if it doesn't show up in measurements, and see if taking more steps provides a non-greedy multi-step improvement.

Particular perils of IO timing

A very common problem that I see is trying to measure data loading performance, including IO timing, which is fraught with pitfalls.

If you're doing repeated timings, then you'll be loading data that is already in the system disk cache, so your IO speed may just look like RAM speed. Is what's important to you cold cache timing (user's first run), or hot cache time? Or both?

Obviously there is a wide range of disk speeds, from very slow hard disks (as on consoles) in the 20 MB/s range up to SSDs and NVMe in the GB/s range. Which are you timing on? Which will your user have? Whether you have slow seeks or not can be a huge factor.

Timing on consoles with disk simulators (or worse : host FS) is particularly problematic and may not reflect real world performance at all.

The previously mentioned issue of high latency problems hiding good changes is very common. For example doing lots of small IO calls creates long idle times that can hide other good changes.

Are you timing on a disk that's fragmented, or nearly full? Has your SSD been through lots of write cycles already, or does it need rebalancing? Are you timing while other processes are running and hitting the disk as well?

Basically it's almost impossible to accurately recreate the environment that the user will experience. And the variation is not small, it can be absolutely massive. A 1 byte read could take anything from 1 nanosecond (eg. data already in disk cache) to 100 milliseconds (slow HD seek + other processes hitting the disk).

Because of the uncertainty of IO timing, I just don't do it. I use a simulated "disk speed" and just set :


disk time = data size / simulated disk speed

Then the question is, well if it's so uncertain, what simulated disk speed do you use? The answer is : all of them. You cannot say what disk speed the user will experience, there's a huge range, so you need to look at performance over a spectrum of disk speeds.

I do this by making a plot of what the total time for (load + decomp) is over a range of simulated disk speeds. Then I can examine how the performance is affected over a range of possible client systems, without trying to guess the exact disk speed of the client runtime environment. For more on this, see : Oodle LZ Pareto Frontier or Oodle Kraken Pareto Frontier .
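
For example, here is a sketch of that sweep (hypothetical sizes and speeds, and the simplest possible model : load and decompress not overlapped) :

#include <stdio.h>

// Sketch : total (load + decomp) time across simulated disk speeds.
// All numbers are hypothetical; uses the simplest non-overlapped model.
int main(void)
{
    double rawMB   = 24.0;   // uncompressed size in MB (hypothetical)
    double compMB  = 10.0;   // compressed size in MB   (hypothetical)
    double decMBps = 1000.0; // decompress speed in MB/s (hypothetical)

    // sweep from slow console HD (20 MB/s) up to the NVMe GB/s range
    for (double diskMBps = 20.0; diskMBps <= 4000.0; diskMBps *= 2.0)
    {
        double seconds = compMB / diskMBps + rawMB / decMBps;
        printf("disk %7.0f MB/s : load+decomp %.3f s : effective %7.1f MB/s\n",
               diskMBps, seconds, rawMB / seconds);
    }
    return 0;
}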

ZStd is faster than Leviathan

ZStd is faster than Leviathan on some files ; well, no, it's not that simple.

This is another post about careful measurement, how to compare compressors, and about the unique way Oodle works.

(usual caveat: I don't mean to pick on ZStd here; I use it as a reference point because it is excellent, the closest thing to Oodle, and something we are often compared against. ZStd timing is done with lzbench; times are on x64 Core i7-3770)

There are two files in my "gametestset" where ZStd appears to be significantly faster to decode than Leviathan :


e.dds :

zstd 1.3.3 -22           3.32 MB/s   626 MB/s      403413  38.47%

ooLeviathan8    :  1,048,704 ->   355,045 =  2.708 bpb =  2.954 to 1 
decode          : 1.928 millis, 6.26 c/b, rate= 544.03 MB/s


Transistor_AudenFMOD_Ambience.bank :

zstd 1.3.3 -22           5.71 MB/s  4257 MB/s    16281301  84.18%

ooLeviathan8    : 19,341,802 ->16,178,303 =  6.692 bpb =  1.196 to 1 
decode          : 8.519 millis, 1.50 c/b, rate= 2270.48 MB/s

Whoah! ZStd is a lot faster to decode than Leviathan on these files, right? (626 MB/s vs 544.03 MB/s and 4257 MB/s vs 2270.48 MB/s)

No, it's not that simple. Compressor performance is a two-axis value of {space,speed}. It's a 2d vector, not a scalar. You can't simply take one component of the vector and compare speeds at unequal compression.

All compressors are able to hit a range of {space,speed} points by making different decisions. For example with ZStd at level 22 you could forbid length 3 matches and that would bias it more towards decode speed and lower compression ratio.
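
(For illustration, with the modern ZStd advanced API - which postdates the 1.3.3 build timed here - that choice would look something like this sketch :)

#include <zstd.h>

// Sketch using the ZStd advanced API (v1.4.0+). Raising minMatch to 4
// forbids length-3 matches, trading compression ratio for decode speed.
size_t compress_no_len3(void * dst, size_t dstCap, const void * src, size_t srcSize)
{
    ZSTD_CCtx * cctx = ZSTD_createCCtx();
    ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, 22);
    ZSTD_CCtx_setParameter(cctx, ZSTD_c_minMatch, 4);
    size_t csize = ZSTD_compress2(cctx, dst, dstCap, src, srcSize);
    ZSTD_freeCCtx(cctx);
    return csize; // check with ZSTD_isError()
}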

Oodle is unique in being fundamentally built as a space-speed optimization process. The Oodle encoders can make bit streams that cover a range of compression ratios and decode speeds, depending on what the client asks it to prioritize.

Compressor performance is determined by two things : the fundamental algorithm, and the current settings. The settings will allow you to dial the 2d performance data point to different places. The algorithm places a limit on where those data points can be - it defines a Pareto Frontier. This Pareto curve is a fundamental aspect of the algorithm, while the exact space speed point on that curve is simply a choice of settings.

There is no such thing as "what is the speed of ZStd?". It depends how you have dialed the settings to reach different performance data points. The speed is not a fundamental aspect of the algorithm. The Pareto frontier *is* a fundamental aspect, the limit on where those 2d data points can reach.

One way to compare compression algorithms (as opposed to their current settings) is to plot many points of their 2d performance at different settings, and then inspect how the curves lie. One curve might strictly cover the other; then that algorithm is always better. Or they might cross at some point, which means each algorithm is best in a different performance domain.

Another way to compare compression algorithms is to dial them to find points where one axis is equal (either they decode at the same speed, or they have the same compression ratio); then you can do a simple 1d comparison of the other value. You can also try to find points where one compressor is strictly better on both axes. The inconclusive situation is when one compressor is better on one axis, and the other is better on the other axis.

(note I have been talking about compressor performance as the 2d vector of {decode speed,ratio} , but of course you could also consider encode time, memory use, other factors, and then you might choose other axes, or have a 3d or 4d value to compare. The same principles apply.)

(there is another way to compare 2d compressor performance with a 1d scalar; at RAD we internally use the corrected Weissman score. One of the reasons we use the 1d Weissman score is that sometimes we make an improvement to a compressor and one of the axes gets worse. That is, we do some work, and then measure, and we see compression ratio went down. Oh no, WTF! But actually decode speed went up. From the 2d performance vector it can be hard to tell if you made an improvement or not; the 1d scalar Weissman score makes that easier.)

Oodle is an optimizing compiler

Oodle is fundamentally different than other compressors. There is no "Oodle has X performance". Oodle has whatever performance you ask it to have (and the compressed size will vary along the Pareto frontier).

Perhaps an analogy that people are more familiar with is an optimizing compiler.

The Oodle decoder is a virtual machine that runs a "program" to create an output. The compressed data is the program of commands that run in the Oodle interpreter.

The Oodle encoder is a compiler that makes the program to run on that machine (the Oodle decoder). The Oodle compiler tries to create the most optimal program it can, by considering different instruction sequences that can create the same output. Those different sequences may have different sizes and speeds. Oodle chooses between them based on how the user has specified to value time vs. size. (this is a bit like telling your optimizing compiler to optimize for size vs. optimize for speed, but Oodle is much more fine-grained).

For example at the microscopic level, Oodle might consider a sequence of 6 bytes. This can be sent as 6 literals, or a pair of length 3 matches, or a literal + a len 4 rep match + another literal. Each possibility is considered and the cost is measured for size & decode time. At the macroscopic level Oodle considers different encodings of the command sequences, whether to send somethings uncompressed or with different entropy coders, and different bit packings.
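
In pseudocode, the kind of decision being made looks something like this (a schematic sketch of the idea only, not Oodle's actual internals) :

#include <stddef.h>

// Schematic sketch of a space-speed cost model (not Oodle's real code).
// Each candidate encoding of the same output bytes is priced in both
// size and estimated decode time; the cheapest total cost wins.
typedef struct
{
    double codedBits;    // size cost of this encoding choice
    double decodeCycles; // estimated decode-time cost of this choice
} Choice;

// lambda converts cycles to bits : how many bits of size savings we
// are willing to give up to save one cycle of decode time
static double total_cost(Choice c, double lambda)
{
    return c.codedBits + lambda * c.decodeCycles;
}

// eg. for the 6-byte example : { 6 literals, two len-3 matches,
// literal + len-4 rep match + literal }
static size_t pick_cheapest(const Choice * choices, size_t n, double lambda)
{
    size_t best = 0;
    for (size_t i = 1; i < n; i++)
        if (total_cost(choices[i], lambda) < total_cost(choices[best], lambda))
            best = i;
    return best;
}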

Oodle is a market trader

Unlike any other lossless compressor, Oodle makes these decisions based on a cost model.

It has been standard for a long time to make space vs. speed decisions in lossless compressors, but it has in the past always been done with hacky ad-hoc methods. For example, it's common to say something like "if the compressed size is only 1% less than the uncompressed size, then just send it uncompressed".

Oodle does not do that. Oodle considers its compression savings (bytes under the uncompressed size) to be "money". It can spend that money to get decode time. Oodle plays the market, it looks for the best price to spend its money (size savings) to get the maximum gain of decode time.

Oodle does not make ad-hoc decisions to trade speed for size, it makes an effort to get the best possible value for you when you trade size for speed. (it is of course not truly optimal because it uses heuristics and limits the search, since trying all possible encodings would be intractable).

Because of this, it's easy to dial Oodle to different performance points to find more fundamental comparisons with other compressors. (see, for example : Oodle tuneability with space-speed tradeoff )

Note that traditional ad-hoc compressors (like ZStd and everyone else) make mistakes in their space-speed decisions. They do not allocate time savings to the best possible files. This is an inevitable consequence of having simple thresholds in decision making (and this flaw is what led us to do a true cost model). That is, Leviathan decode speed is usually, say, 30% faster than ZStd. On some files that ratio goes way up or way down. When that happens, it is often because ZStd is making a mistake. That is, it's not paying the right price to trade size for speed.

Of course this relies on you telling Oodle the truth about whether you want decode speed or size. Since Oodle is aggressively trading the market, you must tell it the way you value speed vs. size. If you use Leviathan at default settings, Oodle thinks your main concern is size, not decode speed. If you actually care more about decode speed, you need to adjust the price (with "spaceSpeedTradeoffBytes") or possibly switch to another compressor (Kraken, Mermaid, or let Hydra switch for you).
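
In code, that dial is a single field in the compress options. A sketch (again assuming the names in the public Oodle SDK headers; verify against your SDK version) :

#include "oodle2.h"

// Sketch : encode with Leviathan at level 8 ("ooLeviathan8" above),
// but tell the cost model to pay more size to buy decode speed.
OO_SINTa compress_favor_speed(const void * rawBuf, OO_SINTa rawLen, void * compBuf)
{
    OodleLZ_CompressOptions opts = *OodleLZ_CompressOptions_GetDefault(
        OodleLZ_Compressor_Leviathan, OodleLZ_CompressionLevel_Optimal4);
    opts.spaceSpeedTradeoffBytes = 1024; // default 256; higher favors decode speed

    return OodleLZ_Compress(
        OodleLZ_Compressor_Leviathan,
        rawBuf, rawLen, compBuf,
        OodleLZ_CompressionLevel_Optimal4,
        &opts, NULL, NULL, NULL, 0);
}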

Back to the files where ZStd is faster

Armed with our new knowledge, let's revisit those two files :


e.dds      : zstd 1.3.3 -22 : 2.600 to 1 :  625.72 MB/s
e.dds      : Leviathan -8   : 2.954 to 1 :  544.03 MB/s


Transistor_AudenFMOD_Ambience.bank : zstd 1.3.3 -22 : 1.188 to 1 : 4257 MB/s
Transistor_AudenFMOD_Ambience.bank : Leviathan -8   : 1.196 to 1 : 2270.48 MB/s 

Is ZStd faster on these files? At this point we don't know. These are inconclusive data points. In both cases, Leviathan has more compression, but ZStd has more speed - the victor on each axis differs and we can't easily say which is really doing better.

To get a simpler comparison we can dial Leviathan to different performance points using Oodle's "spaceSpeedTradeoffBytes" parameter, which sets the relative cost of time vs size in Oodle's decisions.

That is, in both cases Oodle has size savings to spend. It can spend those size savings to get more decode speed.

On e.dds, let's take Leviathan and dial spaceSpeedTradeoffBytes up from the default of 256 in powers of two to favor decode speed more :

e.dds      : zstd 1.3.3 -22 : 2.600 to 1 :  625.72 MB/s
e.dds      : Leviathan 1    : 3.020 to 1 :  448.30 MB/s
e.dds      : Leviathan 256  : 2.954 to 1 :  544.03 MB/s
e.dds      : Leviathan 512  : 2.938 to 1 :  577.23 MB/s
e.dds      : Leviathan 1024 : 2.866 to 1 :  826.15 MB/s
e.dds      : Leviathan 2048 : 2.831 to 1 :  886.42 MB/s

What is the speed of Leviathan? There is no one speed of Leviathan. It can go from 448 MB/s to 886 MB/s depending on what you tell the encoder you want. The fundamental aspect is what compression ratio can be achieved at each decode speed.

We can see that ZStd is not fundamentally faster on this file; in fact Leviathan can get much more decode speed AND compression ratio at spaceSpeedTradeoffBytes = 1024 or 2048.

Similarly on Transistor_AudenFMOD_Ambience.bank :

Transistor_Aude...D_Ambience.bank : zstd 1.3.3 -22 : 1.188 to 1 : 4275.38 MB/s
Transistor_Aude...D_Ambience.bank : Leviathan 256  : 1.196 to 1 : 2270.48 MB/s
Transistor_Aude...D_Ambience.bank : Leviathan 512  : 1.193 to 1 : 3701.30 MB/s
Transistor_Aude...D_Ambience.bank : Leviathan 1024 : 1.190 to 1 : 4738.83 MB/s
Transistor_Aude...D_Ambience.bank : Leviathan 2048 : 1.187 to 1 : 6193.92 MB/s

zstd 1.3.3 -22           5.71 MB/s  4257 MB/s    16281301  84.18% Transistor_AudenFMOD_Ambience.bank

Leviathan spaceSpeedTradeoffBytes = 2048
ooLeviathan8    : 19,341,802 ->16,290,106 =  6.738 bpb =  1.187 to 1 
decode          : 3.123 millis, 0.55 c/b, rate= 6193.92 MB/s

In this case we can dial Leviathan to get very nearly the same compressed size, and then just compare speeds (4275.38 MB/s vs 6193.92 MB/s).

Again ZStd is not actually faster than Leviathan here. If you looked at Leviathan's default-settings encode (2270.48 MB/s), you were not seeing ZStd being faster to decode. What you were seeing is that you told Leviathan to choose an encoding that favors size over decode speed.

It doesn't make sense to tell Oodle to make a very small file, and then just compare decode speeds. That's like buying a truck to maximize cargo carrying and then complaining that it has poor gas mileage. You specifically asked me to optimize for the opposite goal!

Note that in the Transistor bank case, it looks like Oodle is paying a bad price to get a tiny compression savings; going from 6000 MB/s to 2000 MB/s seems like a lot. In fact that is a small time difference, while 1.187 to 1.196 ratio is actually a big size savings. The problem here is that ratio & speed are inverted measures of what we are really optimizing, which is time and size. Internally we always look at bpb (bits per byte) and cpb (cycles per byte) when measuring performance.

Bits and cycles are the commodities that you are trading. If you look at compression ratio or speed, those are actually inverses of the commodities you care about, which can make the numbers look weird. If we convert these ratios to commodities :

Transistor_Aude...D_Ambience.bank : zstd 1.3.3 -22 : 1.188 to 1 : 4275.38 MB/s
Transistor_Aude...D_Ambience.bank : Leviathan 256  : 1.196 to 1 : 2270.48 MB/s
Transistor_Aude...D_Ambience.bank : Leviathan 512  : 1.193 to 1 : 3701.30 MB/s

is

Transistor_Aude...D_Ambience.bank : zstd 1.3.3 -22 : 6.734 bits per byte : 0.795 cycles per byte
Transistor_Aude...D_Ambience.bank : Leviathan 256  : 6.689 bits per byte : 1.497 cycles per byte
Transistor_Aude...D_Ambience.bank : Leviathan 512  : 6.706 bits per byte : 0.919 cycles per byte

The setting "spaceSpeedTradeoffBytes" tells Leviathan how much it should pay in cycles to gain some compression in bits.
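
The conversion itself is simple. A sketch (assuming MB/s means 10^6 bytes per second, and the nominal 3.4 GHz clock of the i7-3770 used for these timings) :

#include <stdio.h>

// Sketch : convert {ratio, MB/s} into the commodities {bpb, cpb}.
int main(void)
{
    double clockHz = 3.4e9;     // nominal i7-3770 clock (assumption)
    double ratio   = 1.188;     // zstd -22 on the Transistor bank
    double MBps    = 4275.38;   // decode speed

    double bpb = 8.0 / ratio;            // bits per byte   : ~6.734
    double cpb = clockHz / (MBps * 1e6); // cycles per byte : ~0.795
    printf("%.3f bpb , %.3f cpb\n", bpb, cpb);
    return 0;
}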

e.dds charts :

See also : The Natural Lambda
