2/23/2021

Rate allocation in Oodle Texture

Oodle Texture does rate-distortion optimized (RDO) BCN encoding: it optimizes the BCN encoding against a two-axis rate-distortion (R-D) score, so that the size of the BCN texture after subsequent lossless compression is reduced while the distortion (difference from the original) does not increase too much.

One way to think about RDO conceptually is as rate allocation. Rate allocation is when an encoder intentionally spends more or fewer bits on different parts of the data, putting bits where they help the most, so that the result is better quality for a given output size. I want to talk a bit about that viewpoint and how it works in Oodle Texture.

Traditional lossy encoders like JPEG didn't have active rate allocation; they used a fixed scheme (DCT and quantize) that did a fixed rate allocation (give more bits to the lower frequencies) and didn't try to optimize that based on image contents. Modern codecs like H264 use rate allocating encoders and furthermore have explicit tools in the format to make rate allocation more flexible, such as variable quantizers. Formats without explicit rate allocation controls can still be rate adjusted, which is what we do in Oodle Texture, but it's not as easy to dial.

In that context, we think of a traditional non-RDO BCN encoder as allocating rate evenly to all blocks; it doesn't care where the rate goes and just tries to minimize distortion. The BCN encoder itself produces fixed size blocks (8 bytes per block for BC1 or 16 bytes per block for BC7), but the "rate" we care about is the size of the block after subsequent LZ compression (eg. with Kraken or Zip/deflate).

In Oodle Texture, we use the Lagrange parameter method for RDO. Specifically we construct a Lagrange cost :

J = D + lambda * R
and then we can just minimize a single number, J, rather than a two-axis score of {R,D}. Obviously when lambda=0 this reduces to J=D , minimize D, which is a non-RDO encoding that only cares about quality and not rate. As lambda increases your score is more and more a balance of minimizing both distortion and rate.
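
As a tiny sketch (illustrative only, not Oodle Texture's actual code), choosing among a set of candidate encodings with the Lagrange cost is just a linear scan for the lowest J :

// sketch : pick the candidate encoding with the lowest J = D + lambda * R
// "Candidate" and pick_best_encoding are made-up names, not an Oodle API
#include <cstddef>
#include <vector>

struct Candidate { double R; double D; }; // rate (e.g. bits after LZ), distortion

size_t pick_best_encoding(const std::vector<Candidate>& candidates, double lambda)
{
    // assumes candidates is non-empty
    size_t best = 0;
    double bestJ = candidates[0].D + lambda * candidates[0].R;
    for (size_t i = 1; i < candidates.size(); ++i)
    {
        double J = candidates[i].D + lambda * candidates[i].R;
        if (J < bestJ) { bestJ = J; best = i; }
    }
    return best;
}

With lambda = 0 this degenerates to picking the minimum-D candidate, i.e. the non-RDO choice.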

We sometimes think of D(R), distortion as just a function of R, but that's not how it works in an implementation; you don't actually know the functional relationship between R and D.

In practice there is some encoding, which I will index by "i" , and you have two separate functions :

R(i) and D(i)
That is, you just choose various encodings that look promising, and measure R and D of each. This gives you a scattering of points on an R-D plot. Some points are off the Pareto frontier - strictly worse in both R and D - and we just reject those points. See Fabian's blog on RDO for some pictures of this.
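
That rejection step is simple to sketch (again illustrative, not Oodle's code) : sort the candidates by rate and keep only the ones that strictly improve distortion :

// sketch : keep only the Pareto frontier of a set of (R,D) points
#include <algorithm>
#include <limits>
#include <vector>

struct Point { double R, D; };

std::vector<Point> pareto_frontier(std::vector<Point> pts)
{
    // sort by rate ascending, ties broken by distortion
    std::sort(pts.begin(), pts.end(), [](const Point& a, const Point& b) {
        return a.R != b.R ? a.R < b.R : a.D < b.D;
    });
    std::vector<Point> frontier;
    double bestD = std::numeric_limits<double>::infinity();
    for (const Point& p : pts)
    {
        // a point survives only if it has lower D than everything cheaper than it
        if (p.D < bestD) { frontier.push_back(p); bestD = p.D; }
    }
    return frontier;
}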

Henceforth I will assume that points off the Pareto frontier have been rejected, so we are now only looking at encodings on the Pareto R-D curve. I will also sort the encoding index "i" by rate.

The R(i) and D(i) curves both tend to be hyperbolic-shaped, with minima at opposite ends of the "i" spectrum. If you minimized either one on its own, there are no interior local minima, so you would just go to the edge of what the encoding format is capable of doing. Adding them together with lambda makes a trough somewhere in the middle :

High D, low R on the left (very lossy); low D, high R (non-RDO) on the right.

At low lambda, the minimum of J is in the high-R, low-D domain. As lambda increases it shifts the minimum of J to lower R. In the example chart, the red low-lambda curve has minimum at i=14, the teal medium-lambda curve has minimum at i=12, the gold high-lambda curve has minimum at i=10. In this way, lambda can be used to dial rate, but indirectly.
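
A tiny standalone demo (with made-up hyperbolic numbers, not the ones in the chart) shows the same behavior : as lambda goes up, the minimum of J slides to lower i, which means lower rate :

// demo : the argmin of J = D + lambda * R moves to lower rate as lambda rises
// D(i) and R(i) here are synthetic, purely for illustration
#include <cstdio>
#include <initializer_list>

int main()
{
    const int N = 16;
    double R[N], D[N];
    for (int i = 0; i < N; ++i)
    {
        R[i] = i;               // rate increases with i
        D[i] = 100.0 / (i + 1); // distortion falls off hyperbolically
    }
    for (double lambda : { 0.5, 2.0, 8.0 })
    {
        int best = 0;
        double bestJ = D[0] + lambda * R[0];
        for (int i = 1; i < N; ++i)
        {
            double J = D[i] + lambda * R[i];
            if (J < bestJ) { bestJ = J; best = i; }
        }
        printf("lambda = %.1f : minimum J at i = %d\n", lambda, best);
    }
    return 0;
}

With these made-up numbers the minimum lands at i=13, then i=6, then i=3 as lambda increases.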

Concretely, if we reparameterize to imagine we have a function D(R) (Distortion as a function of R; there is only one encoding for each R because we discard non-Pareto encodings), we have :


J(R) = D(R) + lambda * R

the minimum is at dJ/dR = 0

dJ/dR = dD/dR + lambda = 0

lambda = - dD/dR

lambda is the slope on the D(R) curve.

Because the D(R) curve is hyperbolic-shaped, the slope acts like a parameter of D itself. That is, where D is higher the curve is also steeper, so by dialing the slope we also have control over the value of D.

Aside for experts : we are assuming that D(R) is continuous and monotonic AND that dD/dR is continuous and monotonic; that is, the slope steadily increases from low to high. The whole idea of lambda as a slope of D(R) only really works if these curves are smooth and hyperbolic-shaped as expected. We also need J to have only one local minimum. If these assumptions fail (which in practice they do!) you can wind up at a minimum that's not where you want, and the lambda-dials-D relationship can break down. There are some tricks to avoid these pitfalls : avoid large steps of R/D in your encoding choices; provide the encoder with a fine-grained series of steps; don't use the true R, which can have strange non-monotonic wiggles, but instead an approximation of R which tends to smooth things out; and generally pre-classify the encodings you think are low R vs high R and force them to be monotonic rather than measuring true R.

Now, "lambda is the slope on the D(R) curve" sounds a bit unintuitive and abstract, but in fact it is a concrete simple thing.

Lambda is a price. It is the gain in quality (reduction in D) that you must get for a gain in R to be worth it.

By using lambda as your control parameter instead of D, you are dialing quality via the *cost* of quality in terms of rate. You are saying "this much gain in quality is worth this much rate to me". We typically measure R in bits, so let's say lambda is quality per bit. In order to make a file bigger by 1 bit, it must provide at least "lambda" quality gain. Smaller quality gain, it's not worth it, don't do it.
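
In code terms the decision rule is just a comparison against that price (a sketch with made-up names, equivalent to comparing the J of the two encodings) :

// sketch : spending extra bits is only worth it if the distortion reduction
// is at least lambda per bit ; same as requiring J = D + lambda * R to go down
bool worth_spending_bits(double distortion_reduction, double extra_bits, double lambda)
{
    return distortion_reduction >= lambda * extra_bits;
}

For example, at lambda = 2 (in whatever quality units per bit you use), an encoding that is 10 bits bigger must reduce D by at least 20 to be chosen.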

Having our control parameter be a price, instead of directly controlling R or D, turns out to be very beneficial, both for us internally and for you externally.

First, why it's right externally : lambda control (slope control) lets you set a single parameter that works across all kinds of data or images. It automatically gives more rate to images that are harder to compress ("harder" here very concretely means a steeper slope - they give up more distortion than average for each step of rate). So you can have a mix of vastly different data - lightmaps and normal maps and cartoon drawings - and they automatically rate-adjust to send bits where they help the most. Many novices prefer to think in terms of "rate reduction" or a "constant quality" setting, like "I want to RDO all my images to 30% rate reduction", but that's really wrong. On some images, getting to 30% rate reduction would do lots of quality damage, while on others you could easily get more rate reduction (say 50%) without much quality loss at all. Lambda does that for you automatically.

Internally it's important because it lets us make each coding decision in isolation, just by looking at the J cost of that one decision. It makes the problem separable (which is also great for parallelism) while still achieving a global minimum. Lagrange optimization automatically does rate allocation within the image, across different blocks and decisions.

I think it helps to compare to an alternative manual rate allocation method to see how advantageous it is :


Algorithm Manual Rate Allocation :

you have an image of N BCN 4x4 blocks
for each block, find the lowest D (non-RDO) encoding
measure R of that encoding - this is the max rate for each block

now incrementally reduce total rate
you want to take rate from whichever of the N blocks is the cheapest one to take it from

for all N blocks
find the next smaller encoding 
measure the D increase for that step down of R
slope(n) = delta(D) / delta(R)

take the one block change n with the lowest slope(n)

repeat until all slopes(n) are > max_desired_slope

vs.

Algorithm Lagrange Rate Allocation :

for all N blocks in parallel :

try various encodings of the block
compute J = D + lambda * R for each encoding
take the encoding on that block with the best J

In "Manual Rate Allocation" we have this palette of various bins (the blocks) to choose to take rate from, and you take it from the one that does the least damage to D, if you keep repeating that the slopes will converge until they are all roughly equal.

The powerful thing is that these two algorithms converge to exactly the same solution (caveat: assuming monotonic and smooth R/D curves, which in fact they are not). The Lagrange method is actually doing rate allocation between the blocks, even though it's not explicit, and each block can be considered in isolation. In an RDO encoder, rate is like water: it flows away from high ground (dD/dR is high) to low ground, until a level is reached. The Lagrange method is able to do this separably because the total amount of water (rate) is not conserved.

We can control exactly what the rate allocation does through the choice of the D function. The R function for rate is relatively inflexible - we're counting bits of output size (though in practice we may want to use a smoothed rate rather than true rate), we don't have much choice in R - in contrast, the D function is not absolutely specified and there are lots of options there. The choice of D changes where rate flows.

For example, consider the common choices of D = L1 norm (SAD) vs L2 norm (SSD). These metrics score different errors differently, which causes rate to flow. In particular L2 (squaring) puts a very big weight on a single large error vs. multiple smaller errors.


Source data   = { 10, 10 }
Approximation = {  8, 12 }

SAD error = 2 + 2 = 4
SSD error = 2^2 + 2^2 = 8

Source data   = { 10, 10 }
Approximation = { 10, 14 }

SAD error = 0 + 4 = 4
SSD error = 0^2 + 4^2 = 16

Choosing D = SSD would cause rate to flow from the {8,12} approximation to the {10,14} approximation, because the error is much bigger there (relative to where it is with SAD). All reasonable D functions are zero for an exact match and increase as the approximation gets further from the source data, but they can increase at different rates for different types of distortion. In a non-RDO encoding these different D functions might find very similar encodings, but with RDO, choosing a different D makes rate flow between blocks with different characters of error.
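
For reference, a tiny program that reproduces the SAD/SSD numbers above (nothing Oodle-specific) :

// the worked example above : same SAD, very different SSD
#include <cstdio>
#include <cstdlib>

static int sad(const int* a, const int* b, int n)
{
    int s = 0;
    for (int i = 0; i < n; i++) s += std::abs(a[i] - b[i]);
    return s;
}

static int ssd(const int* a, const int* b, int n)
{
    int s = 0;
    for (int i = 0; i < n; i++) { int d = a[i] - b[i]; s += d * d; }
    return s;
}

int main()
{
    int src[2] = { 10, 10 };
    int ap1[2] = {  8, 12 };
    int ap2[2] = { 10, 14 };
    printf("{8,12}  : SAD=%d SSD=%d\n", sad(src, ap1, 2), ssd(src, ap1, 2)); // SAD=4 SSD=8
    printf("{10,14} : SAD=%d SSD=%d\n", sad(src, ap2, 2), ssd(src, ap2, 2)); // SAD=4 SSD=16
    return 0;
}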


With no further ado, we can get to the fun part : pictures of how rate allocation plays out in practice in Oodle Texture.

These images show the original image, followed by gray scale images of the rate allocation. These are for BC7 encodings with Oodle Texture. The non-RDO encoding is Oodle Texture at lambda=0 ; the RDO encodings are at lambda=40 (typical medium quality setting).

In the rate allocation images, each pixel represents one 4x4 block. White is full rate (16 bytes per block) and black is zero bits. These encodings are not normalized to the same total rate (in particular the non-RDO is of course much larger). The images are gamma corrected so that linear light intensity corresponds to bits per block. The original images are halved in size for display here. So for example "mysoup" was 1024x512, the original shown here is 512x256, and rate map is one pixel per block, so 256x128.

The "rmse" and "perceptual" images use the exact same coding procedure at the same lambda, they differ only in the D function used to measure distortion. "rmse" tries to minimize simple L2 norm, while "perceptual" has some estimation of visual quality (trying to avoid blocking artifacts, for example).

The R measured for these maps is the size of each block after subsequent compression by Oodle's LZA compressor.

non-RDO / RDO rmse / RDO perceptual

The non-RDO "mysoup" has very high entropy, the BC7 blocks are nearly incompressible by the LZ (7.759 bpb). The RDO with D=rmse keeps very high bit rate in the blocks of the bowl rim, but allocates lots of bits away from the smooth regions in the egg and pumpkin. In the "perceptual" allocation, the bits in the rim are reduced, allowing more error there, and the bit rate of the smooth regions come up to avoid blocking artifacts.

non-RDO / RDO rmse / RDO perceptual

In the house image "test6" the bit rate of the "rmse" allocation goes nearly to zero in the smooth dark areas of the beams under the eaves. That's a mistake perceptually and causes bad blocking artifacts, so the "perceptual" allocator shifts rate back to those blocks.

non-RDO / RDO rmse / RDO perceptual

We can see that Oodle Texture changes a near-even rate allocation into a very variable rate per block. Even though the BC7 blocks are always 16 bytes, we have made some more compressible than others, shifting rate to where it is needed. On many blocks, 16 bytes is just too much; that much rate is not needed to get a good encoding, and the difference in D from a reduced-size encoding is small; these are low-slope blocks and will lose rate in RDO. After rate allocation, the blocks all have the same dD/dR slope; they reach an equilibrium where you can't take bits from one block, move them to another block, and improve total quality.

Something you may notice in all the rate allocation maps is that there are horizontal lines of higher rate going through the image. This is where we slice the image into chunks for parallelism of some operations that have to work per-chunk instead of per-block. The stripes of higher rate show that we could find some improvement there.


Links :

Oodle Texture at radgametools.com
Lagrange space-speed optimization - cbloom blog
Leviathan is a market trader - cbloom blog
Rate-Distortion Optimisation in Dirac
Rate-distortion optimization - The ryg blog

2/15/2021

Oodle 2.8.14 with Mac ARM64

Oodle 2.8.14 is out. The full changelog is at RAD. The highlights are :
Release 2.8.14 - February 15, 2021

* enhancement : BC7 encoding is faster ; slightly different encodings at higher speed with similar quality
* new : Mac ARM64 build now provided ; Mac example exes are fat x64+arm64
* new : Apple tvOS build now provided
* deprecation : Mac 32-bit x86 build no longer provided

We're also now shipping plugin integrations for Unreal 4.26.

Kraken decompression is wicked fast on the M1 :

Kraken, Win-x64 msvc-1916, lzc99.kraken.zl6
lzt99 : 24,700,820 ->10,013,788 =  3.243 bpb =  2.467 to 1
decode           : 16.444 millis, 2.26 c/B, rate= 1502.09 MB/s

Kraken, Mac-x64 xcode-12.0.0, lzc99.kraken.zl6
lzt99 : 24,700,820 ->10,013,788 =  3.243 bpb =  2.467 to 1
decode           : 16.183 millis, 2.10 c/B, rate= 1526.36 MB/s
(emulated!)

Kraken, Mac-ARM64 xcode-12.0.0, lzc99.kraken.zl6
lzt99 : 24,700,820 ->10,013,788 =  3.243 bpb =  2.467 to 1
decode           : 11.967 millis, 1.55 c/B, rate= 2064.13 MB/s

Win64 run is :
AMD Ryzen 9 3950X (CPU locked at 3393 MHz, no turbo)
Zen2, 16 cores (32 hyper), TSC at 3490 MHz 

Mac runs are on a Mac Mini M1 at 3205 MHz

Mac x64 is emulated on the M1
c/B = cycles per byte should be taken with a grain of salt, as we have trouble finding real clock rates, but it's clear the M1 has superior IPC (instructions per clock) to the Zen2. It seems to be about the same speed as the Zen2 even when running emulated x64!

It will be interesting to see what the M1 high performance variants can do.


Some more speeds cuz I like big numbers :

Mac Mini M1 ARM64 :

Mermaid, Normal, lzt99 :
24,700,820 ->11,189,930 =  3.624 bpb =  2.207 to 1
encode (x1)      : 299.154 millis, 37.21 c/B, rate= 82.57 MB/s
decode (x30)     : 6.438 millis, 0.80 c/B, rate= 3836.60 MB/s

Mermaid, Optimal2, lzt99 :
24,700,820 ->10,381,175 =  3.362 bpb =  2.379 to 1
encode (x1)      : 3.292 seconds, 409.43 c/B, rate= 7.50 MB/s
decode (x30)     : 7.134 millis, 0.89 c/B, rate= 3462.57 MB/s

Selkie, Normal, lzt99 :
24,700,820 ->13,258,742 =  4.294 bpb =  1.863 to 1
encode (x1)      : 213.197 millis, 26.51 c/B, rate= 115.86 MB/s
decode (x30)     : 3.126 millis, 0.39 c/B, rate= 7901.52 MB/s

Selkie, Optimal2, lzt99 :
24,700,820 ->12,712,659 =  4.117 bpb =  1.943 to 1
encode (x1)      : 1.861 seconds, 231.49 c/B, rate= 13.27 MB/s
decode (x30)     : 3.102 millis, 0.39 c/B, rate= 7962.55 MB/s
