2/22/2009

02-22-09 - Tech Image Series 3

Continuing in the theme of color extrapolators from last time...

The Weepy Tree :

The Wavelet Mirror :

These are both from the "Brando" wavelet coder I did for Eclipse (Genesis3d) for the "Hollywood" project that was supposed to stream high-quality video games over the net. (sadly, it was 10 years before its time, and we didn't have Half Life 2 to drive adoption)

When you're making image coders, the core of the algorithm is very simple, but then you get to the real world and there are lots of little issues to deal with that people rarely spend much time on.

The "Weepy Tree" was just a test image I was looking at to examine my heuristic for alpha-test (one bit alpha cutout images). A lot of people just leave the transparent pixels black which is of course horrible, you can see the ring of ugly show up in the bilinear filter. I wanted to do an extrapolate from the opaque pixels to fill in the blanks, and also to make the invisible areas very simple and continuous so that the wavelet would send no data for them. I don't remember the algorithm exactly, but I like the way it looks like the tree is melting ala Dali.

The "Wavelet Mirror" is a later test of Brando's handling of non-power-of-2 image sizes. When you have an image that's like 128x231 , you want to do a 5-level wavelet, that requires the image dimensions to be multiples of 32. You have to pad the vertical size up to 256, send the image as 128x256, then crop it back to 231 after decode. To fill in those extra pixels, you want to do something continuous. Continuous is good for two reasons - 1. it reduces the amount of wasted bits sent transmitting the out-of-bounds data, and 2. it reduces the amount of ringing that comes into the original image from the out-of-bounds data. For example, if you just extend the edge by extrapolating a solid color plateau, that creates a derivative discontinuity which ripples back into the image at the boundary. The best choices are to extrapolate or to mirror. Sometimes extrapolate is best, but it also has bad noise-amplification properties and can sometimes be very poor (and often gets clamped at 0 or 255). So I went with mirror.

BTW when you mirror you need to be careful about whether you repeat the edge pixel or not. Like do you do [0123][210] or [0123][3210] ? There's no clear right or wrong about repeating the edge pixel; it depends on the rest of your system. For transform coders, you want to choose whether to duplicate the edge based on whether your basis functions are even or odd; for example with a DCT you generally do want to duplicate the edge pixel, but with most wavelets you do not.
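
In index form, the two choices look something like this (single-bounce versions, names mine) :

    // Whole-sample mirror, edge NOT repeated : [0 1 2 3] 2 1 0 ...
    static int MirrorNoRepeat(int i, int n)
    {
        if (i < 0)  return -i;            // -1 -> 1, -2 -> 2
        if (i >= n) return 2*(n-1) - i;   // n -> n-2, n+1 -> n-3
        return i;
    }

    // Half-sample mirror, edge repeated : [0 1 2 3] 3 2 1 0 ...
    static int MirrorRepeat(int i, int n)
    {
        if (i < 0)  return -i - 1;        // -1 -> 0, -2 -> 1
        if (i >= n) return 2*n - 1 - i;   // n -> n-1, n+1 -> n-2
        return i;
    }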

The thing that's really neat here, though, is that in the out-of-bounds mirrored region I wanted to waste as little data as possible sending unneeded wavelet coefficients. To do that I simply zeroed out any wavelet coefficients that did not touch the original part of the image at all. Now the wavelets obviously have some support, usually 5-9 taps, and it doubles as you go to lower frequencies. The result is that near the edge of the original image you keep the high-frequency wavelets, and as you get farther from the original you keep only lower and lower frequency coefficients. So the reflection of the image seems to blur as you get farther from the original edge, which makes it look like a reflection in a pool of water or something.
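
The coefficient-killing step looks something like this sketch, where the support estimate is deliberately rough (the exact extent depends on the wavelet, so err on the side of keeping a coefficient) and the names are mine :

    // Kill coefficients whose (approximate) support lies entirely in the
    // mirrored padding. A coefficient k at pyramid level "level" of a filter
    // with "taps" taps covers very roughly (taps << level) pixels starting
    // near (k << level); the exact support depends on the wavelet, so keep
    // a coefficient when in doubt.
    static bool TouchesOriginal(int k, int level, int taps, int originalSize)
    {
        int lo = k << level;                // approx. support start, in pixels
        int hi = (k + taps) << level;       // approx. support end
        return lo < originalSize && hi > 0; // intersects [0, originalSize) ?
    }

    // After the forward transform, run over each 1D row/column of a subband :
    void ZeroOutOfBoundsCoeffs(float * coeffs, int numCoeffs,
                               int level, int taps, int originalSize)
    {
        for (int k = 0; k < numCoeffs; k++)
            if (!TouchesOriginal(k, level, taps, originalSize))
                coeffs[k] = 0.0f; // costs almost no bits and decodes smoothly
    }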

(BTW again, the point of this series is not to say "look at this awesome algorithm, you should do this"; a lot of these algorithms are old or broken in some way. This is a look back at some random cool screenshots I've collected over the years, just because the images looked neat to me when I was developing.)
