08-31-14 - DLI Image Compression

I got pinged about DLI so I had a look.

DLI is a closed source image compressor. There's no public information about it. It may be the best lossy image compressor in the world at the moment.

(ASIDE : H265 looks promising but has not yet been developed as a still image compressor; it will need a lot of perceptual work; also, as in my previous test of x264, you have to be careful to avoid the terrible YUV conversions and subsamplers that are in the common tools)

I have no idea what the algorithms are in DLI. Based on looking at some of the decompressed images, I can see block transform artifacts, so it has something like an 8x8 DCT in it. I also see certain clues that make me think it uses something like intra prediction. (those clues are good detail and edge preservation, and a tendency to preserve detail even if it's the wrong detail; the same thing that you see in H264 stills)

Anyway, I thought I'd run my own comparo on DLI to see if it really is as good as the author claims.

I tested against JPEG + packJPG + my JPEG decoder. I'm using an unfinished version of my JPEG decoder which uses the "Nosratinia modified" reconstruction method. It could be a lot better. Note that this is still a super braindead simple JPEG. No per-block quantizer. No intra prediction. Only 8x8 transforms. Standard 420 YCbCr. No trellis quantization or rate-distortion. Just a modern entropy coding back end and a modern deblocking decoder.

I test with my perceptual image tester imdiff. The best metric is Combo, which is a linear combination of SCIELAB_MyDelta + MS_SSIM_IW_Y + MyDctDelta_YUV.

You can see some previous tests on mysoup or moses or PDI.

NOTE : "dlir.exe" is the super-slow optimizing variant. "dli.exe" is reasonably fast. I tested both. I ran dlir with -ov (optimize for visual quality) since my tests are mostly perceptual. I don't notice a huge difference between them.

My impressions :

DLI and jpeg+packjpg+jpegdec are both very good. Both are miles ahead of what is commonly used these days (old JPEG for example).

DLI preserves detail and contrast much better. JPEG tends to smooth and blur things at lower bit rates. Part of this may be something like a SATD heuristic metric + better bit allocation.
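For reference, an SATD-style metric is just the sum of absolute values of a transformed residual. Here is a minimal sketch of the standard idea as video coders like x264 use it for bit allocation; DLI's actual internals are unknown, so this is purely illustrative:

```cpp
#include <cstdlib>
#include <cassert>

// Sum of Absolute Transformed Differences (SATD) on a 4x4 block:
// transform the residual (orig - recon) with a Hadamard transform and
// sum the absolute coefficients. A cheap frequency-domain error
// heuristic; purely illustrative, not DLI's actual metric.
static int satd4x4(const int orig[16], const int recon[16])
{
    int d[16];
    for (int i = 0; i < 16; i++)
        d[i] = orig[i] - recon[i];

    // 4-point Hadamard on each row
    for (int r = 0; r < 4; r++) {
        int * p = d + r * 4;
        int a = p[0] + p[1], b = p[0] - p[1];
        int c = p[2] + p[3], e = p[2] - p[3];
        p[0] = a + c; p[1] = b + e; p[2] = a - c; p[3] = b - e;
    }
    // 4-point Hadamard on each column
    for (int col = 0; col < 4; col++) {
        int * p = d + col;
        int a = p[0] + p[4], b = p[0] - p[4];
        int c = p[8] + p[12], e = p[8] - p[12];
        p[0] = a + c; p[4] = b + e; p[8] = a - c; p[12] = b - e;
    }

    int sum = 0;
    for (int i = 0; i < 16; i++)
        sum += abs(d[i]);
    return sum;
}
```

The Hadamard is a rough stand-in for the DCT at much lower cost, so SATD penalizes loss of high-frequency energy in a way plain SAD doesn't - which is why an encoder using it tends to spend bits keeping detail energy.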

DLI does "mangle" the image. That is, it gets the detail *wrong* sometimes, which is something that JPEG really never does. The primary shapes are preserved by jpeg+packjpg+jpegdec, they just lose detail. With DLI, you sometimes get weird lumps appearing that weren't there before. If you just look at the decompressed image it can be hard to spot, because it looks like there's good detail there, but if you A-B test the decompressed image against the original, you'll see that DLI is actually changing the detail. I saw this before when analyzing x264.

DLI is similar looking to x264-still but better.

DLI seems to have a special mode for gradients. It preserves smooth gradients very well. JPEG-unblock creates a stepped look because it's a series of ramps that are flat in the middle.

DLI seems to make edges a bit chunky. Smooth curves get steppy. jpeg+packjpg+jpegdec is very good at preserving a smooth curved edge.

DLI is the only image coder I've seen that I would say is definitely slightly better than jpeg+packjpg+jpegdec. Though it is worse in some ways, I think the overall impression of the decoded image is definitely better. Much better contrast preservation, much better detail energy level preservation.

Despite JPEG often scoring better than DLI on the visual quality metrics I have, DLI usually looks much better to my eyes. This is a failure of the visual quality metrics.


Okay. Time for some charts.

In all cases I will show the "TID Fit" score. This is a 0-10 quality rating, higher is better. This removes the issue of SSIM, RMSE, etc. all being on different scales.
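My sketch of the spirit of a fit like this (not imdiff's actual code): calibrate each raw metric against subjective quality scores with a least-squares fit, then clamp the mapped value to the common 0-10 scale. The real fit is presumably nonlinear per metric; linear is just the simplest version of the idea:

```cpp
#include <vector>
#include <algorithm>
#include <cassert>

// Hypothetical "TID fit": least-squares line mapping a raw metric score
// to subjective scores (e.g. TID2008 MOS), clamped to [0,10]. All names
// here are mine, for illustration only.
struct LinearFit { double a, b; };

static LinearFit fit_metric(const std::vector<double> & raw,
                            const std::vector<double> & subjective)
{
    int n = (int)raw.size();
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int i = 0; i < n; i++) {
        sx += raw[i]; sy += subjective[i];
        sxx += raw[i] * raw[i]; sxy += raw[i] * subjective[i];
    }
    double a = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    double b = (sy - a * sx) / n;
    return { a, b };
}

static double tid_score(const LinearFit & f, double raw)
{
    // map onto the common 0-10 scale
    return std::max(0.0, std::min(10.0, f.a * raw + f.b));
}
```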

NOTE : I am showing RMSE just for information. It tells you something about how the coders are working and why they look different, where the error is coming from. In both cases (DLI and JPEG) the runs are optimized for *visual* quality, not for RMSE, so this is not a comparison of how well they can do on an RMSE contest. (dlir should be run with -or and jpeg should be run with flat quantization matrices at least).

(see previous tests on mysoup or moses or PDI)

mysoup :

moses :

porsche640 :

pdi1200 :


Qualitative Comparison :

I looked at JPEG and DLI encodings at the same bit rate for each image. Generally I try to look around 1 bpp (that's logbpp of 0) which is the "sweet spot" for lossy image compression comparison.

Here are the original, a JPEG, and a DLI of Porsche640.
Download : RAR of Porsche640 comparison images (1 MB)

What I see :

DLI has very obvious DCT ringing artifacts. Look at the lower-right edge of the hood, for example. The sharp line of the hood has ringing ghosts in 8x8 chunks.

DLI preserves contrast overall much better. The most obvious places are in the background - the leaves, the pebbles. JPEG just blurs those and drops a lot of high frequency detail, DLI keeps it much better. DLI preserves a lot more high frequency data.

DLI adds a lot of noise. JPEG basically never adds noise. For example compare the centers of the wheels. The JPEG just looks like a slightly smoothed version of the original. The DLI has got lots of chunkiness and extra variation that isn't in the original.

In a few places DLI really mangles the image. One is the A-pillar of the car, another is the shadow on the hood, also the rear wheel.

Both DLI and JPEG do the same awful thing to the chroma. All the orange in the gravel is completely lost. The entire color of the laurel bush in the background is changed. Both just produce a desaturated image.

Based on the scores and what I see perceptually, my guess is this : DLI uses an 8x8 DCT. It uses a quantization matrix that is much flatter than JPEG's.
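To illustrate the flat-matrix guess with hypothetical numbers (these are not DLI's or JPEG's actual tables):

```cpp
#include <cassert>

// Uniform quantize + dequantize of one DCT coefficient with quantizer q.
// With a steep JPEG-style matrix a high-frequency quantizer might be 60+,
// so a modest HF coefficient rounds to zero (blur). With a flat matrix
// (say q = 16 everywhere) the same coefficient survives: detail energy
// is kept, at the cost of more visible ringing -- consistent with what
// DLI's output looks like. The numbers are hypothetical.
static int quant_dequant(int coeff, int q)
{
    int level = (coeff >= 0) ? (coeff + q / 2) / q
                             : -((-coeff + q / 2) / q);
    return level * q;
}
```

e.g. an HF coefficient of 20 dies under q=60 but survives (as 16) under q=16 - exactly the difference between JPEG's blurring and DLI's detail-with-ringing.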

08-27-14 - LZ Match Length Redundancy

A quick note on something that does not work.

I've written before about the redundancy in LZ77 codes (for example). In particular, the issue I had a look at was :

Any time you code a match, you know that it must be longer than any possible match at lower offsets.

eg. you won't send a match of length 3 to offset 30514 if you could have sent offset 1073 instead. You always choose the lowest possible offset that gives you a given match length.

The easy way to exploit this is to send the match length as the delta from the best match length available at any lower offset. You only need to send the excess, and you know the excess is greater than zero. So if you have an ML of 3 at offset 1073, and you find a match of length 4 at offset 30514, then you send {30514,+1}.

To implement this in the encoder is straightforward. If you walk your matches in order from lowest offset to highest offset, then you know the current best match length as you go. You only consider a match if it exceeds the previous best, and you record the delta in lengths that you will send.
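That walk can be sketched like so (the Match struct and the minimum match length of 3 are my assumptions; the entropy coding of the excess is elided):

```cpp
#include <vector>
#include <cassert>

struct Match { int offset; int length; };      // hypothetical candidate match
struct DeltaMatch { int offset; int excess; }; // what would actually get coded

// Walk candidate matches from lowest offset to highest, keeping only
// matches that beat the best length seen so far, and record the length
// excess over that best. The excess is always >= 1, so it could be
// coded as (excess - 1).
static std::vector<DeltaMatch> delta_lengths(const std::vector<Match> & matches)
{
    std::vector<DeltaMatch> out;
    int best_len = 2; // assumes minimum match length 3
    for (const Match & m : matches) {
        if (m.length > best_len) {
            out.push_back({ m.offset, m.length - best_len });
            best_len = m.length;
        }
    }
    return out;
}
```

With the example above - {1073, len 3} then {30514, len 4} - the second match comes out as {30514, +1}.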

The same principle applies to the "last offsets" ; you don't send LO2 if you could send LO0 at the same length, so the higher index LO matches must be of greater length. And the same thing applies to ROLZ.

I tried this in all 3 cases (normal LZ matches, LO matches, ROLZ). No win. Not even a tiny one; the gain was essentially zero.

Part of the problem is that match lengths are just not where the bits are; they're small already. But I assume that part of what's happening is that match lengths have patterns that the delta-ing ruins. For example, binary files will have patterns of 4- or 8-long matches, or in an LZMA-like coder you'll have certain patterns show up, like a 3-long match at certain pos&3 alignments after a literal, etc.

I tried some obvious ideas like using the next-lowest length as part of the context for coding the delta-length. In theory you might be able to recapture patterns, e.g. a next-lowest length of 3 predicting a delta of +1 in places where an ML of 4 is likely. But I couldn't find a win there.
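For concreteness, the kind of context model I mean, sketched with a simple adaptive frequency table (all parameters hypothetical):

```cpp
#include <cmath>
#include <cassert>

// Adaptive per-context frequency model for the delta-length, where the
// context is the (capped) next-lowest match length. cost_bits is just
// -log2(p) under the current counts; a real coder would feed these
// counts to an arithmetic coder. All parameters are hypothetical.
struct DeltaLenModel {
    static const int NUM_CTX = 8;  // context = min(next_lowest_len, 7)
    static const int NUM_SYM = 16; // delta clamped to [1,16]
    int counts[NUM_CTX][NUM_SYM];
    int totals[NUM_CTX];

    DeltaLenModel() {
        for (int c = 0; c < NUM_CTX; c++) {
            totals[c] = NUM_SYM;
            for (int s = 0; s < NUM_SYM; s++) counts[c][s] = 1; // +1 smoothing
        }
    }
    static int ctx(int next_lowest_len) {
        return next_lowest_len < NUM_CTX - 1 ? next_lowest_len : NUM_CTX - 1;
    }
    void update(int next_lowest_len, int delta) {
        int c = ctx(next_lowest_len);
        counts[c][delta - 1] += 16;
        totals[c] += 16;
    }
    double cost_bits(int next_lowest_len, int delta) const {
        int c = ctx(next_lowest_len);
        return -std::log2((double)counts[c][delta - 1] / totals[c]);
    }
};
```

After seeing {next-lowest 3, delta +1} a few times, the model codes that combination in well under a bit - which is the hoped-for win that didn't materialize in practice.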

I believe this is a dead end. Even if you could find a small win, it's too slow in the decoder to be worth it.