# cbloom rants

## 3/13/2014

### 03-13-14 - Hilbert Curve Testing

So I stumbled on this blog post about counting the rationals, which I found rather amusing.

The snippet that's relevant here is that if you iterate through the rationals naively by doing something like

```
1/1 2/1 3/1 4/1 ...
1/2 2/2 3/2 4/2 ...
1/3 2/3 ...
...
```
then you will never even reach 1/2 because the first line is infinitely long. But if you take a special kind of path, you can reach any particular rational in a finite number of steps. Much like the way a Hilbert curve lets you walk the 2d integer grid using only a 1d path.
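A minimal sketch of one such finite path (the anti-diagonal walk through the p/q grid; this particular construction is my illustration, not from the linked post):

```python
from math import gcd

def rationals():
    """Walk the positive rationals along anti-diagonals of the p/q grid.

    Each diagonal p + q = n is finite, so every rational p/q is reached
    after finitely many steps (skipping non-reduced duplicates like 2/2).
    """
    n = 2
    while True:
        for p in range(1, n):
            q = n - p
            if gcd(p, q) == 1:  # skip duplicates of already-seen values
                yield (p, q)
        n += 1

# walk order: 1/1, 1/2, 2/1, 1/3, 3/1, 1/4, 2/3, 3/2, 4/1, ...
```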

It reminded me of something that I've recently changed in my testing practices.

When running code we are always stepping along a 1d path, which you can think of as discrete time. That is, you run test 1, then test 2, etc. You want to be able to walk over some large multi-dimensional space of tests, but you can only do so by stepping along a 1d path through that space.

I've had a lot of problems testing Oodle, because there are lots of options on the compressors, lots of APIs, lots of compression levels and different algorithms - it's impossible to test all combinations, and particularly impossible to test all combinations on lots of different files. So I keep getting bitten by some weird corner case.

(Total diversion - actually this is a good example of why I'm opposed to the "I tested it" style of coding in general. You know when you stumble on some code that looks like total garbage spaghetti, and you ask the author "wtf is going on here, do you even understand it?" and they say "well, it works, I tested it". Umm, no, wrong answer. Maybe it passed the test, but who knows how it's going to be used down the road and fail in mysterious ways? Anyway, that's an old school cbloom coding practices rant that I don't bother with anymore ... and I haven't been great about following my own advice ...)

If you try to set up a big test by just iterating over each option :

```
for each file
  for each compressor
    for each compression_level
      for each chunking
        for each parallel branching
          for each dictionary size
            for each sliding window size
              ...
```
then you'll never even get to the second file.

The better approach is to get a broad sampling of options. An approach I like is to enumerate all the tests I want to run, using a loop like the above, put them all in a list, then randomly permute the list. Because it's just a permutation, I will still only run each test once, and will cover all the cases in the enumeration, but by permuting I get a broader sampling more quickly.
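The enumerate-then-permute idea can be sketched like this (the option axes and their values here are made up, standing in for the real compressor options):

```python
import itertools
import random

# Hypothetical option axes standing in for the real compressor options.
files       = ["tiny.txt", "medium.bin", "huge.dat"]
compressors = ["fast", "normal", "optimal"]
levels      = [1, 4, 9]
chunk_sizes = [16 << 10, 256 << 10]

# Enumerate every combination exactly once...
tests = list(itertools.product(files, compressors, levels, chunk_sizes))

# ...then permute, so the early tests are a broad sample of the whole
# space instead of grinding through every variant of the first file.
rng = random.Random(12345)  # fixed seed -> same order every run
rng.shuffle(tests)

for test in tests:
    pass  # run_one_test(*test)  # hypothetical test runner
```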

(You could also add the nice technique that we worked out here long ago: generate a consistent permutation using a pseudorandom generator with a known seed, and save your place in the list with a file or the registry or something. That way when you stop and start the tests, you resume where you left off and eventually cover more of the space. Also, when a test fails, you will automatically repeat the same test on restart.)
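The resume trick might look something like this (the state-file name and `run_one` callback are hypothetical; saving the index *before* running each test is what makes a crashed test repeat on restart):

```python
import json
import os
import random

STATE_FILE = "test_progress.json"  # hypothetical progress file

def run_all(tests, run_one):
    """Shuffle with a fixed seed and persist the current index, so a
    restarted run resumes where it left off."""
    random.Random(0xC0FFEE).shuffle(tests)  # same permutation every run
    start = 0
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            start = json.load(f)["next_index"]
    for i in range(start, len(tests)):
        # save our place *before* running, so a crash repeats this test
        with open(STATE_FILE, "w") as f:
            json.dump({"next_index": i}, f)
        run_one(tests[i])
    os.remove(STATE_FILE)  # clean finish: start fresh next time
```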

The other trick that's useful in practice is to front-load the quicker tests. You do want a test that runs on 8 GB files to make sure that works, but if that's your first test you have to wait forever to get a failure. This matters most when something dumb is broken: it should show up as quickly as possible so you can just cancel the test and fix it. So you want an initial broad sampling of options on very quick tests.
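One way to sketch the front-loading while keeping the random sampling: shuffle first, then stably sort into coarse cost buckets (the `cost` estimate, e.g. input file size, is assumed to be available per test):

```python
import random

def order_tests(tests, cost):
    """Permute randomly, then stably sort into coarse cost buckets so
    cheap tests run first but each bucket is still a broad random sample."""
    rng = random.Random(42)
    rng.shuffle(tests)
    # Coarse buckets (log2 of estimated cost); the stable sort preserves
    # the shuffled order within each bucket.
    return sorted(tests, key=lambda t: cost(t).bit_length())
```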

dfan said...

Haskell has a great library called QuickCheck where you give it descriptions of how to generate random data of any type you're testing (in your case, all seven variables), then it tests your functions and assertions with a zillion random cases. It's not exhaustive, of course, but it tends to find problems really quickly. I last used it when I had some tricky logic in a card game and wanted to test it out on lots of game states but obviously wasn't going to iterate through 52! possibilities. Turned out there was one corner case that showed up in a few seconds.
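The core of the QuickCheck idea can be hand-rolled in a few lines (a toy sketch of the approach, not the real library's API):

```python
import random

def quickcheck(prop, gen, trials=1000, seed=0):
    """Run prop() on many random inputs; return a counterexample or None."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = gen(rng)
        if not prop(x):
            return x  # counterexample found
    return None

# Example generator and property: reversing a list twice is the identity.
gen_list = lambda rng: [rng.randint(-9, 9) for _ in range(rng.randint(0, 8))]
assert quickcheck(lambda xs: list(reversed(list(reversed(xs)))) == xs,
                  gen_list) is None  # property holds, no counterexample
```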

MH said...

This is similar to fuzzing things. I wrote a genetic art program which generated random shader programs. The user could then vote on the ones they liked and the process continued.

But a bunch of the trees either returned errors, even though they were legal programs, or better yet, crashed my driver.

I was then working on a network system that had the messages nicely laid out. I realized I could do the same thing there.

Turns out I had inadvertently invented fuzzing. I didn't know it was a thing.
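A byte-level fuzzer of the kind described can be sketched like so (`fuzz_message` and its parameters are made up for illustration; the idea is just to corrupt a few random bits of a well-formed message and see what the receiver does):

```python
import random

def fuzz_message(msg: bytes, rng: random.Random, nflips: int = 4) -> bytes:
    """Flip a few random bits of a well-formed message (toy byte fuzzer)."""
    out = bytearray(msg)
    for _ in range(nflips):
        i = rng.randrange(len(out))
        out[i] ^= 1 << rng.randrange(8)  # flip one bit of one byte
    return bytes(out)
```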

MH said...

Oh, on the Hilbert curve thing. I want to build a space jump system based on it.

So, using the Nth approximation as what hyperspace band we live in (something like 12 or 20 or such), you can jump up a band by being in the right place and engaging your Hilbert drive. Hyperspace only allows movement in 1D. You travel how far you want, then jump out.

A variation would be that you can only jump up at ends or corners. This would make certain sectors of space very valuable. Even without it, certain lines would be valuable (but would require a tremendous amount of energy; each jump up the approximations requires N^(>1) energy).