05-31-11 - STB style code

I wrote a couple of LZP1 implementations (see previous) in "STB style" , that is, plain C, ANSI, single headers you can just include and use. It's sort of wonderfully simple and easy to use. Certainly I understand the benefit - if I'm grabbing somebody else's code to put in my project, I want it to be STB style, I don't want some huge damn library.

(for example I actually use the James Howse "lsqr.c" which is one file, I also use "divsufsort.c" which is a delightful single file, those are beautiful little pieces of code that do something difficult very well, but I would never use some beast like the GNU Triangulated Surface lib, or OpenCV or any of those big bloated libs)

But I just struggle to write code that way. Like even with something as simple as the LZP's , okay fine you write an ANSI version and it works. But it's not fast and it's not very friendly.

I want to add prefetching. Well, I have a module "mem.h" that does platform-independent prefetching, so I want to include that. I also want fast memsets and memcpys that I already wrote, so do I just copy all that code in? Yuck.

Then I want to support streaming in and out. Well I already have "CircularBuffer.h" that does that for me. Sure I could just rewrite that code again from scratch, but this is going backwards in programming style and efficiency, I'm duplicating and rewriting code and that makes unsafe buggy code.

And of course I want my assert. And if I'm going to actually make an EXE that's fast I want my async IO.

I just don't see how you can write good code this way. I can't do it; it totally goes against my style, and I find it very difficult and painful. I wish I could, it would make the code that I give away much more useful to the world.

At RAD we're trying to write code in a sort of hierarchy of levels. Something like :

very low level : includes absolutely nothing (not even stdlib)
low level : includes only low level (or lower) (can use stdlib)
              low level stuff should run on all platforms
medium level : includes only medium level (or lower)
               may run only on newer platforms
high level : do whatever you want (may be PC only)

This makes a lot of sense and serves us well, but I just have so much trouble with it.

Like, where do I put my assert? I like my assert to do some nice things for me, like log to file, check if a debugger is present and int 3 only if it is (otherwise do an interactive dialog). So that's got to be at least "medium level" - so now I'm writing some low level code and I can't use my assert!
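For concreteness, here's a minimal sketch of an assert along those lines (the RR_ASSERT name and the exact behavior are illustrative guesses, not the actual RAD code) :

```c
#include <stdio.h>
#if defined(_WIN32)
#include <windows.h>
#endif

static int g_assert_count = 0;  /* counter for the self-test; a real version would log to file */

static void rr_assert_fail(const char *expr, const char *file, int line)
{
    /* log first, so the message survives even if we stop in the debugger */
    fprintf(stderr, "ASSERT FAILED: %s (%s:%d)\n", expr, file, line);
    g_assert_count++;
#if defined(_WIN32)
    /* break into the debugger only if one is attached; otherwise a real
       implementation might put up an interactive dialog here instead */
    if (IsDebuggerPresent())
        DebugBreak();
#endif
}

#define RR_ASSERT(e) ((e) ? (void)0 : rr_assert_fail(#e, __FILE__, __LINE__))

static int rr_assert_selftest(void)
{
    int before = g_assert_count;
    RR_ASSERT(1 + 1 == 2);          /* passes : no side effect */
    RR_ASSERT(1 + 1 == 3);          /* fails : logs and counts */
    return (g_assert_count == before + 1);
}
```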

Today I'm trying to make a low level logging facility that I can call from threads and it will stick the string into a lock-free queue to be flushed later. Well, I've already got a bunch of nice lock-free queues and stuff ready to go, that are safe and assert and have unit tests - but those live in my medium level lib, so I can't use them in the low level code that wants to log.

What happens to me is I wind up promoting all my code to the lowest level so that it can be accessible to the place that I want it.

I've always sort of struggled with separated libs in general. I know it's a nice idea in theory to build your game out of a few independent (or hierarchical) libs, but in practice I've always found that it creates more friction than it helps. I find it much easier to just throw all my code in a big bag and let each bit of code call any other bit of code.


05-20-11 - LZP1 Variants

LZP = String match compression using some predictive context to reduce the set of strings to match

LZP1 = variant of LZP without any entropy coding

I've just done a bunch of LZP1 variants and I want to quickly describe them for my reference. In general LZP works thusly :

Make some context from previous bytes
Use context to look in a table to see a set of previously seen pointers in that context
  (often only one, but maybe more)

Encode a flag for whether any match, which one, and the length
If no match, send a literal

Typically the context is made by hashing some previous bytes, usually with some kind of shift-xor hash. As always, larger hashes generally mean more compression at the cost of more memory. I usually use a 15 bit hash, which means 64k memory use if the table stores 16 bit offsets rather than pointers.
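To make that concrete, here's a minimal sketch of the context hash and table lookup (the 3-byte context and the exact shifts are illustrative choices, not a spec) :

```c
#include <stdint.h>

#define LZP_HASH_BITS 15
#define LZP_HASH_SIZE (1 << LZP_HASH_BITS)

/* shift-xor hash of the previous 3 bytes; p points at the byte being coded */
static uint32_t lzp_context_hash(const uint8_t *p)
{
    uint32_t ctx = (uint32_t)p[-1] | ((uint32_t)p[-2] << 8) | ((uint32_t)p[-3] << 16);
    return ((ctx >> 11) ^ ctx) & (LZP_HASH_SIZE - 1);
}

/* table of 16-bit offsets from buffer start : 64k of memory for a 15-bit hash */
typedef struct { uint16_t pos[LZP_HASH_SIZE]; } LzpTable;

/* return the predicted match pointer for the current position, then update the
   table to point here (update only at decision points, not inside matches) */
static const uint8_t *lzp_lookup_update(LzpTable *t, const uint8_t *buf, const uint8_t *p)
{
    uint32_t h = lzp_context_hash(p);
    const uint8_t *match = buf + t->pos[h];
    t->pos[h] = (uint16_t)(p - buf);
    return match;
}

static int lzp_selftest(void)
{
    static LzpTable t;  /* zero-init : every slot initially predicts buffer start */
    const uint8_t buf[] = "abcXabcY";
    /* positions 3 and 7 both follow the context "abc", so they hash alike :
       the second lookup should return the pointer stored by the first */
    lzp_lookup_update(&t, buf, buf + 3);
    return lzp_lookup_update(&t, buf, buf + 7) == buf + 3;
}
```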

Because there's no entropy coding in LZP1, literals are always sent in 8 bits.

Generally in LZP the hash table of strings is only updated at literal/match decision points - not for all bytes inside the match. This helps speed and doesn't hurt compression much at all.

Most LZP variants benefit slightly from "lazy parsing" (that is, when you find a match in the encoder, see if it's better to instead send a literal and take the match at the next byte) , but this hurts encoder speed.

LZP1a : Match/Literal flag is 1 bit (eight of them are sent in a byte). Single match option only. 4 bit match length, if match length is >= 16 then send full bytes for additional match length. This is the variant of LZP1 that I did for Clariion/Data General for the Pentium Pro.

LZP1b : Match/Literal is encoded as 0 = LL, 10 = LM, 11 = M (this is the ideal encoding if literals are twice as likely as matches) ; match length is encoded as 2 bits, then if it's >= 4 , 3 more bits, then 5 more bits, then 8 bits (and after that 8 more bits as needed). This variant of LZP1 was the one published back in 1995.
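For reference, here's one plausible reading of that length code (the convention that an all-ones chunk means "subtract and keep going" is my guess at the details) :

```c
#include <stdint.h>

/* toy bit stream, MSB-first; just enough for a round-trip test */
typedef struct { uint8_t buf[64]; int bitpos; } BitIO;

static void put_bits(BitIO *b, uint32_t val, int n)
{
    for (int i = n - 1; i >= 0; i--, b->bitpos++)
        if ((val >> i) & 1)
            b->buf[b->bitpos >> 3] |= (uint8_t)(0x80 >> (b->bitpos & 7));
}
static uint32_t get_bits(BitIO *b, int n)
{
    uint32_t v = 0;
    for (int i = 0; i < n; i++, b->bitpos++)
        v = (v << 1) | ((b->buf[b->bitpos >> 3] >> (7 - (b->bitpos & 7))) & 1);
    return v;
}

/* LZP1b-style escaped length code : 2 bits, then 3, then 5, then 8-bit chunks;
   an all-ones chunk is the escape meaning "more length follows" */
static void put_length(BitIO *b, uint32_t len)
{
    static const int chunk[4] = { 2, 3, 5, 8 };
    for (int c = 0; ; c = c < 3 ? c + 1 : 3) {
        uint32_t cap = (1u << chunk[c]) - 1;   /* all-ones value of this chunk */
        if (len < cap) { put_bits(b, len, chunk[c]); return; }
        put_bits(b, cap, chunk[c]);
        len -= cap;
    }
}
static uint32_t get_length(BitIO *b)
{
    static const int chunk[4] = { 2, 3, 5, 8 };
    uint32_t len = 0;
    for (int c = 0; ; c = c < 3 ? c + 1 : 3) {
        uint32_t cap = (1u << chunk[c]) - 1;
        uint32_t v = get_bits(b, chunk[c]);
        len += v;
        if (v < cap) return len;
    }
}

static int lzp1b_selftest(void)
{
    for (uint32_t len = 0; len < 300; len++) {
        BitIO b = { {0}, 0 };
        put_length(&b, len);
        b.bitpos = 0;
        if (get_length(&b) != len) return 0;
    }
    return 1;
}
```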

LZP1c : Hash table index is made from 10 bits of backwards hash and 5 bits of forward hash (on the byte to be compressed). Match/Literal is a single bit. If a match is made, a full byte is sent, containing the 5 bits of forward hash and 3 bits of length (4 bits of forward hash and 4 bits of length is another option, but is generally slightly worse). As usual if match length exceeds 3 bits, another 8 bits is sent. (this is a bit like LZRW3, except that we use some backward context to reduce the size of the forward hash that needs to be sent).

LZP1d : string table contains 2 pointers per hash (basically a hash with two "ways"). Encoder selects the best match of the two and sends a 4 bit match nibble consisting of 1 selection bit and 3 bits of length. Match flag is one bit. Hash way is the bottom bit of the position, except that when a match is made the matched-from pointer is not replaced. More hash "ways" provide more compression at the cost of more memory use and more encoder time (most LZP's are symmetric, encoder and decoder time is the same, but this one has a slower encoder) (nowadays this is called ROLZ).

LZP1e : literal/match is sent as run len; 4 bit nibble is divided as 0-4 = literal run length, 5-15 = match length. (literal run length can be zero, but match length is always >= 1, so if match length >= 11 additional bytes are sent). This variant benefits a lot from "Literal after match" - after a match a literal is always written without flagging it.

LZP1f is the same as LZP1c.

LZP1g : like LZP1a except maximum match length is 1, so you only flag literal/match, you don't send a length. This is "Predictor" or "Finnish" from the ancient days. Hash table stores chars instead of pointers or offsets.

Obviously there are a lot of ways that these could all be modified to get more compression (*), but it's rather pointless to go down that path because then you should just use entropy coding.

(* a few ways : combine the forward hash of lzp1c with the "ways" of lzp1d ; if the first hash fails to match escape down to a lower order hash (such as maybe just order-1 plus 2 bits of position) before outputting a literal ; output literals in 7 bits instead of 8 by using something like an MTF code ; write match lengths and flags with a tuned variable-bit code like lzp1b's ; etc. )

Side note : while writing this I stumbled on LZ4 . LZ4 is almost exactly "LZRW1". It uses a hash table (hashing the bytes to match, not the previous bytes like LZP does) to find matches, then sends the offset (it's a normal LZ77, not an LZP). It encodes as 4 bit literal run lens and 4 bit match lengths.

There is some weird/complex stuff in the LZ4 literal run len code which is designed to prevent it from getting super slow on random data - basically if it is sending tons of literals (more than 128) it starts stepping by multiple bytes in the encoder rather than stepping one byte at a time. If you never/rarely compress random data then it's probably better to remove all that because it does add a lot of complexity.

REVISED : Yann has clarified LZ4 is BSD so you can use it. Also, the code is PC only because he makes heavy use of unaligned dword access. It's a nice little simple coder, and the speed/compression tradeoff is good. It only works well on reasonably large data chunks though (at least 64k). If you don't care so much about encode time then something that spends more time on finding good matches would be a better choice. (like LZ4-HC, but it seems the LZ4-HC code is not in the free distribution).

He has a clever way of handling the decoder string copy issue where you can have overlap when the offset is less than the length :

    U32     dec[4]={0, 3, 2, 3};

    // copy repeated sequence
    cpy = op + length;
    if (op-ref < 4)
    {
        *op++ = *ref++;
        *op++ = *ref++;
        *op++ = *ref++;
        *op++ = *ref++;
        ref -= dec[op-ref];
    }
    while(op < cpy) { *(U32*)op=*(U32*)ref; op+=4; ref+=4; }
    op=cpy;     // correction

This is something I realized as well when doing my LZH decoder optimization for SPU : basically a string copy with length > offset is really a repeating pattern, repeating with period "offset". So offset=1 is AAAA, offset=2 is ABAB, offset=3 is ABCABC. What that means is once you have copied the pattern a few times the slow way (one byte at a time), then you can step back your source pointer by any multiple of the offset that you want. Your goal is to step it back enough so that the separation between dest and source is bigger than your copy quantum size. Though I should note that there are several faster ways to handle this issue (the key points are these : 1. you're already eating a branch to identify the overlap case, you may as well have custom code for it, and 2. the single repeating char situation (AAAA) is by far more likely than any other).
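Here's a sketch of that idea in isolation. Instead of stepping the source pointer back, this version keeps it parked at the start of the pattern, which is equivalent (the separation doubles each pass), and it special-cases the common AAAA run :

```c
#include <string.h>

/* Overlapped match copy : src = dst - offset, with offset possibly smaller
   than the copy quantum. The region is a repeating pattern with period
   "offset", so each copy of the valid span doubles how much valid pattern we
   have; once the span covers "len" a single memcpy finishes, overlap-free. */
static void copy_overlapped_match(unsigned char *dst, size_t offset, size_t len)
{
    const unsigned char *src = dst - offset;
    if (offset == 1) {              /* the AAAA case : by far the most common */
        memset(dst, *src, len);
        return;
    }
    size_t span = offset;           /* bytes of valid pattern starting at src */
    while (len > span) {
        memcpy(dst, src, span);     /* no overlap : dst - src == span */
        dst += span;
        len -= span;
        span *= 2;                  /* the pattern also repeats with this period */
    }
    memcpy(dst, src, len);          /* len <= span : still no overlap */
}

static int copy_selftest(void)
{
    unsigned char a[64] = "abc", b[64] = "abc";
    copy_overlapped_match(a + 3, 3, 40);   /* expand "abcabcabc..." */
    for (int i = 3; i < 43; i++)           /* naive byte-at-a-time reference */
        b[i] = b[i - 3];
    return memcmp(a, b, 43) == 0;
}
```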

ADDENDUM : I just found the LZ4 guy's blog (Yann Collet, who also did the fast "LZP2"), there's some good stuff on there. One I like is his compressor ranking . He does the right thing ( I wrote about here ) which is to measure the total time to encode,transmit,decode, over a limited channel. Then you look at various channel speeds and you can see in what domain a compressor might be best. But he does it with nice graphs which is totally the win.


05-13-11 - Avoiding Thread Switches

A very common threading model is to have a thread for each type of task. eg. maybe you have a Physics Thread, Ray Cast thread, AI decision thread, Render Thread, an IO thread, Prefetcher thread, etc. Each one services requests to do a specific type of task. This is good for instruction cache (if the threads get big batches of things to work on).

While this is conceptually simple (and can be easier to code if you use TLS, but that is an illusion, it's not actually simpler than fully reentrant code in the long term), if the tasks have dependencies on each other, it can create very complex flow with lots of thread switches. eg. thread A does something, thread B waits on that task, when it finishes thread B wakes up and does something, then thread A and C can go, etc. Lots of switching.

"Worklets" or mini work items which have dependencies and a work function pointer can make this a lot better. Basically rather than thread-switching away to do the work that depended on you, you do it immediately on your thread.

I started thinking about this situation :

A very simple IO task goes something like this :

Prefetcher thread :

  issue open file A

IO thread :

  execute open file A

Prefetcher thread :

  get size of file A
  malloc buffer of size
  issue read file A into buffer
  issue close file A

IO thread :

  do read on file A
  do close file A

Prefetcher thread :

  register file A to prefetched list

lots of thread switching back and forth as they finish tasks that the next one is waiting on.

The obvious/hacky solution is to create larger IO thread work items, eg. instead of just having "open" and "read" you could make a single operation that does "open, malloc, read, close" to avoid so much thread switching.

But that's really just a band-aid for a general problem. And if you keep doing that you wind up turning all your different systems into "IO thread work items". (eg. you wind up creating a single work item that's "open, read, decompress, parse animation tree, instantiate character"). Yay you've reduced the thread switching by ruining task granularity.

The real solution is to be able to run any type of item on the thread and to immediately execute them. Instead of putting your thread to sleep and waking up another one that can now do work, you just grab his work and do it. So you might have something like :

Prefetcher thread :

  queue work items to prefetch file A
  work items depend on IO so I can't do anything and go to sleep

IO thread :

  execute open file A

  [check for pending prefetcher work items]
  do work item :

  get size of file A
  malloc buffer of size
  issue read file A into buffer
  issue close file A

  do IO thread work :

  do read on file A
  do close file A

  [check for pending prefetcher work items]
  do work item :

  register file A to prefetched list

so we stay on the IO thread and just pop off prefetcher work items that depended on us and were waiting for us to be able to run.
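In code, a worklet can be as simple as a function pointer plus a dependency count. Here's a single-threaded sketch of the idea (a real version would decrement with atomics and pop from a lock-free ready queue; all names here are mine) :

```c
#include <string.h>

/* A "worklet" : a small work item with a function pointer and dependencies.
   When a worklet finishes, it decrements its dependents' pending counts and
   runs any that become ready *immediately on the current thread*, instead of
   waking the thread that queued them. */
typedef struct Worklet {
    void (*fn)(void *);
    void *arg;
    int pending;                    /* count of unfinished dependencies */
    struct Worklet *dependents[4];
    int num_dependents;
} Worklet;

static void worklet_complete(Worklet *w);

static void worklet_run(Worklet *w)
{
    w->fn(w->arg);
    worklet_complete(w);
}

static void worklet_complete(Worklet *w)
{
    for (int i = 0; i < w->num_dependents; i++) {
        Worklet *d = w->dependents[i];
        if (--d->pending == 0)
            worklet_run(d);         /* run inline : no thread switch */
    }
}

static void worklet_depends(Worklet *after, Worklet *before)
{
    before->dependents[before->num_dependents++] = after;
    after->pending++;
}

/* --- demo : an open -> read -> close chain like the IO example above --- */
static char g_log[64];
static void log_step(void *s) { strcat(g_log, (const char *)s); }

static int worklet_selftest(void)
{
    Worklet open_f  = { log_step, "open;",  0, {0}, 0 };
    Worklet read_f  = { log_step, "read;",  0, {0}, 0 };
    Worklet close_f = { log_step, "close;", 0, {0}, 0 };
    worklet_depends(&read_f, &open_f);
    worklet_depends(&close_f, &read_f);
    g_log[0] = 0;
    worklet_run(&open_f);           /* the whole chain runs on this thread */
    return strcmp(g_log, "open;read;close;") == 0;
}
```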

More generally if you want to be super optimal there are complicated issues to consider :

i-cache thrashing vs. d-cache thrashing :

If we imagine the simple conceptual model that we have a data packet (or packets) and we want to do various types of work on it, you could prefer to follow one data packet through its chain of work, doing different types of work (thrashing i-cache) but working on the same data item, or you could try to do lots of the same type of work (good for i-cache) on lots of different data items.

Certainly in some cases (SPU and GPU) it is much better to keep i-cache coherent, do lots of the same type of work. But this brings up another issue :

Throughput vs. latency :

You can generally optimize for throughput (getting lots of items through with a minimum average time), or latency (minimizing the time for any one item to get from "issued" to "done"). To minimize latency you would prefer the "data coherent" model - that is, for a given data item, do all the tasks on it. For maximum throughput you generally prefer "task coherent" - that is, do all the data items for each type of task, then move on to the next task. This can however create huge latency before a single item gets out.


Let me say this in another way.

Say thread A is doing some task and when it finishes it will fire some Event (in Windows parlance). You want to do something when that Event fires.

One way to do this is to put your thread to sleep waiting on that Event. Then when the event fires, the kernel will check a list of threads waiting on that event and run them.

But sometimes what you would rather do is to enqueue a function pointer onto that Event. Then you'd like the Kernel to check for any functions to run when the Event is fired and run them immediately on the context of the firing thread.

I don't know of a way to do this in general on normal OS's.

Almost every OS, however, recognizes the value of this type of model, and provides it for the special case of IO, with some kind of IO completion callback mechanism. (for example, Windows has APC's, but you cannot control when an APC will be run, except for the special case of running on IO completion; QueueUserAPC will cause them to just be run as soon as possible).

However, I've always found that writing IO code using IO completion callbacks is a huge pain in the ass, and is very unpopular for that reason.


05-08-11 - Torque vs Horsepower

This sometimes confuses me, and certainly confuses a lot of other people, so let's go through it a bit.

I'm also motivated by this page : Torque and Horsepower - A Primer in which Bruce says some things that are slightly imprecise in a scientific sense but are in fact correct. Then this A-hole Thomas Barber responds with a dickish pedantic correction which adds nothing to our understanding.

We're going to talk about car engines, the goal is to develop sort of an intuition of what the numbers mean. If you look on Wikipedia or whatever there will be some frequently copy-pasted story about James Watt and horses pulling things and it's all totally irrelevant. We're not using our car engine to power a generator or grind corn or whatever. We want acceleration.

The horizontal acceleration of the car is proportional to the angular acceleration of the wheels (by the circumference of the wheels). The angular acceleration of the wheels is proportional to the angular acceleration of the flywheel, modulo the gear ratio in the transmission. The angular acceleration of the flywheel is proportional to the torque of the engine, modulo moment of inertia.

For a fixed gear ratio :

torque (at the engine) ~= vehicle acceleration

(where ~= means proportional)

So if we all had no transmission, then all we would care about is torque and horsepower could go stuff itself.

But we do have transmissions, so how does that come into play?

To maximize vehicle acceleration you want to maximize torque at the wheels, which means you want to maximize

vehicle acceleration ~= torque (at the engine) * gear ratio

where gear ratio is higher in lower gears, that is, gear ratio is the number of times the engine turns for one turn of the wheels :

gear ratio = (engine rpm) / (wheel rpm)

which means we can write :

vehicle acceleration ~= torque (at the engine) * (engine rpm) / (wheel rpm)

thus at any given vehicle speed (eg. wheel rpm held constant), you maximize acceleration by maximizing [ torque (at the engine) * (engine rpm) ] . But this is just "horsepower" (or more generally we should just say "power"). That is :

horsepower ~= torque (at the engine) * (engine rpm)

vehicle acceleration ~= horsepower / (wheel rpm)

Note that we don't have to say that the power is measured at the engine, because due to conservation of energy the power production must be the same no matter how you measure it (unlike torque which is different at the crank and at the wheels). Power is of course the energy production per unit time, or if you like it's the rate that work can be done. Work is force over distance, so Power is just ~= Force * velocity. So if you like :

horsepower ~= torque (at the engine) * (engine rpm)

horsepower ~= torque (at the wheels) * (wheel rpm)

horsepower ~= vehicle acceleration * vehicle speed

(note this is only true assuming no dissipative forces; in the real world the power at the engine is greater than the power at the wheels, and that is greater than the power measured from motion)

Now, let's go back to this statement : "any given vehicle speed (eg. wheel rpm held constant), you maximize acceleration by maximizing horsepower". The only degree of freedom you have at constant speed is changing gear. So this just says you want to change gear to maximize horsepower. On most real world engines this means you should be in as low a gear as possible at all times. That is, when drag racing, shift at the red line.

The key thing that some people miss is you are trying to maximize *wheel torque* and in almost every real world engine, the effect of the gear ratio is much more important than the effect of the engine's torque curve. That is, staying in as low a gear as possible (high ratio) is much more important than being at the engine's peak torque.

Let's consider some examples to build our intuition.

The modern lineup of 911's essentially all have the same torque. The Carrera, the GT3, and even the RSR all have around 300 lb-ft of torque. But they have different red lines, 7200, 8400 and 9400.

If we pretend for the moment that the masses are the same, then if you were all cruising along side by side in 2nd gear together and floored it - they would accelerate exactly the same.

The GT3 and RSR would only have an advantage when the Carrera is going to hit red line and has to shift to 3rd, and they can stay in 2nd - then their acceleration will be better by the factor of gear ratios (something like 1.34 X on most 2nd-3rd gears).

Note the *huge* difference in acceleration due to gearing. Even if the upshift got you slightly more torque by putting you in the power band of the engine, the 1.34 X from gearing is way too big to beat.

(I should note that in the real world, not only are the RSR/R/Cup (racing) versions of the GT3 lighter, but they also have a higher final drive ratio and some different gearing, so they are actually faster in all gears. A good mod to the GT3 is to get the Cup gears)

Another example :

Engine A has 200 torques (constant over the rpm range) and revs to 4000 rpm. Engine B has 100 torques and revs to 8000 rpm. They have the exact same peak horsepower (800 torques*krpm) at the top of their rev range. How do they compare ?

Well first of all, we could just gear down Engine B by 2X so that for every two turns it made the output shaft only made one turn. Then the two engines would be exactly identical. So in that sense we should see that horsepower is really the rating of the potential of the engine, whereas torque tells you how well the engine is optimized for the gearing. The higher torque car is essentially steeper geared at the engine.

How do they compare on the same transmission? In 1st gear Car A would pull away with twice the acceleration of Car B. It would continue up to 4000 rpm then have to change gears. Car B would keep running in 1st gear up to 8000 rpm, during which time it would have more acceleration than car A (by the ratio of 1st to 2nd gear).

So which is actually faster to 100 mph ?

You can't answer that without knowing about the transmission. If gear changes took zero time (and there was no problem with traction loss under high acceleration), the faster car would be the higher torque car. In fact if gear changes took zero time you would want an infinite number of gears so that you could keep the car at max rpm at all times, not because you are trying to stay in the "power band" but simply because max rpm means you can use higher gearing to the wheels.

I wrote a little simulator. Using the real transmission ratios from a Porsche 993 :

Transmission Gear Ratios: 3.154, 2.150, 1.560, 1.242, 1.024, 0.820 
Rear Differential Gear Ratio: 3.444 
Rear Tire Size: 255/40/17  (78.64 inch circumference)
Weight : 3000 pounds

and 1/3 of a second to shift, I get :

200 torque, 4000 rpm redline :

time_to_100 = 15.937804

100 torque, 8000 rpm redline :

time_to_100 = 17.853252

higher torque is faster. But what if we can tweak our transmission for our engine? In particular I will make only the final drive ratio free and optimize that with the gear ratios left the same :

200 torque, 4000 rpm redline :

c_differential_ratio = 3.631966
time_to_100 = 15.734542

100 torque, 8000 rpm redline :

c_differential_ratio = 7.263932
time_to_100 = 15.734542

exact same times, as they should be, since the power output is the same, with double the gear ratio.
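A minimal version of such a simulator, under the same assumptions (constant torque to redline, shift at redline, 1/3 second per shift with no acceleration during the shift, no drag or traction limits), looks something like this. The times it produces won't match the numbers above bit-for-bit since the integration details differ :

```c
#include <math.h>
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* 0-100 mph time. Units : torque in lb-ft, weight in lb, tire in inches. */
static const double c_gears[6] = { 3.154, 2.150, 1.560, 1.242, 1.024, 0.820 };
static const double c_circum_in = 78.64;     /* tire circumference */
static const double c_weight_lb = 3000.0;

static double time_to_100(double torque_lbft, double redline_rpm, double final_drive)
{
    const double dt = 0.0005, g = 32.174;
    const double r_ft = c_circum_in / 12.0 / (2.0 * M_PI);  /* tire radius, ft */
    double v = 0.0, t = 0.0;                                /* ft/s , seconds */
    int gear = 0;
    while (v < 100.0 * 5280.0 / 3600.0) {                   /* 100 mph in ft/s */
        double ratio = c_gears[gear] * final_drive;
        double wheel_rpm = v / (c_circum_in / 12.0) * 60.0;
        if (wheel_rpm * ratio >= redline_rpm && gear < 5) { /* hit redline : shift */
            gear++;
            t += 1.0 / 3.0;                                 /* shift takes 1/3 s */
            continue;
        }
        /* in top gear we just keep pulling; the configs above never over-rev it */
        double force = torque_lbft * ratio / r_ft;          /* lbf at the contact patch */
        v += force * g / c_weight_lb * dt;
        t += dt;
    }
    return t;
}
```

Note that halving the torque while doubling the final drive makes every force and every shift point identical, so the two configurations above really must tie exactly.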

In the real world, almost every OEM transmission is geared too low for an enthusiast driver. OEMs offer transmission that minimize the number of shifts, offer over-drive gears for quiet and economy, etc. If you have a choice you almost always want to gear up. This is one reason why in the real world torque is king ; low-torque high-power engines could be good if you had sufficiently high gearing, but that high gearing just doesn't exist (*), so the alternative is to boost your torque.

(* = drag racers build custom gear boxes to optimize their gearing ; there are also various practical reasons why the gear ratios in cars are limited to the typical range they are in ; you can't have too many teeth, because you want the gears to be reasonably small in size but also have a minimum thickness of teeth for strength, high gear ratios tend to produce a lot of whine that people don't like, etc. etc.)

One practical issue with this these days is that more and more sports cars use "transaxles". Older cars usually had the transmission up front and then a rear differential. It was easy to change the final drive ratio in the rear differential so all the old American muscle cars talk about running a 4.33 or whatever different ratios. Nowadays lots of cars have the transmission and rear differential together in the back to balance weight (from the Porsche 944 design). While that is mostly a cool thing, it makes changing the final drive much more expensive and much harder to find gears for. But it is still one of the best mods you can do for any serious driver.

(another reason that car gear ratios suck so bad is the emphasis on 0-60 times means that you absolutely have to be able to reach 60 in 2nd gear. That means 1st and 2nd can't be too high ratio. Without that constraint you might actually want 2nd to max out at 50 mph or something. There are other stupid goals that muck up gearings, like trying to achieve a high top speed).

Let's look at a final interesting case. Drag racers often use a formula like :

speed at end of 1/4 mile :

MPH = 234 * (Horsepower / Pounds) ^ .3333

and it is amazingly accurate. And yet it doesn't contain anything about torque or gear ratios. (they of course also use much more complex calculators that take everything into account). How does this work ?

A properly set up drag car is essentially running at power peak the whole time. They start off the line at high revs, and then the transmission is custom geared to keep the engine in power band, so it's a reasonable approximation to assume constant power the entire time.

So if you have constant power, then :

  d/dt E = P

  d/dt ( 1/2 mv^2 ) = P

  integrate :

  1/2 mv^2 = P * t

  v = sqrt( 2 * (P/m) * t )

  distance covered is the integral of v over time :

  x = Integral{ sqrt( 2 * (P/m) * t ) dt } = (2/3) * sqrt( 2 * (P/m) ) * t^(3/2)

  eliminate t using t = v^2 / ( 2 * (P/m) ) :

  x = v^3 / ( 3 * (P/m) )

  simplify :

  v = ( 3 * x * (P/m) ) ^(1/3)

which is the drag racer's formula. Speed is proportional to (distance covered times power-to-weight) to the one third power ; the empirical constant 234 soaks up the unit conversions and the real-world drivetrain losses.

If you're looking at "what is the time to reach X" (X being some distance or some mph), the only thing that matters is power-to-weight *assuming* the transmission has been optimized for the engine.

I think there's more to say about this, but I'm bored of this topic.


Currently the two figures that we get to describe a car's engine are Horsepower (at peak rpm) and Torque (at peak rpm) (we also get 0-60 and top speed which are super useless).

I propose that the two figures that we'd really like are : Horsepower/weight (at peak rpm) and Horsepower/weight (at min during 10-100 run).

Let me explain why :

(Power/weight) is the only way that power ever actually shows up in the equations of dynamics (in a frictionless world). 220 HP in a 2000 pound car is better than 300 HP in a 3000 pound car. So just show me power to weight. Now, in the real world, the equations of dynamics are somewhat more complicated, so let's address that. One issue is air drag. For fighting air, power (ignoring mass) is needed, so for top speed you would prefer a car with more power than just power to weight. However, for braking and turning, weight is more important. So I propose that it roughly evens out and in the end just showing power to weight is fine.

Now, what about this "Horsepower/weight (at min during 10-100 run)" ? Well let's back up a second. The two numbers that we currently get (Power and Torque both at their peak) give us some rough idea of how broad the power band of an engine is, because Power is almost always at peak near the max rpm, and Torque is usually at peak somewhere around the middle, so a higher torque number (power being equal) indicates a broader power band. But good gearing (or bad gearing) can either hide or exaggerate that problem. For example a tuned Honda VTEC might have a narrow power band that's all in the 7k - 10k RPM range, but with a "crossed" transmission you might be perfectly happy never dropping out of that rev range. Another car might have a wide power band, but really huge gear steps so that you do get a big power drop on shifts. So what I propose is you run the cars from 10mph-100 , shifting at red line, and measure the *min* horsepower the engine puts out. This will tell you what you really want to know, which is when doing normal upshifts do you drop out of the power band, and how bad is it? eg. what is the lowest power you will experience.
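A sketch of how you'd compute the proposed number, given a torque curve and the gear ratios. After an upshift at redline, rpm drops by the ratio of adjacent gears; the worst power seen at any post-shift rpm is the metric. The torque curve here is a made-up toy :

```c
/* toy peaky torque curve, peak 300 lb-ft around 5500 rpm; purely illustrative */
static double toy_torque(double rpm)
{
    double x = (rpm - 5500.0) / 2500.0;
    double t = 300.0 * (1.0 - x * x);
    return t > 0.0 ? t : 0.0;
}

/* lowest horsepower passed through when shifting at redline : for each upshift,
   rpm falls to redline * (next gear / current gear); take the min power there */
static double min_power_during_run(const double *gears, int n, double redline)
{
    double min_hp = 1e30;
    for (int i = 1; i < n; i++) {
        double rpm_after_shift = redline * gears[i] / gears[i - 1];
        double hp = toy_torque(rpm_after_shift) * rpm_after_shift / 5252.0;
        if (hp < min_hp) min_hp = hp;
    }
    return min_hp;
}
```

A close-ratio box should score higher than a wide-ratio one on the same engine, which is exactly the distinction the peak-power and peak-torque numbers hide.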

Of all the numbers that we actually get, quarter mile time is probably the best.
