4/29/2009

04-29-09 - QCD

Someone made me think briefly about QCD (Quantum Chromo Dynamics).

I never really got my head around the standard model in grad school. I think I understood QED pretty well, and Weak isn't really that bad either, but then you get into QCD and the maths gets really tough and there's this sea of particles and I had no idea what was going on. Part of the problem is that a lot of the texts go through a historical perspective and teach you the stages of understanding and the experiments that led to modern QCD. I think that's a big mistake and I discourage anyone from reading that. I was always really confused by all the talk of the various mesons and baryons. Often the classes would start with talking about K+ transitions or pion decay or scattering coefficients for "Omegas" and I'd be like "WTF are these particles and who cares what they do?".

I think it's way better just to say "we have quarks and gluons". And yes, the quarks can combine together into these various things, but we don't even really need to talk about them because nobody fucking cares about what exactly the meson made from (strange-antistrange) is called.

I much prefer a purely modern approach to QFT based on symmetry. In particular I really like Weinberg's approach in his textbook which is basically - we expect to observe every phenomenon in the universe which is *possible* to exist. If something is possible but doesn't ever happen, that is quite strange and we should wonder why. In particular with QFT - every possible Lagrangian which leads to a consistent theory should correspond to something in nature. When you start to write these down it turns out that very few are actually possible (given a few constraints, such as the postulate that relativity is required, etc.).

Anyway, I was never really happy with my intuition for QFT. Part of the problem is the math is just so hard, you can't do a lot of problems and get really comfortable with it. (David Politzer at Caltech once gave me a standard model homework problem to actually compute some real scattering coefficients that had been experimentally tested. It took me about 50 pages and I got it horribly wrong).

The whole gauge-field symmetry-group idea seems like it should be very elegant and lead to some intuition, but I just don't see it. You can say hand wavey things, like : electromagnetism is the presence of an extra U(1) symmetry; you can think of this as an extra circular dimension that's rolled up tiny so it has no spatial size, or if you like you can do the Feynman way and say that everything flying around is a clock that is pointing in some direction (that's the U(1) angle). In this picture, the coupling of a "charge" to the field is the fact that the charge distorts the U(1) dimension. If you're familiar with the idea of general relativity where masses distort spacetime and thus create the gravity force, it's the same sort of thing, but instead of distorting spacetime, charge distorts the U(1) fiber. As charges move around in this higher-D space, if they are pushed by variation of the U(1) fiber clock angle, that pushes them in real space, which is how they get force. Charges are a pole in the curvature of the fiber angle; in a spacetime sense it's a pinched spot that can't be worked out by any stretching of the space fabric. Okay this is sort of giving us a picture, but it's super hand wavey and sort of wrong, and it's hard to reconcile with the real maths.

Anyway, the thing I wanted to write about QCD is the real problem of non-perturbative analysis.

When you're taught QED, the thing people latch onto are the simple Feynman diagrams where two electrons fly along and exchange a photon. This is appealingly classical and easy to understand. The problem is, it's sort of a lie. For one thing, the idea that the photon is "thrown" between the electrons and thus exchanges momentum and forces them apart is a very appealing picture, but kind of wrong, since the photon can actually have negative momentum (eg. for an electron and positron, the photon exchanged between them pulls them together, so the sort of spacemen playing catch kind of picture just doesn't work).

First of all, let's back up a bit. QFT is formulated as a sum over all paths, each weighted by a complex exponential of the action. Classically this would reduce to "least action" paths, which is equivalent to Lagrangian classical mechanics. There's a great book which teaches ordinary Quantum Mechanics using this formulation : Quantum Mechanics and Path Integrals by Feynman & Hibbs (this is a serious textbook for physics undergrads who already know standard QM ; it's a great bridge from standard QM to QFT, because it introduces the sum-on-action formalism in the more familiar old QM). Anyway, the math winds up as a sum of all possible ways for a given interaction to happen. The Feynman diagram is a nice way to write down these various ways and then you still integrate over all possible ways each diagram can happen.

Now let's go back to the simple QED diagram that I mentioned. This is often shown as your first diagram, and you can do the integral easily, and you get a nice answer that's simple and cute. But what happened? We're supposed to sum on *all* ways that the interaction can happen, and we only did one. In fact, there are tons of other possibilities that produce the same outcome, and we really need to either sum them all, or show that they are small.

One thing we need to add is all the ways that you can add vacuum -> vacuum graphs. You can make side graphs that start from nothing, particles pop out of the vacuum, interact, then go back to the vacuum. These are conveniently not mentioned because if you add them all up they have an infinite contribution, which would freak out early students. Fortunately we have the renormalization mechanism that sweeps this under the rug just fine, but it's quite complex.

The other issue is that you can add more and more complex graphs; instead of just one photon exchange, what about two? The more complex graphs have higher powers of the coupling constant (e in this case). If the coupling constant is small, this is like a Taylor expansion, each term is higher powers of e, and e is small, so we can just go up to 3rd order accuracy or whatever we want. The problem with this is that even when e is small, as the graphs get more complex there are *more* of them. As you allow more couplings, there are more and more ways to make a graph of N couplings. In order for this kind of Taylor expansion to be right, the number of graphs at each order has to grow more slowly than the factor of 1/e suppression you gain per order. Again it's quite complex to prove that.

Starting with a simple problem that we can solve exactly, and then adding terms that make us progressively more accurate is the standard modus operandi in physics. Usually the full system is too hard to solve analytically, and too hard to get intuition for, so we rely on what's called a perturbation expansion. Take your complex system that you can't solve, and expand it into Simple + C * Complex1 + C^2 * Complex2 + ... - higher and higher powers of C, which should be small.
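
Written out schematically (nothing QCD-specific here, just the generic form of the expansion) :

\mathcal{A} \;=\; \mathcal{A}_0 \;+\; g\,\mathcal{A}_1 \;+\; g^2\,\mathcal{A}_2 \;+\; g^3\,\mathcal{A}_3 \;+\; \cdots

where g is the coupling. For QED the effective expansion parameter is the fine-structure constant (about 1/137), so each extra order is suppressed by roughly two orders of magnitude - as long as the number of diagrams per order doesn't grow fast enough to eat that suppression.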

And with QCD we get a real problem. Again you can start with a simple graph of quarks flying along passing gluons. First of all, unlike photons, gluons couple to each other, which means we need to add a bunch more graphs where gluons interact with other gluons. Now when we start adding these higher order terms, we have a problem. In QCD, the coupling constant is not small enough, and the number of graphs that are possible for each order of the coupling constant is too high - the more complex terms are not less important. In fact in some cases, they're *more* important than the simpler terms.

This makes QCD unlike any other field theory. Our sort of classical intuition of particles flying around exchanging bosons completely breaks down. Instead the quarks live in a foaming soup of gluons. I don't really even want to describe it in hand wavey terms like that because any kind of picture you might have like that is going to be wrong and misleading. Even the most basic of QCD problems is too hard to do analytically; in practice people do "lattice QCD" numerical computations (in some simple cases you can do the summations analytically and then take the limit of the lattice size going to zero).

The result is that even when I was doing QFT I never really understood QCD.

4/28/2009

04-28-09 - Quadratic

I'm doing a little refinement of my old cubic interpolator ("Smooth Driver") thing. (see also : here and here and here ).

One thing I'm trying to do is fix up all the nasty epsilon robustness issues. A small part of that is solving a quadratic. "Easy!" I hear you say. Everyone knows how to solve a quadratic, right? Not so.

I found this page which has a nice summary of the issues, written by a sour old curmudgeon who just whines about how dumb we all are but doesn't actually provide us with a solution.

You can also find the Wikipedia page or the Numerical Recipes (5.6) snippet about the more robust numerical way to find the roots that avoids subtracting two nearly identical numbers. Okay, that's all well and good but there's a lot more code to write to deal with all the degenerate cases.

This is what I have so far : (I'm providing the case where the coefficients are real but the solutions may be complex; you can obviously modify to complex coefficients or only real solutions)


// A t^2 + B t + C = 0;
// returns number of solutions
int SolveQuadratic(const double A,const double B,const double C,
                    ComplexDouble * pT0,ComplexDouble * pT1)
{
    // first invalidate :
    *pT0 = FLT_MAX;
    *pT1 = FLT_MAX;
        
    if ( A == 0.0 )
    {
        if ( B == 0.0 )
        {
            if ( C == 0.0 )
            {
                // degenerate - any value of t is a solution
                *pT0 = 0.0;
                *pT1 = 0.0;
                return -1;
            }
            else
            {       
                // no solution
                return 0;
            }
        }
        
        double t = - C / B;
        *pT0 = t;
        *pT1 = t;
        return 1;
    }
    else if ( B == 0.0 )
    {
        if ( C == 0.0 )
        {
            // A t^2 = 0;
            *pT0 = 0.0;
            *pT1 = 0.0;
            return 1;
        }
        
        // B is 0 but A isn't
        double discriminant = -C / A;
        ComplexDouble t = ComplexSqrt(discriminant);
        *pT0 = t;
        *pT1 = - t;
        return 2;
    }
    else if ( C == 0.0 )
    {
        // A and B are not zero
        // t = 0 is one solution
        *pT0 = 0.0;
        // A t + B = 0;
        *pT1 = -B / A;
        return 2;
    }

    // Numerical Recipes 5.6 : 

    double discriminant = ( B*B - 4.0 * A * C );
    
    if ( discriminant == 0.0 )
    {
        double t = - 0.5 * B / A;
        *pT0 = t;
        *pT1 = t;
        return 1;
    }
    
    ComplexDouble sqrtpart = ComplexSqrt( discriminant );
    
    sqrtpart *= - 0.5 * fsign(B);
    
    ComplexDouble Q = sqrtpart + (- 0.5 * B);
    
    // Q cannot be zero
    
    *pT0 = Q / A;
    *pT1 = C / Q;
        
    return 2;
}

One thing that is missing is refinement of roots by Newton-Raphson. The roots computed this way can still have large error, but a couple of Newton-Raphson iterations can clean that up.
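
Something like this does it for a real root (a minimal sketch; for the complex case you'd do the same thing in complex arithmetic) :

// polish a root of A t^2 + B t + C = 0 with a couple of Newton-Raphson steps
double RefineQuadraticRoot(const double A,const double B,const double C,double t)
{
    for(int i=0;i<2;i++)
    {
        double f  = (A*t + B)*t + C;    // residual at t
        double df = 2.0*A*t + B;        // derivative at t
        if ( df == 0.0 ) break;         // sitting on the extremum, can't improve
        t -= f/df;
    }
    return t;
}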

4/24/2009

04-24-09 - Convex Hulls and OBB's

I wrote last month a bit about OBB fitting. I mentioned at the time that it would be nice to have an implementation of the exact optimal OBB code, and also the bounded-best OBB in reasonable time. I found the Barequet & Har-Peled work on this topic but didn't read it at the time.

Well, I finally got through it. Their paper is pretty ugly. Let me briefly explain their method :

They, like my old OBB stuff, take heavy advantage of the fast rotating-calipers method to find the optimal rectangle of a convex hull in 2d (it's O(n)). Also finding the convex hull is O(nlogn). What that means is, given one axis of an OBB, you can find the optimal other two axes in O(nlogn). So the problem just comes down to finding one of the optimal axes.

Now, as I mentioned before, the number of actual axes you must consider to be truly optimal is O(n^2) , making your total run time O(n^3). These axes are the face normals of the convex hull, plus axes where the OBB is supported by two edges and a vert (I don't know an easy way to even enumerate these).

It's now pretty well known around the industry that you can get a very good OBB by just trying a bunch of scattered initial axes instead of doing O(n^2). If you try some fixed number of axes, like say 256, it doesn't count against your big-O at all, so your whole OBB fit is still O(nlogn).

Well this is exactly what the Barequet & Har-Peled method is. They try a fixed number of directions for the seed axis, then do the rectangle fit for the other two axes. The main contribution of the paper is the proof that if you try enough fixed directions, you can get the error of the bbox within whatever tolerance you want. That's sort of intuitively obvious - if you try more and more fixed directions you must get closer and closer to the optimal box. Their construction also provides a specific method for enumerating enough directions.

Their enumeration goes like this :

Start with some seed box S. They use the box that is made from taking one axis to be the "diameter" of the point set (the vector between the two most separated points). Using that box is important to their proof, but I don't think which seed box you use is actually terribly important in practice.

The seed box S has normalized edge vectors S.x , S.y, S.z (the three axes that define the box).

Enumerate all sets of 3 (non-negative) integers whose sum is <= K , that is {i,j,k} such that (i+j+k) <= K

Construct the normal N = i * S.x + j * S.y + k * S.z ; normalize it, and use this as the direction to fit a new OBB. (note that there are a lot of points {ijk} that generate the same normal - any points that are integer multiples of another; those can be skipped).
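
In code the enumeration is something like this (Vec3 and the actual caliper fit are placeholders for whatever you already have) :

// gcd, used to skip {ijk} triples that are integer multiples of smaller ones
static int GCD(int a,int b) { while ( b != 0 ) { int t = a % b; a = b; b = t; } return a; }

// Sx,Sy,Sz are the normalized axes of the seed box S
void EnumerateBHPDirections(const Vec3 & Sx,const Vec3 & Sy,const Vec3 & Sz,const int K)
{
    for(int i=0;i<=K;i++)
    for(int j=0;j<=(K-i);j++)
    for(int k=0;k<=(K-i-j);k++)
    {
        if ( i+j+k == 0 ) continue;             // not a direction
        if ( GCD(GCD(i,j),k) > 1 ) continue;    // duplicate of a smaller triple

        Vec3 N = (float)i * Sx + (float)j * Sy + (float)k * Sz;
        // normalize N, fit an OBB with N as the fixed axis (calipers for the
        //  other two), keep the best box seen so far
    }
}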

Barequet & Har-Peled prove that the OBB made this way is within (1/K) to some power or other of optimal, so as you increase K you get ever closer.

Now, this is almost identical to my old "OptimalOBBFixedDirections" which tried various static directions and then optimized the box from there. My OptimalOBBFixedDirections always tests 42 directions in the {+,+,+} octant which I made by subdividing an octahedron. I have found that the Barequet & Har-Peled method does in fact find better boxes with fewer tests, but the difference is very very small (thousandths of a percent). I'll show numbers in a second.

First I want to mention two other things.

1. There's a "common wisdom" around the net that while it is bad to make an OBB from the covariance matrix of the *points* , it is good to make an OBB from the covariance matrix of the *volume*. That is, they claim if you have a closed mesh, you can use the Mirtich (or Eberly or Blow/Binstock) method to compute the covariance matrix of the solid body, and use that for your OBB axes.

I have seen no evidence that this is true. Yes, the covariance matrix of the points is highly dependent on the tessellation, while the covariance matrix of the solid is more a property of the actual shape of the object, so that is intuitively pleasing. In practice it appears to be completely random which one is actually better. And you just shouldn't use the covariance matrix method anyway.

2. Barequet & Har-Peled mention the iterative refinement of OBB's using the caliper-fit. This is something I've known a while, but I've never seen it published before; I think it's one of those gems of wisdom that lots of people know but don't consider worth a paper. They mention it almost in passing, but it's actually perhaps the most valuable thing in their whole paper.

Recall if you have one axis of the OBB fixed, you can easily find the optimal directions of the other two axes using rotating calipers to fit a rectangle. The thing is, once you do that, you can then hold one of those new axes fixed, and fit the other two. So like, fix X, then caliper to get YZ, then fix Y, and caliper to get XZ. Each step of the iteration either improves your OBB or does nothing. That means you descend to a local minimum in a finite number of steps. (in practice I find you usually get there in only 3 steps, in fact that might be provable (?)).
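
In code the iteration is roughly this, where FitOBBGivenAxis stands in for the rotating-calipers rectangle fit with one axis held fixed (OBB and ConvexHull are placeholders too) :

// cycle which axis is held fixed until the caliper fit stops improving
OBB IterateOBB(const ConvexHull & hull,OBB cur)
{
    for(;;)
    {
        bool improved = false;
        for(int a=0;a<3;a++)
        {
            OBB next = FitOBBGivenAxis( hull, cur.axis[a] );
            if ( next.SurfaceArea() < cur.SurfaceArea() )
            {
                cur = next;
                improved = true;
            }
        }
        if ( ! improved )
            return cur;     // local minimum : no single axis re-fit helps
    }
}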

Assuming your original seed box is pretty close to optimal, this iteration is kind of like taking your OBB and trying to spin it along one axis and pinch it to see if you get a tighter fit; it's sort of like wiggling your key as you put it into a lock. If your seed OBB is close to being right, but isn't supported by one of the necessary support conditions, this will wiggle it tighter until it is supported by a face or an edge pair.

The methods shown in the test below are :


True convex hull (within epsilon) :
    note that area optimization is usually what you want
    but volume shows a bigger difference between methods

Hull simplification :

    I simplify the hull by just doing PM on the triangles
    Then convert the triangles to planes
    Push the planes out until all points are behind them
    Then clip the planes against each other to generate new faces
    This is a simpler hull that strictly contains the original hull

k-dop :
    Fits 258 planes in fixed directions on the sphere
    Just pushes each plane to the edge of the point set
    Clips them all against each other
    strictly this is O(n) but in practice it's slower than the true convex hull
        and much worse quality
    seems pointless to me

The rating we show on all the OBB's is surface area

AxialOBB  :
    axis-aligned box

OBBByCovariance :
    vertex covariance matrix sets axes

OBBByCovariance+it :
    OBBByCovariance followed by iterative greedy optimization

OBBByCovarianceOptimized :
    like OBBByCovariance+it, but tries all 3 initial fixed axes

OptimalOBBFixedDirections :
    tries 42 fixed directions

OptimalOBBFixedDirections+it :
    tries 42 fixed directions, takes the best, then optimizes

OptimalOBBFixedDirections opt :
    tries 42 fixed directions, optimizes each one, then takes the best

OBBGoodHeuristic :
    takes the best of OBBByCovarianceOptimized and "OptimalOBBFixedDirections opt"

OBBGivenCOV :
    this is OBBByCovarianceOptimized but using the solid body covariance instead of points  

OptimalOBB :
    tries all face normals of the convex hull (slow)
    I'd like to also try all the edge-support directions here, but haven't figured it out

OptimalOBBBarequetHarPeled 5     
 kLimit : 5 , numBuilds : 19
    BarequetHarPeled method with (i+j+k) <= 5
    causes it to try 19 boxes
    optimizes each one, then picks the best
    very similar to "OptimalOBBFixedDirections opt"

And the results :


-----------------------------------------

dolphin.x :

    Made Hull with 206 faces
    hull1 volume : 1488557 , area : 95330

Making hull from k-dop planes...
    Made Hull with 142 faces
    hull2 volume : 2081732 , area : 104951

Making OBB...
    AxialOBB                         : 193363.109
    OBBByCovariance                  : 190429.594
    OBBByCovariance+it               : 179504.734
    OBBByCovarianceOptimized         : 179504.719
    OptimalOBBFixedDirections        : 181693.297
    OptimalOBBFixedDirections+it     : 181693.297
    OptimalOBBFixedDirections opt    : 176911.750
    OBBGoodHeuristic                 : 179504.719
    OBBGivenCOV                      : 178061.406
    OptimalOBB                       : 176253.359

     kLimit : 3 , numBuilds : 3
    OptimalOBBBarequetHarPeled 3     : 179504.703
     kLimit : 5 , numBuilds : 19
    OptimalOBBBarequetHarPeled 5     : 178266.047
     kLimit : 10 , numBuilds : 160
    OptimalOBBBarequetHarPeled 10    : 176508.109
     kLimit : 20 , numBuilds : 1222
    OptimalOBBBarequetHarPeled 20    : 176218.344
     kLimit : 50 , numBuilds : 18037
    OptimalOBBBarequetHarPeled 50    : 176116.156

-----------------------------------------

teapot.x :

hull1 faces : 612
    hull1 volume : 3284935 , area : 117470

simplified hull2 faces : 366
    hull2 volume : 3384222 , area : 120357

Making hull from k-dop planes...
    Made Hull with 234 faces
    hull2 volume : 3761104 , area : 129271

Making OBB...
    AxialOBB                         : 253079.797
    OBBByCovariance                  : 264091.344
    OBBByCovariance+it               : 222514.219
    OBBByCovarianceOptimized         : 220723.844
    OptimalOBBFixedDirections        : 219071.703
    OptimalOBBFixedDirections+it     : 218968.844
    OBBGoodHeuristic                 : 218968.844
    OptimalOBB                       : 218968.844
    OBBGivenCOV                      : 220762.766

     kLimit : 3 , numBuilds : 3
    OptimalOBBBarequetHarPeled 3     : 220464.766
     kLimit : 5 , numBuilds : 19
    OptimalOBBBarequetHarPeled 5     : 219540.203
     kLimit : 10 , numBuilds : 160
    OptimalOBBBarequetHarPeled 10    : 218968.000
     kLimit : 20 , numBuilds : 1222
    OptimalOBBBarequetHarPeled 20    : 218965.406
     kLimit : 50 , numBuilds : 18037
    OptimalOBBBarequetHarPeled 50    : 218963.109

-----------------------------------------

Some highlights :

OBBByCovariance is quite bad. OptimalOBBFixedDirections is the only other one that doesn't do the iterative optimization, and it can be bad too, though not nearly as bad.

Any of the methods that do the iterative optimization is perfectly fine. The differences are very small.

"OptimalOBBBarequetHarPeled 7" does about the same number of tests as "OptimalOBBFixedDirections opt" , and it's very microscopically better because of the way the directions are distributed.

OBBGivenCOV (the solid mass covariance) is worse than OBBByCovarianceOptimized (point covariance) on teapot.

Also - the Convex Hull simplification thing I did was just pulled out of my ass. I did a quick Google to see if I could find any reference, and I couldn't find any. I'm surprised that's not a solved problem, it seems like something right up the Geometers' alley.

Problem : Find the convex bounding volume made of N faces (or N planes) that strictly encloses the original mesh, and has minimum surface area (or volume).

In general, the optimal N-hull can not be reached by greedy simplification from the full-detail convex hull. In practice I found my hacky PM solution to work fine for moderate simplification levels. To make it more correct, the "push out to enclose" step should be done in each PM collapse to keep the hull valid as you go (instead of at the end). Also the PM collapse metric should be the metric you are trying to optimize - surface area or volume (I just used my old geometric error collapser).

The main thing I was interested in with convex hull simplification was eating away highly tessellated bits. The mesh I mainly tested on was "fandisk" because it's got these big flat surfaces, and then some rounded bits. If you imagine a mesh like a big cube Minkowski summed with a small sphere, you get a cube with rounded edges and corners. If the sphere is highly tessellated, you can get a hull with tons and tons of faces, but they are very unimportant faces. You want to sort of polygonate those corners, replacing the rounded sphere with a less tessellated one that's pushed out.

4/23/2009

04-23-09 - Telling Time

.. telling time is a huge disaster on windows.

To start, see Jon Watte's old summary, which is still good.

Basically you have timeGetTime() , QPC, or TSC.

TSC is fast (~ 100 clocks) and high precision. The problems I know of with TSC :

TSC either tracks CPU clocks, or time passing. On older CPUs it actually increments with each cpu cycle, but on newer CPUs it just tracks time (!). The newer "constant rate" TSC on Intel chips runs at some frequency which so far as I can tell you can't query.

If TSC tracks CPU cycles, it will slow down when the CPU speedsteps. If the CPU goes into a full sleep state, the TSC may stop running entirely. These issues are bad on single core, but they're even worse on multi-proc systems where the cores can independently sleep or speedstep. See for example these linux notes or tsc.txt .

Unfortunately, if TSC is constant rate and tracking real time, then it no longer tracks cpu cycles, which is actually what you want for measuring performance (you should always report speeds of micro things in # of clocks, not in time).

Furthermore on some multicore systems, the TSC gets out of sync between cores (even without speedsteps or power downs). If you're trying to use it as a global time, that will hose you. On some systems, it is kept in sync by the hardware, and on some you can get a software patch that makes rdtsc do a kernel interrupt kind of thing which forces the TSC's of the cores to sync.

See this email I wrote about this issue :

Apparently AMD is trying to keep it hush hush that they fucked up and had to release a hotfix. I can't find any admission of it on their web site any more ;

this is the direct download of their old utility that forces the cores to TSC sync : TscSync

they now secretly put this in the "Dual Core Optimizer" : Dual Core Optimizer Oh, really AMD? it's not a bug fix, it's an "optimizer". Okay.

There's also a separate issue with AMD C&Q (Cool & Quiet) if you have multiple cores/processors that decide to clock up & down. I believe the main fix for that now is just that they are forbidden from selecting different clocks. There's an MS hot fix related to that : MS hotfix 896256

I also believe that the newest version of the "AMD Processor Driver" has the same fixes related to C&Q on multi-core systems : AMD Driver I'm not sure if you need both the AMD "optimizer" and processor driver, or if one is a subset of the other.

Okay, okay, so you decide TSC is too much trouble, you're just going to use QPC, which is what MS tells you to do anyway. You're fine, right?

Nope. First of all, on many systems QPC actually is TSC. Apparently Windows evaluates your system at boot and decides how to implement QPC, and sometimes it picks TSC. If it does that, then QPC is fucked in all the ways that TSC is fucked.

So to fix that you can apply this : MS hotfix 895980 . Basically this just puts /USEPMTIMER in boot.ini which forces QPC to use the PCI clock instead of TSC.

But that's not all. Some old systems had a bug in the PCI clock that would cause it to jump by a big amount once in a while.

Because of that, it's best to advance the clock by taking the delta from previous and clamping that delta to be in valid range. Something like this :


U64 GetAbsoluteQPC()
{
    static U64 s_lastQPC = GetQPC();
    static U64 s_lastAbsolute = 0;

    U64 curQPC = GetQPC();

    U64 delta = curQPC - s_lastQPC;

    s_lastQPC = curQPC;

    if ( delta < HUGE_NUMBER )
        s_lastAbsolute += delta;

    return s_lastAbsolute;
}

(note that "delta" is unsigned, so when QPC jumps backwards, it will show up as a very large positive delta, which is why we compare vs HUGE_NUMBER ; if you're using QPC just to get frame times in a game, then a reasonable thing is to just get the raw delta from the last frame, and if it's way out of reasonable bounds, just force it to be 1/60 or something).

Urg.

BTW while I'm at it I think I'll evangelize a "best practice" I have recently adopted. Both QPC and TSC have problems with wrapping. They're in unsigned integers and as your game runs you can hit the end and wrap around. Now, 64 bits is a lot. Even if your TSC frequency is 1000 GigaHz (1 THz), you won't overflow 64 bits for about 213 days. The problem is they don't start at 0.

Unsigned int wrapping works perfectly when you do subtracts and keep them in unsigned ints. That is :


in 8 bits :

U8 start = 250;
U8 end = 3;

U8 delta = end - start;
// delta == 9 , the wrap is handled correctly

That's cool, but lots of other things don't work with wrapping :


U64 tsc1 = rdtsc();

... some stuff ...

U64 tsc2 = rdtsc();

U64 avg = ( tsc1 + tsc2 ) /2;

This is broken because tsc may have wrapped.
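
The wrap-safe way to write that is to work purely in deltas :

U64 delta = tsc2 - tsc1;        // the subtract wraps correctly
U64 avg   = tsc1 + delta / 2;   // midpoint expressed as start plus half the delta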

The one that usually gets me is simple compares :


if ( time1 < time2 )
{
    // ... event1 was earlier
}

are broken when time can wrap. In fact with unsigned times that wrap there is no way to tell which one came first (though you could if you put a limit on the maximum time delta that you consider valid - eg. any place that you compare times, you assume they are within 100 days of each other).

But this is easily fixed. Instead of letting people call rdtsc raw, you bias it :


uint64  Timer::GetAbsoluteTSC()
{
    static uint64 s_first = rdtsc();
    uint64 cur = rdtsc();
    return (cur - s_first);
}

this gives you a TSC that starts at 0 and won't wrap for a few years. This lets you just do normal compares everywhere to know what came before what. (I used the TSC as an example here, but you mainly want QPC to be the time you're passing around).

04-23-09 - iPod Hardware - the harbinger of suck

Is god fucking awful. Stop touting it as the greatest example of product design in this century. Yes, yes, the screen is nice, and the basic shape and weight of it is appealing. If it was just a paperweight I would be pretty pleased with it. But when you actually try to *use* it, it's rubbish. (it's the Angelina Jolie of product design if you will - it's the canonical example that everyone uses of something that's great, but it's actually awful).

Try to actually browse through the menus with that fucking wheel. Scan through a big list of artists, back up, change to album view, scan down, it's awful.

The wheel is a disaster for volume control. The right thing for volume is a knob, or a dial. Something physical that you rotate. And it shouldn't be a fucking digital dial that just spins like all the shit that they're giving us on computers now. It should be an *absolute* dial with an actual zero point, so that I can turn the volume down to zero when the thing is off. Hitting play on an iPod is fucking ear drum roulette, you never know when it's going to explode your head.

You're playing a song, you pause it, you go browse to some other song. You want to just resume the original song. How do you even do that !? I suppose it must be possible, but I don't know. It should just be a button.

Design for music playing devices has been perfect for a long time. You have play, pause, skip, volume. You have those on different buttons that are ALWAYS those buttons. They're physical buttons you can touch, so you can use the device while it's in your pocket or your eyes are closed. They should be rubber (or rubberized) so they're tactile, and each button should have a different shape so you know you're on the right one.

It's just fucking rubbish. It's like so many cars these days, changing user interfaces for no good reason and making them worse. Don't give me fucking digital buttons to increment and decrement the air conditioning, that's awful! Give me a damn dial or a slider that has an absolute scale. I don't want to be hitting plus-plus-plus.

The damn "Start" button that's on so many cars now really pisses me off. You used to stick in a key, then turn it. What's wrong with that? It works perfectly fucking fine. Now I have to stick in a key, make sure I'm pressing the brake or the clutch or whatever, then press a start button. Why !? It's more steps, it's just worse.

The worst of course is the menu shit like iDrive that's basically an iPod style interface with a fucking wheel and menus and context-dependent actions. Context-dependent actions are fucking horrible user interface design, quit it. With consumer electronic devices there should just be a few buttons, and those buttons always execute the same action and do it immediately. I know some jack-hole is going to run into me because he was trying to mate his bluetooth and was browsing around the menus.

4/17/2009

04-17-09 - Oodle File Page Cache

I'm trying to figure something out, maybe someone out there has a good idea.

This is about the Oodle File Page Cache that I mentioned previously. I'm not doing the fully general page cache thing yet (I might not ever, because it's one of those things where you have to buy into my philosophy, which is what I'm trying to avoid).

Anyway, the issue is about how to prioritize pages for reclamation. Assume you're in a strictly limited memory use scenario, you have a fixed size pool of say 50 pages or so.

Now obviously, pages that are currently locked get memory. And the next highest priority is probably sequentially prefetched pages (the pages that immediately follow the currently locked pages) (assuming the file is flagged for sequential prefetching, which it would be by default).

But after that you have to decide how to use the remaining pages. The main spot where you run into a question is : I need to grab a new page to put data into, but all the pages are taken - which one do I drop and recycle? (Or, equivalently : I'm thinking about prefetching page X, the least important page currently in the pool is page Y - should I reclaim page Y to prefetch page X, or should I just wait and not do the prefetch right now).

The main ambiguity comes from "past" pages vs. "prefetched" pages (and there's also the issue of old prefetched pages that were never used).

A "past" page is one that the client has unlocked. It was paged in, the client locked it, did whatever, then unlocked it. There's one simple case, if the client tells me this file is strictly forward-scan streaming, then the page can be dropped immediately. If not, then the past page is kept around for some amount of time. (there's another sequence point when the file containing the page is closed - again optionally you can say "just drop everything when I close the file" or the pages can be kept around for a while after the close to make sure you were serious about closing it).

A "prefetched" page obviously can be prefetched by sequential scan in an open file. It could also be from a file in the prefetch-ahead file list that was generated by watching previous runs.

Prefetched pages create two issues : one is how far ahead do I prefetch. Basically you prefetch ahead until you run out of free pages, but when you have no free pages, the question is do I reclaim past pages to do new prefetches?

The other issue with prefetches is what do you do with prefetched pages that were never actually used. Like I prefetched some pages but then the client never locked them, so they are still sitting around - at what point do I reclaim those to do new prefetches?

To make it more clear here's a sort of example :


Client gives me a prefetch file list - {file A, file B, file C, file D}

Client opens file A and touches a bunch of pages.  So I pull in {A:0,A:1,A:2} (those are the page numbers in the file).

I also start prefetching file B and file C , then I run out of free pages, so I get {B:0,B:1,C:0}.

Client unlocks the pages in file A but doesn't close file A or tell me I can drop the pages.

Client now starts touching pages in file C.  I give him C:0 that I already prefetched.

Now I want to prefetch C:1 for sequential scan, I need to reclaim a page.

Do I reclaim a page from B (prefetched but not yet used) or a page from A (past pages) ?  Or not prefetch at all?

When client actually asks for C:1 to lock then I must reclaim something.

Should I now start prefetching {D:0} ?  I could drop a page from {A} to get it.

Anyway, this issue just seems like a big mess so I'm hoping someone has a clever idea about how to make it not so awful.

There are also very different paradigms for low-page-count vs high-page-count caches. On something like the PS2 or XBox 1 where you are super memory limited, you might in fact run with only 4 pages or something tiny like that. In that case, I really want to make sure that I am using each page for the best purpose at all times. In that scenario, each time I need to reclaim a page, I should reevaluate all the priorities so they are fresh and make the best decision I can.

On something like Windows you might run with 1024 pages. (64k page * 1024 pages = 64 MB page cache). In that case I really don't want to be walking every single page to try to pick the best one all the time. I can't just use a heap or something, because page priorities are not static - they can change just based on time passing (if I put any time-based prioritization in the cache). Currently I'm using a sort of cascaded priority queue where I have different pools of priority groups, and I only reevaluate the current lowest priority group. But that's rather complicated.
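
For concreteness, the priority classes I'm juggling look roughly like this (names made up, and the relative order of the middle ones is exactly the open question above) :

// lowest (reclaim first) to highest (never reclaim) :
enum EPagePriority
{
    ePage_Free = 0,          // holding nothing
    ePage_StalePrefetch,     // prefetched a while ago, never locked
    ePage_PastUnlocked,      // was locked, client has since released it
    ePage_Prefetched,        // speculative prefetch, not used yet
    ePage_SequentialNext,    // pages right after the currently locked ones
    ePage_Locked             // currently locked by the client
};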

4/15/2009

04-15-09 - Oodle Page Cache

So I'm redoing the low level file IO part of Oodle. Actually I may be retargeting a lot of Oodle. One thing I took from GDC and something I've been thinking about a long time is how to make Oodle simpler. I don't want to be making a big structure that you have to buy into and build your game on. Rather I want to make something "leafy" that's easy for people to plug in at the last minute to solve specific problems.

Pursuant to that, the new idea is to make Oodle a handful of related pieces. You can use one or more of the pieces, and each is easy to plug in at the last minute.

1. Async File IO ; the new idea with this is that it's cross platform, all nice and async, does the LZ decompression on a thread, can do DVD packaging and DVD emulation, can handle the PS3/Xenon console data transfers - but it just looks like regular files to you. This is less ambitious than the old system ; it no longer directly provides things like paging data in & out, or hot-loading artist changes; you could of course still do those things but it leaves it more up to the client to do that.

The idea is that if you just write your game on the PC using lazy loose file loads, boom you pop in Oodle and hardly touch the code at all, and you automatically get your files packed up nice and tight into like an XBLA downloadable pack, or a DVD for PS3, or whatever, and it's all fast and good. Oh, and it also integrates nicely with Bink and Miles and Granny so that you can do things like play a Bink video while loading a level, and the data streamers share the bandwidth and schedule seeks correctly.

2. Texture goodies. We'll provide the most awesome threaded JPEG decoders, and also probably better custom lossy and lossless texture compressors that are specifically designed for modern games (with features like good alpha-channel support, support for various bit depths and strange formats like X16Y16 , etc.). Maybe some nice DXTC realtime encoding stuff and quality offline encoding stuff. Maybe also a whole custom texture cache thing, so you can say "Oodle, use 32 MB for textures" and it does all the paging and decompression and such.

3. Threading / Async utilities. You get the threaded work manager, the thread profiler, we'll probably do "the most awesome" multithreaded allocator. We'll try to give these specific functions that address something specific that people will need to ship a game. eg. a customer is near ship and their allocator is too slow and taking too much memory so they don't fit in the 256 MB of the console. Boom plug in Oodle and you can ship your game.

Anyway, that's just the idea, it remains to be worked out a bit. One thing I'm definitely doing is the low level IO is now going through a page cache.

As I'm writing it I've been realizing that the page cache is a super awesome paradigm for games in general these days. Basically the page cache is just like an OS virtual memory manager. There's a certain limited amount of contiguous physical memory. You divide it into pages, and dynamically assign the pages to the content that's wanted at the time.

Now, the page cache can be used just for file IO, and then it's a lot like memory mapped files. The client can use the stdio look-alike interface, and if they do that, then the page cache just automatically does the "right thing", prefetching ahead pages as they sequentially read through a file, etc.

But since we're doing this all custom and we're in a video game environment where people are willing to get more manual and lower to the bone, we can do a lot more. For example, you can tell me whether a file should sequential prefetch or not. You can manually prefetch at specific spots in the file that you expect to jump to. You can prefetch whole other files that you haven't opened yet. And perhaps most importantly, you can assign priorities to the various pages, so that when you are in a low memory situation (as you always are in games), the pages will be used for the most important thing. For example you can prefetch the whole next file that you expect to need, but you would do that at very low priority so it only uses pages if they aren't needed for anything more urgent.
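
As a usage sketch - and these names are completely made up for illustration, not the actual API - it looks something like :

// open with sequential prefetch on, hint a jump target and the next file :
OodleFile * f = OodleFile_Open( "level1.pak", OODLE_HINT_SEQUENTIAL );

OodleFile_PrefetchAt( f, seekTableOffset, OODLE_PRIORITY_LOW );
Oodle_PrefetchFile( "level2.pak", OODLE_PRIORITY_IDLE );   // only uses pages nobody else wants

OodleFile_Read( f, buffer, numBytes );   // looks like plain stdio to the client
OodleFile_Close( f );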

The next awesome thing I realized about the page cache is that - hey, that can just be the base for the whole game memory allocator. So maybe you give 32 MB to page cache. That can be used for file IO, or video playback - or maybe you want to use it to pop up your in game "pause menu" GUI. Or say you want to stream in a compressed file - you map pages to read in the packed bits, and then you map pages as you decompress; as you decompress you toss the pages with the packed bits and leave the uncompressed pages in the cache.

Say you have some threaded JPEG decompress or something - it grabs a page to decompress to. The other cool thing is because all this is manual - it can grab the page with a certain priority level. If no page is available, it can do various things depending on game knowledge to decide how to respond to that. For example if it's just decompressing a high res version of a texture you already have in low res, it can just abort the decompress. If it's a low priority prefetch, it can just go to sleep and wait for a page to become available. If it's high priority, like you need this texture right now, that can cause other pages that are less important to get dropped.

Pages could also cache procedurally generated textures, data from a network, optional sounds, etc. etc.

I think about it this way - in the future of multicore and async and threading and whatnot, you might have 100 jobs to run per frame. Some of those jobs need large temp work memory. You can't just statically assign memory to various purposes, you want it to be used by the jobs that need it in different ways.

4/06/2009

04-06-09 - Multi-threaded Allocators

I've been reading a bit about Multi-threaded Allocators. A quick survey :

The canonical base allocator is Hoard . Hoard is a "slab allocator"; it takes big slabs from the OS and assigns each slab to a fixed-size block. In fact it's extremely similar to my "Fixed Restoring" allocator that's been in Galaxy3 forever and we shipped in Stranger. The basic ideas seem okay, though it appears to use locks when it could easily be lock-free for most ops.

One thing I really don't understand about Hoard is the heuristic for returning slabs to the shared pool. The obvious way to do a multi-threaded allocator is just to have a slab pool for each thread. The problem with that is that if each thread does 1 alloc of size 8, every thread gets a whole slab for itself and the overhead is proportional to the number of threads, which is a bad thing going into the massively multicore future. Hoard's heuristic is that it checks the amount of free space in a slab when you do a free. If the slab is "mostly" free by some measure, it gets returned to the main pool.

My problem with this is it seems to have some very bad cases that IMO are not entirely uncommon. For example a usage like this :


for( many )
{
    allocate 2 blocks
    free 1 block
}

will totally break Hoard. From what I can tell what Hoard will do is allocate you a slab for your thread the first time you go in, then when you free() it will be very empty, so it will return the slab to the shared pool. Then after that when you allocate you will either get from the shared pool, or pull the slab back. Very messy.

There are two questions : 1. how greedy is new slab creation? That is, when a thread allocates an object of a given size for the first time, does it immediately get a brand new slab, or do you first give it objects from a shared slab and wait to see if it does more allocs before you give it its own slab? 2. how greedy is slab recycling? eg. when a thread frees some objects, when do you give the slab back to the shared pool? Do you wait for the slab to be totally empty, or do you do it right away?

The MS "Low Fragmentation Heap" is sort of oddly misnamed. The interesting bit about it is not really the "low fragmentation", it's that it has better multi-threaded performance. So far as I can tell, the LFH is 99.999% identical to Hoard. See MS PPT on the Low Fragmentation Heap

We did a little test and it appears that the MS LFH is faster than Hoard. My guess is there are two things going on : 1. MS actually uses lock-free linked lists for the free lists in each slab (Hoard uses locks), and 2. MS makes a slab list *per processor*. When a thread does an alloc it gets the heap for the processor it's on. They note that the current processor is only available to the kernel (KeGetCurrentProcessorNumber ). Hoard also makes P heaps for P processors (actually 2P), but they can't get current processor so they use the thread Id and hash it down to P which leads to occasional contention and collisions.

The other canonical allocator is tcmalloc . I still haven't been able to test tcmalloc because it doesn't build on VS2003. tcmalloc is a bit different than Hoard or LFH. Instead of slab lists, it uses traditional SGI style free lists. There are free lists for each thread, and they get "garbage collected" back to the shared pool.

One issue I would be concerned about with tcmalloc is false sharing. Because the free lists get shared back to a global pool and then redistributed back to threads, there is no real provision to prevent little items from going to different threads, which is bad. Hoard and LFH don't have this problem because they assign whole slabs to threads.

The Hoard papers make some rather ridiculous claims about avoiding false sharing, however. The fact is, if you are passing objects between threads, then no general purpose allocator can avoid false sharing. The huge question for the allocator is - if I free an object on thread 1 that was allocated on thread 2 , should I put it on the freelist for thread 1, or on the freelist for thread 2 ? One or the other will make false sharing bad, but you can't answer it unless you know the usage pattern. (BTW I think tcmalloc puts it on thread 1 - it uses the freelist of the thread that did the freeing - and Hoard puts it on thread 2 - the freelist of the thread that did the allocation; neither one is strictly better).

Both tcmalloc and the LFH have high memory overhead. See here for example . They do have better scalability of overhead to high thread counts, but the fact remains that they may hold a lot of memory that's not actually in use. That can be bad for consoles.

In fact for video games, what you want in an allocator is a lot of tweakability. You want to be able to tweak the amount of overhead it's allowed to have, you want to be able to tweak how fast it recycles pages for different block types or shares them across threads. If you're trying to ship and you can't fit in memory because your allocator has too much overhead, that's a disaster.

BTW false sharing and threaded allocators are clearly a place that a generational copying garbage collector would be nice. (this does not conflict with my contention that you can always be faster by doing very low level manual allocation - the problem is that you may have to give tons of hints to the allocator for it to do the right thing). With GC it can watch the usage and be smart. For example if a thread does a few allocs, they can come from a global shared pool. It does some more allocs of that type, then it gets its own slab and the previous allocs are moved to that slab. If those objects are passed by a FIFO to another thread, then they can be copied to a slab that's local to that thread. Nice.

It seems clear to me that the way to go is a slab allocator. Nice because it's good for cache coherence and avoids false sharing. The ops inside a slab can be totally thread-local and so no need to worry about multithread issues. Slabs are large enough that slab management is rare and most allocations only take the time to do inside-slab work.

One big question for me is always how to get from an object to its owning slab (eg. how is free implemented). Obviously it's trivial if you're okay with adding 4 bytes to every allocation. The other obvious way is to have some kind of page table. Each slab is page-aligned, so you take the object pointer and truncate it down to slab alignment and look that up. If for example you have 32 bit pointers (actually 31), and your pages are 4k, then you need 2 to the 19 page table entries (512k). If a PTE is 4 bytes that's 2 MB of overhead. It's more reasonable if you use 64k pages, but that means more waste inside the page. It's also a problem for 64-bit pointers.
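
A tiny sketch of the page table flavor (Slab and Slab_FreeSmall are placeholders for your real slab header and in-slab free) :

#include <stdint.h>

struct Slab;                                    // your slab header
void Slab_FreeSmall(Slab * slab, void * ptr);   // your in-slab free

// one entry per possible 4k page in a 2 GB (31 bit) address space ;
// 512k entries = 2 MB if the entries are 32-bit, 4 MB as raw 64-bit pointers
static Slab * s_slabTable[ 1 << 19 ];

void PageTableFree(void * ptr)
{
    Slab * slab = s_slabTable[ (uintptr_t)ptr >> 12 ];  // truncate down to the 4k page
    Slab_FreeSmall( slab, ptr );
}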

There are some other options that are a bit more hard core. One is to reserve a chunk of virtual address space for slabs. Then whenever you see a free() you check if the pointer is in that range, and if so you know you can just round the pointer down to get the slab head. The problem is the reserving of virtual address space.

Another option is to try to use pointer alignment. You could do something like make all large allocs be 4k aligned. Then all small allocs are *not* 4k aligned (this happens automatically because they come from inside a 4k page), and to get the base of the page you round down to the next lowest 4k.
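
And the alignment flavor, reusing the same placeholder names (FreeLargeAlloc is also a placeholder); large allocs are forced to 4k alignment, small allocs never are :

const uintptr_t kSlabSize = 4096;

void AlignmentFree(void * ptr)
{
    if ( ((uintptr_t)ptr & (kSlabSize-1)) == 0 )
    {
        FreeLargeAlloc(ptr);        // 4k aligned -> this was a large alloc
    }
    else
    {
        // small alloc : the slab header lives at the 4k boundary just below it
        Slab * slab = (Slab *)( (uintptr_t)ptr & ~(kSlabSize-1) );
        Slab_FreeSmall( slab, ptr );
    }
}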

04-06-09 - The Work Dispatcher

I thought I'd write a bit about my multithreaded "Worklet" dispatcher before I forget about it. I call little units of work "worklets" just because I like to be nonstandard and confusing.

The basic idea is that the main thread can at any time fire up a bunch of worklets. The worklets then go to a bunch of threads and get done. The main thread can then wait on the worklets.

There are a few things I do differently in the design from most people. The most common "thread pool" and "job swarm" things are very simple - jobs are just an isolated piece of independent work, and often once a bunch is fired you can only wait on them all being done. I think these are too limiting to be really generally useful, so I added a few things.

1. Worker threads that are not in use should go completely to sleep and take 0 cpu time. There should be a worker thread per processor. There might also be other threads on these processors, and we should play nice with arbitrary other programs running and taking some of the cpu time. Once a worker thread wakes up to do work, it should stay awake as long as possible and do all the work it can, that is, it shouldn't have to go to sleep and wait for more work to get fired so it can wake up again.

2. Worklets can have dependencies on other worklets. That way you can set up a dependency tree and fire it, and it will run in the right order. eg. if I want to run A, then B, then run C only after A & B are both done, you fire worklets {A}, {B} and {C: dependent on AB}. Dependencies can be evaluated by the worker threads. That's crucial because it means they don't need to stall and wait for the main thread to fire new work to them.

3. The main thread can block or check status on any Worklet. The main thread (or other game threads) might keep running along, and the worker threads may be doing the work, and we want to be able to see if they are done or not. In particular we don't want to just support the OpenMP style "parallel for" where we fork a ton of work and then immediately block the main thread on it - we want real asynchronous function calls. Often in games we'll want to make the main thread(s) higher priority than the worker threads, so that the worker threads only run in idle time.
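
Putting 2 and 3 together, usage looks roughly like this (hypothetical names, just to show the shape of it, not the real API) :

// fire A and B, then C which depends on both, and keep running the game :
WorkletHandle A = Dispatcher_Fire( &DoWorkA, NULL, 0 );
WorkletHandle B = Dispatcher_Fire( &DoWorkB, NULL, 0 );

WorkletHandle deps[2] = { A, B };
WorkletHandle C = Dispatcher_Fire( &DoWorkC, deps, 2 );   // workers run C only once A & B are done

// ... main thread goes on doing game stuff ...

if ( ! Dispatcher_IsDone(C) )
    Dispatcher_Wait(C);     // block only if we actually need the result right now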

The actual worker threads I implemented with a work-stealing scheme. It's not true work-stealing at all, because there is a concept of a "main thread" that runs the game and pushes work, and there's also the dependencies that need to be evaluated. All of the thread-thread communication is lock free. When the main thread adds new work items it just jams them onto queues. When the worker threads pop off work items they just pop them off queues. I do currently use a lock for dependency evaluation.

Traditional work stealing (and the main papers) are designed for operating system threads where the threads themselves are the ones making work. In that environment, the threads push work onto queues and then pop it off for themselves or steal from peers. There are custom special lock-free data structures designed for this kind of operation - they are fast to push & pop at one end, but also support popping at the other end (stealing) but more slowly. What I'm doing is not traditional work stealing. I have external threads (the "game threads") that do not participate in the work doing, but they can push work to the workers. In my world the workers currently can never make new work (that could be added if it's useful).

There are a lot of nice things about work stealing. One is you don't need a separate dispatcher thread running all the time (which would hurt you with more context switches). Another is that workers who have cpu time can just keep jamming along by stealing work. They sort of do their own dispatching to themselves, so the threads that have the cpu do the work of the dispatching. It also offloads the dispatching work from the main thread. In my system, the workers do all work they can that's known to be dependency-okay. Once that work is exhausted they reevaluate dependencies to see if more work can be done, so they do the dependency checking work for themselves off the main thread.

Another nice thing about work stealing is that it's self-balancing in the face of external activity. Anything running on a PC has to face lots of random CPU time being stolen. Even in a console environment you have to deal with the other threads taking variable amounts of time. For example if you have 6 cores, you want 6 worker threads. But you might also have 3 other threads, like a main thread, a gpu-feeding thread, and a sound thread. The main thread might usually take 90% of the cpu, so the worker on that core rarely gets any time, the gpu-feeder might usually take 50% of the cpu time on that thread, but in phases, like it takes 100% of the cpu for half the frame then goes idle for the other half. With work stealing your worker thread will automatically kick in and use that other time.

In order for the self balancing to work as well as possible you need small worklets. In fact, the possible wasted time is equal to the duration of the longest task. The time waste case happens like this :


Fire N work items, each taking time T, to N cores
For some reason one of the cores is busy with something else so only (N-1) of them do work
Now you block on the work being done and have to wait for that busy core, so total time is 2T

Total time should have been N*T/(N-1)
For N large the waste approaches T

Basically the smaller T is (the duration of longest work) the more granular the stealing self-allocation is. Another easy way to see it is :

You are running N workers and a main thread
The main thread takes most of the time on core 0
You fire (N-1) very tiny work items and the (N-1) non-main cores pick them up
You fire a very large work item and core 0 worker picks it up

That's a disaster. The way to avoid it is to never fire single large work items - if you would have split that into lots of little work items it would have self-balanced, because the stealing nature means that only worker threads that have time take tasks.

For example, with something like a DXTC encoder, rather than split the image into something like N rectangles for N cores and fire off 1 work item to each core, you should go ahead and split it into lots of tiny blocks. Of course this requires that the per-work-item overhead is extremely low, which of course it is because we are all lock-free and goodness.

There are some things I haven't done yet that I might. One is to be a bit smarter about the initial work dispatching, try to assign to the CPU that will have the most idle time. If you actually did make nice tiny worklets all the time, that wouldn't be an issue, but that's not always possible. In the case that you do make large work items, you want those to go the cores that aren't being used by other threads in your game.

Another issue is the balance of throughput vs latency. That is, how fast does the system retire work, vs. how long does it take any individual work item to get through. Currently everything is optimized for throughput. Work is done in a roughly FIFO order, but with dependencies it's not guaranteed to be FIFO, and with the work stealing and variations in CPU Time assignment you can have individual work items that take a lot longer to get through the system than is strictly necessary. Usually this isn't a big deal, but sometimes you fire a Worklet and you need it to get done as quickly as possible. Or you might need to get done inside a certain deadline, such as before the end of the frame. For example you might fire a bunch of audio decompression, but set a deadline to ensure it's done before the audio buffers run out of decompressed data. Handling stuff like that in a forward-dispatched system is pretty easy, but in work-stealing it's not so obvious.

Another similar issue is when the main thread decides to block on a given work item. You want that item to get done as soon as possible by the thread that has the highest probability of getting a lot of CPU time. Again not easy with work-stealing since some worker thread may have that item in its queue but not be getting much CPU time for some reason.
