9/12/2010

09-12-10 - Challenges in Data Compression 2 - Sparse Contexts

A "sparse" context is one with very little information in it. Exactly what defines "sparse" depends on the situation, but roughly it means it hasn't been seen enough times that you can trust the evidence in it. Obviously a large-alphabet context needs a lot more information to become confident than a binary context, etc.

In the most extreme case, a sparse context might only have 1 previous occurrence. Let's take the binary case for clarity, so we have {n0=1,n1=0} or {n0=0,n1=1} in a single context.

There are two issues with a sparse context :

1. What should our actual estimate for P0 (the probability of a zero) be? There's a lot of theory about ideal estimators and you can go read papers on the unbiased estimator or the KT estimator or whatever that will give you formulas like


P0 = (n0 + k) / (n0 + n1 + 2k)

k = 0 or 1/2 or whatever depending on the flavor of the month

but this stuff is all useless in the sparse case. At {n0=1,n1=0} the actual probability of a 0 might well be 99% (or it could be 10%) and these estimators won't help you with that.
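To make that concrete, here's a quick numeric sketch (my code, not from the post) of what those estimators actually say at {n0=1,n1=0} :

```cpp
// Generic add-k estimator; k = 1/2 gives KT, k = 1 gives Laplace.
double EstimateP0(int n0, int n1, double k)
{
    return (n0 + k) / (n0 + n1 + 2*k);
}

// KT (k=1/2)    : EstimateP0(1,0,0.5) = 0.75
// Laplace (k=1) : EstimateP0(1,0,1.0) = 0.666..
// Either way you get something around 0.7, whether the true probability of a
// zero in this context is 0.99 or 0.10 - the estimator has no way to know.
```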

2. Making an estimate of the confidence you have in the sparse context. That is, deciding whether to code from it at all, or how to weight it vs other contexts. Again the important thing is that the statistics in the context itself do not really help you solve this problem.

Now let me take a second to convince you how important this problem is.

In classical literature this issue is dismissed because it is only a matter of "initialization" and the early phase, and over infinite time all your contexts become rich, so asymptotically it doesn't affect ratio at all. Not only is that dismissive of practical finite size data, but it's actually wrong even for very large data. In fact sparse contexts are not just an issue for the early "learning" phase of a compressor.

The reason is that a good compressor should be in the "learning" phase *always*. Basically to be optimal, you want to be as close to the end of sparse contexts as possible without stepping over the cliff into your statistics becoming unreliable. There are two reasons why.

1. Model growth and the edges of the model. The larger your context, or the more context information you can use, the better your prediction will be. As you get more data, the small contexts might become non-sparse, but that just means that you should be pushing out more into longer contexts. You can think of the contexts as tree structured (even if they actually aren't). Around the root you will have dense information, the further out you go to the leaves the sparser they will be.

For example, PPM* taught us that using the longest context possible is helpful. Quite often this context has only occurred once. As you get further into the file, your longest possible context simply gets longer, it doesn't get less sparse.

2. Locality and transients. Real files are not stationary and it behooves you to kill old information. Furthermore, even in dense contexts there is a subset of the most recent information which is crucial and sparse.

For example, in some context you've seen 50 A's and 20 B's. Now you see two C's. Suddenly you are in a sparse situation again. What you have is an insufficient sample situation. Should I still use those 50 A's I saw, or am I now 99% likely to make more C's ? I don't have enough statistics in this context to tell, so I'm in a sparse situation again.

There are various attempts at addressing this, but like all the problems in this series there's been no definitive attack on it.

9/06/2010

09-06-10 - WeightedDecayRand

I'm sure I've talked to people about this many times but I don't think I've ever written it up. Anyway, here's one from the files of game developer arcana. I'm sure this is common knowledge, but one of those things that's too trivial to write an article about.

When you use random numbers in games, you almost never actually want random numbers. Rather you want something that *feels* sort of random to the player. In particular the biggest difference is that real random numbers can be very bunchy, which almost always feels bad to the player.

Common cases are like random monster spawns or loot drops. It's really fucking annoying when you're trying to cross some terrain in an Ultima type game and you get unlucky with the random number generator and it wants to spawn a random encounter on every single fucking tile. Or if you're killing some mobs who are supposed to drop a certain loot 1 in 10 times, and you kill 100 of them and they don't fucking drop it because you keep getting unlucky in rand().

What we want is not actually rand. If you design a monster to drop loot roughly 1 in 10 times, you want the player to get it after 5 - 20 kills. You don't really want them to get it on first kill, nor do you want them to sit around forever not getting it.

In all cases, to do this what we need is a random generator which has *state* associated with the event. You can't use a stateless random number generator. So for example with loot drop, the state remembers that it has dropped nothing N times and that can start affecting the probability of the next drop. Basically what you want to do for something like that is start with a very low probability to drop loot (maybe even 0) and then have it ratchet up after non-drop. Once the loot is dropped, you reset the probability back to 0. So it's still random, but a more controllable experience for the designer.
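A minimal sketch of that ratcheting idea (names and numbers are mine, not from any real engine) :

```cpp
// Stateful drop chance : starts low, climbs on each failed drop, resets on success.
struct LootState
{
    float m_p;      // current drop chance
    float m_step;   // how much each miss ratchets it up
};

bool TryDrop(LootState & s, float rand01)  // rand01 in [0,1)
{
    if ( rand01 < s.m_p )
    {
        s.m_p = 0.f;        // got it - reset to baseline
        return true;
    }
    s.m_p += s.m_step;      // miss - more likely next time
    if ( s.m_p > 1.f ) s.m_p = 1.f;
    return false;
}
```

With a step of 0.25 the player is guaranteed the drop within a handful of kills, but never on a predictable one.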

Now, this state generally should decay in some way. That is, if you kill mobs and get no loot and then go away and do a bunch of stuff, when you come back it should be back to baseline - it doesn't remember what happened a long time ago. And then the next important issue is how does that state decay? Is it by time or by discrete events? Whether time or event driven makes the most sense depends on the usage.

The simplest version of a stateful semi-random generator is simply one that forbids repeats. Something like :


int Draw()
{
    for(;;)
    {
        int i = RandMod(m_num);
        if ( i != m_last )
        {
            m_last = i;
            return i;
        }
    }
}

which is an okay solution in some cases. But more generally you want to not just forbid repeats, rather allow them but make them less likely. And for N-ary events you don't just care about the last one, but all the last ones.

A simple binary stateful semirandom generator is like this :


int Draw()
{
    Tick();

    float p = frandunit() * ( m_p0 + m_p1 );
    if ( p < m_p0 )
    {
        m_p0 -= m_repeatPenalty;
        m_p0 = MAX(m_p0,0.f);
        return 0;
    }
    else
    {
        m_p1 -= m_repeatPenalty;
        m_p1 = MAX(m_p1,0.f);       
        return 1;
    }
}

You should be able to see that this is a simple weighted coin flip and we penalize repeats by some repeat parameter. Here we have introduced the idea of a Tick() - the tick is some function that pushes the probabilities back towards 50/50 , either by time evolution or by a fixed step.

More generally you want N-ary with various parameters. Here's some code :


//-----------------------------------------------------------

class WeightedDecayRand
{
public:

    // redrawPenalty in [0,1] - 0 is a true rand, 1 forbids repeats
    // restoreSpeed is how much chance of redraw is restored per second or per draw
    explicit WeightedDecayRand(int num,float redrawPenalty,float restoreSpeed);
    ~WeightedDecayRand();

    int Draw();
    void Tick(float dt);
    
    int DrawFixedDecay();
    int DrawTimeDecay(double curTime);

private:
    int     m_num;
    float * m_weights;  
    float   m_redrawPenalty;
    float   m_restoreSpeed;
    float   m_weightSum;
    double  m_lastTime;

    FORBID_CLASS_STANDARDS(WeightedDecayRand);
};

//-----------------------------------------------------------

WeightedDecayRand::WeightedDecayRand(int num,float redrawPenalty,float restoreSpeed) :
    m_num(num),
    m_redrawPenalty(redrawPenalty),
    m_restoreSpeed(restoreSpeed),
    m_lastTime(0)
{
    m_weights = new float [num];
    for(int i=0;i < num;i++)
    {
        m_weights[i] = 1.f;
    }
    m_weightSum = (float) num;
}

WeightedDecayRand::~WeightedDecayRand()
{
    delete [] m_weights;
}

int WeightedDecayRand::Draw()
{
    float p = frandunit() * m_weightSum;
    for(int i=0;;i++)
    {
        if ( i == m_num-1 || p < m_weights[i] )
        {
            // accepted !
            m_weightSum -= m_weights[i];
            m_weights[i] -= m_redrawPenalty;
            m_weights[i] = MAX(m_weights[i],0.f);
            m_weightSum += m_weights[i];
            return i;
        }
        else
        {
            p -= m_weights[i];
        }
    }
}

void WeightedDecayRand::Tick(float dt)
{
    m_weightSum = 0;
    for(int i=0;i < m_num;i++)
    {
        if ( m_weights[i] < 1.f )
        {
            m_weights[i] += dt * m_restoreSpeed;
            m_weights[i] = MIN(m_weights[i],1.f);
        }
        m_weightSum += m_weights[i];
    }
}

int WeightedDecayRand::DrawFixedDecay()
{
    int ret = Draw();
    
    Tick( 1.f );
    
    return ret;
}

int WeightedDecayRand::DrawTimeDecay(double curTime)
{
    if ( curTime != m_lastTime )
    {
        Tick( (float)(curTime - m_lastTime) );
        m_lastTime = curTime;
    }

    int ret = Draw();
        
    return ret;
}

//-----------------------------------------------------------

09-06-10 - Cross Platform SIMD

I did a little SIMD "Lookup3" (Bob Jenkins' hash), and as a side effect, it made me realize that you can almost get away with cross platform SIMD these days. All the platforms do 16-byte SIMD, so that's nice and standard. The capabilities and best ways of doing things aren't exactly the same, but it's pretty close, and you can mostly cover the differences. Obviously to get super maximum optimal you would want to special case per platform, but even then having a base cross-platform SIMD implementation to start from would let you get all your platforms going easier and identify the key-spots to do platform specific work.

Certainly for "trivial SIMDifications" this works very well. Trivial SIMDification is when you have a scalar op that you are doing a lot of, and you change that to doing 4 of them at a time in parallel with SIMD. That is, you never do horizontal ops or anything else funny, just a direct substitution of scalar ops to 4-at-a-time vector ops. This works very uniformly on all platforms.

Basically you have something like :

U32 x,y,z;

    x += y;
    y -= z;
    x ^= y;

and all you do is change the data type :

simdU32 x,y,z;

    x += y;
    y -= z;
    x ^= y;

and now you are doing four of them at once.
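A sketch of the SSE flavor of that substitution (Mix4 is my name, not any real API; it's just those three ops done four lanes at a time) :

```cpp
#include <emmintrin.h>

// four independent lanes of the scalar mixing code, one SSE op per line :
static inline __m128i Mix4(__m128i x, __m128i y, __m128i z)
{
    x = _mm_add_epi32(x, y);    // x += y , four at a time
    y = _mm_sub_epi32(y, z);    // y -= z
    x = _mm_xor_si128(x, y);    // x ^= y
    return x;
}
```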

The biggest issue I'm not sure about is how to define the data type.

From a machine point of view, the SIMD register doesn't have a type, so you might be inclined to just expose a typeless "qword" and then put the type information in the operator. eg. have a generic qword and then something like AddQwordF32() or AddQwordU16() . But this is sort of a silly argument. *All* registers are typeless, and yet we find data types in languages to be a convenient way of generating the right instructions and checking program correctness. So it seems like the ideal thing is to really have something like a qwordF32 type, etc for each way to use it.

The problem is how you actually do that. I'm a little scared that anything more than a typedef might lead to bad code generation. The simple typedef method is :


#if SPU
typedef qword simdval;
#elif XENON
typedef __vector4 simdval;
#elif SSE
typedef __m128 simdval;
#endif

But the problem is if you want to make them have their type information, like say :

typedef __vector4 simdF32;
typedef __vector4 simdU32;

then when you make an "operator +" for F32 and one for U32 - the compiler can't tell them apart. (if for no good reason you don't like operator +, you can pretend that says "Add"). The problem is the typedef is not really a first class type, it's just an alias, so it can't be used to select the right function call.

Of course one solution is to put the type in the function name, like AddF32,AddU32,etc. but I think that is generally bad code design because it ties operation to data, which should be as independent as possible, and it just creates unnecessary friction in the non-simd to simd port.

If you actually make them a proper type, like :


struct simdF32 { __m128  m; };
struct simdU32 { __m128i m; };

then you can do overloading to get the right operation from the data type, eg :


RADFORCEINLINE simdU32 operator + ( const simdU32 lhs, const simdU32 rhs )
{
    simdU32 ret = { _mm_add_epi32(lhs.m,rhs.m) };
    return ret;
}

RADFORCEINLINE simdF32 operator + ( const simdF32 lhs, const simdF32 rhs )
{
    simdF32 ret = { _mm_add_ps(lhs.m,rhs.m) };
    return ret;
}

(that's the SSE flavor; the __vector4 / qword versions wrap the same way)

The problem is that there is some reason to believe that anything but the fundamental type is not handled as well by the compiler. That is, qword,__vector4, etc. get special very good handling by the compiler, and anything else, even a struct which consists of nothing but that item, gets handled worse. I haven't actually seen this myself, but there are various stories around the net indicating this might be true.

I think the typedef way is just too weak to actually be useful, I have to go the struct way and hope that modern compilers can handle it. Fortunately GCC has the very good "vector" keyword thingy, so I don't have to do anything fancy there, and MSVC is now generally very good at handling mini objects (as long as everything inlines).

Another minor annoying issue is how to support reinterprets. eg. I have this simdU32 and I want to use it as a simdU16 with no conversion. You can't use the standard C method of value at/address of, because that might go through memory which is a big disaster.
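On SSE at least there are dedicated no-op cast intrinsics that stay in registers; a sketch of the SSE-only flavor (wrapper names are mine) :

```cpp
#include <emmintrin.h>

// pure reinterprets : bits unchanged, no code generated, never touches memory
__m128i AsU32(__m128  f) { return _mm_castps_si128(f); }
__m128  AsF32(__m128i i) { return _mm_castsi128_ps(i); }
```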

And the last little issue is whether to provide conversion to a typeless qword. One argument for that would be that things like Load() and Store() could be implemented just from qword, and then you could fill all your data types from that if they have conversion. But if you allow implicit conversion to/from typeless, then all your types can be directly made into each other. That usually confuses the hell out of function overloading among other woes.

9/04/2010

09-04-10 - Holy fuck balls

MSVC 2005 x64 turns this :

static RADFORCEINLINE void rrMemCpySmall(void * RADRESTRICT to,const void * RADRESTRICT from, SINTa size)
{
    U8 * RADRESTRICT pto = (U8 * RADRESTRICT) to;
    const U8 * RADRESTRICT pfm = (const U8 * RADRESTRICT) from;
    for(SINTa i=0;i < size;i++) pto[i] = pfm[i];
}

into

call memcpy

god damn you, you motherfuckers. The whole reason I wrote it out by hand is because I know that size is "small" ( < 32 or so) so I don't want the function call overhead and the rollup/rolldown of a full memcpy. I just want you to generate rep movsb. If I wanted memcpy I'd fucking call memcpy god dammit. Friggle fracking frackam jackam.


In other news, I know it's "cool" to run with /W4 on MSVC or /Wall on GCC (or even /Wextra), but I actually think it's counterproductive. The difference between /W3 and /W4 is almost entirely warnings that I don't give a flying fuck about, shit like "variable is unused". Okay fine, it's not used, fucking get rid of it and don't tell me about it.

Shit like "variable initialized but not used" , "conditional is constant", "unused static function removed" is completely benign and I don't want to hear about it.

I've always been peer-pressured into running with max warning settings because it's the "right" thing to do, but actually I think it's just a waste of my time.


MoveFileTransacted is pretty good. There's other transactional NTFS shit but really this is the only one you need. You can write your changes to a temp file (on your ram drive, for example) then use MoveFileTransacted to put them back on the real file, and it's nice and safe.

BTW while I'm ranting about ram drives; how the fuck is that not in the OS? And my god people are getting away with charging $80 for them. AND even those expensive ones don't support the only feature I actually want - dynamic sizing. It should be the fucking size of the data on it, not some static predetermined size.

9/03/2010

09-03-10 - Page file on Ramdrive = 3LIT3

One of the awesome new "tweaks" that computer buffoons are doing is making ramdisks and putting their windows swap file on the ram disk. Yes, brilliant, that speeds up your swap file alright. So awesome.

(BTW yes I do know that it actually makes sense in one context : some of the fancy ram drives can access > 3 GB RAM in 32 bit windows, in which case it gives you a way to effectively access your excess RAM in an easy way on the old OS; but these geniuses are on Win7-64).

(And of course the right thing to do is to disable page file completely; sadly Windows has not kept up with this modern era of large RAMs and the page file heuristics are not well suited to use as an emergency fallback).


What I have copied from the tweakers is putting my firefox shite on a ram disk :

SuperSpeed Firefox By Putting Profile and SQLite Database in RAMDisk Raymond.CC Blog
Guide Move 99% of All Firefox Writes off your SSD

I can get away with a 128M ram disk and it massively reduces the amount of writes to my SSD. Reducing writes to the SSD is nice for my paranoia (it also speeds up firefox startup and browsing), but the really strong argument for doing this is that I've caught SQLite eating its own ass a few times. I really hate it when I'm just sitting there doing nothing and all of a sudden my fan kicks in and my CPU shoots up, I'm like "WTF is going on god dammit". Well the last few times that's happened, it's been the SQL service fucking around on its files for no god damn good reason. I'm not sure if putting the firefox SQL DB files on the ramdisk actually fixes this, but it makes it cheaper when it does happen anyway.

Also, don't actually use the above linked methods. Instead do this :

Do "about:config" and add "browser.cache.disk.parent_directory" . Obviously also check browser.cache.disk.enable and browser.cache.disk.capacity ; capacity is in KB BTW.

Run "firefox -ProfileManager". Make a new profile and call it "ramprofile" or whatever. Change the dir of that profile to somewhere on the ramdisk. Look at your other profile (probably "default") and see what dir it's in. Make "ramprofile" the default startup profile.

Quit firefox and copy the files from your default profile dir to your ramprofile dir. Run firefox and you are good to go.

Do not set your ramdisk to save & restore itself (as some of the fancy ramdisks can do these days). Instead put the profile dir copy operation in your machine startup bat. It's somewhat faster to zip up your default profile dir and have your machine startup bat copy the zip over to the ramdisk and unzip it there.

There's another advantage of this, which is that your whole Firefox profile is now temporary for the session, unless you explicitly copy it back to your hard disk. That's nice. If you pick up tracking cookies or whatever while browsing, or some malicious script changes your home page or whatever, they go away when you reboot.

09-03-10 - LZ and Exclusions

I've been thinking a lot about LZ and exclusions.

In particular, LZMA made me think about this idea of excluding the character that follows from the previous match. eg. after a match, if the string we were matching from was ABCD... and we just wrote ABC of length 3, we know the next character is not a D. LZMA captures this information by using XOR (there are possibly other effects of the xor, I'm not sure).
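A tiny sketch of the mechanism (my reading of it, not LZMA's actual code) :

```cpp
// The literal right after a match can never equal the byte that would have
// continued the match, so coding (actual ^ predicted) guarantees the coded
// value is nonzero, and the model gets that information for free.
unsigned char CodeLiteralAfterMatch(unsigned char actual, unsigned char predicted)
{
    // actual != predicted is guaranteed, so this is never 0
    return (unsigned char)(actual ^ predicted);
}

unsigned char DecodeLiteralAfterMatch(unsigned char coded, unsigned char predicted)
{
    return (unsigned char)(coded ^ predicted);
}
```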

An earlier LZ77 variant named "LZPP" had even more ideas about exclusions. I won't detail that, but I'll try to do a full treatment here.

There are various ways that exclusions arise from LZ77 coding. First of all we will simplify and assume that you always code the longest possible match at each location, and you always take a match if a match is possible. These assumptions are in fact not valid in modern optimal parse coders, but we will use them nonetheless because it makes it easier to illustrate the structure of the LZ77 code.

First let's look at the exclusion after matches. Any time you are in the character after a match, you know the next character cannot be the one that continues the match, or you would have just written a longer match. But that is not just true of the match string you chose, but also of all the match strings you could have chosen.

Say for example you just wrote a match of length 3 and got the string [abc]. Now you are on the next character. But previously occurring in the file are [abcd] and [abce] and [abcf]. ALL of those following characters can be excluded, not just the one that was in your actual match string. In particular, in a context coding sense, if the match length was L, you are now effectively coding from a context of length L and you know the next character must be novel in that context.

Also note that this applies not only to literal coding, but also to match coding. All offsets you could code a match from that start with one of the excluded characters can be skipped from the coding alphabet.
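As a concrete sketch (my code, brute force rather than anything you'd ship), the full exclusion set can be gathered by scanning every earlier occurrence of the matched context :

```cpp
#include <set>
#include <string>

// After coding a match of length L ending at pos, every previous occurrence
// of the same length-L context excludes its following character.
std::set<char> ExcludedAfterMatch(const std::string & data, size_t pos, size_t L)
{
    std::string ctx = data.substr(pos - L, L);   // the L chars we just matched
    std::set<char> excluded;
    for (size_t i = 0; i + L < pos; i++)         // every earlier occurrence of ctx
        if ( data.compare(i, L, ctx) == 0 )
            excluded.insert(data[i + L]);
    return excluded;
}
```

With data "abcdXabceYabc" and a length-3 match of "abc" at the end, both 'd' and 'e' are excluded, matching the [abcd]/[abce] example above.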

There's another type of exclusion in LZ coding that is powerful, and that is exclusion due to failure to match. Say your min match length is 2. You've written a literal and now you are about to write another literal. You can exclude all characters which would have been a match at the last position. That is,

You've just written [a] as a literal. The digrams [ab], [ac], [ad] occur in the file. You can thus exclude {b,c,d} from the current literal, because if it was one of those you would have written a match at the previous spot.

There are two primary states of an LZ coder : after a match and after a literal (there's also an implicit state which is inside a match, continuing the match).

Coding with general exclusions is difficult to do fast, and difficult to do with a bitwise binary style coder. You can code one exclusion using the xor method, but getting beyond that seems impossible. It looks like you have to go to a full general alphabet coder and a tree structure like Fenwick or some relative.


Another issue for LZ is the redundancy of offsets. In particular the most obvious issue is that when you write short matches, many offsets are actually identical, so having the full set to choose from is just wasting code space.

eg. say you code a match of length 2, then offsets which point to a string which is the same as another string up to length 2 are redundant.

There are two ways to take advantage of this :

1. Code length first. Walk offsets backwards from most recent (lowest offset) to highest. Each time you see a substring of the [0,length) prefix that you've never seen before, count that as offset++, otherwise just step over it without incrementing offset. Basically you are just considering a list of the unique [length] substrings that have occurred in the file, and they are sorted in order of occurrence.

2. Code offset first. Walk from that offset back towards the current position (from large offset down to small). Match each string against the chosen one at offset. The largest substring match length is the base match length. Code (length - base). This relies on the fact that if there was a match of that substring with a lower offset, we would have coded from the lower offset. So whenever we code a larger offset, we know it must be a longer match than we could have gotten somewhere later in the list.

These seem outrageously expensive, but they actually can be incorporated into an ROLZ type coder pretty directly. Method 1 can also incorporate exclusions easily, you just skip over offsets that point to a substring that begins with an excluded character. One disadvantage of method 1 is that it ruins offset structure which we will talk about in the next post.
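A brute-force sketch of method 1 (names are mine, and this ignores the overlapping-match subtlety) :

```cpp
#include <set>
#include <string>

// Map a raw offset to its coded index by walking offsets from most recent to
// oldest and only counting offsets whose [0,length) prefix is novel.
// Returns -1 if rawOffset is redundant (same prefix as a more recent offset).
int CodedOffsetIndex(const std::string & data, size_t pos, size_t length, size_t rawOffset)
{
    std::set<std::string> seen;
    int coded = 0;
    for (size_t off = 1; off <= pos; off++)     // smallest offset = most recent
    {
        if ( pos - off + length > data.size() ) continue;  // prefix runs off the end
        std::string prefix = data.substr(pos - off, length);
        if ( seen.insert(prefix).second )        // novel prefix : gets a slot
        {
            if ( off == rawOffset ) return coded;
            coded++;
        }
        else if ( off == rawOffset ) return -1;  // duplicates a more recent offset
    }
    return -1;
}
```

In "abab" at pos 4 with length 2, offset 4 points at "ab" which is identical (up to length 2) to the string at offset 2, so it never gets a code slot.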

8/31/2010

08-31-10 - LOOP

Every once in a while I think this is a good idea :

#define LOOP(var,count) for(int var=0;(var)<(int)(count);var++)
#define LOOPBACK(var,count) for(int var=(int)(count)-1;(var)>=0;var--)

but then I never wind up using it, and I find the code that does use it looks really ugly. Maybe if I got in the habit that ugliness would go away.

Certainly adding a "loop" keyword would've been a good idea in C99. Instead we have GCC trying to optimize out signed int wrapping, and many compilers now specifically look for the for(;;) construct and special-case handling it as if it was a loop() keyword.


In other minor news, I'm running with the "NoScript" addon now. I've used FlashBlock for a long time to much happiness, so this is just the next level. It does break 90% of web sites out there, but it has also made me shockingly aware of how many random sites are trying to run scripts that are highly questionable (mainly data-mining and ad-serving).


People sometimes ask me about laptops because I wrote about them before :

cbloom rants 05-07-10 - New Lappy
cbloom rants 04-15-10 - Laptop Part 2
cbloom rants 04-12-10 - Laptop search
cbloom rants 01-20-09 - Laptops Part 3

A small update : I'm still very happy with the Dell Precision 6500. They've updated it with some newer GPU options, so you can now get it with an ATI 7820 , which is a full DX11 part and close to as fast as you can get mobile. Other than that all my advice remains the same - get USB 3 of course, quad i7, LED backlighting, install your own SSD and RAM and do a clean windows install. Do NOT get RAID disks, and do NOT install any Intel or Dell drivers. I have no DPC latency problems or any shite like that. The only thing about it that's slightly less than perfect is the temperature / fan control logic is not quite right.

It looks like there's a problem with the NV GTX 280 / Dual chip / Powermizer stuff. See :

My Solution for Dell XPS M1530 DPC Latency
Dell, DPC Latency, and You - Direct2Dell - Direct2Dell - Dell Community
Dell Latitude DPC Latency Issues

So I recommend staying away from that whole family. The new Vaio Z looks really awesome for a small thin/light, however there is a bit of a nasty problem. They only offer it with SSD's in RAID, and they are custom non-replaceable SSD's, and it appears that the reason for the 4 SSD's in RAID is because they are using cheapo low end flash. There's lots of early reports of problems with this setup, and the fact that you have to run RAID and can't change the drives is a big problem. Also, last time I checked Windows still can't send through TRIM to RAID, so that is borked, but theoretically that will be addressed some day.

8/27/2010

08-27-10 - Cumulative Probability Trees

I've been thinking about Cumulative Probability Trees today; if you prefer, it's partial-sum trees. Basically you have a count for each index C[i] , and you want to be able to quickly get the sum of all counts between 0 and i = S[i]. You want to be able to query and update both in <= logN time. So for example just using S[i] as an array makes query be O(1), but then update is O(N), no good.

The standard solution is Fenwick Trees. They are compact (take no more room than the C[i] table itself). They are O(log(N)) fast. In code :


F[i] contains a partial sum of some binary range
the size of the binary range is equal to the lowest set bit of i
if (i&1) == 1 - it contains C[i]
if (i&3) == 2 - contains Sum of 2 ending with i (eg. C[i]+C[i-1] )
if (i&7) == 4 - contains Sum of 4 ending with i
if (i&0xF) == 8 - contains Sum of 8 ending with i
(follow the link above to see pictures). To get C[i] from F[i] you basically have to get the partial sums S[i] and S[i-1] and subtract them (you can terminate early when you can tell that the rest of their sum is a shared walk). The logN walk to get S from F is very clever :

sum = 0;
while (i > 0)
  {
    sum += F[i];
    i = i & (i-1);
  }

The i & (i-1) step turns off the bottom bit of i, which is the magic of the Fenwick Tree structure being the same as the structure of binary integers. (apparently this is the same as i -= i & -i , though I haven't worked out how to see that clearly).

If you put F[0] = 0 (F starts indexing at 1), then you can do this branchless if you want :


sum = 0;
UNROLL8( sum += F[i]; i = i & (i-1); );

(for an 8-bit symbol, eg 256 elements in tree).
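To make the layout concrete, here's a small runnable sketch (my code, not from the post) that builds F by brute force and checks the walk :

```cpp
#include <vector>

// F is 1-based; F[i] holds the sum of C over the binary range of size
// lowbit(i) ending at i, exactly as described above.
std::vector<int> BuildFenwick(const std::vector<int> & C)
{
    int n = (int)C.size();
    std::vector<int> F(n+1, 0);
    for (int i = 1; i <= n; i++)
    {
        int range = i & -i;                 // size of the range F[i] covers
        for (int j = i - range; j < i; j++)
            F[i] += C[j];
    }
    return F;
}

int PrefixSum(const std::vector<int> & F, int i)  // sum of C[0..i-1]
{
    int sum = 0;
    while ( i > 0 )
    {
        sum += F[i];
        i = i & (i-1);                      // turn off the bottom bit
    }
    return sum;
}
```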

You can't beat this. The only sucky thing is that just querying a single probability is also O(logN). There are some cases where you want to query probability more often than you do anything else.

One solution to that is to just store the C[i] array as well. That doesn't hurt your update time much, and gives you O(1) query for count, but it also doubles your memory use (2*N ints needed instead of N).

One option is to keep C[i], and throw away the bottom level of the Fenwick tree (the odd indexes that just store C[i]). Now your memory use is (3/2)*N ; it's just as fast but a little ugly.

But I was thinking what if we start over. We have the C[i], what if we just build a tree on it?

The most obvious thing is to build a binary partial sum tree. At level 0 you have the C[i], at level 1 you have the sum of pairs, at level 2 you have the sum of quartets, etc :


showing the index that has the sum of that slot :

0:01234567
1:00112233
2:00001111
3:00000000

So update is very simple :

Tree[0][i] ++;
Tree[1][i>>1] ++;
Tree[2][i>>2] ++;
...

But querying a cumprob is a bit messy. You can't just go up the tree and add, because you may already be counted in a parent. So you have to do something like :

sum = 0;

if ( i&1 ) sum += Tree[0][i-1];
i>>=1;
if ( i&1 ) sum += Tree[1][i-1];
i>>=1;
if ( i&1 ) sum += Tree[2][i-1];
...

This is O(logN) but rather uglier than we'd like.

So what if we instead design our tree to be good for query. So we by construction say that our query for cumprob will be this :


sum = Tree[0][i];
sum += Tree[1][i>>1];
sum += Tree[2][i>>2];
...

That is, at each level of the tree, the index (shifted down) contains the amount that should be added on to get the partial sum that precedes you. That is, if i is >= 64 , then Tree[6][1] will contain the sum from [0..63] and we will add that on.

In particular, at level L, if (i>>L)is odd , it should contain the sum of the previous 2^L items. So how do we do the update for this?


Tree[0][i] ++;
if ( ! (i&1) ) Tree[0][i+1] ++;
if ( ! (i&2) ) Tree[1][(i>>1)+1] ++;
if ( ! (i&4) ) Tree[2][(i>>2)+1] ++;
...

that is, C[i] always bumps its own level-0 slot, and then at each level L where (i>>L) is even it also bumps the next slot up - which is odd - because i is one of the "previous 2^L items" that odd slot covers.

this is exactly complementary to the query in the last type of tree; we've basically swapped our update and query.

Now if you draw what the sums look like for this tree you get :

These are the table indexes :

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
0 1 2 3 4 5 6 7
0 1 2 3
0 1
0

These are the table contents :

C0 C0-C1 C2 C2-C3 C4 C4-C5 C6 C6-C7 C8 C8-C9 C10 C10-C11 C12 C12-C13 C14 C14-C15
0 C0-C1 0 C4-C5 0 C8-C9 0 C12-C13
0 C0-C3 0 C8-C11
0 C0-C7
0
it should be obvious that you can take a vertical sum anywhere and that will give you the partial sum to the beginning. eg. if I take a vertical sum in the 13th slot I get (C12-C13)+(0)+(C8-C11)+(C0-C7) = (C0-C13) , which is what I want.

Now what if we slide everything left so we don't have all those zeros in the front, and we'll go ahead and stick the total sum in the top :

C0 C0-C1 C2 C2-C3 C4 C4-C5 C6 C6-C7 C8 C8-C9 C10 C10-C11 C12 C12-C13 C14 C14-C15
C0-C1 0 C4-C5 0 C8-C9 0 C12-C13 0
C0-C3 0 C8-C11 0
C0-C7 0
C0-C15

Now, for each range, we stuff the value into the top slot that it covers, starting with the largest range. So (C0-C7) goes in slot 7 , (C0-C3) goes in slot 3, etc :

C0 -v- C2 -v- C4 -v- C6 -v- C8 -v- C10 -v- C12 -v- C14 -v-
C0-C1 -v- C4-C5 -v- C8-C9 -v- C12-C13 -v-
C0-C3 -v- C8-C11 -v-
C0-C7 -v-
C0-C15

(the -v- is supposed to be a down-arrow meaning you get those cell contents from the next row down). Well, this is exactly a Fenwick Tree.

I'm not sure what the conclusion is yet, but I thought that was like "duh, cool" when I saw it.

8/25/2010

08-25-10 - ProfileParser

I realized a long time ago that the work I did for my AllocParser could basically be applied as-is to parsing profiles. It's the same thing: you have a counter (bytes or clocks), you want to see who counted up that counter, get hierarchical info, etc.

Also Sean and I talked long ago about how to do the lowest impact possible manually-instrumented profiler with full information. Basically you just record the trace and don't do any processing on it during recording. All you have to do is :


Profiler_Push(label)  *ptr++ = PUSH(#label); *ptr++ = rdtsc();
Profiler_Pop( label)  *ptr++ = POP( #label); *ptr++ = rdtsc();

#define PUSH(str)  (U64)(str)
#define POP(str)  -(S64)(str)

where ptr is some big global array of U64s, and we will later use the stringized label as a unique id to merge traces. Once your app is done sampling, you have this big array of pushes and pops, and you can then parse it to figure out all the hierarchical timing information. In practice you would want to use this with a scoper class to do the push/pop for you, like :

class rrScopeProfiler { rrScopeProfiler() { Push; } ~rrScopeProfiler() { Pop; } };

#define PROFILE(label)  rrScopeProfiler RR_STRING_JOIN(label,_scoper) ( #label );

Very nice! (obviously the pop marker doesn't have to provide the label, but it's handy for consistency checking).
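A self-contained sketch of the above (rdtsc is stubbed with a counter so it runs anywhere; `g_trace`, `g_ptr`, and the stub are my names; note a real implementation should merge on string contents, since identical literals aren't guaranteed to share a pointer):

```cpp
#include <cstdint>

// Stub for rdtsc() so this sketch is portable; a real build would use
// __rdtsc() (MSVC <intrin.h> / GCC <x86intrin.h>).
static uint64_t rdtsc_stub() { static uint64_t t = 0; return t += 100; }

static uint64_t g_trace[1 << 16];    // big global array of U64s
static uint64_t* g_ptr = g_trace;

// scoper: records (+label, tsc) on push and (-label, tsc) on pop;
// the label pointer doubles as the id for the later merge step
struct rrScopeProfiler
{
    const char* m_label;
    explicit rrScopeProfiler(const char* label) : m_label(label)
    {
        *g_ptr++ = (uint64_t)(uintptr_t)m_label;
        *g_ptr++ = rdtsc_stub();
    }
    ~rrScopeProfiler()
    {
        *g_ptr++ = (uint64_t)(-(int64_t)(uintptr_t)m_label);
        *g_ptr++ = rdtsc_stub();
    }
};

#define RR_STRING_JOIN2(a,b) a##b
#define RR_STRING_JOIN(a,b)  RR_STRING_JOIN2(a,b)
#define PROFILE(label) rrScopeProfiler RR_STRING_JOIN(label,_scoper)( #label )
```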

(if you want to get fancy, the profiler push/pop array should really be write-combining memory, and the stores should be movnti's on x64 (*), that way the profiler push/pop wouldn't even pollute your cache, which makes it very low impact indeed)

(* BTW movnti is exposed as _mm_stream_si64 ; unfortunately, this does not exist in VC2005, you need 2008; the fact that they took away inline asm and then failed to expose all the instructions in intrinsics was a huge "fuck you" to all low-level developers; it was terrible in the early days, they've caught up a bit more with each rev ) (note that movntq is not the same, movnti comes from normal registers).

So I did this, and made my AllocParser able to parse that kind of input and turn it into AllocRecords. (the normal mode of AllocParser is to handle stack traces of memory allocations).
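The post-processing pass itself is just a stack walk over the trace. A sketch, using explicit ids and a name map instead of raw string pointers (these are my simplifications, not AllocParser's actual interface):

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Walk the recorded (+/- id, timestamp) pairs and accumulate inclusive
// clocks per label; a negative id marks a pop.
struct Event { int64_t id; uint64_t tsc; };

std::map<std::string, uint64_t> InclusiveTimes(
    const std::vector<Event>& trace,
    const std::map<int64_t, std::string>& names)
{
    std::map<std::string, uint64_t> out;
    std::vector<Event> stack;
    for (const Event& e : trace)
    {
        if (e.id > 0)
        {
            stack.push_back(e);   // push: remember the open timestamp
        }
        else
        {
            // the pop label is redundant, but handy for consistency checking
            assert(!stack.empty() && stack.back().id == -e.id);
            out[names.at(stack.back().id)] += e.tsc - stack.back().tsc;
            stack.pop_back();
        }
    }
    return out;
}
```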

So now I can make tabview output for either top-down or bottom-up hierarchies, and also graphviz output like : lz_profile.svg .

There are a few minor things to clean up: I'd like to be able to show either seconds or clocks, I'd like to divide clocks by the total # of bytes to get a clocks-per-byte figure, and if a node only has a self time, I'd like to delete the redundant +self node.

Another option is to show the hierarchy in a totally different graphviz style, using the boxes-within-boxes method. I tried that before for allocs and it was hideous, but it might be okay for timing. If I do that, then I could combine this with a global timeline to show thread profiles over time, and then use the arrows to connect dependencies.

Then I have to buy that old plotter I've been talking about ...

8/24/2010

08-24-10 - Free Trail Maps

Don't support those Green Trails cocksuckers who charge $400 for their maps. (if you do have the Green Trails maps, I encourage you to post them for public use on the net; you fuckers have abused your right for me to respect your copyright; in fact I vaguely considered buying a set just to scan it and post it, but they're not even worth that).

It's most appalling because all the maps are based on the USGS data, which is paid for by our fucking tax dollars. Fortunately there are some perfectly good free map sites :

libremap.org : Libre Map Project - Free Maps and GIS data
digital-topo-maps.com : Free Printable Topo Maps - Instant Access to Topographic Maps
ACME Mapper 2.0

Of these, digital-topo-maps.com is the easiest to browse around cuz it just uses Google maps (actually, it seems to be like 10X faster than Google's own interface, so it's actually just a nice way to browse normal maps too).

Libre Maps is the most useful for hiking with because it has nice printer-ready pages in high quality.

Also, I sometimes forget that Google still has the Terrain maps because they are hidden under More... now. I'm sure somebody has done trail overlays for Google Terrain, but I haven't found it.
