8/10/2010

08-10-10 - Transmission of Huffman Trees

Transmission of Huffman Trees is one of those peripheral problems of compression that has never been properly addressed. There's not really any research literature on it, because in the N -> infinity case it disappears.

Of course in practice it can be quite important, particularly because we don't actually just send one huffman tree per file. All serious compressors that use huffman resend the tree every so often. For example, to compress bytes you might extend your alphabet to [0..256] inclusive, where 256 means "end of block" ; when you decode a 256, you are either at the end of file, or you read another huffman tree and start on the next block. (I wrote about how the encoder might make these block split decisions here ).

So how might you send a Huffman tree?

For background, you obviously do not want to actually send the codes themselves. The Huffman code value should be implied by the symbol identity and the code length. The so-called "canonical" codes are created by assigning code values in increasing numeric order to symbols of the same length, taken in their alphabetical (symbol id) order. You also don't need to send the character counts and have the decoder make its own tree; you send the tree directly in the form of code lengths.

So in order to send a canonical tree, you just have to send the code lens. Now, not all symbols in the alphabet necessarily occur in the block. Those technically have a code length of "infinite", but most people store them as code length 0, which is an invalid length for characters that do occur. So you have to code :


which symbols occur at all
which code lengths occur
which symbols that do occur have which code length

Now I'll go over the standard ways of doing this and some ideas for the future.

The most common way is to make the code lengths into an array indexed by symbol and transmit that array. Code lengths are typically in [1,31] (or even narrower, [1,16] ; by far the most common range is [4,12]), and you use 0 to indicate "symbol does not occur". So you have an array like :


{ 0 , 0 , 0 , 4 , 5 , 7 , 6, 0 , 12 , 5 , 0, 0, 0 ... }
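To make "canonical" concrete : here's a minimal sketch (plain C, my names, not from any particular codec) of assigning canonical codes from such a codelen array. Count symbols per length, compute the first code of each length, then hand out consecutive codes in symbol order :

```c
#include <assert.h>

#define MAX_CODELEN 16

/* Assign canonical Huffman codes from code lengths (0 = symbol absent).
   Symbols of the same length get consecutive code values in symbol id order. */
static void canonical_codes(const int *codelen, int numSymbols, unsigned *code)
{
    int lenCount[MAX_CODELEN + 1] = { 0 };
    unsigned nextCode[MAX_CODELEN + 1] = { 0 };

    for (int s = 0; s < numSymbols; s++)
        lenCount[codelen[s]]++;
    lenCount[0] = 0; /* absent symbols get no codes */

    /* first code of each length : (first code of prev len + count of prev len) << 1 */
    unsigned c = 0;
    for (int len = 1; len <= MAX_CODELEN; len++) {
        c = (c + lenCount[len - 1]) << 1;
        nextCode[len] = c;
    }

    for (int s = 0; s < numSymbols; s++)
        if (codelen[s] > 0)
            code[s] = nextCode[codelen[s]]++;
}
```

eg. lengths {2,1,3,3} give codes {10, 0, 110, 111} - symbols of equal length just count up, so the lengths alone determine everything.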

1. Huffman the huffman tree ! This code length array is just another array of symbols to compress - you can of course just run your huffman encoder on that array. In a typical case you might have a symbol count of 256 or 512 or so, so you have to compress that many bytes, and then your "huffman of huffmans" will have a symbol count of only 16 or so, so you can then send the tree for the secondary huffman in a simpler scheme.

2. Delta from neighbor. The code lens tend to be "clumpy", that is, they have correlation with their neighbors. The typical way to model this is to subtract each (non-zero) code len from the previous (non-zero) code len, turning them into deltas from neighbors. You can then take these signed deltas and "fold up" the negatives to make them unsigned, and then use one of the other schemes for transmitting them (such as huffman of huffmans). (actually delta from an FIR or IIR filter of previous lens is better).
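The "fold up" is just the usual zigzag-style interleave of negatives and positives; a sketch (my names) :

```c
#include <assert.h>

/* Fold a signed delta to unsigned : 0,-1,1,-2,2,... -> 0,1,2,3,4,... */
static unsigned fold_delta(int d)
{
    return (d >= 0) ? (unsigned)(2 * d) : (unsigned)(-2 * d - 1);
}

/* Inverse mapping for the decoder. */
static int unfold_delta(unsigned u)
{
    return (u & 1) ? -(int)((u + 1) >> 1) : (int)(u >> 1);
}
```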

3. Runlens for zeros. The zeros (does not occur) in particular tend to be clumpy, so most people send them with a runlen encoder.

4. Runlens of "same". LZX has a special flag to send a bunch of codelens in a row with the same value.

5. Golomb or other "variable length coding" scheme. The advantage of this over Huffman-of-huffmans is that it can be adaptive, by adjusting the golomb parameter as you go (there are standard ways to estimate the golomb parameter from the values seen so far). The other advantage is you don't have to send a tree for the tree.
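A sketch of the Rice (power-of-2 Golomb) flavor, which is what you'd actually use here; names are mine, and the parameter estimate is the crudest mean-based choice, nothing clever :

```c
#include <assert.h>

/* Bits used by the Rice code for v with parameter k :
   quotient (v>>k) in unary (+1 stop bit), then k raw remainder bits. */
static int rice_len(unsigned v, int k)
{
    return (int)(v >> k) + 1 + k;
}

/* Crude adaptive parameter choice : pick k so that 2^k tracks the running mean. */
static int rice_k_from_mean(unsigned sum, unsigned count)
{
    unsigned mean = (count > 0) ? sum / count : 0;
    int k = 0;
    while ((1u << (k + 1)) <= mean + 1)
        k++;
    return k;
}
```

The decoder sees the same previous values, so it tracks the same k and no side information is needed - that's the "don't have to send a tree for the tree" part.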

6. Adaptive Arithmetic Code the tree! Of course if you can Huffman or Golomb code the tree you can arithmetic code it. This actually is not insane; the reason you're using Huffman over Arithmetic is for speed, but the Huffman will be used on 32k symbols or so, and the arithmetic coder will only be used on the 256-512 or so Huffman code lengths. I don't like this just because it brings in a bunch more code that I then have to maintain and port to all the platforms, but it is appealing because it's much easier to write an adaptive arithmetic coder that is efficient than any of these other schemes.

BTW that's a general point that I think is worth stressing : often you can come up with some kind of clever heuristic bit packing compression scheme that is close to optimal. The real win of adaptive arithmetic coding is not the slight gain in efficiency, it's the fact that it is *so* much easier to compress anything you throw at it. It's much more systematic and scientific; you have tools, you make models, you estimate probabilities and compress them. You don't have to sit around fiddling with "oh I'll combine these two symbols, then I'll write a run length, and this code will mean switch to a different coding", etc.

Okay, that's all standard review stuff, now let's talk about some new ideas.

One issue that I've been facing is that coding the huffman tree in this way is not actually very nice for a decoder that wants to construct its tables very quickly. (I wind up seeing the build-tree time show up in my profiles, even though I only build the tree 5-10 times per 256k symbols). The issue is that it's in the wrong order. To build the canonical huffman code, what you need is the symbols in order of codelen, from lowest codelen to highest, with the symbols sorted by id within each codelen. That is, something like :


codeLen 4 : symbols 7, 33, 48
codeLen 5 : symbols 1, 6, 8, 40 , 44
codeLen 7 : symbols 3, 5, 22
...

obviously you can generate this from the list of codelens per symbol, but it requires a reshuffle which takes time.
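The reshuffle itself is just a stable counting sort (sketch in plain C, my names) - the work isn't complicated, it's just extra passes the decoder shouldn't have to make :

```c
#include <assert.h>

#define MAX_CODELEN 31

/* Produce symbols sorted by (codelen ascending, symbol id ascending),
   skipping absent symbols (codelen 0). Returns how many were placed. */
static int sort_by_codelen(const int *codelen, int numSymbols, int *sorted)
{
    int count[MAX_CODELEN + 1] = { 0 };
    int start[MAX_CODELEN + 1];

    for (int s = 0; s < numSymbols; s++)
        count[codelen[s]]++;

    /* prefix sum gives the start of each codelen group */
    int pos = 0;
    for (int len = 1; len <= MAX_CODELEN; len++) {
        start[len] = pos;
        pos += count[len];
    }

    /* stable scatter : visiting symbols in id order keeps ids sorted per group */
    for (int s = 0; s < numSymbols; s++)
        if (codelen[s] > 0)
            sorted[start[codelen[s]]++] = s;
    return pos;
}
```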

So, maybe we could send the tree directly in this way?

One approach is through counting combinations / enumeration. For each codeLen, you send the # of symbols that have that codeLen. Then you have to select which symbols have that codelen. If there are M symbols of that codeLen and N remaining unclaimed symbols, the number of ways is N!/(M!*(N-M)!), and the number of bits needed to send the combination index is log2 of that. Note in this scheme you should also send the positions of the "not present" codeLen=0 group, but you can skip sending the largest group entirely, since it's implied by the remainder. You should also send the groups in order from smallest to largest (actually in order of min(size, complement size) ; a group that's nearly full is as good as a group that's nearly empty).
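For the bit counting you don't need bignums at reasonable alphabet sizes; a sketch (my names; 64-bit math, so it overflows for really big N choose M) :

```c
#include <assert.h>

/* C(n,m) by the incremental formula C(n,k+1) = C(n,k)*(n-k)/(k+1) ;
   each division is exact in integers. Overflows 64 bits for large n,m. */
static unsigned long long comb(int n, int m)
{
    unsigned long long c = 1;
    for (int i = 0; i < m; i++)
        c = c * (unsigned long long)(n - i) / (unsigned long long)(i + 1);
    return c;
}

/* ceil(log2(C(n,m))) : bits to send a combination index as a plain integer. */
static int comb_index_bits(int n, int m)
{
    unsigned long long c = comb(n, m);
    int bits = 0;
    while ((1ULL << bits) < c)
        bits++;
    return bits;
}
```

(taking the ceil loses a fraction of a bit per group versus arithmetic coding the index, which is part of why this scheme is only okay.)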

I think this is an okay way to send huffman trees, but there are two problems : 1. it's slow to decode a combination index, and 2. it doesn't take advantage of modelling clumpiness.

Another similar approach is binary group flagging. For each codeLen, you want to specify which remaining symbols are of that codelen or not of that codelen. This is just a bunch of binary off/on flags. You could send them with a binary RLE coder, or the elegant thing would be Group Testing. Again the problem is you would have to make many passes over the stream and each time you would have to exclude already done ones.

(ADDENDUM : a better way to do this which takes more advantage of "clumpiness" is like this : first code a binary event for each symbol to indicate codeLen >= 1 (vs. codeLen < 1). Then, on the subset that is >= 1, code an event for whether it's >= 2, and so on. This is the same number of binary flags as the above method, but when the clumpiness assumption is true this gives you flags that are very well grouped together, so they will code well with a method that exploits coherent binary runs (such as runlengths)).
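A sketch of generating those layered flags (my names; the alive-list bookkeeping is the whole trick) :

```c
#include <assert.h>

#define MAX_SYMS 256

/* Emit layered threshold flags : pass 1 codes "codeLen >= 1 ?" for every
   symbol; pass t codes "codeLen >= t ?" only for survivors of pass t-1.
   Returns the total number of flags emitted. */
static int layered_flags(const int *codelen, int n, int maxLen,
                         unsigned char *flags)
{
    int alive[MAX_SYMS];
    int numAlive = n, numFlags = 0;

    for (int s = 0; s < n; s++)
        alive[s] = s;

    for (int t = 1; t <= maxLen && numAlive > 0; t++) {
        int kept = 0;
        for (int i = 0; i < numAlive; i++) {
            int s = alive[i];
            int f = (codelen[s] >= t);
            flags[numFlags++] = (unsigned char)f;
            if (f)
                alive[kept++] = s; /* survivors go on to the next threshold */
        }
        numAlive = kept;
    }
    return numFlags;
}
```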

Note that there's another level of redundancy that's not being exploited by any of these coders. In particular, we know that the tree must be a "prefix code", that is, satisfy Kraft ( Sum 2^-L = 1 ). This constrains the code lengths. (this is most extreme for the case of small trees; for example with a two symbol tree the code lengths are completely specified by this; on a three symbol tree you only have one free choice - which one gets the length 1, etc).
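Checking the Kraft equality is cheap in integer math, scaled by 2^maxLen so there's no float error (sketch, my names) :

```c
#include <assert.h>

/* A full prefix code satisfies Sum 2^-len == 1 over the present symbols.
   Work scaled by 2^maxLen so everything is exact integer arithmetic. */
static int kraft_ok(const int *codelen, int numSymbols, int maxLen)
{
    unsigned long long one = 1ULL << maxLen;
    unsigned long long sum = 0;
    for (int s = 0; s < numSymbols; s++)
        if (codelen[s] > 0)
            sum += one >> codelen[s];
    return sum == one;
}
```

A coder could use the running deficit to tighten the model as the lens arrive - eg. once the sum hits 1, every remaining symbol must be absent.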

Another idea is to use MTF for the codelengths instead of delta from previous. I think that this would be better, but it's also slower.
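For reference, MTF on the codelen values is tiny to implement (sketch, my names) - the slowness is the per-symbol list maintenance :

```c
#include <assert.h>

#define MAX_VAL 32

/* Move-to-front : output the rank of each value in a recency list,
   then move that value to the front. Recently seen lens get small ranks. */
static void mtf_encode(const int *in, int n, int maxVal, int *out)
{
    int table[MAX_VAL + 1];
    for (int v = 0; v <= maxVal; v++)
        table[v] = v;

    for (int i = 0; i < n; i++) {
        int v = in[i], r = 0;
        while (table[r] != v)
            r++;
        out[i] = r;
        for (int j = r; j > 0; j--) /* shift down, put v at front */
            table[j] = table[j - 1];
        table[0] = v;
    }
}
```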

Finally when you're sending multiple trees in a file you should be able to get some win by making the tree relative to the previous one, but I've found this is not trivial.

I've tried about 10 different schemes for huffman tree coding, but I'm not going to have time to really solve this issue, so it will remain neglected as it always has.

08-10-10 - One Really nice thing about the PS3

Is that PS3RUN takes command line args and my app just gets argc/argv. And I get stdin/stdout that just works, and direct access to the host disk through app_home/. That is all hella nice.

It means I can make command line test apps for regression and profiling and just run them and pass in file name arguments for testing on.

08-10-10 - HeapAllocAligned

How the fuck is there no HeapAllocAligned on Win32 ?

and yes I know I can use clib or do it myself or whatever, but still, WTF ?

08-10-10 - A small note on memset16

On the SPU I'm now making heavy use of a primitive op I call "memset16" ; by this I don't mean that it must be 16-byte aligned, but rather that it memsets 16-byte patterns, not individual bytes :

void rrSPU_MemSet16(void * ptr,qword pattern,int count)
{
    // count is in bytes and is assumed to be a multiple of 16
    qword * RADRESTRICT p = (qword *) ptr;
    char * end = ((char *)ptr + count );

    while ( (char *)p < end )
    {
        *p++ = pattern;
    }
}

(and yes I know this could be faster, this is the simple version for readability).

The interesting thing has been that taking a 16-byte pattern as input actually makes it way more useful than normal byte memset. I can now also memset shorts and words, floats, doubles, and vectors! So this is now the way I do any array assignments when a chunk of consecutive elements are the same. eg. instead of doing :


float fval = param;
float array[16];

for(int i=0;i<16;i++) array[i] = fval;

you do :

float array[16];
qword pattern = si_from_float(fval);

rrSPU_MemSet16(array,pattern,sizeof(array));

In fact it's been so handy that I'd like to have it on other platforms, at least up to U32.
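A portable flavor for the U32 case might look like this (a sketch; plain C, my names, byte count assumed a multiple of 4) :

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Fill count bytes at ptr with a repeating 32-bit pattern.
   count is assumed to be a multiple of 4. */
static void MemSet32(void *ptr, uint32_t pattern, size_t count)
{
    uint32_t *p = (uint32_t *)ptr;
    uint32_t *end = p + (count / 4);
    while (p < end)
        *p++ = pattern;
}
```

Same idea as the SPU version : punning a float or four bytes into the pattern lets one routine serve all the small types.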

8/06/2010

08-06-10 - The Casey Method

The "Casey Method" of dealing with boring coding tasks is to instead create a whole new code structure to do them in. eg. need to write some boring GUI code? Write a new system for making GUIs.

When I was young I would reinvent the wheel for everything stupidly. Then I went through a maturation phase where I cut back on that and tried to use existing solutions and be sensible. But now I'm starting to see that a certain judicious amount of "Casey Method" is a good idea for certain programmers.

For example, if I had to write a game level editor tool in C# the straightforward way, it would probably be the most efficient way, but that's irrelevant, because I would probably kill myself before I finished, or just quit the job, so the actual time to finish would be infinite. On the other hand, if I invented a system to write the tool for me, and it was fun and challenging, it might take longer in theory, and not be the most efficient/sensible/responsible/mature way to go, but it would keep me engaged enough that I would actually do it.

08-06-10 - Visual Studio File Associations

This is my annoyance of the moment. I rarely double-click CPP/H files, but once in a while it's handy.

On my work machine, I currently have VC 2003,2005,2008, and 2010 installed. Which one should the CPP/H file open in?

The right answer is obviously : whichever one is currently running.

And of course there's no way to do that.

Almost every day recently I've had a moment where I am working away in VC ver. A , and I double click some CPP file, and it doesn't pop up, and I'm like "WTF", and then I hear my disk grinding, and I'm like "oh nooes!" , and then the splash box pops up announcing VC ver. B ; thanks a lot guys, I'm really glad you started a new instance of your gargantuan hog of a program so that I could view one file.

Actually if I don't have a DevStudio currently running, then I'd like CPP/H to just open in notepad. Maybe I have to write my own tool to open CPP/H files. But even setting the file associations to point at my own tool is a nightmare. Maybe I have to write my own tool to set file associations. Grumble.

(for the record : I have 2008 for Xenon, 2005 is where I do most of my work, I keep 2003 to be able to build some old code that I haven't ported to 2005 templates yet, and 2010 to check it out for the future).

08-06-10 - More SPU

1. CWD instruction aligns the position you give it to the size of the type (byte,word,dword). God dammit. I don't see this documented anywhere. CWD is supposed to generate a shuffle key for word insertion at a position (regA). In fact it generates a shuffle key for insertion at position ( regA & ~3 ). That means I can't use it to do arbitrary four byte moves.

2. Shufb is not mod 32. Presumably because it has those special 0 and FF selections. This is not a huge deal because you can fix it by doing AND splats(31) , but that is enough slowdown that it ruins shufb as a fast path for doing byte indexing. (people use rotqby instead because that is mod 16).

A related problem for Shufb is that there's no Byte Add. If you had byte add, and shufb was mod 32, then you could generate grabbing a 16-byte portion of two quads by adding an offset.

In order to deal with this, you have to first mod your offset down to [0,15] so that you won't overflow, then you have to see if your original offset before modding had the 16 bit set, and if so, swap the reg order you pass to shuffle. If you had a byte add and shuffle was mod 32, you wouldn't have to do any of that and it would be way faster to do grabs of arbitrary qwords from two neighboring qwords.
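For reference, the operation all that work is building - written here as a portable C model, definitely not SPU code - is just :

```c
#include <assert.h>

/* Grab the 16-byte window starting at offset (0..16) from two consecutive
   quadwords : a holds bytes 0..15, b holds bytes 16..31. */
static void grab_window(const unsigned char *a, const unsigned char *b,
                        int offset, unsigned char *out)
{
    for (int i = 0; i < 16; i++) {
        int idx = offset + i;
        out[i] = (idx < 16) ? a[idx] : b[idx - 16];
    }
}
```

With a byte add and a mod-32 shufb this would be one add to bias the shuffle key; instead it's the mod/compare/swap dance described above.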

(also there's no fast way to generate the typical shuffle mask {010203 ...}. )

3. You can make splats a few different ways; one is to rely on the "spu_splats" builtin, which figures out how to spread the value (usually using FSMB or some such variant). But you can also write it directly using something like :

(vec_int4) { -1,-1,-1,-1 }

which the compiler will either turn into loading a constant or some code to generate it, depending on what it thinks is faster.

Some important constants are hard to generate, the most common being (vec_uchar16){0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15} , so you might want to load that into a register and then pass it around if you need it much. (if you are using the stdlib memcpy at all, you can also get it from __Shuffles).

4. Something that I always want with data compression and bit manipulation is a "shift right with 1 bits coming in". Another one is bit extract and bit insert , grab N bits from position M. Also just "mask with N bits" would always be nice, so I don't have to do ((1 << N)-1) (or whatever) to generate masks. Obviously the SPU is not intended for bit manipulation. I'm sure getting Blu-Ray working on it was a nightmare.

5. If you build with -fpic (to make your ELF relocatable), then all your static addresses are relative to r126, which means if you just reference some static in your inner loop it will do a ton of math all the time (the add of r126 is not a big deal, but fetching the address over and over is a lot of work) ; as usual if you just copy it out to a local :


   combinedDecodeTable = s_combinedDecodeTable;

the compiler will "optimize" out the copy and will do all the math to compute s_combinedDecodeTable in your inner loop. So you have to use the "volatile trick" :

    volatile UINTa v_hack;
    v_hack = (UINTa) s_combinedDecodeTable;
    const CombinedDecodeTable * RADRESTRICT const combinedDecodeTable = (const CombinedDecodeTable * RADRESTRICT const) v_hack;

sigh.

6. memcpy seems to be treated differently than __builtin_memcpy by the compiler. They both call the same code, but sometimes calling memcpy() doesn't inline, but __builtin_memcpy does. I don't know WTF is up with that (and yes I have -fbuiltins set and all that shit, but maybe there's some GCC juju I'm missing). Also, it is a decent SPU-optimized memcpy, but the compiler doesn't have a true "builtin" knowledge of memcpy for SPU the way it does on some platforms - eg. it doesn't know that for types that are inherently 16-byte aligned it can avoid the rollup/rolldown, it just always calls memcpy. So if you're doing a lot of memcpys of 16-byte aligned stuff you should probably have your own.

7. This one is a fucking nightmare : I put in some "scheduling barriers" (asm volatile empty) to help me look at the disassembly for optimization (it makes it much easier to trace through because shit isn't spread all over). After debugging a bit, I forgot to take out the scheduling barriers and ran my speed test again -

it was faster.

Right now my code is 5-10% faster with a few scheduling barriers manually inserted (19 clocks per symbol vs 21). That's fucking bananas, it means the compiler's scheduler is fucking up somehow. It sucks because there's no way for me to place them in a systematic way, I have to randomly try moving them around and see what's fastest.

8. Lots of little things are painfully important on the SPU. For example, obviously most of your functions should be inline, but sometimes you have to make them forceinline (and it's a HUGE difference if you don't because once a val goes through the stack it really fucks you), but then sometimes it's faster if it's not inline. Another nasty one is manual scheduling. Obviously the compiler does a lot of scheduling for you (though it fucks up as noted previously), but in many cases it can't figure out that two ops are actually commutative. eg. say you have something like A() which does shared++ , and B() also does shared++ , then the compiler can't tell that the sequence {A B} could be reordered to {B A} , so you have to manually try both ways. Another case is :


A()

if ( x )
{
    B()
    .. more ..
}
else
{
    B()
    .. more ..
}

and again the compiler can't figure out that the B() could be hoisted out of the branches (more normally you would probably write it with B outside the branch, and the compiler won't bring it in). It might be faster inside the branches or outside, so you have to try both. Branch order reorganization can also help a lot. For example :

if ( x )
    if ( y )
        .. xy case
    else
        .. xNy case
else
    if ( y )
        .. Nxy case
    else
        .. NxNy case

obviously you could instead do the if(y) first then the if(x). Again the compiler can't do this for you and it can make a big difference.

This kind of shit affects every platform of course, but on a nice decent humane architecture like a Core iX/x86 it's < 1%. On SPU it's 5-10% which is nothing to sneeze at and forces you to do this annoying shit.

9. One thing that's annoying in general about the SPU (and even to some extent the PPU) is that some code patterns can be *SO* slow that your non-inner-loop parts of the code dominate.

On the wonderful PC/x86 , you basically can just find your hot spots and optimize those little routines, and ignore all your glue and startup code, cuz it will be reasonably fast. Not true on the SPU. I had some setup code that built some acceleration tables that's only called once, and then an inner loop that's called two hundred thousand times. The setup code takes 10% of the total time. On the PC it doesn't even show up as a blip. (On the Xenon you can get this same sort of thing if your setup code happens to do signed-16-bit loads, variable shifts, or load-hit-stores). On the SPU my setup code that was so bad was doing lots of byte reads & writes, and lots of branches.

The point is on the PC/x86 you can find 10% of your code base that you have to worry about for speed, and the other 90% you just make work. On the SPU you have to be aware of speed on almost all of your code. Blurg that blows so bad.

10. More generally, I found it just crucial to have my SPU ELF reloader running all the time. Every time I touch the code in any way, I hit "build" and then look over and see my speed, because my test app is just running on the PS3 all the time reloading the SPU ELF and running the test. Any time you do the most innocuous thing - you move some code around, change some variable types - check the speed, because it may have made a surprisingly big difference. (*)

(* = there is a disadvantage to this style of optimization, because it does lead you into local minima; that is, I wind up being a greedy perturbation search in the code space; some times in order to find real minima you have to take a big step backwards).

11. For testing, be careful about what args you pass to system_init ; make sure you aren't sharing your SPUs with the management processes. If you see unreliable timing, something is wrong. (5,1) seems to work okay.

8/05/2010

08-05-10 - P4 Shelf

The fact that I can't access the new "p4 shelve" from the old P4Win client makes it almost useless to me. P4V is ridiculously awful. It takes like 30 seconds to start up, I don't know WTF they're doing, but I know it's not okay.

Shelving with just one workspace as a way to save your work seems to work okay, but I've been using it to make temp checkins to go between my various computers, and it's not awesome for that. I guess what I really have to do is make real branches for that, but branches scare the bejeesus out of me.

8/04/2010

08-04-10 - Initial Learnings of SPU

  • Yes I know I'm 4 (5?) years late.

  • The SPU decrementer runs at the same frequency as mftb on the PPU, that's 79.8 MHz

  • There are two different ways the intrinsics are exposed; the spu_ ones are C++-style and are supposed to know about types; the si_ ones just expose the instructions directly and are untyped. The spu_ intrinsics are kind of annoying because they are overloaded C++, but they don't get it right, so you get compile errors all the time. The vec_ types don't really seem to work right with the intrinsics. The best practice I've found is just to use the untyped "qword" and the "si_" intrinsics, and cast to a vec_ type if you want to grab a value out (like ((vec_uint4)w)[0] ).

  • SPU GCC seems to have the same problem as the PPU GCC and the Xenon MSVC compiler, in that all of them incorrectly optimize out assignments to different types. I've never seen this problem on MSVC/x86 , but maybe it's just that x86 is pretty good about running the same speed no matter what the type of the variable. The problem with the PowerPC and SPU is that performance turns out to be very sensitive to data types. So, I have lots of code that does something like :
    
    int x = struct.array[7];
    
    for(... many .. )
    {
        .. do stuff with x ..
    }
    
    
    and what I'm seeing is that the "do stuff with x" is generating a load from struct.array[7] every time, and the assignment into my temp has been eliminated completely. On the SPU this shows up when you have a struct member int m_x and you do something like :
    
    qword x = spu_promote( struct.m_x , 0 );
    
    for(... many .. )
    {
        .. do stuff with x ..
    }
    
    
    and the compiler decides that rather than load m_x once and keep it in a register it will reload it all the time. It's like the optimizer has a very early pass to merge variables that have just been renamed by assignment - but it is incorrectly not accounting for side effects on the code gen. Weird. Anyway this brings us to :

  • The SPU has only vector registers and instructions. Despite that, basic int math is mostly okay. The "top" ([0]) int in a qword can be used basically like an int register. You get all the basic ops on it and it runs full speed. The big problem is with loads and stores. You can only load/store aligned quadwords, so accessing something like a U32 from a memory location involves some shuffling around. rotqby solves your loads, and cbd/chd/cwd/cdd + shufb solves your stores.

    One annoyance is that since only the top word can be used for things like loads, you often lose your SIMD. Like say you want to generate 4 addresses and do 4 loads, you can do your math on 4 channels, but then you have to copy the results out of channels 1-3 into other registers in slot 0 (using shuf or rotqby), and this often is more expensive than just doing the math 4 separate times. (because the instructions to transfer across lanes are slower than math).

  • You can, unlike old SSE, do a lot of ops that are "horizontal" across the quadword. There are nice bit scatters and gathers, as well as byte rotates, shuffles, etc. You can treat the qword as a big 128 bit register and rotate around the whole thing, but it's not fast; a rotate left takes two instructions (rotqbi and rotqbybi) while a rotate right takes three (rotqbi, rotqbybi, + a subtract)

  • So far as I can tell, __align_hint doesn't do anything for you. For review, __attribute__ ((aligned (16))) is the way you specify that a certain variable should be aligned by the compiler. __align_hint on the other hand lets it know that a given pointer which was not specified as aligned in fact is aligned (you know this as a programmer). In theory it could generate better code from that, but I don't see it, which is related to :

  • It certainly doesn't do magic about combining loads. For example, say you have a pointer to a struct that's struct { int x,y,z,w; }. You tell it __align_hint and then you do struct.x ++ , struct.y ++. In theory, it should load that whole struct as one single quadword load, do a bunch of work on it, then store it back as one quadword. It won't do that for you. The compiler generates separate loads and shuffles to get out each member, so you have to do all this kind of thing by hand.

  • The SPU has no branch predictor. Or rather it has a "zero bit" branch predictor - it always predicts branch not taken (unless you manually hint it, see later). It is deeply pipelined, so it will run ahead on the expected branch, and then if you actually go the other way, it's a 20 clock penalty. That's not nearly as bad as a missed branch on most chips, so it's nothing to tear up your code over (eg. it's less than an L1 miss on the PPU, which is 40 clocks!).

    To make use of the static branch predictor, organize branches so the likely part is first (the "if" part is likely, the "else" is unlikely). You can override this with if (__builtin_expect(expr, 1/0)) , I use these :

    
    #define IF_LIKELY(expr)   if ( __builtin_expect(expr,1) )
    #define IF_UNLIKELY(expr) if ( __builtin_expect(expr,0) )
    
    
    but Mike says it's best just to inherently arrange your code. The way this is actually implemented on the SPU is with a branch hint instruction. The instruction sequence to the SPU will look like :
    
    branch hint - says br is likely
    .. stuff ..
    branch
    
    
    if branch was unlikely, there's no need for a hint, so it's only generated to tell the SPU that the branch is likely, eg. the instruction fetch needs to jump ahead to the branch location. The branch hint needs to be 15 clocks or so before the branch. One compile option that turns out to be important is
    
    -mhint-max-nops=2
    
    
    which tells GCC to insert nops to make the branch hint far enough away from the branch. Right now 2 is optimal for me, but YMMV, it's a number you have to fiddle with.

    Now, in theory you can also do dynamic branch prediction. If you can compute the predicate for your branch early enough, you can see if it will be taken or not and either hint or not hint (obviously not using a branch, but using a cmov/select). I have not found a way to make the compiler generate this, so this seems to be reserved for pure assembly in super high performance code.

  • Dual issue; The SPU is a lot like the old Pentium, it has dual pipes and different types of instructions go on either pipe. Unfortunately, for data compression almost everything I want is on the odd pipe - all the loads & stores, branches, and all the cross-quadword stuff (shuffles and rotqby and such) are odd pipe. The instructions have to be aligned right (they have to alternate even and odd); if you're writing ASM you need to be super-aware of this; if you're just writing C, the compiler should do it for you; with intrinsics, you're sort of in between. The compiler should theoretically schedule the intrinsics to make them line up with the right pipe, but if you only give it odd pipe intrinsics it can't do anything. Because of that I saw odd things like :
    
    // these two instructions are much faster 
    
        qword vec_item_addr = si_a( vec_fastDecodeTable , vec_peek );
        qword vec_item = si_rotqby(vec_item_data,vec_item_addr);
    
    // than this one !?
    
        qword vec_item = si_rotqby(vec_item_data,vec_peek);
    
    
    Part of the solution to that is to compile with "-mdual-nops=1" which lets the compiler insert nops to fix dual issue alignment. The =1 was the best for me, but again YMMV. BTW one problem with the SPU is that there is code size pressure (just to fit in the 256k), and all these nops do make the code size bigger.

    ADDENDUM : just found an important issue with the dual pipe; instruction fetch is on odd pipe (obvious I guess, all fetch is on odd pipe). That means when you overload the odd pipe, not only are you failing to dual issue - you are fighting with instruction fetch for time. That's a disaster. So pretty much any time you can replace a load/store/rotq/shufb/etc. with a math op on even pipe, it's a win, even if you need 2-3 math instructions to do the same odd pipe instruction.

  • Optimizing for SPU is not as obvious as Insomniac and some others might lead you to believe. If you just replace C with vector ops, you can easily make your code slower. The problem is that the latencies and dependency stalls are not obvious. Latencies are mostly around 2-6 , with lots at 4, which seems short until you realize that 4 clocks = 8 instructions, which makes it pretty hard to make sure you are not dependency-bound. For example , this :
    
        4370:   42 7f ff ad     ila $45,65535   # ffff
        4374:   38 8d c8 33     lqx $51,$16,$55 // load quad for codelen
        4378:   18 0d c8 36     a   $54,$16,$55 // 54 = address of codelen
        437c:   18 0d db b5     a   $53,$55,$55 // 53 = peek*2
        4380:   1c 03 5b 34     ai  $52,$54,13  // 52 = align byte for codelen
        4384:   38 83 9a b0     lqx $48,$53,$14 // load qw for table[peek]
        4388:   18 03 9a b2     a   $50,$53,$14 // 50 = address of symbol
        438c:   3b 8d 19 af     rotqby  $47,$51,$52 // rotate to get byte for codelen
        4390:   1c 03 99 31     ai  $49,$50,14  // 49 = adjust to get word symbol
        4394:   14 3f d7 ac     andi    $44,$47,255 // &= 0xFF to get byte 44 = codelen
        4398:   3b 8c 58 2e     rotqby  $46,$48,$49 // rot by to get symbol
        439c:   3b 0b 06 2b     rotqbi  $43,$12,$44   // use
        43a0:   08 03 56 0d     sf  $13,$44,$13   // use bits
        43a4:   18 2b 57 07     and $7,$46,$45   // $7 = symbol , 45 = ffff
        43a8:   39 8b 15 8c     rotqbybi    $12,$43,$44  // use
    
    
    is faster than this :
    
        4360:   0f 60 d7 2c     shli    $44,$46,3   // *= 8 to make table address
        4364:   38 8b 07 aa     lqx $42,$15,$44 // load qw for table
        4368:   18 0b 07 ab     a   $43,$15,$44 // 43 = address of table item
        436c:   3b 8a d5 29     rotqby  $41,$42,$43 // rotate qword to get two dwords at top
        4370:   08 02 d4 8b     sf  $11,$41,$11 // use
        4374:   3b 0a 45 27     rotqbi  $39,$10,$41 // use
        4378:   3f 81 14 87     rotqbyi $7,$41,4    // rotate for symbol
        437c:   39 8a 53 8a     rotqbybi    $10,$39,$41 // use
    
    

    Anyway, the point is if you want to really get serious, you have to map out your dependency chains and look at the latency of each step and try to minimize that, rather than just trying to reduce instructions.

  • One particularly annoying example of bad code gen. If you do this :
    
    int x,y;
    
    loop {
    
    if ( x <= y )
    
    
    }
    
    
    it will *sometimes* do the right thing (the right thing is to make x and y qwords and do all the int ops on the [0] element). However, sometimes it decides that they aren't important enough and it will leave them as ints on the stack, in which case you get loads & shuffles to fetch them.

    More troubling is the fact that you cannot easily trick the compiler to fix it. You might think that this :

    
    vec_int4 x,y;
    
    loop {
    
    if ( x[0] <= y[0] )
    
    
    }
    
    
    would do the trick - you're telling the compiler that I want them to be qwords, but I only use the top int part. Nope, that doesn't work, in fact it often makes it *much worse*. In theory, the compiler should be able to tell that I am only ever touching x[0], so when I do
    
    x[0] ++
    
    
    it should be able to know that it can go ahead and increment the *whole* vector x with a plain old add (eg. also increment x[1],2,3), but it doesn't always do that, and you wind up getting instructions to extract the portion you're working on. Sigh.

    The only reliable way I have found is to go ahead and use the qword intrinsics

    
    qword x,y;
    
    loop {
    
    qword comp = si_clgt( x, y );
    if ( comp[0] )
    
    
    }
    
    
    but the problem with that is that you now have to be very careful about what instructions you choose to generate for your operations - you can often be slower than the plain C using intrinsics because you aren't thinking about scheduling and pairing. Also, once you start using intrinsics, you can't use too much of the [0] form of access any more because of the above note.

    To be clear, what I would like is a way to say "I'm going to only work on this as an int, but please keep it in a vector and ignore the bottom 3 components and generate the code that is fastest to give me the [0] result" , but there doesn't seem to be a reliable way to make the compiler do that. I dunno, maybe there's a way to get this.

8/02/2010

08-02-10 - Work

Work is a compulsion. I've been working way too much lately, it's hurting my back and my shoulder, making me depressed. There's no real pressure for me to do it, all the pressure comes from myself. For one thing I do feel like I need to get a lot of things done really quick. First I have to finish this optimization / cross-platform shit I'm doing, then I want to get my threading stuff cross-platform and tested better, then I need to get back to video and finish some things. I feel like I really need to finish this video stuff and I have to do it fast.

But more than that, work is like a mental tic that I sometimes indulge in. It's an autistic fugue. You go into this hole where all you can think about is technical issues, and it's horrible, but it also sort of feels good. Like taking a poo, or playing with a loose tooth. Then when I get into this state I just can't stop. I try to relax with N, but all I can think about is spu_shufb and should I be using __align_hint ? And does DMA invalidate ll-sc reservations? And I have to go back to work.

I imagine it's a bit like having OCD. It's not like the OCD guy really wants to count the lines in the wood grain. But if he walks into the room and doesn't count them, it just eats at his brain - "must count wood grain" - repeating over and over. It's not like working really makes me happy; after a day of solid work I don't feel good, in fact quite the opposite, my brain feel fried from fatigue, and my body is in great pain from sitting too much, but I can't resist it, and if I don't work I just keep thinking "work work work".
