08-28-13 - How to Crunch

Baby is like the worst crunch ever. Anyway it's got me thinking about things I've learned about how to cope with crunch.

1. There is no end date. Never push yourself at an unsustainable level, assuming it's going to be over soon. Oh, the milestone is in two weeks, I'll just go really hard and then recover after. No no no, the end date is always moving, there's always another crunch looming, never rely on that. The proper way to crunch is to find a way to lift your output to the maximum level that you can hold for an indeterminate amount of time. Never listen to anyone telling you "it will be over on day X, let's just go all-out for that", just smile and nod and quietly know that you will have the energy to keep going if necessary.

2. Don't stop taking care of yourself. Whatever you need to do to feel okay, you need to keep doing. Don't cut it because of crunch. It really doesn't take that much time, you do have 1-2 hours to spare. I think a lot of people impose a kind of martyrdom on themselves as part of the crunch. It's not just "let's work a lot", it's "let's feel really bad". If you need to go to the gym, have a swim, have sex, do yoga, whatever it is, keep doing it. Your producers and coworkers will give you shit about it with passive aggressive digs: "ooh, I'm glad our crunch hasn't cut into your workout time, none of the rest of us are doing that". Fuck them; don't let the inefficient, less-productive martyrs peer pressure you into being stupid.

3. Resist the peer pressure. Just decided this is worth its own point. There's a lot of stupid peer pressure in crunches. Because others are suffering, you have to also. Because others are putting in stupid long hours at very low productivity, you have to also. A classic stupid one is the next point -

4. Go home. One of the stupidest ideas that teams get in crunches is "if someone on the team is working, we should all stay for moral support". Don't be an idiot. You're going to burn out your whole team because one person was browsing the internet a month ago when they should have been working and is therefore way behind schedule? No. Everyone else GO HOME. If you aren't on the critical path, go sleep, you might be critical tomorrow. Yes the moral support is nice, and in rare cases I do advocate it (perhaps for the final push of the game if the people on the critical path are really hitting the wall), but almost never. Unfortunately as a lead you do often need to stick around if anyone on your team is there, that's the curse of the lead.

5. Sleep. As crunch goes on, lack of sleep will become a critical issue. You've got to anticipate this and start actively working on it from the beginning. That doesn't just mean making the time to lie in bed, it means preparing and thinking about how you're going to ensure you are able to get the sleep you need. Make rules for yourself and then be really diligent about it. For me a major issue is always that the stress of crunch leads to insomnia and the inability to fall asleep. For me the important rules are things like - always stop working by 10 PM in order to sleep by 12 (that means no computer at all, no emails, no nothing), no coffee after 4 PM, get some exercise in the afternoon, take a hot shower or bath at night, no watching TV in bed, etc. Really be strict about it; your sleep rules are part of your todo list, they are tasks that have to be done every day and are not something to be cut. I have occasionally fallen into the habit of using alcohol to help me fall asleep in these insomnia periods; that's a very bad idea, don't do that.

6. Be smart about what you cut out of your life. In order to make time for the work crunch you will have to sacrifice other things you do with your life. But it's easy to cut the wrong things. I already noted don't cut self-care. (also don't cut showering and teeth brushing, for the love of god, you still have time for those). Do cut non-productive computer and other electronics time. Do cut anything that's similar to work but not work, anything where you are sitting inside, thinking hard, on a computer, not exercising. Do cut "todos" that are not work or urgent; stuff like house maintenance or balancing your checkbook, all that kind of stuff you just keep piling up until crunch is over. Do cut ways that you waste time that aren't really rewarding in any way (TV, shopping, whatever). Try not to cut really rewarding pleasure time, like hanging out with friends or lovers, you need to keep doing that a little bit (for me that is almost impossible in practice because I get so stressed I can't stop thinking about working for a minute, but in theory it sounds like a good idea).

7. Be healthy. A lot of people in crunch fall into the trap of loading up on sugar and caffeine, stopping exercising, generally eating badly. This might work for a few days or even weeks, but as we noted before crunch is always indeterminate, and this will fuck you badly long term. In fact crunch is the most critical time to be careful with your body. You need it to be healthy so you can push hard, in fact you should be *more* careful about healthy living than you were before crunch. It's a great time to cut out all sugary snacks, fast food, and alcohol.

08-28-13 - Crying Baby Algorithm

Diagnosis and solution of crying baby problem.

1. Check diaper. This is always the first check because it's so easy to do and is such a conclusive result if positive. Diaper is changed, proceed to next step.

2. Check for boredom. Change what you're doing, take baby outside and play a new game. If this works, hunger or fatigue are still likely, you're just temporarily averting them.

3. Check hunger. My wife disagrees about this being so early in the order of operations, but testing it early seems like a clear win to me. After diaper this is the test with the most reliable feedback - if baby eats, that's a positive result and you're done. You can do an initial test for hungry-sucking with a finger in the mouth (hungry-sucking is much harder and more aggressive than comfort-sucking); hungry-sucking doesn't always indicate actual hunger, nor does its absence rule hunger out, but the correlation is strong. Doing a test feed is much easier with breastfed babies than with bottles (in which case you have to warm them up and find a clean nipple and so on). If baby is extremely upset, then failure to respond to a test-feed does not mean it is not hungry; you may have to repeat with stronger measures (see below).

4. Check for obvious stomach pain. Crying from stomach pain (gas, burps, acid, whatever) can be hard to diagnose once it becomes constant or severe (we'll get to that case below). But in the early/mild stages it's pretty easy to detect by testing body positions. If baby cries like crazy on its back but calms down upright or on its stomach (in a football hold), stomach pain is likely. If baby quiets down with abdomen pressure and back-patting, stomach pain is likely.

5. Look for obvious fatigue. The signs of this are sometimes clear - droopy eyes, yawning, etc. Proceed to trying to make baby sleep.

This is the end of the easy quick checks. If none of these work you're getting into the problem zone, where there may be multiple confounding factors, or the factors may have gone on so long that baby no longer responds well to them (eg. hungry but won't eat, gassy and patting doesn't stop the crying), so you will have to do harder checks.

Before proceeding, go back to #1 and do the easy checks again. Often it's the case that baby was just hungry, but after you did #1 it pooped and so wouldn't eat. It also helps to have a different person do the re-check; baby may not have responded to the first person but will respond to the next person doing exactly the same check the second time.

This is worth repeating because it's something that continues to be a stumbling block for me to this day - just because you checked something earlier in the crying process doesn't mean it's not the issue. You tested hunger, she wouldn't eat. That doesn't mean you never test hunger again! You have to keep working on issues and keep re-checking. You have not ruled out anything! (except dirty diaper, stick your finger in there again to make sure, nope that's not it).

If that still doesn't work, proceed to the longer checks :

6. Check for overstimulation. This is pretty straightforward but can take a while. Take baby to a dark room, pat and rock, make shushes or sing to baby, perhaps swaddle. Just general calming. Crying may continue for quite a while even if this is the right solution so it takes some patience. (and then do 1-5 again)

7. Check for fatigue but won't go to sleep. The usual technique here is a stroller walk. (and then do 1-5 again)

8. Check for hungry but won't eat. You will have to calm baby and make it feel okay in order to eat. Proceed as per #6 (overstimulation) but stick baby on a nipple, in its most secure eating position. For us it helps to do this in the bedroom where baby sleeps and eats at night, that seems to be the most comforting place. Swaddling also helps here. Baby may even turn away and reject eating several times before it works. (and then do 1-5 again)

9. Assume unsignalled stomach pain. Failing all else, assume the problem is gastrointestinal, despite the lack of a clear GI relief test result (#4). So just walk baby around and pat its back, there's nothing else to do. (and keep repeating 1-5 again)

10. If you reach this point, your baby might just be a piece of shit. Throw it out and get a better one.
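Tongue in cheek, the whole flowchart above can be expressed as a loop (a minimal sketch; the check names and the try_fix callback are mine, not anything real):

```python
# Sketch of the crying-baby diagnostic loop. 'try_fix' stands in for
# actually performing a check and seeing whether the crying stops.

QUICK_CHECKS = ["diaper", "boredom", "hunger", "stomach_pain", "fatigue"]
LONG_CHECKS = ["overstimulation", "fatigue_wont_sleep",
               "hungry_wont_eat", "assume_gi"]

def soothe(try_fix, max_rounds=10):
    """Run the quick checks, then interleave the long checks with re-runs
    of the quick ones -- nothing is ever ruled out."""
    for _ in range(max_rounds):
        for check in QUICK_CHECKS:      # always re-do the easy checks first
            if try_fix(check):
                return check
        for check in LONG_CHECKS:       # harder, slower checks
            if try_fix(check):
                return check
    return "get_a_better_one"           # step 10

# e.g. a baby that only responds to a test-feed:
print(soothe(lambda check: check == "hunger"))  # -> hunger
```

The key structural point is that the quick checks are re-run on every round, matching the "you have not ruled out anything" rule.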


08-27-13 - Email Woes

I'm having some problem with emails. Somewhere between gmail and my verio POP host, some emails are just not getting through.

If you send me an important email and I don't reply, you should assume I didn't get it. Maybe send it again with return-receipt to make sure. (if you send me a "funny" link and I don't reply, GTFO)

Fucking hell, I swear everyone is fired. How is this so hard. Sometimes I think that the project I should really be working on (something that would actually be important and make a difference in the world to a lot of people) is making a new internet from scratch. Fully encrypted and peer-to-peer so governments can never monitor it, and no corporate overlord controls the backbone, and text-only so fucking flash or whatever can never ruin it. Maybe no advertising either, like the good old days. But even if I built it, no one would come because it wouldn't have porn or lolcats or videos of nut-shots.


08-26-13 - OMG E46 M3's have the best ads

Everyone knows that E46 M3's attract an amazing demographic. I'm still occasionally keeping my eye out for them (I can't face the minivan!), and you come across the most incredible ads for these things.

Check this one out :

2004 E46 Bmw Laguna Blue M3

OMG it starts so good and then just gets better and better. Like you can't believe that he can top his last sentence, and then he just blows it away. Amazing. (if it's fake, it's fucking brilliant)

In white for when that expires (fucking useless broken ass piece of shit internet can't even hold some text permanently, what a fakakta load of crap everything in this life is, oy) :

Hello i got my baby blue BMW m3. Finally decided to sell. You will never find a cleaner BMW i promise. Best deal for best car. -black on black -6 speed manual -66,435 miles (all freeway) (half city)(1/2country) never racetracked. -Rebuilt title( -fixed by my brother Yuri with quality shop, he did good job). -19 inch wheels perfect for race, burnout, drift. I never do.. I promise. -AC blows hard like Alaska. -333hp but i put intake and its now 360hp at least, trust me. Powerfull 3,2 engine almost as fast as Ferarri 360. You dont believe? Look it up, youtube i have video! I can show you in the test drive.. You will feel like king on the roads... Trust me. This car puts you in extasy feeling. The pistons push hard. sometimes i say, "Big mistake- big payment, sucks job". But for you this the best because i believe ur serious. I keep it very clean, oil change every 20,000 miles. Ladies like it. My wife is mad sell, because she drives to work everyday, in seattle, and likes the power. I say we buy toyota and go vacation to Hawai. CASH ONLY( no credit, debit , fake checks, paypal, ebay, trade only for mercedes AMG, i like power. No lowball, serious buyers only, im tired of low balls offers, so please serious. I have pictures here. Thank you. I love the car so i could keep it if you dont want it. And go to casino to make payment. Im good with blackjack.

You dont believe? Look it up, youtube i have video!


08-22-13 - Sketch of Suffix Trie for Last Occurrence

I don't usually like to write about algorithms that I haven't actually implemented yet, but it seems in my old age that I will not actually get around to doing lots of things that I think about, so here goes one.

I use a Suffix Trie for string matching for my LZ compressors when optimal parsing.

reminder: Suffix Tries are really the super-awesome solution, but only for the case that you are optimal parsing not greedy parsing, so you are visiting every byte, and for large windows (sliding window Suffix Tries are not awesome). see : LZ String Matcher Decision Tree (w/ links to Suffix Trie posts)

Something has always bothered me about it. Almost the entire algorithm is this sweet gem of computer science perfection with no hackiness and a perfect O(N) running time. But there's one problem.

A Suffix Trie really just gives you the longest matching substring in the window. It's not really about the *location* of that substring. In particular, the standard construction using pointers to the string that was inserted will give you the *first* occurrence of each substring. For LZ compression what you want is the *last* occurrence of each substring.

(I'm assuming throughout that you use path compression and your nodes have pointers into the original window. This means that each step along the original window adds one node, and that node has the pointer to the insertion location.)

In order to get the right answer, whenever you do a suffix query and find the deepest node that you match, you should then visit all descendants and see if any of them have a more recent pointer. Say you're at depth D: every node below you, at depth > D, is also a substring match of the same first D bytes, so those pointers are equally valid string matches, and for LZ you want the latest one.

An equivalent alternative is instead of searching all children on query, you update all parents on insertion. Any time you insert a new node, go back to all your parents and change their pointers to your current pointer, because your pointer must match them all up to their depth, and it's a larger value.

Of course this ruins the speed of the suffix trie so you can't do that.

In Oodle I use limited parent updates to address this issue. Every time I do a query/insert (they always go together in an optimal parse, and the insert is always directly under the deepest match found), I take the current pointer and update N steps up the parent links. I tested various values of N against doing full updates and found that N=32 gave me indistinguishable compression ratios and very little speed hit.

(any fixed update count preserves the O(N) running time of the suffix trie - the cap is just a constant multiplier). (you need to walk up to parents anyway if you want to find shorter matches at lower offsets; the normal suffix lookup just gives you the single longest match).
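A minimal sketch of the limited-update heuristic, under my own assumed node layout (this is not Oodle's actual code):

```python
# Sketch of the limited parent-update heuristic. 'ptr' is the position in
# the window where the current suffix was inserted; each trie node caches
# the most recent such position seen in its subtree. Node layout is
# hypothetical, for illustration only.

class Node:
    def __init__(self, parent=None):
        self.parent = parent
        self.ptr = -1   # most recent window position known for this subtree

def update_parents(node, ptr, max_steps=32):
    """Push the new (later) window position up at most max_steps parent
    links. A full walk to the root would give exact last-occurrence
    answers but costs O(depth) per insert; the fixed cap keeps the
    whole parse linear at the cost of stale pointers high in the tree."""
    steps = 0
    while node is not None and steps < max_steps:
        node.ptr = max(node.ptr, ptr)   # later occurrence wins
        node = node.parent
        steps += 1
```

The cap is what turns the O(depth) insertion cost into a constant, which is the "one hack" the rest of the post is trying to eliminate.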

So anyway, that heuristic seems to work okay, but it just bothers me because everything else about the Suffix Trie is so pure with no tweak constants in it, and then there's this one hack. So, can we solve this problem exactly?

I believe so, but I don't quite see the details yet. The idea goes like this :

I want to use the "push pointer up to parents method". But I don't actually want to update all parents for each insertion. The key to being fast is that many of the nodes of the suffix trie will never be touched again, so we want to kind of virtually mark those nodes as dirty, and they can update themselves if they are ever visited, but we don't do any work if they aren't visited. (BTW by "fast" here I mean the entire parse should still be O(N) or O(NlogN) but not fall to O(N^2) which is what you get if you do full updates).

In particular in the degenerate match cases, you spend all your time way out at the leaves of the suffix trie chasing the "follows" pointer, you never go back to the root, and many of the updates overwrite each other in a trivial way. That is, you might do substring "xxaaaa" at "ptr", and then "xxaaaaa" at "ptr+1"; the update of "ptr" back up the tree will be entirely overwritten by the update from "ptr+1" (since it also serves as an "xxaa" match and is later), so if you just delay the update it doesn't need to be done at all.

(in the end this whole problem boils down to a very simple tree question : how do you mark a walk from a leaf back to the root with some value, such that any query along that walk will get the value, but without actually doing O(depth) work if those nodes are not touched? Though it's not really that question in general, because in order to be fast you need to use the special properties of the Suffix Trie traversal.)

My idea is to use "sentries". (this is a bit like skip-lists or something). In addition to the "parent" pointer, each node has a pointer to the preceding "sentry". Sentry steps take you >= 1 step toward root, and the step distance increases. So stepping up the sentry links might take you 1,1,2,4,.. steps towards root. eg. you reach root in log(depth) steps.
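One concrete way to realize the increasing-step idea (my assumption, not from any implementation) is a Fenwick-tree-style jump: the sentry of a node at depth d is its ancestor at depth d minus the lowest set bit of d, which reaches the root in a number of hops equal to the popcount of d:

```python
# Sketch: sentry links via the lowest-set-bit trick, so that walking
# sentries from any depth reaches the root (depth 0) in O(log depth)
# hops. One possible realization of the 1,1,2,4,... stepping; the
# depth-based scheme is my assumption.

def sentry_depth(d):
    """Depth of the sentry ancestor for a node at depth d (d > 0)."""
    return d - (d & -d)   # strip the lowest set bit of d

def sentry_walk(d):
    """Depths visited walking sentry links from depth d back to the root."""
    path = []
    while d > 0:
        d = sentry_depth(d)
        path.append(d)
    return path

print(sentry_walk(21))   # -> [20, 16, 0]
```

Note that sentry_walk(21) decomposes the walk into spans of 1+4+16, matching the power-of-2 range-marking of a length-21 span described later in the post.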

When you insert a new node, instead of walking all parents and changing them to your pointer, you walk all sentries and store your pointer as a pending update.

When you query a node, you walk to all sentries and see if any of them has a lower pointer. This effectively finds if any of your children did an update that you need to know about.

The pointer that you place in the sentry is really a "pending update" marker. It means that update needs to be applied from that node up the tree to the next sentry (ADD: I think you also need to store the range that it applies to, since a large-step range can get broken down to smaller ranges by updates). You know what branch of the tree it applies to because the pointer is the string and the string tells you what branch of the tree to follow.

The tricky bit happens when you set the pointer in the sentry node, there may be another pointer there from a previous insertion that is still pending update. You need to apply the previous pending update before you store your new pointer in the pending update slot.

Say a node contains a pending update with the pointer "a", and you come in and want to mark it with "b". You need to push the "a" update into the range that it applies to, so that you can set that node to be pending with a "b".

The key to speed is that you only need to push the "a" update where it diverges from "b". For example if the substring of "a" and "b" is the same up to a deeper sentry that contains "b" then you can just throw away the "a" pending update, the "b" update completely replaces it for that range.

Saying it all again :

You have one pointer update "a" that goes down a branch of the tree. You don't want to actually touch all those nodes, so you store it as applying to the whole range. You do a later pointer update "b" that goes down a branch that partially overlaps with the "a" branch. The part that is "a" only you want to leave as a whole range marking, and you do a range-marking for "b". You have to find the intersection of the two branches, and then the area where they overlap is again range-marked with "b" because it's newer and replaces "a". The key to speed is that you're marking big ranges of nodes, not individual nodes. My proposal for marking the ranges quickly is to use power-of-2 sentries, to mark a range of length 21 you would mark spans of length 16+4+1 kind of a thing.

Maybe some drawings are clearer. Here we insert pointer "a", and then later do a query with pointer "b" that shares some prefix with "a", and then insert "b".

The "b" update to the first sentry has to push the "a" update that was there up until the substrings diverge. The update back to the root sees that "a" and "b" are the same substring for that entire span and so simply replaces the pending update of "a" with a pending update of "b".

Let's see, finishing up.

One thing that is maybe not clear is that within the larger sentry steps the smaller steps are also there. That is, if you're at a deep leaf you walk back to the root with steps that go 1,1,2,4,8,16,32. But in that last big 32 step, that doesn't mean that's one region of 32 nodes with no other sentries. Within there are still 1,2,4 type steps. If you have to disambiguate an update within that range, it doesn't mean you have to push up all 32 nodes one by one. You look and see hey I have a divergence in this 32-long gap, so can I just step up 16 with "a" and "b" being the same? etc.

I have no idea what the actual O() of this scheme is. It feels like O(NlogN) but I certainly don't claim that it is without doing the analysis.

I haven't actually implemented this so there may be some major error in it, or it might be no speed win at all vs. always doing full updates.

Maybe there's a better way to mark tree branches lazily? Some kind of hashing of the node id? Probabilistic methods?


08-19-13 - Sketch of multi-Huffman Encoder

Simple way to do small-packet network packet compression.

Train N different huffman code sets. Encoder and decoder must have a copy of the static N code sets.

For each packet, take a histogram. Use the histogram to measure the transmitted length with each code set, and choose the smallest. Transmit the selection and then the encoding under that selection.
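A minimal sketch of the per-packet selection (the representation is my assumption: a "code set" here is just an array of code lengths in bits, one per byte value; real code assignment and bit-packing are omitted):

```python
# Sketch of per-packet code-set selection for the multi-Huffman scheme.
# A code set is represented only by its code lengths in bits.

from collections import Counter

def encoded_bits(packet, codelens):
    """Total transmitted size of 'packet' under one static code set."""
    hist = Counter(packet)
    return sum(n * codelens[sym] for sym, n in hist.items())

def choose_code_set(packet, code_sets):
    """Measure the packet against every static code set and pick the
    smallest. Returns (index, bits); the index is transmitted before
    the payload."""
    sizes = [encoded_bits(packet, cl) for cl in code_sets]
    best = min(range(len(sizes)), key=sizes.__getitem__)
    return best, sizes[best]
```

The transmitted selection index costs only about log2(N) bits, which is part of why a small N works.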

All the effort is in the offline training. Sketch of training :

Given a large number of training packets, do this :

for - many trials - 

select 1000 or so seed packets at random
(you want a number 4X N or so)

each of the seeds is a current hypothesis of a huffman code set
each seed has a current total histogram and codelens

for each packet in the training set -
add it to one of the seeds
the one which has the most similar histogram
one way to measure that is by counting the huffman code length

after all packets have been accumulated onto all the seeds,
start merging seeds

each seed has a cost to transmit; the size of the huffman tree
plus the data in the seed, huffman coded

merge seeds with the lowest cost to merge
(it can be negative when merging makes the total cost go down)

keep merging the best pairs until you are down to N seeds

once you have the N seeds, reclassify all the packets by lowest-cost-to-code
and rebuild the histograms for the N trees using only the new classification

those are your N huffman trees

measure the score of this trial by encoding some other training data with those N trees.

It's just k-means with random seeds and bottom-up cluster merging. Very heuristic and non-optimal but provides a starting point anyway.
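The merge-cost step can be made concrete with an entropy approximation (my simplification; a real trainer would use actual Huffman code lengths and the real tree-transmission cost):

```python
# Sketch of the seed merge-cost computation. Huffman-coded size is
# approximated by entropy; the 150-byte table cost is an illustrative
# assumption, not a measured figure.

import math
from collections import Counter

TREE_COST_BITS = 150 * 8   # rough cost to transmit one Huffman table

def entropy_bits(hist):
    """Approximate Huffman-coded size of the data summarized by 'hist'."""
    total = sum(hist.values())
    return sum(n * -math.log2(n / total) for n in hist.values())

def seed_cost(hist):
    """Cost of a seed: its tree, plus its accumulated data coded with it."""
    return TREE_COST_BITS + entropy_bits(hist)

def merge_gain(h1, h2):
    """Cost change from merging two seeds; negative means merging wins."""
    merged = Counter(h1) + Counter(h2)
    return seed_cost(merged) - (seed_cost(h1) + seed_cost(h2))
```

Merging two seeds with identical distributions gains exactly one tree's worth of bits, which is why near-duplicate seeds collapse together first.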

The compression ratio will not be great on most data. The advantage of this scheme is that there is zero memory use per channel. The huffman trees are const and shared by all channels. For N reasonable (4-16 would be my suggestion) the total shared memory use is quite small as well (less than 64k or so).

Obviously there are many possible ways to get more compression at the cost of more complexity and more memory use. For packets that have dword-aligned data, you might do a huffman per mod-4 byte position. For text-like stationary sources you can do order-1 huffman (that is, 256 static huffman trees, select by the previous byte), but this takes rather more const shared memory. Of course you can do multi-symbol huffman, and there are lots of ways to do that. If your data tends to be runny, an RLE transform would help. etc. I don't think any of those are worth pursuing in general, if you want more compression then just use a totally different scheme.

Oh yeah this also reminds me of something -

Any static huffman encoder in the vernacular style (eg. does periodic retransmission of the table, more or less optimized in the encoder (like Zip)) can be improved by keeping the last N huffman tables. That is, rather than just throwing away the history when you send a new one, keep them. Then when you do retransmission of a new table, you can just send "select table 3" or "send new table as delta from table 5".

This lets you use locally specialized tables far more often, because the cost to send a table is drastically reduced. That is, in the standard vernacular style it costs something like 150 bytes to send the new table. That means you can only get a win from sending new tables every 4k or 16k or whatever, not too often because there's big overhead. But there might be little chunks of very different data within those ranges.

For example you might have one Huffman table that only has {00,FF,7F,80} as literals (or whatever, specific to your data). Any time you encounter a run where those are the only characters, you send a "select table X" for that range, then when that range is over you go back to using the previous table for the wider range.
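The decision at each retransmission point then looks like this (the selector and table sizes are illustrative assumptions, not measured values):

```python
# Sketch: at a retransmission point, compare coding the next chunk under
# each cached table (cost: a small selector) against sending a fresh
# table (cost: ~150 bytes of table plus the coded data). Costs in bits.

SELECTOR_BITS = 4          # "select table k" for a small cache (assumption)
NEW_TABLE_BITS = 150 * 8   # typical cost of transmitting a new table

def best_choice(chunk_bits_per_table, fresh_table_bits):
    """chunk_bits_per_table: coded size of the chunk under each cached
    table. fresh_table_bits: coded size under a table built just for
    this chunk. Returns (action, table_index_or_None, total_bits)."""
    cached = min(range(len(chunk_bits_per_table)),
                 key=chunk_bits_per_table.__getitem__)
    cached_cost = SELECTOR_BITS + chunk_bits_per_table[cached]
    new_cost = NEW_TABLE_BITS + fresh_table_bits
    if cached_cost <= new_cost:
        return ("select", cached, cached_cost)
    return ("new", None, new_cost)
```

Because the selector is a few bits instead of ~150 bytes, short runs of unusual data can profitably switch tables and switch back.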


08-13-13 - Subaru WRX Mini Review

We got a WRX Wagon a while ago to be our primary family car. It's basically superb, I absolutely love it.

Above all else I love that it is simple and functional. It feels like the best car that Subaru could have possibly built for the money. It has no extra frills. It's not trying to be "luxury" in any way, and I fucking love that. But where it counts, the engine, the drivetrain, the reliability, it is totally quality, superb quality, much better than the so-called German masters of engineering. And actually I like the feel of the interior plastics (for the most part); they feel functional and solid and unpretentious. Way better than the gross laminated wood in modern BMWs and the blingy rhinestone-covered interiors of modern Audis. So-called "nice" cars have all become like tacky McMansions catering to the Chelsea tastes, all faux-marble and oversized and so on. Trying to appear better than they actually are, whereas the Subaru is better quality than it looks.

It feels so crazy fast on the street. To be so sporty and also be a good family wagon is just amazing. Sometimes I wish I'd spent the extra money for the STI (and then spent some more to de-stupidify the ridiculously hard and low suspension of the STI) just for the diffs. The base WRX (non STI) has a totally non-functional viscous 4wd system that ruins the ability to throttle steer in a really predictable fun way. But realistically the number of times that I'm pushing the car that hard is vanishing and it would need lots of other mods to be a Ken Block trick machine.

Given that I basically love it and highly recommend it (*), here are a few things that I don't like about it :

(* = I believe that everyone should buy a WRX. If you want a car for X purpose, no matter what X is, the answer is WRX. The only reasons I see to buy anything but a WRX are 1. the gas mileage is not great, and 2. if you really need more space, like if you have 8 children and 4 great danes or something ridiculous like that).

Sport Seats. God sports seats are so fucking stupid and are ruining cars. They are entirely for looks, put in by the marketing department to attract dumb teenage boys (the same dumb boys that like low profile tires and jutting hips). You can make a non-sports seat that holds you in place perfectly fine, and if you are actually doing performance driving you just need a harness (or a lap-belt lock). The specific problem in the WRX is the big solid head rests that are part of the seat - they block visibility really badly for backing up, and they also get in the way of the folding rear seats, so to fold the back seats up and down you have to go fold the front seats up first. It's lame.

Hill-start Assist. I hate hill-start assist. It would be okay if it was optional, actually I would like it if it was optional, on a button I could turn on and off so I would only get it when I actually wanted it, but no it's on all the time. I can start a car on a hill just fine by myself, thank you (where were you 15 years ago when I couldn't?). The main result of hill-start assist is that I press the gas to get going and the car doesn't move and it feels like the brake is holding the car and I'm like "wtf is going on, the car is broken god dammit" and then it finally lets go and I realize it's the dumb HSA. Sucky. Stop trying to help me drive.

Variable Power Steering. The PS is variably boosted and *way* too variable. It's so boosted at low speed that the wheel feels all swimmy with no response. Variable PS in general hurts your ability to develop muscle-memory for turning. If you are going to do variable PS, it should be a very minor difference, not like a completely different car the way it is in the WRX; basically an ideal variable PS system should be so minor that the driver doesn't consciously know it's there at all, it should just feel right. At least it's still hydraulic, not that god-awful electric crap.

Hatch blockage. One of the big problems with the car is that the wagon back is not as useful as it should be. It's a decent amount of space, but there's a lot of weird bumps and lips that make it hard to get large things inside. The hatch opening in particular is a lot smaller than it should be, and it shouldn't be so sloped in the rear; if the roof was longer and the rear glass more vertical, you could actually get a small couch in the back (fast-backs in general are dumb; they are drawn by car designers because they *look* aerodynamic, but a hard edge is actually more aero and provides way more interior space). As is, it's unfortunately terrible for cargo carrying and just not a very good small station wagon.

Summer Tires. God damn summer tires are such a stupid scam. They're annoyingly low profile too. It just sucks that almost any car you buy these days you have to immediately buy new wheels and tires to undo the stupidity, which is again marketing department tinkering. Cars should be sold on all-season tires, and we need to stop this low-profile madness. Rubber is amazing wonderful stuff, it's what makes wheels work well. (part of the reason cars are all sold on summer tires now is to make the magazine tests "on stock tires" look better. Those "on stock tires" tests have always been bogus: a few sports cars are sold on autocross tires (street legal track tires), which is totally unfair, and a few actually good-to-their-customers car companies sell their cars on all-seasons, so the magazines' claim that testing all cars on stock tires is a fair way to compare is just bananas.)

Jack plates & hitch. The jack points on these cars absolutely suck (pinch welds with the actually reinforced part inboard of the pinch weld). What would it cost the factory to put a decent jack plate there, maybe $10? Come on. A factory hitch option would have been nice too.

Plastic bumpers. The body skirts are super low (too low, they don't clear a parking spot curb thingy) and what's worse is they're made from crunchy hard plastic that cracks or pops out of the weak plastic press-fittings rather than bends. I wish bumpers and skirts were still steel with rubber, or even soft polyurethane plastic that bends on small impacts would be fine. As usual with metric over-training, the fact that the official crash tests don't rate the damage to the car means that modern cars are very good at protecting the occupants, but self-destruct on even the slightest impact. These really aren't even "bumpers", the actual impact absorbing part is all hidden underneath; these are plastic skins over the bumpers, and they really should be designed so that you can have a 1 mph impact without cracking them or popping the fittings out.

But really the only serious complaint is the stupid shape that ruins its cargo capacity :

I just hate it when marketing people ruin the functionality of things for no good reason. First you make it as functional as possible. Then the marketing people can play with the look or whatever but *only* in ways that don't change the functionality. Almost all modern cars are really "haunchy" these days, with wheels jutting out from the body, and bodies that are cut in at the top and back to be "sleek". No it is *not* for handling or aerodynamics. It's entirely an aesthetic thing. It's making me long for something like the Honda Element that at least has not compromised function for stupid vanity.

08-13-13 - Kinesis Freestyle 2 Mini Review

Aaron got a KF2 so I tried it out. Very quick review :

(on the right side I've failed to circle the Shift and Control keys which are perhaps the worst part)

The key action is not awesome but is totally acceptable. It would not stop me from using it.

Even in brief testing I could feel that having my hands further apart was pretty cool. Getting your elbows back tucked into your ribs with forearms externally rotated helps reduce the way that normal computer use posture has the weight of the arms collapsing the chest, causing kyphosis.

The fact that it sits flat and is totally old-skool keyboard style is pretty retarded. Who is buying an ergonomic split keyboard that wants a keyboard that could have been made in the 80's ? You need to start from a shape like the MS Natural and *then* cut it in half and split it. You don't start with a keyboard that came on your Packard Bell BBS-hosting 8086 machine. In other words, each half should have a curve and should be lifted and tilted. (the thumbs should be much higher than the pinkies).

(ideally each half should sit on something like a really nice high-friction camera tripod mount, so that you can adjust the tilt to any angle you want; the perfectly-flat or perfectly-vertical is no good).

The real absolute no-go killer is the fucking retarded arrow key layout. It's not a fucking laptop (and even on a laptop there's no excuse for that). What are you thinking? There is *one* way to do arrow keys and the pgup/home set, you do not change that. Also cramming the arrows in there makes the shift key too small and puts the right-control in the wrong place. Totally unacceptable; why are they trying so hard to save on plastic? There should be a normal proper offset arrow key set over on the right side.

And get rid of all those custom-function buttons on the left side. They serve no purpose, and they push my left-hand mouse out further left than it needs to be.

Why is everyone but me so retarded? Why am I not hired to design every product in the world? Clearly no one else is capable. Sheesh.

08-13-13 - Dental Biofeedback

Got my teeth cleaned the other morning, and as usual the dental hygienist cut the hell out of my gums with her pointy stick.

It occurred to me that the real problem is that she can't feel what I feel; when the tip of the tool starts to go into my gum, she doesn't get the pain message. It's incredibly hard to do sensitive work on other people because you don't have their self-sensory information.

But there's no reason for it to be that way with modern technology. You should be able to put a couple of electrodes on the patient to pick up the pain signal (or perhaps easier is to detect the muscles of the face clenching) hooked up to like a vibrating pad in the hygienist's chair. Then the hygienist can poke away, and get haptic feedback to guide their pointy stick.

I should make a medical device so I can get in on the gold-rush which is the health care raping of America. Yeeeaaaah!

This reminds me how incredibly shitty all the physical therapists that I saw for my shoulder were.

One of my really tough lingering problems is that after my injury, my serratus anterior basically stopped firing, which has given me a winged scapula. I went and got nerve conduction testing, as long thoracic nerve dysfunction is a common cause of this problem, but it seems the nerve is fine, it's just that my brain is no longer telling that muscle to fire. When I do various movements that should normally be recruiting the SA, instead my brain tells various other muscles to fire and I do the movement in a weird way.

Anyway I did lots of different types of PT with various types of quacks who never really listened to me properly or examined the problem properly, they just started doing their standard routine that didn't quite apply to my problem. (by far the worst was the head of Olympic PT, who started me on his pelvic floor program; umm, WTF; I guess the guy is just a molester who likes to mess around with pelvic floors, I could see half of the PT office doing pelvic floor work). When someone has this problem of firing the wrong muscles to do movements, you can't just tell them to go do 100 thera-band external rotations, because they will wind up losing focus and doing them with the wrong muscles. Just having the patient do exercises is entirely the wrong prescription.

Not one of them did the correct therapy for this problem, which is biofeedback. The patient should be connected to electrodes that detect the firing of the non-functioning muscle. They should then be told to do a series of movements that in a normal human would involve the firing of that muscle. The patient should be shown a monitor or given a buzzer that lets them know if they are firing it correctly. The direct sensory feedback is the best way to retrain the muscle control part of the brain.

A very simple but effective way to do this is for the therapist to put a finger on the muscle that should be firing. By just applying slight pressure with the finger it makes the patient aware of whether that muscle is contracting or not and can guide them to do the movement with the correct muscles. (it's a shame that our society is so against touching, because it's such an amazing aid to have your personal trainer or your yoga teacher or whatever put their hands on the part of the body they are trying to get you to concentrate on, but nobody ever does that in stupid awful don't-molest-me America).

Everyone is fired.

A related note : I've tried various posture-correction braces over the years and I believe they all suck or are even harmful. In order to actually pull your shoulders back against your body weight, you have to strap yourself into a very tight device, and none of them really work. And even if they do work, having a device support your posture is contrary to building muscle and muscle-memory to train the patient to do it themselves. I always thought that the right thing was some kind of feedback buzzer system. Give the patient some kind of compression shirt with wiring on the front and back that can detect the length of the front of the shirt and the length of the back. Have them establish a baseline of correct posture. The shirt then has a little painful electric zapper in it, so if they are curving forward, which stretches the back of the shirt and compresses the front, they get a zap. The main problem with people with posture problems is just that they forget about it and fall into bad old habits, so you need to give them this constant monitoring and immediate feedback to fix it.


08-12-13 - Cuckoo Cache Tables - Failure Report

This is a report on a dead end, which I wish people would do more often.

Ever since I read about Cuckoo Hashing I thought, hmm even if it's not the win for hash tables, maybe it's a win for "cache tables" ?

(A cache table is like a hash table, but it never changes size, and inserts may overwrite previous entries (or fail to insert the new entry, though that's unusual). There may be a single probe or multiple probes.)

Let me introduce it as a progression :

1. Single hash cache table with no hash check :

This is the simplest. You hash a key and just look it up in a table to get the data. There is no check to ensure that you get the right data for your key - if you have collisions you may just get the wrong data back from lookup, and you will just stomp other people's data when you write.

Data table[HASH_SIZE];

lookup :

hash = hash_func(key);
Data & found = table[hash];

insert :

hash = hash_func(key);
table[hash] = data;

This variant was used in LZP1 ; it's a good choice in very memory-limited situations where collisions are either unlikely or not that big a deal (eg. in data compression, a collision just means you code from the wrong statistics, it doesn't break your algorithm).

2. Single hash cache table with check :

We add some kind of hash-check value to our hash table to try to ensure that the entry we get actually was from our key :

Data table[HASH_SIZE];
int table_check[HASH_SIZE]; // obviously not actually a separate array in practice

lookup :

hash = hash_func(key);
check = alt_hash_func(key);
if ( table_check[hash] == check )
  Data & found = table[hash];

insert :

hash = hash_func(key);
check = alt_hash_func(key);
table_check[hash] = check;
table[hash] = data;

In practice, hash_func and alt_hash_func are usually actually the same hash function, and you just grab different bit ranges from it. eg. you might do a 64-bit hash func and grab the top and bottom 32 bits.

In data compression, the check hash value can be quite small (8 bits is common), because as noted above collisions are not catastrophic, so just reducing the probability of an undetected collision to 1/256 is good enough.
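To make the bit-splitting concrete, here's a minimal compilable sketch of variant 2: one 64-bit hash supplies both the table index (the low bits) and an 8-bit check (the bits just above). The names and the mixer (a splitmix64-style finalizer) are mine for illustration, not from LZP1 or any shipping code; "Data" is just an int.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Sketch of variant 2 : one 64-bit hash gives the table index (low
// HASH_BITS) and an 8-bit check (the bits just above the index bits).
static uint64_t hash64(uint64_t key)
{
    uint64_t h = key + 0x9E3779B97F4A7C15ull;
    h = (h ^ (h >> 30)) * 0xBF58476D1CE4E5B9ull;
    h = (h ^ (h >> 27)) * 0x94D049BB133111EBull;
    return h ^ (h >> 31);
}

struct CacheTable
{
    enum { HASH_BITS = 18, HASH_SIZE = 1 << HASH_BITS };

    std::vector<int>     table = std::vector<int>(HASH_SIZE, 0);
    std::vector<uint8_t> check = std::vector<uint8_t>(HASH_SIZE, 0);
    std::vector<bool>    used  = std::vector<bool>(HASH_SIZE, false);

    void insert(uint64_t key, int data)
    {
        uint64_t h = hash64(key);
        uint32_t index = (uint32_t)(h & (HASH_SIZE - 1));
        check[index] = (uint8_t)(h >> HASH_BITS); // 8 bits above the index bits
        table[index] = data;
        used[index]  = true;
    }

    // returns true and fills *out only if the slot passes the check
    bool lookup(uint64_t key, int * out) const
    {
        uint64_t h = hash64(key);
        uint32_t index = (uint32_t)(h & (HASH_SIZE - 1));
        if ( ! used[index] || check[index] != (uint8_t)(h >> HASH_BITS) )
            return false;
        *out = table[index];
        return true;
    }
};
```

An undetected collision now needs both the index and the check to match, so the false-hit rate is roughly 1/256 of the raw collision rate.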

3. Double hash cache table with check :

Of course since you are now making two hashes, you could look up two spots in your table. We're basically running the primary hash and alt_hash above, but instead of unconditionally using only one of them as the lookup hash and one as the check, we can use either one.

Data table[HASH_SIZE];
int table_check[HASH_SIZE]; // obviously not actually a separate array in practice

lookup :

hash1 = hash_func(key);
hash2 = alt_hash_func(key);
if ( table_check[hash1] == hash2 )
  Data & found = table[hash1];
else if ( table_check[hash2] == hash1 )
  Data & found = table[hash2];

insert :

if ( quality(table[hash1]) <= quality(table[hash2]) )
{
    table_check[hash1] = hash2;
    table[hash1] = data;
}
else
{
    table_check[hash2] = hash1;
    table[hash2] = data;
}

Where we now need some kind of quality function to decide which of our two possible insertion locations to use. The simplest form of "quality" just checks if one of the slots is unused. More complex would be some kind of recency measure, or whatever is appropriate for your data. Without any quality rating you could still just use a random bool there or a round-robin, and you essentially have a hash with two ways, but where the ways are overlapping in a single table.

Note that here I'm showing the check as using the same number of bits as the primary hash, but it's not required for this type of usage, it could be fewer bits.

(also note that it's probably better just to use hash1 and hash1+1 as your two hash check locations, since it's so much better for speed, but we'll use hash1 and hash2 here as it leads more directly to the next -)
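For concreteness, here's a compilable sketch of variant 3 with the simplest quality rule from above (an unused slot beats a used one; otherwise just overwrite location 1). Everything here - the names, the mixer, the quality rule - is illustrative, not from any particular implementation.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Sketch of variant 3 : double hash, full-width check.
static uint64_t hash64(uint64_t key)
{
    uint64_t h = key * 0x9E3779B97F4A7C15ull;
    h ^= h >> 29; h *= 0xBF58476D1CE4E5B9ull; h ^= h >> 32;
    return h;
}

enum { HASH_BITS = 16, HASH_SIZE = 1 << HASH_BITS };

struct Slot { int data; uint32_t check; bool used; };
static std::vector<Slot> table(HASH_SIZE);

// both probe locations come from one 64-bit hash
static void hashes(uint64_t key, uint32_t * h1, uint32_t * h2)
{
    uint64_t h = hash64(key);
    *h1 = (uint32_t)( h        & (HASH_SIZE - 1));
    *h2 = (uint32_t)((h >> 32) & (HASH_SIZE - 1));
}

static void insert(uint64_t key, int data)
{
    uint32_t h1, h2; hashes(key, &h1, &h2);
    // simplest "quality" : an unused slot wins, else overwrite h1
    uint32_t target = ( ! table[h1].used || table[h2].used ) ? h1 : h2;
    uint32_t other  = ( target == h1 ) ? h2 : h1;
    table[target] = { data, other, true }; // check stores the alternate index
}

static bool lookup(uint64_t key, int * out)
{
    uint32_t h1, h2; hashes(key, &h1, &h2);
    if ( table[h1].used && table[h1].check == h2 ) { *out = table[h1].data; return true; }
    if ( table[h2].used && table[h2].check == h1 ) { *out = table[h2].data; return true; }
    return false;
}
```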

4. Double hash cache table with Cuckoo :

Once you get to #3 the possibility of running a Cuckoo is obvious.

That is, every entry has two possible hash table indices. You can move an entry to its alternate index and it will still be found. So when you go to insert a new entry, instead of overwriting, you can push what's already there to its alternate location. Lookup is as above, but insert is something like :

insert :

// I want to write at hash1; first move whatever is there to its alternate location

if ( table[hash1] is not empty )
{
    alt = table_check[hash1]; // the occupant's other home
    table[alt] = table[hash1];
    table_check[alt] = hash1;
    // (a full cuckoo hash would recursively push whatever was displaced from alt)
}

table[hash1] = data;
table_check[hash1] = hash2;


Now of course that's not quite right, because this is a cache table, not a hash table. If you push recursively until you find an empty slot, you have a guaranteed infinite loop: cache tables are usually run with more unique insertions than slots, so PushCuckoo will keep trying to push things and never find an empty slot.

For cache tables you just want to do a small limited number of pushes (maybe 4?). Hopefully you find an empty slot in that search, and if not you lose the entry that had the lowest "quality" in the sequence of steps you did. That is, remember the slot with the lowest quality, and do all the cuckoo-pushes that precede that entry in the walk.

For example, if you have a sequence like :

I want to fill index A

hash2[A] = B
hash2[B] = C
hash2[C] = D
hash2[D] = E

none are empty

entry C has the lowest quality of A-E

Then push :

B -> C
A -> B
insert at A

That is,

table[C] = table[B]
hash2[C] = B
table[B] = table[A]
hash2[B] = A
table[A],hash2[A] = new entry

The idea is that if you have some very "high quality" entries in your cache table, they won't be destroyed by bad luck (some unimportant event which happens to have the same hash value and thus overwrites your high quality entry).
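Putting the pieces together, the bounded-push insert might look like this sketch. The slot layout and names are hypothetical: each slot remembers its alternate index so it can be moved between its two homes and still be found, and quality is whatever score the caller supplies.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Sketch of the bounded cuckoo-push insert for a cache table.
enum { MAX_PUSHES = 4 };

struct Slot { int data; int quality; uint32_t alt; bool used; };

static void cuckoo_insert(std::vector<Slot> & table,
                          uint32_t h1, uint32_t h2, int data, int quality)
{
    // walk the chain of alternate locations, remembering the weakest
    // slot seen; stop early if we hit an empty slot
    uint32_t walk[MAX_PUSHES + 1];
    walk[0] = h1;
    int len = 1, weakest = 0;
    for (;;)
    {
        uint32_t cur = walk[len - 1];
        if ( ! table[cur].used ) { weakest = len - 1; break; } // free slot : stop here
        if ( table[cur].quality < table[walk[weakest]].quality ) weakest = len - 1;
        if ( len > MAX_PUSHES ) break; // give up; evict the weakest slot seen
        walk[len] = table[cur].alt;
        len++;
    }
    // push entries down the walk, overwriting the weakest slot,
    // exactly as in the A,B,C,D,E example above
    for ( int i = weakest; i > 0; i-- )
    {
        table[walk[i]] = table[walk[i - 1]];
        table[walk[i]].alt = walk[i - 1];
    }
    table[h1] = { data, quality, h2, true };
}
```

A degenerate walk can revisit a slot; a real implementation might detect that, but for a cache table the bounded loop makes it harmless enough for a sketch.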

So, I have tried this and in my experiments it's not a win.

To test it I wrote a simple symbol-rank compressor. My SR is order-5-4-3-2-1 with only 4 symbols ranked in each context. (I chose an SR just because I've been working on SR for RAD recently; otherwise there's not much reason to pursue SR, it's generally not Pareto). Contexts are hashed and looked up in a cache table. I compressed enwik8. I tweaked the compressor just enough so that it's vaguely competitive with state of the art (for example, I use a very simple secondary statistics table for coding the SR rank), because otherwise it's not a legitimate test.

For Cuckoo Caching, the hash check value must be the same size as the hash table index, so that's what I've done for most of the testing. (in all the other variants you are allowed to set the size of the check value freely). I also tested 8-bit check value for the single lookup case.

I'm interested in low memory use and really stressing the cache table, so most of the testing was at 18-bits of hash table index. Even at 20 bits the difference between Cuckoo and no-Cuckoo disappears.

The results :

18 bit hash :

Single hash ; no confirmation :
Compress : 100000000 -> 29409370 : 2.353

Single hash ; 8 bit confirmation :
Compress : 100000000 -> 25169283 : 2.014

Single hash ; hash_bits size confirmation :
Compress : 100000000 -> 25146207 : 2.012

Dual Hash ; hash_bits size confirmation :
Compress : 100000000 -> 24933453 : 1.995

Cuckoo : (max of 10 pushes)
Compress : 100000000 -> 24881931 : 1.991

Conclusion : Cuckoo Caching is not compelling for data compression. Having some confirmation hash is critical, but even 8 bits is plenty. Dual hashing is a good win over single hashing (and surprisingly there's very little speed penalty (with small cache tables, anyway, where you are less likely to pay bad cache miss penalties)).

For the record :

variation of compression with hash table size :

two hashes, no cuckoo :

24 bit o5 hash : (24,22,20,16,8)
Compress : 100000000 -> 24532038 : 1.963
20 bit o5 hash : (20,19,18,16,8)
Compress : 100000000 -> 24622742 : 1.970
18 bit o5 hash : (18,17,17,16,8)
Compress : 100000000 -> 24933453 : 1.995

Also, unpublished result : noncuckoo-dual-hashing is almost as good with the 2nd hash kept within cache page range of the 1st hash. That is, the good thing to do is lookup at [hash1] and [hash1 + 1 + (hash2&0xF)] , or some other form of semi-random nearby probe (as opposed to doing [hash1] and [hash2] which can be quite far apart). Just doing [hash1] and [hash1+1] is not as good.
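That nearby-probe variant is just a change to how the second index is computed; something like this sketch (HASH_SIZE assumed a power of two):

```cpp
#include <cassert>
#include <cstdint>

// The cache-friendly dual probe : instead of an independent second
// index, probe a semi-random slot 1..16 past the first, which almost
// always lands on the same cache page.
enum { HASH_SIZE = 1 << 18 };

static uint32_t probe1(uint32_t hash1)
{
    return hash1 & (HASH_SIZE - 1);
}

static uint32_t probe2(uint32_t hash1, uint32_t hash2)
{
    // [hash1 + 1 + (hash2 & 0xF)] from the text
    return (hash1 + 1 + (hash2 & 0xF)) & (HASH_SIZE - 1);
}
```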


08-08-13 - Oodle Static LZP for MMO network compression

Followup to my post 05-20-13 - Thoughts on Data Compression for MMOs :

So I've tried a few things, and Oodle is now shipping with a static dictionary LZP compressor.

OodleStaticLZP uses a static dictionary and hash table which is const and shared by all network channels. The size is set by the user. There is an adaptive per-channel arithmetic coder so that the match length and literal statistics can adapt to the channel a bit (this was a big win vs. using any kind of static models).

What I found from working with MMO developers is that per-channel memory use is one of the most important issues. They want to run lots of connections on the same server, which means the limit for per-channel memory use is something like 512k. Even a zlib encoder at 400k is considered rather large. OodleStaticLZP has 182k of per-channel state.

On the server, a large static dictionary is no problem. They're running 16GB servers with 10,000 connections, they really don't care if the static dictionary is 64MB. However, that same static dictionary also has to be on the client, so the limit on how big a static dictionary you can use really comes from the client side. I suspect that something in the 8MB - 16MB range is reasonable. (and of course you can compress the static dictionary; it's only something like 2-4 MB that you have to distribute and load).

(BTW you don't necessarily need an adaptive compression state for every open channel. If some channels tend to go idle, you could drop their state. When the channel starts up again, grab a fresh state (and send a reset message to the client so it wipes its adaptive state). You could do something like have a few thousand compression states which you cycle in an LRU for an unbounded number of open channels. Of course the problem with that is if you actually get a higher number of simultaneous active connections you would be recycling states all the time, which is just the standard cache over-commit problem that causes nasty thrashing, so YMMV etc.)

This is all only for downstream traffic (server->client). The amount of upstream traffic is much less, and the packets are tiny, so it's not worth the memory cost of keeping any memory state per channel for the upstream traffic. For upstream traffic, I suggest using a static huffman encoder with a few different static huffman models; first send a byte selecting the huffman table (or uncompressed) and then the packet huffman coded.
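As a sketch of what that upstream scheme could look like (my illustration, not Oodle's actual API): build a few static code-length tables offline from captured traffic, then for each packet estimate the coded size under every table and send a selector byte for the winner, falling back to uncompressed when nothing beats 8 bits per byte.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical upstream-side model selection.  codelen[t][sym] is the
// code length in bits of byte sym under static Huffman table t.
// Returns the index of the cheapest table, or RAW if uncompressed is
// at least as small; that index is the selector byte sent ahead of
// the payload.
enum { NUM_TABLES = 3, RAW = 0xFF };

static int pick_table(uint8_t codelen[NUM_TABLES][256],
                      const uint8_t * packet, int len)
{
    int best = RAW;
    int bestBits = len * 8; // cost of sending the packet uncompressed
    for (int t = 0; t < NUM_TABLES; t++)
    {
        int bits = 0;
        for (int i = 0; i < len; i++)
            bits += codelen[t][packet[i]];
        if (bits < bestBits) { bestBits = bits; best = t; }
    }
    return best;
}
```

The selector costs one byte per packet, so a table only pays off when its expected saving beats that; on small upstream packets that's the whole tradeoff.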

I also tried a static dictionary / adaptive statistics LZA (LZA = LZ77+arith) (and a few other options, like a static O3 context coder and some static fixed-length string matchers, and static longer-word huffman coders, but all those were much worse than static LZA or LZP). The static dictionary LZA was much worse than the LZP.

I could conjecture that the LZP does better on static dictionaries than LZA because LZP works better when the dictionary mismatches the data. The reason being that LZP doesn't even try to code a match unless it finds a context, so it's not wasting code space for matches when they aren't useful. LZ77 is always trying to code matches, and will often find 3-byte matches just by coincidence, but the offsets will be large so they're barely a win vs literals.

But I don't think that's it. I believe the problem with static LZA is simply that for an offset-coded LZ (as I was using), it's crucial to put the most useful data at low offsets. That requires a very cleverly made static dictionary. You can't just put the most common short substrings at the end - you have to also be smart about how those substrings run together, to make the concatenation of them also useful. That would be a very interesting, hard algorithm to work on, but without that work I find that static LZA is just not very good.

There are obvious alternatives to optimizing the LZA dictionary; for example you could take the static dictionary and build a suffix trie. Then instead of sending offsets into the window, forget about the original linear window and just send substring references in the suffix trie directly, ala the ancient Fiala & Green. This removes the ugly need to optimize the ordering of the linear window. But that's a big complex bowl of worms that I don't really want to go into.

Some results on some real packet data from a game developer :

downstream packets only
1605378 packets taking 595654217 bytes total
371.0 bytes per packet average

O0 static huff : 371.0 -> 233.5 average

zlib with Z_SYNC_FLUSH per packet (32k window)
zlib -z3 : 371.0 -> 121.8 average
zlib -z6 : 371.0 -> 111.8 average

OodleLZH has a 128k window
OodleLZH Fast :
371.0 -> 91.2 average

OodleLZNib Fast lznib_sw_bits=19 , lznib_ht_bits=19 : (= 512k window)
371.0 -> 90.6 average

OodleStaticLZP [mb of static dic|bits of hash]

LZP [ 4|18] : 371.0 -> 82.8 average
LZP [ 8|19] : 371.0 -> 77.6 average
LZP [16|20] : 371.0 -> 69.8 average
LZP [32|21] : 371.0 -> 59.6 average

Note of course that LZP would also benefit from dictionary optimization. Later occurrences of a context replace earlier ones, so more useful strings should be later in the window. Also just getting the most useful data into the window will help compression. These results are without much effort to optimize the LZP dictionary. Clients can of course use domain-specific knowledge to help make a good dictionary.

TODOS : 1. optimization of LZP static dictionary selection. 2. mixed static-dynamic LZP with a small (32k?) per-channel sliding window.

08-08-13 - Selling my Porsche 911 C4S

Well, it's time to sell my car (a 2006 911 C4S). Unfortunately I missed the best selling season (early summer) because of baby's arrival, so it may take a while. It's so horrible having to work with these web sites, they're so janky and broken, jesus christ the web is such total balls, you misclick the back button and lose all your work, or hit enter and some required field wasn't entered so they wipe out all your entries. (Whenever I remember to, it's best practice to do your writing in a text editor and then copy-paste it to the web site; don't edit directly in the web site). So far I'm just getting lots of emails from spammers and dealers and other lowballers, so it's a great waste of my time. (and a lot of the lowballers (guys will offer things like $30k for the car I'm asking $48k for) are delightful human beings who insult me when I politely tell them they're out of their mind). Anyhoo...

Why am I selling it? Primarily I just don't drive it anymore. Between baby and everything else I have no time for it. Also Seattle is just a horrible place for driving. With my back problems I just really can't sit in a sports car for long periods of time (even though the 911 is by far the best of any of the cars I tried, it's super roomy inside which is really awesome), I'm gonna have to get an SUV (*shudder*) or something more upright, I no longer bend at the waist. I don't have the cash to sit on a depreciating car that I'm not driving. I also have concluded that I should not own a car that I'm worried about scratching, it means I can't really enjoy it and I'm kind of tense about it all the time. I need a piece of junk that I can crash into things and not care. (I feel the same way about nice clothes; if it's expensive enough that I am worried about spilling my food on it, then I don't want to wear it).

I'm gonna take a huge value loss on the car even if I get a good price, because it's in great shape mechanically, but has a few cosmetic flaws. Porsche buyers would much rather have something that's spotless but a disaster in the engine bay. That is, there's a lot of ownership value in the car which just evaporates for me when I put it on sale. Oh well.

Autotrader ad here with pictures.

Here's the ad text properly formatted :

    Beautiful, well optioned and well cared for C4S. It has a manual transmission and the desirable Adaptive sports seats. Beautiful combination of Seal Gray Metallic on Sand Beige interior. 3.8L flat six with 355 hp.

    I have taken meticulous care of this car. The oil is changed every 5000 miles. I cut the oil filter open at every change and have never seen any chunks. It's regularly driven, and I carefully warm it up and then get the revs up high. Service has been done at Aker's Porsche in Seattle and they've remarked on what great shape the engine is in. This car does not eat oil quickly. The IMS and RMS both show no signs of any problem.

    I'm the 2nd owner; carfax is clean; I had a PPI done when I bought it and no flaws were found.

    option codes : c02 x70 267 342 377 378 431 446 537 575 615 635 670 680 983

    267  Automatically dimming mirrors;
    342  Heated seats;
    431  Three-spoke multi-function steering wheel;
    446  Colored crest wheel caps;
    475  PASM;
    607  Homelink;
    635  ParkAssist (parking aid rear);
    650  Electric slide/tilt sunroof;
    670  Navigation module for PCM;
    680  BOSE  Surround Sound System;
    P01  Adaptive sports seats;
    X70  Door entry guards in stainless steel;
    I've done a few very minor mods : clear side markers, Porsche GT3 adjustable rear sway bar, "4S" wheel caps (the colored crest caps are included), Rennline aluminum shift knob (OEM shift knob included), rear debadge (original badges included).

    Most of the expendable parts have been recently replaced; six new plugs and coils, new serpentine/drive belt, new battery, new shift cable, new water pump, new coolant cap, new air filters, new brake fluid, fresh oil and brakes pads. Tires are 3000 miles old. The clutch has not been replaced but has been tested for slip and is in good shape.

    There are a few cosmetic flaws, all shown in the pictures. The right rear wheel is curbed and the buyer may choose to replace it. There are two scratches, one on a rear bumperette and one near the fuel filler cap. The emergency brake lever has some gouges in the leather. The paint on the backs of the seats has some worn spots. The seat bolsters and leather are in excellent shape. The car is always garaged and kept out of the sun. Non smoker, no animals, no children, no food ever in the car.
