12/20/2011

12-20-11 - Grocery Store Lines

Grocery store lines are a microcosm of how fucking shitty almost every human is, in almost every way.

First of all, you have the fact that nobody shows any basic human courtesy to each other. I almost never see someone with a ton of items let the person with one item go ahead. But of course when I let the person with one item go ahead of me, they invariably do something fucked up like ask for a pack of cigarettes (which always takes forever in US grocery stores) or pay with food stamps or some shit. (aside : why is it always such a clusterfuck to pay with food stamps in some groceries? they must have done it a million times, but the checker always acts like someone handed them monopoly money, and the manager has to get called, and forms are filled out, wtf). Of course the people who are paying with coupons and civil war scrip never warn the person lining up behind them that maybe they should pick a different line.

But when the lines get long you really start to see people's souls.

There's the people who stand around and chat right in the middle of the lines. I watch people over and over asking "are you in line? oh, no? okay". Hmm, maybe you should get the fuck out of the line area to have your chit chat!

Then there's the people who can't seem to run a line in a reasonable direction and wind up blocking all the aisles or running it into another line. Invariably it takes a manager to come over and tell people to "please line up over here" since god knows they aren't going to sort themselves out.

Then you get the people who start stamping around and huffing and quickly looking from one side to another like this is the greatest injustice since slavery. You can just see wheels spinning in their heads about how "ridiculous this is" and so on.

There are the people who think that being really pushy in the back of the line is going to speed things up. We're eight people away from the register and they keep jamming their cart into my feet because the person three ahead of us moved. I get out of line to grab a magazine (leaving my cart) and they push into the gap where my body was. Whoah, slow down dick-face, we're still twenty feet from the register, you can chill a little now.

On the flip side are the people who are absurdly slow about getting their checkout done. (and of course to double down on dickishness, it's often the same people who were impatient and pushy when they were way back in line (*)).

There are two general classes of people who fail to check out quickly :

1. The epically incompetent. These people pay with a check, either because they are ancient geezers (excusable) or because they think cards are somehow inferior or checks are more convenient (inexcusable). They might go digging around in their purse for half an hour trying to find exact change, or somehow still don't know they can scan their card before the checker is done.

2. The intentionally slow. These people think everyone needs to chill and slow down; what's the rush? They might chat with the checker a bit. They think everyone else is rotten for being in such a hurry. OMG you epic douchebag; it's fine if you want to live a slow, relaxed life, but it's not okay to impose that on everyone behind you in line. You probably drive slowly too and think that everyone behind you is in the wrong for wanting to go faster. You probably have a "keep your laws off my body" bumper sticker, and fail to see that your own behavior is the same kind of selfish forcing of your values on others.

(* = the double-dick seems to be the norm for airplane passengers; who are reliably annoying and pushy and do a lot of huffing when they are way back in line, but when they actually get up to the TSA guy they still have their shoes on, don't have their ID in hand, are drinking a gallon of liquid, and act like it's some big surprise. Same thing with the overhead bin stowage of course).

12/19/2011

12-19-11 - SRAM

SRAM is rolling out a big promotional campaign this year, trying to convince people that their components are actually superior.

They've signed up lots of the pro teams. Just as with car racing, do not be misled by what the pros use. I see lots of morons on forums saying "well the race team uses this, it must be great". No, the reason the race team uses it is because they are paid to use it.

SRAM double-tap shifters are fucking *awful*. Absolutely retarded. Imagine your right mouse button was taken away and instead you had to double-tap the left to accomplish that function. Yep, it's horrible.

Double-press GUI is always horrible and always should only be used as a last resort. We use it sometimes in games because there just aren't enough buttons on console controllers, but a smart game designer knows that only the secondary functions should go on double-tap buttons (the same goes for "hold" buttons) and the twitch functions should be on their own dedicated button.

Actually it's even worse than that. The levers don't do the right thing when you're at the edge of the shifting range. So like if you're at the low end and can't go any lower (which you would accomplish by double-tapping), it will still let you single-tap (to up-shift). So you're riding up a steep hill and you want a lower gear, you go to double-tap, and oh fuck, halfway through it the lever won't let you complete the double-tap, but you've already single-tapped. There's no way to back out of it; when you let go it will up-shift you and you'll be fucked.

The STI system is just one million billion times better. But it's patented. That's why all these shift levers are so dang expensive, because they're patented.

It's also why there has to be a new lever system every year, a new size of bottom bracket, a new headset system - it's so that the manufacturer can patent it and/or make an exclusive line so that they can rip you off. The old system was perfectly fine functionally, the problem with it was that generic brands were starting to come out with cheap decent components for that system. We can't have that.

It's just like medicine of course, though with medicine it's much more diabolical.

Certainly with medicine it's obvious that there should be laws that prevent the pointless pushing of the new expensive product when it's not actually any better than cheap old solutions.

But I think it would actually be in the world's best interest to have a similar law for everything. It would be hard to phrase and hard to enforce, but the idea is something like - you must make components that are compatible with others on the market unless the incompatibility is for a necessary functional reason. It's actually much better for the free market and competition if products can plug and play and the consumer can choose based on price and functionality, not compatibility with some bullshit proprietary interface.

One that annoys me is car parts; most of the car parts for a Porsche or BMW or whatever are actually identical to the ones for a cheaper car (like a VW for example) - but they intentionally make the interface ever so slightly different so that you can't just go buy the cheaper part. The parts are all made by Bosch or whatever major parts supplier anyway, it's not like you get a better brand of part for the money. The interesting thing to me is that the car maker doesn't really benefit from this, it's the part maker who does, so there must be some kind of collusion where the car maker gets a kickback in exchange for using the proprietary part.

Maybe the most obvious example is car wheels. Wheels are wheels, there's no need for them to be car specific, but the auto manufacturers intentionally use different bolt spacings (5x130, 4x110, etc) so that you can't go buy cheap mass market wheels for your fancy car. You can cross-shop the exact same wheel with different bolt spacings, and the price difference can be 2X or more.

12/17/2011

12-17-11 - LZ Optimal Parse with A Star Part 4

Continuing ...
Part 1
Part 2
Part 3

So we have our A star parse from last time.

First of all, when we "early out" we still actually fill out that hash_node. That is, you pop a certain "arrival", then you evaluate the early out conditions and decide this arrival is not worth pursuing. You need to make a hash_node and mark it as a dead end, so that when you pop earlier arrivals that see this node, they won't try to visit it again.

One option would be to use a separate hash of just bools that mark dead ends. This could be a super-efficient smaller hash table of bit flags or bloom filters or something, which would save memory and perhaps speed.

I didn't do this because you can get some win from considering parses that have been "early outed". What you do is when you decide to "early out" an arrival, you will not walk to any future nodes that are not yet done, but you *will* consider paths that go to nodes that were already there. In pseudo-code :


pop an arrival

check arrival early outs and just set a flag

for all coding choices at current pos
{
  find next_node
  if next_node exists
    compute cost to end
  else
    if ! early out flag
       push next_node on arrivals stack
}

So the early out stops you from creating any new nodes in the graph walk that you wouldn't have visited anyway, but you can still find new connections through that graph. What this lets you do in practice is drive the early out thresholds tighter.

The other subtlety is that it helps a lot to actually have two (or more) stages of early out. Rather than just stopping consideration of all exit coding choices once you don't like your arrival, you have a couple of levels. If your arrival looks sort of bad but not terrible, then you still consider some of the coding choices. Instead of considering 8 or 16 coding choices, you reduce it to 2 or 4 which you believe are likely advantageous.

The exact details depend on the structure of your back end coder, but some examples of "likely advantageous" coding choices that you would consider in the intermediate early out case : if you have a "repeat recent offset" structure like LZX/LZMA, then those are obvious things to include in the "likely advantageous". Another one might be RLE or continue-previous type of match codes. Another would be if the literal codes below a certain number of bits with the current statistics. Also the longest match if it's longer than a certain amount.
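
To make that concrete, here's a rough sketch in the same pseudo-code style as above; the two thresholds and the "likely_advantageous" test are placeholders that you would tune for your own coder :


pop an arrival

excess = arrival.cost_from_head - best_cost_from_head[pos]

if ( excess > threshold_hard )
  out_level = 2   // hard out : only link through nodes that already exist
else if ( excess > threshold_soft )
  out_level = 1   // soft out : only consider likely-advantageous choices
else
  out_level = 0   // full consideration

for all coding choices C at current pos
{
  if ( out_level == 2 and next_node for C does not exist )
    continue
  if ( out_level == 1 and not likely_advantageous(C) )
    continue   // keep repeat-offset matches, cheap literals, the longest match
  proceed as before
}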

Okay, so our A star is working now, but we have a problem. We're still just not getting enough early outs, and if you run this on a big file it will take forever (sometimes).

The solution is to use another aspect we expect from our LZ back end, which is "semi-locality". Locality means that a decision we make now will not have a huge effect way in the future. Yes, it has some effect, because it may change the state and that affects the future, but over time the state changes so many times and adapts to future coding that the decision 4000 bytes ago doesn't matter all that much.

Another key point is that the bad (slow) case occurs when there are lots of parses that cost approximately the same. Because of our early out structure, if there is a really good cheap parse we will generally converge towards it, and then the other choices will be more expensive and they will early out and we won't consider too many paths. We only get into bad degeneracy if there are lots of parses with similar cost. And the thing is, in that case we really don't care which one we pick. So when we find an area of the file that has a huge branching factor that's hard to make a decision about, we are imperfect but it doesn't cost us much overall.

The result is that we can cut up the file to make the parse space tractable. What I do is work in "quanta". You take the current chunk of the file as your quantum and parse it as if it was its own little file. The parse at the beginning of the quantum will be mostly unaffected by the quantum cut, but the parse at the end will be highly affected by the false EOF, so you just throw it out. That is, advance through the first 50% or 75% of the parse, and then start the next quantum there.

There is one special case for the quantum cutting which is long matches that extend past the end of the quantum. What you would see is when outputting the first 50% of the parse, the last code will be a match that goes to the end of the quantum. Instead I just output the full length of the match. This is not ideal but the loss is negligible.

For speed you can go even further and use adaptive quantum lengths. On highly degenerate parts of the file, there may be a huge node space to parse that doesn't get early-out'ed. When you detect one of these, you can just reduce the quantum length for that part of the file. eg. you start with a quantum length of 4096 ; if as you are parsing that quantum you find that the hash table occupancy is beyond some threshold (like 1 million nodes for example), you decide the branching factor is too great and reduce the quantum length to 2048 and resume parsing on just the beginning of that chunk. You might hit 1 million nodes again, then you reduce to 1024, etc.
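
In rough pseudo-code, that outer loop looks like this (parse_quantum, the node-count query, and the constants are stand-ins for whatever your implementation uses) :


pos = 0
quantum_len = 4096

while ( pos < file_len )
{
    result = parse_quantum( pos, quantum_len )

    if ( result.num_hash_nodes > 1000000 )
    {
        // branching factor too high : retry the same chunk, smaller
        quantum_len = MAX( quantum_len/2, 256 )
        continue
    }

    output the first 50-75% of result's parse
    advance pos to the end of what was output
    quantum_len = 4096   // reset for the next chunk
}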

That's it! Probably a followup post with some results numbers and maybe some more notes about subtle issues. I could also do several long posts about ideas I tried that didn't work which I think are sort of interesting.

12-17-11 - LZ Optimal Parse with A Star Part 3

Continuing ...
Part 1
Part 2

At the end of Part 2 we looked at how to do a forward LZSS optimal parse. Now we're going to add adaptive "state" to the mix.

Each node in the walk of parses represents a certain {Pos,State} pair. There are now too many possible nodes to store them all, so we can't just use an array over all {Pos,State} pairs. Hopefully we will not visit them all, so we store the nodes we do visit in a hash table.

We are parsing forward, so for any node we visit (a {Pos,State} will be called a "node") we know how we got there. There can be many ways of reaching the same node, but we only care about the cheapest one. So we only need to store one entering link into each node, and the total cost from the beginning of the path to get to that node.

If you think about the flow of how the forward LZSS parse completes, it's sort of like an ice tendril reaching out which then suddenly crystallizes. You start at the beginning and you are always pushing the longest length choice first - that is, you are taking big steps into the parse towards the end without filling in all the gaps. Once you get to the end with that first long path (which is actually the greedy parse - the parse made by taking the longest match available at each step), then it starts popping backwards and filling in all the gaps. It then does all the dense work, filling backwards towards the beginning.

So it's like the parse goes in two directions - reaching from the beginning to get to the end (with nodes that don't have enough information), and then densely bubbling back from the end (and making final decisions). (if I was less lazy I would make a video of this).

Anyhoo, we'll make that structure more explicit. The hash table, for each node, stores the cost to get to the end from that node, and the coding choice that gives that cost.

The forward parse uses entry links, which I will henceforth call "arrivals". This is a destination node (a {pos,state}), and the cost from the beginning. (you don't need to store how you got here from the beginning since that can be reproduced at the end by rewalking from the beginning).


Full cost of parse through this node =

arrival.cost_from_head + hash_node.cost_to_tail

Once a node has a cost in the hash table, it is done, because it had all the information it needed at that node. But more arrivals can come in later as we fill in the gaps, so the full cost from the beginning of the parse to the end of the parse is not known.

Okay, so let's start looking at the parse, based on our simple LZSS pseudo-code from last time :


hash table of node-to-end costs starts empty
stack of arrivals from head starts empty

Push {Pos 1,state initial} on stack of arrivals

While stack is not empty :

pop stack; gives you an arrival to node {P,state}

see if node {P,state} is already in the hash
if so
{
  total cost is arrival.cost_from_head + hash_node.cost_to_tail
  done with this arrival
  continue (back to stack popping);
}

For each coding choice {C} at the current pos
{
  find next_state = state transition from cur state after coding choice C
  next_pos = P + C.len
  next_node = {next_pos,next_state}

  if next_node is in the hash table :
  {
    compute cost to end from code cost of {C} plus next_node.cost_to_tail
  }
  else
  {
    push next_node to the arrivals stack (*1)
  }
}

if no pushes were done
{
  then processing of current node is done
  choose the best cost to end from the choices above
  create a node {P,state} in the hash with that cost
}

(*1 = if any pushes are done, then the current node is also repushed first (before other pushes). The pushes should be done in order from lowest pos to highest pos, just as with LZSS, so that the deep walk is done first).

So, we have a parse, but it's walking every node, which is way too many. Currently this is a full graph walk. What we need are some early outs to avoid walking the whole thing.

The key is to use our intuition about LZ parsing a bit. Because we step deep first, we quickly get one parse for the whole segment (the greedy parse). Then we start stepping back and considering variations on that parse.

The parse doesn't collapse the way it did with LZSS because of the presence of state. That is, say I parsed to the end and now I'm bubbling back and I get back to some pos P. I already walked the long length, so I'm going to consider a shorter one. With LZSS, when I walked the shorter one, the nodes I needed would already be done. But now the nodes aren't done, though importantly the positions have been visited. That is -


At pos P, state S
many future node positions are already done
 (I already walked the longest match length forward)

eg. maybe {P+3, S1} and {P+5, S2} and {P+7, S3} have been done

I take a shorter length now; eg. to {P+2,S4}

from there I consider {P+5, S5}

the node is not done, but a different state at P+5 was done.

If the state didn't matter, we would be able to reuse that node and collapse back to O(N) like LZSS.

Now of course state does matter, but crucially it doesn't matter *that much*. In particular, there is sort of a limit on how much it can help.

Consider for example if "state" is some semi-adaptive statistics. Those statistics are adaptive, so if you go far enough into the future, the state will adapt to the coding parse, and the initial state won't have helped that much. So maybe the initial state helps a lot for the next 8 coding steps. And maybe it helps at most 4 bits each time. Then having a better initial state can help at most 32 bits.

When you see that some other parse has been through this same position P, albeit with different state at this position, if that parse has completed and has a total cost, then we know it is the optimal cost through that node, not just the greedy parse or whatever. That is, whenever a hash node has a cost_to_tail, it is the optimal parse cost to tail. If there is a good parse later on in the file, the optimal parse is going to find that parse, even if it starts from a non-ideal state.

This is the form of our early outs :


When you pop an arrival to node {P,S} , look at the best cost to arrive at pos P for any state, 

if arrival.cost_from_head - best_cost_from_head[P] > threshold
  -> early out

if arrival.cost_from_head + best_cost_to_tail[P] > best_cost_total + threshold
  -> early out

where we've introduced two arrays that track the best seen cost to head & tail at each pos, regardless of state. We also keep a best total cost, which is initially set to infinity until we get through a total parse, and then is updated any time we see a new whole-walk cost.

This is just A star. From each node we are trying to find a lower bound for the cost to get to the end. What we use is previous encodings from that position to the end, and we assume that starting from a different state can't help more than some amount.

Next time, some subtleties.

12-17-11 - LZ Optimal Parse with A Star Part 2

Okay, optimal parsing with A star. (BTW "optimal" parsing here is really a misnomer that goes back to the LZSS backwards parse where it really was optimal; with a non-trivial coder you can't really do an optimal parse, we really mean "more optimal" (than greedy/lazy type parses)).

Part 1 was just a warmup, but may get you in the mood.

The reason for using A Star is to handle LZ parsing when you have adaptive state. The state changes as you step through the parse forward, so it's hard to deal with this in an LZSS style backwards parse. See some previous notes on backwards parsing and LZ here : 1 , 2 , 3

So, the "state" of the coder is something like maybe an adaptive statistical mode, maybe the LZMA "markov chain" state machine variable, maybe an LZX style recent offset cache (also used in LZMA). I will assume that the state can be packed into a not too huge size, maybe 32 bytes or so, but that the count of states is too large to just try them all (eg. more than 256 states). (*1)

(*1 - in the case that you can collapse the entire state of the coder into a reasonably small number of states (256 or so) then different approaches can be used; perhaps more on this some day; but basically any adaptive statistical state or recent offset makes the state space too large for this).

Trying all parses is impossible even for the tiniest of files. At each position you have something like 1-16 options. (actually sometimes more than 16, but you can limit the choices without much penalty (*2)). You always have the choice of a literal, when you have a match there are typically several offsets, and several lengths per offset to consider. If the state of the coder is changed by the parse choice, then you have to consider different offsets even if they code to the same number of bits in the current decision, because they affect the state in the future.

(*2 - the details of this depend on the back end of the coder; for example if your offset coder is very simple, something like just Golomb type (NOSB) coding, then you know that only the shortest offset for a given length needs to be considered; as another simplification (used in LZMA), only the longest length for a given offset is considered; in some coders it helps to consider shorter length choices as well; in general for a match of Length L you need to consider all lengths in [2,L] but in practice you can reduce that large set by picking a few "inflection points" (perhaps more on this some day)).

Okay, a few more generalities. Let's revisit the LZSS backwards optimal parser. It came from a forward style parser, which we can implement with "dynamic programming" ; like this :


At pos P , consider the set of possible coding choices {C}

For each choice (ci), find the cost of the choice, plus the cost after that choice :
{

  Cost to end [ci] = Current cost of choice C [ci] + Best cost to end [ P + C[ci].len ]

}

choose ci as best Cost to end
Best cost to end[ P ] = Cost to end [ best ci ]

You may note that if you do this walking forward, then the "Best cost to end" at the next position may not be computed yet. If so, then you suspend the current computation and step ahead to do that, then eventually come back and finish the current decision.

Of course with LZSS the simpler way to do it is just to parse backwards from the end, because that ensures the future costs are already done when you need them. But let's stick with the forward parse because we need to introduce adaptive state.

The forward parse LZSS (with no state) is still O(N) just like the backward parse (this time cost assumes the string matching is free or previously done, and that you consider a fixed number of match choices, not proportional to the number of matches or length of matches, which would ruin the O(N) property) - it just requires more book keeping.

In full detail a forward LZSS looks like this :


Set "best cost to end" for all positions to "uncomputed"

Push Pos 1 on stack of needed positions.

While stack is not empty :

pop stack; gives you a pos P

If any of the positions that I need ( P + C.len ) are not done :
{
  push self (P) back on stack
  push all positions ( P + C.len ) on stack
    in order from lowest to highest pos
}
else
{
  make a choice as above and fill "best cost to end" at pos P
}

If you could not make a choice the first time you visit pos P, then because of the order that we push things on the stack, when you come back and pop P the second time it's guaranteed that everything needed is done. Therefore each position is visited at most twice. Therefore it's still O(N).

We push from lowest to highest len, so that the pops are highest pos first. This makes us do later positions first; that way earlier positions are more likely to have everything they need already done.

Of course with LZSS this is silly, you should just go backwards, but we'll use it to inspire the next step.

To be continued...

12/12/2011

12-12-11 - Things I want and cannot find

1. A true "sauna" near Seattle. A hot room right next to a lake, so I can steam it up and then go jump in the cold lake. We're in the perfect place for it, we have lots of nice cold swimmable lakes, and there are tons of Swedes around here, and yet I can't find one.

There are plenty of "saunas" at spas and health clubs, but without the lake swim it's fucking bullshit. I imagine some rich guy has one at his lakefront home, but I can't get the hookup.

2. A secluded cabin rental. I like the idea of going out in the woods and writing code alone. I can wear some flannel and chop wood for the fire like a real Northwesterner. But all the cabin rentals I can find are in sort of "cabin communities" or near a highway, or some shit. I want a place where you can look out of the big picture window and just see scenery for miles.

3. A good river swim. I found a bunch in CA but I can't find any up here. An ideal river swim has a nice deep "hole" due to rocks or waterfall or something. It should be a 4-5 mile hike from the closest parking to cut down on traffic (or be up a rough road or something, not just right off a highway). Ideally it should not be straight out of snow melt so that it's not ball-breaking freezing cold even in the middle of summer.

4. A nice place to ride. A country lane, no cars, good pavement. God damn I miss this.

12-12-11 - Sense

One of the most important skills in an employee is the sense to know when to ask for help and when not to. To know when they should just make a decision on their own vs. ask to make sure their choice is okay. To know when they need to call a meeting about something vs. when not to disturb others about it.

It's incredibly rare actually to find someone who has the sense to get this just right. I think it's very undervalued.

When you're a manager, the most awesome thing you can have is an employee you can trust. That means no unpleasant surprises. If you give them a task, it will be done on time, or you will be notified with enough notice to take action. You won't find out that they're slipping when it's too late to do anything about it. You won't have them claim to be done and then upon inspection find out that they've done it all wrong. You can just assign the task off and then you don't have to worry about it any more. You don't have to follow up and keep pinging them for status updates.

Someone with a great deal of sense will just know to give you status updates at the appropriate intervals. Not too often that they waste your time, but not too infrequently - they should always come in just before you start wondering "WTF happened to this task?".

One of the most crucial things is knowing what decisions they need to get approval for. It sucks to have an employee who asks about every little thing. "should I put this button here or here? should I make another file for this code or put it in this file?" Just make a fucking decision yourself, I don't care! But it also sucks to have someone go off and do all kinds of crazy shit without asking, like "oh yeah I ripped out the old animation system and am doing a new one" ; uh, you did what? and you didn't ask me first? WTF. Both are very common.

Of course the definition of the "right amount of approval" depends on the manager, and a key part of having good "sense" is actually social adaptation - it's about adapting to your situation and learning what is wanted of you. Many of the type-A left-brain coders never get this; part of your job as an employee is always interacting with other human beings, even if it's only with your boss, and there is no rational absolute answer about the right way to communicate, you have to feel it out and adapt.

Of course part of the role of a good manager is to teach these things, and to help people who may have good skills but not much "sense".

It's actually more annoying in personal life than in business life. For example you're having a dinner party and somebody volunteers to bring the wine, and then they show up with none, or they show up with a box of ripple. WTF dude, I could have just gotten it myself, if you're going to drop the ball, you need to notify someone with sufficient warning.

The annoying thing about the non-business world is you can't check up on them; like "hey can you give me a status update on that wine purchasing?" because you would be considered a huge dick.


A lot of this goes along with what I call "basic professionalism". Like if I assign you a crucial task that I need done today, don't go home without checking in with me and telling me it's done or not. If you think I assigned you too much and you can't get it done in time, don't go pout, come and tell me about it.

Another aspect of "basic professionalism" is knowing when to shut up. Like if you think the company is going in the wrong direction - raise the issue to your managers, that's good, if you have a good boss they want that feedback. But after they call a meeting and everyone disagrees with you and the decision is made to go on the path you don't like - it's time to shut up about it. We don't want to hear complaints every day.

A related aspect is knowing who it's appropriate to say things to. When we have someone from the publisher touring the studio, that is not the time to point out that you don't like the design of the lead character.

"Basic professionalism" is sort of a level below having good "sense" but it's also actually surprisingly hard to find.


One of the worst situations is to have someone who is not great about "sense" or "basic professionalism" but is touchy about it. Most people are not perfect on these points, and that's okay, but if you're not then you need a certain amount of supervision. That's just the way work gets done, but some people act like it's a personal affront to be monitored.

Like they occasionally drop the ball on tasks, you decide, okay I just have to ask for daily status reports. Then they get all pissy about it, "don't you trust me" or it's "too much bureaucracy" blah blah.

Or if they don't come to you and ask questions at the appropriate time, then you have to pre-screen all their approaches. Like sometimes you assign them a task and they'll just go off and start doing it wrong without saying anything. Now what you have to do is when you assign a task you have to say "can you tell me how you're going to approach this?" to make sure they don't say something nutso.

12/09/2011

12-09-11 - Kittens

We want to get a kitten (since we have a stable house now), and I would like to just get a kitten from a home but WTF they don't exist any more.

When I was a kid, every couple of weeks some family in the neighborhood would have kittens and put out a sign. You could go to their house and see the kittens play and pick one. You could see if they were coming from a good home where they got socialized. You could see how old they were to know they weren't separated from their mom at too young an age.

There just isn't any of this anymore. It seems to have all been corporatized into kitten adoption centers.

Yeah yeah yeah you should adopt a deformed adult cat that drools and has mange. No thanks. Part of the reason why I can't just find a normal home to adopt from is all the pet-adoption-nazis make it so that you can't use craigslist (or whatever) to find pets.

Also all the adoption agencies have strict spay/neuter at time of adoption rules. When I was a kid, when we got a cat we sometimes would not spay right away, so that we could get a batch of kittens, keep one, and give the rest away. It was delightful to have a line of generations and a bunch of kittens to play with. That tradition seems to be all gone. The result is that the babies only come from strays or weirdos outside the system. It's sort of like if all law abiding citizens were castrated, then the only children in the world would be from criminals.

Boo.


Also, in other cat news, it turns out our professional cat sitter grossly overfed our cat while we were away in Hawaii. This despite verbal and written instructions on the correct amount to feed her. So she's sickly obese on our return.

How fucking hard is it to follow basic instructions? Jesus christ, I'm trying not to be a rich old crank, but I can't help thinking things like "it's so hard to find good help" and that the poor are poor because they're fucking retarded. (*). Half a cup means fucking half a cup, not "oh, I'll feed her until she stops eating like she's starving". You are the fucking help, you don't get to make your own decisions when I gave you specific orders.

What makes it even worse is that she (the cat sitter) gave us the usual condescending "I know so much about cats" bullshit when we interviewed her. Hey lady, I've watched an episode of the Dog Whisperer too, I'm not impressed by your amateur pet psychiatry.

* = you would think that it should be easy to find someone who could like get your groceries for you, or build you a fence, or pay your bills, or whatever, but it's actually really hard. It's amazing how badly the average person will fuck up the most basic assignments. To get someone that is smart enough that you can trust them to do those things, you have to hire someone in the top 1% of intellects, someone who could make $100k a year. It's actually sort of easy to hire someone really smart who costs a lot of money, and it's easy to hire someone who is just manual labor that you have to constantly supervise, but to hire someone in between that you can trust enough not to supervise but doesn't cost a fortune is hard because people are epic fuck ups.

12/08/2011

12-08-11 - Some Semaphores

In case you don't agree with Boost that Semaphore is too "error prone" , or if you don't agree with C++0x that semaphore is unnecessary because it can be implemented from condition_var (do I need to point out why that is ridiculous reasoning for a library writer?) - here are some semaphores for you.

I've posted a fastsemaphore before, but here's a more complete version that can wrap a base semaphore.


template< typename t_base_sem >
class fastsemaphore_t
{
private:
    t_base_sem m_base_sem;
    atomic<int> m_count;

public:
    fastsemaphore_t(int count = 0)
    :   m_count(count)
    {
        RL_ASSERT(count > -1);
    }

    ~fastsemaphore_t()
    {
    }

    void post()
    {
        if (m_count($).fetch_add(1,mo_acq_rel) < 0)
        {
            m_base_sem.post();
        }
    }

    void post(int count)
    {
        int prev = m_count($).fetch_add(count,mo_acq_rel);
        if ( prev < 0)
        {
            int num_waiters = -prev;
            int num_to_wake = MIN(num_waiters,count);
            // use N-wake if available in base sem :
            // m_base_sem.post(num_to_wake);
            for(int i=0;i<num_to_wake;i++)
            {
                m_base_sem.post();
            }
        }
    }
    
    bool try_wait()
    {
        // see if we can dec count before preparing the wait
        int c = m_count($).load(mo_acquire);
        while ( c > 0 )
        {
            if ( m_count($).compare_exchange_weak(c,c-1,mo_acq_rel) )
                return true;
            // c was reloaded
            // backoff here optional
        }
        return false;
    }
        
    void wait_no_spin()
    {
        if (m_count($).fetch_add(-1,mo_acq_rel) < 1)
        {
            m_base_sem.wait();
        }
    }
    
    void wait()
    {
        int spin_count = 1; // ! set this for your system
        while(spin_count--)
        {
            if ( try_wait() ) 
                return;
        }
        
        wait_no_spin();
    }
    
    
    int debug_get_count() { return m_count($).load(); }
};

when m_count is negative it's the number of waiters (plus or minus people who are about to wait, or about to be woken).

Personally I think the base semaphore that fastsem wraps should just be your OS semaphore and don't worry about it. It only gets invoked for thread wake/sleep so who cares.
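
For example, wrapping the Win32 semaphore as the base sem is trivial (a sketch; error checking omitted) :


// needs <windows.h>
struct base_semaphore_win32
{
    HANDLE m_sem;

    base_semaphore_win32(int count = 0)
    {
        m_sem = CreateSemaphore(NULL,count,MAXLONG,NULL);
    }
    ~base_semaphore_win32()
    {
        CloseHandle(m_sem);
    }

    void post()          { ReleaseSemaphore(m_sem,1,NULL); }
    void post(int count) { ReleaseSemaphore(m_sem,count,NULL); }
    void wait()          { WaitForSingleObject(m_sem,INFINITE); }
};

typedef fastsemaphore_t<base_semaphore_win32> fastsemaphore;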

But you can easily make Semaphore from CondVar and then put fastsemaphore on top of that. (note the semaphore from condvar wake N is not awesome because CV typically doesn't provide wake N, only wake 1 or wake all).
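
The semaphore from condvar is the standard one; in C++0x style it's just (a sketch; post(N) would have to loop the notify_one or use notify_all) :


// needs <mutex> and <condition_variable>
struct base_semaphore_from_condvar
{
    std::mutex              m_mutex;
    std::condition_variable m_cv;
    int                     m_count;

    base_semaphore_from_condvar(int count = 0) : m_count(count) { }

    void post()
    {
        { std::lock_guard<std::mutex> lock(m_mutex); m_count++; }
        m_cv.notify_one();
    }

    void wait()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        while ( m_count <= 0 )
            m_cv.wait(lock);
        m_count--;
    }
};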

Wrapping fastsem around NT's Keyed Events is particularly trivial because of the semantics of the Keyed Event Release. NtReleaseKeyedEvent waits for someone to wake if there is no one. I've noted in the past that a Win32 event is a lot like a semaphore with a max count of 1 ; a problem with building a Semaphore from a normal Event is that if you Set it when it's already Set, you effectively run into the max count and lose your Set, but this is impossible with KeyedEvent. With KeyedEvent you get exactly one wake from Wait for each Release.

So, if we wrap up keyed_event for convenience :


struct keyed_event
{
    HANDLE  m_keyedEvent;

    enum { WAITKEY_SHIFT = 1 };

    keyed_event()
    {
        NtCreateKeyedEvent(&m_keyedEvent,EVENT_ALL_ACCESS,NULL,0);
    }
    ~keyed_event()
    {
        CloseHandle(m_keyedEvent);
    }

    void wait(intptr_t key)
    {
        RL_ASSERT( (key&1) == 0 );
        NtWaitForKeyedEvent(m_keyedEvent,(PVOID)(key),FALSE,NULL);
    }

    void post(intptr_t key)
    {
        RL_ASSERT( (key&1) == 0 );
        NtReleaseKeyedEvent(m_keyedEvent,(PVOID)(key),FALSE,NULL);
    }
};

Then the base sem from KE is trivial :


struct base_semaphore_from_keyed_event
{
    keyed_event ke;

    base_semaphore_from_keyed_event() { }
    ~base_semaphore_from_keyed_event() { }
    
    void post() { ke.post(this); }
    void wait() { ke.wait(this); }
};

(note this is a silly way to use KE just for testing purposes; in practice it would be shared, not one per sem - that's sort of the whole point of KE).

(note that you don't ever use this base_sem directly, you use it with a fastsemaphore wrapper).

I also revisited the semaphore_from_waitset that I talked about a few posts ago. The best I can come up with is something like this :


class semaphore_from_waitset
{
    waitset_simple m_waitset;
    std::atomic<int> m_count;

public:
    semaphore_from_waitset(int count = 0)
    :   m_count(count), m_waitset()
    {
        RL_ASSERT(count >= 0);
    }

    ~semaphore_from_waitset()
    {
    }

public:
    void post()
    {
        m_count($).fetch_add(1,mo_acq_rel);
        m_waitset.notify_one();
    }

    bool try_wait()
    {
        // see if we can dec count before preparing the wait
        int c = m_count($).load(mo_acquire);
        while ( c > 0 )
        {
            if ( m_count($).compare_exchange_weak(c,c-1,mo_acq_rel) )
                return true;
            // c was reloaded
        }
        return false;
    }

    void wait(wait_thread_context * cntx)
    {
        for(;;)
        {
            // could spin a few times on this :
            if ( try_wait() )
                return;
    
            // no count available, get ready to wait
            waiter w(cntx);
            m_waitset.prepare_wait(&w);
            
            // double check :
            if ( try_wait() )
            {
                // (*1)
                m_waitset.retire_wait(&w);
                // pass on the notify :
                int signalled = w.flag($).load(mo_acquire);
                if ( signalled )
                    m_waitset.notify_one();
                return;
            }
            
            w.wait();
            m_waitset.retire_wait(&w);
            // loop and try again
        }
    }
    
    void wait()
    {
        wait_thread_context cntx;
        wait(&cntx);
    }
};

The funny bit is at (*1). Recall before we talked about a race that can happen if two threads post and two other threads pop. If one of the poppers gets through to *1 , it dec'ed the sem but is still in the waitset, one pusher might then signal this thread, which is a wasted signal, and the other waiter will not get a signal, and you have a "deadlock" (not a true deadlock, but an unexpected permanent sleep, which I will henceforth call a deadlock).

You can fix that by detecting if you received a signal while you were in the waitset. That's what's done here now. While it is not completely ideal from a performance perspective, it's a rare race case, and even when it happens the penalty is small. I still don't recommend using semaphore_from_waitset unless you have a comprehensive waitset-based system.

(note that in practice you would never make a wait_thread_context on the stack as in the example code ; if you have a waitset-based system it would be in the TLS)

Another note :

I have mentioned before the idea of "direct handoff" semaphores. That is, making it such that thread wakeup implies you get to dec count. For example "base_semaphore_from_keyed_event" above is a direct-handoff semaphore. This is as opposed to "optimistic" semaphores, in which the wakeup just means "you *might* get to dec count" and then you have to try_wait again when you wake up.

Direct handoff is neat because it guarantees a minimum number of thread wakeups - you never wake up a thread which then fails to dec count. But they are in fact not awesome. The problem is that you essentially have some of your semaphore count tied up in limbo while the thread wakeup is happening (which is not a trivial amount of time).

The scenario is like this :


1. thread 1 does a sem.wait

2. thread 2 does a sem.post 
  the sem is "direct handoff" the count is given to thread 1
  thread 1 starts to wake up

3. thread 3 (or thread 2) now decides it can do some consuming
  and tries a sem.wait
  there is no sem count so it goes to sleep

4. thread 1 wakes up and processes its received count

You have actually increased latency to process the message posted by the sem, by the amount of time between steps 3 and 4.

Basically by not pre-deciding who will get the sem count, you leave the opportunity for someone else to get it sooner, and sooner is better.

Finally let's have a gander at the Linux sem : sem_post and sem_wait

If we strip away some of the gunk, it's just :


sem_post()
{

    atomic_add( & sem->value , 1);

    atomic_full_barrier (); // (*1)

    int w = sem->nwaiters; // (*2)

    if ( w > 0 )
    {
        futex_wake( & sem->value, 1 );  // wake 1
    }

}

sem_wait()
{
    if ( try_wait() ) return;

    atomic_add( & sem->nwaiters , 1);

    for(;;)
    {
        if ( try_wait() ) break;

        futex_wait( & sem->value, 0 ); // wait if sem value == 0
    }

    atomic_add( & sem->nwaiters , -1);
}

Some quick notes : I believe the barrier at (*1) is unnecessary ; they should be doing an acq_rel inc on sem->value instead. However, as noted in the previous post about "producer-consumer" failures, if your producer is not strongly synchronized it's possible that this barrier helps hide/prevent bugs. Also at (*2) in the code they load nwaiters with plain C which is very sloppy; you should always load lock-free shared variables with an explicit load() call that specifies memory ordering. I believe the ordering constraint there is the load of nwaiters needs to stay after the store to value; the easiest way is to make the inc on value be an RMW acq_rel.
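
That is, I believe the post should look more like this (a sketch, writing the atomics loosely in the same style) :


sem_post()
{
    // acq_rel RMW : the release publishes the produced data,
    // the acquire keeps the nwaiters load below from moving up :
    atomic_fetch_add( & sem->value , 1 , acq_rel );

    int w = atomic_load( & sem->nwaiters , acquire );

    if ( w > 0 )
    {
        futex_wake( & sem->value, 1 );  // wake 1
    }
}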

The similarity with waitset should be obvious, but I'll make it super-clear :


sem_post()
{

    atomic_add( & sem->value , 1);
    atomic_full_barrier ();

    // waitset.notify_one :
    {
        int w = sem->nwaiters;
        if ( w > 0 )
        {
            futex_wake( & sem->value, 1 );  // wake 1
        }
    }
}

sem_wait()
{
    if ( try_wait() ) return;

    // waitset.prepare_wait :
    atomic_add( & sem->nwaiters , 1);

    for(;;)
    {
        // standard double-check :
        if ( try_wait() ) break;

        // waitset.wait()
        // (*3)
        futex_wait( & sem->value, 0 ); // wait if sem value == 0
    }

    // waitset.retire_wait :
    atomic_add( & sem->nwaiters , -1);
}

It's exactly the same, but with one key difference at *3 - the futex wait does not happen if count is nonzero, so we never go to sleep when there is count available to take, and we can't receive a wakeup from futex_wake that we don't need. This removes the need for the re-pass that we had in the waitset semaphore.

This futex semaphore is fine, but you could reduce the number of atomic ops by storing count & waiters in one word.
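
A sketch of that packed variant (untested; futex_wait/futex_wake here are thin wrappers on the Linux futex syscall, and count & waiters are each assumed to fit in 16 bits) :


#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <atomic>

static void futex_wait(void * addr, unsigned int expected)
{
    syscall(SYS_futex, addr, FUTEX_WAIT, expected, NULL, NULL, 0);
}
static void futex_wake(void * addr, int count)
{
    syscall(SYS_futex, addr, FUTEX_WAKE, count, NULL, NULL, 0);
}

struct packed_semaphore
{
    // low 16 bits = count , high 16 bits = waiters :
    std::atomic<unsigned int> m_state;

    packed_semaphore(int count = 0) : m_state(count) { }

    void post()
    {
        // one RMW both incs count and tells us if anyone is waiting :
        unsigned int prev = m_state.fetch_add(1, std::memory_order_acq_rel);
        if ( prev >> 16 )
            futex_wake(&m_state, 1);
    }

    void wait()
    {
        unsigned int s = m_state.load(std::memory_order_acquire);
        for(;;)
        {
            if ( s & 0xFFFF )
            {
                // count available : try to take it
                if ( m_state.compare_exchange_weak(s, s-1, std::memory_order_acq_rel) )
                    return;
            }
            else if ( m_state.compare_exchange_weak(s, s + 0x10000, std::memory_order_acq_rel) )
            {
                // registered as a waiter in the same word ;
                // sleep only if the whole word is still unchanged :
                futex_wait(&m_state, s + 0x10000);
                s = m_state.fetch_sub(0x10000, std::memory_order_acq_rel) - 0x10000;
            }
            // on CAS failure s was reloaded ; loop
        }
    }
};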

12/05/2011

12-05-11 - Surprising Producer-Consumer Failures

I run into these a lot, so let's have a quick glance at why they happen.

You're trying to do something like :


Thread1 :

Produce 1
sem.post

Thread2 :

Produce 2
sem.post

Thread 3 :

sem.wait
Consume 1

Thread 4 :

sem.wait
Consume 2

and we assert that the Consume succeeds in both cases. Produce/Consume use a queue or some other kind of lock-free communication structure.

Why can this fail ?

1. A too-weak semaphore. Assuming our Produce and Consume are lock-free and not necessarily synchronized on a single variable with something strong like an acq_rel RMW op, we are relying on the semaphore to synchronize publication.

That is, in this model we assume that the semaphore has something like an "m_count" internal variable, and that both post and wait do an acq_rel RMW on that single variable. You could certainly make a correct counting semaphore which does not have this behavior - it would be correct in the sense of controlling thread flow, but it would not provide the additional behavior of providing a memory ordering sync point.

You usually have something like :


Produce :
store X = A
sem.post // sync point B

Consume:
sem.wait // sync point B
load X  // <- expect to see A

you expect the consume to get what was made in the produce, but that is only guaranteed if the sem post/wait acts as a memory sync point.

There are two reasons I say sem should act like it has an internal "m_count" which is acq_rel , not just release at post and acquire at wait as you might think. One is you want sem.wait to act like a #StoreLoad, so that the loads which occur after it in the Consume will see preceding stores in the Produce. An RMW acq_rel is one way to get a #StoreLoad. The other is that by using an RMW acq_rel on a single variable (or behaving as if you do), it creates a total order on modifications to that variable. For example if T3 sees T1.post and T2.post and then does its T3.wait , T4 cannot see T1.post, T3.wait, T4.wait or any other funny order.

Obviously if you're using an OS semaphore you aren't worrying about this, but there are lots of cases where you use this pattern with something "semaphore-like" , such as maybe "eventcount".

2. You're on POSIX and forget that sem.wait has spurious wakeups on POSIX. Oops.

3. Your queue can temporarily appear smaller than it really is.

Say, as a toy example, adding a node is done something like this :


new_node->next = NULL;

old_head = queue->head($).exchange( new_node );
// (*)
new_node->next = old_head;

There is a moment at (*) where you have truncated the queue down to 1 element. Until you fix the next pointer, the queue has been made to appear smaller than it should be. So pop might not get the items it expects to get.

This looks like a bad way to do a queue, but actually lots of lock free queues have this property in more or less obvious ways. Either the Push or the Pop can temporarily make the queue appear to be smaller than it really is. (for example a common pattern is to have a dummy node, and if Pop takes off the dummy node, it pushes it back on and tries again, but this causes the queue to appear one item smaller than it really is for a while).

If you loop, you should find the item that you expected in the queue. However, this is a nasty form of looping because it's not just due to contention on a variable; if in the example above the thread is swapped out while it sits at point (*), then nobody can make progress on this queue until that thread gets time.

The result I find is that ensuring that waking from sem.wait always implies there is an item ready to pop is not worth the trouble. You can do it in isolated cases but you have to be very careful. A much easier solution is to loop on the pop.
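
That is, the consumer side looks something like this (a sketch; try_pop is whatever non-blocking pop your queue provides) :


for(;;)
{
    sem.wait();

    // the wakeup means an item was pushed, but the queue can
    // transiently look smaller than it really is (case 3 above),
    // so loop until the pop succeeds :
    node * item;
    while ( ! queue.try_pop(&item) )
    {
        // optional : backoff/yield here
    }

    consume(item);
}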

12/03/2011

12-03-11 - Worker Thread system with reverse dependencies

In the previous episode we looked at a system for doing work with dependencies.

That system is okay; I believe it works, but it has two disadvantages : 1. It requires some non-standard synchronization primitives such as OR waits, and 2. There is a way that it can fail to do work as soon as possible; that is, there is the possibility for moments when work could be done but the worker that could do it is asleep. It's one of our design goals to not let that happen so let's see why it happens :

The problem basically is the NR (not ready) queue. When we have no RTR (ready to run) work, we popped one item from the NR queue and waited on its dependencies. But there could be other items later in the NR queue which become ready sooner. If the items in the NR queue become ready to run in order, this doesn't occur, but if they can become ready in different orders, we could miss out on chances to do work.

Anyhoo, both of these problems go away and everything becomes much simpler if we reformulate our system in terms of "forward dependencies" instead of "backward dependencies".

Normal "dependencies" are backwards; that is, A depends on B and C, which were created earlier in time. The opposite direction link I will call "permits" (is there a standard term for this?). That is, B and C permit A. A needs 2 permissions before it can run.

I propose that it is conceptually easier to set up work in terms of "dependencies", so the client still formulates work items with dependencies, but when they are submitted to be run, they are converted into "permissions". That is, A --> {B,C} is changed into B --> {A} and C --> {A}.

The main difference is that there is no longer any "not ready" queue at all. NR items are not held in any global list, they are only pointed to by their dependencies. Some dependency back in the tree should be ready to run, and it will then be the root that points through various NR items via permission links.

With no further ado, let's look at the implementation.

The worker thread becomes much simpler :


worker :

wait( RTR_sem );

pop RTR_queue and do work

that's it! Massively simpler. All the work is now in the permissions maintenance, so let's look at that :

How do we maintain permissions? Each item which is NR (not ready) has a (negative) count of the # of permissions needed before it can run. Whenever an item finishes, it walks its permission list and incs the permit count on the target item. When the count reaches zero, all permissions are done and the item can now run.

A work item now has to have a list of permissions. In my old system I had just a fixed size array for dependencies; I found that [3] was always enough; it's simply the nature of work that you rarely need lots of dependencies (and in the very rare cases that you do need more than 3, you can create a dummy item which only marks itself complete when many others are done). But this is not true for permissions, there can be many on one item.

For example, a common case is you do a big IO, and then spawn lots of work on that buffer. You might have 32 work items which depend on the IO. This only needs [1] when expressed as dependencies, but [32] when expressed as permissions. So a fixed size array is out and we will use a linked list.
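
So the item itself looks something like this (a sketch of the fields the code below assumes; "slist" is a placeholder for any singly-linked list, and the per-item lock is conceptual, per the note at the end about indexed locks) :


struct work_item
{
    void (*func)( work_item * wi ); // the actual work to do

    std::atomic<int> permits;   // negative # of permissions still needed ; 0 = ready to run

    // these are protected by the per-item lock :
    bool is_done;
    slist<work_item *> permits_list; // the items that this item permits
};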

The maintenance looks like this :


submit item for work :

void submit( work_item * wi , work_item * deps[] , int num_deps )
{

    wi->permits = - num_deps;

    if ( num_deps == 0 )
    {
        RTR_queue.push( wi );
        RTR_sem.post();
        return;
    }

    for(int i=0;i<num_deps;i++)
    {
        deps[i]->lock();

        if ( ! deps[i]->is_done )
        {
            deps[i]->permits_list.push( wi );
        }
        else
        {
            int prev = wi->permits.fetch_add(1); // needs to be atomic
            if ( prev == -1 ) // permitted (do this also if num_deps == 0)
            {
                RTR_queue.push( wi );
                RTR_sem.post();
            }
        }

        deps[i]->unlock();
    }

}


when an item is completed :

void complete( work_item * wi )
{
    wi->lock();

    set wi->is_done

    swap wi->permits_list to local permits_list

    wi->unlock();

    for each p in permits_list
    {
        int prev = p->permits.fetch_add(1);

        if ( prev == -1 )
        {
            // p is now permitted

            RTR_queue.push( p );
            RTR_sem.post();
        }
    }
}

the result is that when you submit not-ready items, they go into the permits list somewhere, then as their dependencies get done their permits count inc up towards zero, when it hits zero they go into the RTR queue and get picked up by a worker.

The behavior is entirely the same as the previous system except that workers who are asleep because they have no RTR work can wake up when any NR item becomes RTR, not just when the single one they popped becomes RTR.

One annoyance with this scheme is you need to lock the item to maintain the permits_list ; that's not really a big deal (I use an indexed lock system similar to Nt Keyed Events, I don't actually put a lock object on each item), but I think it's possible to maintain that list correctly and simply lock free, so maybe we'll revisit that.

ADDENDUM : hmm , not easy to do lock free. Actually maintaining the list is not hard, and even doing it and avoiding races against the permitted count is not hard, the problem is that the list is in the work item and items can be deleted at any time, so you either need to hold a lock on the item to prevent deletion, or you need something like RCU or SMR.

12-03-11 - RAD - Hawaii Branch

It's a pretty nice place to work. The ergonomics of the picnic table are not half bad actually. Very glad I brought my keyboard; wish the laptop screen was bigger.

12/02/2011

12-02-11 - Natural Expression

It's so nice when you find the "natural" way to express a coding problem. All of a sudden everything becomes so much simpler and the answers just start popping out at you. Like oh, and I can do this here, and this automatically happens just the way I wanted. Tons of old code just disappears that was trying to solve the problem in the "un-natural" way.

It doesn't change the code; in the end it all becomes assembly language and it can do the same thing, but changing the way you write it can change the way you think about it. Also when you find a simple elegant way to express things, it sort of makes it feel "right", whereas if you are getting the same thing done through a series of kludges and mess, it feels horrible, even though they are accomplishing the same thing.

It reminds me of physics. I think some of the greatest discoveries the past century in physics were not actually discoveries of any phenomenom, but just ways to write the physics down. In particular I cite Dirac's Bra-Ket notation and Feynman's path integrals.

Neither one added any new physics. If you look at it in a "positivist" view point, they did nothing - the actual observable predictions were the same. The physics all existed in the equations which were already known. But they opened up a new understanding, and just made it so much more natural and easier to work with the equations, and that can actually have huge consequences.

Dirac's bra ket for example made it clear that quantum mechanics was about Hilbert spaces and Operators. Transformation between different basis spaces became a powerful tool, and very useful and elegant things like raising and lowering operators popped out. Quantum mechanics at the time was sort of controversial (morons like Einstein were still questioning it), and finding a clear elegant solid way to write it down made it seem more reasonable. (physicists have a semi-irrational distrust of any physical laws that are very complicated or vague or difficult to compute with; they also have a superstition that if a physical law can be written in a very concise way, it must be true; eg. when you write Maxwell's equations as d*F = J).

Feynman's path integrals came along just at a time when Quantum Field Theory was in crisis; there were all these infinities which made the theory impossible to calculate with. There were some successful computations, and it just seemed like the right way to extend QM to fields, so people were forging ahead, but these infinities made it an incomplete (and possibly wrong) theory. The path integral didn't solve this, but it made it much easier to see what was actually being computed in the QFT equations - rather than just a big opaque integral that becomes infinity and you don't know why, the path integral lets you separate out the terms and pretend that they correspond to physical particles flying around in many different ways. It made it more obvious that QFT was correct, and what renormalization was doing, and the fact that renormalization was a physically okay way to fix the infinities.

(while I say this is an irrational superstition, it has been the fact that the laws of physics which are true wind up being expressible in a concise, elegant way (though that way is sometimes not found for a long time after the law's discovery); most programmers have the same superstition, when we see very complex solutions to problems we tend to turn up our noses with distaste; we imagine that if we just found the right way to think about the problem, a simple solution would be clear)

(I know this history is somewhat revisionist, but a good story is more important than accuracy, in all things but science)

Anyhoo, it's nice when you get it.
