06-30-09 - Yaaaarg

Still no internet at home and Comcast has remotely rebooted my modem three more times. They've used up all the minutes on my pay-go phone so I also have no phone. Now I have to wait around at home for a technician to come again.

I'm going from being on hold with Comcast and navigating the infuriating automated menu (and my usual trick of just slamming on buttons to get to an operator doesn't work in their system), to sitting tense at home as upstairs neighbor stomps around on my head, to disputing health care bills or going to another fucking doctor or PT appointment, to commuting in the goddamn fucking traffic. I want to scream and/or punch someone. I am a ball of sadness and rage.

I've been a total dick to the people I talk to at Comcast; I don't mean to be, but the fucking automated menu system is so fucking awful, I'm in a complete rage by the time I get through to someone. Some of the amusing things it's done :

Navigate through the menus to the choice for "problems with internet". I finally get there and it just plays a recording "for customer support please dial XXXX" and hangs up. Of course that number was the one I had dialed. Yum.

When you actually get through to the right place for problems it plays a recording "most problems can be solved by rebooting your modem, we will do so now, then play music for 30 seconds while it powers up" WHAT !? NO !? STOP!! And you can't hit any buttons to escape out of that - and in fact if you mash on buttons trying to get past it, one of the buttons apparently is "please reboot me again" and it starts over that cycle.

I remember back in SLO when I had a similar kind of problem with my cable modem, it took like a week of talking to frustrating morons before I finally got escalated to a "stage 2 technician" (or was it a third stage guild navigator?) and then it was fixed in one day. In SLO it turns out the problem was some backbone IP conflict that was spoofing my cable modem's public address (or something, I don't really know shit about the net). The point is that it had nothing to do with the wires and wasn't anything anybody needed to come to my house for. All it took was an actual computer guy to look into the cable-side networking problem.

(ADDENDUM : someone from that "comcast cares" email address mailed me back within about 6 hours. I have a technician coming out today theoretically so we'll see what they say; of course the connection has been fine so far today. Everyone I've actually gotten to talk to at Comcast has been pretty nice and reasonable, it's just so aggravating how hard it is to get through the system to talk to someone).

On the phone with HealthNet you get 60 seconds of "you can also view your claims and benefits online" FUCK YOU I KNOW THAT. Fortunately with HealthNet the system of frantically mashing buttons does get you through to a human.

HealthNet has been really horrible, I recommend against them as I recommend against Comcast. They have a policy of approving providers for PPO on an individual basis, which means when you go into an office which is "in network" you have to check every single individual person. Somehow over and over I keep getting treated by the one person in an office who's not "preferred". I'm not sure if the fucking providers are doing this on purpose (MTI and Olympic PT, you fuckers), or if it's just bad luck. It is a financial boon for the providers to fuck you in this way because it removes the contractual limit on charges. Early on I made the mistake of trusting that when I went to an office and they got my health insurance info in advance and told me it was fine and I would be covered that it meant they would give me to a provider that was preferred. LOL.

I'm also frustrated and annoyed with my lack of progress on Oodle. At times I feel like I'm writing some really good code, and lots of it, but when I step back and look at what I've got done in the last 8 months, I'm not happy with where I am. I'm literally doing nothing but working, I basically have no friends (that I see), no hobbies, I never go out or do anything but work and take care of fucking errands and todos and desperately try to sleep. And yet I feel like I'm not working enough. Part of the problem is definitely all the fucking PT which is such a huge time sink and distraction. I'm tempted to just say fuck it and give up on my body because it's a frustrating annoyance, and the stress of it all is half undoing any progress I make.

In the voice of Mark from Peep Show : "that's what I need, to sink my teeth into a double helping of work".

06-30-09 - Nissan 370Z

I finally went and drove a 370Z this morning. I skipped it in the earlier car testing because I think it's really ugly and I thought the G37 was just a better version of the same thing. I was wrong, the 370Z is fucking fantastic.

It feels small and tight and powerful, the steering feel and response are excellent, the throttle is responsive. Obviously I didn't push it too hard on a test drive, but it just felt *fun* unlike any of the other cars I've driven. It's the lightest car I've driven at around 3200 pounds (still no featherweight by a longshot, but 300 less than the 135 and 400 less than the G37 and 800 less than the Audi RS4 or S5), and you can feel the difference.

There are some ridiculous things about it though. The seats suck, they're not very adjustable and they're just uncomfortable and cheap. If you lean your head back on the headrest, you can feel the lower inside bars of the headrest get levered forward and poke you in the back. The seats have sporty side flanges to pinch you, but they're not adjustable and are apparently made for very narrow people (the BMW's for example have power adjustable seat pinchers that lock you in very nicely).

The visibility is just absurd. Like dangerous. In fact I seriously think it should be illegal to make cars with such bad visibility. There's not really so much of a "blind spot" as you just can't look behind you *at all*. You have to use your side mirrors. The rear window is tiny, and the extreme angle of it means that you get bigtime fresnel effect so the glass is more reflective, and you actually can see the contents of your own trunk better than the cars behind you. Parallel parking in it would have to be done by closing your eyes and using The Force. Even ignoring the rearward visibility problems, the front and side visibility isn't awesome either. Those windows are small too, and the hood and doors feel very high.

This is a common trend on lots of modern cars. The proper proportion for a car (says me) is about 50/50 body panel height to cabin glass height (more like 55/45 is ideal). Lots of modern cars are more like 65/35 or even 75/25. The hoods and doors are too high, you can't see the ground around you; it's claustrophobic and shitty. I think part of the reason is safety vs. the fucking big trucks and SUVs with their high bumpers, but I also think some people like the styling of it.

The 370Z also has lots of shitty plastic fit & finish. Whatever, I don't really care about that. The ergonomics of the console are actually better than BMW or Audi. The buttons are pretty simple and right at your hand and easy to use and well labelled. One thing that pisses me off in many new cars is fucking (+) and (-) buttons for things like air vent speed or stereo volume. That's fucking retarded. They need to be dials or wheels or knobs. I've ranted about this before, but look people - you want a few things from a control like that - you want the full range to be displayed physically, you want it to be easy to go instantly to any position, you want it to be friendly to muscle memory so that you can do it with your eyes away from it, and you need to be able to read the setting from the control. All of that is satisfied by the well known device of the knob (and I mean an absolute knob with the range and current value marked, not a digital knob that just spins forever in both directions). Stop fucking trying to reinvent the knob, you fail. Anyway, as cheap as the 370Z interior is, it actually has knobs FTW.

(while I'm ranting about knobs - fucking push button toggles are awful too, especially when the only indicator of what state it's in is a dim LED that you can't tell if it's lit or not on a sunny day; switches should be actual physical switches, for all the same very obvious reasons - you can tell the state it's in by touching it with your eyes closed, and you get physical feedback so you know when you've switched it, and aside from all that it just feels cool. It's fucking awful that shit like the "M button" or the traction control buttons are little digital push button toggles; they should be big giant lever switches like something Nikola Tesla would switch, so that when you are ready to tear it up you say "engage!" and flip a big fucking switch). Anyhoo, back on track...

It has a little more headroom than the G37, but less than the 135. I can just barely sit all the way up in the 370Z but it's certainly not comfortable to just sit in. The interior and trunk space is in general just ridiculously small, like you would have trouble getting a big load of groceries in it, and if you ever buy even the smallest piece of furniture you have to have it delivered.

So, I'm a little torn. It's the first car I've driven that got me hot and bothered when I floored it around a corner, but the bad seats and shit visibility are pretty big problems. It is $10k cheaper than a G37 or 135 so I guess that goes in the equation somewhere. Maybe I could spend that $10k getting the body panels replaced with a Jaguar E-Type body.

In general I'm disappointed with all these new cars. The engines in all of them are phenomenal, but all the other bits are stupidly messed up. In many ways they are all inferior to my old Prelude, which has delightful big windows and very simple functional ergonomically friendly dash controls, and fantastic raw steering feel, and everything is delightfully manual and non-computer-involved and simple. After every one of these fancy car test drives, I've been pleasantly happy to get back in the Lude. Granted, after mashing the hungry throttle of a modern engine the Lude feels incredibly slow and impotent, but at least I have a nice view and no fucking beeping while I take 10 seconds to accelerate to 60.

I suppose I shouldn't be surprised. In my dreams people would just take their product and only change it in ways that are definite improvements, and leave alone the things that work just fine, but of course they don't do that. It all makes me think of MS Office or Visual Studio or something. Yes, the newer versions are fundamentally much better at their core, they have better engines and some features that I really want, but they also completely fucking change the user interface and move the menus around and rename things for no god damn reason, and add all kinds of bells and whistles that I don't want and just kludge it all up. Modern cars remind me of that.

A 370Z

Psyche! That's what it should look like.

This is what it does look like.


06-29-09 - Blurg

My internet has been out at home for four days now. Arg Comcast. I've spent probably an hour on the phone with them, partly because they refuse to give me a direct number to tech support. They're so evasive about it. I ask for the number for tech support and they say "okay" and then give me the number for general customer support. I say "umm no, that's the main number for customer service, I want the number for tech support" and they say "that is the number for tech support" ; one time I said "are you refusing to give me the number to tech support" and the guy was like "no, I have given you the number for tech support". Yeesh.

This Cable Modem Troubleshooting Tips page is pretty good. With a Surfboard you can go to the modem's diagnostic page and see your own status and event logs. Of course good luck getting to talk to a tech support person who understands any of that. The new thing at Comcast is apparently rebooting your modem. They just love rebooting your modem (they can do it by remote control). Any time you call up, it's "I'm going to reboot your modem, please wait", which takes two minutes and does absolutely nothing. I try to explain that I've done that myself many times, to no avail.

Last Friday I was stuck in the elevator at RAD for about an hour. I got in with Mike and pressed the button and it took us up as usual, and then just stopped. None of the buttons did anything, it refused to budge. Turns out it stopped just a few inches away from the floor, and because it wasn't lined up it refused to open the doors. We waited about an hour for the technicians to show up and pry the doors open. One of the weirdest things about the whole experience to me was that when the technicians showed up they didn't say "hello" or anything at all, there was no "hello, is everyone okay in there?" or "we're going to pry the door open now" - nothing. They just started banging on the door, then one of them climbed through the roof and jumped onto the top of the elevator with a huge BANG. I was like WTF, warn us that you're about to jump onto the fucking top of the elevator. It was weird.

Game Angst has a pretty good article about the problems of deferred shading that nobody talks about. I'm a fan of deferred shading in theory, but he has a good point. There are other problems - for example it makes the pipeline for alpha vs. non-alpha completely different (presumably you would render alpha stuff onto your scene using forward shading after the deferred shading is done). Also, it basically means you have to use the same shader on everything, or that you can't use many different types of shaders that take different parameters. Like if you wanted your main characters to be rendered with very nice spherical harmonic lighting and the rest of your stuff to be lit with N*L , you can't really do that with deferred shading. (obviously the uniformity and unification is one of the appealing things about deferred shading). Another one that popped into my head while reading that is the hacky semi-Lambertian falloff. Instead of just doing Clamp[ N*L ] , it's nice to do Clamp[ (N*L + C)/(1 + C) ] , which serves to let the light go "around the edge" a little. By giving the artists control of C per object you get a very cheap way to improve lighting, but with deferred shading you'd have to add an extra attribute channel which is absurd.
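That semi-Lambertian falloff is trivial to write down; here's a minimal sketch in plain Python (the function name is mine - in a real renderer this would be one line of shader code per pixel):

```python
def wrap_diffuse(n_dot_l, c):
    # Semi-Lambertian "wrap" falloff : Clamp[ (N*L + C) / (1 + C) ].
    # c = 0 reduces to plain Lambert Clamp[ N*L ]; larger c lets the
    # light reach a bit past the terminator instead of cutting off hard.
    return max(0.0, min(1.0, (n_dot_l + c) / (1.0 + c)))
```

For example with c = 0.5, a surface facing slightly away from the light (N*L = -0.25) still gets about 0.17 instead of 0, which is exactly the "around the edge" softening; and that per-object c is the extra attribute channel a deferred shader would have to carry.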

I keep looking at apartments. I had to pay July rent, so now I have a lot of time, so I thought I'd go ahead and try the "Secretary Solution". See the Bruss article "The art of a right decision" . The basic idea is to look at places for a while and not take any of them. Then when you hit a certain preset time, you switch modes and then take the first place which is the best seen so far. As I noted before, this maximizes the chance of getting the best place, but it doesn't maximize average real EV and when it fails it can be arbitrarily bad. Anyway, I plan to look without taking for another week or so, and then switch modes.
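The look-then-leap rule is easy to sanity check in simulation. This is a sketch (names and parameters are mine) of the classical version: reject the first n/e candidates, then take the first one that beats everything seen so far.

```python
import math
import random

def picks_best(n, cutoff, rng):
    # Candidates arrive in random order; rank n-1 is the single best one.
    ranks = list(range(n))
    rng.shuffle(ranks)
    best_seen = -1
    for i, r in enumerate(ranks):
        if i < cutoff:
            best_seen = max(best_seen, r)   # look phase : never take
        elif r > best_seen:
            return r == n - 1               # leap : first best-so-far
    return ranks[-1] == n - 1               # ran out : stuck with the last one

def success_rate(n=50, trials=20000, seed=1):
    rng = random.Random(seed)
    cutoff = round(n / math.e)              # the classical ~37% cutoff
    return sum(picks_best(n, cutoff, rng) for _ in range(trials)) / trials
```

The success rate comes out near 1/e ≈ 37%, i.e. you land the single best place about a third of the time - and as noted above, when it misses it can miss arbitrarily badly.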

I looked at this Grand Apartment in 1903 Mansion . It's fucking amazing. The kitchen and the floor plan are perfect - it's almost exactly what I would design myself, which is saying a lot. The kitchen is huge and open, but not too open - it's got counters and islands separating it from the living room (I hate places where the kitchen directly spills into the living room with no barrier). The big problem with this place is the location. It's on a really shitty street on the west side of Broadway (the east side of Broadway is the good part). The street has a bunch of halfway houses for men on parole; those are actually the good neighbors - the halfway houses have curfews and the guys are very quiet because they have rules and they don't want to get in trouble. The bad neighbors are all the other shitty apartments full of broke punk kids. Anyway, it's another place like the Duplex that kind of presents a quandary for me - it's fucking gorgeous inside this apartment, but not so hot outside, I dunno how to weight that.

There are also tons of houses coming for rent. So far none of them is quite ideal, but it's encouraging to see a bunch of them on the market. Most of the best places are still trying to sell, the neighborhood is just blanketed in "for sale" signs. Upstairs neighbor stimpy dickhead has been waking me up at 7 AM recently. I desperately need to get the fuck out of here. I wouldn't mind waking up early except that it puts my commute right in the middle of rush hour, which I usually try to avoid.

In other apartment news : padmapper is a much better version of the "craigslist on google maps" than the original "housingmaps.com". On padmapper you can hit the "+" to expand the filters and get much nicer control. PadMapper desperately needs to be able to save searches though, it's semi-useless without that.

Mercer Island around Mercer Way is a lovely place to ride. Getting there is miserable. It would be kinda sweet to live there, so I could hop right out my door and ride the loop. Of course I only like riding up here maybe 4 months out of the year since I don't do cold or wet. And if I really want nice riding out of my door I should move back to SLO where the riding is sweet and the weather is fair every day.

Anyway, while I was riding I saw a cyclist getting carried to an ambulance on a stretcher. It was a scary reminder of the extreme danger of this hobby. On the plus side, I'm over my fear of speed that I've had since my last bad crash. I can now open up in fast descents and my body doesn't tighten up. I am still consciously choosing to be more careful and go slower, but it's a logical choice, not a panic reaction, so I'm happy about that. Two of my car crashes and one of my bike crashes were caused just by bad road conditions; I'm much more aware now of how likely it is to go around a corner and find a huge patch of oil or sand or something you can't control or avoid that's going to make you crash. People who speed around corners that they've never been around before are just retarded, it would be fine if it was only dangerous to yourself, but you never know when someone else is going to be there around that corner too. I consciously try to only speed around corners if I've been there before recently so I know the conditions.

It would be really sweet to be able to ride on closed roads that are all smoothly paved and free of debris, but I guess you have to be a pro to get that. If I was a billionaire I'd spend my money on shit like that. You could just rent the entire route you want to ride, country lanes in Vermont, or mountain roads in the Rockies, send a street sweeper ahead of you to make sure it's all clean, have any nasty bits repaved, and then have a beautiful ride. What would it cost, maybe $100k ? If you spend $100k every day for the rest of your life, it only reaches around $1 billion. No problem for a Gates or Buffett.

Seattle's summer days are quite wonderful. It's a shame to waste even a single one, since you only get maybe 20 total. You need to run around in the sun, have nothing in particular to do except enjoy the moments, sit on a patio and have drinks, play, breathe.

I need to have a child so that I have someone to play with. I just want to go to the park and play tag and catch. I guess I could get a dog, but dogs are gross. I've always wanted a rent-a-dog service; I'd love to have a dog at the park, I just don't want one getting hair and slobber all over my house. It's kind of horrible to have a kid just to get someone to play with, but I suppose it's not as bad as the reasons why most people have kids - to "carry on the family name" (WTF is that), to show the world how awesome you are by creating a child that's very successful (look at my son the doctor, aren't I such a great parent), or to make a little you that can do all the things you wish you'd done and have the breaks you didn't get so you can live vicariously through them.

The Vintage Seattle blog is super awesome. If you like old photos of cities (as I do) it's a treasure trove; go back through the old posts. It always blows my mind just how empty this country was 100 years ago ( for example ). The picture of the great white fleet is stunning.

Sean has put up his game Succor . It's an extremely interesting & clever concept; I won't spoil it, go download and play.

It occurred to me the other day that since I use an HTPC to watch TV now, I could plug in a gamepad controller and play some little games on my TV (like Mutant Storm). LOL yeah right. I don't want to crash or reboot my fucking TV machine, which playing games would certainly cause.

I test-drove a G37 again a few days ago. It's a good car, I don't blame anyone who gets one, but I won't. It's a tiny bit too small, I hit my head on the roof. Supposedly by the official "front headroom" measurement it's bigger than the 135, but the reality is not so; I have plenty of room in the 135 but not in G37. The other problem is it just feels impotent at low revs, and it's too big and heavy. They brag about the "smoothness" of the acceleration; I don't want to ride a wave of acceleration, I want to be punched in the gut. The car should be gentle when I'm soft on the pedal, but it should jerk me around mercilessly when I thrash it.

Anyway, it was a funny test drive. The salesman guy was a total stereotypical douchebag car salesman; he had a baggy suit on and told me the trunk fit his golf clubs just fine. He told me he has a G37 himself and got it lowered 1.5" so that he can't get over any bumps. I was like "ah, sweet", playing along trying to encourage the douchiness. He told me it looks great with bigger rims, "put some 20's on it". Yeah. So he showed me the bluetooth functionality, which granted is very good - much better than BMW - and he mentioned you can easily turn it off if you're in the car with someone and you don't want your calls to show up on the screen. I was like "oh yeah, if I'm riding with my girlfriend and my mistress calls I don't want that showing up" and he immediately said "yeah, I always turn it off, I told my girlfriend the bluetooth doesn't work in my car". OMG super lol. Nice job super douchebag car salesman, you win life.

The advantage of the G37 over the 135 is that it is more direct and mechanical; it has a real LSD, the steering feels much more connected and responsive to me, the pedal to acceleration response feels more analog and mechanical. That's all good. Also, the computer in the Infinitis is totally superior. I've seen BMWs and Audis and Mercs and in all of them the computer is the fucking suck, I would rather not have it, it's a total mess to use. The Infiniti is a touch screen for the mother fucking win. Plus the navigation is very simple and intuitive, it's got arrow keys and "enter" on the steering wheel so that you can do many things without looking, and it has voice activation and it actually works (as in, I tried it on a car I've never used before and it recognized my selections correctly on the first try with no problems). I was extremely impressed, it's by far the best car computer I've ever seen. I still would probably rather not have it, but if you really like car computers, it's the one.


06-26-09 - Guide to Health Care

God damn I keep getting screwed by providers and the insurance company. Here are some tips :

1. Do your own research, find the specific doctor you want. This can be hard to do, but you should be able to find the local specialist in your problem. I recommend someone young and up & coming because they will actually care and be up on modern techniques. The crucial thing here is the difference between a good doctor and any old doctor is like night and day. Don't be afraid to just leave a doctor after one visit if you don't think you're getting the attention you need.

Specifically - if you have an injury where you get an MRI, you should expect a doctor to actually LOOK at the MRI. The first two doctors I went to literally never looked at it. They order MRI's and make a ton of money off you and then just read the examiner's report. The MRI examiner is not an MD and not qualified to be making judgements on your condition. Furthermore, if you have something like a sports injury and the doctor does not actually look at your body - e.g. have you take your clothes off and move around and show him the dysfunction - you should walk right out of there.

2. Do your own research on how you will be billed and who will treat you. This is tricky and you cannot trust ANYONE on this matter. You will be told that the person who will treat you is "covered" by your insurance. That doesn't mean anything. They may be outside your "network" which will cause them to bill you at a very high rate. Assuming you are on an HMO/PPO plan of some kind you must find specific providers who are "preferred" or "in network". You must do your own research on this because no one else will.

Pursuant to that - it's not enough to know that a certain office is "in network" for you. You must call ahead and get the name of the exact person who will be treating you. Twice I've gone to PT offices that I was told were "in network" for me, and then the exact person I was assigned to was out of network. This can be rather tricky - when you go in to a doctor who's in network for you, he might send you into a room to get xrays, and if the xray tech is out of network, boom, you're fucked. The classic one that people get bit by is the anaesthesiologist. Often when people get surgery they find a surprise bill for $10,000 from the anaesthesiologist who's out of network for them even though the doctor/hospital/everything else is preferred.

You really have to be a huge asshole about this, you need to call ahead and get the names of everyone who is going to treat you and check on them. When you go in to the office, you need to be firm, any time someone walks in a room to treat you, you have to say "who is this" and if they're not a name you've checked you have to say no. I know this is ridiculous and often not possible in practice.

3. When you get bills, go over everything with a fine-toothed comb. Providers will bill you directly for the balance not covered by insurance. Quite often they do this wrong/illegally, either on purpose or by accident. Of course the health insurance often fails to reimburse correctly too, so check on that as well. If the doctor takes your health insurance, then they are not allowed to bill more than the negotiated rates; this amount is normally marked "not allowed" on your explanation of benefits. Often the doctors will go ahead and bill this to you one way or another; this is called "balance billing" and it's specifically illegal. It can be tricky to spot sometimes because they mix it in with allowed billing, so you have to do the math and see that all the billing amounts add up right.

When you get bills that don't look right to you, you have to call your health insurance and your provider. If it's a question of "balance billing" you just confirm it with your health insurance, and then tell your provider to fuck off. The other common problem is something is getting billed at a higher rate or not covered when you think it should be. I've had more luck calling the provider about this kind of thing, the health insurance will just tell you tough luck. The provider can be a little funny about how exactly they bill things, so if they change the billing code and resubmit they may get more coverage; they are usually happy to do this, you may need to talk to the head of the billing department or whatever.

Anyhoo, if you have some kind of Orthopedic problem, I highly recommend Dr. Chris Wahl. I haven't actually had surgery from him, so I can't comment on his surgical skills (anecdotal reports on surgical outcomes are pretty worthless anyway, we need fucking public statistical analysis of doctors' outcomes, but of course the corrupt greedy bastards at the AMA would never allow that). He's the first doctor I've ever had that actually looked at my MRIs. He's also the first doctor who even gave me a proper physical visual exam, as in I took my shirt off and he immediately said "hmm your right shoulder is dropped, looks like you had a type 2 separation". Yes! yes I did, and both my previous doctors failed to diagnose it.


06-22-09 - Redraw Dilemma

This apartment searching is really annoying me. I can't handle having "many balls in the air" ; when I put something on my todo list, I like to work at it until it's gone. God I fucking hate shit on my todo list (the fucking health care keeps reinserting itself on my todo list and it's pissing me off; they got me again today with some billing fuckup, but I digress...).

Anyway, it's reminding me of a concept I often think about. I'll call it "the redrawer's dilemma" but there must be a better/standard name for this (it smells like one of the classic "optimal stopping" problems - sequential search with a per-draw sampling cost and no recall).

The hypothetical game goes something like this :

You are given a bag with 100 numbers in it. You know the numbers are in [0,1000] but don't know how many of each number there are in the bag. You start by drawing a random number from the bag.

At each turn of play, you can either keep your current number (in which case that is your final score), or you can put your current number back in the bag and draw again, but drawing again costs you -1 that will be subtracted from your final score.

How do you play this game optimally?
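For what it's worth, if you cheat and assume the draws are uniform on [0,1000] (the game as stated doesn't grant you that - you don't know what's in the bag), the optimal play is a fixed threshold: redraw iff your current number is below t, where t = E[max(t, X)] - 1. For uniform draws that works out to t = t^2/2000 + 499, i.e. t = 1000 - sqrt(2000), about 955. Note the threshold never changes over time. A little simulation sketch (all names mine):

```python
import random

def play(threshold, rng):
    # Fixed-threshold strategy : keep redrawing (at -1 per redraw)
    # until the draw is at least `threshold`. Draws uniform on [0,1000].
    cost = 0
    x = rng.randint(0, 1000)    # the first draw is free
    while x < threshold:
        cost += 1
        x = rng.randint(0, 1000)
    return x - cost

def mean_score(threshold, trials=20000, seed=1):
    rng = random.Random(seed)
    return sum(play(threshold, rng) for _ in range(trials)) / trials
```

mean_score(955) averages around 956, while less patient thresholds like 800 or 500 average roughly 895 and 750 - and since the strategy is the same constant threshold on every turn, "it's early, keep drawing" buys you nothing.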

There are two things that are interesting to me about this game in real life. One is that humans almost always play it incredibly badly, and the second is that when you finally decide to stop redrawing you're almost always unhappy about it (unless you got super lucky and drew a 900+ number).

The two classic human player errors in this game are the "I just started drawing, I shouldn't stand yet" and the "I can't stop now, I already passed on something better than this". The "I just started drawing, I shouldn't stand yet" guy draws something like an 800 on one of his early draws. He thinks dang that's really good, but maybe this bag just has lots of high numbers in it, I just started drawing, I should put some time into it. Now of course that reasoning is based on correct logic - if you have reason to believe that your chance of drawing higher is good enough to merit the cost of continued looking, then yes, do so - but just drawing more because "it's early" makes no sense; the game is totally non-temporal, the cost of continuing to draw doesn't go up over time. This often leads into the "I can't stop now, I already passed on something better than this" guy, who's mainly motivated by pride and shame - he doesn't want to admit to himself that he made a big mistake passing early when he got a high number, so he has to keep drawing until he gets something better. He might draw an 800, then a whole mess of single digit numbers and he's thinking "oh fuck I blew it" and then he draws a 400. At that point he should stand and quit redrawing, but he can't, so he draws again.

The thing is, even if he played correctly and just took the 400 after passing on the 800, he would be really unhappy about it. And if the early termination guy played correctly and just got an early 800 and didn't draw again, he would be unhappy too, because he'd always be wondering if he could've done better.

The other game theory / logical fallacy that plagues me in these kinds of things is "I'm already spending X, I may as well spend X plus a little more". First I was looking for places around $1500, then I bumped it to $1700, then $1900. Now I'm looking at places for $2500 cuz fuck it they're nicer and I was looking at places for $2000 so it's only $500 more.

In other news, hotpads is actually a pretty cool apartment search site. It seems they are just scraping craigslist and maybe some other classifieds sites, so it's not like they have anything new, but the map interface and search features and such are solid. One thing is really annoying me about it though - the wheel zooming in the map is totally broken, I keep trying to wheel zoom and it sends the map off to never-never land. Urg!

In more random news, I've really enjoyed the "Wallander" series on PBS ; the stories are pretty retarded/ridiculous, but I like the muddled contemplative pace of it, and the washed out monochrome color palette.


06-21-09 - Blog consultation

So I'm thinking about renting this place ; it's kind of ridiculous, it's a 2 BR which I don't need, it's $1950 which is an outrageous amount to pay for rent (and they're overcharging for the current market conditions). Sticking with the negative, it's a duplex, and the bottom half is inhabited by the owner. I'm somewhat worried about that, I don't like the idea of seeing the owner all the time, it makes me feel like I'm being watched and judged which is a very unpleasant feeling. Oh, and the driveway is shared and only wide enough for one car so you have to ask each other to move to get your cars in and out, that seems awful.

On the plus side, the kitchen is huge and full of light and has a real gas stove and fume hood. The other huge plus is that it has a big private deck on the roof. Those are my dreams, just to be able to cook and sit outside in the morning while I have my coffee. And it's a good location. The back yard private deck is so much better than even the balconies on big apartment buildings, because the balconies are all right next to each other.

Oh, the other drag is that the landlord is a handyman and does the repairs himself. That's such a huge fucking disadvantage. I've had that at three apartments now and it's been a huge disaster each time. They're always super slow, if you ask them to fix something it seems to be code for "please tear up my apartment and then leave your tools in it for a month and maybe stop by for an hour every other week". Then they act so smug and pleased with themselves when they actually fix something for you. You know, I'm sure a professional would've done a better job much faster, and it's just your duty to have things fixed, so stop acting like I should praise your amazing skills and thank you over and over. In reality *you* should be thanking *me* for letting you play your handyman dressup game on my time in my home.

I also looked at this place in the Trace Lofts. It's ridiculous, it makes me angry. For one thing, like so many of the new apartments, it's basically a shoebox with a hole cut in one end. It's long and skinny and only one skinny side has windows. It's also just a big open space with no rooms, no closets. Okay, fine, it's a "loft", it's all urban and trendy and cool. But the whole fucking point of those big industrial loft conversions is that they SUCK so they're really cheap, so broke artists can afford them, which is what makes them cool. This place is expensive as balls and it's targeted at yuppies who want to play like they're urban cool artists and pay out the nose for nice fixtures but still get all the suckitude of not actually having rooms or doors or closets. So dumb. (the penthouses on top of the Trace Lofts seem amazing, the one available is in the old building part). (hell the penthouse is only $2250 , well well worth the $250 for the upgrade) (if you want to go to outrageous rents like that this looks nice too).

If I don't get the ridiculous duplex I might just rent a house. You can get whole houses now for around $1600/mo that are just slightly out of the main area here. There are several in the 19th & Prospect area for around that, which is a bit of a long walk to the happening area but still walkable, and there's even one at 16th and Harrison which is not far at all. If you rent a house, no one lives above you.

06-21-09 - Fast Exp & Log

So in an earlier post I wrote about approximation of log2 and Ryg commented with links to Robin Green's great GDC 2003 talk : part1 (pdf) and part2 (pdf) ( main page here ).

It's mostly solid, but in part 2 around page 40 he talks about "fastexp" and "bitlog" and my spidey senses got tingling. Either I don't understand, or he was just smoking crack through that section.

Let's look at "bitlog" first. Robin writes it very strangely. He writes :

A Mathematical Oddity: Bitlog
  A real mathematical oddity
  The integer log2 of a 16 bit integer
  Given an N-bit value, locate the leftmost nonzero bit.
  b = the bitwise position of this bit, where 0 = LSB.
  n = the NEXT three bits (ignoring the highest 1)

    bitlog(x) = 8x(b-1) + n

  Bitlog is exactly 8 times larger than log2(x)-1

Bitlog Example
 For example take the number 88
88 = 1011000
b = 6th bit
n = 011 = 3
bitlog(88) = 8*(6-1)+3
= 43
  (43/8)+1 = 6.375
  Log2(88) = 6.4594
  This relationship holds down to bitlog(8)

Okay, I just don't follow. He says it's "exact" but then shows an example where it's not exact. He also subtracts off 1 and then just adds it back on again. Why would you do this :

    bitlog(x) = 8x(b-1) + n

  Bitlog is exactly 8 times larger than log2(x)-1

When you could just say :

    bitlog(x) = 8xb + n

  Bitlog is exactly 8 times larger than log2(x)

??? Weird.

Furthermore this seems neither "exact" nor an "oddity". Obviously the position of the MSB is the integer part of the log2 of a number. As for the fractional part of the log2, this is not a particularly good way to get it. Basically what's happening here is he takes the next 3 bits and uses them for linear interpolation to the next integer.

Written out verbosely :

x = int to get log2 of
b = the bitwise position of top bit, where 0 = LSB.

x >= (1 << b) && x < (2 << b)

fractional part :
f = (x - (1 << b)) / (1 << b)

f >= 0 && f < 1

x = 2^b * (1 + f)

correct log2(x) = b + log2(1+f)

approximate with b + f

note that "f" and "log2(1+f)" both go from 0 to 1, so it's exact at the endpoints
but wrong in the middle

So far as I can tell, Robin's method is actually like this :

uint32 bitlog_x8(uint32 val)
{
    if ( val <= 8 )
    {
        static const uint32 c_table[9] = { (uint32)-1 , 0, 8, 13, 16, 19, 21, 22, 24 };
        return c_table[val];
    }
    else
    {
        unsigned long index;
        _BitScanReverse(&index,(unsigned long)val);
        ASSERT( index >= 3 );
        uint32 bottom = (val >> (index - 3)) & 0x7;
        uint32 blog = (index << 3) | bottom;

        return blog;
    }
}

where I've removed the weird offsets of 1 and this just returns log2 times 8. You need the check for val <= 8 because shifting by negative amounts is fucked.

But you might well ask - why only use 3 bits ? And in fact you're right, I see no reason to use only 3 bits. In fact we can do a fixed point up to 27 bits : (we need to save 5 bits at the top to store the max possible integer part of the log2)

float bitlogf(uint32 val)
{
    unsigned long index;
    _BitScanReverse(&index,(unsigned long)val);

    // note : assumes val < (1<<28) so the shift count is non-negative
    uint32 vv = (val << (27 - index)) + ((index-1) << 27);

    return vv * (1.f/134217728); // 134217728 = 2^27
}

what we've done here is find the pos of the MSB, shift val up so the MSB is at bit 27, then we add the index of the MSB (we subtract one because the MSB itself starts the counting at one in the 27th bit pos). This makes a fixed point value with 27 bits of fractional part, the bits below the MSB act as the fractional bits. We scale to return a float, but you could of course do this with any # of fixed point bits and return a fixed point int.

But of course this is exactly the same kind of thing done in an int-to-float so we could use that too :

float bitlogf2(float fval)
{
    FloatAnd32 fi;
    fi.f = fval;
    float vv = (float) (fi.i - (127 << 23));
    return vv * (1.f/8388608); // 8388608 = 2^23
}

which is a lot like what I wrote about before. The int-to-float does the exact same thing we did manually above, finding the MSB and making the log2 and fractional part.

One note - all of these versions are exact for the true powers of 2, and they err consistently low for all other values. If you want to minimize the maximum error, you should bias them.

The maximum error of ( log2( 1 + f) - f ) occurs at f = ( 1/ln(2) - 1 ) = 0.442695 ; that error is 0.08607132 , so the correct bias is half that error : 0.04303566

Backing up in Robin's talk we can now talk about "fastexp". "fastexp" is doing "e^x" by using the floating point format again, basically he's just sticking x into the exponent part to get the int-to-float to do the 2^x. To make it e^x instead of 2^x you just scale x by 1/ln(2) , and again we use the same trick as with bitlog : we can do exact integer powers of two, to get the values in between we use the fractional bits for linear interpolation. Robin's method seems sound, it is :

float fastexp(float x)
{
    int i = ftoi( x * 8.f );
    FloatAnd32 f;
    f.i = i * 1512775 + (127 << 23) - 524288;
    // 1512775 = (2^20)/ln(2)
    // 524288 = 0.5*(2^20)

    return f.f;
}

for 3 bits of fractional precision. (note that Robin says to bias with 0.7*(2^20) ; I don't know where he got that; I get minimum relative error with 0.5.)

Anyway, that's all fine, but once again we can ask - why just 3 bits? Why not use all the bits of x as fractional bits? And if we put the multiply by 1/ln(2) in the float math before we convert to ints, it would be more accurate.

What we get is :

float fastexp2(float x)
{
    // 12102203.16156f = (2^23)/ln(2)
    int i = ftoi( x * 12102203.16156f );
    FloatAnd32 f;
    f.i = i + (127 << 23) - 361007;
    // 361007 = (0.08607133/2)*(2^23)

    return f.f;
}

and indeed this is much much more accurate. (max_rel_err = 0.030280 instead of 0.153897 - about 5X better).

I guess Robin's fastexp is preferable if you already have your "x" in a fixed point format with very few fractional bits (3 bits in that particular case, but it's good for <= 8 bits). The new method is preferred if you have "x" in floating point or if "x" is in fixed point with a lot of fractional bits (>= 16).


I found the Google Book where bitlog apparently comes from; it's Math toolkit for real-time programming By Jack W. Crenshaw ; so far as I can tell this book is absolute garbage and that section is full of nonsense and crack smoking.


it's obvious that log2 is something like :

x = 2^I * (1+f)

(I is an int, f is the mantissa)

log2(x) = I + log2(1+f)

log2(1+f) ~= f + f * (1-f) * C

We've been using log2(1+f) ~= f , but we know that's exact at the ends and wrong in the middle,
so obviously we should add a term that humps in the middle.

If we solve for C we get :

C = ( log2(1+x) - x ) / ( x*(1-x) )

Integrating on [0,1] gives C = 0.346573583

hence we can obviously do a better bitlog something like :

float bitlogf3(float fval)
{
    FloatAnd32 fi;
    fi.f = fval;
    float vv = (float) (fi.i - (127<<23));
    vv *= (1.f/8388608);
    //float frac = vv - ftoi(vv);
    fi.i = (fi.i & 0x7FFFFF) | (127<<23);
    float frac = fi.f - 1.f;
    const float C = 0.346573583f;
    return vv + C * frac * (1.f - frac);
}


06-19-09 - Apartment Codewords

"Modern" = shitty faux-modernist from the 70's with aluminum windows and beige carpet and particle board kitchen cabinets.

"Amenities" = a closet off the lobby has a stairmaster in it.

"Turn of the century Classic" / "vintage" / etc = run down, peeling paint, cracked windows, original wood-burning stove, etc.

"Heat maintained for all resident's comfort" = heat not run at all ever so the landlord can make more money.

"Vibrant community" = Noisy white trash neighbors hang out right outside your window.

"Act fast / rare opportunity / don't wait!" = this has been listed for 6 months and nobody wants it, please please be the sucker who takes it.

"Proximity to downtown / near businesses" = way the fuck out in some neighborhood you've never heard of.

"Great location" = more often than not I'm finding this means it's located directly on I-5 ; "West facing" means the same thing.

"View view view" = everything about this place is so shitty we want you to focus on what's outside.


06-18-09 - Things annoying me while apartment searching

1. Craigslist is fucking awful, but it's all I've got. I can't filter in any meaningful way, hell I can't even select for just 1 bedrooms or by neighborhood in Seattle. So I have to manually poke through the listings myself (of course the search is horribly broken), and I can't mark ads I've already seen before, and people keep relisting the same property over and over.

2. Walking around my neighborhood, I see *tons* of stuff for rent that's not on the internet. It seems like every building around here has a vacancy now as the market is crashing. What the fuck am I supposed to do with that? Oh, yay, you put a "for rent" sign outside the building. How many bedrooms? what floor? what square feet? am I supposed to phone every single fucking building in the whole city !? my god.

3. Ridiculously amateurish and unprofessional people renting these things. Some people I call & email over and over and they never get back to me at all. Good job. Others put up apartment listings that are just woefully lacking in information. My god, fucking list the square footage at least.

4. Intentional lies & left out information in the ads. Of course lots of the people who leave out the square feet do it on purpose, like the places that say "spacious 1 bedroom" and then you email them and they tell you it's 550 square feet. The big thing people do here is neighborhood lies. Everything around claims to be "Capitol Hill" - no, fuckers, First Hill down in the hospitals is not Cap Hill, fucking Central District half way down Rainier is not Cap Hill, fucking Eastlake is not Cap Hill, hell I've seen places across the fucking bridge on Beacon Hill claiming to be "Cap Hill". Liars.

5. The ridiculous pretention that we're not in a huge real estate crash. All the ads are like "rare unit available - act now ! prestigious building!" uhh, hello, half the fucking city is vacant or on sale right now. You can quit with pretending that I should feel fortunate that you are being kind enough to try to rent to me.

In general I'm a little torn about whether to hurry up and get out of here (god knows I need to get out of here), vs. take my time and find a really great place or wait for the market to crash a bit more. My "boots on the ground" view of the market here is that it has completely crashed, people are moving out left and right, but the sellers/landlords have not yet come face to face with the reality, so sale prices are still high and rents are still high. Right now there's a glut of supply that's just not moving, tons of condos are sitting empty unsold. In the next 6 months or so there's going to be a price crash.

I'm finding I'm still a sucker for these charming old 1910-1920 buildings. They're just so beautiful and full of charm and character; every time I look at a brand new building it just feels like a boring box, like a hotel room, and it makes me feel claustrophobic and ill. I dunno, maybe I'm attracted to the high I get from lead and mold poisoning.

For example I noticed these new condos Lumen are going on auction. Looking at the pictures just makes me angry and sick. Some developer just slapped together a bunch of drywall and sheets of metal and calls it "modern" "sleek" "urban" and wants to sell each condo for $500k. I bet you could build those kinds of units in a few days each. One can easily understand why there are so many new condos like this popping up all over the city - if people are dumb enough to buy them, it's a HUGE windfall for the developer.

Some of the more interesting projects around here :

First Church on 15th near Group Health was going to be converted to condos. I'm sure the project is going to die now, but it's at least sort of interesting. Of course they just put super cheapo generic "modern" hotel box shit inside it, but at least you have the old church around you.

The new unfinished condos on Cal Anderson Park are going to auction. I've been looking at them for a while now, wondering why they've been sitting there for months 90% finished. Well, apparently the developer ran out of money. It's kind of an amazing location, looking right out on the park, though that could also be a bit of a negative because you have zero privacy and there are a lot of hobos and kids in that park.

Harvard and Highland is the big new project going up in the historic mansion district. The condos are huge (2000 sqft) and crazy expensive, it doesn't make any sense IMO, but the web page is cool because it has an interactive map of the neighborhood with info on all the great homes around there. It's one of the best pages I've ever seen on the local robber baron's mansions.

Unrelated but Edith Macefield's army of tattoos is cool.

There's still a ton of new construction around here that isn't done yet. All the massive amounts of shitty old buildings are going to be in trouble. Among other things, the population density in urban seattle is *massively* expanding these days, with tons of big condo projects in cap hill & especially South Lake Union. The infrastructure does not exist to handle all these people and no thought or money is being put into designing the growth of the city in a manageable constructive way.


06-17-09 - Inverse Box Sampling - Part 1.5

In the previous post we attacked the problem :

If you are given a low res signal L and a known down-sampler D() (in particular, box down sampling), find an up sampler U() such that :

L = D ( U( L ) )

and U( L ) is as close as possible to the actual high res signal that L was made from (unknown).

I'm also interested in the opposite problem :

If you are given a high res signal H, and a known up-sampler U() (in particular, bilinear filtering), find a down sampler D() such that :

E = ( H - U( D( H ) ) )^2 is minimized

This is a much more concrete and tractable problem. In particular in games/3d we know we are forced to use bilinear filtering as our up-sampler. If you use box down-sampling for D() as many people do, that's horrible, because bilinear filtering and box-downsampling are both interpolating and variance reducing. That is, they both take noisy signals and force them towards gray. If you know that U() is going to be bilinear filtering, then you should use a D() that compensates for that. It's intuitively obvious that D should be something a bit like a sinc to bring in some neighbors with negative lobes to compensate for the blurring aspect of bilinear upsample, but what exactly I don't know yet.

(note that this is a different problem than making mips - in making mips you are actually going to be viewing the mip at a 1:1 resolution, it will not be upsampled back to the original resolution; you would use this if you were trying to substitute a lower res texture for a higher one).

I haven't tried my hand at solving this yet, maybe it's been done? Much like the previous problem, I'm surprised this isn't something well known and standard, but I haven't found anything on it.

06-17-09 - DXTC More Followup

I finally came back to DXTC and implemented some of the new slightly different techniques. ( summary of my old posts )

See the : NVidia Article or NVTextureTools Wiki for details.

Briefly :

DXT1 = my DXT1 encoder with annealing. (version reported here is newer and has some more small improvements; the RMSE's are slightly better than last time). DXT1 is 4 bits per pixel (bpp)

Humus BC4BC5 = Convert to YCoCg, Put Y in a single-channel BC4 texture (BC4 = the alpha part of DXT5, it's 4 bpp). Put the CoCg in a two-channel BC5 texture - downsampled by 2X. BC5 is two BC4's stuck together; BC5 is 8 bpp, but since it's downsampled 2x, this is 2bpp per original pixel. The net is a 6 bpp format

DXT5 YCoCg = the method described by JMP and Ignacio. This is 8 bpp. I use arbitrary CoCg scale factors, not the limited ones as in the previously published work.

Here are the results in RMSE : (modified 6-19 with new better results for Humus from improved down filter)

name DXT1 Humus DXT5 YCoCg
kodim01.bmp 8.2669 3.9576 3.8355
kodim02.bmp 5.2826 2.7356 2.643
kodim03.bmp 4.644 2.3953 2.2021
kodim04.bmp 5.3889 2.5619 2.4477
kodim05.bmp 9.5739 4.6823 4.5595
kodim06.bmp 7.1053 3.4543 3.2344
kodim07.bmp 5.6257 2.6839 2.6484
kodim08.bmp 10.2165 5.0581 4.8709
kodim09.bmp 5.2142 2.519 2.4175
kodim10.bmp 5.1547 2.5453 2.3435
kodim11.bmp 6.615 3.1246 2.9944
kodim12.bmp 4.7184 2.2811 2.1411
kodim13.bmp 10.8009 5.2525 5.0037
kodim14.bmp 8.2739 3.9859 3.7621
kodim15.bmp 5.5388 2.8415 2.5636
kodim16.bmp 5.0153 2.3028 2.2064
kodim17.bmp 5.4883 2.7981 2.5511
kodim18.bmp 7.9809 4.0273 3.8166
kodim19.bmp 6.5602 3.2919 3.204
kodim20.bmp 5.3534 3.0838 2.6225
kodim21.bmp 7.0691 3.5069 3.2856
kodim22.bmp 6.3877 3.5222 3.0243
kodim23.bmp 4.8559 3.045 2.4027
kodim24.bmp 8.4261 5.046 3.8599
clegg.bmp 14.6539 23.5412 10.4535
FRYMIRE.bmp 6.0933 20.0976 5.806
LENA.bmp 7.0177 5.5442 4.5596
MONARCH.bmp 6.5516 3.2012 3.4715
PEPPERS.bmp 5.8596 4.4064 3.4824
SAIL.bmp 8.3467 3.7514 3.731
SERRANO.bmp 5.944 17.4141 3.9181
TULIPS.bmp 7.602 3.6793 4.119
lena512ggg.bmp 4.8137 2.0857 2.0857
lena512pink.bmp 4.5607 2.6387 2.3724
lena512pink0g.bmp 3.7297 3.8534 3.1756
linear_ramp1.BMP 1.3488 0.8626 1.1199
linear_ramp2.BMP 1.2843 0.7767 1.0679
orange_purple.BMP 2.8841 3.7019 1.9428
pink_green.BMP 3.1817 1.504 2.7461

And here are the results in SSIM :

Note this is an "RGB SSIM" computed by doing :

SSIM_RGB = ( SSIM_R * SSIM_G ^2 * SSIM_B ) ^ (1/4)

That is, G gets 2X the weight of R & B. The SSIM is computed at a scale of 6x6 blocks which I just randomly picked out of my ass.

I also convert the SSIM to a "percent similar". The number you see below is a percent - 100% means perfect, 0% means completely unrelated to the original (eg. random noise gets 0%). This percent is :

SSIM_Percent_Similar = 100.0 * ( 1 - acos( ssim ) * 2 / PI )

I do this because the normal "ssim" is like a dot product, and showing dot products is not a good linear way to show how different things are (this is the same reason I show RMSE instead of PSNR like other silly people). In particular, when two signals are very similar, the "ssim" gets very close to 0.9999 very quickly even though the differences are still pretty big. Almost any time you want to see how close two vectors are using a dot product, you should do an acos() and compare the angle.

name DXT1 Humus DXT5 YCoCg
kodim01.bmp 84.0851 92.6253 92.7779
kodim02.bmp 82.2029 91.7239 90.5396
kodim03.bmp 85.2678 92.9042 93.2512
kodim04.bmp 83.4914 92.5714 92.784
kodim05.bmp 83.6075 92.2779 92.4083
kodim06.bmp 85.0608 92.6674 93.2357
kodim07.bmp 85.3704 93.2551 93.5276
kodim08.bmp 84.5827 92.4303 92.7742
kodim09.bmp 84.7279 92.9912 93.5035
kodim10.bmp 84.6513 92.81 93.3999
kodim11.bmp 84.0329 92.5248 92.9252
kodim12.bmp 84.8558 92.8272 93.4733
kodim13.bmp 83.6149 92.2689 92.505
kodim14.bmp 82.6441 92.1501 92.1635
kodim15.bmp 83.693 92.0028 92.8509
kodim16.bmp 85.1286 93.162 93.6118
kodim17.bmp 85.1786 93.1788 93.623
kodim18.bmp 82.9817 92.1141 92.1309
kodim19.bmp 84.4756 92.7702 93.0441
kodim20.bmp 87.0549 90.5253 93.2088
kodim21.bmp 84.2549 92.2236 92.8971
kodim22.bmp 82.6497 91.0302 91.9512
kodim23.bmp 84.2834 92.4417 92.4611
kodim24.bmp 84.6571 92.3704 93.2055
clegg.bmp 77.4964 70.1533 83.8049
FRYMIRE.bmp 91.3294 72.2527 87.6232
LENA.bmp 77.1556 80.7912 85.2508
MONARCH.bmp 83.9282 92.5106 91.6676
PEPPERS.bmp 81.6011 88.7887 89.0931
SAIL.bmp 83.2359 92.4974 92.4144
SERRANO.bmp 89.095 75.7559 90.7327
TULIPS.bmp 81.5535 90.8302 89.6292
lena512ggg.bmp 86.6836 95.0063 95.0063
lena512pink.bmp 86.3701 92.1843 92.9524
lena512pink0g.bmp 89.9995 79.9461 84.3601
linear_ramp1.BMP 92.1629 94.9231 93.5861
linear_ramp2.BMP 92.8338 96.1397 94.335
orange_purple.BMP 89.0707 91.6372 92.1934
pink_green.BMP 87.4589 93.5702 88.4219

Conclusion :

DXT5 YCoCg and "Humus" are both significantly better than DXT1.

Note that DXT5-YCoCg and "Humus" encode the luma in exactly the same way. For gray images like "lena512ggg.bmp" you can see they produce identical results. The only difference is how the chroma is encoded - either a DXT1 block (+scale) at 4 bpp, or a downsampled 2X BC4 block at 2 bpp.

In RGB RMSE , DXT5-YCoCg is measurably better than Humus-BC4BC5 , but in SSIM they are nearly identical. This is because almost all of the RMSE loss in Humus comes from the YCoCg lossy color conversion and the CoCg downsampling. The actual BC4BC5 compression is very near lossless. (as much as I hate DXT1, I really like BC4 - it's very easy to produce near optimal output, unlike DXT1 where you have to run a really fancy compressor to get good output). The CoCg loss hurts RMSE a lot, but doesn't hurt actual visual quality or SSIM much in most cases.

In fact on an important class of images, Humus actually does a lot better than DXT5-YCoCg. That class is simple smooth ramp images, which we use very often in the form of lightmaps. The test images at the bottom of the table (linear_ramp and pink_green) show this.

On a few images where the CoCg downsample kills you, Humus does very badly. It's bad on orange_purple because that image is specifically designed to be primarily in Chroma not Luma ; same for lena512pink0g.bmp ; note that normal chroma downsampling compressors like JPEG have this same problem. You could in theory choose a different color space for these images and use a different reconstruction shader.

Since Humus is only 6 bpp, size is certainly not a reason to prefer DXT1 over it. However, it does require two texture fetches in the shader, which is a pretty big hit. (BTW the other nice thing about Humus is that it's already down-sampled in CoCg, so if you are using something like a custom JPEG in YCoCg space with downsampled CoCg - you can just directly transcode that into Humus BC4BC5, and there's no scaling up or down or color space changes in the realtime recompress). I think this is probably what will be in Oodle because I really can't get behind any other realtime recompress.

I also tried something else, which is DXT1 optimized for SSIM. The idea is to use a little bit of neighbor information. The thing is, in my crazy DXT1 encoder, I'm just trying various end points and measuring the quality of each choice. The normal thing to do is to just take the MSE vs the original, but of course you could do other error metrics.

One such error metric is to decompress the block you're working on into its context - decompress into a chunk of neighbors that have already been DXT1 compressed & decompressed as well. Then compare that block and its neighbors to the original image in that neighborhood. In my case I used 2 pixels around the block I was working on, making a total region of 8x8 pixels (with the 4x4 DXT1 block in the middle).

You then compare the 8x8 block to the original image and try to optimize that. If you just used MSE in this comparison, it would be the same as before, but you can use other things. For example, you could add a term that penalizes not changes in values, but changes in *slope*.

Another approach would be to take the DCT of the 8x8 block and the DCT of the 8x8 original. If you then just take the L2 difference in DCT domain, that's no different than the original method, because the DCT is unitary. But you can apply non-uniform quantizers at this step using the JPEG visual quantization weights.

The approach I used was to use SSIM (using a 4x4 SSIM block) on the 8x8 windows. This means you are checking the error not just on your block, but on how your block fits into the neighborhood.

For example if the original image is all flat color - you want the output to be all flat color. Just using MSE won't give you that, eg. MSE considers 4444 -> 3535 to be just as good as 4444 -> 5555 , but we know the latter is better.

This does in fact produce slightly better looking images - it hurts RMSE of course because you're no longer optimizing for RMSE.

06-17-09 - Inverse Box Sampling - Part 2

Okay, in Part 1.5 I asked about the downsample that was the best inverse of bilinear upsampling. I have a solution that pleases me.

Sean reminded me that he tackled this before; I dunno if he has any notes about it on the public net, he can link them. His basic idea was to do a full solve for the entire down-sampled image. It's quite simple if you think about it. Consider the case of 2X up & down sampling. The bilinear filter upsample will make a high res image where each pixel is a simple linear combo of 4 low res. You take the L2 error :

E = Sum[all high res pixel] ( Original - Upsampled ) ^2

For Sean's full solution approach, you set Upsampled = Bilinear_Upsample( X) , and just solve this for X without any assumption of how X is made from Original. For an N-pixel low res image you have 4N error terms, so it's plenty dense (you could also artificially regularize it more by starting with a low res image that's equal to the box down-sample, and then solve for the deltas from that, and add an extra "Tikhonov" regularization term that presumes small deltas - this would fix any degenerate cases).

I didn't do that. Instead I assumed that I want a discrete local linear filter and solved for what it should be.

A discrete local linear filter is just a bunch of coefficients. It must be symmetric, and it must sum to 1.0 to be mean-preserving (flat source should reproduce flat exactly). Hence it has the form {C2,C1,C0,C0,C1,C2} with C0+C1+C2 = 1/2. (this example has two free coefficients). Obviously the 1-wide case must be {0.5,0.5} , then you have {C1,0.5-C1,0.5-C1,C1} etc. as many taps as you want. You apply it horizontally and then vertically. (in general you could consider asymmetric filters, but I assume H & V use the same coefficients).

A 1d application of the down-filter is like :

L_n = Sum[k] { C_k * [ H_(2*n-k) + H_(2*n+1+k) ] }

That is : Low pixel n = filter coefficients times High res samples centered at (2*n + 0.5) going out both directions.

Then the bilinear upsample is :

U_(2n) = (3/4) * L_n + (1/4) * L_(n-1)

U_(2n+1) = (3/4) * L_n + (1/4) * L_(n+1)

Again we just make a squared error term like the above :

E = Sum[n] ( H_n - U_n ) ^2

Substitute the form of L_n into U_n and expand so you just have a matrix equation in terms of H_n and C_k. Then do a solve for the C_k. You can do a least-squares solve here, or you can just directly solve it because there are generally few C's (the matrix is # of C's by # of pixels).

Here's how the error varies with number of free coefficients (zero free coefficients means a pure box downsample) :

r:\>bmputil mse lenag.256.bmp bilinear_down_up_0.bmp  rmse : 15.5437 psnr : 24.3339

r:\>bmputil mse lenag.256.bmp bilinear_down_up_1.bmp  rmse : 13.5138 psnr : 25.5494

r:\>bmputil mse lenag.256.bmp bilinear_down_up_2.bmp  rmse : 13.2124 psnr : 25.7454

r:\>bmputil mse lenag.256.bmp bilinear_down_up_3.bmp  rmse : 13.0839 psnr : 25.8302
You can see there's a big jump from 0 to 1 but then only gradually increasing quality after that (though it does keep getting better as it should).

Two or three free terms (which means a 6 or 8 tap filter) seems like the ideal width to me - wider than that and you're getting very nonlocal which means ringing and overfitting. Optimized on all my test images the best coefficients I get are :

// 8 taps :

static double c_downCoef[4] = { 1.31076, 0.02601875, -0.4001217, 0.06334295 };

// 6 taps :

static double c_downCoef[3] = { 1.25 , 0.125, - 0.375 };

(the 6-tap one was obviously so close to those perfect fractions that I just manually rounded it; I assume that if I solved this analytically that's what I would get. The 8-tap one is not so obvious to me what it would be).

Now, how do these static ones compare to doing the lsqr fit to make coefficients per image ? They're 99% of the benefit. For example :

// solve :
lena.512.bmp : doing solve exact on 3 x 524288
{ 1.342242526 , -0.028240414 , -0.456030369 , 0.142028257 }  // rmse : 10.042138

// static fit :
lena.512.bmp :  // rmse : 10.116388


// static fit :
clegg.bmp :  // rgb rmse : 50.168 , gray rmse : 40.506

// solve :
fitting : clegg.bmp : doing lsqr on 3 x 1432640 , c_lsqr_damping = 0.010000
{ 1.321164423 , 0.002458499 , -0.381711250 , 0.058088329 }  // rgb rmse : 50.128 , gray rmse : 40.472

So it seems to me this is in fact a very simple and high quality way to down-sample to make the best reproduction after bilinear upsampling.

I'm not even gonna touch the issue of the [0,255] range clamping or the fact that your low res image should actually be considered discrete, not continuous.

ADDENDUM : it just occurred to me that you might do the bilinear 2X upsampling using offset-taps instead of centered taps. That is, centered taps reconstruct like :

+---+    +-+-+
|   |    | | |
|   | -> +-+-+
|   |    | | |
+---+    +-+-+

That is, the area of four high res pixels lies directly on one low res pixel. Offset taps do :

+---+     | |
|   |    -+-+-
|   | ->  | |
|   |    -+-+-
+---+     | |

that is, the center of a low res pixel corresponds directly to a high res pixel.

With centered taps, the bilinear upsample weights in 1d are always (3/4,1/4) then (1/4,3/4) , (so in 2d they are 9/16, etc.)
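A quick 1d sketch of the centered-tap upsample (edge handling by clamping is my choice):

```python
# centered-tap 2x bilinear upsample in 1d : each output mixes the nearest
# low res pixel at 3/4 with the next nearest at 1/4 (clamped at the edges)
def upsample_centered(lo):
    n = len(lo)
    hi = []
    for i in range(n):
        hi.append(0.75 * lo[i] + 0.25 * lo[max(i - 1, 0)])
        hi.append(0.75 * lo[i] + 0.25 * lo[min(i + 1, n - 1)])
    return hi
```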

With offset taps, the weights in 1d are (1) (1/2,1/2) (1) etc... that is, one pixel is just copied and the tweeners are averages.
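And the offset-tap version in 1d is just copy-and-average (a sketch; it only produces the interior samples, so the convention at the last edge is up to you):

```python
# offset-tap 2x bilinear upsample in 1d : even outputs copy a low res pixel,
# odd outputs are the (1/2,1/2) average of the two neighbors
def upsample_offset(lo):
    hi = []
    for i, v in enumerate(lo):
        hi.append(v)
        if i + 1 < len(lo):
            hi.append(0.5 * (v + lo[i + 1]))
    return hi
```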

Offset taps have the advantage that they aren't so severely variance decreasing. Offset taps should use a single-center down-filter of the form :

{C2,C1,C0,C1,C2}

(instead of {C2,C1,C0,C0,C1,C2} ).

My tests show single-center/offset up/down is usually slightly worse than symmetric/centered , and occasionally much better. On natural/smooth images (such as the entire Kodak set) it's slightly worse. Picking one at random :

symmetric :
kodim05.bmp : { 1.259980122 , 0.100375561 , -0.378468204 , 0.018112521 }   // rmse : 25.526521

offset :
kodim05.bmp : { 0.693510045 , 0.605009745 , -0.214854612 , -0.083665178 }  // rgb rmse : 26.034 

that pattern holds for all. However, on weird images it can be better, for example :

symmetric :
c:\src\testproj>Release\TestProj.exe t:\test_images\color\bragzone\clegg.bmp f
{ 1.321164423 , 0.002458499 , -0.381711250 , 0.058088329 }  // rgb rmse : 50.128 , gray rmse : 40.472

offset :
c:\src\testproj>Release\TestProj.exe t:\test_images\color\bragzone\clegg.bmp f
{ 0.705825115 , 0.561705835 , -0.267530949 }  // rgb rmse : 45.185 , gray rmse : 36.300

so ideally you would choose the best of the two. If you're decompressing in a pixel shader you need another parameter for whether to offset your sampling UV's by 0.5 of a pixel or not.

ADDENDUM : I got Humus working with a KLT color transform. You just do the matrix transform in the shader after fetching "YUV" (not really YUV any more). It helps on the bad cases, but still doesn't make it competitive. It's better just to go with DXT1 or DXT5-YCoCg in those cases. For example :

On a pure red & blue texture :

Humus YCoCg :

rmse : 11.4551 , psnr : 26.9848
ssim : 0.9529 , perc : 80.3841%

Humus KLT with forced Y = grey :

KLT : Singular values : 56.405628,92.022781,33.752548
 KLT : 0.577350,0.577350,0.577350
 KLT : -0.707352,0.000491,0.706861
 KLT : 0.407823,-0.816496,0.408673

rmse : 11.4021 , psnr : 27.0251
ssim : 0.9508 , perc : 79.9545%

Humus KLT  :

KLT : Singular values : 93.250313,63.979282,0.230347
 KLT : -0.550579,0.078413,0.831092
 KLT : -0.834783,-0.051675,-0.548149
 KLT : -0.000035,-0.995581,0.093909

rmse : 5.6564 , psnr : 33.1140
ssim : 0.9796 , perc : 87.1232%

(note the near perfect zero in the last singular value, as it should be)

DXT1 :

rmse : 3.0974 , psnr : 38.3450
ssim : 0.9866 , perc : 89.5777%

DXT5-YCoCg :

rmse : 2.8367 , psnr : 39.1084
ssim : 0.9828 , perc : 88.1917%

So, obviously a big help, but not enough to be competitive. Humus also craps out pretty bad on some images that have single pixel checkerboard patterns. (again, any downsampling format, such as JPEG, will fail on the same cases). Not really worth it to mess with the KLT, better just to support one of the other formats as a fallback.
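For reference, computing the KLT basis itself is just an eigendecomposition of the color covariance; a quick sketch of my own (not Humus's actual code):

```python
import numpy as np

# KLT color basis : eigenvectors of the RGB covariance, largest variance
# first. the rows of the returned matrix are the new "YUV" axes that the
# shader's matrix transform would use.
def klt_basis(pixels):                     # pixels : (N,3) float array of RGB
    centered = pixels - pixels.mean(axis=0)
    cov = centered.T @ centered / len(pixels)
    evals, evecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    order = np.argsort(evals)[::-1]
    svals = np.sqrt(np.maximum(evals[order], 0.0))
    return svals, evecs[:, order].T
```

On the pure red & blue texture above the color data is low rank, so the last singular value comes out near zero, which is exactly the effect noted in the numbers above.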

One thing I'm not sure about is just how bad the two texture fetches are these days.


06-16-09 - Inverse Box Sampling

A while ago I posed this problem to the world :

Say you are given the box-downsampled version of a signal (I may use "image" and "signal" interchangeably cuz I'm sloppy). Box-downsampled means groups of N values in the original have been replaced by the average in that group and then downsampled N:1. You wish to find an image which is the same resolution as the source and if box-downsampled by N, exactly reproduces the low resolution signal you were given. This high resolution image you produce should be "smooth" and close to the expected original signal.

Examples of this are say if you're given a low mip and you wish to create a higher mip such that downsampling again would exactly reproduce the low mip you were given. The particular case I mainly care about is if you are given the DC coefficients of a JPEG, which are the averages on 8x8 blocks, you wish to produce a high res image which has the exact same average on 8x8 blocks.

Obviously this is an under-constrained problem (for N > 1) because I haven't clearly spelled out "smooth" etc. There are an infinity of signals that when downsampled produce the same low resolution version. Ideally I'd like to have a way to upsample with a parameter for smoothness vs. ringing that I could play with. (if you're nitty, I can constrain the problem precisely : The correlation of the output image and the original source image should be maximized over the space of all real world source images (eg. for example over the space of all images that exist on the internet)).

Anyway, after trying a whole bunch of heuristic approaches which all failed (though Sean's iterative approach is actually pretty good), I found the mathemagical solution, and I thought it was interesting, so here we go.
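Before the math, one building block that any iterative heuristic needs (this is my own generic illustration, not necessarily Sean's approach) is the projection back onto the constraint: after each smoothing pass, shift every block by a constant so its average matches the given low res value exactly again. In 1d:

```python
# project a candidate high res signal back onto the constraint that each
# N-wide block averages to the given low res value L[b]
def impose_block_means(img, L, N):
    for b, target in enumerate(L):
        block = img[b * N : (b + 1) * N]
        delta = target - sum(block) / N
        img[b * N : (b + 1) * N] = [v + delta for v in block]
    return img
```

Alternate that with any smoothing step you like and the result always box-downsamples exactly to the given low res signal.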

First of all, let's get clear on what "box downsample" means in a form we can use in math.

You have an original signal f(t) . We're going to pretend it's continuous because it's easier.

To make the "box downsample" what you do is apply a convolution with a rectangle that's N wide. Since I'm treating t as continuous I'll just choose coordinates where N = 1. That is, "high res" pixels are 1/N apart in t, and "low res" pixels are 1 apart.

Convolution { f , g } (t) = Integral{ ds * f(s) * g(t - s) }

The convolution with rect gives you a smoothed signal, but it's still continuous. To get the samples of the low res image, you multiply this by "comb". comb is a sum of dirac delta functions at all the integer coordinates.

F(t) = Convolve{ rect , f(t) }

low res = comb * F(t)

low res = Sum[n] L_n * delta_n

Okay ? We now have a series of low res coefficients L_n just at the integers.

This is what is given to us in our problem. We wish to try to guess what "f" was - the original high res signal. Well, now that we've written it this way, it's obvious ! We just have to undo the comb filtering and undo the convolution with rect !

First to undo the comb filter - we know the answer to that. We are given discrete samples L_n and we wish to reproduce the smooth signal F that they came from. That's just Shannon sampling theorem reconstruction. The smooth reconstruction is made by just multiplying each sample by a sinc :

F(t) = Sum[n] L_n * sinc( t - n )

This is using the "normalized sinc" definition : sinc(x) = sin(pi x) / (pi x).

sinc(x) is 1.0 at x = 0 and 0.0 at all other integer x's and it oscillates around a lot.
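In code the Shannon reconstruction is straightforward (a throwaway sketch, truncated to finitely many samples):

```python
import math

def sinc(x):   # normalized sinc : sin(pi x) / (pi x)
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

# Shannon reconstruction of the smooth signal from discrete samples L_0..L_{n-1}
def F(t, L):
    return sum(Ln * sinc(t - n) for n, Ln in enumerate(L))
```

At integer t all the sincs but one vanish, so F(n, L) gives back L[n] exactly.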

So this F(t) is our reconstruction of the rect-filtered original - not the original. We need to undo the rect filter. To do that we rely on the Convolution Theorem : Convolution in Fourier domain is just multiplication. That is :

Fou{ Convolution { f , g } } = Fou{ f } * Fou{ g }

So in our case :

Fou{ F } = Fou{ Convolution { f , rect } } = Fou{ f } * Fou{ rect }

Fou{ f } = Fou{ F } / Fou{ rect }

Recall F(t) = Sum[n] L_n * sinc( t - n ) , so :

Fou{ f } = Sum[n] L_n * Fou{ sinc( t - n ) } / Fou{ rect }

Now we need some Fourier transform knowledge. The easiest way for me to find this stuff is just to do the integrals myself. Integrals are really fun and easy. I won't copy them here because it sucks in ASCII so I'll leave it as an exercise to the reader. You can easily figure out the Fourier translation principle :

Fou{ sinc( t - n ) } = e^(-2 pi i n v) * Fou{ sinc( t ) }

As well as the Fourier sinc / rect symmetry :

Fou{ rect(t) } = sinc( v )

Fou{ sinc(t) } = rect( v )

All that means for us :

Fou{ f } = Sum[n] L_n * e^(-2 pi i n v) * rect(v) / sinc(v)

So we have the Fourier transform of our signal and all that's left is to do the inverse transform !

f(t) = Sum[n] L_n * Fou^-1{ e^(-2 pi i n v) * rect(v) / sinc(v) }

because of course constants pull out of the integral. Again you can easily prove a Fourier translation principle : the e^(-2 pi i n v) term just acts to translate t by n, so we have :

f(t) = Sum[n] L_n * h(t - n)

h(t) = Fou^-1{ rect(v) / sinc(v) }

First of all, let's stop and see what we have here. h(t) is a function centered on zero and symmetric around zero - it's a reconstruction shape. Our final output signal, f(t), is just the original low res coefficients multiplied by this h(t) shape translated to each integer point n. That should make a lot of sense.

What is h exactly? Well, again we just go ahead and do the Fourier integral. The thing is, "rect" just acts to truncate the infinite range of the integral down to [-1/2, 1/2] , so :

h(t) = Integral[-1/2,1/2] { dv e^(2 pi i t v) / sinc(v) }

Since sinc is symmetric around zero, let's take the two halves of the range around zero and add them together :

h(t) = Integral[0,1/2] { dv ( e^(2 pi i t v) + e^(- 2 pi i t v) ) / sinc(v) }

h(t) = Integral[0,1/2] { dv 2 * cos ( 2 pi t v ) * pi * v / sin( pi v) }

(note we lost the c - sinc is now sin). Let's change variables to w = pi v :

h(t) = (2 / pi ) * Integral[ 0 , pi/2 ] { dw * w * cos( 2 t w ) / sin( w ) }

And.. we're stuck. This is an integral function; it's a pretty neat form, it sure smells like some kind of Bessel function or something like that, but I can't find this exact form in my math books. (if anyone knows what this is, help me out). (actually I think it's a type of elliptic integral).

One thing we can do with h(t) is prove that it is in fact exactly what we want. It has the box-unit property :

Integral[ N - 1/2 , N + 1/2 ] { h(t) dt } = 1.0 if N = 0 and 0.0 for all other integer N

That is, the 1.0 wide window box filter of h(t) centered on integers is exactly 1.0 on its own unit interval, and 0 on others. In other words, h(t) reconstructs its own DC perfectly and doesn't affect any others. (prove this by just going ahead and doing the integral; you should get sin( N * pi ) / (N * pi ) ).

While I can't find a way to simplify h(t) , I can just numerically integrate it. It looks like this :

[plot of h(t)]

You can see it sort of looks like sinc, but it isn't. The value at 0 is > 1. The height of the central peak vs. the side peaks is more extreme than sinc, the first negative lobes are deeper than sinc. It actually reminds me of the appearance of a wavelet.

Actually the value h(0) is exactly 4 G / pi = 1.166243... , where "G" is Catalan's constant.
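Both of these claims are easy to sanity-check by numerically integrating the h(t) form above (a throwaway midpoint-rule sketch; the step counts are arbitrary):

```python
import math

# h(t) = (2/pi) * Integral[0,pi/2] { dw * w * cos(2 t w) / sin(w) }
# the midpoint rule dodges the w = 0 endpoint, where w/sin(w) -> 1 anyway
def h(t, steps=2000):
    dw = (math.pi / 2) / steps
    total = sum((i + 0.5) * dw * math.cos(2 * t * (i + 0.5) * dw)
                / math.sin((i + 0.5) * dw) for i in range(steps))
    return (2 / math.pi) * total * dw

# box-unit property : Integral[N-1/2, N+1/2] { h(t) dt }
def box_integral(N, steps=400):
    return sum(h(N - 0.5 + (i + 0.5) / steps) for i in range(steps)) / steps

G = 0.9159655941772190   # Catalan's constant
```

h(0) comes out 1.166243... = 4*G/pi, box_integral(0) comes out 1, and box_integral(1) comes out 0, as claimed.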

Anyway, this is all very amusing and it actually "works" in the sense that if you blow up a low-res image using this h(t) basis shape, it does in fact make a high res image that is smooth and upon box-down sampling exactly reproduces the low-res original.

It is, however, not actually useful. For one thing, it's computationally ridiculous. Of course you would precompute the h(t) and store it in a table, but even then, the reach of h(t) is infinite, and it doesn't get small until very large t (beyond the edges of any real image), so in practice every output pixel must be a weighted sum of every single DC value in the low res image. Even without that problem, it's useless because it's just too ringy on real data. Looking at the shape above it should be obvious it will ring like crazy.

I believe these problems basically go back to the issue of using the ideal Shannon reconstruction when I did the step of "undoing the comb". By using the sinc to reproduce I doomed myself to non-local effect and ringing. The next obvious question is - can you do something other than sinc there? Why yes you can, though you have to be careful.

Say we go back to the very beginning and make this reconstruction :

F(t) = Sum[n] L_n * B( t - n )

We're making F(t) which is our reconstruction of the smooth box-filter of the original. Now B(t) is some reconstruction basis function (before we used sinc). In order to be a reconstruction, B(t) must be 1.0 at t = 0, and 0.0 at all other integer t. Okay.

If we run through the math with general B, we get :

again :

f(t) = Sum[n] L_n * h(t - n)

but with :

h(t) = Fou^-1{ Fou{ B } / sinc(v) }

For example :

If B(t) = "triangle" , then F(t) is just the linear interpolation of the L_n

Fou{ triangle } = sinc^2 ( v)

h(t) = Fou^-1{ sinc^2 ( v) / sinc(v) } = Fou^-1{ sinc } = rect(t)

Our basis functions are rects ! In fact this is the reconstruction where the L_n is just made a constant over each DC domain. In fact if you think about it that should be obvious. If you take the L_n and make them constant on each domain, then you run a rectangle convolution over that - as you slide the rectangle window along, you get linear interpolation, which is our F(t).
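The Fou{ triangle } = sinc^2 pair used here is also easy to verify numerically (another throwaway midpoint-rule sketch):

```python
import math, cmath

# numerical Fourier transform : Integral{ dt f(t) e^(-2 pi i v t) }
def fourier(f, v, lo=-1.0, hi=1.0, steps=4000):
    dt = (hi - lo) / steps
    return sum(f(lo + (i + 0.5) * dt)
               * cmath.exp(-2j * math.pi * v * (lo + (i + 0.5) * dt))
               for i in range(steps)) * dt

tri  = lambda t: max(0.0, 1.0 - abs(t))   # triangle, support [-1,1]
sinc = lambda x: 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)
```

fourier(tri, v).real matches sinc(v)**2 to numerical precision for any v you care to try.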

That's not useful, but maybe some other B(t) is. In particular I think the best line of approach is for B(t) to be some kind of windowed sinc. Perhaps a Gaussian-windowed sinc. Any real world window I can think of leads to a Fourier transform of B(t) that's too complex to do analytically, which means our only approach to finding h is to do a double-numerical-integration which is rather a disastrous thing to do, even for precomputing a table.

So I guess that's the next step, though I think this whole approach is a practical dead end and is now just a scientific curiosity. I must say it was a lot of fun to actually bust out pencil and paper and do some math and real thinking. I really miss it.


06-15-09 - Blog Roll

It's time now for me to give a shout out to all the b-boys in the werld.

Adventures of a hungry girl
Beautiful Pixels
Birth of a Game
bouliiii's blog
Capitol Hill Triangle
cbloom rants
Cessu's blog
Culinary Fool
David Lebovitz
Diary of a Graphics Programmer
Diary Of An x264 Developer
Eat All About It
Game Rendering
garfield minus garfield
Graphic Rants
Graphics Runner
Graphics Size Coding
Gustavo Duarte
His Notes
I Get Your Fail
Ignacio Castaño
Industrial Arithmetic
John Ratcliff's Code Suppository
Lair Of The Multimedia Guru
Larry Osterman's WebLog
level of detail
Lightning Engine
Lost in the Triangles
Mark's Blog
Married To The Sea
More Seattle Stuff
My Green Paste, Inc.
not a beautiful or unique snowflake
NVIDIA Developer News
Pete Shirley's Graphics Blog
Pixels, Too Many..
Real-Time Rendering
realtimecollisiondetection.net - the blog
Ryan Ellis
Seattle Daily Photo
Some Assembly Required
stinkin' thinkin'
Stumbling Toward 'Awesomeness'
surly gourmand
Sutter's Mill
Thatcher's rants and musings
The Atom Project
The Big Picture
The Data Compression News Blog
The Ladybug Letter
The software rendering world
TomF's Tech Blog
Visual C++ Team Blog
Void Star: Ares Fall
What your mother never told you about graphics development
Wright Eats
Bartosz Milewski's Programming Cafe

autogen from the Google Reader xml output. I would post the code right here but HTML EATS MY FUCKING LESS THAN SIGNS and it's pissing me off. God damn you.

SAVED : Thanks Wouter for linking to htmlescape.net ; I might write a program to automatically do that to anything inside a PRE chunk when I upload the block.

int main(int argc,char *argv[])
{
    char * in = ReadWholeFile(argv[1]);
    while( in && *in )
    {
        in = skipwhitespace(in);
        if ( stripresame(in,"<outline") ) // the literal '<'s were eaten by the HTML; tags here are reconstructed guesses
        {
            // ... parse the url & title attributes out of the line, then :
            printf("<a href=\"%s\"> %s </a>\n",url,title);
        }
        in = strnextline(in);
    }
    return 0;
}


06-14-09 - Noise Torture

I'm literally surrounded by crazy annoying loud people on all sides. To the west is the young couple who just had a baby that cries constantly; I've been around many babies and never heard one cry and cry like this; plus their Russian mom has now moved in with them and is always barking out orders to kill dissenting journalists in high pitched Russian.

To the north (fortunately across the street) is a party house full of frat boys who are constantly screaming "wooo" about something or other. Whoah bro sportscenter is on! Open the windows and scream "wooo" !! They literally set their house on fire the other day, fire trucks came and sprayed it, some fire dudes hacked open a wall to get at some stray embers inside. The next night after the fire they had another big party.

To the south is some amateur indie/punk band (emphasis strongly on the "amateur"). Fortunately they are a few houses away, but the pounding of the drums travels far. Sadly for me, they seem to be very diligent about practicing the same song over and over and over. Sadly for them it doesn't seem to be helping.

To the east are the fucking white trash who are sitting outside drinking bud light and talking five feet away from my window. Blurg. Houses around here are way the fuck too close together. It might be worse than the traditional row house like you have in SF or back east; with the row house you have a solid wall between you and the neighbor. Here we have open space and windows, but only like five feet of distance.

And of course there's "stompy" the crack head upstairs neighbor who seems to pass the time by moving his furniture from one end of the building to the other.

I guess it's kind of a noisy neighborhood, but I couldn't tell that when I moved in, because there are plenty of very nice single family homes around, with yuppie parents and kids and high property values.

There should really be segregation. There should be "ghettos" for the people who want to be noisy and have parties and whatnot - fine, that's cool, just go live in the noisy ghetto with other people of your kind. Alternatively, people should have to sign up in advance for certain weekends when they're allowed to be noisy so I can just go out of town those weekends.

I gather that many places in Europe have the tradition of a local pub on each block, and the people who live there just go congregate in the pub and make their noise there, so it's not in anyone's house. If you're making a ruckus, you go down to the pub. That seems like a good system.

Anyway, I write this now because by some divine conspiracy, all my neighbors went out of town this weekend. Hippie smoker jabberers next door - gone! Upstairs stompy - gone! No band practices, and no huge block parties. It was sublime, I was free, at peace, I could sit and read or work (I did lots of work), cook, listen to music, and I felt alone and happy. My god. Sometimes I get into these funks in life where I just think that everything fucking sucks and everyone is a huge dick, and then it's like the clouds clear - you see a moment where in fact, things do not suck, and it's like a revelation - whoah this misery is not how it has to be.

06-14-09 - Biking - Fuck you Seattle

God dammit, there's no good biking around here. The bike lanes are meager, and half of the so-called "bike routes" run you right down busy streets with hardly any space (I'm looking at you, Howell-Stewart junction). The roads are in a shameful state, full of pot holes and ruts that are literally rattling the bolts loose on my bike (and banging them is a huge hazard and hurts my joints like a motherfuck). The Lake Washington loop is tolerable, but there are tons of bits with ridiculously bad pavement or narrow/ no shoulder, as well as plenty of major hazards, lots of commuting traffic, and bad routing.

My original right shoulder injury in 2006 was a separation that became frozen and is now still bugging me in the form of a winging scapula and arthritic AC joint. That crash was caused by a pot hole in San Francisco. Fucking pot holes.

Also, the fucking mini traffic circles they've tossed around cap hill and 28th are fucking retarded. They don't function as real traffic circles because they're too small; a real traffic circle works because being "in the circle" is a separate state. The big problem with them is that cars have to swing really wide to get around them, and the road isn't wide, so cars swing right into the path of pedestrians, and cut right into bicycles. It's fucking awful. Actually I hate all the "Yield" streets around here too since half of them are at blind intersections and lots of dumb fucks come barrelling through them at high speed. All of these residential intersections should have full 4-way stops and painted crosswalks. Hell, more cities just need streets that are ped/bike only. For example Pike between Broadway and 12th should just be closed to cars. It would be fantastic for local businesses.

Kirkland's got this lovely pool right by my work, so I go to check when the lap swim hours are ... none. I mean, they do have lap swim from 5:30-7 am, but that may as well not exist. Even if I wanted to get up that early, it's fucking cold and gross that early, I want to swim in the afternoon sun you fucks. WTF you have this fucking great pool and you just can't open it? Presumably this is the same problem as the fucking roads, that there's no damn taxes and the governments are fucking dumb. It's such stupid cost saving though; you've spent all the money to make this pool, you clean it and pump it, and then you only have it open 6 hours a day.

Anyway, Colman park just south of I90 is really cool. Not the part down on the lake, that's okay, but it's obvious, what with its views of Rainier and whatnot. The cool part is up Lake Washington Blvd S toward 31st Ave. You get the best effect if you park down at the bottom and walk up the hill - it has cool winding paths and stairs and bridges, and then at the top there's this huge public vegetable+flower garden that's like a hidden garden surprise for the hardy souls that made it up the hill.

Biking is so fucking great. I went and did the Mercer Island loop this weekend; it's pretty nice once you're out there, though the bike path is damn annoying and riding over I90 is scary and not fun. I have vertigo and the high view down to water with just a railing next to me is nausea-inducing. (I only did the ride over the Golden Gate Bridge once; I nearly had a heart attack; after that I always drove my bike across the bridge and parked and then rode north).

I passed two separate Mercer Island residents who seemed to intentionally stand right in my way. They were just standing in the road in the bike lane, cars were coming so it's not like I had a ton of room, and they made no movement out of the way at all. Rich people are fucking cocks.

Some douchebag cyclist dropped a passive-aggressive bummer-bomb on me. He was riding ahead of me in the bike lane, I move to the left and pass him. As I'm passing he says "I'm on your right". Huh? Yeah you are. I came up behind you and saw you. Oh, I get it. That's a fucking dickweed way of saying you expected me to say "on your left" when I passed you. Fuck you. It took me a little while to figure out what an acrid little asshole he was and by that time I was well past (because in addition to being a passive aggressive holier-than-thou dick he was also fat and slow); if I'd realized it sooner I would've yelled something back at him. How dare you fucking bring that negative shit into my world when I'm out on my ride having my one fucking moment of pure joy and pleasure? Fuck you, I know the fucking rules of courtesy, I say "on your left" if I think there's any danger or if it's a tight spot, but I don't say it every damn time I pass every person, and that's an unreasonable expectation, and even if you do think I should you can fucking keep it to yourself.

Some random dude also drafted me for a mile or so. That's not cool. You don't just jump on the ass of someone you don't know without saying anything. To draft correctly you have to be mere inches from the person in front of you. It's great for efficiency, but it's also very dangerous if you aren't communicating, because if the lead person brakes, you have an instant crash. Sometimes I'll latch onto someone's wheel when they pass me, but I hang back far enough to be safe, or I say hey can we draft a while? This dude just put his nose in my butt and stuck there. Mild scowl.

I got this book : Bicycling the Backroads around Puget Sound at the library. It sucks & basically proves that the biking here blows. There's one or two good rides in it (one of them being the Mercer Island Loop). Then it's full of rides that are just bullshit. It's got a bunch of rides that go down highways that are totally not suitable for biking (like the 203 and the 169) ; it also literally has a ride that goes on I90. WTF. Oh and then it's got rides that go off road. Umm, hello, this is the road biking book, you can put the unpaved road rides in another book, thank you. There are a few rides that look interesting, but they're well the hell far away, like Enumclaw or Granite Falls kind of far away. What the hell I guess I'll go try one soon cuz I don't have anything else to do.

The NYT Travel this week featured biking around Provence . That's like my dream; that article is pretty worthless and the writer is not a real biker, but doing some real country touring around Europe, in the sun of Provence, Tuscany, Catalonia, seeing all the countryside at the pace of a bike (the bike is the perfect speed for seeing country; walking is too slow and driving is too fast), eating and drinking. I don't want to ride the fucking Tour de France routes, that's way too hard and not fun.

old rants