# cbloom rants

## 3/31/2011

### 03-31-11 - Some image filter notes

Say you have a filter like F = [1,2,3,2,1] . The normal thing to do is compute the sum and divide so you have pre-normalized values and you just do a bunch of madd's. eg. you make N = [1/9,2/9,3/9,2/9,1/9].

Now there's the question of how you handle the boundaries of the image. The normal thing to do is to take the pre-normalized filter N and apply all over, and when one of the taps samples off edge, you have to give it something to sample. You can use various edge modes, such as :

```
// returns a sample for tap index i in an image of width w
SampleBounded( int i, int w, EdgeMode mode )
{
    switch ( mode )
    {
    case eClamp :
        return Sample( Clamp(i,0,w-1) );

    case eWrap :
        return Sample( ((i % w) + w) % w );

    case eMirror : // no duplicated edge pixel
        if ( i < 0  ) return SampleBounded( -i, w, mode );
        if ( i >= w ) return SampleBounded( 2*w - 2 - i, w, mode );
        return Sample( i );

    case eMirrorDup : // duplicated edge pixel
        if ( i < 0  ) return SampleBounded( -i - 1, w, mode );
        if ( i >= w ) return SampleBounded( 2*w - 1 - i, w, mode );
        return Sample( i );
    }
}

```
(the correct edge mode depends on the usage of the image, which is one of those little annoying gotchas in games; eg. the mips you should make for tiling textures are not the same as the mips for non-tiling textures). (another reasonable option not implemented here is "extrapolate" , but you have to be a bit careful about how you measure the slope at the edge of the image domain)

The reason we do all this is because we don't want to have to accumulate the sum of filter weights and divide by the weight.

But really, in most cases what you should be doing is applying the filter only where its domain overlaps the image domain. Then you sum the weights in the area that is valid and renormalize. eg. if our filter F hangs two pixels off the edge, we just apply [3,2,1] / 6 , we don't clamp the sampler and put an extra [1,2] on the first pixel.
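
Here's a little sketch of that in Python (1d for simplicity; the helper names are mine) :

```python
def filter_renormalized(src, F):
    # apply an odd-length centered filter F to the 1d image src ;
    # near the edges, use only the taps that land in-bounds and
    # renormalize by the sum of the weights actually used
    r = len(F) // 2  # filter radius
    out = []
    for x in range(len(src)):
        acc = 0.0
        wsum = 0.0
        for t, w in enumerate(F):
            i = x + t - r
            if 0 <= i < len(src):
                acc += w * src[i]
                wsum += w
        out.append(acc / wsum)
    return out
```

eg. with F = [1,2,3,2,1] the first pixel only sees the [3,2,1] taps, so it gets divided by 6 instead of 9. A nice sanity check is that a constant image stays exactly constant, without having to invent any off-edge samples.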

ADDENDUM : in video games there's another special case that needs to be handled carefully : a non-tiling texture which you wish to abut seamlessly to another texture. That is, you have two textures T1 and T2 that are different and you wish to line them up beside each other without a seam.

I call this mode "shared" ; it sort of acts like "clamp" but has to be handled specially in filtering. Let's say T1 and T2 are laid against each other horizontally, so they abut along a column. What the artist should do is make the pixels in that border column identical in both textures (or you could have your program enforce this). Then, the UV mapping on the adjacent rectangles should be inset by half a pixel - that is, it picks the center of the pixels, not the edge of the texture. Thus the duplicated pixel edge only appears to be a single column of pixels.

But that's not the special case handling - the special case is whenever you filter a "shared" image, you must make border column pixels only from other border column pixels. That is, that shared edge can only be vertically filtered, not horizontally filtered. That way it stays identical in both images.

Note that this is not ideal with mipping, what happens is the shared edge gets fatter at higher mip levels - but it never develops a seam, so it is "seamless" in that sense. To do it right without any artifacts (eg. to look as if it was one solid bigger texture) you would have to know what image is on the other side of the shared edge and be able to filter tap into those pixels. Obviously that is impossible if your goal is a set of terrain tiles or something like that where you use the same shared edge in multiple different ways.

(is there a better solution to this issue?)

I did a little look into the difference between resizing an image 8X by either doubling thrice or directly resizing. I was sanity checking my filters and I thought - hey if I use a Gaussian filter, it should be the same thing, because convolution of a Gaussian with a Gaussian is a Gaussian, right?

In the continuous case, you could either use one big Gaussian directly, or build it up from smaller ones : convolving Gaussians gives a Gaussian, with the variances adding. (eg. three sdev-2 Gaussians convolved together make a Gaussian of sdev 2*sqrt(3) - sdevs add in quadrature, not linearly).
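
That variances add under convolution is easy to check numerically. A quick sketch (plain Python, densely sampled to approximate the continuous case; all names here are mine) :

```python
import math

def gauss(sdev, halfwidth, step):
    # dense samples of a Gaussian, normalized to sum to 1
    n = int(halfwidth / step)
    g = [math.exp(-0.5 * (i * step / sdev) ** 2) for i in range(-n, n + 1)]
    s = sum(g)
    return [v / s for v in g]

def convolve(a, b):
    # direct discrete convolution
    out = [0.0] * (len(a) + len(b) - 1)
    for i, av in enumerate(a):
        for j, bv in enumerate(b):
            out[i + j] += av * bv
    return out

def measured_sdev(f, step):
    # sdev of a normalized filter, treating taps as point masses
    c = (len(f) - 1) / 2  # center index
    var = sum(w * ((i - c) * step) ** 2 for i, w in enumerate(f))
    return math.sqrt(var)

# three sdev-2 Gaussians convolved together : sdev should be 2*sqrt(3) ~= 3.46
g2 = gauss(2.0, 10.0, 0.05)
g = convolve(convolve(g2, g2), g2)
```

At coarse pixel-rate sampling this identity starts to break down, which is exactly the effect in the experiment below.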

So I tried it on my filters and I got :

```
Gaussian for doubling, thrice :

1.0000,0.9724,0.8822,0.7697,0.5059,0.3841,0.2635,0.1964,0.1009,0.0607,0.0281,0.0155,0.0067,0.0034,0.0012,0.0004,...

Gaussian for direct 8x :

1.0000,0.9439,0.8294,0.6641,0.4762,0.3057,0.1784,0.0966,0.0492,0.0235,0.0103,0.0041,0.0014,0.0004,...

```
and I was like yo, WTF they're way off, I must have a bug. (note : these are scaled to make the max value 1.0 rather than normalizing because it's easier to compare this way, they look more unequal after normalizing)

But then I realized - these are not really proper Gaussians. These are discrete samples of Gaussians. If you like, it's a Gaussian multiplied by a comb. It's not even a Gaussian convolved with a box filter - that is, we are not applying the gaussian over the range of the pixel as if the pixel was a box, but rather just sampling the continuous function at one point on the pixel. Obviously the continuous convolution theorem that Gauss [conv] Gauss = Gauss doesn't apply.

As for the difference between doing a direct 8X and doubling thrice, I can't see a quality difference with my eyes. Certainly the filters are different numerically - particularly filters with negatives, eg. :

```
sinc double once :
1.0000,0.6420,0.1984,-0.0626,-0.0974,-0.0348,0.0085,0.0120,
sinc double twice :
1.0000,0.9041,0.7323,0.5193,0.3042,0.1213,-0.0083,-0.0790,-0.0988,-0.0844,-0.0542,-0.0233,-0.0007,0.0110,0.0135,0.0107,0.0062,0.0025,0.0004,-0.0004,-0.0004,
sinc double thrice :
1.0000,0.9755,0.9279,0.8596,0.7743,0.6763,0.5704,0.4617,0.3549,0.2542,0.1633,0.0848,0.0203,-0.0293,-0.0645,-0.0861,-0.0960,-0.0962,-0.0891,-0.0769,-0.0619,-0.0459,-0.0306,-0.0169,-0.0057,0.0029,0.0087,0.0120,0.0133,0.0129,0.0116,0.0096,0.0073,0.0052,0.0033,0.0019,0.0008,0.0001,-0.0003,-0.0004,-0.0004,-0.0004,-0.0002,

sinc direct 8x :
1.0000,0.9553,0.8701,0.7519,0.6111,0.4595,0.3090,0.1706,0.0528,-0.0386,-0.1010,-0.1352,-0.1443,-0.1335,-0.1090,-0.0773,-0.0440,-0.0138,0.0102,0.0265,0.0349,0.0365,0.0328,0.0259,0.0177,0.0097,0.0029,-0.0019,-0.0048,-0.0059,

```
very different, but visually meh? I don't see much.

The other thing I constantly forget about is "filter inversion". What I mean is, if you're trying to sample between two different grids using some filter, you can either apply the filter to the source points or the dest points, and you get the same results.

More concretely, you have filter shape F(t) and some pixels at regular locations P[i].

You create a continuous function f(t) = Sum_i P[i] * F(i-t) ; so we have placed a filter shape at each pixel center, and we are sampling them all at some position t.

But you can look at the same thing a different way - f(t) = Sum_i F(t-i) * P[i] ; we have a filter shape at position t, and then we are sampling it at each position i around it.

So, if you are resampling from one size to another, you can either do :

1. For each source pixel, multiply by filter shape (centered at source) and add shape into dest, or :

2. For each dest pixel, multiply filter shape (centered at dest) by source pixels and put sum into dest.

And the answer is the same. (and usually the 2nd is much more efficient than the first)
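
eg. in the same-size filtering case the two loops look like this (a sketch; F is any filter, r its center tap index) :

```python
def filter_scatter(P, F, r):
    # 1. for each source pixel, add its weighted copy of the filter shape into dest
    out = [0.0] * len(P)
    for i, p in enumerate(P):
        for t, w in enumerate(F):
            j = i + t - r
            if 0 <= j < len(out):
                out[j] += p * w
    return out

def filter_gather(P, F, r):
    # 2. for each dest pixel, sum the source pixels under the filter shape
    out = [0.0] * len(P)
    for j in range(len(P)):
        for t, w in enumerate(F):
            i = j - (t - r)
            if 0 <= i < len(P):
                out[j] += w * P[i]
    return out
```

Same terms, just visited in a different order, so the results agree (up to float summation order).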

And for your convenience, here are some doubling filters :

```
box        : const float c_filter[1] = { 1.00000 };
linear     : const float c_filter[2] = { 0.25000, 0.75000 };
quadratic  : const float c_filter[3] = { 0.28125, 0.68750, 0.03125 };
cubic      : const float c_filter[4] = { 0.00260, 0.31510, 0.61198, 0.07031 };
mitchell0  : const float c_filter[4] = { -0.02344, 0.22656, 0.86719, -0.07031 };
mitchell1  : const float c_filter[4] = { -0.01476, 0.25608, 0.78212, -0.02344 };
mitchell2  : const float c_filter[4] = { 0.01563, 0.35938, 0.48438, 0.14063 };
gauss      : const float c_filter[5] = { 0.00020, 0.20596, 0.78008, 0.01375, 0.00000 };
sqrtgauss  : const float c_filter[5] = { 0.00346, 0.28646, 0.65805, 0.05199, 0.00004 };
sinc       : const float c_filter[6] = { 0.00052, -0.02847, 0.23221, 0.87557, -0.08648, 0.00665 };
lanczos4   : const float c_filter[4] = { -0.01773, 0.23300, 0.86861, -0.08388 };
lanczos5   : const float c_filter[5] = { -0.04769, 0.25964, 0.89257, -0.11554, 0.01102 };
lanczos6   : const float c_filter[6] = { 0.00738, -0.06800, 0.27101, 0.89277, -0.13327, 0.03011 };

```
These are actually pairs of filters to create adjacent pixels in a double-resolution output. The second filter of each pair is simply the above but in reverse order (so the partner for linear is 0.75, 0.25).

To use these, you scan it over the source image and apply centered at each pixel. This produces all the odd pixels in the output. Then you take the filter and reverse the order of the coefficients and scan it again, this produces all the even pixels in the output (you may have to switch even/odd, I forget which is which).
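
The scan looks something like this in code (a sketch; the even/odd phase convention here is my guess, as noted, and edge taps are clamped) :

```python
def double_1d(P, F):
    # 2x upsample a row P using a doubling filter pair :
    # F makes one output phase, reversed F (at the mirrored anchor) makes the other
    n = len(F)
    r = n // 2        # anchor for the forward filter
    rr = n - 1 - r    # mirrored anchor for the reversed filter
    Fr = F[::-1]
    clamp = lambda k: min(max(k, 0), len(P) - 1)
    out = []
    for i in range(len(P)):
        out.append(sum(w * P[clamp(i + t - r)] for t, w in enumerate(F)))
        out.append(sum(w * P[clamp(i + t - rr)] for t, w in enumerate(Fr)))
    return out
```

With the linear pair {0.25, 0.75} on a ramp this reproduces the ramp sampled at i -/+ 0.25, which is a good sanity check for the phases.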

These are created by taking the continuous filter function and sampling at 1/4 offset locations - eg. if 0 is the center (maximum) of the filter, you sample at -0.75,0.25,1.25, etc.
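
For example, the cubic row above appears to be the cubic B-spline sampled this way (assuming that's the filter used; the match to 5 decimals suggests it is) :

```python
def bspline3(x):
    # cubic B-spline, support [-2,2]
    x = abs(x)
    if x < 1.0:
        return 2.0 / 3.0 - x * x + 0.5 * x * x * x
    if x < 2.0:
        return (2.0 - x) ** 3 / 6.0
    return 0.0

def doubling_filter(F, width):
    # sample the continuous filter F at the 1/4-offset tap locations and normalize
    taps = [F(0.25 - width // 2 + i) for i in range(width)]
    s = sum(taps)
    return [t / s for t in taps]
```

doubling_filter(bspline3, 4) gives {0.00260, 0.31510, 0.61198, 0.07031}, matching the cubic entry above, and the linear tent gives the {0.25, 0.75} pair.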

And here's the same thing with a 1.15 X blur built in :

```
box        : const float c_filter[1] = { 1.0 };
linear     : const float c_filter[2] = { 0.30769, 0.69231 };
quadratic  : const float c_filter[3] = { 0.00000, 0.33838, 0.66162 };
cubic      : const float c_filter[5] = { 0.01586, 0.33055, 0.54323, 0.11034, 0.00001 };
mitchell0  : const float c_filter[5] = { -0.05174, 0.30589, 0.77806, -0.03143, -0.00078 };
mitchell1  : const float c_filter[5] = { -0.02925, 0.31410, 0.69995, 0.01573, -0.00052 };
mitchell2  : const float c_filter[5] = { 0.04981, 0.34294, 0.42528, 0.18156, 0.00041 };
gauss      : const float c_filter[6] = { 0.00000, 0.00149, 0.25842, 0.70629, 0.03379, 0.00002 };
sqrtgauss  : const float c_filter[6] = { 0.00000, 0.01193, 0.31334, 0.58679, 0.08726, 0.00067 };
sinc       : const float c_filter[7] = { 0.00453, -0.05966, 0.31064, 0.78681, -0.03970, -0.00277, 0.00015 };
lanczos4   : const float c_filter[5] = { -0.05129, 0.31112, 0.78006, -0.03946, -0.00042 };
lanczos5   : const float c_filter[6] = { 0.00499, -0.09023, 0.33911, 0.80082, -0.04970, -0.00499 };
lanczos6   : const float c_filter[7] = { 0.02600, -0.11420, 0.34931, 0.79912, -0.05497, -0.00837, 0.00312 };

```
The best doubling filters to my eyes are sinc and lanczos5; they have a good blend of sharpness and lack of artifacts. Stuff like gauss and cubic are too blurry, but are very smooth ; lanczos6 is sharper but has more ringing and stair-steps; wider lanczos filters get worse in that way. Sinc and lanczos5 without any blur built in can have a little bit of visible stair-steppiness (there's an inherent tradeoff in linear upsampling between sharpness and stair-steps) (by stair steps I mean the ability to see the original pixel blobs).

## 3/24/2011

### 03-24-11 - Image filters and Gradients

A friend recently pointed me at John Costella's supposedly superior edge detector. It's a little bit tricky to figure out what's going on there because his writing is quite obtuse, so I thought I'd record it for posterity.

You may recognize Costella's name as the guy who made Unblock which is a rather interesting and outside-the-norm deblocker. He doesn't have an image science background, and in the case of Unblock that led him to some ideas that normal research didn't find. Did he do it again with his edge detector?

Well, no.

First of all, the edge detector is based on what he calls the magic kernel. If you look at that page, something is clearly amiss.

The discrete 1d "magic kernel" for upsampling is [1,3,3,1] (unnormalized). Let's back up a second: we wish to upsample an image without offsetting it. That is, we replace one pixel with four and they cover the same area :

```
+---+     +-+-+
|   |     | | |
|   |  -> +-+-+
|   |     | | |
+---+     +-+-+

```
A 1d box upsample would be convolution with [1,1] , where the output discrete taps are half the distance apart of the original taps, and offset by 1/4.

The [1331] filter means you take each original pixel A and add the four values A*[1331] into the output. Or if you prefer, each output pixel is made from (3*A + 1*B)/4 , where A is the original pixel closer to the output and B is the one farther :

```
+---+---+
| A | B |
+---+---+

+-+-+-+-+
| |P| | |
+-+-+-+-+

P = (3*A + 1*B)/4

```
but clever readers will already recognize that this is just a bilinear filter. The center of P is 1/4 of an original pixel distance to A, and 3/4 of a pixel distance to B, so the 3,1 taps are just a linear filter.

So the "magic kernel" is just bilinear upsampling.

Costella shows that Lanczos and Bicubic create nasty grid artifacts. This is not true, he simply has a bug in his upsamplers.

The easiest way to write your filters correctly is using only box operations and odd symmetric filters. Let me talk about this for a moment.

In all cases I'm talking about discrete symmetric filters. Filters can be of odd width, in which case they have a single center tap, eg. [ a,b,c,b,a ] , or even width, in which case the center tap is duplicated : [a,b,c,c,b,a].

Any even filter can be made from an odd filter by convolution with the box , [1,1]. (However, it should be noted that an even "Sinc" is not made by taking an odd "Sinc" and convolving with box, it changes the function).
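
eg. convolving the odd linear filter [1,2,1] with the box [1,1] gives the even [1,3,3,1] - the "magic kernel" :

```python
def convolve(a, b):
    # direct discrete convolution of two tap lists
    out = [0] * (len(a) + len(b) - 1)
    for i, av in enumerate(a):
        for j, bv in enumerate(b):
            out[i + j] += av * bv
    return out

# convolve([1, 2, 1], [1, 1]) -> [1, 3, 3, 1]
```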

That means all your library needs is odd filters and box resamplers. Odd filters can be done "in place", that is from an image to an image of the same size. Box upsample means replicate a pixel with four identical ones, and box downsample means take four pixels and replace them with their average.

To downsample you just do : odd filter, then box downsample.
To upsample you just do : box upsample, then odd filter.

For example, the "magic kernel" (aka bilinear filter) can be done using an odd filter of [1,2,1]. You just box upsample then convolve with 121, and that's equivalent to upsampling with 1331.
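
That equivalence is easy to check in code (1d, edges clamped; a sketch with my own names) :

```python
def box_upsample(P):
    # replicate each pixel
    out = []
    for p in P:
        out += [p, p]
    return out

def filter_121(u):
    # in-place odd filter [1,2,1]/4, clamping at the ends
    n = len(u)
    return [(u[max(i - 1, 0)] + 2 * u[i] + u[min(i + 1, n - 1)]) / 4.0 for i in range(n)]

def upsample_1331(P):
    # direct "magic kernel" 2x upsample : each output is (3*A + B)/4
    n = len(P)
    out = []
    for i in range(n):
        out.append((3 * P[i] + P[max(i - 1, 0)]) / 4.0)
        out.append((3 * P[i] + P[min(i + 1, n - 1)]) / 4.0)
    return out
```

box_upsample then filter_121 produces exactly the same output as upsample_1331.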

Here are some odd filters that work for reference :

```
Box      : 1.0
Linear   : 0.25,0.50,0.25
Cubic    : 0.058,0.128,0.199,0.231,0.199,0.128,0.058
Gaussian : 0.008,0.036,0.110,0.213,0.267,0.213,0.110,0.036,0.008
Mitchell1: -0.008,-0.011,0.019,0.115,0.237,0.296,0.237,0.115,0.019,-0.011,-0.008
Sinc     : -0.003,-0.013,0.000,0.094,0.253,0.337,0.253,0.094,0.000,-0.013,-0.003
Lanczos4 : -0.008,0.000,0.095,0.249,0.327,0.249,0.095,0.000,-0.008
Lanczos5 : -0.005,-0.022,0.000,0.108,0.256,0.327,0.256,0.108,0.000,-0.022,-0.005

```
Okay, so now let's get back to edge detection. First of all let's clarify something : edge detectors and gradients are not the same thing. Gradients are slopes in the image; eg. big planar ramps may have large gradients. "Edges" are difficult things to define, and different applications may have different ideas of what should constitute an "edge". Sobel kernels and such are *gradient* operators, not edge detectors. The goal of the gradient operator is reasonably well defined, in the sense that if our image is a height map, the gradient should be the slope of the terrain. So henceforth we are talking about gradients, not edges.

The basic centered difference operator is [-1,0,1] and gives you a gradient at the middle of the filter. The "naive difference" (Costella's terminology) is [-1,1] and gives you a gradient half way between the original pixels.

First of all note that if you take the naive difference at two adjacent pels, you get two gradients at half pel locations; if you want the gradient at the integer pixel location between them you would combine the taps - [-1,1,0] and [0,-1,1] - the sum is just [-1,0,1] , the central difference.

Costella basically proposes using some kind of upsampler and the naive difference. Note that the naive difference operator and the upsampler are both just linear filters. That means you can do them in either order, since convolution commutes, A*B = B*A, and it also means you could just make a single filter that does both.

In particular, if you do "magic upsampler" (bilinear upsampler) , naive difference, and then box downsample the taps that lie within an original pixel, what you get is :

```
-1  0  1
-6  0  6
-1  0  1

```
A sort of Sobel-like gradient operator (but a bad one). (this comes from 1331 and the 3's are in the same original pixel).
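
The 1d version of the composition is easy to verify : bilinear-upsample with 1331, take the naive difference of the two taps inside each original pixel, and you get exactly the central difference (with a 1/4 scale, since the naive difference is over the half-pel grid). A sketch :

```python
def upsample_1331(P):
    # bilinear ("magic kernel") 2x upsample, edges clamped
    n = len(P)
    out = []
    for i in range(n):
        out.append((3 * P[i] + P[max(i - 1, 0)]) / 4.0)
        out.append((3 * P[i] + P[min(i + 1, n - 1)]) / 4.0)
    return out

def grad_via_upsample(P):
    # naive difference of the two upsampled taps inside each original pixel
    u = upsample_1331(P)
    return [u[2 * i + 1] - u[2 * i] for i in range(len(P))]

def central_difference_quarter(P):
    # [-1,0,1]/4 , edges clamped
    n = len(P)
    return [(P[min(i + 1, n - 1)] - P[max(i - 1, 0)]) / 4.0 for i in range(n)]
```

The two gradient functions agree exactly, including at the clamped edges.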

So upsampling and naive difference is really just another form of linear filter. But of course anybody who's serious about gradient detection knows this already. You don't just use the Sobel operator. For example in the ancient/classic Canny paper, they use a Gaussian filter with the Sobel operator.

One approach to making edge detection operators is to use a Gaussian Derivative, and then find the discrete approximation in a 3x3 or 5x5 window (the Scharr operator is pretty close to the Gaussian Derivative in a 3x3 window, though Kroon finds a slightly better one). Of course even Gaussian Derivatives are not necessarily "optimal" in terms of getting the direction and magnitude of the gradient right, and various people (Kroon, Scharr, etc.) have worked out better filters in recent papers.

Costella does point out something that may not be obvious, so we should appreciate that :

Gradients at the original res of the image do suffer from aliasing. For example, if your original image is [..,0,1,0,1,0,1,..] , where's the gradient? Well, there are gradients between each pair of pixels, but if you only look at original image pixel locations you can't place a gradient anywhere. That is, convolution with [-1,0,1] gives you zero everywhere.

However, to address this we don't need any "magic". We can just double the resolution of our image using whatever filter we want, and then apply any normal gradient detector at the higher resolution. If we did that on the [0,1,0,1] example we would get gradients at all the half taps.
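
A sketch of that on the [0,1,0,1] example (using the interpolating grid convention - original taps kept, linear half-way taps added - which is the convenient one for this check) :

```python
def central_diff(P):
    # unnormalized [-1,0,1] gradient at interior pixels
    return [P[i + 1] - P[i - 1] for i in range(1, len(P) - 1)]

def upsample_linear(P):
    # 2x upsample keeping original taps and adding linear half-way taps
    out = []
    for i in range(len(P) - 1):
        out += [P[i], (P[i] + P[i + 1]) / 2.0]
    out.append(P[-1])
    return out

checker = [0, 1, 0, 1, 0, 1, 0]
grads_orig = central_diff(checker)                    # all zero : aliased away
grads_2x = central_diff(upsample_linear(checker))     # nonzero at the half taps
```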

Now, finally, I should point out that "edge detection" is a whole other can of worms than gradient operators, since you want to do things like suppress noise, connect lines, look for human perceptual effects in edges, etc. There are tons and tons of papers on these topics and if you really care about visual edge detection you should go read them. A good start is to use a bilateral or median filter before the sharpen operator (the bilateral filter suppresses speckle noise and joins up dotted edges), and then sharpen should be some kind of laplacian of gaussian approximation.

### 03-24-11 - Some Car videos I like

Most car racing is just excruciatingly boring. In-car time attack videos in crazy fast cars, blah blah, boring. Some exceptions :

YouTube - On Board with Patrick Long at Lime Rock 2010
It's cool to actually hear the driver talk about what he's thinking. All throughout racing there's tons of crazy stuff going on, but you can't appreciate it from watching cuz you don't know all the subtle things the drivers are thinking about; it's actually very strategic, one move sets up the next. It's so much more interesting with the voice over.

It's pretty nuts the way the Nurburgring races run cars of all kinds of different speeds. You've got crazy race cars running with just slightly modified road cars, and that leads to lots of passing and action, way better than something like F1 where everyone is the same speed and you can never pass :
YouTube - Corvette Z06 GT3 vs. Porsche Cayman @ Nürburgring Nordschleife VLN
YouTube - BMW Z4 M Coupe vs Porsche 997 RSR @ Nurburgring 24 Hr Race

Then you've got people who take their race car out during normal Nurburgring lapping days : (the Alzen 996 Turbo runs 6:58 on the ring, one of the fastest times ever)
YouTube - Porsche Team Jurgen Alzen Motorsport

The other way you get exciting races is in amateur races where you have some very fast cars, and some cars that are woefully bad and spinning out - especially when the fast cars fail to qualify and have to start at the back of the pack and move forward :
YouTube - Scott Goodyear On Board - 1988 Rothmans Porsche Turbo Cup Series Mont Tremblant Race
NASA GTS Putnam Park, May 15, 2010 on Vimeo

(BTW the Porsche Carrera Cup races are some of the most boring races I've ever seen; all the cars are identical so they can never pass, and the drivers are all rich amateur boneheads who can hardly work their auto-blipping sequential shifter)

The RUF Yellowbird might be the "lairiest" car ever. Take an old Porsche with no traction, lighten it and stick a giant turbo in it. It did an 8:05 at the ring with the tail sliding the entire time. Completely insane.
YouTube - Ruf Yellowbird DRIFT in Nurburgring
YouTube - 930 Ruf CTR Yellowbird on nurburgring
YouTube - Insane driving in PorscheRUF Yellowbird - Nurburgring Hotlap

I like this video as contrast, it shows how hard it is to drift in the new 911 (even the GT3, which is much easier to drift than the base models). What you have to do is come in to a corner very hot, over 60 mph, brake very hard and very late, continue braking as you turn in ("trail brake"), this should get the weight loaded up on the front wheels and make the rear end light, now you heel and toe into 1st, wait for the nose to get turned in a bit, then on power hard out of the corner. It's much easier going downhill too.
YouTube - Porsche 911 (997) GT3 drifting

Some other random car shite :

Weight distributions :

```
Porsche 997 C2 : 38% front , 62% rear
Porsche 997 C4 : 40% front , 60% rear
Porsche Cayman : 45% front , 55% rear
Lotus Elise    : 38% front , 62% rear
Lotus Evora    : 39% front , 61% rear

```
I was surprised how rear-biased the Loti are. So the best handling cars in the world (Loti) have a 911-like rear weight bias. Granted, the weight at the wheels is not the whole story, the location of the engine lump matters a lot for dynamic weight transfer purposes, but still. Weight under acceleration and lateral forces under cornering would tell you a more complete story.

(people often say a car has a 50/50 weight distribution and thus it's "perfectly balanced" - but that's only true when it's not moving; under acceleration it gets more weight in the rear, and under braking it gets more in the front; I think the best cars are slightly rear-biased, like the Cayman, because under braking they become only slightly front-biased; front-biased cars get dangerously light in the rear under hard braking)

It's also interesting to compare tire sizes :

```
Porsche 997 C2 : 235 front , 295 rear
Porsche Cayman : 235 front , 265 rear
Lotus Evora    : 225 front , 255 rear
Lotus Elise 1  : 185 front , 205 rear
Lotus Elise 2  : 175 front , 225 rear
Lotus Exige    : 195 front , 225 rear
Honda S2000 AP1: 205 front , 225 rear
Mazda RX8      : 225 front , 225 rear

```
The differences are revealing. The Evora for example weighs about the same as a 911 GT3 and has the same weight distribution, but has much less staggered tire sizes. This means the tail will come out more easily on the Evora, you generally have less rear grip. The Cayman has much less rear weight bias but has the same wide rear tires, which again edges towards understeer. The Elise setup was changed during its life to a much more staggered setup (its much narrower tires in general are due to its much lower weight).

(I also tossed in the AP1 S2000 and the RX8 since they are just about the only OEM cars that are tweaked for oversteer; the newer S2000's are on a 255 rear cuz Honda pussed out; note that unlike the Elise, the S2000 actually weighs close to the same as the Cayman and Evora, yet is on much narrower tires; that provides more "driving pleasure")

(BTW it's risky to try to learn too much from Lotus because they do things so differently from any other car maker; they use stiff springs, *NO* sway bar or very weak sway bar, no LSD, generally narrow front tires for quick steering, etc.)

For my reference :

```
BMW M coupe (E86) (2007)
330 hp , 262 lb-ft
3230 curb weight (manufacturer spec)

Cayman S (987.2) (2009)
320 hp, 273 lb-ft
2976 curb weight (manufacturer spec)

```
I really like the M coupe ; I think it's the last great BMW, reasonably light weight, and tuned for oversteer from the factory. But it is just not as good as the Cayman in pretty much any way. It's also got a much smaller interior and much less cargo space. It's also really not much cheaper, because it's rare and is holding value well, while used Porsche values fall fast. The only real advantage is that the BMW engine is a bit better (it has more potential), and the OEM suspension setup is more enthusiast-oriented. The M coupe would be so much better if they hadn't separated the boot from the cabin; it should have been a proper hatchback; that would provide more feeling of space in the cabin, and much more cargo room. Instead you get a small claustrophobic cabin, and a small boot.

Sometimes I lust after the really old BMW's, like the E28 M5 or the E30 M3; I love how small and boxy they are, and the downward-pointed noses, but they are just so far off modern performance, you would have to do a lot of work on them (suspension, engine). So then I look at newer ones, like the E46 M3, but they really have most of the disadvantages of a new car - big, heavy, tuned for safety, etc. I wind up at the M coupe as sort of the sweet spot of old values and new engineering, but then it just doesn't make sense compared to the Cayman either.

BTW the 2009 Cayman is a big improvement over the earlier cars. It's got a new engine that is really much better; if you just look at the figures it doesn't look like a big improvement (20 hp or something) but that hides the real value - it doesn't blow up like the old ones do; the older ones have power steering problems, air-oil separator problems, oil starvation problems, all of which are fixed in post-2009 cars. But you have to wait until 2013 because the real values in used Porsches come after the lease returns start showing up - when the cars are 4 years old.

Continuing the for my reference theme :

```
Porsche GT3 (997.1) (2007)
415 hp, 300 lb-ft
3075 - 3262 curb weight (manufacturer spec)

```
note 1 : it's very hard to find accurate weights of cars. Wikipedia is all over the place with inaccurate numbers. For one thing, the US and European official standards for how weight is measured differ (eg. what kind of fluids are required, whether a standard driver weight is added, etc); also the weights differ with options, and one of the tricks manufacturers play is to measure the weight without options, but then make those options mandatory. This is another one of those things you would like to see car magazines report on - give you true weights - but of course no, they don't do that.

note 2 : the GT3 is more than 2X the price of the M Coupe or Cayman, but in terms of depreciation I'm not sure it costs much more (and the M Coupe is actually cheaper than the Cayman I believe because it will depreciate less). In some sense, you should measure car cost not by initial outlay, but rather by the annual depreciation + opportunity cost of the money sunk. There are some classic cars that have expected ZERO depreciation - that is, other than opportunity cost and transaction cost, they are completely free to own (Jag E-types, classic Ferraris, etc.)

However, buying a car based on expected depreciation sucks as a life move. You have to be constantly worried about how your usage is affecting possible resale value. It's like the future purchaser is watching your every move and judging you. OMG you parked outside in the rain? That's \$5k off. You drove in a gravel parking lot? That's \$5k off. You put too many miles on it? That's \$10k off. It's a horrible feeling. It's much more fun to buy a car and assume you will never sell it and just do as you please with it. (of course many people allow this disease to affect their home ownership experience - they are constantly thinking about how what they do to their home will affect resale, which is a sad way to live).

So for example, while I think a \$70k GT3 is actually "cheaper" than a \$50k base 997 at the moment, that thinking sucks you into depreciation horror.

The Boss 302 Mustang has laid down some great lap times, in M3 / Porsche territory, for about half the price. The old "retard's wisdom" - that European cars may be slower in a straight line (than cheaper American cars), but go faster around corners - is no longer true. (In fact a recent comparo of the Porsche Turbo S vs. the Corvette ZR1 found the Porsche to be faster in a straight line, but the Vette to be faster around a bendy track! That's the exact opposite of the old stereotype which people still deeply associate with these cars). That said, I'm totally uninterested in this car (the Mustang). It weighs over 3600 pounds (the GT500 is over 3800). It's got those damn tall doors and tiny glass that make it feel like a coffin. I drove a standard Mustang with the same body style recently as a rental car, and it was just awful, it felt so huge, so unwieldy. The whole high-power giant heavy car thing is such a turn off. The only American sports car I like is the Pontiac Solstice. But the other reason I don't like the 302 is Aero.

I believe Aero is a very bad thing for road cars. Much of the speed of the 302 comes from aerodynamic bits (henceforth Aero). The same is true of the Viper ACR, and even the Porsche GT3. These cars have laid down much faster track times than their progenitors, and the main difference is the Aero (the other big difference is stiff track suspension). You look at the lap time and think the car is much improved, but the fact is you will never actually experience that improvement on the road. Aero only has a big effect over 100 mph; you are not taking corners at 120 on the road.

And furthermore - even if you *are* taking fast corners I contend that aero is a bad thing. The reason is that amateur drivers don't know how to handle aero and can get unpredictable effects from it. For example if you are going through a corner at 120 steady state and you brake, you decrease your downforce and suddenly can lose grip and either spin out or understeer. Aero can create false confidence because the car feels very stable and planted, but that's only there as long as you keep on the gas. So IMO when a car is improved by getting better Aero, that is not actually a benefit to the consumer.

Take a car and slap a big wing on it; the lap time goes down by 3 seconds. Is it a better road car? No, probably not. But if you look at lap time rankings it seems much better than its rivals.

I believe that lap times in general are not a great way to judge cars. Granted it is much better than 0-60 or 1/4 mile times, the way the US muscle mags compared cars in the old days, but lap times can be gamed in weird ways (tires, aero, suspension, etc) that aren't actually beneficial to the buyer. (of course, even worse is looking at top speeds, which that moron Clarkson seems to fixate on; he even does the most lol-worthy thing of all, which is to compare top speeds of cars that are *limited*, like he'll say that some shit sedan with a top speed of 160 is "faster than a BMW M5" ; umm, that's because the M5 is limited at 155 you giant fucking moron, and even if it wasn't, top speed is totally irrelevant because it depends so much on drag and gearing; you can actually greatly improve most cars by regearing them to lower their top speed to 120 or so).

Another issue is that lap times heavily reward grip. And grip is not really what you want. You want a bit of tail sliding fun, and ideally at a safe speed in a predictable way, which means reasonably low grip. This is part of what makes the Miata so brilliant, they intentionally designed it with low grip, so that even though it doesn't have much power, you could still get the tail out (the original was on 185 width tires).

I love the Best Motoring car reviews ; for example in this one :
YouTube - Boxster S vs Elise vs S2000 Touge Test & Track Battle - Best Motoring International
Tsuchiya rates the cars by how progressively they go from stable to spinning ; the ideal is a car that is steady and gives you lots of feedback, the worst is a car that suddenly goes to spinning without warning you.

If you just look at lap times, you will favor cars with aero downforce, stiff suspension, and wide tires. That's not really what you want. You want cars with "driving pleasure". For some reason, the Japanese seem to be the only manufacturers who get this; cars like the Miata, S2000, RX8 are not about putting up figures, they are about balance, and all those little things that go into a car making you happy (such as shift feel and getting the rev ranges right and so on).

## 3/21/2011

### 03-21-11 - Slow Coder

I'm doing the cross platform build script for my RAD library, and I am SO FUCKING SLOW at it. Somebody else could have done it in one day and it's taking me a week.

It reminds me that in some of my less successful job interviews, I tried to be honest about my strengths and weaknesses. I said something along the lines of "if you give me interesting technical work, I'm better than almost anyone in the world, but if you give me boring wiring work, I'm very ordinary, or maybe even worse than ordinary". I didn't get those jobs.

Job interviewing is one of those scenarios where honesty is not rewarded. Employers might give lip service to trying to find out what an employee's really like, but the fact is they are much more likely to hire someone who just says "I'm great at everything" and answers "what is your weakness" with one of those answers like "my greatest weakness is I spend too many hours at work".

It's sort of like the early phase of dating. If you are forthcoming and actually confess any of your flaws, the employer/date is like "eww yuck, if they admit that, they must have something really bad actually wrong with them". You might think it's great to get the truth out in the open right away, see if you are compatible, but all the other person sees is "candidate A has confessed no weaknesses and candidate B has said he has a fear of intimacy and might be randomly emotionally cold to me at times, and that was a really weird thing to say at a job interview".

Furthermore, it's sort of just a faux pas. It's like talking about masturbation around your parents. It's too much sharing with someone you aren't close with yet. All the people who understand the social code of how you're supposed to behave just feel really uncomfortable, like "why the fuck is this guy confessing his honest weaknesses? that is not what you're supposed to do in an interview/date". Job interviews/early dates don't really tell you much deep factual information about a person. There's an obvious code of what you're supposed to say and you just say that. It's really a test of "are you sane enough to say the things you are supposed to in this situation?".

### 03-21-11 - ClipCD

Copy current dir to clipboard :
```
c:\bat>type clipcd.bat
@echo off
cechonr "clip " > s:\t.bat
cd >> s:\t.bat
REM type s:\t.bat
s:\t.bat
```
(cechonr is my variant of "echo" that doesn't put a \n on the end).
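
A cechonr-alike is trivial to write; here's a minimal C sketch of the idea (illustrative only, not the actual cechonr source) :

```c
#include <stdio.h>

/* sketch of a cechonr-alike : print the args like echo does,
   but with no trailing newline
   (illustrative only, not the actual cechonr source) */
int cechonr_print(int argc, char ** argv, FILE * out)
{
    int i;
    for (i = 1; i < argc; i++)
    {
        if (i > 1) fputc(' ', out);
        fputs(argv[i], out);
    }
    /* note : no '\n' here, unlike echo */
    return 0;
}
```

Point it at stdout from a main() and you get the behavior the bat file relies on.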

I'm sure it could be done easier, but I've always enjoyed this crufty way of making complex batch files by having them write a new batch file. For example I've long done my own savedir/recalldir this way :

```
c:\bat>type savedir.bat
@echo off
cd > r:\t1.z
cd \
cd > r:\t2.z
zcopy -o c:\bat\echo_off.bat r:\t3.z
attrib -r r:\t3.z
type r:\t2.z >> r:\t3.z
cechonr "cd " >> r:\t3.z
type r:\t1.z >> r:\t3.z
zcopy -o r:\t3.z c:\bat\recalldir.bat
echo cls >> c:\bat\recalldir.bat
call dele r:\t1.z r:\t2.z r:\t3.z
call recalldir.bat

```
Less useful now that most CLIs have a proper pushd/popd. But this is a bit different because it actually makes a file on disk (recalldir.bat); I use it to set my "home" dir, and my DOS startup bat runs recalldir.

In other utility news, my CLI utils (move,copy,etc) have a new option which everyone should copy - when you have a duplicate name, you can ask it to check for binary identity right there in the prompt :

```
r:\>zc aikmi.BMP z
R:\z\aikmi.BMP exists; overwrite? (y/n/A/N/u/U/c/C)?
R:\z\aikmi.BMP exists; overwrite? (y/n/A/N/u/U/c/C)c
CheckFilesSame : same
R:\z\aikmi.BMP exists; overwrite? (y/n/A/N/u/U/c/C)y
R:\aikmi.BMP -> R:\z\aikmi.BMP

```
And of course like all good prompts, for each choice there is a way to say "do this for every prompt".
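
The 'c' check itself is nothing fancy, just a byte-wise compare of the two files; something like this sketch (hypothetical names, not the actual source) :

```c
#include <stdio.h>

/* sketch of a binary-identity check like the 'c' prompt option
   (hypothetical names, not the actual CheckFilesSame source) */
int FilesSame(FILE * a, FILE * b)
{
    int ca, cb;
    do
    {
        ca = getc(a);
        cb = getc(b);
        if (ca != cb) return 0;   /* differ, or one file is shorter */
    } while (ca != EOF);
    return 1;   /* hit EOF together with no mismatches */
}
```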

(BTW if you want a file copier for backing up big dirs, robocopy is quite good. The only problem is the default number of retries is no good; when you hit files with problems it will just hang forever (well, 30 million seconds anyway, which is essentially forever). You need to use /R:10 and /W:10 or something like that).

## 3/19/2011

I've started working out again recently. I'm trying to do things differently this time, hopefully in a way that leads to more long term good foundational structure for my body problems. Obviously that would have been much easier to do at a young age, but better late than never I guess. I believe that in the past I may have overdeveloped the easy muscles, which is basically the "front" - pecs, abs, biceps, etc. I'm not sure if that contributed to my series of shoulder injuries, but it certainly didn't help.

My intention this time is to try to develop musculature that will help support my unstable shoulders as well as generally help with "programmer's disease". So generally that means strengthening the back, shoulder stabilizers, lots of over-head work, and dynamic work that involves full body moves, flexibility and extension.

The other change is that the gym I'm going to here happens to have no proper weights (aka barbells and racks). Hey dumb gym owners : if you only put ONE thing in your gym, it should be a power rack with barbells; it's the most useful and general-purpose single piece of gym equipment. And of course this gym has no power rack, just a bunch of those stupid fucking machines. You could get a full workout with just bodyweight moves for the small muscles and a power rack for the big ones. In fact I would love a gym that's just a big empty room and a bunch of racks and bars, but that's reserved for pro athletes and nutters like crossfit.

Anyway, the one thing they do have is kettlebells, so I'm doing that. It's pretty fun learning the new moves. If you read the forums you'll see a bunch of doofuses talking about how kettlebells "change everything" and are "so much more fun". No, they're not. But they are different. So if you've done normal weights for many years and you're sick of it, it might be a nice change of pace. Learning new moves gives your mind something to do while your body is lugging weight around; it keeps you from dying of boredom.

I'm also trying to avoid all crunch-like movements for abs, that is, all contractions. So far I'm doing a bunch of plank variants, and of course things like overhead farmers walks, but I may have to figure out some more to add to that. One of the best exercises for abs is just heavy deadlifts, but sadly I can't do that in the dumb yuppie gym.

## 3/14/2011

### 03-14-11 - cbloom.com-exe BmpUtil update

I put up a new BmpUtil on the cbloom.com/exe page . Release notes :

```
bmputil built Mar 14 2011 12:49:42
bmp view <file>
bmp info <file>
bmp copy <fm> <to> [bits] [alpha]
bmp jpeg <fm> <to> [quality]
bmp crop <fm> <to> <w> <h> [x] [y]
bmp pad <fm> <to> <w> <h> [x] [y]
bmp cat <h|v> <fm1> <fm2> <to>
bmp size <fm> <to> <w> [h]
bmp mse <im1> <im2>
bmp median <fm> <to> <radius> [selfs]
file extensions : bmp,tga,png,jpg
jpg gets quality from last # in name

```

```
fimutil by cbloom built Mar 14 2011 12:50:56
fim view <file>
fim info <file>
fim copy <fm> <to> [planes]
fim mse <fm> <to>
fim size <fm> <to> <w> [h]
fim make <to> <w> <h> <d> [r,g,b,a]
fim eq <fm> <to> <eq>
fim eq2 <fm1> <fm2> <to> <eq>
fim cmd <fm> <to> <cmd>  (fim cmd ? for more)
fim interp <to> <fm1> <fm2> <fmt>
fim filter <fm> <to> <filter> [repeats] ; (filter=? for more)
fim upfilter/double <fm> <to> <filter> [repeats]
fim downfilter/halve <fm> <to> <filter> [repeats]
fim gaussian <fm> <to> <sdev> [width]
fim bilateral <fm> <to> <spatial_sdev> <value_sdev> [spatial taps]
file extensions : bmp,tga,png,jpg,fim
use .fim for float images; jpg gets quality from last # in name

fim cmd <fm> <to> <cmd>
use cmd=? for help
RGBtoYUV
YUVtoRGB
ClampUnit
Normalize
ScaleBiasUnit
ReGamma
DeGamma
normheight
median5

```

Some notes :

Most of the commands will give more help if you run them, but you may have to give some dummy args to make them think they have enough args. eg. run "fimutil eq ? ? ?"

FimUtil sizers are much better than the BmpUtil ones. TODO : any resizing except doubling/halving is not very good yet.

FimUtil eq & eq2 provide a pretty general equation parser, so you can do any kind of per-sample manipulation you want there.

"bmputil copy" is how you change file formats. Normally you put the desired jpeg quality in the file name when you write jpegs, or you can use "bmputil jpeg" to specify it manually.

Unless otherwise noted, fim pixels are in [0,1] and bmp pixels are in [0,255] (just to be confusing, many of the fimutil commands do a *1/255 for you so that you can pass [0,255] values on the cmd line); most fim ops do NOT enforce clamping automatically, so you may wish to use ClampUnit or ScaleBiasUnit.
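
e.g. ClampUnit and the *1/255 convention are just the obvious thing; a sketch of the idea (not the actual cblib source) :

```c
/* sketch of the [0,1] float-image conventions
   (illustrative only, not the actual cblib source) */
float ClampUnit(float x)
{
    if ( x < 0.f ) return 0.f;
    if ( x > 1.f ) return 1.f;
    return x;
}

/* many fimutil commands accept [0,255] values on the cmd line
   and scale them into [0,1] for you : */
float CmdLineToUnit(float v255)
{
    return ClampUnit( v255 * (1.f/255.f) );
}
```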

Yeah, I know imagemagick does lots of this shit but I can never figure out how to use their commands. All the source code for this is in cblib, so you can examine it, fix it, laugh at it, what have you.

## 3/12/2011

### 03-12-11 - C Coroutines with Stack

It's pretty trivial to do the C Coroutine thing and just copy your stack in and out. This lets you have C coroutines with stack - but only in a limited way.

[deleted]

Major crack smoking. This doesn't work in any kind of general way, you would have to find the right hack per compiler, per build setting, etc.

Fortunately, C++ has a mechanism built in that lets you associate some data per function call and make those variable references automatically rebased to that chunk of memory - it's called member variables, just use that!

## 3/11/2011

### 03-11-11 - Worklets , IO , and Coroutines

So I'm working on this issue of combining async CPU work with IO events. I have a little async job queue thing, that I call "WorkMgr" and it runs "Worklets". See previous main post on this topic :

So I'm happy with how my WorkMgr works for pure CPU work items. It has one worker thread per core, the Worklets can be dependent on other Worklets, and it has a dispatcher to farm out Worklets using lock-free queues and all that.

(ASIDE : there is one major problem that ryg describes well , which is that it is possible for worker threads that are doing work to get swapped out for a very long time while workers on another core that could have CPU time can't find anything to do. This is basically a fundamental issue with not being in full control of the OS, and is related to the "deficiency of Windows' multi-processor scheduler" noted above. BTW this problem is much worse if you lock your threads to cores; because of that I advise that in Windows you should *never* lock your threads to cores, you can use affinity to set the preferred core, but don't use the exclusive mask. Anyway, this is an interesting topic that I may come back to in the future, but it's off topic so let's ignore it for now).

So the funny issues start arising when your work items have dependencies on external non-CPU work. For concreteness I'm going to call this "IO" (File, Network, whatever), but it's just anything that takes an unknown amount of time and doesn't use the CPU.

Let's consider a simple concrete example. You wish to do some CPU work (let's call it A), then fire an IO and wait on it, then do some more CPU work B. In pseudocode form :

```
WorkletLinear
{
    A();
    h = IO();
    Wait(h);
    B();
}
```
Now obviously you can just give this to the dispatcher and it would work, but while your worklet is waiting on the IO it would be blocking that whole worker thread.

Currently in my system the way you fix this is to split the task. You make two Worklets, the first does work A and fires the IO, the second does work B and is dependent on the first and the IO. Concretely :

```
Worklet2
{
    B();
}

Worklet1
{
    A();
    h = IO();
    QueueWorklet( Worklet2, Dependencies{ h } );
}

```
so Worklet1 finishes and the worker thread can then do other work if there is anything available. If not, the worker thread goes to sleep waiting for one of the dependencies to be done.

This way works fine, it's what I've been using for the past year or so, but as I was writing some example code it occurred to me that it's just a real pain in the ass to write code this way. It's not too bad here, but if you have a bunch of IO's, like do cpu work, IO, do cpu work, more IO, etc. you have to make a whole chain of functions and get the dependencies right and so on. It's just like writing code for IO completion callbacks, which is a real nightmare way to write IO code.

The thing that struck me is that basically what I've done here is create one of the "ghetto coroutine" systems. A coroutine is a function call that can yield, or a manually-scheduled thread if you like. This split up Worklet method could be written as a state machine :

```
WorkletStatemachine
{
    if ( state == 0 )
    {
        A();
        h = IO();
        state++; enqueue self{ depends on h };
    }
    else if ( state == 1 )
    {
        B();
    }
}

```
In this form it's obviously the state machine form of a coroutine. What we really want is to yield after the IO and then be able to resume back at that point when some condition is met. Any time you see a state machine, you should prefer a *true* coroutine. For example, game AI written as a state machine is absolutely a nightmare to work with. Game AI written as simple linear coroutines is very nice :
```
WalkTo( box )
obj = Open( box )
PickUp( obj )

```
with implicit coroutine Yields taking place in each command that takes some time. In this way you can write linear code, and when some of your actions take undetermined long amounts of time, the code just yields until that's done. (in real game AI you also have to handle interruptions and such things).

So, there's a cute way to implement coroutines in C using switch :

So one option would be to use something like that. You would put the hidden "state" counter into the Worklet work item struct, and use some macros and then you could write :

```
WorkletCoroutine
{
    crStart   // macro that does a switch on state

    A();
    h = IO();

    crWait(h,1)  // macro that does re-enqueue self with dependency, state = 1; case 1:

    B();

    crEnd
}

```
that gives us linear-looking code that actually gets swapped out and back in. Unfortunately, it's not practical because this C-coroutine hack doesn't preserve local variables, creates weird scopes all over, and just is not actually usable for anything but super simple code. (the switch method gives you stackless coroutines; obviously Worklet can be a class and you could use member variables). Implementing a true (stackful) coroutine system doesn't really seem practical for cross-platform (it would be reasonably easy to do for any one platform, you just have to record the stack in crStart and copy it out in crWait, but it's just too much of a low-level hacky mess that would require intimate knowledge of the quirks of each platform and compiler). (you can do coroutines in Windows with fibers; not sure if that would be a viable solution because I've always heard "fibers are bad mmkay").
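
To make the switch trick concrete, here's a minimal compiling sketch of the stackless flavor (the protothreads / Simon Tatham style; macro and struct names here are mine for illustration, not the actual Worklet code). Note how the "local" variable has to live in the context struct, which is exactly the no-local-variables limitation :

```c
/* stackless coroutine via switch, protothreads-style
   (a sketch of the general trick, not the actual Worklet code) ;
   "locals" must live in the context struct because the real stack
   is not preserved across yields */
#define crBegin(state)      switch (state) { case 0:
#define crReturn(state,x)   do { (state) = __LINE__; return (x); \
                                 case __LINE__: ; } while (0)
#define crFinish            }

typedef struct { int state; int i; } Counter;

/* yields 0,1,2 on successive calls, then -1 forever */
int Counter_Next(Counter * c)
{
    crBegin(c->state);
    for (c->i = 0; c->i < 3; c->i++)
        crReturn(c->state, c->i);
    crFinish;
    return -1;
}
```

Each call to Counter_Next resumes right after the last crReturn; that's the whole trick, and also why real stack locals don't survive across yields.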

Aside : some links on coroutines for C++ :

The next obvious option is a thread pool. We go ahead and let the work item do IO and put the worker thread to sleep, but when it does that we also fire up a new worker thread so that something can run. Of course to avoid creating new threads all the time you have a pool of possible worker threads that are just sitting asleep until you need them. So you do something like :

```
WorkletLinear
{
    A();
    h = IO();
    ThreadPoolWait(h);
    B();
}

ThreadPoolWait(h)
{
    number of non-waiting workers --;

    Wait(h);

    number of non-waiting workers ++;

    CheckThreadPool();
}

CheckThreadPool()
{
    if ( number of non-waiting workers < desired number of workers &&
         is there any work to do )
    {
        start a new worker from the pool
    }

    if ( number of non-waiting workers > desired number of workers )
    {
        sleep worker to the pool
    }
}

// CheckThreadPool also has to be called any time a work item is added to the queue

```
or something like that. Desired number of workers would be number of cores typically. You have to be very careful of the details of this to avoid races, though races here aren't the worst thing in the world because they just mean you have not quite the ideal number of worker threads running.
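
The policy decision itself is trivial once you separate it from the counter maintenance; a sketch of just that part (hypothetical names; the racy atomic bookkeeping, which is the actual hard part, is deliberately left out) :

```c
/* sketch of just the thread-pool policy decision
   (hypothetical names ; the racy counter maintenance, which is the
   actual hard part, is omitted) */
enum { POOL_NOTHING = 0, POOL_WAKE_WORKER = 1, POOL_PARK_WORKER = -1 };

int CheckThreadPoolPolicy(int num_nonwaiting_workers,
                          int desired_workers,
                          int work_available)
{
    if ( num_nonwaiting_workers < desired_workers && work_available )
        return POOL_WAKE_WORKER;   /* start a worker from the pool */

    if ( num_nonwaiting_workers > desired_workers )
        return POOL_PARK_WORKER;   /* put a worker back to sleep */

    return POOL_NOTHING;
}
```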

This is a reasonably elegant solution, and on Windows is probably a good one. On the consoles I'm concerned about the memory use overhead and other costs associated with having a bunch of threads in a pool.

Of course if you were Windows-only, you should just use the built-in thread pool system. It's been in Windows forever in the form of IO Completion Port handling. New in Vista is a much simpler, more elegant thread pool that basically just does exactly what you want a thread pool to do, and is managed by the kernel so it's fast and robust and all that. For example, with the custom system you have to be careful to use ThreadPoolWait() instead of the normal OS Wait(), and you don't get nice behavior when you do something that puts you to sleep in other ways (like locking a mutex or whatever).

Some links on Windows thread pools and the old IO completion stuff :

So I've rambled a while and don't really have a point. The end.

### 03-11-11 - Rant Rant Rant

Well I found out you're not allowed to contribute to a Roth IRA if you make more than \$120k or something. WTF god damn unnecessarily complicated tax laws. So now I get to deal with penalties for excess contribution. If you just leave it in there you get a 6% penalty *every year*. God damnit, the fucking Roth limit is \$5000 anyway, it's not like the government is missing out on a ton of tax revenue because I made a contribution, it's just part of the fucking retarded way that they raise money without "raising taxes" because they aren't allowed to touch the nominal percent tax rate, they get it in other ways. (actually I'm sure that I made illegal contributions in past years too, god fucking dammit).

Oh, and god damnit why can't the IRS just do my taxes for me !? All I have are W2's and 1099's , you fuckers have all the information, and you're going to electronically check them against my filing, so you just fucking tell me what I'm supposed to pay.

Anyway, I hate fucking retirement savings. You're locking it up in a box where you aren't allowed to use it until you're old. Fuck you future self, you don't get my money, you can earn your own damn money.

My "Brownstripe" internet has been super flakey for the last week. It's incredibly frustrating trying to browse the web when the net is slow, because you become excruciatingly aware of all the unnecessary shit that people are doing on all their web pages. I'm just loading some blog I want to read and I keep getting "waiting for blah blah", site after site, various ad hosts, various tracker sites, etc. Shit like Google Maps is just horrible on slow/flakey nets. I want to be able to manually tell it to cancel all its previous requests and please update this fucking image tile right here that I'm clicking on.

Anyway, because of this I have discovered that Perforce is not actually robust over a flakey net connection. WTF Perforce, you are supposed to be well tested and super-robust. I submitted a big changelist over my flakey net connection. P4 crapped out (rather than retrying and just taking a long time like it should have), and managed to get itself into an invalid state. Some of the files in the changelist got submitted, and when I tried to do anything else to that changelist it told me "unknown changelist #". So I moved all the files in that changelist out to a new one and re-submitted once I got into the office, and discovered that about half the files had merge conflicts because they had already been sort of submitted (not actually conflicts because it was just the same change) (and "add of added file" errors). WTF, not confidence-inspiring, P4. Changelist submission is supposed to be atomic.

My fucking PS3 wants to fucking update its system software every two minutes. The worst thing is that it won't let me use Netflix until I do. And it's the worst kind of prompt. I mean, first of all, it's my fucking system, don't force me to update if I don't want to (especially not when the major change in this update is "you can now set controller turn-off timeouts per controller" or some shit). Second of all, if I can't fucking run anything without doing the update, then just do it. Don't ask me. Especially don't pretend that it's optional. I get a prompt like "there is an update available; press X to do it or O to continue". Okay, I don't want to fucking update so I press O. Then after a minute of grinding, I get "press X to update or O to continue" , okay, I press O, don't update. Then I get "you need to log in", WTF I was logged in, but here goes... then I get "you must update your system software to log in". ARG! WTF if it's not optional just fucking do it.

I also wish I could make the PS3 boot directly into Netflix, since that's all I ever use it for. For a device that could be a simple consumer electronics device, it sure is making itself feel like an annoying computer. Oh, and in other PS3 complaint news : the wireless controllers are sort of fail. 1. We spilled like two drips of water on one of them and it doesn't work anymore; 2. They're too heavy, maybe that's the vibration motors, old PS1 controllers were much lighter. 3. The battery runs out in like two seconds if you don't set them to auto-turn off, and 4. if you do set them to auto-turn off they take way too long to wake up. Like, why is my TV remote so much better than my PS controller? The PS3 fan is also a bit too loud. It's much quieter than the Xenon, and it's tolerable when you're playing games, but when you're watching movies it's annoying. The PS3 audio output also has some shoddy non-ground-loop-protected wiring. I was getting a nasty hum out of my stereo and I finally tracked it down to the PS3 RCA wires that I had hooked up. I have various other loops of the same sort and none of them caused any hum, so I put the blame on the PS3.

In non-computer related ranting news, my fence got tagged (spray paint graffiti). I guess that's what happens when you live in an "up and coming" neighborhood. The tagging is just sort of amusing to me (my main complaint is that it's just a shitty tag, come on, put some artistry into it!). The annoying thing is that I have to get the landlord involved. I would just paint over it myself and not report it at all, but then they might see it's a shitty paint job and I'd be responsible. The landlord is just one of those fucking nightmare people who turn everything into a huge stressful hassle. She over-reacts and gets into a giant tizzy about things, it makes you just not want to tell them about any kind of problem. (I've worked with this kind of person before and it's a real nightmare, because you wind up not wanting to assign them any tasks because they act like it's just so onerous, and they wind up working less overtime than everyone else, but complaining more). So now the landlord wants to get in the house to get the old paint stashed in the closet, so I have to dispose of the dead bodies and the meth lab. God dammit.

## 3/10/2011

### 03-10-11 - House Contemplation

Well, I'm thinking about buying a house. Property values are plummeting fast around here. I think they have a ways to fall still, but asking & selling are starting to come together a bit (for the past 2-3 years there's been a huge gap between initial asking price and final sale price as people refused to accept the reality of the situation). By the time I get my shit together and actually buy in 6-12 months it should be a nice buyer's market. And interest rates are super low and I have a bunch of cash that I don't know what to do with, so that all points to "buy".

On the other hand, it sort of fucking sucks to live in Seattle. I feel like I've explored most of it already and I need a new place to explore. The countryside is really far away here; it's weird because you think of Seattle as being a beautiful place surrounded by mountains, but it's actually one of the most difficult places to get away from civilization of anywhere I've ever lived. (eg. downtown San Francisco is much much closer to real countryside). Here, you can get out I90, but the I90 corridor really actually sucks, there are zero country roads going off the freeway, and all the hikes are straight up the valley within earshot of the freeway (the thing that doesn't suck is backpacking, when you get far enough in to Alpine Lakes or whatever it's fantabulous). To really get out to country roads and wild open spaces you have to drive 3-4 hours from Seattle, up to Mountain Loop or across a pass, or down to Mount Rainier, something like that.

There's nowhere to fucking bike except Mercer Island over and over (unless you drive 2+ hours, and even then it's not great because it's very hard to find good country roads around here; the ones within 1 hour are generally narrow, trafficky, and pot-holed (eg. Duvall, Green River Valley); I think probably Whidbey is the best spot within 2 hours). And even if there was somewhere to bike it would be raining.

The gray horrible winter is also a sneaky bastard. I find myself starting to think, "I'm used to this, I can handle it" , but the thing I'm not realizing is that I'm just always constantly slightly depressed. It seeps into you and becomes the new norm, and humans have this way of habituating and not realizing that their norm has been lowered. All winter long, I don't laugh, I don't play, I don't dance, I don't meet new people or try new things, I sleep in and eat too much sugar and drink too much booze, I'm just constantly depressed, and I think pretty much everyone in Seattle is, they just don't realize it because it becomes their baseline. You only realize it when you go on vacation somewhere sunny and it's like somebody just lifted a weight off your head and you're like "holy crap, life doesn't have to suck all the time! who knew!?"

And of course the people in Seattle are fucking terrible. Passive-aggressive, busybody, uptight, bland, ugly, pale, pasty, out of shape, unfashionable, slow-driving, sexually timid, white-bread, unfriendly, cliquey. I'm sure whatever house I move into, the neighbors will watch through the window and raise their eyebrows disapprovingly at various things I do. Capitol Hill is by far the best part of Seattle because it's full of The Gays and people who have moved here from out of state, and that's a better population. (in general, the new-comers are almost always a better population than the old-timers; it's generally a better portion of the population who moves to a new place looking for adventure or their fortune; that's why everyone in CA is so beautiful, it's why the West in general is better than the Midwest, it's why America used to be so great and why our closed doors are now hurting us; it's so retarded, of course we should allow citizenship for anyone with a college degree, we would basically steal all the best people from China and India, though it may already be too late for that move).

Okay, Seattle rant aside, I'm still considering it, cuz hey, I'm sick of fucking renting and moving, I want to be able to do what I want to my own house, and you have to live somewhere, and the jobs up here are really good, and if you lock yourself in your bedroom and watch TV all the time it really doesn't matter where you are.

It's pretty insane to go back and look at the property records for sale prices over the last 15 years or so. ( King County eReal Property Records and Parcel Viewer ). For example one house has these sell values :

```
3/11/2010   asking 500k (sale probably less)
3/23/2006   $739,000.00
1/08/2002   $215,302.00
1/27/1997   $130,000.00

```
N found the most insane one :
```
05/18/2007  $605,000
10/31/1997  $30,500

```
Whoever sold in the bubble sure did well. Assuming they took the profit and moved to The Philippines or somewhere sane.

The other reason I'm thinking about buying is this area around where I live is in the process of gentrifying (see, for example: recent graffiti attack) and I think there's a decent chance to strike lucky. Of course the big percent gain from that has already happened - that's why the prices above have gone so crazy - they were in very poor, crime-ridden, black neighborhoods, that have already semi-gentrified and cleaned up quite a lot. But it's still a bit grungey around here, and only half a block away the real wave of yuppie motherfuckers is marching forward like a khaki tidal wave. The hard thing about the gentrification wave is timing; it can take 50 years.

Of course the whole idea of individuals "investing" in the home they live in is retarded and is a real sickness of the last ten years. I have to keep myself from getting swept up in that "norm" (when everyone around you is saying the same wrong idea, it's easy to forget that it's shite). Actual real estate investors invest in lots of properties, not one, and they generally invest for income, not appreciation. And of course home value appreciation is only income if you actually move to a much cheaper place when you sell, which hardly anyone actually does. Unfortunately this belief causes homes to be valued at prices that don't make any sense if you don't believe that it is an "investment".