05-31-11 - STB style code

I wrote a couple of LZP1 implementations (see previous) in "STB style", that is, plain C, ANSI, single headers you can just include and use. It's sort of wonderfully simple and easy to use. Certainly I understand the benefit - if I'm grabbing somebody else's code to put in my project, I want it to be STB style, I don't want some huge damn library.

(for example I actually use the James Howse "lsqr.c" which is one file, I also use "divsufsort.c" which is a delightful single file, those are beautiful little pieces of code that do something difficult very well, but I would never use some beast like the GNU Triangulated Surface lib, or OpenCV or any of those big bloated libs)

But I just struggle to write code that way. Like even with something as simple as the LZP's, okay fine you write an ANSI version and it works. But it's not fast and it's not very friendly.

I want to add prefetching. Well, I have a module "mem.h" that does platform-independent prefetching, so I want to include that. I also want fast memsets and memcpys that I already wrote, so do I just copy all that code in? Yuck.

Then I want to support streaming in and out. Well I already have "CircularBuffer.h" that does that for me. Sure I could just rewrite that code again from scratch, but this is going backwards in programming style and efficiency, I'm duplicating and rewriting code and that makes unsafe buggy code.

And of course I want my assert. And if I'm going to actually make an EXE that's fast I want my async IO.

I just don't see how you can write good code this way. I can't do it; it totally goes against my style, and I find it very difficult and painful. I wish I could, it would make the code that I give away much more useful to the world.

At RAD we're trying to write code in a sort of hierarchy of levels. Something like :

very low level : includes absolutely nothing (not even stdlib)
low level : includes only low level (or lower) (can use stdlib)
              low level stuff should run on all platforms
medium level : includes only medium level (or lower)
               may run only on newer platforms
high level : do whatever you want (may be PC only)

This makes a lot of sense and serves us well, but I just have so much trouble with it.

Like, where do I put my assert? I like my assert to do some nice things for me, like log to file, check if a debugger is present and int 3 only if it is (otherwise do an interactive dialog). So that's got to be at least "medium level" - so now I'm writing some low level code and I can't use my assert!

Today I'm trying to make a low level logging facility that I can call from threads and it will stick the string into a lock-free queue to be flushed later. Well, I've already got a bunch of nice lockfree queues and stuff ready to go, that are safe and assert and have unit tests - but those live in my medium level lib, so I can't use them in the low level code that I want to log from.

What happens to me is I wind up promoting all my code to the lowest level so that it can be accessible to the place that I want it.

I've always sort of struggled with separated libs in general. I know it's a nice idea in theory to build your game out of a few independent (or hierarchical) libs, but in practice I've always found that it creates more friction than it helps. I find it much easier to just throw all my code in a big bag and let each bit of code call any other bit of code.


05-30-11 - Wood Counter Tops

Wood ("butcher block") counter tops are fucking retarded. Some problems you might not have thought of :

They burn if you put hot pans on them. So you have to have hot pads and shit on your counter top all the time. Under normal use, that's all fine or whatever, but say you have some minor kitchen fire, you have a pan on the oven that catches fire, you pull it out - you can't just put it on the counter top, or the counter top might catch fire too.

Water ruins them. Water is quite common in a kitchen. In particular, you can't use a dish rack because it's too likely to get water on the counters. If you're a real big moron, you'll extend your butcher block counter tops right up and all around the sink. So now you have a sink, which is wet, and a wood counter, which can't get wet. The inevitable result is warping all around the sink.

They dry out and have to be oiled regularly, like once a month. Basically they're a giant pain in the ass. The reason people get them is for looks, because they look like a cool old rustic kitchen, but they are not actually functional. The proper materials for kitchen counters are stone or ceramic (I'm not convinced about the modern plastics like Corian, but they might be alright, I dunno).

For god's sake if you do feel the need to use wood for your counter top (presumably because you're a moron who cares more about looking like the photos in Dwell than how things function), don't run it straight up to the sink, at least put something else around the sink, and the stove.

Using wood on your counter top is almost as retarded as using leather in your car interior, which is ruined by sun (hmm what comes through car windows?), water, and similarly needs to be oiled regularly or it gets stiff and cracks. Leather is cold in winter and hot in summer and is heavy and smelly and expensive. It's monstrously retarded. We have better fucking materials now!

05-30-11 - Product Review - Honeywell 18155 SilentComfort

Honeywell 18155 SilentComfort Product Review.

This thing is a badly designed piece of crap. The way it works is it sucks in air through a tiny "pre-filter" (at the bottom), pumps it through the fans, then out through the big HEPA filter. The result is that the pre-filter gets clogged with dust almost immediately, like within a few weeks. I wager that 99% of people running these machines have pre-filters full of crap. Once the pre-filter is full of dust, the machine gets much louder and pulls much less air. Not only that but it starts pulling dust into the fans, which then get noisy and are hard to clean.

So, you are forced to constantly change the pre-filter. And they don't actually make a pre-filter that's cut to the size you need, they just sell sheets of charcoal paper, so you have to cut it yourself and it's a big pain in the ass (even if they did, they would be like $20 a pop which is too much since you have to change them at least once a month). I'm not surprised that they rate the HEPA filter part (on the outbound air) of it as "lifetime" because of course it's the prefilter that actually does all the work.

A properly designed machine should have a large surface area paper filter at the air intake.

The other problem with this thing is that it doesn't have a low enough speed setting. Even on lowest setting it is very far from "silent". You hear a constant loud whooshing. It needs a setting that's about 50% the speed of the lowest.

While I'm at it, let's talk some more about dust and filters.

I noticed that my HTPC has been getting steadily louder recently. The problem of course is dust. Dust makes fans noisy, particularly the greasy crud stuck on the blades which really disturbs the air flow. It also reduces their efficiency, and dust inside the PC greatly impedes airflow and passive cooling, so it's important to clean it out regularly (once a year is probably enough).

It would be preferable really to have little paper filters on the PC air intakes that you could just replace, but PC fans are not designed for that so it's not a good idea to add them. The biggest PITA is probably the power supply, because the fans are inside the PSU block, and so are some nasty capacitors.

Laptops are a much worse problem, they have tiny passages that can easily get clogged with dust and ruin their cooling. Unfortunately you can't really clean them without taking them apart. This should be done about once a year, or your laptop will be running louder and hotter than it needs to. I'm sure the insides of game consoles are totally clogged up too.

Some other random places that I clean that you might not be aware of :

The radiator and fan in the back of the fridge.

The fume hood above your stove. The grating for the hood gets filthy and clogged which ruins the suction. Soak in degreaser or just replace. The fan blades of the fume hood need cleaning too, though it's often hard to access.

Heater air vents. Obviously you replace the filter once a month, but you also need to take off all the output grills and clean inside them. I would really like to clean out the entire pipe from the air intake to the output points but I can't imagine how to do that. Basically forced air heaters are blowing giant clouds of filth and allergens into your air all the time and there's nothing you can do about it. One option is to put filters at the output grills, but most heat systems aren't designed for that much back pressure.

Inside your vacuum cleaner. Not only do you replace the paper bag, you take the bottom off the machine and clean out all the dust stuck in the piping. Your suction will be greatly improved.

The exhaust tube for your clothes dryer. Again you can't get to most of the tube, but most of the shit will be at the end near the dryer or the end going out of the house.


05-29-11 - Cars - BMW 1M and Cayman R

These are two very interesting cars getting a lot of recent press. I think they might be the two best cars in the whole world right now, so let's explore some details. I haven't actually driven either, since it's impossible to find either at a dealer still (maybe forever - both are semi-limited runs and are getting bought as fast as they are made), but I have driven a 135 and Cayman S.

The BMW 1M is basically a 135 with some tweaks. The Cayman R is basically a Cayman S with some tweaks. Both appear to perform much better than their base cars. If you look at lap times, you would conclude they are radically better. But that is a bit misleading.

I believe the tweaks to the 1M and the R are pretty similar, and both are what enthusiasts do to the base cars.

Neither one really has a modified engine at all. That is mildly disappointing. The Cayman R gets +10 hp from air flow mods (which enthusiast owners do to their Cayman S), and the 1M gets +30 hp from air & ecu mods (enthusiast 135 owners do air & ecu for +any hp, 30 is no problem). Also, neither one is really lighter. If you weigh them in the no-radio, no-AC, lightweight seat spec then they seem a bit lighter, but of course you could do that to your S/135 if you wanted. You can rip out your AC and put Recaros in your current car, that's not a serious way to lighten a car, so they are a big fail in that respect (the actual structural weight savings on both cars is something like 30 pounds; it's also retarded how they do the lightening in these cars; they remove your AC and radio, which you want, but then they still put a cosmetic plastic piece over your engine; WTF, remove the useless cosmetic weight and leave the functional shit please; if you want to get serious about lightening, I could go for manual windows and manual door locks and trunk release and all that, you can get rid of all those motors).

(the lack of engine mod is particularly disappointing in the Cayman R, since Porsche has got a bunch of engines just sitting around that could have gone in there; they could have put the 3.6 version of the 9A1 in there instead of the 3.4 that the Cayman usually gets)

What has been done? Well the formula is exactly the same for both cars : LSD & Stiffer rear.

Porsches and BMW's have been set up for horrible understeer for the past many years. You may have seen me write about this exact issue with Porsches before. Most of the cars don't come with LSD; on Porsches you can option an LSD, but that is sort of a kludge because they aren't properly set up (you want different suspension setups for cars with LSD vs. not). Basically all they're doing with the R and the 1M is undoing the bad setup of the S and the 135. Particularly I think with the 135, it's a travesty that such a good engine has been sold with fucking run-flat tires and no LSD. So they're just fixing that sin.

Once you look at the 1M and R not as new special sporty models, but just as fixes to make these good cars the way they always should have been, you see the point of them. Anyway, some details :

BMW 1M :
N54 +30 hp from piston rings, air, ecu
LSD (viscous)
proper rubber
transmission oil cooler
M3 rear suspension and rear subframe (stiffer)
wider track
light flywheel
steering from E3
ecu (throttle response?)
lots of actual M3 parts

Cayman R
9A1 3.4L +10 hp (exhaust manifold, ecu, 7400 vs 7200 rev limit)
LSD (friction clutch pack)
lower / non-PASM / stiffer suspension
more negative camber (still not enough)
stiffer rear sway
not actually GT3 parts (not adjustable)

The differences between the 1M and 135 are a lot more significant than the differences between the Cayman R and S. In both cases the price premium ($5-10k or so) is so small that of course you should go ahead and buy the 1M/R.

I know a lot more about Porsches than I do about BMW's, and I can say that as usual Porsche have cheaped out and done the absolute minimum to hit their performance target. They could have easily grabbed the GT3 suspension bits, which is what you really want because they are adjustable in a wide range from good on the street to good with R-comp on the track, but no, that would have cut into their massive profit margin, so instead they give you new non-adjustable suspension bits, that are better than the S, but still not actually good enough for the serious enthusiast. You can just see it in every little aspect of modern Porsches; they don't give you the good brake cooling ducts, they don't give you the good shift cables, they don't give you the good throttle body, etc.

To be redundant, basically what a serious Cayman S owner does is they buy the GT3 front LCA's, GT3 sway bars, maybe a stiffer spring set, and a good aftermarket LSD. All that costs you a little bit more than the Cayman R, but then you have a much better car. So if you're going to do track mods for your car anyway, there's no point in starting with the R, because you're going to replace all its parts anyway, just start with the S.

It's sort of an odd fact that all modern Porsches (below the GT3) are shipped intentionally crippled. This not only makes them look bad in magazine tests, it makes them look bad in comparisons like "fastestlaps.com" that compare stock cars, and it is a real problem for people who want to race them in SCCA "stock" classes. It's strange and very annoying. It's very useful when a car manufacturer offers a "super sport" option pack (such as the Mustang Boss 302 Laguna Seca) - even if you don't buy that car, it gives you a set of example parts to copy, it shows you how to set up the car for maximum performance, and it gives the magazines a way to test the car in properly set up trim. And if it's an option pack (as opposed to a nominally different model like the R), then you're allowed to use it in SCCA stock racing.

One misgiving I have about the Cayman R that would give me pause before buying it new without driving is that it is lower and stiffer than the Cayman S, which is already quite low and quite stiff. It's sort of cheating to make a car that handles better by making it lower and stiffer. It's like making a compressor that gets better ratio by using more memory. It's not actually an improvement in the fundamentals, it's just dialing your trade-off slightly differently on the curve. The really impressive thing is to make a car that handles better without being lower or stiffer.

(the other problem with buying the Cayman R new is that Porsches are absurdly over-priced new, it's basically a $50k car being sold for $70k just because there are enough suckers and no competition)

Attempt to summarize my usual descent into rambling :

The Cayman R is a nice thing because it shows everyone what a properly set up Cayman S can do; the Cayman S has looked falsely bad in lots of tests (see for example Fifth Gear test where Tiff complains of understeer) because of the way it's sold set up all wrong, so the R is good for magazines, but it really isn't that special of a model, and if you wait until 2014 for the next gen 991 Cayman it will be better.

The 1M on the other hand is a very special model that will only be around this year, and gives you loads of goodies for the value; if you were considering a small BMW, the 1M is clearly the one to buy.


05-20-11 - LZP1 Variants

LZP = String match compression using some predictive context to reduce the set of strings to match

LZP1 = variant of LZP without any entropy coding

I've just done a bunch of LZP1 variants and I want to quickly describe them for my reference. In general LZP works thusly :

Make some context from previous bytes
Use context to look in a table to see a set of previously seen pointers in that context
  (often only one, but maybe more)

Encode a flag for whether any match, which one, and the length
If no match, send a literal

Typically the context is made by hashing some previous bytes, usually with some kind of shift-xor hash. As always, larger hashes generally mean more compression at the cost of more memory. I usually use a 15 bit hash, which means 64k memory use if the table stores 16 bit offsets rather than pointers.
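For concreteness, a shift-xor context hash of that shape might look like this; a minimal sketch where the 3-byte context and the shift amount are illustrative choices, not taken from any particular LZP :

```c
#include <stdint.h>

#define LZP_HASH_BITS 15
#define LZP_HASH_SIZE (1 << LZP_HASH_BITS)  /* 32768 entries; 64k if storing 16-bit offsets */

/* Shift-xor hash of the 3 bytes preceding p, masked down to the table size.
   p points at the byte about to be coded; p[-1..-3] is the context. */
static uint32_t lzp_context_hash(const uint8_t *p)
{
    uint32_t h = 0;
    h = (h << 5) ^ p[-3];
    h = (h << 5) ^ p[-2];
    h = (h << 5) ^ p[-1];
    return h & (LZP_HASH_SIZE - 1);
}
```

Equal contexts always land in the same slot, which is all LZP needs; collisions just act like false context matches and cost a little compression.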

Because there's no entropy coding in LZP1, literals are always sent in 8 bits.

Generally in LZP the hash table of strings is only updated at literal/match decision points - not for all bytes inside the match. This helps speed and doesn't hurt compression much at all.

Most LZP variants benefit slightly from "lazy parsing" (that is, when you find a match in the encoder, see if it's better to instead send a literal and take the match at the next byte), but this hurts encoder speed.

LZP1a : Match/Literal flag is 1 bit (eight of them are sent in a byte). Single match option only. 4 bit match length, if match length is >= 16 then send full bytes for additional match length. This is the variant of LZP1 that I did for Clariion/Data General for the Pentium Pro.

LZP1b : Match/Literal is encoded as 0 = LL, 10 = LM, 11 = M (this is the ideal encoding if literals are twice as likely as matches) ; match length is encoded as 2 bits, then if it's >= 4 , 3 more bits, then 5 more bits, then 8 bits (and after that 8 more bits as needed). This variant of LZP1 was the one published back in 1995.
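Read literally, that escalating length code could be sketched like so. This is a hedged reconstruction : I'm assuming the all-ones value in each field is the escape to the next, wider field (the 1995 source isn't quoted here), and the BitIO helper is my own scaffolding :

```c
#include <stdint.h>

/* Minimal MSB-first bit I/O over a zero-initialized byte buffer. */
typedef struct { uint8_t *buf; uint32_t bitpos; } BitIO;

static void put_bits(BitIO *b, uint32_t val, int nbits)
{
    for (int i = nbits - 1; i >= 0; i--) {
        if ((val >> i) & 1)
            b->buf[b->bitpos >> 3] |= (uint8_t)(0x80u >> (b->bitpos & 7));
        b->bitpos++;
    }
}

static uint32_t get_bits(BitIO *b, int nbits)
{
    uint32_t v = 0;
    for (int i = 0; i < nbits; i++) {
        v = (v << 1) | ((b->buf[b->bitpos >> 3] >> (7 - (b->bitpos & 7))) & 1);
        b->bitpos++;
    }
    return v;
}

/* LZP1b-style escalating length code: field widths 2,3,5,8,8,8,...
   The all-ones value in each field escapes to the next field. */
static void put_length(BitIO *b, uint32_t len)
{
    static const int widths[4] = { 2, 3, 5, 8 };
    int i = 0;
    for (;;) {
        uint32_t cap = (1u << widths[i]) - 1;   /* max value = escape code */
        if (len < cap) { put_bits(b, len, widths[i]); return; }
        put_bits(b, cap, widths[i]);
        len -= cap;
        if (i < 3) i++;                          /* stay at 8 bits after that */
    }
}

static uint32_t get_length(BitIO *b)
{
    static const int widths[4] = { 2, 3, 5, 8 };
    uint32_t len = 0;
    int i = 0;
    for (;;) {
        uint32_t v = get_bits(b, widths[i]);
        len += v;
        if (v < (1u << widths[i]) - 1) return len;
        if (i < 3) i++;
    }
}
```

Small lengths cost 2 bits, and the code degrades gracefully to ~8 bits per 255 of extra length, which is the same flat-tail idea as LZP1a's full continuation bytes.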

LZP1c : Hash table index is made from 10 bits of backwards hash and 5 bits of forward hash (on the byte to be compressed). Match/Literal is a single bit. If a match is made, a full byte is sent, containing the 5 bits of forward hash and 3 bits of length (4 bits of forward hash and 4 bits of length is another option, but is generally slightly worse). As usual if match length exceeds 3 bits, another 8 bits is sent. (this is a bit like LZRW3, except that we use some backward context to reduce the size of the forward hash that needs to be sent).

LZP1d : string table contains 2 pointers per hash (basically a hash with two "ways"). Encoder selects the best match of the two and sends a 4 bit match nibble consisting of 1 selection bit and 3 bits of length. Match flag is one bit. Hash way is the bottom bit of the position, except that when a match is made the matched-from pointer is not replaced. More hash "ways" provide more compression at the cost of more memory use and more encoder time (most LZP's are symmetric, encoder and decoder time is the same, but this one has a slower encoder) (nowadays this is called ROLZ).

LZP1e : literal/match is sent as run len; 4 bit nibble is divided as 0-4 = literal run length, 5-15 = match length. (literal run length can be zero, but match length is always >= 1, so if match length >= 11 additional bytes are sent). This variant benefits a lot from "Literal after match" - after a match a literal is always written without flagging it.

LZP1f is the same as LZP1c.

LZP1g : like LZP1a except maximum match length is 1, so you only flag literal/match, you don't send a length. This is "Predictor" or "Finnish" from the ancient days. Hash table stores chars instead of pointers or offsets.
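LZP1g is tiny enough that a full round trip fits here. This is my hedged reconstruction of a Predictor-style coder (order-2 shift-xor context, 12-bit table, 8 flag bits packed per byte), not the exact historical code :

```c
#include <stdint.h>
#include <string.h>

#define PRED_BITS 12
#define PRED_SIZE (1 << PRED_BITS)

static uint32_t pred_hash(uint8_t p1, uint8_t p2)
{
    return (((uint32_t)p1 << 4) ^ p2) & (PRED_SIZE - 1);
}

/* Emits one flag byte per 8 input bytes, then literals for the misses.
   Predicted bytes cost 1 bit; the table stores chars, not pointers.
   Returns the compressed size. */
static size_t pred_compress(const uint8_t *in, size_t inlen, uint8_t *out)
{
    uint8_t table[PRED_SIZE];
    memset(table, 0, sizeof(table));
    uint8_t p1 = 0, p2 = 0;
    size_t ip = 0, op = 0;
    while (ip < inlen) {
        size_t flagpos = op++;          /* reserve room for the flag byte */
        uint8_t flags = 0;
        for (int i = 0; i < 8 && ip < inlen; i++, ip++) {
            uint32_t h = pred_hash(p1, p2);
            uint8_t c = in[ip];
            if (table[h] == c) flags |= (uint8_t)(1 << i);  /* hit: flag only */
            else { table[h] = c; out[op++] = c; }           /* miss: update + literal */
            p2 = p1; p1 = c;
        }
        out[flagpos] = flags;
    }
    return op;
}

static size_t pred_decompress(const uint8_t *in, size_t inlen, uint8_t *out, size_t outlen)
{
    uint8_t table[PRED_SIZE];
    memset(table, 0, sizeof(table));
    uint8_t p1 = 0, p2 = 0;
    size_t ip = 0, op = 0;
    while (ip < inlen && op < outlen) {
        uint8_t flags = in[ip++];
        for (int i = 0; i < 8 && op < outlen; i++) {
            uint32_t h = pred_hash(p1, p2);
            uint8_t c;
            if (flags & (1 << i)) c = table[h];
            else { c = in[ip++]; table[h] = c; }
            out[op++] = c;
            p2 = p1; p1 = c;
        }
    }
    return op;
}
```

Worst case it expands by one flag byte per 8 input bytes, which is the same 1/8 overhead floor as LZP1a's packed match/literal flags.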

Obviously there are a lot of ways that these could all be modified to get more compression (*), but it's rather pointless to go down that path because then you should just use entropy coding.

(* a few ways : combine the forward hash of lzp1c with the "ways" of lzp1d ; if the first hash fails to match escape down to a lower order hash (such as maybe just order-1 plus 2 bits of position) before outputting a literal ; output literals in 7 bits instead of 8 by using something like an MTF code ; write match lengths and flags with a tuned variable-bit code like lzp1b's ; etc. )

Side note : while writing this I stumbled on LZ4 . LZ4 is almost exactly "LZRW1". It uses a hash table (hashing the bytes to match, not the previous bytes like LZP does) to find matches, then sends the offset (it's a normal LZ77, not an LZP). It encodes as 4 bit literal run lens and 4 bit match lengths.

There is some weird/complex stuff in the LZ4 literal run len code which is designed to prevent it from getting super slow on random data - basically if it is sending tons of literals (more than 128) it starts stepping by multiple bytes in the encoder rather than stepping one byte at a time. If you never/rarely compress random data then it's probably better to remove all that because it does add a lot of complexity.

REVISED : Yann has clarified LZ4 is BSD so you can use it. Also, the code is PC only because he makes heavy use of unaligned dword access. It's a nice little simple coder, and the speed/compression tradeoff is good. It only works well on reasonably large data chunks though (at least 64k). If you don't care so much about encode time then something that spends more time on finding good matches would be a better choice. (like LZ4-HC, but it seems the LZ4-HC code is not in the free distribution).

He has a clever way of handling the decoder string copy issue where you can have overlap when the offset is less than the length :

    U32     dec[4]={0, 3, 2, 3};

    // copy repeated sequence
    cpy = op + length;
    if (op-ref < 4)
    {
        *op++ = *ref++;
        *op++ = *ref++;
        *op++ = *ref++;
        *op++ = *ref++;
        ref -= dec[op-ref];
    }
    while(op < cpy) { *(U32*)op=*(U32*)ref; op+=4; ref+=4; }
    op=cpy;     // correction

This is something I realized as well when doing my LZH decoder optimization for SPU : basically a string copy with length > offset is really a repeating pattern, repeating with period "offset". So offset=1 is AAAA, offset=2 is ABAB, offset=3 is ABCABC. What that means is once you have copied the pattern a few times the slow way (one byte at a time), then you can step back your source pointer by any multiple of the offset that you want. Your goal is to step it back enough so that the separation between dest and source is bigger than your copy quantum size. Though I should note that there are several faster ways to handle this issue (the key points are these : 1. you're already eating a branch to identify the overlap case, you may as well have custom code for it, and 2. the single repeating char situation (AAAA) is by far more likely than any other).
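A hedged sketch of that idea (my own illustration, not cbloom's SPU code or LZ4's) : special-case the single repeating char, lay the pattern down byte-wise, then step the source back by whole periods until the separation allows chunk copies. Unlike LZ4's inner loop, this version doesn't overshoot past the end :

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Copy 'length' bytes of match at distance 'offset' ending at dst.
   Safe when offset < length (overlapping, i.e. a repeating pattern). */
static void overlap_copy(uint8_t *dst, size_t offset, size_t length)
{
    const uint8_t *src = dst - offset;
    if (offset == 1) {                 /* AAAA... : by far the most common case */
        memset(dst, *src, length);
        return;
    }
    if (offset < 8) {
        /* replicate the pattern byte-wise; a forward byte copy is
           overlap-correct because each byte is written before it's re-read */
        size_t n = length < 8 ? length : 8;
        for (size_t i = 0; i < n; i++) dst[i] = src[i];
        dst += n; length -= n;
        if (length == 0) return;
        /* step the source back by whole periods until dst-src >= 8,
           so 8-byte chunks below never read unwritten bytes */
        size_t dist = offset;
        while (dist < 8) dist += offset;
        src = dst - dist;
    }
    while (length >= 8) { memcpy(dst, src, 8); dst += 8; src += 8; length -= 8; }
    while (length--) *dst++ = *src++;
}
```

Stepping back by a multiple of the offset is legal because everything behind dst is already periodic with period "offset", so the widened source still reads the same bytes.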

ADDENDUM : I just found the LZ4 guy's blog (Yann Collet, who also did the fast "LZP2"), there's some good stuff on there. One I like is his compressor ranking. He does the right thing (I wrote about here) which is to measure the total time to encode,transmit,decode, over a limited channel. Then you look at various channel speeds and you can see in what domain a compressor might be best. But he does it with nice graphs which is totally the win.


05-19-11 - Nathan Myhrvold

I was reading about the new cookbook "Modernist Cuisine" (which sounds pretty interesting, but DON'T BUY IT) and I was like hhrrmm why does this name sound familiar. And then I remembered ...

Myhrvold back in the day was one of the "gunslingers" at the top of Microsoft who was responsible for their highly immoral and sometimes illegal practices, which consisted of vaporware announcements, exclusivity deals, and general bullying to sabotage any attempts at free market competition in computers. (see here for example). It's hard for me to get back in that mindset now, Microsoft seems so inept these days, but let's not forget it was built on threats, lies, bullying, and stealing (I assume this was mostly Bill's doing).

Nowadays, Nathan Myhrvold is basically enemy #1 if you believe that most patents are evil.

His "Intellectual Ventures" is basically buying up every patent they think they can make money on, and then extracting license fees. He talks a lot of shit about funding invention, but that is an irrelevant facade to hide what they're really doing, which is buying up patents and then forcing people to license under threat of suit.

Even if you support patents you must see that innovation is being stifled because tech startups have to live in fear that the completely obvious algorithm they implemented was patented by someone, and was then bought by some fund that has a team of lawyers hunting for infringers.

For more info see : 1 , 2 , 3

The Coalition for Patent Fairness has been doing some decent work at making small steps towards reform. The CPF is big business and certainly not revolutionary anti-patentites, they just want it to be a bit harder for someone to patent something absurd like "one click ordering" and then to extract huge damages for it. And of course Myhrvold is against them.


05-17-11 - Dialing Out Understeer

Almost all modern cars are tuned for understeer from the factory. Partly because of lawyers, but partly because that's what consumers want. Certain Porsches (ever since the 964) and BMW's (ever since the E46 M3) have been badly tweaked for understeer.

You can mostly dial it back out. I will describe some ways, in order from most preferred to least. Obviously the exact details depend on your model, blah blah blah. I'm just learning all this stuff, I'm no expert, so this is sort of a log of my learnings so far.

In general when trying to get more oversteer you have to be aware of a few issues. Basically you're trying to increase grip in the front and decrease grip in the rear ; you don't want to go so far with decreasing grip in the rear that you decrease overall grip and get much slower. You also don't want to create lift-off oversteer or high speed mid-corner snap oversteer or any of those nasty gremlins.

1. Driving Technique. Go into corners fast, brake hard to load up the front, turn in with a bit of trail brake. This helps a lot. Now move on to :

2. Alignment (Camber & Toe). Basically more camber in front and less camber in rear will increase oversteer, because in a corner the tires are twisted sideways, so by giving the front the ideal camber they will have good contact while cornering, and the rear tires will be on edge and slip. Obviously there's a limit where more camber in front is too much, so the idea is to set the front tires to their ideal camber for maximum grip, and then set the rear somewhere lower to give them less grip. If you want a fun alignment you generally want zero front toe, and just enough rear toe to keep the car stable under braking and in high speed turns (zero rear toe is a bit too lively for anything but autocross use). Note that severe camber on your driven wheels can hurt straight line acceleration.

3. Higher rear tire pressure. This is sort of a temp/hack fix and real racers frown on it, but it is cheap and pretty harmless so it's easy to try. Many people get confused about how tire pressures affect handling, because if you search around the internet you will find some people saying "lower your pressure to decrease grip" and others say "raise your pressure to decrease grip". The truth is *both* work. Basically there is an ideal pressure at which grip is maximum - changing pressure in either direction decreases grip. However, lower pressure also leads to tires moving on the rim, which is very bad, so if you want to tweak your grip it should always be done by raising pressures (as you lower tire pressure, you get more grip, more grip, then suddenly hit a point where the tires start rolling on the sidewall, which you don't want to get to). So set the front & rear to the ideal pressures, then raise the rear pressure a little to reduce grip in the rear. (for example E46 M3 is supposed to be good at 35F and 42R)

4. Sway bars or spring rates (stiffer in rear and softer in front). You get more oversteer from a car with a stiff rear. Basically stiffer = less grip, it means the whole end of the car will slide rather than the wheels acting independently and sticking (this is why Lotus and McLaren don't use sway bars at all). You don't want to overdo this as a way to get oversteer, because it makes the ride harder (a stiffer sway is just a form of stiffer spring), and it also just reduces overall grip (unless your sways were so severe that you were leaning and getting out of camber, in which case stiffer can mean more grip). But many OEM cars are shipped much stiffer in the front than the rear - they are dialed to have more grip in back and not enough in front, so you can undo that. Note that lots of "tuners" just mindlessly put bigger sways on front and back, when in fact the OEM front bar might be just fine, and putting on a stiffer front just makes things worse. BTW a lot of people make the mistake of just going stiff, thinking stiffer = better handling; in fact you want some weight transfer because weight transfer is what gives you control over the grip. The ideal car will either oversteer or understeer depending on what you do to it, and weight transfer lets you do that.

5. Narrower rear tires. Basically undoing the staggered setup a bit. This obviously decreases max grip in the rear. Many cars now are shipped on rear tires that are really too big. See below for more notes.

6. Wider front tires. Going wider in the front as part of a package of going narrower in the rear may make sense; for example a lot of cars now are sold on a 235/265 stagger, and they may do well on a 245/245 square setup. However, many people mistakenly think wider = faster, and will do something like change the 235/265 to 265/265. Wider is not always better, particularly in the front. It makes turning feel heavier and makes turning response not as sharp. It produces more tire scrubbing in low speed full lock turn. It takes the tires longer to heat up, so it can actually make you slower in autocross scenarios. It makes the tires heavier which makes you slower.

Beware copying racers' setups.

Racers run way more negative camber, like -3 to -5 degrees. That's partly because they are taking hard turns all the time, with few straights, but it's also because they are running true competition slicks, which are a very different tire compound and need/want the greater camber.

Racers run much wider tires. This is good for them for a few reasons. One is they never actually make very sharp turns (most race cars have horrible turning radii), they only ever turn the wheels slightly, so they don't need the nimbleness of narrow tires. The other is that they are going so fast that they can warm up the wide tires - under street or autocross type use very wide tires never come up to temp and thus can actually have less grip than narrow tires.

Also more generally, race cars are usually set up the way they are because of the weird specs and rules of their class, not because it's the best way for them to be set up.

A collection of quotes that I found informative :

"The way you take a corner in a 911 is brake in a straight line before
entering the corner and get your right foot on the throttle before
turning into the corner. Use light changes on the throttle to keep the
rear end stable and use weight transfer to control understeer/oversteer.
The front end will bite and turn in. At or before Apex, start rolling in
throttle. Most corners you can be at full throttle before corner exit.
There is not another car out there that can come off a corner as fast as
these cars, but a lot of cars that can enter faster. You do not want to
drive it just trying to push through a corner like a front engine
understeering car."

"I race a FWD 1999 Honda Civic in the SSC class in SCCA
races. The trick to get FWD cars to rotate is to pump up the rear tires.
The rules won't let us modify suspensions and, in the Civic, there's
very little camber available, so we run with cold air pressures of 33F
and 37R. In a Neon I used to race, I would run with over 40 psi in the

"When I got my 2004 M3 I played with air pressures and ended up setting
them at 35F and 42R. This was on a car with the factory alignment and
Michelin Pilot Sports. At the Summit Point track in WV, with these air
pressures, I got NO understeer in the M3. Mr. B"

"To use oversteer to rotate your car prior to the apex you turn in early
and trail brake hard. The heavy braking while turning shifts your weight
forward reducing traction in the rear which induces oversteer. You
better not plan on lifting to catch the car though or you will be in the
weeds. The right thing to do is to transition from trail braking to
fairly heavy throttle (depending on the corner) to shift the weight back
to the rear taking the car from oversteer to neutral at the apex.

Because you come into the corner early and fast and brake very late this
can be very fast but it's not for beginners. One false move and you are
in a world of hurt. 

If you really feel the need to change your car to help with this the
best way to help the weight transfer. If you have 2 way adjustable
shocks increase rebound in back allowing the rear to lift more under
braking or decrease compression in front allowing the it to dip more or

If you don't have adjustable shocks do something to increase turn in
like softening the front bar or stiffening the rear. The downside is
that this will also increase oversteer under acceleration at and after
the apex. "

"Same principles as the M3 game, Ron. For less understeer: more camber
in front, less camber in rear, higher pressure in rear, less pressure in
front. Anything to increase grip in front and reduce grip in the rear
will result in more neutral handling. Of course, you can go too far in
one direction and create an oversteering monster a la 930s, et al."

"My car is my daily driver, and I care how quiet and comfortable it is.
I think that all camber plates have monoball mountings, so there is NO
rubber bushing at the top of the strut."

"There is almost no increase in noise & rattle with the Tarett camber
plates. Removing the rubber makes a big difference in turn in and
getting the car to take a set."

"If I remember right when we built the Spec Boxster we were
after 3.5 degrees in the front. You will pick up lots of camber when you
lower the car. I think we were at 1.9 degrees without any shims. After
that the rule of thumb was .1 degrees for every mm of shim. I think we
went with 16mm of shim to get our 3.5 degrees."

"I'm an instructor with LCAs and only use 1.8 in front. It's enough to
really help the tire life. I'd be faster with more camber but I drive
the car a lot on regular roads and don't want to muck up the everyday

"Track driving is a different story with different settings. Don't fall
into the "Negative-camber-mania" accompanied by excessive lowering"

"Worst case is a lowered car on stock bars, which will have lots of body
roll and the camber will go positive (or less negative) with only a bit
of compression. Add that to the positive camber from body roll and your
outside front tire could go several degrees positive in a hard turn. "

"On my car--light weight with fairly stiff suspension and not overly
lowered--I couldn't even use -1.5° of static front camber at street
speeds. It was cornering so flat that it wasn't "using up" the camber it
had. Front end bite was actually much better with -.8° front camber. At
the track, I'd probably want that -1.5° or even more."

"collectively- we spend a lot of time tuning our cars for the "ultimate
set up" with high amounts of camber- stiffness in the sway bars etc.
this is a DRY set up and will in fact get you in trouble on a wet

"You'll find that you lose a lot of feel with the 26/24mm sways because
the car isn't rolling at all. That probably makes it faster, but IMHO
it's not as fun. And without body roll the car can seem a bit more
unpredictable /unstable."

"My understanding, and I'll confirm once I get lsd, is that you should
run without a rear sway if you don't have lsd, run with rear sway if you

" Take out that front sway and put the stock one back in. Then put the
rear sway on full stiff. I don't understand why so many people continue
to put stiffer front sways on the front of the Turbo's. Even on full
soft, it's got to be well stiffer than the stock unit. You just
increased your front spring rate (as springs and sways work together)
and added understeer. Especially H&R's which are pretty darn stiff.

"FWIW I ran the GT3 rear bar (full stiff) with an OEM M030 front on my
car when I was on the stock M030 dampers and H&R springs and the car
drove great compared to when the stock rear bar was on there. Helped the
car rotate, reduces the understeer some and was a cheap solution to
making the car handle better with what was on the car at the time.
Totally worth the time and money if you ask me"

" Steve and others, I've not had good results running less negative
camber up front. My car is AWD still, and I played around with a
different setting earlier this year with horrible "Put it on the
trailer" results... Setting the car up with as little as 1/4 less
negative camber up front than the rear made the car a handful to drive,
to the point that I was off course twice on highspeed corners... I am
having my best results running about .4-.5 degrees MORE negative camber
up front than in the rear, and tend to float around at -3.0 to -2.8
upfront and -2.5 to -2.2 in the rear. This is on my 18X9 and 18X12 CCWs
with NT01s on. I also run the front swaybar in the softest setting and
the rear in the middle of three settings... If I want more rear
rotation, I tend to go stiffer to the inside hole. 

I've also raised my rear ride height by about 3/8th inch and that has
helped with high speed braking a little bit... I think that additional
height may have put the rear wing in the air a little more to help with
some downforce out back... Can't wait to get the GT2 nose on the car to
see how it balances things out... " 

"My observation is that the rear sway setting affects your ride quality
quite a bit, more than the front. And the front setting affects steering


05-15-11 - SSD's

There's been some recent rumor-mongering about high failure rates of SSD's. I have no idea if it's true or not, but I will offer a few tips :

1. Never ever defrag an SSD. Your OS may have an automatic defrag, make sure it is off.

2. Make sure you have a new enough driver that supports TRIM. Make sure you aren't running your SSD in RAID or something like that which breaks TRIM support for some drivers/chipsets.

3. If you use an Intel SSD you can get the Intel SSD Toolbox ; if you have your computer set up correctly, it should not do anything, eg. if you run the "optimize" it should just finish immediately, but it will tell you if you've borked things up.

4. If you have one of the old SSD's with bad firmware or drivers (not Intel), then you may need to do more. Many of them didn't balance writes properly. Probably your best move is just to buy a new Intel and move your data (never try to save money on disks), but in the mean time, most of the manufacturers offer some kind of "optimize" tool which will go move all the sectors around (e.g. SuperTalent's Performance Refresh Tool).

5. Unnecessary writes *might* be killers for an SSD. One thing you can do is to check out the SMART info on your drive (with CrystalDiskInfo for example; the Intel SSD Toolbox shows it too), which will tell you the total amount of writes in its lifetime. So far my 128 G drive has seen 600 G of writes. If you see something like 10 TB of writes, that should be a red flag that data is getting rewritten over and over for no good reason, thrashing your drive. So then you might proceed to take some paranoid steps :

Disable virtual memory. Disable superfetch , indexing service, etc. Put firefox's SQL db files on a ram disk. Run "filemon" and watch for any file writes and see who's doing it and stop them. Now it's certainly true that *in theory* if the SSD's wear levelling is working correctly, then you should never have to worry about write endurance with a modern SSD - it's simply not possible to write enough data to overload it, even if you have a runaway app that just sits and writes data all the time, the SSD should not fail for a very long time ( see here for example ). But why stress the wear leveller when you don't need to? It's sort of like crashing your car into a wall because airbags are very good.
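To put "write endurance" in perspective, the arithmetic is easy. A back-of-envelope sketch (the flash endurance and write amplification numbers here are hypothetical, typical-of-MLC magnitudes, not the spec of any particular drive) :

```python
# Hypothetical endurance numbers -- typical magnitudes, not any drive's spec:
capacity_gb = 128          # drive capacity
pe_cycles = 3000           # rated program/erase cycles per flash cell
write_amplification = 2.0  # NAND writes per host write (controller overhead)

# with perfect wear leveling, total host writes before wear-out:
endurance_gb = capacity_gb * pe_cycles / write_amplification
print(endurance_gb / 1024, "TB")           # 187.5 TB

# the 600 GB of lifetime writes mentioned above, as a fraction of that:
print(f"{600 / endurance_gb:.2%} used")    # 0.31%
```

Even with pessimistic numbers you'd have to write hundreds of gigabytes a day for years to wear the drive out - which is exactly why the paranoia above is about not stressing the wear leveller, not about literally running out of writes.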

I'm really not a fan of write caching, because I've had too many incidents of crashes causing partial flushes of the cache to corrupt the file system. That may be less of an issue now that crashes are quite rare.

What would really be cool is a properly atomic file system, and then you could cache writes in atoms.

Normally when you talk about an atomic file system you mean the micro-operation is atomic, eg. a file delete or file rename or whatever will either commit correctly or not change anything. But it would also be nice to have atomicity at a higher level.

For example when you run a program installer, it should group all its operations as one "changelist" and then the whole changelist is either committed to the filesystem, or if there is an error somewhere along the line the whole thing fails to commit.

However, not everything should be changelisted. For example when you tell your code editor to "save all" it should save one by one, so that if it crashes during save you get as much as possible. Maybe the ideal thing would be to get a GUI option where you could see "this changelist failed to commit properly, do you want to abort it or commit the partial set?".
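Here's a sketch of what the changelist API could look like in user space (a hypothetical interface; os.replace makes each individual move atomic, but whole-group atomicity is exactly the part only the filesystem itself could really provide) :

```python
import os
import shutil
import tempfile

class Changelist:
    """A group of file writes that commit together or not at all (sketch).

    Writes are staged to a temp directory; commit() moves them into place.
    Each os.replace is atomic, but the group as a whole is not -- true
    multi-file atomicity is what the filesystem itself would have to offer.
    """
    def __init__(self):
        self.staging = tempfile.mkdtemp()
        self.ops = []  # list of (staged_path, final_path)

    def write(self, path, data):
        # a real version would need unique staging names per target path
        staged = os.path.join(self.staging, os.path.basename(path))
        with open(staged, "wb") as f:
            f.write(data)
        self.ops.append((staged, path))

    def commit(self):
        for staged, final in self.ops:
            os.replace(staged, final)  # atomic per file on POSIX and NTFS
        shutil.rmtree(self.staging)

    def abort(self):
        # nothing was ever written to the final paths; just drop the staging
        shutil.rmtree(self.staging)
```

An installer would do all its write() calls, then a single commit(); any failure along the way is an abort() that leaves the final files untouched.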


05-14-11 - Pots 3

Potting becomes a bit less fun as I start to take it more seriously. It's really nice to do something where you just feel like a noob and everything is learning and you have no expectations. If you fuck up, no big deal, you learned something, and you didn't expect to succeed anyway.

Doing something creative and being a beginner makes me aware of what a dick I've been to other people in similar situations. When people say "hey look what I made, do you like it?" , my usual response is "erm, yeah it's okay". I should be like "yeah, it's wonderful!". I'm always careful about showing too much approval because I think of my approval as a way of directing behavior, but that's just being a dick.

It's getting annoying to go into the studio, so I feel like I have to either get a wheel at home or quit.

Some pots, in roughly chronological order over the last two months or so. Notes under each image.

Oribe on the left, with grooves cut in to try to accentuate the glaze variation. Ox blood on the right, makes a really nice red. The foot of the little red bowl is unpleasantly rough, I need to burnish the feet on the rough stoneware pots.

On the left is yellow salt base over dakota red clay, then I turned it upside down and poured temmoku to make the drippy look; I used a wire rack for that which was not a good method, the glaze bunches up on the rack. On the right is a bowl that I threw way too thick so then I cut away lots of facets; glazed temmoku.

On the left is cobalt stain under clear glaze; the cobalt on white clay body is a great blue; my drawing is horrible but I'd like to try that color again in a non-hand-drawn way. On the right is a crappy cup that I tried the "rub ink into crackle" method; it's okay but it's a pain in the ass.

Some experiments with leaving portions of the piece unglazed. I sanded the bowl a bit, but it's still quite rough, a smooth burnished outside would be better. I was hoping the inside of the bowl would pop with color more, maybe I'll try this idea again.

Experiment with making glaze run. Trying to throw some classical vase shapes. Base dip in yellow salt (white clay body). Then I dipped the rim 4 times in black glaze, dip, wait a bit for it to dry, dip again. Before firing it was a clean line on the rim, the idea is to get it to run in the firing. Pretty successful I think; I really like the unexpected organic things that happen in the firing to lock fluid flow patterns into color, so I'm trying to find more ways to get that. It's crazy how much the pot shrinks between throwing and finished - it shrinks in bisque, then shrinks again in glaze firing ; I thought this pot was a nice big size when I threw it, but it came out awkwardly small.

This is some crap that didn't come out great. I do like the symmetrical shape on the right, might try that again, but taller, and better glaze.

Trying a band of unglazed clay; first iron oxide stain, then wax, then glazes.

I tried a funny technique on this one to try to get some irregular patterns; I dipped the pot in slip after it was bisqued, which you normally wouldn't do, because it doesn't stick well. Glazed in yellow salt.

This one I painted some wax around in a wavy band before glazing, then sand-papered off most of the wax, leaving a thin irregular bit of wax. Then glazed. At that time it looked like it was all covered, the spots only revealed themselves in the kiln. Shino glaze with the pour-on method to create irregularities in thickness.


05-13-11 - Steady State Fallacy

One of the big paradigm shifts in data compression was around the mid 90's, people stopped talking about the steady state.

All the early papers had proofs of "asymptotic optimality" , that given infinite data streams from a static finite model, the compressor would converge to the entropy. This was considered an important thing to prove and it was done for lots of algorithms (LZ7x,PPM,CONTEXT, etc).

But in the real world, that asymptote never arrives. Most data is simply too small, and even huge data sets usually don't have static generators, rather the generating model changes over the course of the data, or perhaps it is a static model but it has internal hidden switching states that are very hard for the compressor to model so in practice you get a better result by treating it as a non-static model. We now know that the performance during the "learning phase" is all that matters, since all data is always in transition (even if the majority of the model becomes stable, the leaves are always sparse - I wrote about this before, generally you want the model to keep growing so that the leaves are always right on the boundary of being too sparse to make a reliable model from).

So modern papers rarely bother to prove asymptotic optimality.

Today I realized that I make the same mistake in decision making in life.

In general when something does not go well, my solution is to just not go back. Say some friend is shitty to me, I just stop seeing them. Some restaurant offers me a horrible table even though I see better ones sitting empty, I just won't go to that restaurant again. Some car mechanic way over-charges me, I just won't go to that mechanic again. My landlord is a nutter, I'll just move out. The idea of this behavior is that by cutting out the bad agent eventually you get into a state with only good agents in your life.

But in reality that steady state never arrives. Unless you lock yourself in a cave, you are always getting into new situations, new agents are injected into your life, and the good agents drop out, so you are always in the transition phase.

Or you know, during unpleasant phases, you think "ok I'll just work a lot right now and life will suck but then it will be better after" or "ok I'm injured it sucks I'll just do a lot of rehab and it will be better after" or "it's gray winter and life will suck but it will be better after". But the after never actually comes. There's always some new reason why now is the "hard time".

05-13-11 - Avoiding Thread Switches

A very common threading model is to have a thread for each type of task. eg. maybe you have a Physics Thread, Ray Cast thread, AI decision thread, Render Thread, an IO thread, Prefetcher thread, etc. Each one services requests to do a specific type of task. This is good for instruction cache (if the threads get big batches of things to work on).

While this is conceptually simple (and can be easier to code if you use TLS, but that is an illusion, it's not actually simpler than fully reentrant code in the long term), if the tasks have dependencies on each other, it can create very complex flow with lots of thread switches. eg. thread A does something, thread B waits on that task, when it finishes thread B wakes up and does something, then thread A and C can go, etc. Lots of switching.

"Worklets" or mini work items which have dependencies and a work function pointer can make this a lot better. Basically rather than thread-switching away to do the work that depended on you, you do it immediately on your thread.

I started thinking about this situation :

A very simple IO task goes something like this :

Prefetcher thread :

  issue open file A

IO thread :

  execute open file A

Prefetcher thread :

  get size of file A
  malloc buffer of size
  issue read file A into buffer
  issue close file A

IO thread :

  do read on file A
  do close file A

Prefetcher thread :

  register file A to prefetched list

lots of thread switching back and forth as they finish tasks that the next one is waiting on.

The obvious/hacky solution is to create larger IO thread work items, eg. instead of just having "open" and "read" you could make a single operation that does "open, malloc, read, close" to avoid so much thread switching.

But that's really just a band-aid for a general problem. And if you keep doing that you wind up turning all your different systems into "IO thread work items". (eg. you wind up creating a single work item that's "open, read, decompress, parse animation tree, instantiate character"). Yay, you've reduced the thread switching by ruining task granularity.

The real solution is to be able to run any type of item on the thread and to immediately execute them. Instead of putting your thread to sleep and waking up another one that can now do work, you just grab his work and do it. So you might have something like :

Prefetcher thread :

  queue work items to prefetch file A
  work items depend on IO so I can't do anything and go to sleep

IO thread :

  execute open file A

  [check for pending prefetcher work items]
  do work item :

  get size of file A
  malloc buffer of size
  issue read file A into buffer
  issue close file A

  do IO thread work :

  do read on file A
  do close file A

  [check for pending prefetcher work items]
  do work item :

  register file A to prefetched list

so we stay on the IO thread and just pop off prefetcher work items that depended on us and were waiting for us to be able to run.
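The control flow can be sketched like this (single-threaded Python just to show the mechanism; the names are made up, this isn't any particular library's API) :

```python
# Minimal sketch of "worklets": work items with a dependency count and a
# function pointer.  When an item finishes it decrements its dependents'
# counts and runs any that become ready *immediately*, on the same thread,
# instead of waking the thread that queued them.

done_order = []

class Worklet:
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn
        self.pending = 0        # number of unfinished prerequisites
        self.dependents = []

    def depends_on(self, other):
        self.pending += 1
        other.dependents.append(self)

    def run(self):
        self.fn()
        done_order.append(self.name)
        for w in self.dependents:
            w.pending -= 1
            if w.pending == 0:
                w.run()         # no thread switch: run it right here

# the IO example above, expressed as worklets:
open_a   = Worklet("open",     lambda: None)
read_a   = Worklet("read",     lambda: None)
register = Worklet("register", lambda: None)
read_a.depends_on(open_a)
register.depends_on(read_a)

open_a.run()       # the "IO thread" finishes the open; the rest chains
print(done_order)  # ['open', 'read', 'register']
```

In the real multi-threaded version the decrement has to be atomic and run() goes to whatever queue or inline execution policy you choose, but the shape is the same.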

More generally if you want to be super optimal there are complicated issues to consider :

i-cache thrashing vs. d-cache thrashing :

If we imagine the simple conceptual model that we have a data packet (or packets) and we want to do various types of work on it, you could prefer to follow one data packet through its chain of work, doing different types of work (thrashing i-cache) but working on the same data item, or you could try to do lots of the same type of work (good for i-cache) on lots of different data items.

Certainly in some cases (SPU and GPU) it is much better to keep i-cache coherent, do lots of the same type of work. But this brings up another issue :

Throughput vs. latency :

You can generally optimize for throughput (getting lots of items through with a minimum average time), or latency (minimizing the time for any one item to get from "issued" to "done"). To minimize latency you would prefer the "data coherent" model - that is, for a given data item, do all the tasks on it. For maximum throughput you generally prefer "task coherent" - that is, do all the data items for each type of task, then move on to the next task. This can however create huge latency before a single item gets out.


Let me say this in another way.

Say thread A is doing some task and when it finishes it will fire some Event (in Windows parlance). You want to do something when that Event fires.

One way to do this is to put your thread to sleep waiting on that Event. Then when the event fires, the kernel will check a list of threads waiting on that event and run them.

But sometimes what you would rather do is to enqueue a function pointer onto that Event. Then you'd like the Kernel to check for any functions to run when the Event is fired and run them immediately on the context of the firing thread.

I don't know of a way to do this in general on normal OS's.

Almost every OS, however, recognizes the value of this type of model, and provides it for the special case of IO, with some kind of IO completion callback mechanism. (for example, Windows has APC's, but you cannot control when an APC will be run, except for the special case of running on IO completion; QueueUserAPC will cause them to just be run as soon as possible).

However, I've always found that writing IO code using IO completion callbacks is a huge pain in the ass, and is very unpopular for that reason.
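In user space you can at least approximate the enqueue-a-function-on-an-Event model. A sketch (hypothetical class, not an OS facility - the point is that the callback really does run on whichever thread calls fire) :

```python
import threading

class CallbackEvent:
    """An event you can either wait on (sleep) or attach a callback to.
    Callbacks run in the context of the *firing* thread -- the model
    described above, sketched in user space."""
    def __init__(self):
        self._event = threading.Event()
        self._callbacks = []
        self._lock = threading.Lock()

    def on_fire(self, fn):
        with self._lock:
            if self._event.is_set():
                fn()                 # already fired: run immediately
            else:
                self._callbacks.append(fn)

    def fire(self):
        with self._lock:
            self._event.set()
            cbs, self._callbacks = self._callbacks, []
        for fn in cbs:
            fn()                     # runs on the firing thread

    def wait(self):
        self._event.wait()

# demo: record which thread the callback actually ran on
fired_on = []
ev = CallbackEvent()
ev.on_fire(lambda: fired_on.append(threading.current_thread().name))
t = threading.Thread(target=ev.fire, name="worker")
t.start()
t.join()
print(fired_on)   # ['worker'] -- the callback ran on the firing thread
```

Of course this only works for events you fire yourself; the thing you can't get is the kernel doing this for its own events, which is the feature the IO-completion special case hints at.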


05-08-11 - Torque vs Horsepower

This sometimes confuses me, and certainly confuses a lot of other people, so let's go through it a bit.

I'm also motivated by this page : Torque and Horsepower - A Primer in which Bruce says some things that are slightly imprecise in a scientific sense but are in fact correct. Then this A-hole Thomas Barber responds with a dickish pedantic correction which adds nothing to our understanding.

We're going to talk about car engines, the goal is to develop sort of an intuition of what the numbers mean. If you look on Wikipedia or whatever there will be some frequently copy-pasted story about James Watt and horses pulling things and it's all totally irrelevant. We're not using our car engine to power a generator or grind corn or whatever. We want acceleration.

The horizontal acceleration of the car is proportional to the angular acceleration of the wheels (by the radius of the wheels). The angular acceleration of the wheels is proportional to the angular acceleration of the flywheel, modulo the gear ratio in the transmission. The angular acceleration of the flywheel is proportional to the torque of the engine, modulo moment of inertia.

For a fixed gear ratio :

torque (at the engine) ~= vehicle acceleration

(where ~= means proportional)

So if we all had no transmission, then all we would care about is torque and horsepower could go stuff itself.

But we do have transmissions, so how does that come into play?

To maximize vehicle acceleration you want to maximize torque at the wheels, which means you want to maximize

vehicle acceleration ~= torque (at the engine) * gear ratio

where gear ratio is higher in lower gears, that is, gear ratio is the number of times the engine turns for one turn of the wheels :

gear ratio = (engine rpm) / (wheel rpm)

which means we can write :

vehicle acceleration ~= torque (at the engine) * (engine rpm) / (wheel rpm)

thus at any given vehicle speed (eg. wheel rpm held constant), you maximize acceleration by maximizing [ torque (at the engine) * (engine rpm) ] . But this is just "horsepower" (or more generally we should just say "power"). That is :

horsepower ~= torque (at the engine) * (engine rpm)

vehicle acceleration ~= horsepower / (wheel rpm)

Note that we don't have to say that the power is measured at the engine, because due to conservation of energy the power production must be the same no matter how you measure it (unlike torque, which is different at the crank and at the wheels). Power is of course the energy production per unit time, or if you like it's the rate that work can be done. Work is force times distance, so Power is just ~= Force * velocity. So if you like :

horsepower ~= torque (at the engine) * (engine rpm)

horsepower ~= torque (at the wheels) * (wheel rpm)

horsepower ~= vehicle acceleration * vehicle speed

(note this is only true assuming no dissipative forces; in the real world the power at the engine is greater than the power at the wheels, and that is greater than the power measured from motion)
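To make that concrete in customary units: hp = torque (lb-ft) * rpm / 5252, where 5252 = 33000/(2*pi) falls out of the definition of a horsepower. As a sketch, if the ~300 lb-ft 911 engines discussed below really held that torque all the way to redline (they don't - real torque curves fall off - so these are illustrative numbers, not factory ratings) :

```python
import math

# hp = torque(lb-ft) * rpm / 5252 ; the 5252 comes from 33000 / (2*pi)
def hp(torque_lbft, rpm):
    return torque_lbft * rpm / (33000.0 / (2.0 * math.pi))

# hypothetical flat 300 lb-ft held to three different redlines:
print(round(hp(300, 7200)), round(hp(300, 8400)), round(hp(300, 9400)))
# -> 411 480 537 : same torque, but more revs = more power
```
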

Now, let's go back to this statement : "any given vehicle speed (eg. wheel rpm held constant), you maximize acceleration by maximizing horsepower". The only degree of freedom you have at constant speed is changing gear. So this just says you want to change gear to maximize horsepower. On most real world engines this means you should be in as low a gear as possible at all times. That is, when drag racing, shift at the red line.

The key thing that some people miss is you are trying to maximize *wheel torque*, and in almost every real world engine, the effect of the gear ratio is much more important than the effect of the engine's torque curve. That is, staying in as low a gear as possible (high ratio) is much more important than being at the engine's peak torque.

Let's consider some examples to build our intuition.

The modern lineup of 911's essentially all have the same torque. The Carrera, the GT3, and even the RSR all have around 300 lb-ft of torque. But they have different red lines, 7200, 8400 and 9400.

If we pretend for the moment that the masses are the same, then if you were all cruising along side by side in 2nd gear together and floored it - they would accelerate exactly the same.

The GT3 and RSR would only have an advantage when the Carrera is going to hit red line and has to shift to 3rd, and they can stay in 2nd - then their acceleration will be better by the factor of gear ratios (something like 1.34 X on most 2nd-3rd gears).

Note the *huge* difference in acceleration due to gearing. Even if the upshift got you slightly more torque by putting you in the power band of the engine, the 1.34 X from gearing is way too big to beat.

(I should note that in the real world, not only are the RSR/R/Cup (racing) versions of the GT3 lighter, but they also have a higher final drive ratio and some different gearing, so they are actually faster in all gears. A good mod to the GT3 is to get the Cup gears)

Another example :

Engine A has 200 torques (constant over the rpm range) and revs to 4000 rpm. Engine B has 100 torques and revs to 8000 rpm. They have the exact same peak horsepower (800 torques*krpm) at the top of their rev range. How do they compare ?

Well first of all, we could just gear down Engine B by 2X so that for every two turns it made the output shaft only made one turn. Then the two engines would be exactly identical. So in that sense we should see that horsepower is really the rating of the potential of the engine, whereas torque tells you how well the engine is optimized for the gearing. The higher torque car is essentially steeper geared at the engine.

How do they compare on the same transmission? In 1st gear Car A would pull away with twice the acceleration of Car B. It would continue up to 4000 rpm then have to change gears. Car B would keep running in 1st gear up to 8000 rpm; in that window its wheel torque is 100 * (1st gear ratio) against Car A's 200 * (2nd gear ratio), so Car B only out-accelerates Car A there if the 1st-to-2nd gear step is bigger than 2X (real gearbox steps are more like 1.4X).

So which is actually faster to 100 mph ?

You can't answer that without knowing about the transmission. If gear changes took zero time (and there was no problem with traction loss under high acceleration), the faster car would be the higher torque car. In fact if gear changes took zero time you would want an infinite number of gears so that you could keep the car at max rpm at all times, not because you are trying to stay in the "power band" but simply because max rpm means you can use higher gearing to the wheels.

I wrote a little simulator. Using the real transmission ratios from a Porsche 993 :

Transmission Gear Ratios: 3.154, 2.150, 1.560, 1.242, 1.024, 0.820 
Rear Differential Gear Ratio: 3.444 
Rear Tire Size: 255/40/17  (78.64 inch circumference)
Weight : 3000 pounds

and 1/3 of a second to shift, I get :

200 torque, 4000 rpm redline :

time_to_100 = 15.937804

100 torque, 8000 rpm redline :

time_to_100 = 17.853252

higher torque is faster. But what if we can tweak our transmission for our engine? In particular I will make only the final drive ratio free and optimize that with the gear ratios left the same :

200 torque, 4000 rpm redline :

c_differential_ratio = 3.631966
time_to_100 = 15.734542

100 torque, 8000 rpm redline :

c_differential_ratio = 7.263932
time_to_100 = 15.734542

exact same times, as they should be, since the power output is the same, with double the gear ratio.
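For reference, here's a sketch of that simulator (my reconstruction, not the original code: flat torque curve to redline, shift at redline with 1/3 s of coasting, no drag, no traction limit, no drivetrain losses) :

```python
import math

GEARS = [3.154, 2.150, 1.560, 1.242, 1.024, 0.820]  # Porsche 993 ratios
FINAL_DRIVE = 3.444
TIRE_CIRC_M = 78.64 * 0.0254   # 78.64 inch circumference, in meters
MASS_KG = 3000 * 0.4536        # 3000 pounds
LBFT_TO_NM = 1.3558
MPS_PER_MPH = 0.44704
SHIFT_TIME = 1.0 / 3.0

def time_to_100(torque_lbft, redline_rpm, final_drive=FINAL_DRIVE):
    torque_nm = torque_lbft * LBFT_TO_NM
    tire_radius = TIRE_CIRC_M / (2.0 * math.pi)
    v = t = 0.0
    gear = 0
    dt = 0.001
    while v < 100.0 * MPS_PER_MPH:
        wheel_rpm = v / TIRE_CIRC_M * 60.0
        engine_rpm = wheel_rpm * GEARS[gear] * final_drive
        if engine_rpm >= redline_rpm and gear + 1 < len(GEARS):
            gear += 1            # hit redline: grab the next gear
            t += SHIFT_TIME      # coast through the shift
            continue
        force = torque_nm * GEARS[gear] * final_drive / tire_radius
        v += force / MASS_KG * dt
        t += dt
    return t

a = time_to_100(200, 4000)
b = time_to_100(100, 8000)
c = time_to_100(100, 8000, final_drive=2 * FINAL_DRIVE)
print(a, b, c)   # a < b ; and c == a, doubling the final drive restores parity
```

The exact times depend on the integration details, but the relationships are the point: the high-torque engine wins on stock gearing, and doubling the final drive for the half-torque engine makes the runs identical.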

In the real world, almost every OEM transmission is geared too tall (too low a ratio) for an enthusiast driver. OEMs offer transmissions that minimize the number of shifts, offer over-drive gears for quiet and economy, etc. If you have a choice you almost always want to gear up. This is one reason why in the real world torque is king ; low-torque high-power engines could be good if you had sufficiently high gearing, but that high gearing just doesn't exist (*), so the alternative is to boost your torque.

(* = drag racers build custom gear boxes to optimize their gearing ; there are also various practical reasons why the gear ratios in cars are limited to the typical range they are in ; you can't have too many teeth, because you want the gears to be reasonably small in size but also have a minimum thickness of teeth for strength, high gear ratios tend to produce a lot of whine that people don't like, etc. etc.)

One practical issue with this these days is that more and more sports cars use "transaxles". Older cars usually had the transmission up front and then a rear differential. It was easy to change the final drive ratio in the rear differential so all the old American muscle cars talk about running a 4.33 or whatever different ratios. Nowadays lots of cars have the transmission and rear differential together in the back to balance weight (from the Porsche 944 design). While that is mostly a cool thing, it makes changing the final drive much more expensive and much harder to find gears for. But it is still one of the best mods you can do for any serious driver.

(another reason that car gear ratios suck so bad is the emphasis on 0-60 times means that you absolutely have to be able to reach 60 in 2nd gear. That means 1st and 2nd can't be too high ratio. Without that constraint you might actually want 2nd to max out at 50 mph or something. There are other stupid goals that muck up gearing, like trying to achieve a high top speed).

Let's look at a final interesting case. Drag racers often use a formula like :

speed at end of 1/4 mile :

MPH = 234 * (Horsepower / Pounds) ^ .3333

and it is amazingly accurate. And yet it doesn't contain anything about torque or gear ratios. (they of course also use much more complex calculators that take everything into account). How does this work ?

A properly set up drag car is essentially running at power peak the whole time. They start off the line at high revs, and then the transmission is custom geared to keep the engine in power band, so it's a reasonable approximation to assume constant power the entire time.

So if you have constant power, then :

  d/dt E = P

  d/dt ( 1/2 mv^2 ) = P

  integrate :

  1/2 mv^2 = P * t

  v^2 = 2 * (P/m) * t

  distance covered is the integral of v over time
  (not x = 1/2 a t^2 - that's only for constant acceleration,
   and at constant power the acceleration is not constant) :

  x = integral of sqrt( 2 * (P/m) * t ) dt

  x = (2/3) * sqrt( 2 * (P/m) ) * t^(3/2)

  eliminate t using t = v^2 / ( 2 * (P/m) ) :

  x = v^3 / ( 3 * (P/m) )

  v = ( 3 * x * (P/m) ) ^(1/3)

which is the drag racer's formula, up to the constant : speed is proportional to the cube root of (distance covered times power-to-weight). The ideal-physics constant works out to around 280 in the same units; the empirical 234 is lower because of drivetrain losses, aero drag, and the launch not actually being at constant power.

If you're looking at "what is the time to reach X" (X being some distance or some mph), the only thing that matters is power-to-weight *assuming* the transmission has been optimized for the engine.

I think there's more to say about this, but I'm bored of this topic.


Currently the two figures that we get to describe a car's engine are Horsepower (at peak rpm) and Torque (at peak rpm) (we also get 0-60 and top speed which are super useless).

I propose that the two figures that we'd really like are : Horsepower/weight (at peak rpm) and Horsepower/weight (at min during 10-100 run).

Let me explain why :

(Power/weight) is the only way that power ever actually shows up in the equations of dynamics (in a frictionless world). 220 HP in a 2000 pound car is better than 300 HP in a 3000 pound car. So just show me power to weight. Now, in the real world, the equations of dynamics are somewhat more complicated, so let's address that. One issue is air drag. For fighting air, power (ignoring mass) is needed, so for top speed you would prefer a car with more power than just power to weight. However, for braking and turning, weight is more important. So I propose that it roughly evens out and in the end just showing power to weight is fine.

Now, what about this "Horsepower/weight (at min during 10-100 run)" ? Well let's back up a second. The two numbers that we currently get (Power and Torque both at their peak) give us some rough idea of how broad the power band of an engine is, because Power is almost always at peak near the max rpm, and Torque is usually at peak somewhere around the middle, so a higher torque number (power being equal) indicates a broader power band. But good gearing (or bad gearing) can either hide or exagerate that problem. For example a tuned Honda VTEC might have a narrow power band that's all in the 7k - 10k RPM range, but with a "crossed" transmission you might be perfectly happy never dropping out of that rev range. Another car might have a wide power band, but really huge gear steps so that you do get a big power drop on shifts. So what I propose is you run the cars from 10mph-100 , shifting at red line, and measure the *min* horsepower the engine puts out. This will tell you what you really want to know, which is when doing normal upshifts do you drop out of the power band, and how bad is it? eg. what is the lowest power you will experience.

Of all the numbers that we actually get, quarter mile time is probably the best.


05-03-11 - Some Documentaries

Stuff not on Netflix :

"Synth Britannia" was amusing. Search Piratebay for "Synth Britannia" and you will find two different things - one is a collection of music videos, the other is the documentary. Get both. The first significant synth album was by Wendy (born Walter) Carlos. WTF !? And Annie Lennox actually looked great with long hair.

("Prog Britannia" is good too).

"Kochuu" - about Japanese architecture and it's relationship to Sweden. Really beautifully made movie, captures the slow contemplative mood of the architecture in the filming itself. Very well done.

"Blowing up Paradise". God damn French.

05-03-11 - Pots 2

Finally got some more stuff out of the kiln. Some of this is months old because the firing is very much not FIFO.

Small bowl (B-mix body, lung chuan glaze, blue celadon drips) :

I like how the lung chuan gets pale blue where it's thick; I also think the drip-on spot technique was pretty successful so I'll play with that more in the future. One bad spot where I had my finger when I dipped it, need to be more careful of that. Lung chuan looks gross on red clay bodies but looks nice on the white body.

Some more practice with bottle / bud vase shapes :

Getting better. I like the red splash, I think I did that with a spoon but I forget, that's an old one.

Cups to practice throwing off the hump :

Unfortunately these cups are all junk because I wasn't careful about the shape of the rim. To be able to drink from a cup comfortably, the rim has to be either perfectly straight up and tapered (ideally straight up on the outside and tapered on the inside), or slightly tipped outward. It's safer to always flare out a tiny bit, because when it dries or fires a straight up lip might shrink inward a bit.

Some notes to self on throwing off the hump : it's useful to at least approximately center the whole lump so it doesn't disturb your eyes as you throw ; once you center a handful, press in a dent below it to clearly define your bottom ; open and then compress the bottom well ; throw as usual but remember you can always move clay from the hump up to your piece if necessary ; to remove : find the bottom of your piece and cut in a dent where you want to cut off, either a good ways down if you want to trim, or directly under and you can avoid trimming ; use the wheel to wrap the wire and pull it through ; now dry your fingers, use first two fingers on each side in a diamond shape, make contact and then use a spin of the wheel to break it free and lift up.

Some bowls :

This is my first batch of bowls that are correctly made in the sense of being reasonably thin and round. I'm horrible at fixing a pot once it's mostly thrown, so the way I get perfectly round bowls is to start perfectly centered and to open perfectly on center. If you can do that, then you just pull up and you wind up with a round bowl. This is not what pro potters do, it takes too long, they tend to just center real approximately and open quickly, and then they can man-handle the pot or cut off a bit to get it into shape.

To open perfectly on center, don't try to plunge your fingers in right at the center, it's impossible, you will always accidentally wiggle to one side or another. Instead, open by pulling down towards yourself as you plunge, and that will be centered automatically.

The other thing of course is to start with the clay centered. I think I wrote this before but the main issue for me is using lots of water and releasing the pressure very gradually. Also I'm now using plastic bats because they have precise bolt holes, the wooden bats all wobble and it's impossible to get a perfect center that way. I've noticed that most pro potters don't use bolts like we do, they attach the bat with a disc of clay, and if you do that you don't get any wobble.

05-03-11 - Image Dehancement

So many of the real estate ads now feature images that look like this :

Vomit. Quit it. Turning the saturation and contrast up to 11 does not make things look better.

old rants