1/24/2011

01-24-11 - Blah Blah Blah

I'm reading "Creation" now and one tidbit is that Pythagorus believed that beans contained men's souls. A while ago I read a bunch of Ghandi, and one thing that struck me was Ghandi's bizarre emphasis on homespun clothing and spinning your own cotton (the other thing that struck me is that Ghandi was kind of a dick, but that's a rant for another day). Newton believed that God mediated gravity and that any attempt to explain it physically was not only foolish but sacrilege. It's strange to separate the man who can believe in some utter nonsense from the man who is quite reasonable and intelligent, and it just seems so odd to me that so many people can have both aspects in them.

Having ideas is fucking easy. People love to say shit like "I had the idea for Facebook two years before it came out, I could be rich!". Big fucking whoop, I'm sure tons of people had that idea (obviously Myspace had that idea, as did ConnectU, etc.). Ideas are fucking easy, I have a million ideas a day. The hard thing is identifying the ideas that are the really good ones, and then making the decision to go for them. Anyone who's smart and creative has ideas, but we're afraid to go for it, or we don't believe in it enough to take a risk, or whatever. I see lots of people who sit on the sidelines and make retarded comments like this ("I had that idea! I'm so smart!"), but there are also plenty of businesses that have made their fortune and falsely think it was because of their "great ideas" (in general the ability of businesses to be un-self-aware and not understand why they are successful is astonishing).

Working on software tools that enhance my own computing experience is incredibly satisfying. The things that I've done in the last few years that please me most are my NiftyPerforce replacement, my window manager, autoprintf, my google chart maker, my bitmap library, etc. - things that I use in my coding life to write more code. It's like being a metal worker and spending your time making tools for metalworking. It's incredibly satisfying because you use these things every day so you get to have the benefit yourself. Paul Phillips talked about the "exponential productivity boost of writing software for yourself" ; in theory there is an exponential benefit, because if writing software tools makes you X% more efficient at writing software tools, you can write more to help yourself, then even more, etc. I believe in practice that it is in fact not exponential, but I'm not sure why that is.

There is, however, an interesting non-linear jump in tool making and process enhancement. The issue is that what we do is somewhat art. You need inspiration, you need to be in the right mindset, you need to be able to play with your medium. When the craft is too difficult, when there's too much drudge work, you sap the vital juices from your mind, and you will never have a big epiphany. If you spend some time just working on your tools and process, you can make the actual act of creation easier and more pleasant for yourself, so that you come into it with a totally different mind set and you have different kinds of ideas. It might seem more efficient to just knock out the work the brute force way, but there is something magical that happens if you transform the work into something that feels natural and fun to play with and experiment with.

Things I want to avoid : dumb TV, booze, sugar, surfing the net, web forums, lying on the couch. But god damn, when you cut all those things out, life is hard. When night time rolls around and you're tired and bored, it's hard to get through the dark hours without those crutches. My real goal is to spend less of the computer time that isn't really good, productive time.

It's depressing trying to manage your investments in a down-market period. Despite the fact that we're in a mini-bubble false recovery at the moment, I believe that stocks will perform badly for the next 10 years or so; if you can beat inflation during this period you are doing well. I believe that there is very little skill in most "business" ; if you happen to start a company during an up-swing in the economy, you will do well and think you are a genius, if you do it during a down-swing you will do badly (but still probably think you are a genius). In particular, if you are lucky enough to be able to run something with big leverage during a general market up-swing, you basically just get to print free money. It's not that you were some brilliant real estate developer (or whatever), it's just that you put big money into the market when all boats were rising.

1/23/2011

01-23-11 - Cars.com extraction

I wrote a little program to extract records from Cars.com and make charts. For example :

Subaru WRX'es :


2002 : median=  9394.00 mean=  9462.42 sdev=  1932.74
2003 : median=  9995.00 mean= 10287.76 sdev=  1667.03
2004 : median= 11495.00 mean= 11295.16 sdev=  2122.11
2005 : median= 13995.00 mean= 13306.94 sdev=  1892.77
2006 : median= 15995.00 mean= 15854.00 sdev=  2065.38
2007 : median= 17990.00 mean= 17455.84 sdev=  2029.77
2008 : median= 20500.00 mean= 20591.19 sdev=  1935.96
2009 : median= 23595.00 mean= 23540.47 sdev=  1523.07
2010 : median= 25900.00 mean= 25750.80 sdev=  3505.73
2011 : median= 25989.00 mean= 24726.00 sdev=  1786.15

The WRX results are roughly what our intuition expects. There is a small step down associated with model year, but the steady price decrease for increasing mileage is much stronger. This is however somewhat surprising because there was a major model change in 2008, and the 2008 cars are known to be much worse than the 2009's.

Porsche 911 3.8 S Manual Coupes :


2006 : median= 56900.00 mean= 55829.26 sdev= 4804.42
2007 : median= 61991.00 mean= 61161.85 sdev= 5069.27
2008 : median= 67981.00 mean= 68558.00 sdev= 4532.27
2009 : median= 79880.00 mean= 79492.58 sdev= 6171.92

The Porsche is the exact opposite of the WRX. I'm surprised at how stable prices are vs mileage, but there is a very steady step down with model year. This is also strange since the 2006-2008 cars are identical.

Honda Civics , Manual , excluding the new 2.0 I4 :


2000 : median=  6990.00 mean=  6978.33 sdev=   977.55
2001 : median=  7999.00 mean=  7999.00 sdev=     0.00
2002 : median=  7995.00 mean=  7472.50 sdev=   738.93
2003 : median=  6995.00 mean=  6852.88 sdev=  1109.37
2004 : median=  8995.00 mean=  8615.50 sdev=  1186.42
2005 : median=  8657.00 mean=  8710.30 sdev=  1412.42
2006 : median= 12495.00 mean= 12604.31 sdev=  2074.50
2007 : median= 12990.00 mean= 13538.41 sdev=  2174.33
2008 : median= 14694.00 mean= 14918.31 sdev=  2282.32
2009 : median= 17975.00 mean= 17244.96 sdev=  2532.70

Hondas behave strangely on the market. There is very little depreciation with mileage for a given model year. There is a big price step at 2006 when the new model came out - people seem to like the new one much better - but other than that there is very little depreciation with age either.

1/19/2011

01-19-11 - Good practices

Some things that I always regret when I don't do them. These are as much reminders for myself not to get lazy as they are finger-wags at you all.

1. Save every log.

My programs generally log very verbosely. I log more to the file than to stdout. The key thing is that you shouldn't just overwrite your previous log. You should save every log of your runs *forever*. Disks are big, there is no reason to ever clean this up.

Your log file should contain the time & date of the run + the time & date of the build (this is mildly annoying to do in practice, just using __DATE__ somewhere in your code doesn't work because that module may not be freshly compiled). Ideally it would have the P4 sync state of the code as well (see comments). Ideally it would also log the modtime and/or checksum of every file that you take as input, so you can associate a run with the condition of the files it loads as well.

This is an okay way to start your logs :


// log the command line :
void lprintfCommandLine(int argc,char * argv[])
{
    for(int i=0;i < argc;i++)
    {
        lprintf("%s ",argv[i]);
    }
}

// log the command line + build time :
void lprintfLogHeader(const char * progName,int argc,char * argv[])
{
    __time64_t long_time;
    _time64( &long_time );
    
    lprintf("-----------------------------------------------------------------\n");
    // note: asctime has a \n in it already
    lprintf("Log opened : %s",asctime(_localtime64( &long_time )));
    lprintf("%s built %s, %s\n",progName,__DATE__,__TIME__);    
    lprintf("args: ");
    lprintfCommandLine(argc,argv);
    lprintf("\n");  
}
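
A minimal sketch of the input-file logging idea mentioned above (assuming the same lprintf; the helper name is made up) :

#include <sys/stat.h>
#include <time.h>

// log the size & modtime of an input file, so the run can be associated
// with the exact data it read :
void lprintfInputFileInfo(const char * fileName)
{
    struct stat st;
    if ( stat(fileName,&st) != 0 )
    {
        lprintf("input: %s (stat failed)\n",fileName);
        return;
    }
    // note: ctime has a \n in it already
    lprintf("input: %s , %d bytes , modtime %s",
        fileName,(int)st.st_size,ctime(&st.st_mtime));
}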

Something I've been doing for a while now is to automatically write my logs to "c:\logs\programname.log" , and if a previous one of those exists I append it onto programname.prev . That's okay but not perfect, for one thing the big prev file gets hard to search through; perhaps worse, it doesn't play well with running multiple instances of your program at once (their logs get interleaved and the prev is moved at a random spot).

My videotest does something that I like even better. It makes a new log file for each run and names it by the date/time of the run and the command line args. They all go in a dir "c:\logs\programname\" and then the names are like "-c0rpjt1.rmv-irparkjoy_444_720p_lagarith_50.avi-p-d0-Sun-Jan-16-15-42-27-2011.log" which makes it very easy for me to find particular runs and see what the args were.
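
Something like this builds that kind of per-run name (just a sketch, not the actual videotest code; the log dir and helper name are made up) :

#include <stdio.h>
#include <string.h>
#include <time.h>

// turn spaces & punctuation into '-' so the string can go in a file name :
static void Sanitize(char * s)
{
    for(;*s;s++)
        if ( strchr(" :\\/*?\"<>|,",*s) ) *s = '-';
}

// make a log name like "c:\logs\progname\-arg1-arg2-Sun-Jan-16-15-42-27-2011.log"
void MakeRunLogName(char * dest,int destSize,const char * progName,int argc,char * argv[])
{
    char args[256] = { 0 };
    for(int i=1;i < argc && strlen(args)+strlen(argv[i])+2 < sizeof(args);i++)
    {
        strcat(args,argv[i]);
        strcat(args,"-");
    }
    Sanitize(args);

    __time64_t long_time;
    _time64( &long_time );
    char stamp[64];
    strcpy(stamp,asctime(_localtime64( &long_time )));
    stamp[strlen(stamp)-1] = 0; // asctime ends with a \n
    Sanitize(stamp);

    _snprintf(dest,destSize,"c:\\logs\\%s\\%s%s.log",progName,args,stamp);
}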

2. Make tests reproducible.

Often I'll run some tests and record the result, and then later when I go back to it I'm confused by what was run exactly, where the data files were, etc. Doing this is really just about discipline. There are a few things that I believe help :

2.A. Always make batch files. When you run command lines, do not just type in the command line and run it to execute your test. Put the command line in a batch file and run that. Then check your batch file into perforce!

2.B. Take the results of the test and write a quick note about how they were run and what they were testing. eg. exactly what #define did you flip in the code to run the test. It's so easy to not take this step because you think "its obvious" what you did, but 12 months later it won't be obvious. Check this into perforce.

2.C. For more interesting tests, make a directory and copy the whole thing! Copy in the data files that you ran on, the batch files, the results, and the code! Also copy in any tools you used! eg. if you used imagemagick or ffmpeg or whatever as part of your process, just copy it in to the directory. Disk is cheap and huge! Save the whole thing somewhere so you have a record of what exactly you ran.

2.D. If you changed code to run the test - check it in! Even if it's just for the test run - check in the test code and then check it in back to normal. eg. if you flip a #define or change some constants - check that in to P4 with a description saying "for test such and such".

(ASIDE : the first company I ever worked at was CTC in Houston. At CTC when we finished a major project, we would make an archive of the entire project and everything we needed to reproduce it. That means the code, the compilers, any tools or libraries, and the whole OS. We could take an archive and restore it to a fresh machine and instantly have a working system that would build it. I just thought that made a lot of sense and obviously every developer did that, right? Nope. I have yet to see anyone else do it since. People seem to believe that they can just archive the code without recording everything that needed to be done to the tools and build environment and so on to make a working build system.)

3. Never leave magic numbers in code.

I often stumble back on some old code and find something like :


x = y + 7 * z;

umm.. WTF where did that 7 come from? Even if you only use that value in one spot in the code, give it a name. eg.

const double c_averageACtoDCratio = 7;

x = y + c_averageACtoDCratio * z;

Ah, okay. Which is related to :

4. When values come from training or tweaking, document exactly how they were generated.

Ideally you saved the training run as per #2, so you should be able to just add a comment like


// this value comes from training; see train_averageACtoDCratio.bat

You may think you've found the perfect tweak value, but some day you may come back to it and think "where did this come from? am I sure this right? does this still apply since I've changed a bunch of other things since then?". Also, what data did you train it on exactly? Does that data actually reflect what you're using it for now?

Ideally every value that comes from training can be regenerated at any time. Don't just do your training run and save the value and then break your training code. Assume that any tweak value will have to be regenerated in the future.

5. Some notes on default values.

In more complex programs, don't just put the default value of a parameter in its initializer, eg.


int maxNumMovecSearches = 32;

no no. The problem is that the initializer can be hidden away in a weird spot, and there may be multiple initializers, and you may want to reset values to their defaults, etc. Instead give it a proper name, like :

const int c_maxNumMovecSearches_default = 32;

int maxNumMovecSearches = c_maxNumMovecSearches_default;

Now you can also show defaults on the command line, such as :

lprintf("videotest usage:\n");
lprintf("-m# : max num movec searches [%d]\n",c_maxNumMovecSearches_default);

For args that are unclear what scale they're on, you should also show a reasonable parameter range, eg.

lprintf("-l# : lagrange lambda [%f] (0.1 - 1.0)\n",c_lagrangeLambda_default);

(though this example violates my own rule of baking constants into strings)

6. Make all options into proper enums with names.

You should never have a "case 2:" in your code or an "if ( mode == 1 )". Every time I short-cut and do that I wind up regretting it. For example I had some code doing :


SetupDCT(quantizer,6);

To select the 6th qtable. Of course that inevitably leads to bugs when I decide to reorder the qtables.

Give your enums the right name automatically using XX macros , eg. :


#define DCTQTableIndex_XX  \
    XX(eDctQ_Flat) YY(=0),\
    XX(eDctQ_Jpeg),\
    XX(eDctQ_JpegChroma),\
    XX(eDctQ_JpegFlattened),\
    XX(eDctQ_JpegSquashed),\
    XX(eDctQ_H264_HVS_1),\
    XX(eDctQ_H264_HVS_2),\
    XX(eDctQ_LerpedChroma),\
    XX(eDctQ_LerpedLuma),\
    XX(eDctQ_Count)

enum EDCTQTableIndex
{
    #define XX(x) x
    #define YY(y) y
    DCTQTableIndex_XX
    #undef XX
    #undef YY
};

const char * c_DCTQTableIndex_names[] =
{
    #define XX(x) #x
    #define YY(y)
    DCTQTableIndex_XX
    #undef XX
    #undef YY
};

And then make sure you echo the name when somebody selects an enum numerically so you can show that they are correct.

Do not keep a table of names and a table of enums that must be kept in sync manually. In general do not keep *anything* in code that must be kept in sync manually.

Show them as part of the command line help, eg :


        myprintf("imdiffs:\n");
        for(int f=0;f < eImDiff_Count;f++)
        {
            myprintf("%2d : %s\n",f,c_imDiffTypeNames[f]);
        }

For example if you tried to be a good boy and had something like :

    lprintf("dct qtable names:\n");
    lprintf("0: flat\n");
    lprintf("1: jpeg\n");
    ....

You have just "hard-coded" the values through the use of string constants. Don't bake in values.

One nice way is if command line arg "-m#" selects mode #, then "-m" with no arg should list the modes ("-m" followed by anything non-numeric should do the same).
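
A sketch of that convention, reusing the c_DCTQTableIndex_names table from above (the "-m" option and the helper are just made up for illustration) :

#include <stdlib.h>

// arg = the text after "-m" on the command line
void HandleQTableArg(const char * arg)
{
    if ( arg[0] >= '0' && arg[0] <= '9' )
    {
        int mode = atoi(arg);
        ASSERT( mode >= 0 && mode < eDctQ_Count );
        // echo the name so the user can see they picked the one they meant :
        lprintf("qtable %d : %s\n",mode,c_DCTQTableIndex_names[mode]);
        // ... use mode ...
    }
    else
    {
        // no number (or a non-numeric arg) : list the choices
        lprintf("dct qtable names:\n");
        for(int i=0;i < eDctQ_Count;i++)
            lprintf("%2d : %s\n",i,c_DCTQTableIndex_names[i]);
    }
}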

If somebody has to go read the docs to use your program, then your command line user interface (CLUI) has failed. Furthermore docs are another example of bad non-self-maintaining redundancy. The list of arg mappings in the docs can get out of sync with the code : never rely on the human programmer to keep things in sync, make the code be correct automagically.

7. Copy the exe.

It's so easy during work to think of the exe as a temp/derivative item which you are replacing all the time, but you will often get yourself into scenarios where it's hard to get back to some old state and you want to play with the exe from that old state. So just copy off the exe every so often. Any time you generate major test results is a good time for this - output your CSV test files or whatever and just take a copy of the exe used to make the tests.

A semi-related practice that I've taken up now is to copy off the exe out of the build target location any time I run a test, so that I can still build new versions, and if I do build new versions, the version I wanted to test isn't fouled. eg. I use something like :

run_vt_test.bat :

  call zc -o release\vt.exe r:\vt_test.exe
  r:\vt_test.exe %*

8. Automate your processing

If you're running some process that requires a few steps - like run this exe, take the number it outputs, copy it into this code, recompile, run that on this data file - automate it.

It might seem like it's faster just to do it yourself than it is to automate it. That may in fact be true, but automating it has lots of ancillary value that makes it worth doing.

For one thing, it documents the process. Rather than just having to write a document that describes "how to export a character for the game" (which you will inevitably not write or not keep up to date), instead you have your batch file or script or whatever that does the steps.

For another, it gives you a record of what's being done and a way to exactly repeat the process if there are bugs. Any time you have a human step in the process it adds an element of non-repeatability and uncertainty, is this character broken because there's a bug or just because someone didn't follow the right steps?

There are some very simple ghetto tricks to easy automation. One is to have your program fwrite some text to a batch file and then just run that batch.
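
For example, something like this (just a sketch; the file names and the tool being run are made up) :

// write the steps to a batch file, then run it :
const char * srcFile  = "r:\\data\\char.fbx";   // hypothetical
const char * workFile = "r:\\temp\\char_work.fbx";
const char * outFile  = "r:\\out\\char.dat";

FILE * fp = fopen("c:\\temp\\do_process.bat","wb");
fprintf(fp,"copy %s %s\r\n",srcFile,workFile);
fprintf(fp,"some_export_tool.exe %s -o %s\r\n",workFile,outFile);
fclose(fp);
system("c:\\temp\\do_process.bat");
// keep the .bat around - it's also a record of exactly what was run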


More generally, I wish my computer kept a full journal of everything I ever did on it. Everything I type, the state of every file, everything run should be stored in a history which I can play back like a movie any time I want. I should just be able to go, "mmkay, restore to the condition of May 13, 2009 and play back from 3:15 to 3:30". Yeah, maybe that's a bit unrealistic, but it certainly is possible in certain limited cases (eg. apps that don't access the net or take input from any weird drivers) which are the cases I mainly care about.

1/18/2011

01-18-11 - Hadamard

The normal way the Hadamard Transform (Wikipedia) is written is not in frequency order like the DCT. For example the 8-item Hadamard as written on Wikipedia is :

    +1 +1 +1 +1 +1 +1 +1 +1
    +1 -1 +1 -1 +1 -1 +1 -1
    +1 +1 -1 -1 +1 +1 -1 -1
    +1 -1 -1 +1 +1 -1 -1 +1
    +1 +1 +1 +1 -1 -1 -1 -1
    +1 -1 +1 -1 -1 +1 -1 +1
    +1 +1 -1 -1 -1 -1 +1 +1
    +1 -1 -1 +1 -1 +1 +1 -1

The correct reordering is :

  { 0 7 3 4 1 6 2 5 }

that is, the i'th entry is the sequency (number of sign changes) of natural row i, which is also the position that row moves to in the frequency-ordered matrix.

When that's done what you get is :

    +1 +1 +1 +1 +1 +1 +1 +1
    +1 +1 +1 +1 -1 -1 -1 -1
    +1 +1 -1 -1 -1 -1 +1 +1
    +1 +1 -1 -1 +1 +1 -1 -1
    +1 -1 -1 +1 +1 -1 -1 +1
    +1 -1 -1 +1 -1 +1 +1 -1
    +1 -1 +1 -1 -1 +1 -1 +1
    +1 -1 +1 -1 +1 -1 +1 -1

which more closely matches the DCT basis functions.

You can of course do the Hadamard transform directly like an 8x8 matrix multiply, but the faster way is to use a "fast hadamard transform" which is exactly analogous to a "fast fourier transform" - that is, you decompose it into a log(N) tree of two-item butterflies; this gives you 8*3 adds instead of 8*8. The difference is the Hadamard doesn't involve any multiplies, so all you need are {a+b,a-b} butterflies.

{
ADDENDUM : to be more concrete, fast hadamard is this :

    vec[0-7] = eight entries
    evens = vec[0,2,4,6]
    odds  = vec[1,3,5,7]

    Butterfly :
        vec[0-3] = evens + odds
        vec[4-7] = evens - odds

    Hadamard8 :
        Butterfly three times
this produces "canonical order" not frequency order, though obviously using a different shuffle in the final butterfly fixes that easily.
}
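
In C, that addendum is something like this (a sketch : 8 points, unnormalized, canonical order) :

#include <string.h>

// one butterfly pass : out[0-3] = evens + odds , out[4-7] = evens - odds
static void Butterfly8(const int * in,int * out)
{
    for(int i=0;i<4;i++)
    {
        out[i]   = in[2*i] + in[2*i+1];
        out[i+4] = in[2*i] - in[2*i+1];
    }
}

// unnormalized 8-point fast Hadamard ; 24 adds, canonical (not frequency) order :
void FastHadamard8(int * vec)
{
    int tmp[8];
    Butterfly8(vec,tmp);
    Butterfly8(tmp,vec);
    Butterfly8(vec,tmp);
    memcpy(vec,tmp,sizeof(tmp));
}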

To do 2d , you obviously do the 1d Hadamard on rows and then on columns. The normalization factor for 1d is 1/sqrt(8) , so for 2d it's just 1/8 , or if you prefer the net normalization for forward + inverse is 1/64 and you can just apply it on either the forward or backward. The Hadamard is self-inverting (and swizzling rows doesn't change this).

The correctly ordered Hadamard acts on images very similarly to the DCT, though it compacts energy slightly less on most images, because the DCT is closer to the KLT of typical images.

In these examples I color code the 8x8 DCT or Hadamard entries. The (0,y) and (x,0) primary row and column are green. The (x,y) (x>0,y>0) AC entries are a gradient from blue to red, more red where vertical detail dominates and more blue where horizontal detail dominates. The brightness is the magnitude of the coefficient.

If you look at the two images, you should be able to see they are very similar, but Hadamard has more energy in the higher frequency AC bands.

original :

dct :

hadamard :

I also found this paper :

Designing Quantization Table for Hadamard Transform based on Human Visual System for Image Compression

which applies JPEG-style CSF design to make a quantization matrix for the Hadamard transform.

In straight C, the speed difference between Hadamard and DCT is not really super compelling. But Hadamard can be implemented very fast indeed with MMX or other SIMD instruction sets.

It seems that the idea of using the Hadamard as a rough approximation of the DCT for purposes of error or bit-rate estimation is a good one. It could be made even better by scaling down the high frequency AC components appropriately.


ADDENDUM : some more stats :

DCT :
root(L2) : 
+-------+-------+-------+-------+-------+-------+-------+-------+
|126.27 |  7.94 |  4.41 |  2.85 |  2.00 |  1.48 |  1.12 |  0.90 |
+-------+-------+-------+-------+-------+-------+-------+-------+
|  8.47 |  4.46 |  3.06 |  2.10 |  1.51 |  1.15 |  0.86 |  0.69 |
+-------+-------+-------+-------+-------+-------+-------+-------+
|  4.86 |  3.27 |  2.43 |  1.76 |  1.34 |  1.02 |  0.78 |  0.62 |
+-------+-------+-------+-------+-------+-------+-------+-------+
|  3.13 |  2.33 |  1.88 |  1.45 |  1.12 |  0.90 |  0.72 |  0.60 |
+-------+-------+-------+-------+-------+-------+-------+-------+
|  2.22 |  1.71 |  1.47 |  1.18 |  0.96 |  0.77 |  0.63 |  0.55 |
+-------+-------+-------+-------+-------+-------+-------+-------+
|  1.62 |  1.26 |  1.13 |  0.98 |  0.80 |  0.65 |  0.58 |  0.56 |
+-------+-------+-------+-------+-------+-------+-------+-------+
|  1.23 |  0.94 |  0.90 |  0.79 |  0.66 |  0.58 |  0.52 |  0.51 |
+-------+-------+-------+-------+-------+-------+-------+-------+
|  0.92 |  0.77 |  0.72 |  0.67 |  0.59 |  0.56 |  0.51 |  0.79 |
+-------+-------+-------+-------+-------+-------+-------+-------+

Hadamard :
root(L2) : 
+-------+-------+-------+-------+-------+-------+-------+-------+
|126.27 |  7.33 |  4.11 |  3.64 |  2.00 |  2.03 |  1.97 |  1.73 |
+-------+-------+-------+-------+-------+-------+-------+-------+
|  7.82 |  4.01 |  2.75 |  2.16 |  1.47 |  1.46 |  1.33 |  1.00 |
+-------+-------+-------+-------+-------+-------+-------+-------+
|  4.51 |  2.92 |  2.15 |  1.69 |  1.28 |  1.21 |  1.09 |  0.81 |
+-------+-------+-------+-------+-------+-------+-------+-------+
|  3.90 |  2.26 |  1.74 |  1.41 |  1.10 |  1.04 |  0.93 |  0.71 |
+-------+-------+-------+-------+-------+-------+-------+-------+
|  2.22 |  1.66 |  1.39 |  1.14 |  0.96 |  0.89 |  0.78 |  0.62 |
+-------+-------+-------+-------+-------+-------+-------+-------+
|  2.24 |  1.59 |  1.28 |  1.06 |  0.88 |  0.81 |  0.74 |  0.60 |
+-------+-------+-------+-------+-------+-------+-------+-------+
|  2.19 |  1.42 |  1.14 |  0.94 |  0.78 |  0.74 |  0.67 |  0.59 |
+-------+-------+-------+-------+-------+-------+-------+-------+
|  1.88 |  1.04 |  0.85 |  0.74 |  0.65 |  0.60 |  0.59 |  0.92 |
+-------+-------+-------+-------+-------+-------+-------+-------+

DCT :
FracZero : 
+-------+-------+-------+-------+-------+-------+-------+-------+
|  4.48 | 36.91 | 52.64 | 61.32 | 68.54 | 76.64 | 82.60 | 87.18 |
+-------+-------+-------+-------+-------+-------+-------+-------+
| 34.78 | 48.93 | 57.95 | 65.56 | 72.17 | 79.63 | 84.84 | 88.69 |
+-------+-------+-------+-------+-------+-------+-------+-------+
| 49.88 | 57.31 | 63.19 | 69.21 | 76.14 | 81.96 | 86.48 | 89.98 |
+-------+-------+-------+-------+-------+-------+-------+-------+
| 58.79 | 64.01 | 68.24 | 73.48 | 79.53 | 84.15 | 88.10 | 91.14 |
+-------+-------+-------+-------+-------+-------+-------+-------+
| 66.13 | 70.03 | 74.37 | 78.79 | 82.44 | 86.57 | 89.97 | 92.38 |
+-------+-------+-------+-------+-------+-------+-------+-------+
| 72.79 | 76.66 | 79.56 | 82.31 | 85.42 | 88.87 | 91.62 | 93.51 |
+-------+-------+-------+-------+-------+-------+-------+-------+
| 79.74 | 82.22 | 83.90 | 86.16 | 88.68 | 91.13 | 93.16 | 94.72 |
+-------+-------+-------+-------+-------+-------+-------+-------+
| 85.08 | 86.57 | 87.88 | 89.59 | 91.35 | 93.07 | 94.61 | 95.74 |
+-------+-------+-------+-------+-------+-------+-------+-------+

Hadamard :
FracZero : 
+-------+-------+-------+-------+-------+-------+-------+-------+
|  4.48 | 38.38 | 53.95 | 50.15 | 68.54 | 69.20 | 67.81 | 65.50 |
+-------+-------+-------+-------+-------+-------+-------+-------+
| 36.14 | 51.13 | 60.26 | 61.90 | 72.82 | 73.11 | 73.67 | 76.59 |
+-------+-------+-------+-------+-------+-------+-------+-------+
| 51.21 | 59.22 | 65.63 | 68.40 | 76.63 | 76.53 | 77.78 | 82.02 |
+-------+-------+-------+-------+-------+-------+-------+-------+
| 47.59 | 61.22 | 68.33 | 70.63 | 78.36 | 78.52 | 80.18 | 84.12 |
+-------+-------+-------+-------+-------+-------+-------+-------+
| 66.13 | 70.80 | 75.02 | 77.85 | 82.44 | 82.72 | 83.62 | 88.09 |
+-------+-------+-------+-------+-------+-------+-------+-------+
| 66.09 | 71.37 | 75.47 | 77.84 | 82.67 | 83.22 | 84.83 | 88.54 |
+-------+-------+-------+-------+-------+-------+-------+-------+
| 65.34 | 72.54 | 77.05 | 79.82 | 83.99 | 85.04 | 86.52 | 89.92 |
+-------+-------+-------+-------+-------+-------+-------+-------+
| 63.44 | 76.15 | 81.42 | 83.66 | 87.91 | 88.36 | 89.81 | 92.43 |
+-------+-------+-------+-------+-------+-------+-------+-------+

1/17/2011

01-17-11 - ImDiff Release

Imdiff Win32 executables are now available for download. You can get them from my exe index page or direct link to zip download .

If you wish to link to imdiff, please link to this blog post as I will post updates here.

If you wish to run the JPEG example batch file, you may need to get JPEG exes here and get PackJPG here . Install them somewhere and set a dos environment variable "jpegpath" to where that is. Or modify the batch files to point at the right place. If you don't know how to use dos batch files, please don't complain to me, instead see here for example.

Once you have your JPEG installed correctly, you can just run "jpegtests image.bmp" and it will make nice charts like you've seen here.

I am not connected to those JPEG distributions in any way. Imdiff is not a JPEG tester. The JPEG test is just provided as an example. You should learn from the batch files and do something similar for whatever image compressor you wish to test.

ADDENDUM : See the Summary Post of all imdiff related blog posts.

1/12/2011

01-12-11 - ImDiff Sample Run and JXR test

This is the output of a hands-off fully automatic run :


(on lena 512x512 RGB ) :


I was disturbed by how bad JPEG-XR was showing so I went and got the reference implementation from the ISO/ITU standardization committee and built it. It's here .

They provide VC2010 projects, which is annoying, but it built relatively easily in 2005.

Unfortunately, they just give you a bunch of options and not much guide on how to get the best quality for a given bit rate. Dear encoder writers : you should always provide a mode that gives "best rmse" or "best visual quality" for a given bit rate - possibly by optimizing your options. They also only load TIF and PNM ; dear encoder writers : you should prefer BMP, TGA and PNG. TIF is an abortion of an over-complex format (case in point : JXR actually writes invalid TIFs from its decoder (the tags are not sorted correctly)).

There are two ways to control bit-rate, either -F to throw away bit levels or -q to quantize. I tried both and found no difference in quality (except that -F mucks you up at high bit rate). Ideally the encoder would choose the optimal mix of -F and -q for R/D. I used their -d option to set UV quant from Y quant.

There are three colorspace options - y420,y422,y444. I tried them all. With no further ado :

Conclusions :

This JXR encoder is very slightly better than the MS one I was using previously, but doesn't differ significantly. It appears the one I was using previously was in YUV444 color space. Obviously Y444 gives you better RMSE behavior at high bitrate, but hurts perceptual scores.

Clearly the JXR encoders need some work. The good RMSE performance tells us it is not well perceptually optimized. However, even if it was perceptually optimized it is unlikely it would be competitive with the good coders. For example, Kakadu already matches it for RMSE, but kills it on all other metrics.


BTW you may be asking "cbloom, why is it that plain old JPEG (jpg_h) tests so well for you, when other people have said that it's terrible?". Well, there are a few main reasons. #1 is they use screwed up encoders like Photoshop that put thumbnails or huge headers in the JPEG. #2 and probably the main reason is that they test at -3 or even -4 logbpp , where jpg_h falls off the quality cliff because of the old-fashioned huffman back end. #3 is that they view the JPEG at some resolution other than 1:1 (under magnification or minification); any image format that is perceptually optimized must be encoded at the viewing resolution.

One of the ways you can get into that super-low-bitrate domain where JPEG falls apart is by using images that are excessively high resolution for your display, so that you are always scaling them down in display. The solution of course is to scale them down to viewing resolutions *before* encoding. (eg. lots of images on the web are actually 1600x1200 images, encoded at JPEG Q=20 or something very low, and then displayed on a web page at a size of 400x300 ; you would obviously get much better results by using a 400x300 image to begin with and encoding at higher quality).

1/10/2011

01-10-11 - Perceptual Metrics

Almost done.

RMSE of fit vs. observed MOS data :


RMSE_RGB             : 1.052392
SCIELAB_RMSE         : 0.677143
SCIELAB_MyDelta      : 0.658017
MS_SSIM_Y            : 0.608917
MS_SSIM_IW_Y         : 0.555934
PSNRHVSM_Y           : 0.521825
PSNRHVST_Y           : 0.500940
PSNRHVST_YUV         : 0.480360
MyDctDelta_Y         : 0.476927
MyDctDelta_YUV       : 0.444007

BTW I don't actually use the raw RMSE as posted above. I bias by the sdev of the observed MOS data - that is, smaller sdev = you care about those points more. See previous blog posts on this issue. The sdev biased scores (which is what was posted in previous blog posts) are :


RMSE_RGB             : 1.165620
SCIELAB_RMSE         : 0.738835
SCIELAB_MyDelta      : 0.720852
MS_SSIM_Y            : 0.639153
MS_SSIM_IW_Y         : 0.563823
PSNRHVSM_Y           : 0.551926
PSNRHVST_Y           : 0.528873
PSNRHVST_YUV         : 0.515720
MyDctDelta_Y         : 0.490206
MyDctDelta_YUV       : 0.458081
Combo                : 0.436670 (*)

(* = ADDENDUM : I added "Combo" which is the best linear combo of SCIELAB_MyDelta + MS_SSIM_IW_Y + MyDctDelta_YUV ; it's a static linear combo, obviously you could do better by going all Netflix-Prize-style and treating each metric as an "expert" and doing weighted experts based on various decision attributes of the image; eg. certain metrics will do better on certain types of images so you weight them from that).

For sanity check I made plots (click for hi res) ; the X axis is the human observed MOS score, the Y axis is the fitted metric :

Sanity is confirmed. (the RMSE_RGB plot has those horizontal lines because one of the distortion types is RGB random noise at a few fixed RMSE levels - you can see that for the same amount of RGB RMSE noise there are a variety of human MOS scores).

ADDENDUM : if you haven't followed old posts, this is on the TID2008 database (without "exotics"). I really need to find another database to cross-check to make sure I haven't over-trained.

Some quick notes of what worked and what didn't work.


What worked :

Variance Masking of high-frequency detail

Variance Masking of DC deltas

PSNRHVS JPEG-style visibility thresholds

Using the right spatial scale for each piece of the metric
  (eg. what size window for local sdev, what spatial filter for DC delta)

Space-frequency subband energy preservation

Frequency subband weighting


What didn't work :

Luma Masking

LAB or other color spaces than YUV in most metrics

anything but "Y" as the most important part of the metric

Nonlinear mappings of signal and perception
  (other than the nonlinear mapping already in gamma correction)

01-10-11 - Perceptual Results - PDI

This is my 766x1200 version of the PDI test image (that I made by scaling down a 3600 tall jpeg one).

Hipix and JPEG-XR are both very bad. I wonder if the JPEG-XR encoder I'm using could be less than the best? If someone knows a reference to the best command line windows JPEG-XR encoder, please post it. The one I'm using is from some Microsoft HD-Photo SDK distribution.

It's interesting to see how the different encoders do on the different metrics. x264, webp and kakadu are all identical under MyDctDelta. Kakadu falls down in SSIM, but does much better on SCIELAB. This tells you that kakadu is not preserving local detail characteristics as well, but is preserving smooth overall DC levels much better.

01-10-11 - Perceptual Results - mysoup

Two sets of compressors cuz I have too many on this image ; jpg_pack is on both charts as a baseline.




As before, we see AIC , Hipix and JPEG-XR are all very poor. WebP is marginally okay (the early encoder I'm using is still very primitive) ; it does well on SSIM, but doesn't have the good low bitrate behavior you would expect from a modern coder. x264 and Kakadu are quite good. My two coders (vtims and newdct) are better than the terrible trio but not close to the good ones (I need to fix my shit).

01-10-11 - Perceptual Results - Moses

Moses is a 1600x1600 photo of a guy.

01-10-11 - Perceptual Metrics Warmup - x264 Settings

Beginning the warmup. Quick guide to these charts :

On the left you will see either "err" or "fit". "err" is the raw score of the metric. "fit" is the metric after fitting to a 0-10 human visual quality scale. SSIM err is actually percent acos angle as usual.

The fit score is 0-10 for 0 = complete ass and 10 = perfect, but I have set the graph range to 3-8 , because that is the domain we normally care about. 8 = very hard to tell the difference.

x264 on mysoup , testing different "tune" options. I'm using my y4m to do the color convert for them which helps a lot.

Well "tune psnr" is in fact best on psnr - you can see a big difference on the RMSE chart. "tune ssim" doesn't seem to actually help much on SSIM, it only beats "tune psnr" at very low bit rate. "tune stillimage" just seems to be broken in my build of x264.

Oh, and I use "xx" to refer to x264 because I can't have numbers in the names of things.

Henceforth we will use tune = ssim. (change : psnr)

ADDENDUM :

I looked into this a little more on another image (also trying x264 with no explicit tune specified) :

You can see that "--tune ssim" does help a tiny bit on MS-SSIM , but it *hurts* IW-MS-SSIM , which is a better metric (it hurts also on MyDctDeltaNew). Though the differences are pretty negligible for our level of study. No explicit tune x264 is much worse. "tune psnr" seems to be the best option according to our best metrics.

01-10-11 - Perceptual Metrics Warmup - JPEG Settings

This is a repeat of old info, just warming up the new system and setting baselines.

Our JPEG is just regular old IJG JPEG with various lossless recompressors ; results on PDI :

As seen before, PAQ has some screwups at low bitrate but is otherwise very close to packjpg, and flat quantization matrix is obviously best for RMSE but worse for visual quality ("flat" here is flat + pack).

From now on we will use jpg_pack as the reference point.

1/09/2011

01-09-11 - On Ranting

In the last few weeks there have been an awful lot of rants on other tech blogs that I follow. (there have also been a lot of pointless "year end summaries" and "keep alive" posts; something about being on vacation for the holidays makes people write a lot of drivel). As a long time ranter, allow me to give you all some tips.

Don't rant about boring shit that everybody already knows. eg. Windows is so annoying, OpenGL is all fucked up, C++ is not what it should be, producers don't understand developers, blah blah blah, snooze. If you want to rant about some boring shit to blow off some steam, at least make it funny or angry or something. Don't take yourself too seriously. eg. don't make bullet-pointed outlines of your ranting. It's fucking ranting, ranting is juvenile, it's self-indulgent, it's ill-conceived, don't dress it up like it's a fucking academic paper.

Moving on.

We watched about 15 minutes of Scott Pilgrim (cringing the whole time) before turning it off in disgust. Can we fucking get past this comic book fad already? Just because you call them "indie" or "graphic novels" or whatever doesn't make them any less vapid. And no more superheroes please.

I'm fucking sick of seeing the NYT talk about how some banker or mortgage criminal "lost $100M in the financial crisis". That's ridiculous. First of all, when your pay is in stock or equity or whatever and the value goes down, you aren't really losing pay. You can't set your norm point at the highest value of the stock; if you're a sensible trader or gambler, you don't book the value of your win until you cash it out, and you know that your positive variance one year may well be balanced by negative variance in another. But the main issue is that these people really *made* massive amounts on the financial crisis. Just because they lost back a little at the end doesn't mean they lost overall. Some fucking Goldman or Lehman crook made $50 M over the last 10 years by speculating with free government funds, over-leveraging, and selling off bogus mortgage packages. Then they lost $10 M at the end. You didn't fucking lose $10M on the crisis, you liar, you *made* $40M on it. If you profited in the bubble that ran up to the collapse, you did not lose money. It is absolutely not comparable to people (or pension funds or municipalities) who made money over many years, and then invested it in vehicles they were told were very safe and conservative, and then lost most of it.

I get a similar cognitive dissonance when I see small business owners who are "struggling". They drive a fancy new car, they have a big house, they eat out fancy food all the time, and yet their "business is hardly making it". Wait a fucking minute. The division between your personal finances and your company's is of your own making. How can it be that you are in the pink and your business is struggling because of the recession? Are you still paying yourself salary? Are you still illegally taking cash from the company till and paying your personal expenses as if they were business expenses? Then don't fucking tell me you're struggling. It's bizarre, and yet this behavior is completely standard for sole-proprietors. It seems like these people just don't even make the cognitive connection between the two; I met a guy at a track day last year who would alternate between talking about the new car he was buying, and the fact that his business was struggling, and there was no hint that he considered the two topics at all related.

A common retarded winter time rant that I see is how the city is fucking up the snow preparedness, they aren't doing enough plowing or salting or chaining buses or whatever you think they should be doing. This usually goes along with "government is incompetent" rants from small-government ideologues (occasionally you get some "they should privatize it" harmonies). Ummm... hello. You guys are the ones who have crippled our local governments by starving them of any income. They literally have no money, and now you're complaining that they aren't spending enough to take care of the streets?

Seattle's entire annual budget for pot hole repairs is $3M. It was cut last year due to general budget shortfalls, and the four teams were reduced to three. Well that $3M is already spent, and we have finally announced some emergency pot-hole fillers, which is now coming out of the general street maintenance fund (which is a paltry $42M) which will cripple other programs. Among other things, the pot-hole repairs are temporary, and spending more on pot-hole filling means having less cash to repave streets, which means the problem just gets worse year after year. (Seattle's general budget for transportation has been below maintenance level since 1995 ). (see also here or here ).

Seattle's budget for street repairs on non-arterial streets is $0. Zero. It has been $0 since the 90's. Seattle's budget to even *look* at non-arterial streets is $0. There is no survey of non-arterial street condition and no program to pave them. (see here )

BTW buses are in a similar situation to the streets. Maintenance has been deferred for years, it's the easy/sleazy/lazy way for politicians to cut budgets to comply with the retarded voters' wish to starve the government - you simply slash those pesky repair budgets and let things gradually decay.

I only know the details in Seattle, but it seems like the same retarded contradictory outcry goes up all over America. One week it's "the government isn't taking care of the streets well enough!" and the next week it's "we need to cut government spending!". Uhh.. do you guys just not see the connection between these two issues? You can moan about taxes or you can moan about shitty government services, but you can't moan about both! (at least not in a place like WA where there *are* no taxes).

In other "you all are retarded about taxes" new; WA is desperately out of money at all levels of government, services are being cut everywhere, and of course the government is turning to horrible non-tax ways of raising money; all sorts of fees are going up, Seattle is increasing the parking meter rate dramatically, and our very limitted number of public prosecutors are now being assigned to traffic court to get revenue : Why It Just Got a Lot Harder to Fight a Traffic Ticket in King County - Seattle News - The Daily Weekly
Kitsap aims to turn traffic court into money maker
So watch out WA speeders, it may no longer be quite so trivial to make tickets disappear here.

In other news, last week the NYT ran this amazing chart of stock return for various entry and exit years . It's a beautiful piece of info-graphics.

1/08/2011

01-08-11 - Random Music Stuff

In the recent pop music category, Tame Impala and Caribou are my faves :

YouTube - Tame Impala - Solitude is Bliss
YouTube - CARIBOU - Odessa

(I actually am doing a pretty good job these days of completely cutting myself off from pop media like TV and radio, so I have very little concept of what is "overplayed" and thus uncool; it lets me just listen to what I like without caring if it's too mainstream to be cool enough for me, which is nice)

There's so much retro-remake 80's synth shit going on right now; I don't know why you would listen to that when you could just go to the originals :

YouTube - Visage - Fade to Grey
YouTube - Sharevari @ The Scene, Detroit (Remastered)
YouTube - Bruce Haack Party Machine
(BTW the documentary on Bruce Haack is pretty great)

Decent sexy groove, a bit cheezy :

YouTube - The Weeknd - What You Need

This is the one that really stuck from my old prog searches, the song just carries you on this epic journey, takes you away some place weird, then brings you home to rock :

YouTube - Room - Andromeda (1970)

I watched Lila Says which was a pretty shit movie, but it reminded me how amazing this song is :

YouTube - ( Air - ) Run

Digging dubstep in general; it's amusing to compare the Ed Solo version to the original :

YouTube - Ed Solo - Age of Dub
YouTube - AGE OF LOVE

Dig this heavy slow jam :
Sukh Knight - Clotted Cream
Sukh Knight - Jewel Thief

Some other electronica that's good :

YouTube - Moderat - Rusty Nails
YouTube - Burial & Four Tet - Moth

We got a record player for christmas, and it's fun finding old shit and rediscovering the magic of simple stereo-mastered vinyl. Older music sounds so much better, I think you have to go before roughly 1980 to get stuff that hasn't been ruined by studios and producers (plus, modern shit is mixed in really weird ways so it sounds okay on headphones and 7.1 systems and such, you want to find good old stereo mixes). It's so raw and immediate, it sounds like you're in the same room as the band. All that horrible classic rock that they play constantly on the radio sounds so amazing on vinyl with a plain old analog stereo amp. Anyway, some random shit that I rediscovered the greatness of through the vinyl :

YouTube - Eddie Money - Baby Hold On
YouTube - Be My Friend by Free - their best-ever version
Free - Woman ; Free is so fucking great I can't get over it
YouTube - THREE DOG NIGHT- SHAMBALA

01-08-11 - Random Car Stuff

There's a whole industry of schadenfreude videos of winter car crashes (for example : Watch An Icy 20-Car Pileup As It Happens ). The thing that really strikes me when I watch this shit is that these are not "accidents". These are largely intentional crashes. That is, first of all the driver chose to drive on a day that his car is woefully unprepared for, then second of all (and worst) he saw a street where there is a huge pileup from other people crashing and he doesn't just stop and reconsider trying to go up/down that hill, he thinks "ha ha those guys all crashed, but I'll be fine". The car damage should not be covered by insurance, when you intentionally crash your car you should have to pay out of pocket. You morons.

Interesting bit in here about the GT3 RSR - they actually move the engine forward on the race car (compared to the street GT3) : JB tests drives the GT3 and interviews Patrick Long for Jay Leno's Garage JustinBell.com - Justin Bell's Official Website
Because of the extra-wide track of the wide body Porsches, they can actually get the engine between the wheels, it doesn't have to be behind the rear axle. Similarly in the M3 GT2 race car, they move the engine down and back. BMW M3 GT Race Car - Feature Test - Car and Driver
By regulation the cabin firewall has to be in the same place, so they get it as close to the driver as possible. In both cases I wonder why they don't do similar things for the road cars (things like the M3 GTS or the Porsche GT3 are not as close to the race cars as they want you to believe).

There are more "Corvette fail" videos on Youtube than any other car type; not sure if that more reflects the car or the average owner. Certainly if you see a Corvette car club coming down the road - take cover !
YouTube - Corvette oops
YouTube - Corvette Fail

This sarcastic Corvette review walks a fine line between funny and excessive douchery : YouTube - 200mph in Corvette ZO6. Fun or Fail ; it's hard to make fun of others' douchery when you yourself are a humongous douche

Sometimes I think a Lotus might be cool (though the Evora S seems to still not exist in America), but then I am reminded that it's not a real car. It's a British sports car, which means it's a few bits of plastic held together with paper clips. For example :
Warranty Work - LotusTalk - The Lotus Cars Community
3 days of raining waterlogged my Evora. - LotusTalk - The Lotus Cars Community
How hard is it really to buy a Toyota engine and some standard 3rd party brakes and suspension bits and tack them onto some aluminum bars? Seriously Lotus?

I think the E86 BMW M Coupe (2006-2008, Z4 based, 330 hp, 3200 pounds) is pretty sweet, though I guess a modern Cayman just beats in almost every way. The earlier weird-looking ones (though sort of adorable for their ugliness) had engine problems apparently. I test drove one a while ago and thought it felt a bit weird to be sitting so far back on the chassis, and also the amount of cargo space is too small.
BMW M Coupe M Roadster Resource Network - MCoupe.com - BMW M Coupe, M Roadster Z3 & Z4 Resource Network

Mazda RX8's have a lot of problems with their engine. Mazda stepped up and took the rare and amazing action of extending the engine warranty to 8 years / 100k miles. Nice one Mazda!
Mazda expects to recall RX-8s - RotaryNews.com
Mazda RX 8's Engine Failure Problem - Mazda Discussions at Automotive.com
Mazda extends rotary warranty on RX-8 to 100k miles - Autoblog

There are various reports around of coking/deposit problems on intake valves with the new DFI engines that are in lots of modern cars, for example : (very amazing pictures in this link :)
Journal N54 Total Engine Rebuild & Upgraded Internals - BMW 3-Series (E90 E92) Forum - E90Post.com

I love the BMW E30 M3, it's the greatest M3 IMO. (the good BMW design is all boxy angles, with forward slanting nose, ala the 2002, and a rectangular grill, not the dual-butt-chin bangled bullshit) (people are putting modern S54 engines in them now). It's a chance to watch this video, one of my favorite rally clips of all time : Youtube - Patrick Snijers - E30 Rally

And some more crazy build/mod stuff :

Ran When Parked
JDM Legends - Vintage Japanese Car Sales and Restoration Vehicles for Sale
2001 porsche 996 ls1 conversion - LS1TECH


Aside : this is a real dumb addiction I've fallen into recently. I always seem to have some dumb thing that I think about way more than I should, but cars are particularly bad because 1. they're very expensive and 2. being a "car guy" is repellant to women (and good men). Car guys are either white trash (with cars up on blocks), nerds (who drive an imported JDM R33 GTR and can give you lectures on the correct racing line but promptly crash when they try to execute it), or insecure yuppies (small-penis midlife-crisis better-than-the-joneses).

1/05/2011

01-05-11 - QuadraticExtremum

In today's story I do someone's homework for them :

/*
QuadraticExtremum gives you the extremum (either minimum or maximum)
of the quadratic that passes through the three points x0y0,x1y1,x2y2
*/
inline double QuadraticExtremum(
    double x0,double y0,
    double x1,double y1,
    double x2,double y2)
{   
    // warning : this form is pretty but it's numerically very bad
    //  when the three points are near each other
    
    double s2 = x2*x2;
    double s1 = x1*x1;
    double s0 = x0*x0;
    
    double numer = y0*(s1-s2) + y1*(s2-s0) + y2*(s0-s1);
    double denom = y0*(x1-x2) + y1*(x2-x0) + y2*(x0-x1);
    
    double ret = 0.5 * numer / denom;
    
    return ret;
}
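
A quick sanity check (hypothetical usage, not part of the original) : the parabola y = (x-3)^2 + 1 sampled at x = 0,1,2 gives y = 10,5,2 , and the extremum comes out at x = 3 as it should :

double x = QuadraticExtremum( 0.0,10.0, 1.0,5.0, 2.0,2.0 );
// x == 3.0 , the minimum of y = (x-3)^2 + 1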

01-05-11 - Golden 1d Searches

GoldenSearch1d finds the minimum of some function if you know the finite range to look in. Search1d_ExpandingThenGoldenDown looks in the infinite interval >= 0 . Pretty trivial thing, but handy.

The Golden ratio arises from doing a four point search and trying to reuse evaluations. If your four points are { 0, rho, 1-rho ,1 } and you shrink to the lower three { 0, rho, 1-rho }, then you want to reuse the rho evaluation as your new high interior point, so you require (1-rho)^2 = rho , hence 1-rho = (sqrt(5)-1)/2
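
Working that through : (1-rho)^2 = rho gives rho^2 - 3*rho + 1 = 0 , so rho = (3 - sqrt(5))/2 = 0.381966... and 1-rho = (sqrt(5)-1)/2 = 0.618034... , which are the constants that show up in the code below.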

BTW because you have various points you could obviously use interpolation search (either linear or quadratic) at various places here and reduce the number of evaluations needed to converge. See Brent's method or Dekker's method .

Also obviously this kind of stuff only works on functions with simple minima.

Also obviously if func is analytic and you can take derivatives of it there are better ways. I use this for evaluating things that aren't simple functions, but have nice shapes (such as running my image compressor with different quantizers).



template< typename t_functor >  
double GoldenSearch1d( t_functor func, double lo, double v_lo, double hi, double v_hi, double minstep )
{
    const double rho = 0.381966;
    const double irho = 1.0 - rho; // = (sqrt(5)-1)/2 

    // four points :
    // [lo,m1,m2,hi]

    double m1 = irho*lo + rho*hi;
    double m2 = irho*hi + rho*lo;

    double v_m1 = func( m1 );
    double v_m2 = func( m2 );
    
    while( (m1-lo) > minstep )
    {
        // step to [lo,m1,m2] or [m1,m2,hi]
        // only one func eval per iteration :
        if ( MIN(v_lo,v_m1) < MIN(v_hi,v_m2) )
        {
            hi = m2; v_hi = v_m2;
            m2 = m1; v_m2 = v_m1;
            m1 = irho*lo + rho*hi;
            v_m1 = func( m1 );
        }
        else
        {
            lo = m1; v_lo = v_m1;
            m1 = m2; v_m1 = v_m2;
            m2 = irho*hi + rho*lo;
            v_m2 = func( m2 );
        }
        
        ASSERT( fequal(m2, irho*hi + rho*lo) );
        ASSERT( fequal(m1, irho*lo + rho*hi) );
    }
    
    // could do a cubic fit with the 4 samples I have now
    //  but they're close together so would be numerically unstable
    
    /*
    //return (lo+hi)/2.0;
    
    if ( v_m1 < v_m2 ) return m1;
    else return m2;
    
    /*/
    // return best of the 4 samples :
    if ( v_lo < v_m1 ) { v_m1 = v_lo; m1 = lo; }
    if ( v_hi < v_m2 ) { v_m2 = v_hi; m2 = hi; }
    
    if ( v_m1 < v_m2 ) return m1;
    else return m2;
    /**/
}

template< typename t_functor >  
double GoldenSearch1d( t_functor func, double lo, double hi, double minstep )
{
    double v_lo = func( lo );
    double v_hi = func( hi );
    
    return GoldenSearch1d( func, lo, v_lo, hi, v_hi, minstep );
}

template< typename t_functor >  
double Search1d_ExpandingThenGoldenDown( t_functor func, double start, double step, double minstep , const int min_steps = 8)
{
    struct Triple
    {
        double t0,f0;
        double t1,f1;
        double t2,f2;
    };
    
    Triple cur;
    cur.t2 = cur.t1 = cur.t0 = start;
    cur.f2 = cur.f1 = cur.f0 = func( start );
        
    int steps = 0;
    
    Triple best = cur;
    
    for(;;)
    {
        cur.t0 = cur.t1; 
        cur.f0 = cur.f1;
        cur.t1 = cur.t2; 
        cur.f1 = cur.f2;
        
        cur.t2 = cur.t1 + step;
        cur.f2 = func( cur.t2 );
        
        if ( cur.f1 <= best.f1 )
        {
            best = cur;
        }
        
        // if we got worst and we're past min_steps :
        if ( cur.f2 > cur.f1 && steps > min_steps )
            break;
                
        const double golden_growth = 1.618034; // 1/rho - 1
        step *= golden_growth; // grow step by some amount
        steps++;
    }
        
    // best is at t1 bracketed in [t0,t2]   
    // could save one function eval by passing in t1,f1 as well
    
    return GoldenSearch1d(func,best.t0,best.f0,best.t2,best.f2,minstep);
}


Usage example :



// STRING_JOIN is assumed to be the usual token-pasting macro, eg. #define STRING_JOIN(a,b) a##b
#define MAKE_FUNCTOR(type,func) \
struct STRING_JOIN(func,_functor) { \
  type operator() (type x) { return func(x); } \
};

double TFunc( double x )
{
    return 100 / ( x + 1) + x;
}

MAKE_FUNCTOR(double,TFunc);

int main(int argc,char *argv[])
{

double t = Search1d_ExpandingThenGoldenDown(TFunc_functor(),0.0,1.0,0.0001);

lprintfvar(t);
lprintf(TFunc(t) , "\n");

return 0;
}
