10-17-06 - 2

Okay, so I tore apart my Netflix app and redid some of the basics with more thorough investigation. I can now definitely confirm that most of what's done in the literature is just really wrong and dumb. It reminds me a lot of the way the compression field was back in the '80s. Somebody came up with some decent ideas way back when, and everyone just keeps using parts of that algorithm even though it was only a rough initial guess. People keep introducing fancier ideas, but they keep the broken base. I don't mean to rag on the researchers or make it sound like I'm better than them. They come up with beautiful algorithms that I never would've come up with. But then they just don't try very hard. The classic example with PPM is the escape frequency. They tried 1. They tried 1/2. But what about 0.67? What about a number that varies based on the context depth & occupation? anyhoo...
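To make the escape-frequency point concrete, here's a minimal sketch of how the standard PPM variants set it: PPM-A uses a fixed escape count of 1, PPM-C uses the number of distinct symbols seen in the context. The "X" case is a hypothetical fractional constant (like the 0.67 above), not a published method.

```python
def escape_prob(counts, method="A", escape_count=1.0):
    """Escape probability for one PPM context.

    counts: dict of symbol -> occurrence count in this context.
    method "A": fixed escape count of 1 (PPM-A).
    method "C": escape count = number of distinct symbols (PPM-C).
    method "X": hypothetical fractional escape count, the kind of
                in-between value the rant suggests trying.
    """
    n = sum(counts.values())        # total symbol occurrences
    q = len(counts)                 # distinct symbols seen
    if method == "A":
        e = 1.0
    elif method == "C":
        e = float(q)
    else:                           # "X": illustrative assumption only
        e = escape_count
    return e / (n + e)
```

With counts {a: 3, b: 1}, PPM-A gives an escape probability of 1/5 and PPM-C gives 2/6; a fractional constant lands anywhere in between, which is exactly the knob the literature never bothered to tune.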

Some of the current leaders have finally spoken up in the Netflix forum. They seem to all be using some sort of standard academic Bayesian/neural-net learning system. This could be hard to beat if they're good & have powerful server clusters to do the training, which they presumably do at universities.
