One is how nice it is to work on things that humans do better than computers. Humans are just AMAZING at pitch detection. We can pick out the pitches of individual instruments or voices even in the middle of massive amounts of noise. Obviously there are tons of other things humans still do better than computers - play poker, compress text, recognize images, etc.

There are a few advantages to working on these things. For one, it tells you for sure that a solution exists within practical reach. The human brain is complex, but it's not many orders of magnitude beyond what a computer can do these days, so if a human can do it, you know you could teach a computer to do it and actually get an algorithm that runs fast enough. Without that proof of feasibility, you might spend a long time working on a problem and find out it just can't practically be done.

The other big advantage is that you can ask yourself how you do it. Copying the way humans approach the problem isn't always the best algorithm for computers, but it's an interesting reference and gives you a personal closeness to the problem. It also gives you a really good sanity check: when someone says "I've got this image recognition algorithm and it's state of the art", you can just go "look dude, any human can do better, so obviously there's a lot of room for improvement". It tells you that the problem is still worth exploring, which is one of the most useful pieces of information to have; you don't want to spend a long time researching a better way to do something if there might not be a much better way. Knowing how close you are to optimal is very valuable information. e.g. the great ability of humans to predict text tells us that state of the art text compressors are still missing something crucial, and it's worthwhile to explore further in that field. If you just looked at the way they've been asymptotically improving, you might think they were near a dead end.

The other general topic I've been thinking about is the "physics way" of exploring new algorithms. In physics, when you're trying to write down the equations of motion (you usually want to write a Hamiltonian or a Lagrangian) and you don't know exactly what they should be, you take guidance from the units and the symmetries of the system. Weinberg does this beautifully in his QFT book, Feynman does it all the time in his Lectures, and it's actually a really common trick in fluid dynamics.

Just as an example, say you want to write the Hamiltonian for a harmonic oscillator and you don't know what it is. You know you have a mass M and a frequency W to work with. Of course you also have the momentum and position, P and X. Your H has units of energy, so what can you write down that has units of energy? Well, PP/M is one. You can also make a fake momentum Q = M*W*X, so QQ/M is one too, which is M*W^2*X^2, which is in fact (up to factors of 1/2) the right potential term. But there's more. You can also make a unitless number from (P/Q), and any time you can make a unitless number you can stick it anywhere you want. If we were in 3D you would also require symmetry under rotations, i.e. if I rotate my coordinates H shouldn't change, which would mean that only terms in X dot X and P dot P would be allowed; but I could still make a unitless number from (PP/QQ) and screw around with that, though such terms have poles, so we can guess they're not the system we want.
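The bookkeeping above is mechanical enough to do in code. Here's a minimal sketch of that dimensional analysis, with dimensions represented as (mass, length, time) exponent tuples; the helper names like `dim_mul` are mine, just for illustration.

```python
# Dimensions as (mass, length, time) exponent tuples.
M = (1, 0, 0)   # mass
X = (0, 1, 0)   # position
P = (1, 1, -1)  # momentum = mass * length / time
W = (0, 0, -1)  # frequency = 1 / time
ENERGY = (1, 2, -2)

def dim_mul(*dims):
    # multiplying quantities = adding dimension exponents
    return tuple(sum(d[i] for d in dims) for i in range(3))

def dim_inv(d):
    # dividing = negating exponents
    return tuple(-e for e in d)

# kinetic term PP/M has units of energy
kinetic = dim_mul(P, P, dim_inv(M))
assert kinetic == ENERGY

# fake momentum Q = M*W*X really has units of momentum,
# so QQ/M = M*W^2*X^2 is another energy term
Q = dim_mul(M, W, X)
assert Q == P
potential = dim_mul(Q, Q, dim_inv(M))
assert potential == ENERGY

# P/Q is dimensionless, so it can be stuck anywhere
assert dim_mul(P, dim_inv(Q)) == (0, 0, 0)
```

Running the asserts confirms both candidate terms carry energy units and that (P/Q) is unitless.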

The point of view in physics is that any equations of motion that you can write down which have the right units and symmetry are perfectly valid physical systems which might exist somewhere or model some real physical thing. If you want to model some specific physical system, you have to find the one of those possibilities that matches your situation. Typically we want to start with the simplest possible model and see what kind of behavior that gives us; adding more terms is certainly allowed but generally models some more complex system.

I use this same kind of thinking to explore algorithms. Say you're trying to write a correlation between two vectors X and Y. You want the output to be unitless, so it's independent of the units or overall scale of the inputs. You also want it to be independent of the length of the vectors (not in a strict sense, obviously the value depends on the contents, but if you just repeat all the same values twice it should be unaffected). So there are various things you can write down that make sense. The numerator should be something like < X * Y > or < (X - Y)^2 >, where < > denotes the average over entries. To make it unitless and unaffected by overall scale, you need to divide by something like ( < X^2 > + < Y^2 > ) or sqrt( < X^2 > * < Y^2 > ). You want symmetry under swapping the labels X <-> Y, so forms that break that symmetry are no good. All of the possibilities that are allowed by symmetry and units are perfectly viable candidates. There's another option, which is subtracting off the average of each vector first. Using the physics/symmetry point of view, we should consider all of these to be possibilities. Fortunately, with computers we can easily just try them all and see what's best.
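Here's a sketch of two of those candidates side by side, with the mean-subtraction option applied to both. The variant names are mine; only the first form (divide by sqrt( < X^2 > * < Y^2 > )) is the standard Pearson correlation.

```python
def mean(v):
    return sum(v) / len(v)

def corr_variants(x, y):
    # subtract off the average of each vector (the optional step above)
    xm, ym = mean(x), mean(y)
    xc = [a - xm for a in x]
    yc = [b - ym for b in y]
    num = mean([a * b for a, b in zip(xc, yc)])  # < X * Y >
    exx = mean([a * a for a in xc])              # < X^2 >
    eyy = mean([b * b for b in yc])              # < Y^2 >
    # candidate 1: divide by sqrt(<X^2> * <Y^2>)  (Pearson)
    pearson = num / (exx * eyy) ** 0.5
    # candidate 2: divide by (<X^2> + <Y^2>)/2
    # unitless and X<->Y symmetric, but only invariant when
    # both vectors are rescaled together
    additive = 2 * num / (exx + eyy)
    return pearson, additive

x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 6.0, 8.0]  # y = 2*x : perfectly correlated
p, a = corr_variants(x, y)  # p == 1.0, a == 0.8
```

Note that both candidates satisfy the symmetry and units constraints, yet give different answers on the same data, which is exactly why it's worth trying them all on your actual problem.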

There are a few things that should raise your spidey senses. One is whenever somebody writes an equation which doesn't have the right symmetries or units or invariances; anything that's not immune to scale and bias should make you wonder why the units and zero-point have become crucial parts of the problem. The other is when someone writes down an equation by just pulling some terms out of the air, when you can see that there are lots of other possibilities which are perfectly valid. Why was one chosen over the others? Often there is no good reason, and you should consider the other possibilities.
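The scale-and-bias check is easy to automate: feed the formula rescaled and shifted inputs and see if the answer moves. A quick sketch, assuming a raw dot product as the suspect formula and a mean-subtracted normalized form as the well-behaved one (the names are mine):

```python
import random

def mean(v):
    return sum(v) / len(v)

def raw_score(x, y):
    # no normalization: units and zero-point leak into the answer
    return mean([a * b for a, b in zip(x, y)])

def normalized_score(x, y):
    # subtract means, divide by sqrt(<X^2> * <Y^2>)
    xm, ym = mean(x), mean(y)
    xc = [a - xm for a in x]
    yc = [b - ym for b in y]
    num = sum(a * b for a, b in zip(xc, yc))
    den = (sum(a * a for a in xc) * sum(b * b for b in yc)) ** 0.5
    return num / den

random.seed(1)
x = [random.random() for _ in range(64)]
y = [a + 0.2 * random.random() for a in x]

# rescale and shift one input; an invariant statistic shouldn't budge
x2 = [5.0 * a + 100.0 for a in x]
assert abs(normalized_score(x, y) - normalized_score(x2, y)) < 1e-9
assert abs(raw_score(x, y) - raw_score(x2, y)) > 1.0
```

If a statistic fails this kind of probe, that's the cue to ask whether the scale and zero-point really belong in the problem.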

A common example is averaging. You can take an arithmetic average : (X+Y)/2 , a geometric average : sqrt(X*Y) , or a quadratic ("pythagorean") average : sqrt( (X*X + Y*Y)/2 ). All are perfectly valid depending on your problem - and in fact you can take ratios of them to make unitless numbers which you can stick anywhere you want.
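All three averages, and one of the unitless ratios, in a few lines. I've written the pythagorean one as the root-mean-square, sqrt( (X*X + Y*Y)/2 ), so that all three reduce to X when X == Y:

```python
def arithmetic(x, y):
    return (x + y) / 2

def geometric(x, y):
    return (x * y) ** 0.5

def pythagorean(x, y):
    # root-mean-square (quadratic mean)
    return ((x * x + y * y) / 2) ** 0.5

x, y = 3.0, 12.0
a = arithmetic(x, y)    # 7.5
g = geometric(x, y)     # 6.0
r = pythagorean(x, y)
# the classic ordering: geometric <= arithmetic <= quadratic
assert g <= a <= r

# a ratio of two averages is unitless: rescaling both inputs leaves it alone
ratio = geometric(x, y) / arithmetic(x, y)
assert abs(ratio - geometric(10 * x, 10 * y) / arithmetic(10 * x, 10 * y)) < 1e-12
```

The ratio at the end is exactly the kind of unitless number the text describes: since it survives any rescaling of the inputs, you can multiply it into any candidate formula without breaking the units.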
