Fast Ranking Algorithm: Astonishing Paper by Raykar, Duraiswami, and Krishnapuram

The July 08 (Vol. 30, #7) issue of IEEE Transactions on Pattern Analysis and Machine Intelligence has an incredible paper by Raykar, Duraiswami, and Krishnapuram. "A Fast Algorithm for Learning a Ranking Function from Large-Scale Data Sets" appears to be a game-changer for an incredibly important problem in machine learning. Basically, they use a "fast multipole method" developed for computational physics to rapidly estimate, to arbitrary precision, the gradient of the error function used in conjugate-gradient training. (In other words, they tweak the parameters and "get a little better" the next time through the training data.)
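To make the structure concrete, here is a minimal sketch (my own, not the paper's code) of the kind of pairwise gradient the method speeds up, assuming a linear scoring function and a logistic pairwise loss; the double loop over preference pairs is the expensive part the fast summation replaces.

```python
# Minimal sketch (my assumptions, not the paper's code): exact gradient of a
# logistic pairwise ranking loss for a linear scorer w. Every (preferred,
# non-preferred) pair contributes one term, hence the quadratic cost.
import numpy as np

def pairwise_ranking_gradient(w, X_pos, X_neg):
    """Gradient of L(w) = sum_{i,j} log(1 + exp(-(s_i - s_j))), s = X @ w,
    where i runs over preferred examples and j over the rest."""
    grad = np.zeros_like(w)
    s_pos = X_pos @ w
    s_neg = X_neg @ w
    for i in range(len(X_pos)):           # outer loop over preferred items
        for j in range(len(X_neg)):       # inner loop: every pair appears once
            sig = 1.0 / (1.0 + np.exp(s_pos[i] - s_neg[j]))
            grad -= sig * (X_pos[i] - X_neg[j])
    return grad
```

The punch line of the paper is that this double sum can be approximated, to any requested accuracy, without ever enumerating the pairs.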

The exact computation of that gradient is O(m^2) in the size of the training set. This estimation algorithm is O(m)! (That's an exclamation point, not a factorial!)
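For the curious, here is a toy illustration of the fast-summation trick behind that kind of speed-up. It is my own sketch, not the paper's algorithm: it uses a plain Gaussian kernel and a single expansion center, whereas the real method handles the paper's kernel, partitions space, and comes with rigorous error bounds. The point is that a double sum over all source/target pairs collapses into two single passes once the kernel is replaced by a truncated series.

```python
# Toy illustration (mine, not the paper's): a Gaussian kernel sum evaluated
# via a truncated Hermite expansion about one center, so the O(m^2) double
# loop becomes O(p * m) -- one pass to build coefficients, one to evaluate.
import numpy as np
from math import factorial

def hermite_functions(t, p):
    """h_0..h_{p-1} at t, where h_n(t) = exp(-t^2) * H_n(t) (Hermite polys)."""
    h = np.empty((p,) + np.shape(t))
    h[0] = np.exp(-t ** 2)
    if p > 1:
        h[1] = 2.0 * t * h[0]
    for n in range(1, p - 1):
        h[n + 1] = 2.0 * t * h[n] - 2.0 * n * h[n - 1]
    return h

def gauss_sum_fast(targets, sources, h, p=20):
    """Approximate G(x) = sum_j exp(-((x - y_j)/h)^2) for every target in
    O(p * (#sources + #targets)) instead of O(#sources * #targets)."""
    c = sources.mean()                                  # single expansion center
    u = (sources - c) / h
    coeffs = np.array([np.sum(u ** n) / factorial(n)    # one pass over sources
                       for n in range(p)])
    t = (targets - c) / h
    return hermite_functions(t, p).T @ coeffs           # one pass over targets

# Sanity check against the exact quadratic-time sum on clustered data:
rng = np.random.default_rng(0)
src = rng.normal(0.0, 0.2, size=2000)
tgt = rng.normal(0.0, 0.2, size=2000)
exact = np.array([np.sum(np.exp(-((x - src) / 1.0) ** 2)) for x in tgt])
print(np.max(np.abs(exact - gauss_sum_fast(tgt, src, 1.0))))  # small error
```

That factorization, accumulate a handful of moments over the sources and then evaluate a short series at each target, is the same spirit as the fast multipole machinery the authors borrow, and it is where the linear scaling comes from.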

On a first reading, I don't grok how the crucial transform necessarily moves towards an error minimum, but the algorithm looks (surprisingly) easy to implement and their benchmark results are jaw-dropping. Of course, others will have to implement it and analyze it for applicability across different types of data sets, but this is one of the most impressive algorithmic claims I've seen in years.

Once upon a time, I had the great fortune to write a column for a magazine on artificial intelligence and could justify spending huge amounts of time implementing AI algorithms (well, I think I was paid $450 per month for my column, so I'm not really sure that "justified" 80 hours of programming, but I was young). Man, would I love to see how this algorithm works for training a neural network...