One thing I'm trying to do is fix up all the nasty epsilon robustness issues. A small part of that is solving a quadratic. "Easy!" I hear you say. Everyone knows how to solve a quadratic, right? Not so.
I found this page which has a nice summary of the issues, written by a sour old curmudgeon who just whines about how dumb we all are but doesn't actually provide us with a solution.
You can also find the Wikipedia page or the Numerical Recipes (5.6) snippet about the more robust numerical way to find the roots, which avoids subtracting two nearly equal numbers. Okay, that's all well and good, but there's a lot more code to write to deal with all the degenerate cases.
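For reference, the trick (as in NR 5.6 and the Wikipedia page) is to compute

  Q = -0.5 * ( B + sign(B) * sqrt( B*B - 4*A*C ) )

and take the two roots as Q/A and C/Q. Since B and sign(B)*sqrt(...) always have the same sign, that addition never cancels.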
This is what I have so far (I'm handling the case where the coefficients are real but the solutions may be complex; you can obviously modify it to take complex coefficients, or to return only real solutions) :
// A t^2 + B t + C = 0;
// returns number of solutions
int SolveQuadratic(const double A,const double B,const double C,
					ComplexDouble * pT0,ComplexDouble * pT1)
{
	// first invalidate :
	*pT0 = FLT_MAX;
	*pT1 = FLT_MAX;

	if ( A == 0.0 )
	{
		if ( B == 0.0 )
		{
			if ( C == 0.0 )
			{
				// degenerate - any value of t is a solution
				*pT0 = 0.0;
				*pT1 = 0.0;
				return -1;
			}
			else
			{
				// no solution
				return 0;
			}
		}

		// linear : B t + C = 0;
		double t = - C / B;
		*pT0 = t;
		*pT1 = t;
		return 1;
	}
	else if ( B == 0.0 )
	{
		if ( C == 0.0 )
		{
			// A t^2 = 0;
			*pT0 = 0.0;
			*pT1 = 0.0;
			return 1;
		}

		// B is 0 but A isn't
		double discriminant = -C / A;
		ComplexDouble t = ComplexSqrt(discriminant);
		*pT0 = t;
		*pT1 = - t;
		return 2;
	}
	else if ( C == 0.0 )
	{
		// A and B are not zero
		// t = 0 is one solution
		*pT0 = 0.0;
		// A t + B = 0;
		*pT1 = -B / A;
		return 2;
	}

	// Numerical Recipes 5.6 :
	double discriminant = ( B*B - 4.0 * A * C );
	if ( discriminant == 0.0 )
	{
		// double root :
		double t = - 0.5 * B / A;
		*pT0 = t;
		*pT1 = t;
		return 1;
	}

	ComplexDouble sqrtpart = ComplexSqrt( discriminant );
	// give sqrtpart the same sign as -B/2 so the add below never cancels :
	sqrtpart *= - 0.5 * fsign(B);
	ComplexDouble Q = sqrtpart + (- 0.5 * B);
	// Q cannot be zero
	*pT0 = Q / A;
	*pT1 = C / Q;
	return 2;
}
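(ComplexDouble and ComplexSqrt aren't defined here; if you want something that compiles, a minimal stand-in using std::complex would be something like this - a sketch, not necessarily what you'd want in production :)

#include <cfloat>	// FLT_MAX , used as the "invalid" marker above
#include <cmath>
#include <complex>

typedef std::complex<double> ComplexDouble;

// square root of a real, allowing negative arguments to
// come back pure imaginary :
ComplexDouble ComplexSqrt(const double x)
{
	if ( x >= 0.0 )
		return ComplexDouble( sqrt(x), 0.0 );
	else
		return ComplexDouble( 0.0, sqrt(-x) );
}

// sign as +1/-1 ; note the B == 0 case never reaches fsign :
double fsign(const double x)
{
	return ( x >= 0.0 ) ? 1.0 : -1.0;
}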
One thing that is missing is refinement of the roots. The roots computed this way can still have large error; a step or two of Newton-Raphson iteration on the original polynomial can improve that.
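For the all-real case, one step would look something like this (a sketch; RefineRootNewton is just an illustrative helper, and one or two iterations is typically plenty) :

// one Newton-Raphson step on a real root t of A t^2 + B t + C = 0 :
double RefineRootNewton(const double A,const double B,const double C, double t)
{
	double deriv = 2.0 * A * t + B;
	if ( deriv == 0.0 )
		return t; // at an extremum ; Newton can't improve here
	double f = ( A * t + B ) * t + C; // Horner evaluation of the residual
	return t - f / deriv;
}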
11 comments:
There's also the case where b^2 >> 4ac, where the root near zero has huge error bounds unless you rewrite the quadratic formula.
Hmm.. doesn't the Q formulation do that for you? The root near zero will be the C/Q root.
It does seem like the errors can still get really huge any time one of the coefficients dwarfs the others.
Yeah, you are right. I just took a quick look and noticed there wasn't a special case for that.
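(To make the cancellation concrete, here's a tiny demo of the b^2 >> 4ac case. The printed digits are approximate and platform-dependent, but the point is that the naive formula loses most of the small root while the C/Q form keeps it :)

#include <stdio.h>
#include <math.h>

int main()
{
	double A = 1.0, B = 1.0e8, C = 1.0;
	double s = sqrt( B*B - 4.0*A*C );
	// naive : -B + s subtracts two nearly equal numbers
	double naive = ( -B + s ) / ( 2.0 * A );
	// Q form : both terms have the same sign, no cancellation
	double Q = -0.5 * ( B + s );
	double stable = C / Q;
	printf("naive  : %g\n", naive);   // roughly -7.45e-9 , ~25% off
	printf("stable : %g\n", stable);  // roughly -1e-8 , correct
	return 0;
}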
It's possible NR has converged over time to something better, but people have long complained that NR is not a trustworthy source for numerical algorithms.
http://math.stanford.edu/~lekheng/courses/302/wnnr/nr.html
Have you read Jim Blinn's articles on the topic?
http://portal.acm.org/citation.cfm?id=1100878
http://portal.acm.org/citation.cfm?id=1158859
They make for a good read...
JL
I like the Blinn articles, but they don't really add anything; he winds up saying that just using the Q formulation is best. I do like the intuition he builds, though.
Sean, that NR-bashing site just annoys me. Okay, fine, maybe NR is horrible; so give me something better. Write a better book, make me a better library.
Well, the article does have a link to more "trusted" numeric libraries right at the bottom. However, a lot of them are 404s by now.
Their main reference is Netlib, which is a pretty mixed bag itself. Yeah, a lot of it (especially LAPACK and the TOMS collection) is peer reviewed and has been in use for a long time; but precisely because it's been around that long (some of the code dates from the mid-60s), it's missing some significant algorithmic improvements and is usually tweaked for pre-IEEE floating point implementations. Plus most of it is either sparsely commented FORTRAN 66 or 77 code, or FORTRAN->C conversions with the same 6-char identifiers and the same lack of comments.
Yeah, I went to this link :
http://math.stanford.edu/~lekheng/courses/302/wnnr/other-sw.html
and couldn't find a single useful thing there.
I'm currently using some code I found for least squares and SVD and some other things like that, and it's all that ancient horrible Fortran stuff. It seems like nobody has written mathematical code since 1980!?
A pretty nice lib for your basic linear algebra needs is Newmat (downloads here, that page has a strange organization).
The code is (readable!) C++ with useful comments, it does some lazy evaluation to make sure temporaries don't kill performance, it avoids metaprogramming orgies that result in impossible to read error messages, it has optimized routines for most important special cases (upper/lower triangular, symmetric, diagonal and banded matrices) so there's no needless arithmetic on zeros, and performance is okay. Not something I'd use for HPC, but definitely handy to have around.
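For flavor, a least-squares fit in Newmat looks roughly like this. This is from memory of the Newmat docs, so treat the exact headers and calls as approximate :

#include "newmat.h"		// core types : Matrix , ColumnVector , ...
#include "newmatap.h"	// "applications" : SVD , QR decomposition , etc.

int main()
{
	// fit y = a + b*x to four points by least squares :
	Matrix A(4,2);
	A << 1 << 1
	  << 1 << 2
	  << 1 << 3
	  << 1 << 4;
	ColumnVector y(4);
	y << 2 << 3 << 5 << 7;

	// normal-equations form ; fine for a sketch, QR is better conditioned :
	Matrix M = A.t() * A;
	ColumnVector x = M.i() * ( A.t() * y );
	return 0;
}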
ADDENDUM : of course the other thing that's missing is that the checks against zero should really be checks against an epsilon, and in the epsilon region you should use a series expansion.
Of course then saying it has 1 or 2 roots in that region is sort of lame, so just always say it has 2, and they might be very close or equal.
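For example, the small-magnitude root when |A*C| is tiny relative to B^2 can come from a series (SmallRootSeries is just an illustration; pick the epsilon threshold to suit your problem) :

// series for the small root of A t^2 + B t + C when |A*C| << B^2 :
// with u = A*C/B^2 :  t0 = -(C/B) * ( 1 + u + 2*u^2 + O(u^3) )
double SmallRootSeries(const double A,const double B,const double C)
{
	double u = A * C / ( B * B );
	return - ( C / B ) * ( 1.0 + u + 2.0 * u * u );
}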