Comments on cbloom rants: 11-12-10 - Some notes on function approximation by iteration

cbloom (2010-11-17 16:49):

Yeah, you're right that it should be minimax/Remez polynomials, though if you bring x into a standard power-of-two range like [1,2), then just using the L2 norm instead of the ratio isn't that far off.

(BTW I just noticed Boost now has a Remez solver:

http://www.cppprog.com/boost_doc/libs/math/doc/sf_and_dist/html/math_toolkit/backgrounders/remez.html )

ryg (2010-11-17 16:30):

"we could also do r = x_n * (1+e) or r = x_n/(1+e) or whatever. Sometimes these give better iterations (in terms of complexity vs. accuracy)."

For the specific case of reciprocal, square root, and reciprocal square root computations, this is called "Goldschmidt's algorithm". Both Newton-Raphson and Goldschmidt actually evaluate the same series expansion in this case, but Goldschmidt is less serial, which makes it popular for FP dividers. See <a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.68.7202" rel="nofollow">this paper</a> for example.

There are also papers on HW implementations and math libraries for newer architectures that are worth reading, e.g. <a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.24.5177" rel="nofollow">this one</a> (also Google for the paper and look at the citations).
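A minimal numeric sketch of the Newton-Raphson vs. Goldschmidt distinction described above (function names and the starting estimate are made up for illustration): both iterations compute the same values in exact arithmetic, but Goldschmidt's two multiplies per step are independent of each other.

```python
def recip_newton(d, x, iters=4):
    # Newton-Raphson for 1/d: x <- x*(2 - d*x).
    # The second multiply depends on the first, so each step is fully serial.
    for _ in range(iters):
        x = x * (2.0 - d * x)
    return x

def recip_goldschmidt(d, x, iters=4):
    # Goldschmidt: drive dd -> 1 while n tracks 1/d in lockstep.
    # The two multiplies per step (n*f and dd*f) are independent,
    # which is why FP divider hardware favors this form.
    n, dd = x, d * x
    for _ in range(iters):
        f = 2.0 - dd
        n, dd = n * f, dd * f
    return n

# Same series expansion, so the results agree (up to rounding):
a = recip_newton(3.0, 0.3)       # converges to 1/3
b = recip_goldschmidt(3.0, 0.3)  # same values step for step
```

Both converge quadratically from the same initial estimate; the difference is only in the dependency structure of the multiplies, not in the series being evaluated.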
Lots of good stuff in there.

"You could use Legendre Polynomials to find the actual best N-term approximation (or any other orthogonal polynomial basis)"

Orthogonal polynomials give you an optimal approximation w.r.t. weighted L2 norms (int_a^b (f_approx(x) - f_real(x))^2 w(x) dx, where w(x) is a weighting function corresponding to your choice of basis). So they minimize average error, but to minimize maximum absolute/relative error (which is usually what you want) you need a different approach (minimax approximation). Still, they're usually significantly better than Taylor polynomials, and the coefficients are easy to determine even without a computer algebra system (unlike minimax polynomials).

ryg
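To make the Legendre-vs-Taylor point above concrete, here's a sketch (assuming NumPy; the helper name is made up) that L2-projects exp(x) onto Legendre polynomials P_0..P_3 on [-1,1] with w(x) = 1 and compares the maximum error against the degree-3 Taylor polynomial:

```python
import numpy as np
from numpy.polynomial import legendre as L

def legendre_l2_coeffs(f, deg):
    # L2 projection onto the Legendre basis on [-1,1] (weight w(x) = 1):
    #   c_k = (2k+1)/2 * integral_{-1}^{1} f(x) P_k(x) dx
    # evaluated with Gauss-Legendre quadrature.
    x, w = L.leggauss(64)
    fx = f(x)
    c = np.empty(deg + 1)
    for k in range(deg + 1):
        pk = L.legval(x, np.eye(deg + 1)[k])  # P_k sampled at the nodes
        c[k] = (2 * k + 1) / 2.0 * np.sum(w * fx * pk)
    return c

xs = np.linspace(-1.0, 1.0, 1001)
c = legendre_l2_coeffs(np.exp, 3)
err_l2 = np.max(np.abs(L.legval(xs, c) - np.exp(xs)))
err_taylor = np.max(np.abs(1 + xs + xs**2 / 2 + xs**3 / 6 - np.exp(xs)))
# The L2 fit beats Taylor on max error here, but it still isn't minimax:
# a Remez polynomial of the same degree would have smaller max error.
```

Choosing the Chebyshev weight w(x) = 1/sqrt(1-x^2) instead of w(x) = 1 pushes more of the error budget toward the interval's interior and typically lands much closer to the true minimax polynomial.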