=== Efficiency ===
Evaluation using the monomial form of a degree <math>n</math> polynomial requires at most <math>n</math> additions and <math>(n^2+n)/2</math> multiplications, if powers are calculated by repeated multiplication and each monomial is evaluated individually. The cost can be reduced to <math>n</math> additions and <math>2n-1</math> multiplications by evaluating the powers of <math>x</math> by iteration.

If numerical data are represented in terms of digits (or bits), then the naive algorithm also entails storing approximately <math>2n</math> times the number of bits of <math>x</math>: the evaluated polynomial has approximate magnitude <math>x^n</math>, and one must also store <math>x^n</math> itself. By contrast, Horner's method requires only <math>n</math> additions and <math>n</math> multiplications, and its storage requirements are only <math>n</math> times the number of bits of <math>x</math>. Alternatively, Horner's method can be computed with <math>n</math> [[fused multiply–add]]s. Horner's method can also be extended to evaluate the first <math>k</math> derivatives of the polynomial with <math>kn</math> additions and multiplications.<ref>{{harvnb|Pankiewicz|1968}}.</ref>

Horner's method is optimal, in the sense that any algorithm to evaluate an arbitrary polynomial must use at least as many operations. [[Alexander Ostrowski]] proved in 1954 that the number of additions required is minimal.<ref>{{harvnb|Ostrowski|1954}}.</ref> [[Victor Pan]] proved in 1966 that the number of multiplications is minimal.<ref>{{harvnb|Pan|1966}}.</ref> However, when <math>x</math> is a matrix, [[Polynomial evaluation#Matrix polynomials|Horner's method is not optimal]].

This assumes that the polynomial is evaluated in monomial form and no [[preconditioning]] of the representation is allowed, which makes sense if the polynomial is evaluated only once.
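The operation counts above can be made concrete with a short Python sketch (illustrative only; the function names are ours, not part of any library). The loop performs exactly one multiplication and one addition per remaining coefficient, and the same recurrence extends to computing the first derivative alongside the value:

```python
def horner(coeffs, x):
    """Evaluate a polynomial by Horner's method.

    coeffs[i] is the coefficient of x**i (lowest degree first).
    Uses n multiplications and n additions for degree n.
    """
    result = coeffs[-1]
    for a in reversed(coeffs[:-1]):
        result = result * x + a  # one multiply and one add per step
    return result


def horner_with_derivative(coeffs, x):
    """Evaluate p(x) and p'(x) together, sketching the extension
    of Horner's method to derivatives (roughly 2n extra operations
    beyond the plain evaluation)."""
    p = coeffs[-1]
    dp = 0
    for a in reversed(coeffs[:-1]):
        dp = dp * x + p  # derivative recurrence reuses the partial value p
        p = p * x + a
    return p, dp


# p(x) = 2 + 3x + x^2 at x = 4: value 30, derivative 3 + 2x = 11
print(horner([2, 3, 1], 4))                  # 30
print(horner_with_derivative([2, 3, 1], 4))  # (30, 11)
```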
However, if preconditioning is allowed and the polynomial is to be evaluated many times, then [[Polynomial evaluation#Evaluation with preprocessing|faster algorithms are possible]]. They involve a transformation of the representation of the polynomial. In general, a degree-<math>n</math> polynomial can be evaluated using only {{floor|''n''/2}}+2 multiplications and <math>n</math> additions.<ref>{{harvnb|Knuth|1997}}.</ref>

====Parallel evaluation====
{{see also|Estrin's scheme}}
A disadvantage of Horner's rule is that all of the operations are [[data dependency|sequentially dependent]], so it is not possible to take advantage of [[instruction level parallelism]] on modern computers. In most applications where the efficiency of polynomial evaluation matters, many low-order polynomials are evaluated simultaneously (for each pixel or polygon in computer graphics, or for each grid square in a numerical simulation), so it is not necessary to find parallelism within a single polynomial evaluation. If, however, one is evaluating a single polynomial of very high order, it may be useful to break it up as follows:
<math display="block">\begin{align} p(x) & = \sum_{i=0}^n a_i x^i \\[1ex] & = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots + a_n x^n \\[1ex] & = \left( a_0 + a_2 x^2 + a_4 x^4 + \cdots\right) + \left(a_1 x + a_3 x^3 + a_5 x^5 + \cdots \right) \\[1ex] & = \left( a_0 + a_2 x^2 + a_4 x^4 + \cdots\right) + x \left(a_1 + a_3 x^2 + a_5 x^4 + \cdots \right) \\[1ex] & = \sum_{i=0}^{\lfloor n/2 \rfloor} a_{2i} x^{2i} + x \sum_{i=0}^{\lfloor n/2 \rfloor} a_{2i+1} x^{2i} \\[1ex] & = p_0(x^2) + x p_1(x^2). \end{align}</math>
More generally, the summation can be broken into ''k'' parts:
<math display="block">p(x) = \sum_{i=0}^n a_i x^i = \sum_{j=0}^{k-1} x^j \sum_{i=0}^{\lfloor n/k \rfloor} a_{ki+j} x^{ki} = \sum_{j=0}^{k-1} x^j p_j(x^k)</math>
where the inner summations may be evaluated using separate parallel instances of Horner's method.
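The even/odd split <math>p(x) = p_0(x^2) + x\,p_1(x^2)</math> can be sketched in Python as follows (an illustrative sketch with made-up function names, not library code); the two inner Horner evaluations are independent of each other and could run in parallel:

```python
def horner(coeffs, x):
    """Plain Horner evaluation; coeffs[i] is the coefficient of x**i."""
    result = coeffs[-1]
    for a in reversed(coeffs[:-1]):
        result = result * x + a
    return result


def split_eval(coeffs, x):
    """Evaluate p(x) as p0(x^2) + x * p1(x^2).

    The two horner() calls have no data dependency on each other,
    so they could be scheduled in parallel.
    """
    even = coeffs[0::2]  # a0, a2, a4, ...
    odd = coeffs[1::2]   # a1, a3, a5, ...
    x2 = x * x
    return horner(even, x2) + x * horner(odd, x2)


# p(x) = 2 + 3x + x^2 at x = 4
print(split_eval([2, 3, 1], 4))  # 30, same as horner([2, 3, 1], 4)
```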
This requires slightly more operations than the basic Horner's method, but allows ''k''-way [[SIMD]] execution of most of them. Modern compilers generally evaluate polynomials this way when advantageous, although for [[floating-point]] calculations this requires enabling (unsafe) reassociative math{{Citation needed|date=February 2022}}. Another use of breaking a polynomial down this way is to calculate steps of the inner summations in an alternating fashion to take advantage of [[instruction-level parallelism]].
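The general ''k''-way decomposition <math>p(x) = \sum_{j=0}^{k-1} x^j p_j(x^k)</math> can be sketched as follows (an illustrative sketch; names are ours). Each of the ''k'' inner evaluations is an independent Horner instance in <math>x^k</math>, and a final Horner pass in <math>x</math> over the ''k'' partial results supplies the <math>x^j</math> weights:

```python
def horner(coeffs, x):
    """Plain Horner evaluation; coeffs[i] is the coefficient of x**i."""
    result = coeffs[-1]
    for a in reversed(coeffs[:-1]):
        result = result * x + a
    return result


def k_way_eval(coeffs, x, k):
    """Evaluate p(x) as sum_j x^j * p_j(x^k).

    Assumes k <= len(coeffs) so every stride-k slice is nonempty.
    The k inner horner() calls are mutually independent, which is
    what makes this form amenable to SIMD execution.
    """
    xk = x ** k
    parts = [horner(coeffs[j::k], xk) for j in range(k)]
    # sum_j x^j * parts[j] is itself a Horner evaluation in x
    return horner(parts, x)


# p(x) = 2 + 3x + x^2 + 5x^3 + 7x^4 at x = 4
print(k_way_eval([2, 3, 1, 5, 7], 4, 3))  # 2142
```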