A General Condition Number for Polynomials

Abstract. This paper presents a generic condition number for polynomials that is useful for the evaluation of finite series in any polynomial basis defined by means of a linear recurrence. This expression extends the classical condition numbers for the power and Bernstein bases, and it also provides a general framework for all the families of orthogonal polynomials, such as the Chebyshev, Legendre, Gegenbauer, and Jacobi polynomials.


1. Introduction.
In many practical applications it is quite common to have a polynomial evaluation inside a more generic algorithm. Let B_n(x) = {b_0(x), ..., b_n(x)} be a basis of the vector space P_n of polynomials of degree lower than or equal to n on an interval I ⊂ R. Thus, given a polynomial p(x) ∈ P_n, there are unique coefficients c_k ∈ R such that

p(x) = Σ_{k=0}^{n} c_k b_k(x),    x ∈ I.

Depending on the polynomial basis, the standard evaluation algorithms are the Horner algorithm (power basis), the de Casteljau algorithm (Bernstein basis), the Clenshaw algorithm (Chebyshev basis), and the extended Clenshaw algorithm (orthogonal polynomial bases). The cases of the power and Bernstein bases are well known, and there are backward, forward, and running-error bounds in the literature [10, 15, 16]. The Clenshaw algorithm [7] is a recursive method to evaluate polynomials represented in the Chebyshev basis. The error analysis of this algorithm has already been considered in [3, 4, 5, 9]. The problem is that the evaluation of these families of polynomials is more complex because it uses the recurrence relations that define the polynomial basis itself. Thus, the backward and forward error bounds are more involved. However, finite series of orthogonal polynomials (or particular cases like Chebyshev, Legendre, or Gegenbauer polynomials) appear in several fields of physics, engineering, and mathematics, as, for example, in the approximation of functions, in the integration of ordinary and partial differential equations by means of the collocation method, and in nuclear physics. Therefore, the development of a general setting that embraces all these cases is an important task.
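For example, the classical Clenshaw recurrence for a Chebyshev series p(x) = Σ_{k=0}^n c_k T_k(x) can be sketched as follows (illustrative Python, not from the paper):

```python
def clenshaw_chebyshev(c, x):
    """Evaluate p(x) = sum_{k=0}^n c[k] * T_k(x) by Clenshaw's recurrence:
    b_k = 2x b_{k+1} - b_{k+2} + c_k for k = n..1, then p = x b_1 - b_2 + c_0."""
    b1 = b2 = 0.0
    for ck in reversed(c[1:]):           # c_n, c_{n-1}, ..., c_1
        b1, b2 = 2.0 * x * b1 - b2 + ck, b1
    return x * b1 - b2 + c[0]
```

The cost is one multiplication and two additions per coefficient, the same order as Horner's rule for the power basis.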
The analysis of the condition number for polynomials in the context of root finding with different bases was first introduced by Wilkinson [19, 20]. Other authors, such as Gautschi [12, 13, 14], compared the condition numbers of polynomials in monomial, orthogonal, and Lagrangian bases. More recently, in [21] the condition of polynomials in different forms was compared by means of an extended definition of the condition number of a polynomial, but this was mainly developed for the root finding problem. Nevertheless, the standard polynomial condition number does not provide a direct error bound, as occurs in the power and Bernstein cases.
In this paper, we present a new expression of the condition number, valid for any polynomial basis obtained from a linear recurrence and useful for polynomial evaluation. This expression has as particular cases the classical ones for the power and Bernstein bases, but it can also be applied to all the families of orthogonal polynomials, like the Chebyshev, Legendre, Gegenbauer, Jacobi, and Sobolev orthogonal polynomial bases. Moreover, the use of this condition number permits us to give, in a direct way, a general theorem about the forward error in the evaluation of finite series in any of these polynomial bases by means of the extended Clenshaw algorithm. To complete the analysis, we also present a running-error bound for the extended algorithm.
The paper is organized as follows. Section 2 introduces basic notation; section 3 presents the new condition number and a general theorem about the error bound; section 4 gives a running-error bound of the extended Clenshaw algorithm; and finally, in section 5, all the bounds are compared in several numerical examples.

2. Basic notation and definitions.
Let us introduce some basic notation for the error analysis and the condition number of polynomial bases. In this paper we assume that the computation in floating-point arithmetic obeys the model

(2.1)    fl(a • b) = (a • b)(1 + ε_1) = (a • b)/(1 + ε_2),

where • ∈ {+, −, ×, /} and |ε_1|, |ε_2| ≤ u (with u the round-off unit). We also assume that all the operations are done with rounding to the nearest, and the computed result of a ∈ R in floating-point arithmetic is denoted by â or fl(a). Following [15], we will use the classic properties in error analysis: if |ε_i| ≤ u and ρ_i = ±1 for i = 1, ..., n, with nu < 1, then

Π_{i=1}^{n} (1 + ε_i)^{ρ_i} = 1 + θ_n,    |θ_n| ≤ γ_n := nu / (1 − nu).

The absolute condition number κ(x) at the point x of a function f : X → Y from a normed vector space X of data to a normed vector space Y of solutions is given by [18]

(2.2)    κ(x) := lim_{δ→0} sup_{‖Δx‖≤δ} ‖f(x + Δx) − f(x)‖ / ‖Δx‖.

A relative condition number may be obtained in a similar way, κ_rel(x) := κ(x) · (‖x‖ / ‖f(x)‖), although for the evaluation of polynomials close to zero the absolute condition number is the standard option. (In the relative condition number it is implicitly assumed that f(x) ≠ 0.) Note that the condition number κ(x) is a first order measure of the sensitivity of the problem with respect to the parameters of the function, and, for smooth functions f, it is related to the Jacobian matrix of f.
In the particular case of the problem of evaluating finite polynomial series, we have that, given the vector space P_n, a basis B_n(x) = {b_0(x), b_1(x), ..., b_n(x)} of P_n, and a polynomial p(x) = Σ_{k=0}^n c_k b_k(x) ∈ P_n, the most standard condition number of p(x) is given by [11]

(2.3)    S_{B_n}(p(x)) := Σ_{k=0}^{n} |c_k| |b_k(x)|.

In this paper, we are going to develop an analysis of the case where the polynomial basis satisfies an (m+1)-order homogeneous linear recurrence; that is, we have a polynomial basis {p_k(x)} where, for some p_0(x),

(2.4)    p_k(x) = Σ_{j=1}^{m} a_{k,j}(x) p_{k−j}(x),    k ≥ 1    (with a_{k,j}(x) = 0 when k < j).

For these polynomial bases the standard general algorithm for the evaluation of the polynomial p(x) = Σ_{k=0}^n c_k p_k(x) is the extended Clenshaw algorithm. The Clenshaw algorithm was designed for the evaluation of finite series of Chebyshev orthogonal polynomials, but it can be extended [5] to any polynomial basis that satisfies a homogeneous linear recurrence relation. (For instance, the Horner algorithm can be seen as the particular case of the extended Clenshaw algorithm for monomial bases.)

The extended Clenshaw algorithm.
Input: x, n, m, p_0(x), {c_i}, {a_{i+j,j}}
Initialize variables: q_{n+1} = ··· = q_{n+m} = 0
for i = n to 0 by −1

(2.5)    q_i = c_i + Σ_{j=1}^{m} a_{i+j,j}(x) q_{i+j}

end for
Output: p(x) = q_0 · p_0(x)

In order to help our study we introduce the absolute polynomial basis associated with the basis {p_k(x)} [2, 3].
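The algorithm above can be sketched in Python as follows; the callable a(k, j, x), returning the recurrence coefficient a_{k,j}(x), and all names are illustrative assumptions, not from the paper:

```python
def extended_clenshaw(c, a, m, p0, x):
    """Evaluate p(x) = sum_{k=0}^n c[k] * p_k(x) for a basis that satisfies the
    (m+1)-order recurrence p_k(x) = sum_{j=1}^m a_{k,j}(x) p_{k-j}(x).
    `a(k, j, x)` returns a_{k,j}(x); `p0` is the value p_0(x)."""
    n = len(c) - 1
    q = [0.0] * (n + m + 1)              # q[n+1] = ... = q[n+m] = 0
    for i in range(n, -1, -1):           # backward recurrence on the q_i
        q[i] = c[i] + sum(a(i + j, j, x) * q[i + j] for j in range(1, m + 1))
    return q[0] * p0
```

With m = 2, a_{k,1}(x) = 2x (and a_{1,1}(x) = x), a_{k,2}(x) = −1, and p_0(x) = 1, this reproduces the classical Clenshaw algorithm for Chebyshev series.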
Definition 2.1. Let {p_k(x)} be a polynomial basis that satisfies the homogeneous linear recurrence (2.4). Then we define the absolute polynomial basis associated with the basis {p_k(x)} as the basis {p̄_k(x)} that satisfies the homogeneous linear recurrence

(2.6)    p̄_k(x) = Σ_{j=1}^{m} |a_{k,j}(x)| p̄_{k−j}(x),    p̄_0(x) = |p_0(x)|.

Note that p̄_k(x) ≥ 0 and |p_k(x)| ≤ p̄_k(x) for all k, x.

3. A general condition number for polynomials.
The problem with the standard condition number is that S_{B_n}(p(x)) was defined for the power basis and the Bernstein basis [11], where the evaluation of the polynomials of the basis is particularly simple. In the case of the power basis, X = {1, x, x^2, ..., x^n}, we have [15] the following.
Theorem 3.1. If p(x) = Σ_{k=0}^n c_k x^k and p̂(x) is the value computed by the Horner algorithm, then

|p(x) − p̂(x)| ≤ γ_{2n} Σ_{k=0}^{n} |c_k| |x|^k.

Theorem 3.2. If p(t) = Σ_{k=0}^n c_k b_k^n(t) (with the Bernstein basis {b_k^n(t)} on t ∈ [0, 1]) and p̂(t) is the value computed by the de Casteljau algorithm, then

|p(t) − p̂(t)| ≤ γ_{2n} Σ_{k=0}^{n} |c_k| b_k^n(t).
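The Horner bound above can be computed alongside the evaluation itself; a minimal sketch, assuming the theorem's γ_{2n} = 2nu/(1 − 2nu) factor (function name and interface are illustrative):

```python
def horner_with_apriori_bound(c, x, u=2.0 ** -53):
    """Horner evaluation of p(x) = sum_k c[k] x^k together with the a priori
    forward bound gamma_{2n} * sum_k |c_k| |x|^k."""
    n = len(c) - 1
    s, cond = c[-1], abs(c[-1])
    for ck in reversed(c[:-1]):          # coefficients of degree n-1 down to 0
        s = x * s + ck                   # Horner step
        cond = abs(x) * cond + abs(ck)   # Horner on the absolute data
    gamma_2n = (2.0 * n * u) / (1.0 - 2.0 * n * u)
    return s, gamma_2n * cond
```

The bound costs the same as a second Horner evaluation on |c_k| and |x|.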
Note that these theorems assume that the coefficients c_k are known and representable exactly in the computer. If we have ĉ_k = c_k(1 + θ_{n_c}) with |θ_{n_c}| ≤ γ_{n_c}, then we will have the factor γ_{2n+n_c} instead of γ_{2n} in the above theorems. In a similar way, if we have x̂ = x(1 + θ_1), then we will have the factor γ_{3n}. In any case, the standard condition number provides useful information on the behavior of the algorithm used to evaluate the polynomial for both bases. What happens for an orthogonal polynomial basis? In this case the bounds that exist in the literature are not so compact, and the standard condition number is not so useful.
Theorem 3.3. If p(x) = Σ_{k=0}^n c_k p_k(x), where the polynomial basis Φ = {p_i(x)} satisfies the (m+1)-order homogeneous linear recurrence (2.4), and p̂(x) is the value computed by the extended Clenshaw algorithm, then a componentwise bound of |p(x) − p̂(x)| holds, where the absolute value is to be understood componentwise; A^0 denotes the matrix A except for the last row, which is the null vector; P(x) ∈ R^{n+1} is the row vector P(x) = (p_0(x), ..., p_n(x)); I_k ∈ R^{k×k} is the identity matrix; R(x) is given by (3.1); and R_m(x) is the matrix R(x) but only with bandwidth m instead of m + 1.
This bound, although in some sense it resembles the bounds for the power and Bernstein bases, has a quite technical term. The reason is quite simple: in the power and Bernstein bases the algorithm to evaluate the polynomials of the basis is trivial, whereas in an orthogonal polynomial basis (or a general basis obtained from a linear recurrence) we have to take into account an extra source of errors, the linear recurrence of the basis.
First we obtain a backward error estimate from (2.5). If a_{i+j,j}(x) := a¹_{i+j,j} x + a⁰_{i+j,j} and we suppose that x̂ = x(1 + δ) and â^l_{i+j,j} = a^l_{i+j,j}(1 + θ_{n_a}), for l = 0, 1, with |θ_{n_a}| ≤ γ_{n_a} (3.3), then the computed values correspond to the polynomials obtained by perturbing the coefficients in the recurrence relation (2.4) by ã_{k,j} = a_{k,j}(1 + δ_{a_{k,j}}). Now, we extend the definition of the condition number to avoid the problem of the generic polynomial basis. First, to help in the analysis, we define the relative error counter ⟨n⟩_ū, in a similar way as in the standard notation for rounding error analysis [15], but now using a new ū instead of the round-off unit u. We define

(3.5)    ū := max{|δ|, |δ_{c_k}|, |δ_{a_{k,j}}| : 0 ≤ k ≤ n, 1 ≤ j ≤ m}.

Next, in a similar way to [21], we define for p, p̃ ∈ P_n the perturbation norm ‖p − p̃‖_{c,a}. As we are considering n > m, this definition measures the perturbations in the coefficients c_k and a_{k,j}(x). Thus, the absolute condition number for p(x) follows from the general definition of the condition number of a function (2.2); therefore, we have to bound the corresponding perturbation term. If we take the recurrence relation (2.4), then, for k ≥ 2m, we can expand the recurrence one step further, obtaining p_k(x) in terms of p_{k−2}(x), ..., p_{k−2m}(x) for some coefficients a_{k,t,2}(x). (We do not need them explicitly.) By continuing this process, we can express p_k(x) using only the first polynomial p_0(x),

p_k(x) = Σ_i A_{k,i}(x) p_0(x),

where A_{k,i}(x) is the sum of products of i unperturbed recurrence coefficients of (2.4), and so these are polynomial coefficients, A_{k,i}(x) ∈ P_i. For example, A_{k,k}(x) = Π_{t=1}^{k} a_{t,1}(x), and it is perturbed by the largest error term of p̃_k(x), which is given by ⟨k⟩_ū = Π_{t=1}^{k} (1 + δ_{a_{t,1}}). Note that p_k(x) at the final stage just depends on the coefficients of the recursion and the first polynomial p_0(x). Therefore, taking into account that the recurrence with absolute values gives us the recurrence of the absolute polynomials (2.6), we obtain a bound in terms of the absolute basis. Now, from (3.8) and (3.10), we arrive at the first order estimate (3.11).
Therefore, considering γ_{n+1,ū} ≈ (n + 1)ū and ‖p − p̃‖_{c,a} = (n + 1)ū, and taking into account the general expression of the condition number (2.2), we define our general condition number for polynomials.

Definition 3.4. Let p(x) = Σ_{k=0}^n c_k p_k(x), where Φ = {p_0(x), ..., p_n(x)} is a polynomial basis that satisfies an (m+1)-order homogeneous linear recurrence relation (a_{i,j} = 0 when i < j).
Then, the absolute condition number of p(x) is given by

(3.13)    C(p(x)) := Σ_{k=0}^{n} |c_k| p̄_k(x),

where {p̄_0(x), ..., p̄_n(x)} is the basis of the absolute polynomials associated with the basis Φ.

Note that this new condition number extends the classical one. For the power basis, we have that p_i(x) = x^i, and the recurrence now is trivial, p_i(x) = x · p_{i−1}(x). Therefore, |p_i(x)| = p̄_i(x), and so both condition numbers are the same. In the case of the Bernstein polynomials, B = {b_0^n(t), b_1^n(t), ..., b_n^n(t)}, the recurrence is given by the corresponding relation, and a similar argument applies.

Theorem 3.5. If p(x) = Σ_{k=0}^n c_k p_k(x), where the polynomial basis Φ = {p_i(x)} satisfies the (m+1)-order homogeneous linear recurrence (2.4), and p̂(x) is the value computed by the extended Clenshaw algorithm, then, up to first order in u,

(3.14)    |p(x) − p̂(x)| ≤ (n + 1)(μ + 1) u Σ_{k=0}^{n} |c_k| p̄_k(x),

where {p̄_i(x)} is the basis of the absolute polynomials (2.6) associated with the basis {p_i(x)} and μ = max{n_c, n_a + m + 3}, with n_a and n_c defined in (3.3).
Proof. We have discussed the change in the polynomial evaluation with regard to the perturbations of the coefficients c_k of the polynomial and those of the coefficients a_{k,j}(x) of the homogeneous linear recurrence. From (3.4), the perturbations are bounded with μ = max{n_c, n_a + m + 3}. Thus, from (3.5) we deduce that ū ≤ γ_{μ+1}. Then, taking into account γ_{n+1,ū} ≤ γ_{(n+1)(μ+1)}, using the error bound (3.11) and the new definition (3.13), we have the expected result (3.14).
In the previous theorem we have considered that the computation of the coefficients has been done with the same precision as the rest of the computations. However, sometimes it may be advisable to calculate these coefficients in multiple precision [6] with a new round-off unit ũ ≪ u. In this case the error in the computation of the coefficients is assumed to be negligible, and the bound simplifies accordingly. Note that Theorem 3.5 may be applied to all the standard polynomial bases that use linear recurrences, such as the power basis and the Chebyshev, Legendre, Gegenbauer, Jacobi, and Sobolev orthogonal polynomial bases. This result provides a compact bound that gives information about the basis and the recurrence relation that permits one to obtain the basis. From Theorem 3.5 (for p(x) ≠ 0) we may obtain, as is usual in the literature [15, Chapter 5], the relative error bound (3.16).
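A minimal Python sketch of this condition number, under the reading C(p)(x) = Σ_k |c_k| p̄_k(x) with p̄_k generated by the recurrence of Definition 2.1 (the callable a(k, j, x) and all names are illustrative assumptions):

```python
def general_condition(c, a, m, p0, x):
    """General absolute condition number sum_k |c_k| * pbar_k(x), where the
    absolute basis pbar_k follows the recurrence of the original basis with
    the coefficients a_{k,j}(x) replaced by |a_{k,j}(x)|, pbar_0 = |p_0(x)|."""
    n = len(c) - 1
    pbar = [abs(p0)]
    for k in range(1, n + 1):
        pbar.append(sum(abs(a(k, j, x)) * pbar[k - j]
                        for j in range(1, m + 1) if k >= j))
    return sum(abs(ck) * pk for ck, pk in zip(c, pbar))
```

For the power basis (a(k, 1, x) = x, m = 1, p_0 = 1) this reduces to the classical Σ_k |c_k| |x|^k, consistent with the remark that both condition numbers coincide there.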

4. A general running-error bound for polynomial evaluation.
The extended Clenshaw algorithm provides us with the general framework for the evaluation of polynomials expressed in bases obtained from linear recurrences. This section is devoted to a running-error analysis of the algorithm, which provides an a posteriori error bound. In the next theorem we assume that p_0(x) can be computed accurately.
Proof. With ε_i(x) := q_i(x) − q̂_i(x), we have ε_{n+1} = ··· = ε_{n+m} = 0. For i = n, ..., 0 we have to prove the stated bound, where π_i(x) is given by (4.2). Applying formula (2.1) to the computation of the extended Clenshaw algorithm, and assuming the corresponding order of computation (where a ⊕ b := fl(a + b) and a ⊗ b := fl(a × b) represent the corresponding floating-point operations), we can derive a recurrence for the computed quantities. Taking into account in the previous formula that â_{r,s}(x) = a_{r,s}(x)(1 + θ_{n_a+3}), we can deduce an expression for ε_i(x) involving the terms (q̂_{i+j}(x) − ε_{i+j}(x)) θ_{n_a+3}. Taking absolute values and keeping terms up to first order in u, we obtain the stated recurrence for π_i(x), and by induction we obtain (4.3).
In a similar way to Theorem 3.5, if the coefficients have been computed in multiple precision, we obtain instead a bound π_0(x) derived from the corresponding simplified recurrence. The complexity of the algorithm of the previous theorem is not significantly greater than the complexity of the extended Clenshaw algorithm, and it can be incorporated into the evaluation algorithm. The pseudocode of the complete algorithm is given by the following (where ⊕ means all the addition operations are performed in floating-point arithmetic).
The extended Clenshaw algorithm with running error bound.
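As a hedged sketch of the same idea in its simplest instance, the Horner particular case (following Higham's classical running-error analysis [15]; this is not the paper's extended pseudocode, and the interface is illustrative):

```python
def horner_with_running_bound(c, x, u=2.0 ** -53):
    """Horner evaluation of p(x) = sum_k c[k] x^k with a running a posteriori
    error bound: mu accumulates |x| mu + |s| along the recurrence, and the
    final bound is u * (2 mu - |s|)."""
    s = c[-1]
    mu = abs(s) / 2.0
    for ck in reversed(c[:-1]):          # coefficients of degree n-1 down to 0
        s = x * s + ck
        mu = abs(x) * mu + abs(s)
    return s, u * (2.0 * mu - abs(s))
```

The bound is computed from the intermediate values actually produced, so it adapts to the data instead of assuming the worst case, at the cost of roughly doubling the work per step.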

5. Numerical tests.
In this section we compare the different error bounds for the evaluation of the Wilkinson polynomials [19, 20] given in (5.1). As we focus on the use of orthogonal polynomial bases, we also study the behavior in the evaluation of the approximation of the function f(x) by means of a polynomial of degree 30. Now we use the nominal interval x ∈ [−1, 1] for the orthogonal bases. The approximation is obtained by truncating the development in Chebyshev polynomials of the first kind. Later, the coefficients for the other bases are obtained exactly by using the algebraic manipulator Mathematica.
We have chosen three polynomial bases: the power basis; the polynomial basis of the Chebyshev polynomials of the first kind {T_i(x)} (in the case of the interval t ∈ [0, 1] we use the shifted Chebyshev polynomials {T*_i(t)} [1]); and a polynomial basis of Gegenbauer polynomials with the parameter λ = 5/2, {C^λ_i(x)} (in the case of the interval [0, 1] we use the shifted Gegenbauer polynomials {C*^λ_i(t)} [1]). The Gegenbauer orthogonal polynomials satisfy a three-term recurrence relation [17]. In Figure 5.1 we can see the relative errors of the Horner, Clenshaw, and extended Clenshaw algorithms for the three polynomial bases in the evaluation of the Wilkinson polynomials p_1(t) and p_2(t). We can observe that the orthogonal polynomial bases present problems when evaluating the ill-conditioned Wilkinson polynomial p_2(t), as most of the zeros of the function are located close to one end of the evaluation interval. The running relative error bound of Theorem 4.1 provides a sharp estimate of the relative error in all the cases. (Note that the running-error bound of the Horner algorithm is just a particular case.) The use of the new condition number given by Definition 3.4 permits us to have an a priori error bound given by Theorem 3.5 and (3.16). This result gives a bound quite close to the running-error bound, and so it provides a useful formula for a general setting.
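For reference, Gegenbauer values can be generated from the standard three-term recurrence k C_k^λ(x) = 2(k + λ − 1) x C_{k−1}^λ(x) − (k + 2λ − 2) C_{k−2}^λ(x) with C_0^λ = 1 and C_1^λ(x) = 2λx; a sketch (unshifted polynomials only, not the paper's shifted variants):

```python
def gegenbauer_values(lam, x, n):
    """Values C_0^lam(x), ..., C_n^lam(x) of the Gegenbauer polynomials via
    the standard three-term recurrence
        k C_k = 2(k + lam - 1) x C_{k-1} - (k + 2 lam - 2) C_{k-2}."""
    vals = [1.0]
    if n >= 1:
        vals.append(2.0 * lam * x)       # C_1^lam(x) = 2 lam x
    for k in range(2, n + 1):
        vals.append((2.0 * (k + lam - 1.0) * x * vals[k - 1]
                     - (k + 2.0 * lam - 2.0) * vals[k - 2]) / k)
    return vals
```

For λ = 1 the values coincide with the Chebyshev polynomials of the second kind U_k(x), which gives a convenient sanity check.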
In Figure 5.2 we study the behavior when evaluating a polynomial of degree 30 approximating the function f(x). Now, all the polynomial evaluations are well conditioned, as shown by all the theoretical bounds. The best performance is observed for the orthogonal polynomial bases, as one may expect due to the good properties of these polynomial bases in function approximation. This situation is clearer for the Chebyshev polynomials. (The Gegenbauer polynomials are not a standard choice, but the figure shows that all the results may be applied to this more general case.) Again, the new error bounds give us quite accurate information about the evaluation process.
Obviously, the relative errors are unbounded when the exact value of the function is zero. Therefore, we can observe peaks in the pictures when the value of x is very close to a zero of the function. However, the error bounds remain valid. In Figure 5. we show the relative and absolute errors and their corresponding bounds around π/8 (x ∈ [π/8 − 10^{−8}, π/8 + 10^{−8}]), one of the zeros of f. We can see how the absolute errors have similar global behavior, while the relative errors have, as expected, worse behavior near the zeros of the function. In any case, we can observe that both error bounds provide useful information.

6. Conclusions.
This paper introduces a generic polynomial condition number useful for any polynomial basis defined by a linear recurrence. This condition number is motivated by the lack of a useful one for orthogonal polynomial bases in the literature. The standard polynomial condition number only gives us information about the error of polynomial evaluation when we use simple algorithms like the Horner (power basis) and the de Casteljau (Bernstein basis) algorithms. This new condition number extends the standard one for the power and Bernstein bases, and it provides an elegant forward error bound when the evaluation of finite series is carried out by the extended Clenshaw algorithm. A running-error bound is obtained, and the pseudocode that permits us to apply the extended Clenshaw algorithm together with the running-error bound is provided. Finally, different numerical tests have been developed to show how the different bounds fit the real error. These tests show the usefulness of these bounds.


Theorem 4.1. A running-error bound for the evaluation, by means of the extended Clenshaw algorithm, of a polynomial p(x) = Σ_{k=0}^n c_k p_k(x), written as a linear combination of a polynomial basis Φ = {p_k(x)} that satisfies the (m+1)-order homogeneous linear recurrence (2.4), is given by (4.3).

Fig. 5.1. Decimal logarithm of the relative error and of the relative error bounds (the a priori bound (3.16) and the running-error bound (4.6)) for the evaluation of the Wilkinson polynomials (5.1).