We can approximate smooth functions with polynomials.
Polynomials can approximate some functions
In our study of mathematics, we’ve found that some functions are easier to work with than others. For instance, if you are doing calculus, polynomials are typically “easy” to work with because they are easy to differentiate and integrate. Other functions, such as $\sin x$ and $e^x$, are more difficult to work with. However, there are polynomials that mimic the behavior of these functions near zero.
How do we produce approximating polynomials?
Cutting straight to the point, the approximating polynomials we’ll discuss are called Taylor polynomials and Maclaurin polynomials.
- The Taylor polynomial of degree $n$ of $f$ at $x=c$ is
  $$p_n(x) = f(c) + f'(c)(x-c) + \frac{f''(c)}{2!}(x-c)^2 + \cdots + \frac{f^{(n)}(c)}{n!}(x-c)^n.$$
- A special case of the Taylor polynomial is the Maclaurin polynomial, where $c=0$. That is, the Maclaurin polynomial of degree $n$ of $f$ is
  $$p_n(x) = f(0) + f'(0)\,x + \frac{f''(0)}{2!}x^2 + \cdots + \frac{f^{(n)}(0)}{n!}x^n.$$
We say these polynomials have a center of $x=c$, and so Maclaurin polynomials are Taylor polynomials centered at zero.
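To make the definition concrete, here is a minimal sketch (assuming SymPy is available; the helper name `taylor_polynomial` is our own) that builds $p_n$ directly from the formula above:

```python
# A minimal sketch (assuming SymPy is available): build the degree-n Taylor
# polynomial of f about x = c directly from the definition above.
import sympy as sp

def taylor_polynomial(f, x, c, n):
    """Return p_n(x) = sum over k = 0..n of f^(k)(c)/k! * (x - c)**k."""
    return sum(sp.diff(f, x, k).subs(x, c) / sp.factorial(k) * (x - c)**k
               for k in range(n + 1))

x = sp.symbols('x')
p3 = taylor_polynomial(sp.sin(x), x, 0, 3)   # third Maclaurin polynomial of sin x
print(sp.expand(p3))                         # -x**3/6 + x
```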
In the previous example, why did we choose to use $c=1$ for the center? There are several reasons. We often like to use $c=0$ for the center of a Taylor polynomial (which is why such polynomials have a special name). In the case of $f(x)=\ln x$, however, $\ln 0$ is undefined. So, we must choose a different center. Since we have to evaluate the function at the center, any center point we choose should be a value for which we know and can easily compute the function value. In the case of $\ln x$, the value we know best is $\ln 1 = 0$, so $c=1$ is a good choice for the center. Of course, in general we can choose any center we like, but this will give us a different polynomial (and may make our calculations more difficult).
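As a quick illustration of this choice of center, the following sketch (again assuming SymPy; the degree and evaluation point are our own choices) builds the degree-5 Taylor polynomial of $\ln x$ at $c=1$ and compares its value at $x=1.5$ with $\ln 1.5$:

```python
# A quick look at this choice of center (a sketch assuming SymPy): the
# degree-5 Taylor polynomial of ln(x) at c = 1, evaluated at x = 1.5.
import sympy as sp

x = sp.symbols('x')
c, n = 1, 5
p5 = sum(sp.diff(sp.log(x), x, k).subs(x, c) / sp.factorial(k) * (x - c)**k
         for k in range(n + 1))
print(p5.subs(x, sp.Rational(3, 2)).evalf())  # about 0.40729
print(sp.log(sp.Rational(3, 2)).evalf())      # ln(1.5) is about 0.40546
```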
You may be wondering how exactly Taylor polynomials and Maclaurin polynomials approximate these functions. Here’s the idea: suppose you have two functions $f$ and $g$ which are each as differentiable as we need them to be. If for some specific value $x=c$ we have that
$$f(c) = g(c),\quad f'(c) = g'(c),\quad f''(c) = g''(c),\quad \ldots,\quad f^{(n)}(c) = g^{(n)}(c),$$
then it makes sense that $f(x) \approx g(x)$ for all $x$ near $c$. The more derivatives we use, the better our approximation (usually) is. In that sense, we are just working with a better version of linear approximation – we could call this polynomial approximation! The Taylor and Maclaurin polynomials are “cooked up” so that their values and the values of their derivatives equal those of the related function at $x=c$. Check it out: here we see the third Maclaurin polynomial for $f(x)=\sin x$,
$$p_3(x) = x - \frac{x^3}{3!};$$
we see
$$p_3(0) = 0 = \sin(0),\quad p_3'(0) = 1 = \cos(0),\quad p_3''(0) = 0 = -\sin(0),\quad p_3'''(0) = -1 = -\cos(0).$$
Note that in the case of sine, $p_3$ shares the function’s value at $x=0$ and shares the first four derivatives there, though the fifth derivative is different. Let’s see a graph to help us understand what is going on.
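Alongside a graph, a short symbolic check (a sketch assuming SymPy is available) makes the matching derivatives concrete:

```python
# A sketch (assuming SymPy): compare the derivatives of sin(x) and its third
# Maclaurin polynomial p_3(x) = x - x**3/6 at x = 0.
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)
p3 = x - x**3 / 6

for k in range(6):
    print(k, sp.diff(f, x, k).subs(x, 0), sp.diff(p3, x, k).subs(x, 0))
# The two columns agree for k = 0, 1, 2, 3, 4 and first differ at k = 5.
```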
Taylor’s theorem
Again, let’s get to the point by stating Taylor’s theorem (which is a generalization of the mean value theorem):
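One standard formulation, using the Lagrange form of the remainder, reads as follows. Suppose $f$ and its first $n+1$ derivatives exist on an interval containing $c$ and $x$. Then:
- there is some $z$ between $c$ and $x$ such that
$$f(x) = f(c) + f'(c)(x-c) + \frac{f''(c)}{2!}(x-c)^2 + \cdots + \frac{f^{(n)}(c)}{n!}(x-c)^n + R_n(x), \quad\text{where}\quad R_n(x) = \frac{f^{(n+1)}(z)}{(n+1)!}(x-c)^{n+1};$$
- consequently,
$$\big|R_n(x)\big| \le \frac{\max\big|f^{(n+1)}(z)\big|}{(n+1)!}\,\big|x-c\big|^{n+1},$$
where the maximum is taken over all $z$ between $c$ and $x$.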
The first part of Taylor’s theorem states that $f(x) = p_n(x) + R_n(x)$, where $p_n(x)$ is the $n$th order Taylor polynomial and $R_n(x)$ is the remainder, or error, in the Taylor approximation. The second part gives bounds on how big that error can be. If the $(n+1)$st derivative is large, the error may be large; if $x$ is far from $c$, the error may also be large. However, the $(n+1)!$ term in the denominator tends to ensure that the error gets smaller as $n$ increases.
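As a rough illustration of how the bound is used, here is a sketch (plain Python; the choice of $\sin x$, its third Maclaurin polynomial, and the point $x=0.5$ are our own) comparing the guaranteed bound with the actual error:

```python
# A sketch of using the error bound: approximate sin(0.5) by the third
# Maclaurin polynomial p_3(x) = x - x**3/6.  Every derivative of sin is
# bounded by 1, so we may take max|f^(4)(z)| <= 1.
import math

x, n, c, M = 0.5, 3, 0, 1
p3 = x - x**3 / 6
bound = M / math.factorial(n + 1) * abs(x - c) ** (n + 1)
actual = abs(math.sin(x) - p3)

print(f"guaranteed bound: {bound:.6f}")   # 0.002604
print(f"actual error:     {actual:.6f}")  # 0.000259
```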
The following example computes error estimates for the approximations made earlier.
We practice again. This time, we use Taylor’s theorem to find a degree $n$ that guarantees our approximation is within a certain amount.
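In the same spirit, here is a small sketch (our own example: approximating $e = e^1$ to within $10^{-6}$, using the safe overestimate $|f^{(n+1)}(z)| \le 3$ on $[0,1]$) that searches for the smallest such $n$:

```python
# A sketch: find the smallest degree n for which Taylor's theorem guarantees
# that the Maclaurin polynomial of e^x approximates e = e^1 to within 1e-6.
# On [0, 1] we have |f^(n+1)(z)| = e^z <= 3, a safe overestimate of e.
import math

M, x, c, tol = 3, 1, 0, 1e-6
n = 0
while M / math.factorial(n + 1) * abs(x - c) ** (n + 1) >= tol:
    n += 1
print(n)   # 9, so a degree-9 Maclaurin polynomial suffices
```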
Connections to differential equations
Our final example gives a brief introduction to using Taylor polynomials to solve differential equations.
Suppose a function $y=f(x)$ satisfies
- $y' = y^2$, and
- $y(0) = 1$.
Find the degree $3$ Maclaurin polynomial $p_3(x)$ of $y=f(x)$.
Now we find information about $y''(0)$. Starting with $y' = y^2$, take the derivative of both sides with respect to $x$. That means we must use implicit differentiation:
$$\frac{d}{dx}\big(y'\big) = \frac{d}{dx}\big(y^2\big) \quad\Longrightarrow\quad y'' = 2y\,y'.$$
Now evaluate both sides at $x=0$, using $y(0)=1$ and $y'(0) = y(0)^2 = 1$:
$$y''(0) = 2\,y(0)\,y'(0) = 2(1)(1) = 2.$$
We repeat this once more to find $y'''(0)$. We again use implicit differentiation; this time the Product Rule is also required:
$$\frac{d}{dx}\big(y''\big) = \frac{d}{dx}\big(2y\,y'\big) \quad\Longrightarrow\quad y''' = 2(y')^2 + 2y\,y''.$$
Now evaluate both sides at $x=0$:
$$y'''(0) = 2\big(y'(0)\big)^2 + 2\,y(0)\,y''(0) = 2(1)^2 + 2(1)(2) = 6.$$
In summary, we have:
$$y(0) = 1,\quad y'(0) = 1,\quad y''(0) = 2,\quad y'''(0) = 6.$$
We can now form $p_3(x)$:
$$p_3(x) = 1 + x + \frac{2}{2!}x^2 + \frac{6}{3!}x^3 = 1 + x + x^2 + x^3.$$
It turns out that the differential equation we started with, $y' = y^2$, where $y(0)=1$, can be solved without too much difficulty: $y = \dfrac{1}{1-x}$. This makes sense in regard to the previous examples.
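The hand computation above can be mirrored symbolically; the following sketch (assuming SymPy; the variable names are our own) eliminates $y'$ using the differential equation at each step and recovers the same coefficients:

```python
# A sketch (assuming SymPy): starting from y' = y**2, repeatedly differentiate,
# eliminate y' via the ODE, and read off y^(k)(0) by substituting y(0) = 1.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

ode_rhs = y(x)**2            # the ODE: y'(x) = y(x)**2
known = {y(x): 1}            # y(0) = 1

expr = ode_rhs               # an expression for y'(x) in terms of y(x) only
coeffs = [sp.Integer(1)]     # y(0)
for k in range(1, 4):
    coeffs.append(expr.subs(known))                       # value of y^(k)(0)
    expr = sp.diff(expr, x).subs(y(x).diff(x), ode_rhs)   # expression for y^(k+1)(x)

p3 = sum(c / sp.factorial(k) * x**k for k, c in enumerate(coeffs))
print(sp.expand(p3))   # x**3 + x**2 + x + 1
```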
Taylor polynomials are very useful approximations in two basic situations:
- (a) When $f(x)$ is known, but perhaps “hard” to compute directly. For instance, we can define $\cos x$ as either the ratio of sides of a right triangle (“adjacent over hypotenuse”) or with the unit circle. However, neither of these provides a convenient way of computing $\cos 2$. A Taylor polynomial of sufficiently high degree can provide a reasonable method of computing such values using only operations usually hard-wired into a computer ($+$, $-$, $\times$ and $\div$); a sketch of this idea appears after this list. However, even though Taylor polynomials could be used in calculators and computers to calculate values of trigonometric functions, in practice they generally aren’t. Other more efficient and accurate methods have been developed, such as the CORDIC algorithm.
- (b) When $f(x)$ is not known, but information about its derivatives is known. This occurs more often than one might think, especially in the study of differential equations.
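As promised in item (a), here is a sketch of computing $\cos 2$ with nothing but arithmetic, via a Maclaurin polynomial of cosine (the helper name `cos_approx` and the 10-term cutoff are our own choices):

```python
# A sketch for situation (a): compute cos(2) using only +, -, * and /, via the
# Maclaurin polynomial of cosine.
import math

def cos_approx(x, n_terms=10):
    """Partial sum of (-1)^k * x^(2k) / (2k)! for k = 0 .. n_terms - 1."""
    total, term = 0.0, 1.0
    for k in range(n_terms):
        total += term
        # next term = previous term * (-x^2) / ((2k+1)(2k+2)): no factorials needed
        term *= -x * x / ((2 * k + 1) * (2 * k + 2))
    return total

print(cos_approx(2.0))  # -0.41614683...
print(math.cos(2.0))    # library value, shown only for comparison
```

Building each term from the previous one keeps the work to a couple of multiplications and a division per term, which is exactly the sort of arithmetic a machine does cheaply.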