(c) copyright Foundation Coalition (S. A. Fulling) 1998

# Taylor Expansions

Note: Since the Web does not yet speak Greek easily, we will use the Icelandic letter Ð for "theta", the pound sterling sign £ for "lambda", and the cent sign ¢ for "epsilon".

## Introductory example: The crosswalk

[Insert graphic.]

A crosswalk extends across a straight street between two signal posts, A and B. Stationed 1000 meters down the sidewalk at point C, an engineer uses a theodolite to measure the angle between A and B as Ð = 2 degrees.

1. How long is the crosswalk?
2. How much farther is it from C to B than from C to A?

In preparation for class discussion of this problem, use a calculator to find the following to at least 2 significant digits:

• the value of Ð in radians
• sin Ð
• tan Ð
• cos Ð
• 1 - cos Ð

Use Maple to plot Ð, sin Ð, and tan Ð on the same axes, at various scales, around Ð = 0. Then do the same for 1 - cos Ð and (1/2)Ð^2.
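The class tool here is Maple, but if you want a quick numerical cross-check of your calculator work, here is an optional Python sketch (not part of the assignment) that prints the quantities listed above for Ð = 2 degrees:

```python
import math

theta = math.radians(2)        # the value of theta in radians
print(theta)                   # about 0.0349
print(math.sin(theta))         # very close to theta itself
print(math.tan(theta))         # also very close to theta
print(math.cos(theta))         # very close to 1
print(1 - math.cos(theta))     # very close to (1/2) theta^2
print(0.5 * theta**2)
```

The point to notice: for small Ð, sin Ð and tan Ð are nearly indistinguishable from Ð, and 1 - cos Ð is nearly (1/2)Ð^2.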

Here is what you should have observed. (available after class 25.T)

## Taylor expansions: The basics

• Taylor series are your friends! There is no reason to be afraid of them or bored by them. They let you approximate functions by polynomials, which are easy to calculate with. Few topics in the entire calculus sequence are more practical!
• The key fact: For x near a, f(x) is approximately equal to its Nth-degree Taylor polynomial,

f(x) ≈ f(a) + f'(a)(x - a) + [f''(a)/2!](x - a)^2 + ... + [f^(N)(a)/N!](x - a)^N.

The error in this approximation "behaves like" (x - a)^(N+1) (if f has enough derivatives).

When this statement is made more precise, it becomes Taylor's theorem with remainder (see below).

• Such an approximation is known by various names: Taylor expansion, Taylor polynomial, finite Taylor series, truncated Taylor series, asymptotic expansion, Nth-order approximation, or (when f is defined by an algebraic or differential equation instead of an explicit formula) a solution by perturbation theory (see below).

Usually we can arrange things so that the base point, a, is 0. Then the Taylor expansion is called a Maclaurin expansion.

## Calculations with Taylor polynomials

When f is a complicated function, Taylor's formula (with the f^(j)(a)/j! terms) is usually not the best way to find a Taylor expansion of f. Instead, one tries to find the series by algebra and calculus from the previously known expansions of simpler functions. Let's start with two easy examples:

2. Find the first 3 nonvanishing terms in the Maclaurin series of x^2 e^x. (answer) (PDF version)

Now let's try something more ambitious: (Expect to do this in class as a RAT!)

3. Find the first 3 terms of the Maclaurin expansion of e^x f(x), supposing that all we know about f is that its Maclaurin series starts out

f(x) = 1 + 3x + x^2 + ....

Here we had to multiply two Taylor series together. In such a calculation there are two common pitfalls that you must avoid:

• Don't forget the cross terms! Truncated Taylor series are multiplied just like any other polynomials: Multiply each term of the first series by each term of the second series. Then combine the terms with the same exponent.
• High-order terms (those with large exponents) are like less-significant digits in decimal arithmetic: If you keep one term of order x^n, then you must keep all terms of that order, else your answer will be rubbish.
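The two rules above can be checked mechanically. Here is an optional Python sketch (the coefficient lists are hypothetical, chosen only for illustration) that multiplies two truncated series the way you would multiply polynomials, while discarding everything beyond the truncation order:

```python
# Multiply two truncated Maclaurin series, keeping terms only through x^order.
# The rule: multiply each term of one series by each term of the other
# (don't forget the cross terms!), then drop anything past the truncation order.

def multiply_truncated(a, b, order):
    """a, b: coefficient lists [c0, c1, c2, ...]; returns product through x^order."""
    c = [0.0] * (order + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= order:      # terms beyond x^order cannot be trusted anyway
                c[i + j] += ai * bj
    return c

# Hypothetical example: (1 + 2x + 3x^2)(4 + 5x + 6x^2), truncated at order 2:
print(multiply_truncated([1, 2, 3], [4, 5, 6], 2))  # [4.0, 13.0, 28.0]
```

Notice that the x^2 coefficient, 28, collects three cross terms (1·6 + 2·5 + 3·4); forgetting any of them gives rubbish.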

If you know that x = 3.5 approximately and y = 4.0962792966 exactly, then you know better than to write x + y = 7.5962792966; instead, all you can say is that x + y = 7.6 approximately.

Similarly, if you know that

f(x) = 3 + x - x^2 + 5x^3 + 9x^4 + ...

and that g(x) = 2 - x + ..., where the higher-order terms in g are not known, then you should not write

f(x) + g(x) = 5 - x^2 + 5x^3 + 9x^4 + ...;

all we can say is that

f(x) + g(x) = 5 + O(x^2).

(The O notation means that the first neglected or unknown term is of order x^2. That is, the linear approximation to f + g is actually constant in this case, and we do not have enough information to calculate the quadratic approximation.)

In the case that started this discussion we were multiplying two quadratic Taylor polynomials. Your answer should not contain any terms of higher order than second, because there are third-order terms in the product that cannot be determined from the information given.

4. Find the first few terms of the Maclaurin series of sin(2x + 1). There's another pitfall here, so we'll work this one out for you. (PDF version)

One can also integrate and differentiate Taylor expansions:

5. Find the Maclaurin series for ln(1 - x) by integrating the geometric series. (answer) (PDF version)
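Integrating the geometric series 1/(1 - x) = 1 + x + x^2 + ... term by term gives the partial sums -(x + x^2/2 + ... + x^n/n). An optional Python check (not part of the exercise) compares such a partial sum with the exact logarithm:

```python
import math

def ln_one_minus(x, n_terms):
    # partial sum of -(x + x^2/2 + x^3/3 + ...), obtained by
    # integrating the geometric series term by term
    return -sum(x**k / k for k in range(1, n_terms + 1))

x = 0.3
print(ln_one_minus(x, 10))   # truncated series
print(math.log(1 - x))       # exact value, for comparison
```

For |x| well below 1 the partial sums settle down quickly; try x closer to 1 to see the approximation degrade.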

## Application: The black-body radiation law

This is an elaboration of Exercise 30 of Sec. 10.12 of Stewart. For more information on this important part of the history of physics, read the chapter on "The Origin of the Quantum Theory" in F. K. Richtmyer et al., Introduction to Modern Physics. (The chapter number and the coauthors' names vary from one edition to the next.) Notation:

• £ = wavelength
• T = temperature
• k, h, c are certain physical constants.

During the 19th century there were two rival theoretical predictions for the energy distribution of radiation in thermal equilibrium in a perfectly absorbing cavity.

1. The Rayleigh-Jeans law,

f(£) = 8πkT/£^4.

This was experimentally verified for long wavelengths but obviously wrong at short wavelengths, where it predicts an infinite amount of energy.

2. Wien's law,

f(£) = C1 £^(-5) e^(-C2/(£T)),

where the constants C1 and C2 needed to be experimentally determined. This was experimentally verified for short wavelengths. (Why is it not infinite there?)

In 1900 the correct formula was discovered: Planck's law,

f(£) = 8πhc£^(-5) / (e^(hc/(£kT)) - 1).

[Here there should be a plot of the 3 functions, but I have not yet had time to produce one. So you will probably want to plot them in Maple!]

Class exercise: Use our list of Maclaurin expansions (PDF version) to find an approximation to Planck's law valid for large £ and one valid for small £. Show that the first term of each expansion agrees with the appropriate 19th-century formula.
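As a numerical companion to that class exercise, here is an optional Python sketch. It assumes the standard textbook forms of the three laws, with the physical constants scaled away (set 8πhc = 1 and hc/k = 1), so the only remaining combination is 1/(£T):

```python
import math

# Scaled forms (an assumption for illustration; compare with your text):
#   Planck:         f(L) = L^-5 / (exp(1/(L*T)) - 1)
#   Rayleigh-Jeans: f(L) = T / L^4          (large-L limit of Planck)
#   Wien:           f(L) = L^-5 * exp(-1/(L*T))   (small-L limit of Planck)

def planck(L, T):
    return L**-5 / math.expm1(1 / (L * T))

def rayleigh_jeans(L, T):
    # expand exp(u) - 1 ≈ u for small u = 1/(L*T): L^-5 * L*T = T/L^4
    return T / L**4

def wien(L, T):
    # for large u = 1/(L*T), exp(u) - 1 ≈ exp(u)
    return L**-5 * math.exp(-1 / (L * T))

T = 1.0
print(planck(10.0, T) / rayleigh_jeans(10.0, T))   # near 1 for large L
print(planck(0.05, T) / wien(0.05, T))             # near 1 for small L
```

The ratios approaching 1 are exactly the statement that the first term of each Maclaurin/asymptotic expansion of Planck's law reproduces the appropriate 19th-century formula.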

## Theory break: Discovering and justifying Taylor's theorem

It is time to face the question of what it means to say that a Taylor polynomial is a good approximation to the function that it represents. Let's start with the case of a quadratic approximation, and use the familiar time-position-velocity-acceleration notation for the quantities involved. (Also, to simplify the notation we'll take the expansion point, a, to be 0.)

This argument can be generalized to prove the simplest and most useful version of Taylor's theorem with remainder. (PDF version) A different argument (see Stewart, 3rd ed., p. 662) establishes another version (PDF version), which is slightly more precise about the remainder term. The neat thing about that version is that the formula for the remainder, RN, looks just like what would be the next term of the series, except that the derivative is evaluated at an unknown point z instead of at a.
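You can see the "error behaves like the next power" claim numerically. For a quadratic (N = 2) approximation the error should behave like t^3; taking f(t) = e^t as a concrete test case (our choice, not prescribed by the notes), the remainder divided by t^3 should approach f'''(0)/3! = 1/6:

```python
import math

def p2(t):
    # quadratic Maclaurin polynomial of e^t
    # (the "position + velocity*t + (acceleration/2)*t^2" pattern)
    return 1 + t + t**2 / 2

for t in (0.1, 0.01, 0.001):
    remainder = math.exp(t) - p2(t)
    print(t, remainder / t**3)   # ratio approaches 1/6 as t -> 0
```

Each time t shrinks by a factor of 10, the remainder shrinks by roughly a factor of 1000, and the ratio settles toward 1/6, just as the next term of the series predicts.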

## The mean value theorem and Taylor's theorem

Taylor's theorem with remainder (in the second version stated above) is a generalization of the mean value theorem; also, the mean value theorem is used in its proof. Although we skipped over it in a hurry, the mean value theorem appeared back in Section 3.2 of Stewart and was then used to prove many of the elementary properties of derivatives. Now we can take a closer look and acquire a better appreciation of both the mean value theorem and Taylor's theorem.

Study (or review) the mean value theorem.
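As a concrete reminder of what the mean value theorem asserts, here is a small worked instance (a hypothetical example of ours, not from Stewart): for f(x) = x^3 on [0, 2] there must be a point c in (0, 2) where f'(c) equals the average slope over the interval.

```python
import math

# Mean value theorem for f(x) = x^3 on [0, 2]:
# f'(c) = 3c^2 must equal (f(2) - f(0))/(2 - 0) = 4 for some c in (0, 2).
a, b = 0.0, 2.0
avg_slope = (b**3 - a**3) / (b - a)   # = 4
c = math.sqrt(avg_slope / 3)          # solve 3c^2 = 4, giving c = 2/sqrt(3)
print(c, 3 * c**2)                    # c is about 1.1547, and f'(c) = 4
assert a < c < b                      # c really is interior to the interval
```

The theorem guarantees such a c exists; here we can even solve for it explicitly.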

## Example: Approximating "impossible" integrals

In practical work, many people get into the habit of using the "rule of thumb" that the numerical error in truncating a series is roughly as large as the first term neglected. In our first try at this example, that term was the integral of x^8/24, which came out to be 1/216 ≈ 0.00463 (less than the requested tolerance of 0.01). Notice that it is smaller than the rigorous error bound by a factor of e. If the upper limit of integration had been much larger than 1, the extra factor e^c could have been very large! Therefore, the rule of thumb can be dangerous if used carelessly.
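Reading the numbers, the example appears to be the integral of e^(x^2) from 0 to 1, truncated after the x^6/6 term (our reconstruction; the worked example is in the linked PDF). An optional Python check compares the rule of thumb with the true truncation error:

```python
import math

# e^(x^2) = sum of x^(2n)/n!, so integrating over [0, 1] term by term gives
# integral = sum of 1/(n! * (2n+1)).
exact = sum(1 / (math.factorial(n) * (2 * n + 1)) for n in range(40))

# Keep terms through n = 3 (through x^6/6); the first neglected term is x^8/24,
# whose integral is 1/(4! * 9) = 1/216.
approx = sum(1 / (math.factorial(n) * (2 * n + 1)) for n in range(4))
first_neglected = 1 / (math.factorial(4) * 9)

error = exact - approx
print(first_neglected)            # about 0.00463, the rule-of-thumb estimate
print(error)                      # the true error is somewhat larger
print(math.e * first_neglected)   # the rigorous bound carries the extra factor e
```

The true error lands between the rule-of-thumb estimate and the rigorous bound, which is exactly the situation described above.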

Under certain special circumstances, however, the rule of thumb is exactly correct! This is so when the Taylor series satisfies the "alternating series estimation theorem" (Stewart, p. 632), which is used by Stewart to solve problems of this type. For instance, if the exponent in the integrand of our example had been -x^2 (actually a more useful integral, because of its connection with probability!) we would have Stewart's Example 8, pp. 660-661.

There was no required reading assignment from Stewart's text at the top of this Web page. The reason is that Stewart's approach to Taylor expansions (like that of many calculus textbooks) starts from the general theory of convergent infinite series, which we have not yet studied. This makes the relevant sections, 10.9 through 10.12, hard to understand out of context. Now that we have presented the basic ideas about finite Taylor series and their applications, you may be ready to give Sections 10.9-12 a first reading. Ignore for now all discussions of convergence of the entire infinite series. We will come back to those sections again at the end of the course.

The approach to Taylor expansions and related matters that we are following here owes a great deal to an article by T. W. Tucker, "Rethinking Rigor in Calculus: The Role of the Mean Value Theorem," American Mathematical Monthly, Vol. 104, pp. 231-240 (March, 1997).

## Perturbation theory: Solving equations by Taylor series

Not surprisingly, having a Taylor approximation to a function is most useful when one does not have an exact formula for the function. (If you know the function exactly, you are less interested in an approximation.) But in that situation, it may be difficult to use Taylor's formula directly. (How do you calculate f(j)(a) if you don't know f ?) If the unknown function is defined by an equation to be solved, one can assume that the function is given by a Taylor series, with unknown coefficients, and plug the series into the equation. With luck, the result will be a set of consistent equations that can be solved to yield the mysterious coefficients.

A simple but nontrivial example is provided by an algebraic equation. (PDF version)
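To show the method in miniature, here is a hypothetical algebraic equation of our own (not the one in the PDF): x^2 = 4 + ¢. Assume x = a0 + a1¢ + a2¢^2 + ..., substitute, and match powers of ¢:

```python
import math

# Perturbative solution of the hypothetical equation x^2 = 4 + e:
# substitute x = a0 + a1*e + a2*e^2 and match powers of e:
#   e^0:  a0^2 = 4            ->  a0 = 2   (taking the positive root)
#   e^1:  2*a0*a1 = 1         ->  a1 = 1/4
#   e^2:  a1^2 + 2*a0*a2 = 0  ->  a2 = -1/64

def x_perturbative(e):
    return 2 + e / 4 - e**2 / 64

e = 0.01
print(x_perturbative(e))   # perturbative root
print(math.sqrt(4 + e))    # exact root, for comparison
```

For this equation the exact root is available, and its Taylor expansion in ¢ reproduces the perturbative coefficients term by term; the two printed values agree to many digits.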

Class or homework exercise: Let's solve some quadratic equations by this method, and compare the result with the Taylor expansion of the exact solution given by the quadratic formula.

1. x^2 + 2¢x + 1 = 0
2. ¢x^2 + 2x + 1 = 0 (In this case, what happens to the second root in the perturbative calculation?)

Class exercise: Let's list as many different ways as we can think of to find the Maclaurin series of the function g(x) = 1/(5-3x). (One of them should be "perturbative". Fill in the details of finding the first few terms that way.) (answers available after the class) (PDF version)

## Application to differential equations

Perturbation theory can also be applied to differential equations. Here there is the complication that the unknown is a function of a second variable (the independent variable of the differential equation, say t) as well as of the small parameter, say ¢. A frequent defect of the method is that the solution may lose accuracy as t increases, even if ¢ is very small. Here are two examples:

• Atmospheric drag on a falling body. (PDF version) In Lab 27.M we will attempt to generalize this to two dimensions and apply it to the targeting algorithm for your Ping-Pong ball launcher!
• The damped harmonic oscillator. (PDF version) Like the quadratic equation above, this problem has an exact solution that can be expanded in a Taylor series to match the perturbative solution exactly.
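The loss of accuracy as t grows can be seen in a toy problem (a hypothetical example of ours, simpler than either PDF example): dx/dt = -¢x^2 with x(0) = 1. Matching powers of ¢ in x = x0(t) + ¢x1(t) + ... gives x0 = 1 and x1 = -t, so the first-order perturbative solution is x ≈ 1 - ¢t, while the exact solution is x(t) = 1/(1 + ¢t):

```python
# Hypothetical ODE: dx/dt = -e * x^2, x(0) = 1, with small parameter e.
# Perturbation series x = x0 + e*x1 + ...:
#   order e^0:  x0' = 0      ->  x0(t) = 1
#   order e^1:  x1' = -x0^2  ->  x1(t) = -t
# First-order perturbative solution: x ≈ 1 - e*t.

def exact(t, e):
    return 1 / (1 + e * t)       # exact solution (separable equation)

def perturbative(t, e):
    return 1 - e * t

e = 0.01
for t in (1, 10, 100):
    print(t, abs(exact(t, e) - perturbative(t, e)))   # error grows with t
```

Even though ¢ = 0.01 is small, by t = 100 the product ¢t is of order 1 and the approximation has broken down completely; this is the "defect of the method" mentioned above.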

## Error bounds in numerical integration

As a final application of Taylor expansion, we shall derive the formulas for the maximum error in the familiar rules for numerical integration. This is unfinished business from last semester. (This document is not yet written. If we have time, we will come back to this at the end of the semester. See also the article by Tucker cited above.)