Note: Since the Web does not yet speak Greek easily, we will use the Icelandic letter Ð for "theta", the pound sterling sign £ for "lambda", and the cent sign ¢ for "epsilon".
[Insert graphic.]
A crosswalk extends across a straight street between two signal posts, A and B. Stationed 1000 meters down the sidewalk at point C, an engineer uses a theodolite to measure the angle between A and B as Ð = 2 degrees.
In preparation for class discussion of this problem, use a calculator to find the following to at least 2 significant digits:
Use Maple to plot Ð, sin Ð, and tan Ð on the same axes, at various scales, around Ð = 0. Then do the same for 1 - cos Ð and (1/2)Ð^{2}.
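If you want to check the numbers (rather than the plots) outside Maple, here is a quick sketch in Python of the same comparisons for the 2-degree angle of the crosswalk problem:

```python
import math

theta = math.radians(2)  # the 2-degree angle, converted to radians
print(f"theta          = {theta:.7f}")
print(f"sin(theta)     = {math.sin(theta):.7f}")
print(f"tan(theta)     = {math.tan(theta):.7f}")
print(f"1 - cos(theta) = {1 - math.cos(theta):.9f}")
print(f"theta^2 / 2    = {theta**2 / 2:.9f}")
```

Notice how close sin Ð and tan Ð are to Ð itself, and 1 - cos Ð to (1/2)Ð^{2}, when Ð is this small.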
Here is what you should have observed. (available after class 25.T)
T_{N}(x) = f(a) + f'(a)(x-a) + [f''(a)/2!](x-a)^{2} + ... + [f^{(N)}(a)/N!](x-a)^{N}. (PDF version)
The error in this approximation "behaves like" (x-a)^{N+1} (if f has enough derivatives).
When this statement is made more precise, it becomes Taylor's theorem with remainder (see below).
Usually we can arrange things so that the base point, a, is 0. Then the Taylor expansion is called a Maclaurin expansion.
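The claim that the error "behaves like" (x-a)^{N+1} can be tested numerically. A sketch in Python (using e^x as the sample function, which is my choice, not one from the notes): for a Maclaurin expansion (a = 0) of degree N, halving x should shrink the error by roughly a factor 2^{N+1}.

```python
import math

def maclaurin_exp(x, N):
    """Degree-N Maclaurin polynomial of e^x: sum of x^j / j! for j = 0..N."""
    return sum(x**j / math.factorial(j) for j in range(N + 1))

N = 3
err_big = abs(math.exp(0.2) - maclaurin_exp(0.2, N))
err_small = abs(math.exp(0.1) - maclaurin_exp(0.1, N))
print(f"error at x = 0.2: {err_big:.3e}")
print(f"error at x = 0.1: {err_small:.3e}")
print(f"ratio: {err_big / err_small:.1f}")  # roughly 2^(N+1) = 16
```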
This will be the subject of lab 26.M. Read this in advance. (PDF version)
When f is a complicated function, Taylor's formula (with the f^{(j)}/j! terms) is usually not the best way to find a Taylor expansion of f. Instead, one tries to find the series by algebra and calculus from the previously known expansions of simpler functions. Let's start with two easy examples:
Now let's try something more ambitious: (Expect to do this in class as a RAT!)
f(x) = 1 + 3x + x^{2} + ....
(answer available after the quiz) (PDF version)
Here we had to multiply two Taylor series together. In such a calculation there are two common pitfalls that you must avoid:
If you know that x = 3.5 approximately and y = 4.0962792966 exactly, then you know better than to write x + y = 7.5962792966; instead, all you can say is that x + y = 7.6 approximately.
Similarly, if you know that
f(x) = 3 + x - x^{2} + 5x^{3} + 9x^{4} + ...
and that g(x) = 2 - x + ..., where the higher-order terms in g are not known, then you should not write
f(x) + g(x) = 5 - x^{2} + 5x^{3} + 9x^{4} + ...;
all we can say is that
f(x) + g(x) = 5 + O(x^{2}).
(The O notation means that the first neglected or unknown term is of the order x^{2}. That is, the linear approximation to f + g is actually constant in this case, and we do not have enough information to calculate the quadratic approximation.)
In the case that started this discussion we were multiplying two quadratic Taylor polynomials. Your answer should not contain any terms of order higher than second, because the third-order terms of the true product receive contributions from the unknown third-order terms of the factors and therefore cannot be determined from the information given.
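The bookkeeping for multiplying truncated series is easy to automate. A sketch in Python (the course work itself uses Maple), applied to the quadratic Maclaurin polynomials of e^x and cos x, which are my illustrative choices:

```python
def multiply_truncated(a, b, N):
    """Multiply polynomials given as coefficient lists [c0, c1, ...],
    keeping only terms of order <= N.  Higher-order terms are discarded
    because they cannot be trusted when the inputs are truncated at order N."""
    c = [0.0] * (N + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= N:
                c[i + j] += ai * bj
    return c

exp2 = [1.0, 1.0, 0.5]   # e^x  = 1 + x + x^2/2 + ...
cos2 = [1.0, 0.0, -0.5]  # cos x = 1 - x^2/2 + ...
print(multiply_truncated(exp2, cos2, 2))  # [1.0, 1.0, 0.0]
```

The result says that e^x cos x = 1 + x + O(x^{3}); the second-order terms happen to cancel.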
One can also integrate and differentiate Taylor expansions:
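For instance (a sketch in Python, since the in-class examples use Maple), integrating the geometric series for 1/(1+x) term by term reproduces the Maclaurin series of ln(1+x) = x - x^{2}/2 + x^{3}/3 - ...:

```python
import math

def integrate_coeffs(c):
    """Term-by-term antiderivative of c0 + c1 x + c2 x^2 + ...,
    with the constant of integration taken to be 0."""
    return [0.0] + [ci / (i + 1) for i, ci in enumerate(c)]

# Geometric series: 1/(1+x) = 1 - x + x^2 - x^3 + ...
geom = [(-1.0)**j for j in range(8)]
log_series = integrate_coeffs(geom)  # coefficients of ln(1+x)

x = 0.1
approx = sum(c * x**k for k, c in enumerate(log_series))
print(approx, math.log1p(x))  # the two values agree to many digits
```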
This is an elaboration of Exercise 30 of Sec. 10.12 of Stewart. For more information on this important part of the history of physics, read the chapter on "The Origin of the Quantum Theory" in F. K. Richtmyer et al., Introduction to Modern Physics. (The chapter number and the coauthors' names vary from one edition to the next.) Notation:
During the 19th century there were two rival theoretical predictions for the energy distribution of radiation in thermal equilibrium in a perfectly absorbing cavity.
This formula (the Rayleigh-Jeans law) was experimentally verified for long wavelengths but is obviously wrong at short wavelengths, where it predicts an infinite amount of energy.
where the constants C_{1} and C_{2} had to be determined experimentally. This formula (Wien's law) was experimentally verified for short wavelengths. (Why is it not infinite there?)
In 1900 the correct formula was discovered: Planck's law,
[Here there should be a plot of the 3 functions, but I have not yet had time to produce one. So you will probably want to plot them in Maple!]
Class exercise: Use our list of Maclaurin expansions (PDF version) to find an approximation to Planck's law valid for large £ and one valid for small £. Show that the first term of each expansion agrees with the appropriate 19th-century formula.
It is time to face the question of what it means to say that a Taylor polynomial is a good approximation to the function that it represents. Let's start with the case of a quadratic approximation, and use the familiar time-position-velocity-acceleration notation for the quantities involved. (Also, to simplify the notation we'll take the expansion point, a, to be 0.)
Remark on the logic of the proof (PDF version)
This argument could be generalized to prove the simplest and most useful version of Taylor's theorem with remainder. (PDF version) A different argument (see Stewart, ed. 3, p. 662) establishes another version (PDF version), which is slightly more precise about the remainder term. The neat thing about that version is that the formula for the remainder, R_{N}, looks just like what would be the next term of the series, except that the derivative is evaluated at the unknown point z instead of at a.
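In symbols (this is the standard Lagrange form of the remainder, stated here for convenience since the details are in the PDF): for some z between a and x,

R_{N}(x) = [f^{(N+1)}(z)/(N+1)!] (x-a)^{N+1},

which is exactly the (N+1)st term of the series with f^{(N+1)} evaluated at z instead of at a.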
(navigate back to "The basics")
Taylor's theorem with remainder (in the second version stated above) is a generalization of the mean value theorem; also, the mean value theorem is used in its proof. Although we skipped over it in a hurry, the mean value theorem appeared back in Section 3.2 of Stewart and was then used to prove many of the elementary properties of derivatives. Now we can take a closer look and acquire a better appreciation of both the mean value theorem and Taylor's theorem.
Study (or review) the mean value theorem.
Look at what Taylor's theorem is really saying. (PDF version)
In practical work, many people get into the habit of using the "rule of thumb" that the numerical error in truncating a series is roughly as large as the first term neglected. In our first try at this example, that term was the integral of x^{4}/24, which came out to be 1/216 = 0.00463 (less than the requested tolerance of 0.01). Notice that it is smaller than the rigorous error bound by a factor e. If the upper limit in the integration had been much larger than 1, the extra factor e^{c} could have been very large! Therefore, the rule of thumb can be dangerous if used carelessly.
Under certain special circumstances, however, the rule of thumb is exactly correct! This is so when the Taylor series satisfies the "alternating series estimation theorem" (Stewart, p. 632), which is used by Stewart to solve problems of this type. For instance, if the exponent in the integrand of our example had been -x^{2} (actually a more useful integral, because of its connection with probability!) we would have Stewart's Example 8, pp. 660-661.
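We can watch the alternating-series estimate at work numerically. A sketch in Python for the integral of e^{-x^{2}} from 0 to 1, whose series is sum of (-1)^{j}/[j!(2j+1)] (here the "exact" value is computed from the built-in error function, a convenience not used in the course):

```python
import math

def partial_sum(n):
    """First n terms of the series for the integral of e^{-x^2} from 0 to 1:
    an alternating series with decreasing terms, so the truncation error
    is at most the first neglected term."""
    return sum((-1)**j / (math.factorial(j) * (2 * j + 1)) for j in range(n))

exact = math.sqrt(math.pi) / 2 * math.erf(1.0)  # the true value of the integral

for n in (4, 5, 6):
    bound = 1 / (math.factorial(n) * (2 * n + 1))  # first neglected term
    err = abs(partial_sum(n) - exact)
    print(n, err, bound, err <= bound)
```

For n = 4 the bound is 1/216, the same number that appeared in the rule-of-thumb discussion above, but now it is a rigorous error bound, not just an estimate.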
There was no required reading assignment from Stewart's text at the top of this Web page. The reason is that Stewart's approach to Taylor expansions (like that of many calculus textbooks) starts from the general theory of convergent infinite series, which we have not yet studied. This makes the relevant sections, 10.9 through 10.12, hard to understand out of context. Now that we have presented the basic ideas about finite Taylor series and their applications, you may be ready to give Sections 10.9-12 a first reading. Ignore for now all discussions of convergence of the entire infinite series. We will come back to those sections again at the end of the course.
The approach to Taylor expansions and related matters that we are following here owes a great deal to an article by T. W. Tucker, "Rethinking Rigor in Calculus: The Role of the Mean Value Theorem," American Mathematical Monthly, Vol. 104, pp. 231-240 (March, 1997).
Not surprisingly, having a Taylor approximation to a function is most useful when one does not have an exact formula for the function. (If you know the function exactly, you are less interested in an approximation.) But in that situation, it may be difficult to use Taylor's formula directly. (How do you calculate f^{(j)}(a) if you don't know f?) If the unknown function is defined by an equation to be solved, one can assume that the function is given by a Taylor series, with unknown coefficients, and plug the series into the equation. With luck, the result will be a set of consistent equations that can be solved to yield the mysterious coefficients.
A simple but nontrivial example is provided by an algebraic equation. (PDF version)
(navigate back to "The basics")
Class or homework exercise: Let's solve some quadratic equations by this method, and compare the result with the Taylor expansion of the exact solution given by the quadratic formula.
Class exercise: Let's list as many different ways as we can think of to find the Maclaurin series of the function g(x) = 1/(5-3x). (One of them should be "perturbative". Fill in the details of finding the first few terms that way.) (answers available after the class) (PDF version)
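One of the non-perturbative routes, the geometric-series trick, can be checked numerically; a sketch in Python (the class work itself is in Maple, and the full list of methods is in the linked answers):

```python
# Geometric-series route: 1/(5 - 3x) = (1/5) * 1/(1 - 3x/5)
#                                    = sum of (3^j / 5^(j+1)) x^j,
# valid when |3x/5| < 1.
def g_series(x, N):
    """Partial sum through order N of the Maclaurin series of 1/(5-3x)."""
    return sum(3**j / 5**(j + 1) * x**j for j in range(N + 1))

x = 0.2
print(g_series(x, 10), 1 / (5 - 3 * x))  # the two values agree closely
```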
Perturbation theory can also be applied to differential equations. Here there is the complication that the unknown is a function of a second variable (the independent variable of the differential equation, say t) as well as of the small parameter, say ¢. A frequent defect of the method is that the solution may lose accuracy as t increases, even if ¢ is very small. Here are two examples:
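The two course examples are worked elsewhere, but the loss of accuracy at large t is easy to see in a toy problem (this is my stand-in illustration, not one of those examples): for y' = -¢y with y(0) = 1, writing y = y_{0} + ¢y_{1} + ... gives y_{0} = 1 and y_{1} = -t, so the first-order approximation is 1 - ¢t, while the exact solution is e^{-¢t}.

```python
import math

eps = 0.01  # the small parameter (epsilon)
for t in (1.0, 10.0, 100.0, 1000.0):
    exact = math.exp(-eps * t)   # exact solution of y' = -eps*y, y(0) = 1
    approx = 1 - eps * t         # first-order perturbative approximation
    print(f"t = {t:6.0f}: exact = {exact:.4f}, first-order = {approx:.4f}")
```

By t = 1000 the approximation has gone negative while the exact solution is still positive: the expansion fails when ¢t is no longer small, no matter how small ¢ itself is.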