Taylor's Theorem
Consider the general power series centered at $x = 0$ for a differentiable function $f$:
$$f(x) = c_0 + c_1 x + c_2 x^2 + c_3 x^3 + c_4 x^4 + \cdots$$
We can easily see that $f(0) = c_0$.
Now differentiate the series:
$$f'(x) = c_1 + 2c_2 x + 3c_3 x^2 + 4c_4 x^3 + \cdots$$
So $f'(0) = c_1$.
Differentiate again:
$$f''(x) = 2c_2 + 6c_3 x + 12c_4 x^2 + \cdots$$
so $f''(0) = 2c_2$, giving $c_2 = \dfrac{f''(0)}{2!}$.
And again:
$$f'''(x) = 6c_3 + 24c_4 x + \cdots$$
so $f'''(0) = 6c_3$, giving $c_3 = \dfrac{f'''(0)}{3!}$.
And again:
$$f^{(4)}(x) = 24c_4 + \cdots$$
so $f^{(4)}(0) = 24c_4$, giving $c_4 = \dfrac{f^{(4)}(0)}{4!}$.
In general, we can see inductively that
$$c_n = \frac{f^{(n)}(0)}{n!}.$$
What this means is that if we start with a differentiable function $f$ and we wish to find the coefficients in its power series centered at $x = 0$, we can use the following convenient formula:
$$c_n = \frac{f^{(n)}(0)}{n!}.$$
So the power series representation for $f$ centered at $x = 0$ is given by
$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!}\,x^n.$$
This method, known as Taylor's Theorem, generalizes to a power series centered at any point in the interior of the domain of the function.
Let $f$ be an infinitely differentiable function and let $a$ be a point in the interior of the domain of $f$. Then the power series centered at $x = a$ for $f$ is given by
$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}\,(x - a)^n.$$
This powerful result allows us to generate a power series for just about any commonly used function.
Example: Use Taylor's theorem to find a power series for $\ln x$ centered at $x = 1$.
Here are the relevant derivatives evaluated at $x = 1$:
$$f(x) = \ln x \qquad f(1) = 0$$
$$f'(x) = \frac{1}{x} \qquad f'(1) = 1$$
$$f''(x) = -\frac{1}{x^2} \qquad f''(1) = -1$$
$$f'''(x) = \frac{2}{x^3} \qquad f'''(1) = 2$$
$$f^{(4)}(x) = -\frac{6}{x^4} \qquad f^{(4)}(1) = -6.$$
So the power series for $\ln x$ is
$$\ln x = (x - 1) - \frac{(x-1)^2}{2} + \frac{(x-1)^3}{3} - \frac{(x-1)^4}{4} + \cdots$$
The general term follows a simple pattern so we can write
$$\ln x = \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}\,(x - 1)^n.$$
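As a quick numerical sanity check (not part of the original derivation), one can compare partial sums of the series $\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}(x-1)^n$ against Python's built-in logarithm; the function name here is my own:

```python
import math

def log_series(x, terms):
    # Partial sum of sum_{n=1}^{terms} (-1)^(n+1) (x-1)^n / n,
    # the Taylor series for ln(x) centered at 1.
    return sum((-1) ** (n + 1) * (x - 1) ** n / n for n in range(1, terms + 1))

# Inside the interval of convergence the partial sums approach ln(x).
for x in (0.5, 1.5):
    print(x, log_series(x, 60), math.log(x))
```

The closer $x$ is to the center $x = 1$, the faster the partial sums settle down.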
Red Question: Use Taylor's theorem to find power series centered at $x = 0$ for $e^x$, $\sin x$, and $\cos x$.
Radius of Convergence
A power series is valid for a given function only on a specified interval. Outside of that interval the power series is worthless and will not converge. Each function is different and must be considered on a case-by-case basis.
For example, the series for $e^x$, $\sin x$, and $\cos x$ are wonderfully accommodating and will converge for all $x$. However, the logarithm series that we found above obviously has problems at its vertical asymptote of $x = 0$.
If you plug $x = 0$ into the series you can see why: you get the negative of the harmonic series, which is divergent.
However, if you plug in $x = 2$ you get the alternating harmonic series, which is convergent. So we say that the above power series representation for $\ln x$ is valid on the interval $(0, 2]$.
That means I can plug any value within that interval into the series. Thus, taking $x = 2$,
$$\ln 2 = (2-1) - \frac{(2-1)^2}{2} + \frac{(2-1)^3}{3} - \frac{(2-1)^4}{4} + \cdots$$
and
$$\ln 2 = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots$$
and therefore the alternating harmonic series converges to $\ln 2$.
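The endpoint behavior is easy to see numerically; here is an illustrative Python snippet (the names are mine) comparing partial sums of the alternating harmonic series with $\ln 2$:

```python
import math

def alt_harmonic(terms):
    # Partial sum of 1 - 1/2 + 1/3 - ... up to the given number of terms.
    return sum((-1) ** (n + 1) / n for n in range(1, terms + 1))

# The error after n terms is roughly 1/(2n): convergence is quite slow.
for n in (10, 100, 1000):
    print(n, alt_harmonic(n), abs(alt_harmonic(n) - math.log(2)))
```

Even a thousand terms only pin down $\ln 2$ to about three decimal places, which is typical of behavior at the very edge of an interval of convergence.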
STEP 2006, Math II, #2: Using the series
$$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots,$$
show that $e > \frac{8}{3}$.
Show that $n! > 2^{n-1}$ for $n \ge 3$ and hence show that $e < 3$.
Show that the curve with equation
$$y = 3e^x - 14x$$
has a minimum turning point between $x = 1$ and $x = 2$ and give a sketch to show the shape of the curve.
STEP 2006, Math III, #4: The function $f$ satisfies the identity
$$f(x) + f(y) = f(x + y) \qquad (*)$$
for all $x$ and $y$. Show that $f(0) = 0$ and deduce that $f'(x) = f'(0)$. By considering the Maclaurin series for $f(x)$, find the most general function that satisfies $(*)$.
[Do not consider issues of existence or convergence of Maclaurin series in this question.]
(i) By considering the function $g$, defined by $g(x) = \ln f(x)$, find the most general function that, for all $x$ and $y$, satisfies the identity
$$f(x)\,f(y) = f(x + y).$$
(ii) By considering the function $h$, defined by $h(x) = f(e^x)$, find the most general function that, for all positive $x$ and $y$, satisfies the identity
$$f(x) + f(y) = f(xy).$$
(iii) Find the most general function $f$ that, for all $x$ and $y$, satisfies the identity
$$f(x)\,f(y) = f(xy),$$
where $x$ and $y$ are positive.
STEP 2012, Math II, #4: In this question, you may assume that the infinite series
$$\ln(1 + x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots$$
is valid for $-1 < x \le 1$.
(i) Let
be an integer greater than 1. Show that, for any positive integer ,
.
Hence show that
.
Deduce that
.
(ii) Show, using an expansion in powers of , that
for .
Deduce that, for any positive integer
,
.
(iii) Use parts (i) and (ii) to show that as
.
STEP 2000, Math III, #7: Given that
use the binomial theorem to show that
for any positive integer
.
The product is defined, for any positive integer , by
Use the arithmetic-geometric mean inequality,
$$\sqrt[n]{a_1 a_2 \cdots a_n} \le \frac{a_1 + a_2 + \cdots + a_n}{n},$$
to show that for all .
Explain briefly why tends to a limit as . Show that .
A Gem from Leonhard Euler
Leonhard Euler pulled a clever trick with series by simply ignoring lots of unnecessary issues about convergence and pretending that a power series could work just like a normal polynomial. (It can and it can't.) Here's an example of his brilliance.
The $\sin$ function has the power series representation
$$\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots$$
so let's treat it like a polynomial. Euler's idea was to factor it. Well, it can't really be factored, but we know its roots, so it may be OK to think of it as a polynomial written as the product of its linear factors. The roots of $\sin x$ are
$$0, \quad \pi, \quad -\pi, \quad 2\pi, \quad -2\pi, \ \ldots$$
so its linear factors must look like this:
$$x, \quad (x - \pi), \quad (x + \pi), \quad (x - 2\pi), \quad (x + 2\pi), \ \ldots$$
Therefore, the function $\dfrac{\sin x}{x}$ must have linear factors that are like
$$(x - \pi), \quad (x + \pi), \quad (x - 2\pi), \ \ldots$$
and hence roots that are
$$\pi, \quad -\pi, \quad 2\pi, \ \ldots$$
However, consider the power series for $\dfrac{\sin x}{x}$. We have
$$\frac{\sin x}{x} = 1 - \frac{x^2}{3!} + \frac{x^4}{5!} - \frac{x^6}{7!} + \cdots$$
It clearly has a constant term of 1. So we need to make sure the factored form of $\dfrac{\sin x}{x}$ would give us a constant term of 1.
The factored form that makes the most sense is
$$\left(1 - \frac{x}{\pi}\right)\left(1 + \frac{x}{\pi}\right)\left(1 - \frac{x}{2\pi}\right)\left(1 + \frac{x}{2\pi}\right)\left(1 - \frac{x}{3\pi}\right)\left(1 + \frac{x}{3\pi}\right)\cdots$$
which can also be written as
$$\left(1 - \frac{x^2}{\pi^2}\right)\left(1 - \frac{x^2}{4\pi^2}\right)\left(1 - \frac{x^2}{9\pi^2}\right)\cdots$$
Equating the power series with this form we get
$$1 - \frac{x^2}{3!} + \frac{x^4}{5!} - \frac{x^6}{7!} + \cdots = \left(1 - \frac{x^2}{\pi^2}\right)\left(1 - \frac{x^2}{4\pi^2}\right)\left(1 - \frac{x^2}{9\pi^2}\right)\cdots$$
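Although Euler's manipulation is purely formal, the product itself can be tested numerically. The following Python sketch (the function name and cutoffs are mine, not Euler's) multiplies finitely many factors of the product and compares the result with $\sin(x)/x$:

```python
import math

def sinc_product(x, factors):
    # Truncated Euler product (1 - x^2/pi^2)(1 - x^2/(4 pi^2)) ...,
    # keeping only the first `factors` factors.
    result = 1.0
    for n in range(1, factors + 1):
        result *= 1 - x ** 2 / (n ** 2 * math.pi ** 2)
    return result

# The truncated product approaches sin(x)/x as more factors are kept.
x = 1.0
print(sinc_product(x, 10000), math.sin(x) / x)
```

Note that at $x = \pi$ the very first factor vanishes, so the product is exactly zero there, just as $\sin(\pi)$ is.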
Now let's do a change of variables. Let $u = x^2$, which gives us a nicer equation:
$$1 - \frac{u}{3!} + \frac{u^2}{5!} - \frac{u^3}{7!} + \cdots = \left(1 - \frac{u}{\pi^2}\right)\left(1 - \frac{u}{4\pi^2}\right)\left(1 - \frac{u}{9\pi^2}\right)\cdots \qquad (*)$$
Then we have a new polynomial in terms of $u$, with roots of $n^2\pi^2$, in general, where $n$ is a positive integer.
Euler then considered what happens to polynomials of the above factored form when they only have a finite number of factors. Here's a cubic (with roots $r_1$, $r_2$, $r_3$):
$$\left(1 - \frac{u}{r_1}\right)\left(1 - \frac{u}{r_2}\right)\left(1 - \frac{u}{r_3}\right) = 1 - \left(\frac{1}{r_1} + \frac{1}{r_2} + \frac{1}{r_3}\right)u + \cdots \qquad (**)$$
We can easily show that for a polynomial factored in the form $(**)$ the sum of the reciprocals of the roots is the negative of the coefficient of $u$.
And that's his super clever idea.
Applying this---recklessly---to the polynomial $(*)$ gives us the interesting statement
$$\frac{1}{\pi^2} + \frac{1}{4\pi^2} + \frac{1}{9\pi^2} + \frac{1}{16\pi^2} + \cdots = \frac{1}{3!}$$
or
$$1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \cdots = \frac{\pi^2}{6}.$$
He has found a formula for the sum of the reciprocals of the perfect squares. This series is known to converge, but rather slowly, and a good program like Mathematica can verify that it works.
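In that spirit, a short Python check (my own illustration, standing in for the Mathematica verification mentioned above) shows the partial sums creeping toward $\pi^2/6$:

```python
import math

def basel_partial(terms):
    # Partial sum of 1/1^2 + 1/2^2 + ... + 1/terms^2.
    return sum(1 / n ** 2 for n in range(1, terms + 1))

# The tail beyond n terms is about 1/n, so convergence is slow.
for n in (10, 1000, 100000):
    print(n, basel_partial(n), math.pi ** 2 / 6)
```

Even a hundred thousand terms agree with $\pi^2/6 \approx 1.6449$ only to about five decimal places, which matches the remark about slow convergence.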
Blue Question: Evaluate