10/28/09 (by Ted)
As a first step, we will consider the infinite sum 1/2 + 1/4 + 1/8 + 1/16 + ... Our discussion begins having seen the following argument: Let this infinite sum equal x. Then 2x = 1 + 1/2 + 1/4 + ... and therefore 2x - 1 = x, implying that x = 1. However, we remain skeptical of this decidedly ingenious deduction. Here is an attempt at a convincing argument.
Observe that, for any z different from 1 and any positive integer n, z + z^{2} + z^{3} + ... + z^{n} = (z^{n+1} - z)/(z - 1). This is true because we can multiply both sides of the equality by z - 1, and note that the product on the left hand side telescopes as follows:
(z - 1)(z + z^{2} + z^{3} + ... + z^{n}) = z^{2} - z + z^{3} - z^{2} + ... + z^{n+1} - z^{n} = z^{n+1} - z (1)
Let A_{n} = 1/2 + 1/4 + ... + 1/2^{n} and use (1) with z = 1/2 to deduce that A_{n} = (1/2^{n+1} - 1/2)/(1/2 - 1) = 1 - 2^{-n}. It is immediately obvious therefore that A_{n} remains bounded above by 1 as n increases. So the infinite sum 1/2 + 1/4 + 1/8 + ..., which is obtained as the limit of A_{n} when n is allowed to grow arbitrarily, remains bounded, despite its infinitely many terms. In particular, for any B>0 there is a finite C = -ln(B)/ln(2) such that for all k>C, 1 - A_{k} < B. Thus, eventually A_{n} approaches 1 arbitrarily closely. We summarize this information by saying that the limit of A_{n}, as n increases arbitrarily (which is the definition of the infinite sum), is equal to 1.
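The closed form A_{n} = 1 - 2^{-n} is easy to check numerically; a minimal sketch (the helper name partial_sum is ours):

```python
# Partial sums A_n = 1/2 + 1/4 + ... + 1/2^n of the geometric series,
# compared against the closed form A_n = 1 - 2^(-n) derived above.
def partial_sum(n):
    return sum(1 / 2**k for k in range(1, n + 1))

for n in (1, 5, 10, 20):
    closed = 1 - 2**-n
    assert partial_sum(n) == closed
    # The gap to the limit 1 is exactly 2^(-n), which drops below any B > 0
    # once n exceeds -ln(B)/ln(2).
```

Powers of 1/2 are exact in binary floating point, which is why the comparison above can use equality rather than a tolerance.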
Admittedly, arguments involving algebraic manipulations of infinite sequences and series can be counterintuitive. In order to reinforce this lesson, here is a little challenge:
Consider a square and split it into four equal squares by drawing a cross through the center. Color the top left quarter square red. Repeat this process with the remaining three squares. Show that eventually the entire square will be red!
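A numeric hint for the challenge (not a proof): track the red and uncolored areas exactly with Python's fractions. After each round, every remaining square loses its top-left quarter, so the uncolored area shrinks by a factor of 3/4.

```python
from fractions import Fraction

# Exact area bookkeeping for the square-coloring process.
uncolored = Fraction(1)
red = Fraction(0)
for _ in range(10):
    red += uncolored / 4        # newly colored quarters this round
    uncolored *= Fraction(3, 4) # each remaining square keeps 3/4 of itself
assert red + uncolored == 1     # the two regions always tile the square
# After n rounds, uncolored = (3/4)^n, so the red area 1 - (3/4)^n tends to 1.
```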
11/6/09 (by Ted)
Earlier today we had a stimulating discussion about the previous entry and other ways in which the algebra of expressions with infinite terms confuses us. The question at the focus of our discussion was the meaning of the statement 0.999... = 1. We were skeptical that this is indeed so, because we never make it to 1 and besides, the last digit of the expression on the left hand side will also be 9, right?
Well, not quite. Let's call z = 0.999... First of all, to the former point, we realized that, for any positive N, 0.999...9 < z <= 1, where the expression on the left hand side has N nines. Thus, as we increase N, we are squeezing z towards 1. Moreover, given any positive number a, we can find a finite k = -log_{10}a such that for any N>k, 1 - z < a. Therefore, there is no number that lies between z and 1, because if such a number b existed, then let k = -log_{10}(1-b) and the above argument would show that z is closer to 1 than b. It is a simple matter to show that the standard rules of algebra apply equivalently to z and 1, e.g. just as 1 + 1 = 2, there is no number between z + 1 and 2. Thus, for all intents and purposes, we may as well not use a separate symbol and let z = 1.
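The squeezing argument can be made concrete with exact rational arithmetic; a small sketch (the helper z is ours):

```python
from fractions import Fraction

# z_N = 0.999...9 with N nines, as an exact rational: z_N = (10^N - 1)/10^N.
def z(N):
    return Fraction(10**N - 1, 10**N)

for N in (1, 5, 50):
    gap = 1 - z(N)
    assert gap == Fraction(1, 10**N)  # the gap to 1 is exactly 10^(-N)
# Given any a > 0, choosing N > -log10(a) makes the gap smaller than a,
# which is precisely the squeeze described above.
```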
With respect to the latter point, we should appreciate that z does not possess a last digit! (If it did, it would of course be 9.) It is precisely this lack of a last digit which makes it impossible to find a number between z and 1.
Now consider half of z. What convinces you that it is 0.4999...?
12/24/09 On Euler's Identity (by Ted)
Our discussion of Euler's identity begins with power series, a remarkably powerful tool based on our explorations of infinite series, which helped shape modern analysis. In particular, consider a real-valued function of a single real variable f(x) and assume that it can be expressed as a generalized polynomial, namely one of infinite degree:
f(x) = a_{0} + a_{1}x + a_{2}x^{2} + a_{3}x^{3} + ... (2)
for some coefficients a_{k}. This is a very handy representation of the general function f, since the monomials involved in the sum are well-understood quantities. The natural question to ask is which functions can be written this way. In order to explore this question, we need to know how to relate the coefficients a_{k} to the function f. Here is a trick that is widely used in such functional equations: evaluate the function f at a point which simplifies the right hand side. In this case, the point 0 is the obvious candidate. Clearly, if the infinite polynomial representation is to hold, a_{0} = f(0). This zeroth order approximation to f is simply a constant function, which matches f at the chosen point x=0.
To proceed, we ask how quickly the two sides of (2) change when x moves around 0. The way to quantify this rate of change is the concept of the derivative: f'(x) is the limit of (f(x+h)-f(x))/h as h, which can be positive or negative, is made smaller in magnitude. For each h, (2) becomes
(f(x+h)-f(x))/h = (a_{0}-a_{0})/h + (a_{1}(x+h)-a_{1}x)/h + (a_{2}(x+h)^{2}-a_{2}x^{2})/h + (a_{3}(x+h)^{3}-a_{3}x^{3})/h + ... = 0 + a_{1} + a_{2}(2x+h) + a_{3}(3x^{2}+3xh+h^{2}) + ...
Evaluating this at x=0 we obtain (f(h)-a_{0})/h = a_{1} + a_{2}h + a_{3}h^{2} + ... When we let h go down to zero, the left hand side becomes the derivative of f at 0, f'(0), while the right hand side becomes a_{1}, so a_{1} = f'(0). If we let h go down to zero before setting x=0, we obtain a power series expansion for the derivative of f, f'(x) = a_{1} + 2a_{2}x + 3a_{3}x^{2} + 4a_{4}x^{3} + ...
Repeating this procedure on f' instead of f we obtain
(f'(x+h)-f'(x))/h = (a_{1}-a_{1})/h + (2a_{2}(x+h)-2a_{2}x)/h + (3a_{3}(x+h)^{2}-3a_{3}x^{2})/h + (4a_{4}(x+h)^{3}-4a_{4}x^{3})/h + ... = 0 + 2a_{2} + 3a_{3}(2x+h) + 4a_{4}(3x^{2}+3xh+h^{2}) + ...
As before, evaluating this at x=0 we obtain (f'(h)-f'(0))/h = 2a_{2} + 3a_{3}h + 4a_{4}h^{2} + ... When we let h go down to zero, the left hand side becomes the second derivative of f at 0, f''(0), while the right hand side becomes 2a_{2}, so a_{2} = f''(0)/2. If we let h go down to zero before setting x=0, we obtain a power series expansion for the second derivative of f, f''(x) = 2a_{2} + 6a_{3}x + 12a_{4}x^{2} + 20a_{5}x^{3} + ...
Iterating this procedure an arbitrary number of times, we obtain a_{k} = f^{(k)}(0)/k!, where f^{(k)} is the k^{th} derivative of f and k! = 1*2*3*...*k is k factorial. The resulting power series expansion (2) is called the Taylor expansion of f at 0, sometimes also called the Maclaurin series.
We now have the answer to the question of which functions can be represented in the form (2) above. Firstly, f must have finite derivatives of all orders at 0, otherwise some coefficients will be infinite. But this isn't sufficient. For the representation (2) to be more than a formality, the right hand side of (2) must make sense, which means it must converge. Roughly speaking, this means that the change to the right hand side as we add higher derivative terms must be decreasing with the order of the derivatives. Of course this will depend on x as well. In fact, it is typically the case that the right hand side of (2) converges only for x small enough, i.e. close enough to x=0, the point around which we based our power series expansion. The function f is called analytic over the range of x for which the right hand side of (2) converges to f. Analyticity is a nuanced property. There are elementary examples of functions that have finite derivatives of all orders at 0 but nonetheless fail to be analytic: the right hand side of (2) may converge, yet to a function different from f. Such an example is given by f(x) = e^{-1/x} when x>0 and f(x) = 0 otherwise; every derivative of this f at 0 equals 0, so its power series vanishes identically even though f does not (try to show this; it involves some fairly involved limits).
We are now in a position to apply (2) to some functions of interest. To begin with, let f(x) = sinx. Then f(0) = 0, f^{(4k)}(x) = sinx, f^{(4k+2)}(x) = -sinx, f^{(4k+1)}(x) = cosx and f^{(4k+3)}(x) = -cosx, which implies that a_{2k} = f^{(2k)}(0)/(2k)! = 0, a_{4k+1} = f^{(4k+1)}(0)/(4k+1)! = 1/(4k+1)! and a_{4k+3} = f^{(4k+3)}(0)/(4k+3)! = -1/(4k+3)!. Thus
sinx = x - x^{3}/6 + x^{5}/120 - x^{7}/5040 + ... (3)
Similarly (check on your own),
cosx = 1 - x^{2}/2 + x^{4}/24 - x^{6}/720 + ... (4)
Now, let's try f(x) = e^{x}. Clearly, f^{(k)}(x) = e^{x}, and therefore e^{x} = 1 + x + x^{2}/2 + x^{3}/6 + x^{4}/24 + x^{5}/120 + x^{6}/720 + x^{7}/5040 + ... (5)
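The expansions (3), (4) and (5) are easy to check numerically by truncating each series; a minimal sketch:

```python
import math

# Truncated Taylor series at 0, using the coefficients a_k = f^(k)(0)/k!
# derived above; 25 terms is plenty for moderate x.
def taylor_exp(x, terms=25):
    return sum(x**k / math.factorial(k) for k in range(terms))

def taylor_sin(x, terms=25):
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1) for k in range(terms))

def taylor_cos(x, terms=25):
    return sum((-1)**k * x**(2*k) / math.factorial(2*k) for k in range(terms))

for x in (-1.0, 0.5, 2.0):
    assert abs(taylor_exp(x) - math.exp(x)) < 1e-9
    assert abs(taylor_sin(x) - math.sin(x)) < 1e-9
    assert abs(taylor_cos(x) - math.cos(x)) < 1e-9
```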
The similarity of (3), (4) and (5) isn't accidental. To take advantage of it, we need a way of introducing the alternating signs of (3) and (4) into (5). The imaginary unit i, whose powers cycle through i, -1, -i, 1, offers us precisely this property. In particular
e^{ix} = 1 + ix - x^{2}/2 - ix^{3}/6 + x^{4}/24 + ix^{5}/120 - x^{6}/720 - ix^{7}/5040 + ... = (1 - x^{2}/2 + x^{4}/24 - x^{6}/720 + ... ) + (ix - ix^{3}/6 + ix^{5}/120 - ix^{7}/5040 + ... ) = cosx + isinx
which is Euler's formula for the representation of the unit circle on the complex plane. It remains for us to evaluate this expression at x equal to pi and rearrange terms. Now that we have established Euler's identity, e^{i*pi} + 1 = 0, try to answer this question: is pi^{ie} + 1 more or less than one in magnitude?
12/24/09 On the Riemann zeta function (by Ted)
The development of power series representations and analytic functions leads naturally to the extension of functions from the reals to the reals to mappings of the complex plane onto itself. For example, consider f(x) = lnx. What meaning could we assign to f if x is allowed to be complex? Using the polar coordinate representation of the complex plane, x = re^{iy}, we can set lnx = lnr + iy. But the Euler formula developed above shows us that lnr + i(y+2pi) is also a valid logarithm of x, because e^{i(y+2pi)} = e^{iy}. In fact, lnr + i(y+2kpi) is a valid logarithm of x for every integer k, even though these values are all different. Thus, the extension of f(x) to the complex plane is a multivalued mapping. This is a typical occurrence in complex analysis and it gives rise to very powerful equivalence relations that strongly constrain complex analytic functions.
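A quick numeric check of this multivaluedness, using Python's cmath (whose log returns the principal branch, with imaginary part in (-pi, pi]):

```python
import cmath
import math

# Any nonzero complex x = r e^{iy} has infinitely many logarithms
# ln r + i(y + 2k*pi): exponentiating any of them recovers x,
# because e^{2k*pi*i} = 1.
x = complex(-1.0, 1.0)
principal = cmath.log(x)  # the principal-branch logarithm
for k in (-2, -1, 0, 1, 3):
    other = principal + 2j * math.pi * k
    assert abs(cmath.exp(other) - x) < 1e-12
```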
Moving beyond power series, let's consider the harmonic series 1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6 + 1/7 + 1/8 + ... Since the 14th century this has been the archetypical example of a divergent series. In fact, it serves to illustrate what distinguishes a convergent from a divergent series. After all, the terms of this series, the inverse positive integers, go down to 0. But they don't decrease quickly enough! Here is a clear elementary argument that establishes the harmonic series' divergence:
1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6 + 1/7 + 1/8 + ... > 1 + 1/2 + (1/4 + 1/4) + (1/8 + 1/8 + 1/8 + 1/8) + ... = 1 + 1/2 + 1/2 + 1/2 + ... (6)
where we lower bounded 1/3 by 1/4; 1/5, 1/6 and 1/7 by 1/8; and so on. Since the right hand side of (6) is manifestly infinite (an infinite sum of 1/2's plus 1), so must be the left hand side, which dominates it. Incidentally, where does this argument break down for 1/2 + 1/4 + 1/8 + 1/16 + ..., which converges to 1 as we saw in an earlier entry?
The next series that will occupy us is another major accomplishment of Euler's, and it serves to sharpen significantly our sense of the balance between convergent and divergent series. It is the sum of the reciprocals of the squares of all positive integers, 1 + 1/4 + 1/9 + 1/16 + ... In order to explore this series, we'll need a little trigonometry.
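Before the trigonometry, here is a numeric contrast between the two behaviors: the harmonic partial sums grow without bound (roughly like ln n), while the partial sums of the inverse squares stay bounded. A small sketch (the helper names are ours):

```python
import math

def harmonic(n):
    # Partial sum 1 + 1/2 + ... + 1/n of the harmonic series.
    return sum(1 / k for k in range(1, n + 1))

def inverse_squares(n):
    # Partial sum 1 + 1/4 + ... + 1/n^2.
    return sum(1 / k**2 for k in range(1, n + 1))

# The doubling bound from (6): the first 2^m harmonic terms sum to at least 1 + m/2.
assert harmonic(2**10) >= 1 + 10 / 2
# Harmonic sums exceed ln(n+1), so they pass any fixed bound eventually.
assert harmonic(10_000) > math.log(10_000)
# The inverse squares, by contrast, never exceed 2 (1/k^2 < 1/(k(k-1)) telescopes).
assert inverse_squares(10_000) < 2
```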
The figure above shows the unit circle. Let w = <AOB. We can see that, when 0<w<pi/2, 0 < sinw = BC < w, since w is equal to the length of the arc BA. Also, tanw = BD and therefore the area of triangle OBD is tanw/2. The area of the circular sector OBA is pi*w/(2*pi) = w/2. But since the circular sector OBA is included in the triangle OBD, w<tanw. Thus, 0<sinw<w<tanw, and upon inverting we obtain cotw<1/w<cscw = 1/sinw.
To proceed, we'll use the Euler formula for the representation of complex numbers to obtain De Moivre's handy formula: (cosx + isinx)^{k} = e^{ikx} = cos(kx) + isin(kx). Thus,
[cos((2k+1)x) + isin((2k+1)x)]/sin^{2k+1}x = (cotx + i)^{2k+1} = [C(2k+1,0)cot^{2k+1}x - C(2k+1,2)cot^{2k-1}x + ... + (-1)^{k}C(2k+1,2k)cotx] + i[C(2k+1,1)cot^{2k}x - C(2k+1,3)cot^{2k-2}x + ... + (-1)^{k}C(2k+1,2k+1)] (7)
Let x_{r} = r*pi/(2k+1) for r = 1, 2, ..., k. Using x = x_{r} we see that the left hand side of (7) becomes real, since its imaginary part is proportional to sin((2k+1)x_{r}) = sin(r*pi) = 0. Therefore, the k distinct values cot^{2}x_{r} are the roots of the k-degree polynomial (in the variable y = cot^{2}x)
C(2k+1,1)y^{k} - C(2k+1,3)y^{k-1} + ... + (-1)^{k}C(2k+1,2k+1) = 0
Thus, the sum of the roots cot^{2}x_{r} is equal to minus the ratio of the two highest order coefficients of the polynomial. In order to see this in general, consider a general polynomial a_{n}x^{n} + a_{n-1}x^{n-1} + a_{n-2}x^{n-2} + ... + a_{1}x + a_{0} = a_{n}(x^{n} + x^{n-1}a_{n-1}/a_{n} + x^{n-2}a_{n-2}/a_{n} + ... + xa_{1}/a_{n} + a_{0}/a_{n}) = a_{n}(x - R_{1})(x - R_{2})(x - R_{3})...(x - R_{n}) = a_{n}[x^{n} - x^{n-1}(R_{1} + R_{2} + R_{3} + ... + R_{n}) + x^{n-2}(R_{1}R_{2} + R_{1}R_{3} + ... + R_{n-1}R_{n}) - ... + (-1)^{n}R_{1}R_{2}R_{3}...R_{n}]. Therefore,
cot^{2}x_{1} + cot^{2}x_{2} + cot^{2}x_{3} + ... + cot^{2}x_{k} = C(2k+1,3)/C(2k+1,1) = k(2k-1)/3
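The sum-of-roots rule used here is easy to sanity-check on a small polynomial of our own choosing, with known roots:

```python
# Vieta check on a concrete cubic with roots 1, 2, 3:
# 2(x-1)(x-2)(x-3) = 2x^3 - 12x^2 + 22x - 12,
# so the root sum should equal -(-12)/2 = 6.
coeffs = [2, -12, 22, -12]  # a_3, a_2, a_1, a_0
root_sum = -coeffs[1] / coeffs[0]
assert root_sum == 1 + 2 + 3
```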
But cot^{2}x = csc^{2}x - 1 and therefore
csc^{2}x_{1} + csc^{2}x_{2} + csc^{2}x_{3} + ... + csc^{2}x_{k} = k(2k-1)/3 + k = 2k(k+1)/3
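Both closed forms can be verified numerically for a few values of k; a quick sketch:

```python
import math

# Check the cot^2 and csc^2 sums at x_r = r*pi/(2k+1), r = 1..k,
# against the closed forms k(2k-1)/3 and 2k(k+1)/3.
for k in (1, 5, 40):
    xs = [r * math.pi / (2*k + 1) for r in range(1, k + 1)]
    cot2 = sum(1 / math.tan(x)**2 for x in xs)
    csc2 = sum(1 / math.sin(x)**2 for x in xs)
    assert abs(cot2 - k * (2*k - 1) / 3) < 1e-8
    assert abs(csc2 - 2 * k * (k + 1) / 3) < 1e-8
```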
Observe that all the x_{r} are between 0 and pi/2 and therefore, as we saw earlier, cotx_{r}<1/x_{r}<cscx_{r} and therefore cot^{2}x_{r}<1/x_{r}^{2}<csc^{2}x_{r} and so
cot^{2}x_{1} + cot^{2}x_{2} + ... + cot^{2}x_{k} = k(2k-1)/3 < [(2k+1)^{2}/pi^{2}] (1 + 1/4 + 1/9 + 1/16 + ... + 1/k^{2}) < csc^{2}x_{1} + csc^{2}x_{2} + ... + csc^{2}x_{k} = 2k(k+1)/3
which implies that pi^{2}k(2k-1)/[3(2k+1)^{2}] < 1 + 1/4 + 1/9 + 1/16 + ... + 1/k^{2} < 2pi^{2}k(k+1)/[3(2k+1)^{2}]. Letting k go to infinity, both the upper and the lower bound approach pi^{2}/6, and therefore 1 + 1/4 + 1/9 + 1/16 + ... = pi^{2}/6. This amazing fact, shown by Euler in the mid 1700s, can also be used as a means of approximating pi. Unfortunately the series converges quite slowly, requiring about 600 terms to obtain the first three significant digits of pi, namely 3.14.
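A numeric illustration of both the limit and the slow convergence (the helper name basel is ours):

```python
import math

# Partial sums of 1 + 1/4 + 1/9 + ... approach pi^2/6; the tail after
# N terms is roughly 1/N, so convergence is slow.
def basel(N):
    return sum(1 / r**2 for r in range(1, N + 1))

assert abs(basel(100_000) - math.pi**2 / 6) < 1e-4
# Estimating pi from 600 terms gives roughly three significant digits:
pi_est = math.sqrt(6 * basel(600))
assert abs(pi_est - math.pi) < 5e-3
```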
In fact, using more refined techniques, to which we may one day return, one can show that the series 1 + 1/2^{1+c} + 1/3^{1+c} + 1/4^{1+c} + ... converges for all positive c and diverges for all nonpositive c. Bernhard Riemann in the mid-19th century introduced his zeta function as an explicit generalization of these series to the complex plane, using Euler's formula. Namely
zeta(z) = 1 + 1/2^{z} + 1/3^{z} + 1/4^{z} + ...
where z is a complex number. TBC
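A numeric glimpse at the series for Re(z) > 1, truncating after N terms (the helper name zeta_partial is our own):

```python
import math

# Truncated zeta series: sum of 1/n^z for n = 1..N. For Re(z) > 1 the
# terms 1/n^z have magnitude 1/n^{Re(z)}, so the partial sums converge.
def zeta_partial(z, N):
    return sum(1 / n**z for n in range(1, N + 1))

# Real case: zeta(2) is the Basel sum pi^2/6 from the previous entry.
assert abs(zeta_partial(2.0, 10_000) - math.pi**2 / 6) < 1e-3
# A complex point with Re(z) > 1; n^z = e^{z ln n} is complex here.
val = zeta_partial(2 + 1j, 10_000)
assert isinstance(val, complex)
```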
