## Flogging Infinity

(Warning: Much of this was written in a cloud of decongestants. If there are any errors, well, you know why….)

Infinity can be a slippery concept, and it causes no end of woes to mathematicians. But, over the years, they have gotten a better and better handle on the concept. This was helped greatly by the work of Georg Cantor, who developed the basic mathematical tools used today for grappling with infinities. But he was by no means the first, nor the last. The legendary 18th-century mathematician Leonhard Euler made great strides in devising methods for dealing with divergent infinite series, and the definitive work on that subject is G. H. Hardy's 1949 book *Divergent Series*.

Let’s get a common misconception out of the way: Something divided by zero most certainly does NOT equal infinity. Division by zero is undefined; the operation is meaningless under any mathematical definition of division. What is true is that, in the limit as the divisor of an expression approaches zero (while the numerator approaches a nonzero value), the value of the expression grows without bound toward either positive or negative infinity. (Depending upon the function, the sign can differ depending upon which direction the limit is taken from.) A function exhibiting such behavior is said to have a discontinuity at that point.

Knowing how to deal with infinities and divergences can be crucial. The need to tame divergences in the blackbody radiation problem led Planck to take the first step in creating quantum mechanics (although, to be fair, Planck basically reformulated the problem in a form that didn’t result in infinities). Quantum electrodynamics calculations required the creation of the “dippy procedure” of renormalization. And, on the bleeding edge of theoretical physics, the challenge of reconciling quantum theory and general relativity is fraught with seemingly intractable divergences. But such heady problems aren’t the only places where divergences crop up. They can arise in the most seemingly simple math problems.

By way of introduction, I would like you to take a moment and watch this video. Go ahead. I’ll wait.

Did you watch it? Did your brain threaten to have an aneurysm partway through it? Yeah, mine did as well.

I can hear your protests already. “That can’t be right! Infinity doesn’t equal -1. That same series can’t be equal to two different values. For crying out loud, there aren’t even any negative values in that sum! I call shenanigans!”

As well you might be tempted to do. Math has a reputation for being among the most rigorous of disciplines. No hand-waving allowed. The basic explanation is that there is more going on under the hood than is being described. Working with infinite series is a tricky business, and there is a lot of bookkeeping to take into account.

But, before I dive into explaining what is going on, here is a recap for those of you who didn’t bother to watch the video (for whatever bizarre reason).

#### The Problem

Consider the following infinite series:

$\sum\limits_{n=0}^\infty {2^n} = 1 + 2 + 4 + 8 + \ldots$

Now, it is pretty obvious that this series diverges to infinity. But we know that we can safely multiply anything by 1 and get what we started with. We also know that 1 can be expressed as (2-1), so let’s multiply our series by (2-1):

$\begin{array}{lcr}(2-1) \sum\limits_{n=0}^\infty {2^n} & = & (2-1) (1 + 2 + 4 + 8 + \ldots )\\ & = & 2 + 4 + 8 + 16 + \ldots\\ & & - 1 - 2 - 4 - 8 - 16 - \ldots\end{array}$

Cancel out the offsetting terms, and we are left with

$\sum\limits_{n=0}^\infty {(2-1)2^n} = -1$

Scary, n’est-ce pas?

#### The Explanation

To be honest, there is a little hanky-panky in the procedure described above. The cancellation of terms is not entirely rigorous. Every time a positive term and the corresponding negative term are cancelled out, there is still another, larger positive term offsetting the initial -1. But then there is another negative term to offset that one, and so on, forever. Programmers will recognize the pattern: each cancellation merely defers the problem one step further, like a loop that never terminates. Dealing with infinite series is a tricky business.
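A quick numerical sketch (plain Python; the function name is mine, purely for illustration) makes the bookkeeping concrete. Truncate the series after a finite number of terms and perform the same cancellation, and the promised -1 always arrives in the company of one huge, uncancelled positive term:

```python
# Truncate the series at N terms and redo the cancellation by hand.
# (2 - 1) * S_N telescopes to -1 plus one surviving positive term.

def leftover_after_cancellation(N):
    """Return (2-1)*S_N computed the 'cancelling' way, where
    S_N = 1 + 2 + 4 + ... + 2^N."""
    doubled = [2 ** (n + 1) for n in range(N + 1)]   # 2 + 4 + ... + 2^(N+1)
    original = [2 ** n for n in range(N + 1)]        # 1 + 2 + ... + 2^N
    # Every term of `original` except the leading 1 cancels a term of
    # `doubled` -- but the largest term of `doubled` has no partner.
    return sum(doubled) - sum(original)

for N in (4, 10, 20):
    print(N, leftover_after_cancellation(N))  # always -1 + 2^(N+1)
```

No matter how far out the truncation goes, the result is -1 plus a leftover term that doubles at every step. Only by handling the infinite case with proper machinery does that runaway term get dealt with legitimately.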

But there is a legitimate mathematical procedure roughly equivalent to the above, and which yields the same result. The idea is to take a step back, and consider a more generic expression which reduces to the same problem. More specifically, I’m going to start with a function, and show that the infinite series we are describing is a legitimate power series expansion of that function. Yep, we are working the problem from back to front!

Consider the following expression (shown in the graph to the right, after a variable substitution of y=2x):

$f(y) = \frac{1}{1-y}$

The Maclaurin Series expansion for this is:

$f(y) = \frac{1}{1-y} = \sum\limits_{n=0}^\infty {y^n} = 1 + y + y^2 + y^3 + y^4 + \ldots$

Now let us substitute y=2x:

$f(x) = 1 + 2x + 4x^2 + 8x^3 + \ldots + 2^nx^n + \ldots = \frac{1}{1-2x}$

Well, fine and dandy. For x=1, the expansion reduces back to our original series, and the equivalence shown does in fact work out to -1. But there is a problem. That series still diverges. In fact, it will diverge for any value of x whose absolute value is greater than or equal to 1/2. (In other words, the radius of convergence for this series is 1/2.) So how can this equivalence be correct?

#### Analytic Continuation to the Rescue!

Earlier, we started with a Maclaurin Series expansion for our function, which is simply a Taylor Series expansion around the origin. Now we can take a Taylor Series expansion around any point within the radius of convergence of our original expansion to try to extend the region for which the expansion is valid, but we keep coming up against that discontinuity at x=1/2. Imagine being a tightrope artist, where the real number line is the tightrope, and there is a pole at x=1/2 which, ideally, we would like to get past. Well, the simple solution is to lower the tightrope to the ground, such that we can simply walk around the pole! In this case, the ground is the complex plane.

If we extend the domain of $f(x)$ from the real numbers $\mathbb{R}$ to the complex plane $\mathbb{C}$, the function is defined everywhere except at the pole at $x=\frac{1}{2}$.  We can take a Taylor Series expansion around a point off to the side of the real number line, but still within the radius of convergence of our original Maclaurin Series expansion. (With the domain extended to the complex plane, the region of convergence is now a disk.) Then we can take another Taylor Series expansion within the disk of convergence of THAT expansion, and so forth, building overlapping disks of convergence, until we have covered the entire complex plane (except for the pole at x=1/2), thus patching together a region over which the expansion is valid. With the domain thus extended, $f(1)=-1$.

This approach of changing the domain of a function to sidestep divergences and other difficulties is known as analytic continuation.
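Here is a small numerical illustration of walking around the pole (plain Python; for brevity the Taylor coefficients are read off the known closed form rather than derived step by step from the previous expansion, as a full continuation would do). Expanding around a center off the real axis yields a disk of convergence that contains $x=1$, and the re-expanded series really does sum to -1 there:

```python
# Taylor coefficients of f(x) = 1/(1 - 2x) around a center c:
#   f(x) = sum over n of [2^n / (1 - 2c)^(n+1)] * (x - c)^n,
# valid on the disk |x - c| < |c - 1/2| (the distance to the pole).

def taylor_sum(x, c, N=200):
    """Sum N terms of the Taylor expansion of 1/(1-2x) around center c."""
    total = 0j
    for n in range(N):
        total += 2 ** n / (1 - 2 * c) ** (n + 1) * (x - c) ** n
    return total

c = 1 + 0.5j                 # a center off the real number line
radius = abs(c - 0.5)        # distance from c to the pole, about 1.118
print(abs(1 - c) < radius)   # True: x = 1 lies inside this disk
print(taylor_sum(1, c))      # approximately (-1+0j)
```

Stepping off the real line is the "lowering the tightrope" move: around this complex center, the point x=1 is an ordinary interior point of the disk of convergence, and the series agrees with $f(1)=-1$.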

#### Another Example

Consider for a moment the Riemann hypothesis, the proof of which remains one of the longest-standing goals in mathematics. The hypothesis states that the non-trivial zeros of the Riemann zeta function all lie along the critical line, where the real part of each zero equals 1/2. But the series defining the Riemann zeta function converges only for complex values whose real part is greater than 1. Due to this, the zeta function must be analytically continued in order for the hypothesis even to be applicable. Not only are the consequences of the Riemann hypothesis of immense importance to the field of number theory, but it turns out that the distribution of zeros along the critical line bears an uncanny similarity to the distribution of energy levels in heavy atomic nuclei.
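Analytic continuation of the zeta function can also be made concrete in a few lines. The sketch below (plain Python; the acceleration scheme and term count are my own illustrative choices, not anything canonical) uses the alternating Dirichlet eta series, which converges whenever the real part of s is positive, to evaluate zeta at s = 1/2, a point where zeta's defining series diverges:

```python
def zeta(s, terms=60):
    """Approximate zeta(s) for Re(s) > 0, s != 1, via the Dirichlet
    eta function.  eta(s) = sum_{n>=1} (-1)^(n-1) / n^s converges for
    Re(s) > 0, and zeta(s) = eta(s) / (1 - 2^(1-s)) -- an analytic
    continuation of zeta in miniature, past the Re(s) > 1 barrier."""
    # Partial sums of the alternating series ...
    sums, total = [], 0j
    for n in range(1, terms + 1):
        total += (-1) ** (n - 1) * n ** (-s)
        sums.append(total)
    # ... accelerated by repeatedly averaging neighboring partial sums
    # (a variant of Euler's transformation for alternating series).
    while len(sums) > 1:
        sums = [(a + b) / 2 for a, b in zip(sums, sums[1:])]
    return sums[0] / (1 - 2 ** (1 - s))

print(zeta(2))    # close to pi^2/6 = 1.6449..., the convergent region
print(zeta(0.5))  # close to -1.46035..., where zeta's own series diverges
```

The value at s = 1/2 sits squarely on the critical line, territory the original series cannot reach; it is only through continuations like this (and more powerful ones) that the hypothesis can be stated and tested at all.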