Computing ζ(3)
I've started reading Paul Nahin's new book In Pursuit of ζ(3). The actual title is In Pursuit of Zeta-3. I can understand why a publisher would go with such a title, but I assume most people who read this blog are not afraid of Greek letters.
I've enjoyed reading several of Nahin's books, so I was pleased to see he had a new one.
Nahin's latest book is about attempts to find a closed-form expression for the sum

ζ(3) = Σ_{n=1}^∞ 1/n³
If you replace the "3" with "s" in the sum above, you have an expression for the Riemann zeta function ζ(s), and so ζ(3) is a convenient way of describing the sum and putting it in context [1]. There are closed-form expressions for ζ(s) when s is an even integer [2], but so far nobody has found one for s = 3.
The value of ζ(3) is sometimes called Apéry's constant because in 1978 Roger Apéry was the first to prove that it is an irrational number.
The names "ζ(3)" and "Apéry's constant" are anachronisms because Euler studied the sum in the 18th century, but Riemann came along in the 19th century and Apéry in the 20th.
Naive calculation

The most obvious way to compute ζ(3) numerically is to replace the upper limit of the infinite sum with a large number N. This works, but it's very inefficient.
Let's calculate the sum with upper limit 100 using a little Python code.
>>> sum(n**-3 for n in range(1, 101))
1.2020074006596781
We can judge the accuracy of this approximation using the implementation of the zeta function in SciPy.
>>> from scipy.special import zeta
>>> zeta(3)
1.2020569031595942
We can predict how accurate our sum will be by estimating the tail of the sum using the integral test as follows.

∫_N^∞ x⁻³ dx > Σ_{n=N+1}^∞ n⁻³ > ∫_{N+1}^∞ x⁻³ dx

The integral on the left evaluates to N⁻²/2 and the integral on the right evaluates to (N+1)⁻²/2. This says that if we sum up to N, our error will be between (N+1)⁻²/2 and N⁻²/2.
We can estimate from this that if we want d decimal places, we'll need to sum 10^(d/2) terms. So to get 16 decimal places, we'd have to sum 100,000,000 terms. This is certainly not what the call to zeta(3) above is doing.
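Here's a quick numerical check of that bracket at N = 100 (my own sketch, not from the book): the error should land between 1/(2 · 101²) ≈ 4.90 × 10⁻⁵ and 1/(2 · 100²) = 5 × 10⁻⁵.

>>> N = 100
>>> err = zeta(3) - sum(n**-3 for n in range(1, N + 1))
>>> 1/(2*(N + 1)**2) < err < 1/(2*N**2)
True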
A better approach

Nahin's book derives an equation for ζ(3) due to Ernst Kummer (1810-1893):

ζ(3) = 5/4 - Σ_{n=2}^∞ 1/(n³(n² - 1))
Because the denominator in the sum is a 5th degree polynomial, we get faster convergence than we do from directly evaluating the definition of ζ(3). We could bracket the error using the integral test as above, and it would show that the error is O(N⁻⁴).
Let's try this out in Python.
>>> f = lambda n: (n**3 * (n**2-1))**-1
>>> 1.25 - sum(f(n) for n in range(2, 101))
1.2020569056101726
>>> _ - zeta(3)
2.4505784068651337e-09
This says we get 8 correct decimal places from summing 100 terms, which is just what we'd expect from our error estimate.
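As a further sanity check (my own quick experiment, not something from the book), the error should shrink by roughly a factor of 10⁴ when N grows by a factor of 10, since it's O(N⁻⁴):

>>> kummer = lambda N: 1.25 - sum(f(n) for n in range(2, N + 1))
>>> ratio = (kummer(100) - zeta(3)) / (kummer(1000) - zeta(3))
>>> 9000 < ratio < 11000
True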
Doing even better

Kummer's approach is better than naively computing ζ(3) from its definition, but there are much more efficient methods.
With naive summation up to N, we get about 2 log₁₀ N correct decimals. With Kummer's sum we get twice as many, about 4 log₁₀ N correct decimals.
But there are methods for which the number of correct decimals is linear in N rather than logarithmic. Wikipedia mentions a method that gives 5.04 N decimal places.
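One concrete example of a series that converges linearly in the number of terms (not necessarily the method Wikipedia has in mind) is the central binomial sum that plays a role in Apéry's proof:

ζ(3) = (5/2) Σ_{n=1}^∞ (-1)^(n-1) / (n³ C(2n, n))

Each term is roughly 4 times smaller than the previous one, so each term contributes about log₁₀ 4 ≈ 0.6 correct decimals, and around 25 terms already exhaust double precision. A quick sketch using exact rational arithmetic:

>>> from fractions import Fraction
>>> from math import comb
>>> apery = Fraction(5, 2) * sum(Fraction((-1)**(n - 1), n**3 * comb(2*n, n)) for n in range(1, 26))
>>> abs(float(apery) - zeta(3)) < 1e-15
True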
I may write about more methods for computing ζ(3) as I continue to read Nahin's book.
[1] That's how the Riemann zeta function is defined for s with real part greater than 1. For the rest of the complex plane, except s = 1, the Riemann zeta function is defined by analytic continuation. The sum is not valid for s with real part less than 1, and so, for example, the sum makes no sense at s = -1. But the analytic continuation of the sum is valid at -1, and equals -1/12 there. This leads to the rigorous justification of the otherwise nonsensical expression
1 + 2 + 3 + ... = -1/12.
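As an aside of my own (not from the book), the mpmath library computes the analytic continuation directly, so you can check this value numerically:

>>> import mpmath
>>> abs(mpmath.zeta(-1) + mpmath.mpf(1)/12) < 1e-15
True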
[2] For s = 2n where n is a positive integer,

ζ(2n) = (-1)^(n+1) (2π)^(2n) B_{2n} / (2 (2n)!)
For s = 0 and negative integers, we have ζ(0) = -1/2 and

ζ(-n) = -B_{n+1}/(n+1)

for n = 1, 2, 3, …
Here the Bs are the Bernoulli numbers, which have closed-form expressions.
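As a quick check of the even case (my own sketch, using the well-known value B₄ = -1/30), setting n = 2 should reproduce ζ(4) = π⁴/90:

>>> from math import pi, factorial
>>> n, B4 = 2, -1/30
>>> closed_form = (-1)**(n + 1) * B4 * (2*pi)**(2*n) / (2 * factorial(2*n))
>>> abs(closed_form - zeta(4)) < 1e-14
True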