Asimov’s question about π
In 1977, Isaac Asimov [1] asked how many terms of the slowly converging series
π = 4 - 4/3 + 4/5 - 4/7 + 4/9 - ⋯
would you have to sum before doing better than the approximation
π ≈ 355/113.
A couple of years later Richard Johnsonbaugh [2] answered Asimov's question in the course of an article on techniques for computing the sum of series. Johnsonbaugh said you would need at least N = 3,748,630 terms. As a quick sanity check, the error in the Nth partial sum of this series is roughly 1/N, and the error in 355/113 is about 2.7 × 10⁻⁷, so you'd expect N to be in the neighborhood of 3.7 million.
Johnsonbaugh's answer is based on exact calculations. I wondered how well you'd do with N terms using ordinary floating point arithmetic. Would there be so much rounding error that the result is terrible?
I wrote the most direct implementation in Python, with no tricks to improve the accuracy.
    from math import pi

    # Sum the first N terms of the series 4 - 4/3 + 4/5 - 4/7 + ...
    s = 0
    N = 3748630
    for n in range(1, N + 1):
        s += (-1)**(n + 1) * 4/(2*n - 1)
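Did rounding error spoil the result? One quick check, using the math.pi imported above, is to compare both errors directly (this comparison is my addition, not part of the original snippet):

    # Compare the error of the partial sum with the error of 355/113
    print(abs(s - pi))          # error in the summed series
    print(abs(355/113 - pi))    # error in the rational approximation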
I intended to follow this up by showing that you could do better by summing all the positive and negative terms separately, then doing one subtraction at the end (a sketch of that variant appears below). But the naive version actually does quite well. It's essentially as accurate as 355/113, with both approximations having an error of 2.66764 × 10⁻⁷.
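For completeness, here's a minimal sketch of that separated-sums variant, reusing N from the program above (pos, neg, and s2 are names I'm introducing for illustration):

    # Sum positive terms (odd n) and negative terms (even n) separately,
    # then subtract once at the end.
    pos = sum(4/(2*n - 1) for n in range(1, N + 1, 2))
    neg = sum(4/(2*n - 1) for n in range(2, N + 1, 2))
    s2 = pos - neg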
Extended precision with bc

Next, I translated my program to bc [3] so I could control the precision. bc lets you specify your desired precision with its scale parameter.
    scale = 20       /* carry calculations to 20 decimal places */
    pi = 4*a(1)      /* a() is arctangent; requires bc's math library (bc -l) */
    s = 0
    m = 3748630
    for (n = 1; n <= m; n++) {
        s += (-1)^(n+1) * 4/(2*n - 1)
    }
Floating point precision is between 15 and 16 decimal places. I added more precision by setting scale to 20, i.e. carrying out calculations to 20 decimal places, and summed the series again.
The absolute error in the series was less than the error in 355/113 in the 14th decimal place. When I used one less term in the series, its error was larger than the error in 355/113 in the 14th decimal place. In other words, the calculations suggest Johnsonbaugh found exactly the minimum number of terms needed.
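The same comparison can be reproduced in Python with the decimal module. This is a minimal sketch rather than the bc run itself; note that bc's scale counts digits after the decimal point, while decimal's prec counts significant digits, so the two settings are only roughly comparable:

    from decimal import Decimal, getcontext

    getcontext().prec = 25  # roughly comparable to bc's scale = 20

    # Reference value of pi, good to more digits than we need
    PI = Decimal("3.14159265358979323846264338327950288")
    err_355 = abs(Decimal(355) / Decimal(113) - PI)

    N = 3748630
    s = Decimal(0)
    for n in range(1, N + 1):
        s += Decimal((-1)**(n + 1) * 4) / Decimal(2*n - 1)
        if n >= N - 1:
            # With N - 1 terms the series should lose to 355/113;
            # with N terms it should win, if Johnsonbaugh's count is exact.
            print(n, abs(s - PI) < err_355)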
I doubt Johnsonbaugh ever verified his result computationally. He doesn't mention computer calculations in his paper [4], and it would have been difficult in 1979 to have access to the necessary hardware and software.
If he had access to an Apple II at the time, it would have run at 1 MHz. My calculation took around a minute to run on a 2 GHz laptop, so I'm guessing the same calculation would have taken a day or two on an Apple II. This assumes he could find extended precision software like bc on an Apple II, which is doubtful.
The bc programming language had been written four years earlier, so someone could have run a program like the one above on a Unix machine somewhere. However, such machines were hard to find, and their owners would have been reluctant to give up a couple days of compute time for a guest to run a frivolous calculation.
[1] Isaac Asimov, Asimov on Numbers, 1977.
[2] Richard Johnsonbaugh, Summing an Alternating Series, The American Mathematical Monthly, Vol. 86, No. 8 (1979), pp. 637-648.
[3] Notice that N from the Python program became m in bc. I've used bc occasionally for years, and didn't know until now that you cannot use capital letters for variables in standard bc. I guess I never tried before. The next post explains why bc doesn't allow capital letters in variable names.
[4] Johnsonbaugh's paper does include some numerical calculations, but he sums only 500 terms, not millions, and it appears he uses only ordinary precision, not extended precision.