John D. Cook

Link https://www.johndcook.com/blog
Feed http://feeds.feedburner.com/TheEndeavour?format=xml
Updated 2025-06-07 13:01
Sarkovsky’s theorem
The previous post explained what is meant by period three implies chaos. This post is a follow-on that looks at Sarkovsky’s theorem, which is mostly a generalization of that theorem, but not entirely [1]. First of all, Mr. Sarkovsky is variously known as Sharkovsky, Sharkovskii, etc. As with many Slavic names, his name can be anglicized […]
Period three implies chaos
One of the most famous theorems in chaos theory, maybe the most famous, is that “period three implies chaos.” This compact statement comes from the title of a paper [1] by the same name. But what does it mean? This post will look at what the statement means, and the next post will look at […]
Better approximation for ln, still doable by hand
A while back I presented a very simple algorithm for computing natural logs: log(x) ≈ (2x – 2)/(x + 1) for x between exp(-0.5) and exp(0.5). It’s accurate enough for quick mental estimates. I recently found an approximation by Ronald Doerfler that is a little more complicated but much more accurate: log(x) ≈ 6(x – […]
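A quick way to see how good this is (a minimal Python sketch, not code from the post) is to compare the rational approximation with math.log over the stated interval:

    import math

    def approx_ln(x):
        # simple rational approximation for the natural log
        return (2*x - 2) / (x + 1)

    lo, hi = math.exp(-0.5), math.exp(0.5)
    xs = [lo + k * (hi - lo) / 1000 for k in range(1001)]
    print(max(abs(approx_ln(x) - math.log(x)) for x in xs))  # worst absolute error on the interval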
Beta distribution with given mean and variance
It occurred to me recently that a problem I solved numerically years ago could be solved analytically, the problem of determining beta distribution parameters so that the distribution has a specified mean and variance. The calculation turns out to be fairly simple. Maybe someone has done it before. Problem statement: The beta distribution has two […]
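For reference, the standard method-of-moments calculation gives this kind of result (a sketch, not necessarily the post's derivation): if a beta(α, β) distribution is to have mean μ and variance σ², set ν = μ(1 − μ)/σ² − 1; then α = μν and β = (1 − μ)ν, provided σ² < μ(1 − μ).

    def beta_params(mu, sigma2):
        # method-of-moments estimates; requires 0 < mu < 1 and sigma2 < mu*(1 - mu)
        nu = mu * (1 - mu) / sigma2 - 1
        return mu * nu, (1 - mu) * nu

    print(beta_params(0.3, 0.01))  # (alpha, beta) giving mean 0.3 and variance 0.01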
Close but no cigar
The following equation is almost true. And by almost true, I mean correct to well over 200 decimal places. This sum comes from [1]. Here I will show why the two sides are very nearly equal and why they’re not exactly equal. Let’s explore the numerator of the sum with a little code. >>> from […]
Arithmetic-geometric mean
The previous post made use of both the arithmetic and geometric means. It also showed how both of these means correspond to different points along a continuum of means. This post combines those ideas. Let a and b be two positive numbers. Then the arithmetic and geometric means are defined by A(a, b) = (a […]
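A minimal sketch of the arithmetic-geometric mean iteration, assuming the usual definition (repeatedly replace the pair (a, b) with its arithmetic and geometric means until they agree):

    import math

    def agm(a, b, tol=1e-15):
        # the two means converge quickly to a common limit, the AGM of a and b
        while abs(a - b) > tol * max(a, b):
            a, b = (a + b) / 2, math.sqrt(a * b)
        return a

    print(agm(1, 2))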
Higher roots and r-means
The previous post looked at a simple method of finding square roots that amounts to a special case of Newton’s method, though it is much older than Newton’s method. We can extend Newton’s method to find cube roots and nth roots in general. And when we do, we begin to see a connection to r-means. […]
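The standard Newton iteration for x^n = a is x ← ((n − 1)x + a/x^(n−1))/n; here is a small sketch of that iteration (not necessarily the form used in the post):

    def nth_root(a, n, x=1.0, iterations=20):
        # Newton's method applied to x**n - a = 0
        for _ in range(iterations):
            x = ((n - 1) * x + a / x**(n - 1)) / n
        return x

    print(nth_root(2, 3))  # cube root of 2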
Calculating square roots
Here’s a simple way to estimate the square root of a number x. Take a guess g at the root and compute the average of g and x/g. If you want to compute square roots mentally or with pencil and paper, how accurate can you get with this method? Could you, for example, get within […]
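In code, the method is just averaging the guess with x divided by the guess; a minimal sketch:

    def sqrt_estimate(x, g, steps=1):
        # each step replaces the guess with the average of g and x/g
        for _ in range(steps):
            g = (g + x / g) / 2
        return g

    print(sqrt_estimate(10, 3), 10 ** 0.5)  # one step from the guess 3 vs. the true value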
Efficiently solving Kepler’s equation
A couple years ago I wrote a blog post on Kepler’s equation M + e sin E = E. Given mean anomaly M and eccentricity e, you want to solve for eccentric anomaly E. There is a simple way to solve this equation. Define f(E) = M + e sin E and take an initial guess at the solution and […]
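A sketch of the simple fixed-point approach described in the excerpt (the post goes on to discuss a more efficient method): start with E = M and iterate E ← M + e sin E.

    import math

    def solve_kepler(M, e, iterations=50):
        # fixed-point iteration for E = M + e*sin(E)
        E = M
        for _ in range(iterations):
            E = M + e * math.sin(E)
        return E

    E = solve_kepler(1.0, 0.2)
    print(E, E - 0.2 * math.sin(E))  # second value should be close to M = 1.0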
Unusually round exponential sum
The exponential sum page on this site draws lines between the consecutive partial sums of an exponential sum where m is the month, d is the day, and y is the last two digits of the year. The sum for today is unusually round. By contrast, the sum from yesterday is nowhere near round. Out of curiosity, I […]
Hypotenuse approximation
Ashley Kanter left a comment on Tuesday’s post Within one percent with an approximation I’d never seen: “One that I find handy is the hypotenuse of a right triangle with other sides a and b (where a < b) can be approximated to within 1% by 5(a+b)/7 when 1.04 ≤ b/a ≤ 1.50.” That sounds crazy, but it’s right. […]
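A quick numerical check of the claim (a sketch; it fixes a = 1 and sweeps the ratio b/a over the stated range):

    import math

    worst = 0.0
    for k in range(1001):
        a, b = 1.0, 1.04 + k * (1.50 - 1.04) / 1000
        exact = math.hypot(a, b)
        worst = max(worst, abs(5 * (a + b) / 7 - exact) / exact)
    print(worst)  # largest relative error, just under 0.01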
Coulomb’s constant
Richard Feynman said nearly everything is really interesting if you go into it deeply enough. In that spirit I’m going to dig into the units on Coulomb’s constant. This turns out to be an interesting rabbit trail. Coulomb’s law says that the force between two charged particles is proportional to the product of their charges […]
Within one percent
This post looks at some common approximations and determines the range over which they have an error of less than 1 percent. So everywhere in this post “≈” means “with relative error less than 1%.” Whether 1% relative error is good enough completely depends on context. Constants: The familiar approximations for π and e are […]
Recursive grep
The regular expression search utility grep has a recursive switch -R, but it may not work like you’d expect. Suppose you want to find the names of all .org files in your current directory and below that contain the text “cheese.” You have four files, two in the working directory and two below, that all contain […]
Calculating dogleg severity
Dogleg severity (DLS) is essentially what oilfield engineers call curvature. Oil wells are not simply vertical holes in the ground. The boreholes curve around underground, even if the intent is to drill a perfectly straight hole. With horizontal drilling the curvature can be substantial. Dogleg severity is calculated by measuring inclination and azimuth every 100 […]
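One common way to compute the dogleg angle between two survey stations uses the dot product of the two direction vectors determined by inclination and azimuth; a sketch under that assumption (the post may present the formula differently):

    import math

    def dogleg_angle(inc1, azi1, inc2, azi2):
        # angle (radians) between two directions given by inclination and azimuth
        c = (math.cos(inc1) * math.cos(inc2)
             + math.sin(inc1) * math.sin(inc2) * math.cos(azi2 - azi1))
        return math.acos(max(-1.0, min(1.0, c)))  # clamp against rounding error

    # dogleg severity is this angle normalized per 100 ft of measured depth
    print(math.degrees(dogleg_angle(math.radians(10), math.radians(60),
                                    math.radians(12), math.radians(65))))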
Solving for Möbius transformation coefficients
Möbius transformations are functions of the form f(z) = (az + b)/(cz + d) where ad – bc ≠ 0. A Möbius transformation is uniquely determined by its values at three points. Last year I wrote a post that mentioned how to determine the coefficients of a Möbius transformation. There I said: The unique bilinear transform sending z1, z2, and z3 to […]
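One generic way to find the coefficients, not necessarily the closed-form solution the post discusses, is to solve the homogeneous linear system a z_i + b − c z_i w_i − d w_i = 0 over the three point pairs; a sketch:

    import numpy as np

    def mobius_coefficients(z, w):
        # null vector of the 3x4 system gives (a, b, c, d) up to a scalar multiple
        M = np.array([[zi, 1, -zi * wi, -wi] for zi, wi in zip(z, w)], dtype=complex)
        return np.linalg.svd(M)[2][-1]

    a, b, c, d = mobius_coefficients([0, 1, 2], [1, 2, 3])
    print((a * 1 + b) / (c * 1 + d))  # should print 2, since 1 maps to 2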
Smallest denominator for given accuracy
The following table gives the best rational approximations to π, e, and φ (golden ratio) for a given accuracy goal. Here “best” means the fraction with the smallest denominator that meets the accuracy requirement. I found these fractions using Mathematica’s Convergents function. For any irrational number, the “convergents” of its continued fraction representation give a […]
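The post uses Mathematica's Convergents; as a cross-check, here is a brute-force Python sketch that finds the smallest denominator meeting a given accuracy goal directly:

    import math

    def best_fraction(x, tol):
        # for each denominator q, the best numerator is round(x*q);
        # return the first (p, q) that lands within the tolerance
        q = 1
        while True:
            p = round(x * q)
            if abs(x - p / q) < tol:
                return p, q
            q += 1

    print(best_fraction(math.pi, 1e-6))  # (355, 113)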
Simple approximation for Gamma function
I find simple approximations more interesting than highly accurate approximations. Highly accurate approximations can be interesting too, in a different way. Somebody has to write the code to compute special functions to many decimal places, and sometimes that person has been me. But somewhat ironically, these complicated approximations are better known than simple approximations. One […]
Mentally computing e^x
A few days ago I wrote about how to estimate 10^x. This is an analogous post for exp(x) = e^x. We will assume -0.5 ≤ x ≤ 0.5. You can bootstrap your way from there to other values of x. For example, exp(1.3) = exp(1 + 0.3) = e exp(0.3) and exp(0.8) = exp(1 – […]
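Inverting the log approximation from the related posts suggests the rational approximation exp(x) ≈ (2 + x)/(2 − x) on this interval; the excerpt doesn't show the post's exact formula, so treat that as an assumption. A quick check:

    import math

    def approx_exp(x):
        # rational approximation obtained by inverting ln(x) ≈ 2(x - 1)/(x + 1)
        return (2 + x) / (2 - x)

    xs = [-0.5 + k / 1000 for k in range(1001)]
    print(max(abs(approx_exp(x) - math.exp(x)) / math.exp(x) for x in xs))  # worst relative error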
What makes the log10 trick work?
In my post on mentally calculating logarithms, I showed that log10 x ≈ (x – 1)/(x + 1) for 1/√10 ≤ x ≤ √10. You could convert this into an approximation for logs in any base by multiplying by the right scaling factor, but why does it work out so simply for base 10? Define […]
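A quick numerical check of the approximation over the stated interval (a sketch, not code from the post):

    import math

    def approx_log10(x):
        return (x - 1) / (x + 1)

    lo, hi = 1 / math.sqrt(10), math.sqrt(10)
    xs = [lo + k * (hi - lo) / 1000 for k in range(1001)]
    print(max(abs(approx_log10(x) - math.log10(x)) for x in xs))  # worst absolute error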
Simple approximation for surface area of an ellipsoid
After writing the previous post, I wondered where else you might be able to use r-means to create accurate approximations. I thought maybe this would apply to the surface area of an ellipsoid, and a little searching around showed that Knud Thomsen thought of this in 2004. The general equation for the surface of an […]
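Thomsen's approximation is usually quoted as S ≈ 4π((a^p b^p + a^p c^p + b^p c^p)/3)^(1/p) with p ≈ 1.6075, where a, b, c are the semi-axes; a sketch based on that commonly cited form, which may differ in detail from the post:

    import math

    def ellipsoid_area(a, b, c, p=1.6075):
        # Knud Thomsen's approximate surface area formula
        return 4 * math.pi * ((a**p * b**p + a**p * c**p + b**p * c**p) / 3) ** (1 / p)

    print(ellipsoid_area(1, 1, 1), 4 * math.pi)  # exact for a sphere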
Simple approximation for perimeter of an ellipse
The perimeter of an ellipse cannot be computed in closed form. That is, no finite combination of elementary functions will give you the exact value. But we will present a simple approximation that is remarkably accurate. So this post has two parts: exact calculation, and simple approximation. Exact perimeter: The perimeter can be computed exactly […]
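For comparison, a sketch that computes the exact perimeter with SciPy's complete elliptic integral of the second kind and Ramanujan's well-known approximation (which may or may not be the approximation the post presents):

    import math
    from scipy.special import ellipe

    def perimeter_exact(a, b):
        # 4*a*E(m), with m the squared eccentricity; assumes a >= b > 0
        return 4 * a * ellipe(1 - (b / a) ** 2)

    def perimeter_ramanujan(a, b):
        # Ramanujan's approximation
        return math.pi * (3 * (a + b) - math.sqrt((3 * a + b) * (a + 3 * b)))

    print(perimeter_exact(2, 1), perimeter_ramanujan(2, 1))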
Books and revealed preferences
Revealed preferences are the preferences we demonstrate by our actions. These may be different from our stated preferences. Even if we’re being candid, we may not be self-aware. One of the secrets to the success of Google’s PageRank algorithm is that it ranks based on revealed preferences: If someone links to a site, they’re implicitly […]
Mentally calculating 10^x
This is the last in a sequence of three posts on mental calculation. The first looked at computing sine, cosine, and tangent in degrees. The second looked at computing logarithms, initially in base 10 but bootstrapping from there to other bases as well. In the previous post, we showed that log10 x ≈ (x-1)/(x+1) for […]
Mentally calculating logarithms
The previous post looked at approximations for trig functions that are simple enough to compute without a calculator. I wondered whether I could come up with something similar for logarithms. I start with log base 10. Later in the post I show how to find logs in other bases from logs base 10. […]
Simple trig function approximations
Anthony Robin gives three simple approximations for trig functions in degrees in [1]. The following plots show that these approximations are pretty good. It’s hard to distinguish the approximate and exact curves. The accuracy of the approximations is easier to see when we subtract off the exact values. The only problem is that the tangent […]
Computing Fourier series coefficients with the FFT
The Discrete Fourier Transform (DFT) is a mathematical function, and the Fast Fourier Transform (FFT) is an algorithm for computing that function. Since the DFT is almost always computed via the FFT, the distinction between the two is sometimes lost. It is often not necessary to distinguish between the two. In my previous post, I […]
Support of a signal and its FFT
The previous post looked at the Fourier uncertainty principle. This post looks at an analogous result for the discrete Fourier transform. The uncertainty principle for the (continuous) Fourier transform says a signal cannot be localized in both the time domain and the frequency domain. The more bunched up a function is, the more spread out […]
Fourier uncertainty principle
Heisenberg’s uncertainty principle says there is a limit to how well you can know both the position and momentum of anything at the same time. The product of the uncertainties in the two quantities has a lower bound. There is a closely related principle in Fourier analysis that says a function and its Fourier transform […]
Fourier transforms in Mathematica
Unfortunately there are many slightly different ways to define the Fourier transform. So the first two questions when using Mathematica (or any other software) to compute Fourier transforms are which definition of the Fourier transform it uses, and what to do if you want to use a different definition. The answer to the first question […]
Bessel determinants
The Bessel functions J and Y are analogous to sine and cosine. Bessel functions come up in polar coordinates the way sines and cosines come up in rectangular coordinates. There are J and Y functions of various orders, conventionally written with a subscript ν. I recently ran across a curious set of relations between these […]
Computing normal probabilities with a simple calculator
If you need to calculate Φ(x), the CDF of a standard normal random variable, but don’t have Φ on your calculator, you can use the approximation [1] Φ(x) ≈ 0.5 + 0.5*tanh(0.8 x). If you don’t have a tanh function on your calculator, you can use tanh(0.8x) = (exp(1.6x) – 1) / (exp(1.6x) + 1). […]
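A sketch of the approximation, written only in terms of exp so it works on a very basic calculator:

    import math

    def phi_approx(x):
        # 0.5 + 0.5*tanh(0.8x), with tanh(t) = (exp(2t) - 1)/(exp(2t) + 1)
        t = math.exp(1.6 * x)
        return 0.5 + 0.5 * (t - 1) / (t + 1)

    print(phi_approx(1.0))  # compare with Phi(1) ≈ 0.8413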
Illustrating Gershgorin disks with NumPy
Gershgorin’s theorem gives bounds on the locations of eigenvalues for an arbitrary square complex matrix. The eigenvalues are contained in disks, known as Gershgorin disks, centered on the diagonal elements of the matrix. The radius of the disk centered on the kth diagonal element is the sum of the absolute values of the elements in […]
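A small NumPy sketch of the idea (not necessarily the post's code): compute the centers and radii, then verify that every eigenvalue lies in at least one disk.

    import numpy as np

    def gershgorin_disks(A):
        # centers are the diagonal entries; radii are the off-diagonal absolute row sums
        centers = np.diag(A)
        radii = np.abs(A).sum(axis=1) - np.abs(centers)
        return centers, radii

    A = np.array([[4.0, 1.0, 0.5],
                  [0.2, 3.0, 0.3],
                  [0.1, 0.1, 1.0]])
    centers, radii = gershgorin_disks(A)
    eigenvalues = np.linalg.eigvals(A)
    print(all(any(abs(ev - c) <= r for c, r in zip(centers, radii)) for ev in eigenvalues))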
Calculating π with factorials
In honor of Pi Day, I present the following equation for calculating π using factorials. It’s not a very useful formula for π, but an amusing one. It takes a practical formula for approximating factorials, Stirling’s formula, and turns it around to make an impractical formula for approximating π. It does converge to π albeit […]
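Solving Stirling's formula n! ≈ √(2πn)(n/e)^n for π gives π ≈ (n!)² e^(2n) / (2 n^(2n+1)); a sketch along those lines (the post's exact formula isn't shown in the excerpt):

    from math import e, factorial

    def pi_from_factorial(n):
        # Stirling's approximation turned around to estimate pi
        return factorial(n) ** 2 * e ** (2 * n) / (2 * n ** (2 * n + 1))

    for n in (5, 10, 20):
        print(n, pi_from_factorial(n))  # converges to pi, slowly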
Pareto and Pandas
This post muses about what it means to learn a software library. I’ll use Pandas as an example, but the post isn’t just about Pandas. Suppose you say “I want to learn Pandas.” That implicitly assumes Pandas is one thing, and in a sense it is. In another sense Pandas is hundreds of things. At the […]
i^i^i …
This post plots the sequence i, i^i, i^(i^i), … That is, we define a sequence by z1 = i and zk = i^(zk−1) for k > 1. I ran across this in [1], but the sequence goes back as far as Euler. Here’s a plot of the points. This plot has three spiral arms, as if there are three sequences converging to […]
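A minimal sketch of the iteration, using Python's principal branch of complex exponentiation:

    # iterate z -> i**z and watch the terms spiral toward the limit
    z = 1j
    for k in range(1, 21):
        print(k, z)
        z = 1j ** z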
Broadcasting and functors
In my previous post, I looked at the map Δ that takes a column vector to a diagonal matrix. I even drew a commutative diagram, which foreshadows a little category theory. Suppose you have a function f of a real or complex variable. To an R programmer, if x is a vector, it’s obvious that […]
Moving between vectors and diagonal matrices
This is the first of two posts on moving between vectors and diagonal matrices. The next post is Broadcasting and functors. Motivation: When I first saw the product of two vectors in R, I was confused. If x and y are vectors, what does x*y mean? An R programmer would say “You multiply components together, […]
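A small NumPy illustration of the correspondence (a sketch, not the post's code): the componentwise product of two vectors matches the product of the corresponding diagonal matrices.

    import numpy as np

    x = np.array([1.0, 2.0, 3.0])
    y = np.array([4.0, 5.0, 6.0])

    D = np.diag(x) @ np.diag(y)            # product of diagonal matrices
    print(np.allclose(np.diag(D), x * y))  # True: its diagonal is the componentwise product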
Adding tubes to knots
Several months ago I wrote a blog post about Lissajous curves and knots that included the image below. Here’s an improved version of the same knot. The original image was like tying the knot in thread. The new image is like tying it in rope, which makes it easier to see. The key was to […]
A stiffening spring
Imagine a spring with stiffness k1 attached to a ceiling and a mass m1 hanging from the spring. There’s a second spring attached to the first mass with stiffness k2 and a mass m2 hanging from that. The motion of the system is described by a pair of coupled differential equations. If the second spring were […]
tcgrep: grep rewritten in Perl
In The Perl Cookbook, Tom Christiansen gives his rewrite of the Unix utility grep that he calls tcgrep. You don’t have to know Perl to use tcgrep, but you can send it Perl regular expressions. Why not grep with PCRE? You can get basically the same functionality as tcgrep by using grep with its PCRE […]
Scaled Beta2 distribution
I recently ran across a new probability distribution called the “Scaled Beta2” or “SBeta2” for short in [1]. The distribution has a positive argument x and positive parameters p, q, and b. It is heavy-tailed: for large x, the probability density is O(x^(−q−1)), the same as a Student-t distribution with q degrees […]
Applications of (1-z)/(1+z)
I keep running into the function f(z) = (1-z)/(1+z). The most recent examples include applications to radio antennas and mental calculation. More on these applications below. Involutions: A convenient property of our function f is that it is its own inverse, i.e. f(f(z)) = z. The technical term for this is that f […]
Saxophone ranges
I stumbled on a recording of a contrabass saxophone last night and wondered just how low it was [1], so I decided to write this post giving the ranges of each of the saxophones. The four most common saxophones are baritone, tenor, alto, and soprano. These correspond to the instruments in the image above. There […]
Fraction comparison trick
If you want to determine whether a/b > c/d, it is often enough to test whether a+d > b+c. Said another way, a/b is usually greater than c/d when a+d is greater than b+c. This sounds imprecise if not crazy. But it is easy to make precise and [1] shows that it is true. Examples […]
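An empirical sketch of how often the shortcut agrees with the exact comparison, using small random numerators and denominators (the ranges here are arbitrary assumptions, not from the post):

    import random

    random.seed(0)
    trials, agree = 10000, 0
    for _ in range(trials):
        a, b, c, d = [random.randint(1, 20) for _ in range(4)]
        if (a / b > c / d) == (a + d > b + c):
            agree += 1
    print(agree / trials)  # fraction of cases where the shortcut gives the same answer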
Normal probability fixed point
Let Z be a standard normal random variable. Then there is a unique x such that Pr(Z < x) = x. That is, Φ has a unique fixed point, where Φ is the CDF of a standard normal. It’s easy to find the fixed point: start anywhere and iterate Φ. Here’s a cobweb plot that […]
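A sketch of the iteration using SciPy's normal CDF; any starting point works because the derivative of Φ is bounded below 1, making Φ a contraction:

    from scipy.stats import norm

    x = 2.0                  # start anywhere
    for _ in range(100):
        x = norm.cdf(x)
    print(x, norm.cdf(x))    # the two values agree at the fixed point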
The time-traveling professor
Suppose you were a contemporary professor sent back in time. You need to continue your academic career, but you can’t let anyone know that you’re from the future. You can’t take anything material with you but you retain your memories. At first thought you might think you could become a superstar, like the musician in […]
The debauch of indices
This morning I was working on a linear algebra problem for a client that I first solved by doing calculations with indices. As I was writing things up I thought of the phrase “the debauch of indices” that mathematicians sometimes use to describe tensor calculations. The idea is that calculations with lots of indices are […]
Not-to-do list
There is an apocryphal [1] story that Warren Buffett once asked someone to list his top 25 goals in order. Buffett then told him that he should avoid items 6 through 25 at all costs. The idea is that worthy but low-priority goals distract from high-priority goals. Paul Graham wrote something similar about fake work. […]
Herd immunity countdown
A few weeks ago I wrote a post giving a back-of-the-envelope calculation regarding when the US would reach herd immunity to SARS-CoV-2. As I pointed out repeatedly, this is only a rough estimate because it makes numerous simplifying assumptions and is based on numbers that have a lot of uncertainty around them. See that post […]