Feed john-d-cook John D. Cook


Link https://www.johndcook.com/blog
Feed http://feeds.feedburner.com/TheEndeavour?format=xml
Updated 2025-09-11 20:31
Conditional independence notation
Ten years ago I wrote a blog post that concludes with this observation: The ideas of being relatively prime, independent, and perpendicular are all related, and so it makes sense to use a common symbol to denote each. This post returns to that theme, particularly looking at independence of random variables. History Graham, Knuth, and […]
Three composition theorems for differential privacy
This is a brief post, bringing together three composition theorems for differential privacy. The composition of an ε_1-differentially private algorithm and an ε_2-differentially private algorithm is an (ε_1 + ε_2)-differentially private algorithm. The composition of an (ε_1, δ_1)-differentially private algorithm and an (ε_2, δ_2)-differentially private algorithm is an (ε_1 + ε_2, δ_1 + δ_2)-differentially private algorithm. The composition of an (α, […]
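As a quick illustration (my sketch, not code from the post), the first two theorems are just addition, treating a pure ε-DP guarantee as (ε, 0)-DP:

    # Minimal bookkeeping for basic DP composition: epsilons add, deltas add.
    def compose(*guarantees):
        """Each guarantee is an (epsilon, delta) pair; pure eps-DP is (eps, 0)."""
        eps = sum(g[0] for g in guarantees)
        delta = sum(g[1] for g in guarantees)
        return (eps, delta)

    print(compose((0.5, 1e-6), (0.3, 1e-6)))   # (0.8, 2e-06)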
Minimizing worst case error
It’s very satisfying to know that you have a good solution even under the worst circumstances. Worst-case thinking doesn’t have to be concerned with probabilities, with what is likely to happen, only with what could happen. But whenever you speak of what could happen, you have to limit your universe of possibilities. Suppose you ask […]
Pecunia non olet
I’ve been rereading That Hideous Strength. I’m going through it slowly this time, paying attention to details I glossed over before. For example, early in the book we’re told that the head of a college has the nickname N.O. N.O., which stood for Non-Olet, was the nickname of Charles Place, the warden of Bracton. The […]
Simple clinical trial of four COVID-19 treatments
A story came out in Science yesterday saying the World Health Organization is launching a trial of what it believes are the four most promising treatments for COVID-19 (a.k.a. SARS-CoV-2, novel coronavirus, etc.) The four treatment arms will be remdesivir; chloroquine and hydroxychloroquine; ritonavir + lopinavir; and ritonavir + lopinavir + interferon beta, plus standard […]
Product of copulas
A few days ago I wrote a post about copulas and operations on them that have a group structure. Here’s another example of group structure for copulas. As in the previous post I’m just looking at two-dimensional copulas to keep things simple. Given two copulas C1 and C2, you can define a sort of product […]
How to Set Num Lock on permanently
When I use my Windows laptop, I’m always accidentally brushing against the Num Lock key. I suppose it’s because the keys are so flat; I never have this problem on a desktop. I thought there must be some way to set it so that it’s always on, so I searched for it. First I found […]
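For reference, the fix I'm aware of (which may or may not be the one the post settled on) is the InitialKeyboardIndicators registry value. A sketch of setting it from Python, Windows only:

    # Assumption: the standard InitialKeyboardIndicators registry value;
    # "2" means Num Lock on at startup. Run on Windows, then log out/in.
    import winreg

    key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Control Panel\Keyboard",
                         0, winreg.KEY_SET_VALUE)
    winreg.SetValueEx(key, "InitialKeyboardIndicators", 0, winreg.REG_SZ, "2")
    winreg.CloseKey(key)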
New Asymptotic function in Mathematica 12.1
One of the new features in Mathematica 12.1 is the function Asymptotic. Here’s a quick example of using it. Here’s an asymptotic series for the log of the gamma function I wrote about here. If we ask Mathematica Asymptotic[LogGamma[z], z -> Infinity] we get simply the first term: But we can set the option SeriesTermGoal […]
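For readers without Mathematica, here is a numerical check of the same asymptotic series using Python's mpmath (my sketch, not the post's code). The leading terms of the log-gamma series are (z − 1/2) ln z − z + (1/2) ln 2π + 1/(12z):

    # Compare mpmath's loggamma against the first terms of Stirling's series.
    from mpmath import mp, loggamma, log, pi

    mp.dps = 30
    z = mp.mpf(10)
    stirling = (z - 0.5)*log(z) - z + 0.5*log(2*pi) + 1/(12*z)
    print(loggamma(z))   # accurate value
    print(stirling)      # agrees to roughly six digits at z = 10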
Extended floating point precision in R and C
The GNU MPFR library is a C library for extended precision floating point calculations. The name stands for Multiple Precision Floating-point Reliable. The library has an R wrapper Rmpfr that is more convenient for interactive use. There are also wrappers for other languages. It takes a long time to install MPFR and its prerequisite GMP, […]
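There is also a Python route to the same library (my aside, not from the post): the gmpy2 package wraps GMP and MPFR, assuming you have it installed.

    # Extended precision via MPFR from Python. Precision is in bits.
    import gmpy2

    gmpy2.get_context().precision = 200      # about 60 decimal digits
    x = gmpy2.sqrt(gmpy2.mpfr(2))
    print(x)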
When is round-trip floating point radix conversion exact?
Suppose you store a floating point number in memory, print it out in human-readable base 10, and read it back in. When can the original number be recovered exactly? D. W. Matula answered this question more generally in 1968 [1]. Suppose we start with base β with p places of precision and convert to base […]
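A quick empirical check of the double-precision case (my sketch, not Matula's argument): printing a double with 17 significant decimal digits always round-trips.

    # Generate random bit patterns, reinterpret as doubles, and verify that
    # a 17-significant-digit decimal representation recovers them exactly.
    import random, struct

    random.seed(0)
    for _ in range(100_000):
        x = struct.unpack('<d', struct.pack('<Q', random.getrandbits(64)))[0]
        if x != x or x in (float('inf'), float('-inf')):
            continue                          # skip NaN and infinities
        assert float(f"{x:.16e}") == x        # .16e prints 17 sig. digits
    print("17-digit round-trip held for all samples")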
Group symmetry of copula operations
You don’t often see references to group theory in a statistics book. Not that there aren’t symmetries in statistics that could be described in terms of groups, but this isn’t often pointed out. Here’s an example from An Introduction to Copulas by Roger Nelsen. Show that under composition the set of operations of forming the […]
Product of Chebyshev polynomials
Chebyshev polynomials satisfy a lot of identities, much like trig functions do. This post will look briefly at just one such identity. Chebyshev polynomials T_n are defined for n = 0 and 1 by T_0(x) = 1 and T_1(x) = x, and for larger n by the recurrence relation T_{n+1}(x) = 2x T_n(x) − T_{n−1}(x). This implies […]
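The product identity in question is presumably 2 T_m T_n = T_{m+n} + T_{|m−n|}, which is easy to verify numerically with NumPy's Chebyshev module (my check, not the post's code):

    # chebmul multiplies polynomials expressed in the Chebyshev basis.
    import numpy as np
    from numpy.polynomial import chebyshev as C

    m, n = 3, 5
    Tm = [0]*m + [1]                  # coefficients of T_m in Chebyshev basis
    Tn = [0]*n + [1]
    lhs = 2 * C.chebmul(Tm, Tn)
    rhs = np.zeros(m + n + 1)
    rhs[m + n] += 1                   # T_{m+n}
    rhs[abs(m - n)] += 1              # T_{|m-n|}
    print(np.allclose(lhs, rhs))      # True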
The Brothers Markov
The Markov brother you’re more likely to have heard of was Andrey Markov. He was the Markov of Markov chains, the Gauss-Markov theorem, and Markov’s inequality. Andrey had a lesser known younger brother Vladimir who was also a mathematician. Together the two of them proved what is known as the Markov Brothers’ inequality to distinguish […]
Finding coffee in Pi
It is widely believed that π is a “normal number,” which would mean that you can find any integer sequence you want inside the digits of π, in any base, if you look long enough. So for Pi Day, I wanted to find c0ffee inside the hexadecimal representation of π. First I used TinyPI, a […]
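A brute-force version of the search (not the TinyPI approach from the post) is short with mpmath, though finding the six-hex-digit string c0ffee should take on the order of 16^6 ≈ 17 million digits; the four-digit string cafe keeps a demo fast.

    # Generate N hex digits of pi and search them. The first character of
    # the string is the integer part "3"; the rest are fractional digits.
    from mpmath import mp, floor, pi

    N = 100_000
    mp.dps = int(N * 1.21) + 20                  # enough decimal precision
    digits = format(int(floor(pi * 16**N)), 'x')   # "3243f6a8885a..."
    print(digits.find("cafe"))                   # first occurrence, or -1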
Chebyshev approximation
In the previous post I mentioned that the Remez algorithm computes the best polynomial approximation to a given function f as measured by the maximum norm. That is, for a given n, it finds the polynomial p of order n that minimizes the absolute error || f – p ||∞. The Mathematica function MiniMaxApproximation minimizes the relative […]
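If you don't need the exact minimax polynomial, Chebyshev interpolation gets you close in one step. A Python sketch (near-minimax, not Remez, and not the post's Mathematica code):

    # Interpolate at Chebyshev points and measure the maximum error.
    import numpy as np
    from numpy.polynomial import chebyshev as C

    f = np.exp
    p = C.Chebyshev(C.chebinterpolate(f, deg=5))   # degree 5 on [-1, 1]
    x = np.linspace(-1, 1, 10_001)
    print(np.max(np.abs(f(x) - p(x))))   # close to the true minimax error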
Remez algorithm and best polynomial approximation
The best polynomial approximation, in the sense of minimizing the maximum error, can be found by the Remez algorithm. I expected Mathematica to have a function implementing this algorithm, but apparently it does not have one. (But see update below.) It has a function named MiniMaxApproximation which sounds like Remez algorithm, and it’s close, but […]
MDS codes
A maximum distance separable code, or MDS code, is a way of encoding data so that the distance between code words is as large as possible for a given data capacity. This post will explain what that means and give examples of MDS codes. Notation A linear block code takes a sequence of k symbols […]
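As a small illustration (mine, not from the post): the binary [4, 3] even-weight code meets the Singleton bound d ≤ n − k + 1 with equality, so it is MDS. Binary MDS codes are all trivial like this one; nontrivial examples such as Reed–Solomon codes live over larger alphabets.

    # Brute-force the minimum distance of a small linear code over GF(2)
    # and compare it with the Singleton bound.
    from itertools import product

    G = [(1,0,0,1), (0,1,0,1), (0,0,1,1)]   # generator matrix: k=3, n=4

    def encode(msg):
        return tuple(sum(m*g for m, g in zip(msg, col)) % 2
                     for col in zip(*G))

    weights = [sum(encode(m)) for m in product((0,1), repeat=3) if any(m)]
    n, k, d = 4, 3, min(weights)
    print(d, n - k + 1)   # 2 2 -> bound met with equality: MDS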
Automatic data reweighting
Suppose you are designing an autonomous system that will gather data and adapt its behavior to that data. At first you face the so-called cold-start problem. You don’t have any data when you first turn the system on, and yet the system needs to do something before it has accumulated data. So you prime the […]
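The excerpt cuts off before the mechanism, so the following is only a hedged guess at the flavor of solution, not the post's method: prime the system with artificial pseudo-counts, which are automatically outweighed as real observations accumulate.

    # A Bernoulli rate estimator primed with artificial prior data.
    class BernoulliEstimator:
        def __init__(self, prior_successes=1.0, prior_trials=2.0):
            self.s = prior_successes    # artificial "cold start" data
            self.n = prior_trials

        def update(self, outcome):      # outcome is 0 or 1
            self.s += outcome
            self.n += 1

        def rate(self):
            return self.s / self.n      # the prior fades as n grows

    est = BernoulliEstimator()
    print(est.rate())                   # 0.5 before any data arrives
    for y in [1, 1, 0, 1, 1, 1]:
        est.update(y)
    print(est.rate())                   # increasingly driven by real data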
Maximum gap between binomial coefficients
I recently stumbled on a formula for the largest gap between consecutive items in a row of Pascal’s triangle, valid for n ≥ 2. For example, consider the 6th row of Pascal’s triangle, the coefficients of (x + y)^6: 1, 6, 15, 20, 15, 6, 1. The largest gap is 9, the gap between 6 […]
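The row-6 example is easy to confirm by brute force with the standard library:

    # Largest gap between consecutive entries in row n of Pascal's triangle.
    from math import comb

    def max_gap(n):
        row = [comb(n, k) for k in range(n + 1)]
        return max(abs(b - a) for a, b in zip(row, row[1:]))

    print(max_gap(6))   # 9, the gap between 6 and 15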
Formatting in comments
The comments to the posts here are generally very helpful. I appreciate your contributions to the site. I wanted to offer a tip for those who leave comments and are frustrated by the way the comments appear, especially those who write nicely formatted snippets of code only to see the formatting lost. There is a […]
Sum of squared digits
Take a positive integer x, square each of its digits, and sum. Now do the same to the result, over and over. What happens? To find out, let’s write a little Python code that sums the squares of the digits.

    def G(x):
        return sum(int(d)**2 for d in str(x))

This function turns a number into a […]
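Iterating G (my addition, using the function above) shows the well-known behavior: every starting point eventually reaches the fixed point 1 or falls into the cycle containing 4.

    # Follow the orbit of x under G until it hits 1 or the cycle through 4.
    def orbit(x):
        seen = []
        while x not in (1, 4):
            seen.append(x)
            x = G(x)
        return seen + [x]

    print(orbit(7))    # [7, 49, 97, 130, 10, 1]
    print(orbit(4))    # [4]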
Computing the area of a thin triangle
Heron’s formula computes the area of a triangle given the length of each side: A = √(s(s − a)(s − b)(s − c)), where s = (a + b + c)/2. If you have a very thin triangle, one where two of the sides approximately equal s and the third side is much shorter, a direct implementation of Heron’s formula may not be accurate. The cardinal rule of numerical programming is to […]
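A numerically careful alternative is Kahan's rearrangement, which I believe is the standard fix here (treat this as a sketch): sort the sides so a ≥ b ≥ c and keep the parentheses exactly as written.

    # Kahan's stable version of Heron's formula for thin triangles.
    from math import sqrt

    def area_kahan(a, b, c):
        a, b, c = sorted((a, b, c), reverse=True)
        return 0.25 * sqrt((a + (b + c)) * (c - (a - b))
                           * (c + (a - b)) * (a + (b - c)))

    print(area_kahan(1.0, 1.0, 1e-8))   # 5e-9, accurate for the needle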
A tale of two iterations
I recently stumbled on a paper [1] that looks at a cubic equation that comes out of a problem in orbital mechanics: σx³ = (1 + x)² Much of the paper is about the derivation of the equation, but here I’d like to focus on a small part of the paper where the author looks […]
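The excerpt doesn't say which iterations the paper compares, so here is only one plausible rearrangement of σx³ = (1 + x)² as a fixed-point iteration; the paper's schemes may differ.

    # Fixed-point iteration x -> ((1 + x)^2 / sigma)^(1/3), a hedged guess.
    sigma = 2.0
    x = 1.0                                  # initial guess
    for _ in range(30):
        x = ((1 + x)**2 / sigma) ** (1/3)
    print(x, sigma * x**3 - (1 + x)**2)      # root near 1.44, tiny residual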
Perfect codes
A couple days ago I wrote about Hamming codes and said that they are so-called perfect codes, i.e. codes for which Hamming’s upper bound on the number of code words with given separation is exact. Not only are Hamming codes perfect codes, they’re practically the only non-trivial perfect codes. Specifically, Tietäväinen and van Lint proved […]
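Perfection is easy to check numerically (my check): a t-error-correcting binary [n, k] code is perfect when the Hamming spheres tile the whole space, i.e. 2^k · Σ_{i≤t} C(n, i) = 2^n.

    # Sphere-packing equality for the two classic perfect binary codes.
    from math import comb

    def is_perfect(n, k, t):
        return 2**k * sum(comb(n, i) for i in range(t + 1)) == 2**n

    print(is_perfect(7, 4, 1))     # True: Hamming(7,4)
    print(is_perfect(23, 12, 3))   # True: the binary Golay code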
Computing parity of a binary word
The previous post mentioned adding a parity bit to a string of bits as a way of detecting errors. The parity of a binary word is 1 if the word contains an odd number of 1s and 0 if it contains an even number of 1s. Codes like the Hamming codes in the previous post […]
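A standard way to compute parity of a fixed-width word is xor-folding (a sketch, not necessarily the post's code): fold the top half onto the bottom half until one bit remains.

    # Parity of a 64-bit word by repeated xor-folding.
    def parity64(x):
        x ^= x >> 32
        x ^= x >> 16
        x ^= x >> 8
        x ^= x >> 4
        x ^= x >> 2
        x ^= x >> 1
        return x & 1

    print(parity64(0b1011), bin(0b1011).count('1') % 2)   # 1 1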
A gentle introduction to Hamming codes
The previous post looked at how to choose five or six letters so that their Morse code representations are as distinct as possible. This post will build on the previous one to introduce Hamming codes. The problem of finding Hamming codes is much simpler in some ways, but also more general. Morse code is complicated […]
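Here is a compact sketch of the standard Hamming(7,4) construction (my arrangement, not the post's notation): the parity-check matrix H has the binary numbers 1 through 7 as columns, so a nonzero syndrome directly names the position of a flipped bit.

    # Syndrome decoding for Hamming(7,4).
    import numpy as np

    # Row i of H holds bit i of the column numbers 1..7.
    H = np.array([[(j >> i) & 1 for j in range(1, 8)] for i in range(3)])

    def syndrome(word):
        return H @ word % 2

    codeword = np.array([1, 0, 1, 1, 0, 1, 0])
    assert not syndrome(codeword).any()       # a valid codeword
    received = codeword.copy()
    received[4] ^= 1                          # corrupt position 5
    s = syndrome(received)
    position = int(s @ [1, 2, 4])             # syndrome read as a number
    received[position - 1] ^= 1               # flip it back
    print(position, np.array_equal(received, codeword))   # 5 True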
ADFGVX cipher and Morse code separation
A century ago the German army used a field cipher that transmitted messages using only six letters: A, D, F, G, V, and X. These letters were chosen because their Morse code representations were distinct, thus reducing transmission error. The ADFGVX cipher was an extension of an earlier ADFGV cipher. The ADFGV cipher was based […]
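The separation claim is easy to poke at with a little code (my sketch): compute pairwise Levenshtein distances between the Morse codes of the six letters.

    # Edit distances between the Morse representations of A, D, F, G, V, X.
    from itertools import combinations

    morse = {'A': '.-', 'D': '-..', 'F': '..-.',
             'G': '--.', 'V': '...-', 'X': '-..-'}

    def lev(s, t):
        if not s: return len(t)
        if not t: return len(s)
        cost = s[-1] != t[-1]
        return min(lev(s[:-1], t) + 1, lev(s, t[:-1]) + 1,
                   lev(s[:-1], t[:-1]) + cost)

    for (p, q) in combinations(sorted(morse), 2):
        print(p, q, lev(morse[p], morse[q]))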
ChaCha RNG with fewer rounds
ChaCha is a CSPRNG, a cryptographically secure pseudorandom number generator. When used in cryptography, ChaCha typically carries out 20 rounds of its internal scrambling process. Google’s Adiantum encryption system uses ChaCha with 12 rounds. The runtime for ChaCha is proportional to the number of rounds, so you don’t want to do more rounds than necessary […]
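The reason runtime scales with rounds is that each round is just a few applications of the quarter-round function over the state. A sketch of the standard quarter round (not code from the post):

    # The ChaCha quarter round: add, xor, rotate, with 32-bit arithmetic.
    MASK = 0xffffffff

    def rotl(x, n):
        return ((x << n) | (x >> (32 - n))) & MASK

    def quarter_round(a, b, c, d):
        a = (a + b) & MASK; d = rotl(d ^ a, 16)
        c = (c + d) & MASK; b = rotl(b ^ c, 12)
        a = (a + b) & MASK; d = rotl(d ^ a, 8)
        c = (c + d) & MASK; b = rotl(b ^ c, 7)
        return a, b, c, d

    print([hex(v) for v in
           quarter_round(0x11111111, 0x01020304, 0x9b8d6f43, 0x01234567)])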
Popcount: counting 1’s in a bit stream
Sometimes you need to count the number of 1’s in a stream of bits. The most direct application would be summarizing yes/no data packed into bits. It’s also useful in writing efficient, low-level bit twiddling code. But there are less direct applications as well. For example, three weeks ago this came up in a post […]
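Two standard Python approaches (my sketch; Python 3.10+ also has int.bit_count): Kernighan's trick clears the lowest set bit each pass, so it loops once per 1 bit.

    # Count 1 bits by clearing the lowest set bit until nothing is left.
    def popcount(x):
        count = 0
        while x:
            x &= x - 1     # clears the lowest set bit
            count += 1
        return count

    print(popcount(0b1011001), bin(0b1011001).count('1'))   # 4 4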
A brief comment on hysteresis
You might hear hysteresis described as a phenomenon where the solution to a differential equation depends on its history. But that doesn’t make sense: the solution to a differential equation always depends on its history. The solution at any point in time depends (only) on its immediately preceding state. You can take the state at […]
Safe Harbor ain’t gonna cut it
There are two ways to deidentify data to satisfy HIPAA: Safe Harbor, § 164.514(b)(2), and Expert Determination, § 164.514(b)(1). And for reasons explained here, you may need to be concerned with HIPAA even if you’re not a “covered entity” under the statute. To comply with Safe Harbor, your data may not contain any of eighteen […]
Inverse congruence RNG
Linear congruence random number generators have the form x_{n+1} = a x_n + b mod p. Inverse congruence generators have the form x_{n+1} = a x_n^{-1} + b mod p, where x^{-1} means the modular inverse of x, i.e. the value y such that xy = 1 mod p. It is possible that x = […]
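A direct sketch of the generator described above: Python 3.8+ computes modular inverses with pow(x, -1, p). How to treat x = 0 is a convention; mapping it to b (equivalently defining 0^{-1} = 0) is one common choice and an assumption on my part.

    # Inversive congruential generator with a prime modulus.
    def icg(a, b, p, seed, n):
        x = seed
        for _ in range(n):
            x = b if x == 0 else (a * pow(x, -1, p) + b) % p
            yield x

    print(list(icg(a=5, b=3, p=17, seed=1, n=10)))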
A better adaptive Runge-Kutta method
This is the third post in a series on Runge-Kutta methods. The first post in the series introduces Runge-Kutta methods and Butcher tableau. The next post looked at Fehlberg’s adaptive Runge-Kutta method, first published in 1969. This post looks at a similar method from Dormand and Prince in 1980. Like Fehlberg’s method, the method of […]
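The Dormand–Prince 5(4) pair is what you get by default in SciPy: the "RK45" method of solve_ivp is this very method. A quick usage example:

    # Solve y' = -y with the Dormand-Prince pair and compare with exp(-5).
    import numpy as np
    from scipy.integrate import solve_ivp

    sol = solve_ivp(lambda t, y: -y, t_span=(0, 5), y0=[1.0],
                    method="RK45", rtol=1e-8, atol=1e-10)
    print(sol.y[0, -1], np.exp(-5))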
How to estimate ODE solver error
This post brings together several themes I’ve been writing about lately: caching function evaluations, error estimation, and Runge-Kutta methods. A few days ago I wrote about how Runge-Kutta methods can all be summarized by a set of numbers called the Butcher tableau. These methods solve y′ = f(t, y) by evaluating f at some partial step, then evaluating f […]
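A minimal embedded-pair illustration (mine, far simpler than anything production-grade): Euler (order 1) and Heun (order 2) share the stage k1, so their difference is a nearly free estimate of the local error.

    # One step of an embedded Euler/Heun pair with a local error estimate.
    def step_with_error(f, t, y, h):
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        euler = y + h * k1
        heun = y + h * (k1 + k2) / 2
        return heun, abs(heun - euler)   # advance with the better value

    y, err = step_with_error(lambda t, y: -y, 0.0, 1.0, 0.1)
    print(y, err)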
Trapezoid rule and Romberg integration
This post will look at two numerical integration methods, the trapezoid rule and Romberg’s algorithm, and memoization. This post is a continuation of ideas from the recent posts on Lobatto integration and memoization. Although the trapezoid rule is not typically very accurate, it can be in special instances, and Romberg combined it with extrapolation to […]
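A compact Romberg sketch (standard construction, my code): trapezoid estimates with doubled panel counts down the first column, Richardson extrapolation across each row.

    # Romberg integration built on the refined trapezoid rule.
    from math import exp

    def romberg(f, a, b, levels=5):
        R = [[(b - a) * (f(a) + f(b)) / 2]]
        for i in range(1, levels):
            h = (b - a) / 2**i
            new_pts = sum(f(a + (2*j + 1) * h) for j in range(2**(i - 1)))
            row = [R[-1][0] / 2 + h * new_pts]       # refined trapezoid
            for k in range(1, i + 1):                # extrapolate
                row.append(row[k-1] + (row[k-1] - R[-1][k-1]) / (4**k - 1))
            R.append(row)
        return R[-1][-1]

    print(romberg(exp, 0.0, 1.0), exp(1) - 1)   # agree to ~10 digits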
Python and the Tell-Tale Heart
I was browsing through SciPy documentation this evening and ran across a function in scipy.misc called electrocardiogram. What?! It’s an actual electrocardiogram, sampled at 360 Hz. Presumably it’s included as convenient example data. Here’s a plot of the first five seconds. I wrote a little code using it to turn the ECG into an audio […]
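Loading and plotting the data takes a few lines; note that in newer SciPy the function has moved from scipy.misc to scipy.datasets.

    # Plot the first five seconds of the bundled ECG (sampled at 360 Hz).
    import matplotlib.pyplot as plt
    from scipy.datasets import electrocardiogram   # scipy.misc in older SciPy

    ecg = electrocardiogram()
    fs = 360
    plt.plot([i / fs for i in range(5 * fs)], ecg[:5 * fs])
    plt.xlabel("seconds"); plt.ylabel("mV")
    plt.show()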
Why HIPAA matters even if you’re not a “covered entity”
The HIPAA privacy rule only applies to “covered entities.” This generally means insurance plans, healthcare clearinghouses, and medical providers. If your company is using health information but isn’t a covered entity per the HIPAA statute, there are a couple reasons you might still need to pay attention to HIPAA [1]. The first is that […]
Scaling and memoization
The previous post explained that Lobatto’s integration method is more efficient than Gaussian quadrature when the end points of the interval need to be included as integration points. It mentioned that this is an advantage when you need to integrate over a sequence of contiguous intervals, say [1, 2] then [2, 3], because the function […]
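A sketch of the reuse (my illustration; the trapezoid helper is just for show): with functools.lru_cache on f, the shared endpoint f(2) is computed only once across the two integrals.

    # Memoized integrand shared across contiguous intervals.
    from functools import lru_cache
    from math import exp

    @lru_cache(maxsize=None)
    def f(x):
        return exp(x)

    def trapezoid(f, a, b, n=100):
        h = (b - a) / n
        return h * (f(a)/2 + sum(f(a + i*h) for i in range(1, n)) + f(b)/2)

    trapezoid(f, 1.0, 2.0)
    trapezoid(f, 2.0, 3.0)    # f(2.0) comes from the cache
    print(f.cache_info())     # hits=1: only the endpoint is shared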
Lobatto integration
A basic idea in numerical integration is that if a method integrates polynomials exactly, it should do well on polynomial-like functions [1]. The higher the degree of polynomial it integrates exactly, the more accurate we expect it will be on functions that behave like polynomials. The best known example of this is Gaussian quadrature. However, […]
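For concreteness, here is the four-point Lobatto rule on [−1, 1] (standard nodes and weights): the endpoints get weight 1/6, the interior nodes ±1/√5 get 5/6, and the rule is exact for polynomials up to degree 5.

    # Four-point Lobatto quadrature on [-1, 1].
    from math import sqrt

    nodes = [-1.0, -1/sqrt(5), 1/sqrt(5), 1.0]
    weights = [1/6, 5/6, 5/6, 1/6]

    def lobatto4(f):
        return sum(w * f(x) for w, x in zip(weights, nodes))

    print(lobatto4(lambda x: x**4), 2/5)   # exact: 0.4 0.4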
Runge-Kutta methods and Butcher tableau
If you know one numerical method for solving ordinary differential equations, it’s probably Euler’s method. If you know two methods, the second is probably 4th order Runge-Kutta. It’s standard in classes on differential equations or numerical analysis to present Euler’s method as a conceptually simple but inefficient introduction, then to present Runge-Kutta as a complicated but […]
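For reference, the classic fourth-order Runge-Kutta step in its standard textbook form:

    # One RK4 step for y' = f(t, y).
    def rk4_step(f, t, y, h):
        k1 = f(t, y)
        k2 = f(t + h/2, y + h/2 * k1)
        k3 = f(t + h/2, y + h/2 * k2)
        k4 = f(t + h, y + h * k3)
        return y + h * (k1 + 2*k2 + 2*k3 + k4) / 6

    y, t, h = 1.0, 0.0, 0.1
    for _ in range(10):
        y = rk4_step(lambda t, y: -y, t, y, h)
        t += h
    print(y)   # ~exp(-1) = 0.36788...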
Plotting complex functions
I wrote a blog post of sorts, spread over several tweets, about plotting functions of a complex variable. Plot of cosine over a square centered at the origin of the complex plane. Color = phase. Height = magnitude. pic.twitter.com/rOT7tAkfM9 — Analysis Fact (@AnalysisFact) February 11, 2020 Here are a couple of the images from the […]
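A minimal domain-coloring sketch along the same lines (my choice of mapping, not the code behind the tweet): hue from phase, brightness from magnitude.

    # Domain coloring of cos(z) on a square centered at the origin.
    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.colors import hsv_to_rgb

    x = np.linspace(-4, 4, 800)
    z = x[None, :] + 1j * x[:, None]
    w = np.cos(z)
    hue = (np.angle(w) / (2 * np.pi)) % 1.0
    val = 1 - 1 / (1 + np.abs(w))            # squash magnitude into [0, 1)
    img = hsv_to_rgb(np.dstack([hue, np.ones_like(hue), val]))
    plt.imshow(img, extent=[-4, 4, -4, 4], origin="lower")
    plt.show()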
The orbit that put men on the moon
Richard Arenstorf (1929–2014) discovered a stable periodic orbit between the Earth and the Moon which was used as the basis for the Apollo missions. His orbit is a special case of the three body problem where two bodies are orbiting in a plane, i.e. the Earth and the Moon, along with a third body of […]
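The planar restricted three-body equations behind Arenstorf orbits are a standard test problem in the numerical ODE literature; the constants below are quoted from memory of that literature, so verify them before relying on them.

    # Integrate the Arenstorf test problem for one period.
    from scipy.integrate import solve_ivp

    mu = 0.012277471            # Moon/(Earth+Moon) mass ratio (assumed value)
    nu = 1 - mu

    def rhs(t, u):
        x, y, vx, vy = u
        d1 = ((x + mu)**2 + y**2) ** 1.5
        d2 = ((x - nu)**2 + y**2) ** 1.5
        ax = x + 2*vy - nu*(x + mu)/d1 - mu*(x - nu)/d2
        ay = y - 2*vx - nu*y/d1 - mu*y/d2
        return [vx, vy, ax, ay]

    u0 = [0.994, 0.0, 0.0, -2.001585106379082]   # standard initial data
    T = 17.0652165601579                         # one period (assumed)
    sol = solve_ivp(rhs, (0, T), u0, rtol=1e-10, atol=1e-10)
    print(sol.y[:2, -1])    # should land near the starting point (0.994, 0)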
Behold! The Brusselator!
Having watched a few episodes of Phineas and Ferb, when I see “Brusselator” I imagine Dr. Doofenshmirtz saying “Behold! The Brusselator!” But the Brusselator is considerably older than Phineas and Ferb. It goes back to Belgian scientists René Lefever and Grégoire Nicolis in 1971 [1] who combined “Brussels” and “oscillator” to name the system after […]
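The system itself, in its standard form, is x′ = A + x²y − (B + 1)x, y′ = Bx − x²y, which oscillates when B > 1 + A². A quick integration (my sketch):

    # Integrate the Brusselator; with A=1, B=3 it settles onto a limit cycle.
    from scipy.integrate import solve_ivp

    A, B = 1.0, 3.0

    def brusselator(t, u):
        x, y = u
        return [A + x*x*y - (B + 1)*x, B*x - x*x*y]

    sol = solve_ivp(brusselator, (0, 50), [1.0, 1.0], max_step=0.05)
    print(sol.y[0, -1], sol.y[1, -1])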
TestU01 small crush test suite
In recent posts I’ve written about using RNG test suites on the output of the μRNG entropy extractor. This is probably the last post in the series. I’ve looked at NIST STS, PractRand, and DIEHARDER before. In this post I’ll be looking at TestU01. TestU01 includes three batteries of tests: Small Crush, Crush, and Big […]
DIEHARDER test suite
The first well-known suite of tests for random number generators was George Marsaglia’s DIEHARD battery. The name was a pun on DieHard car batteries. Robert G. Brown took over the DIEHARD test suite and called it DIEHARDER, presumably a pun on the Bruce Willis movie. I’ve written lately about an entropy extractor that creates a […]
Using PractRand to test an RNG
Yesterday I wrote about my experience using NIST STS to test an entropy extractor, a filtering procedure that produces unbiased bits from biased sources. This post will look at testing the same entropy extractor using the Practically Random (PractRand) test suite. The results were much worse this time, which speaks to the limitations of both […]
Leap seconds
We all learn as children that there are 60 seconds in a minute, 60 minutes in an hour, 24 hours in a day, and 365 days in a year. Then things get more complicated. There are more like 365.25 days in a year, hence leap years. Except that’s not quite right either. It’s more like 365.242 […]
Testing entropy extractor with NIST STS
Around this time last year I wrote about the entropy extractor used in μRNG. It takes three biased random bit streams and returns an unbiased bit stream, provided each input stream has at least 1/3 of a bit of min-entropy. I’ve had in the back of my mind that I should go back and run […]
Odd numbers in Pascal’s triangle
Here’s an interesting theorem I ran across recently. The number of odd integers in the nth row of Pascal’s triangle equals 2^b where b is the number of 1’s in the binary representation of n. Here are the first few rows of Pascal’s triangle:

    1
    1 1
    1 2 1
    1 3 3 1
    1 […]
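The theorem is easy to verify for small n (my check):

    # Count odd entries in row n and compare with 2^(number of 1 bits of n).
    from math import comb

    for n in range(16):
        odd = sum(comb(n, k) % 2 for k in range(n + 1))
        assert odd == 2 ** bin(n).count('1')
    print("verified for n < 16")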
Stiff differential equations
There is no precise definition of what it means for a differential equation to be stiff, but essentially it means that implicit methods will work much better than explicit methods. The first use of the term [1] defined stiff equations as equations where certain implicit methods, in particular BDF, perform better, usually tremendously better, than […]
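A quick way to see the implicit/explicit gap (my example, not from the post): on a problem with a fast transient, an explicit solver burns far more function evaluations than an implicit one.

    # Compare evaluation counts of an explicit and an implicit solver
    # on y' = -1000 (y - sin t) + cos t, whose solution decays to sin t.
    from math import sin, cos
    from scipy.integrate import solve_ivp

    def f(t, y):
        return [-1000 * (y[0] - sin(t)) + cos(t)]

    for method in ("RK45", "BDF"):
        sol = solve_ivp(f, (0, 10), [0.0], method=method)
        print(method, sol.nfev)   # RK45 needs vastly more evaluations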