Random minimum spanning trees
I just ran across a post by John Baez pointing to an article by Alan Frieze on random minimum spanning trees.
Here's the problem.
- Create a complete graph with n nodes, i.e. connect every node to every other node.
- Assign each edge a uniform random weight between 0 and 1.
- Find the minimum spanning tree.
- Add up the weights of the edges in the minimum spanning tree.
The surprise is that as n goes to infinity, the expected value of the sum above converges to the Riemann zeta function evaluated at 3, i.e.
ζ(3) = 1/1^3 + 1/2^3 + 1/3^3 + …
Incidentally, there are closed-form expressions for the Riemann zeta function at positive even integers. For example, ζ(2) = π^2 / 6. But no closed-form expression has been found for any odd integer greater than 1.
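As a quick numerical check, here's a small sketch of mine using SciPy (assuming scipy is installed):

```python
from math import pi
from scipy.special import zeta

print(zeta(2), pi**2 / 6)  # both 1.6449340668482264
print(zeta(3))             # 1.2020569031595942, no known closed form
```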
Simulation

Here's a little Python code to play with this.
```python
import networkx as nx
from random import random

N = 1000
G = nx.Graph()
for i in range(N):
    for j in range(i + 1, N):
        G.add_edge(i, j, weight=random())
T = nx.minimum_spanning_tree(G)
edges = T.edges(data=True)
print(sum(e[2]["weight"] for e in edges))
```
When I ran this, I got 1.2307, close to ζ(3) = 1.20205….
I ran this again, putting the code above inside a loop and averaging the results of 100 simulations, and got 1.19701. That is, the distance between my simulation result and ζ(3) went from about 0.03 to about 0.005.
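Here's one way to organize that experiment. The mst_weight function is my own packaging of the code above, not from the original post; the last line also estimates the standard error of the mean.

```python
import networkx as nx
from random import random
from statistics import mean, stdev

def mst_weight(n):
    """Total MST weight of a complete graph on n nodes
    with uniform(0, 1) edge weights."""
    G = nx.Graph()
    for i in range(n):
        for j in range(i + 1, n):
            G.add_edge(i, j, weight=random())
    T = nx.minimum_spanning_tree(G)
    return sum(e[2]["weight"] for e in T.edges(data=True))

runs = [mst_weight(1000) for _ in range(100)]
print(mean(runs))                      # should land near zeta(3) = 1.20205...
print(stdev(runs) / len(runs) ** 0.5)  # standard error of the mean
```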
There are two reasons I wouldn't get exactly ζ(3). First, I'm only running a finite number of simulations (100) and so I'm not computing the expected value exactly, but only approximating it. (Probably. As in PAC: probably approximately correct.) Second, I'm using a finite graph, of size 1000, not taking a limit as graph size goes to infinity.
My limited results above suggest that the first reason accounts for most of the difference between simulation and theory. Averaging 100 replications cut the error down by roughly a factor of six, which is the order of magnitude you'd expect from the central limit theorem: averaging n independent runs shrinks the error by a factor of √n, or 10 here. This suggests that for graphs as small as 1000 nodes, the expected value is close to the asymptotic value.
You could experiment with this, increasing the graph size and increasing the number of replications. But be patient. It takes a while for each replication to run.
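A minimal sketch of that experiment, reusing the mst_weight function from the sketch above (the sizes here are arbitrary choices of mine):

```python
from statistics import mean

# Assumes mst_weight from the earlier sketch is already defined.
for n in [100, 300, 1000]:
    runs = [mst_weight(n) for _ in range(100)]
    print(n, mean(runs))
```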
Generalization

The paper by Frieze considers more than the uniform distribution. You can use any non-negative distribution with finite variance whose cumulative distribution function F is differentiable at zero. The more general result replaces ζ(3) with ζ(3) / F'(0). We could, for example, replace the uniform distribution on weights with an exponential distribution. In this case the distribution function is 1 - exp(-x), and its derivative at the origin is 1, so our simulation should still produce approximately ζ(3). And indeed it does. When I took the average of 100 runs with exponential weights I got a value of 1.2065.
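Here's a sketch of the exponential-weight version, again using a function name of my own; random.expovariate(1.0) draws exponential samples with rate 1.

```python
import networkx as nx
from random import expovariate

def mst_weight_exp(n):
    """Total MST weight with exponential(1) edge weights."""
    G = nx.Graph()
    for i in range(n):
        for j in range(i + 1, n):
            G.add_edge(i, j, weight=expovariate(1.0))
    T = nx.minimum_spanning_tree(G)
    return sum(e[2]["weight"] for e in T.edges(data=True))

# F'(0) = 1 for the exponential(1) distribution, so the average
# should again be near zeta(3).
runs = [mst_weight_exp(1000) for _ in range(100)]
print(sum(runs) / len(runs))
```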
There's a little subtlety around using the derivative of the distribution at 0 rather than the density at 0. The derivative of the distribution (CDF) is the density (PDF), so why not just say density? One reason would be to allow the most general probability distributions, but a more immediate reason is that we're up against a discontinuity at the origin. We're looking at non-negative distributions, so the density has to be zero to the left of the origin.
When we say the derivative of the distribution at 0, we really mean the derivative at zero of a smooth extension of the distribution. For example, the exponential distribution has density 0 for negative x and density exp(-x) for non-negative x. Strictly speaking, the CDF of this distribution is 1 - exp(-x) for non-negative x and 0 for negative x. The left and right derivatives at the origin are different, so the derivative there doesn't exist. By saying the distribution function is simply 1 - exp(-x), we've used a smooth extension from the non-negative reals to all reals.
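Spelling out the one-sided derivatives in LaTeX notation:

$$
F(x) = \begin{cases} 1 - e^{-x}, & x \ge 0, \\ 0, & x < 0, \end{cases}
\qquad
F'_-(0) = 0, \qquad F'_+(0) = \lim_{h \to 0^+} \frac{1 - e^{-h}}{h} = 1,
$$

while the smooth extension $\tilde F(x) = 1 - e^{-x}$ for all $x$ has $\tilde F'(0) = 1$.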