Fractional parts, invariant measures, and simulation
A function f: X → X is measure-preserving if each iteration of f sends the same amount of stuff into a given set. To be more precise, given a measure μ and any μ-measurable set E with μ(E) > 0, we have
μ(E) = μ(f⁻¹(E)).
You can read the right side of the equation above as "the measure of the set of points that f maps into E." You can apply this condition repeatedly to see that the measure of the set of points mapped into E after n iterations is still just the measure of E.
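Spelling that out for two iterations: applying the definition to the set f⁻¹(E) gives μ(f⁻²(E)) = μ(f⁻¹(f⁻¹(E))) = μ(f⁻¹(E)) = μ(E), and induction handles any n.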
If X is a probability space, i.e. μ(X) = 1, then you could interpret the definition of measure-preserving to mean that the probability that a point ends up in E after n iterations is independent of n. We'll illustrate this below with a simulation.
Let X be the half-open unit interval (0, 1] and let f be the Gauss map, i.e.
f(x) = 1/x - [1/x]
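For example, if x = 0.4 then 1/x = 2.5 and [1/x] = 2, so f(0.4) = 0.5.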
where [z] is the integer part of z. The function f is measure-preserving, though not for the usual Lebesgue measure. Instead it preserves the following measure:

μ(E) = (1/log 2) ∫_E dx/(1 + x).

In particular, for an interval [a, b] we have μ([a, b]) = (log(1 + b) - log(1 + a))/log 2.
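As a quick sanity check, independent of the simulation, we can compute μ(f⁻¹(E)) directly for an interval E = [a, b]: the preimage of [a, b] under the Gauss map is the union of the intervals [1/(n + b), 1/(n + a)] for n = 1, 2, 3, …, so we can sum their measures. Here's a short sketch of that check; the interval endpoints and the truncation point are arbitrary choices, and the two numbers should agree up to the truncation error.

from math import log

def mu(a, b):
    # Gauss measure of the interval [a, b]
    return (log(1.0 + b) - log(1.0 + a)) / log(2.0)

def mu_preimage(a, b, terms=100000):
    # The preimage of [a, b] under the Gauss map is the union of
    # [1/(n+b), 1/(n+a)] for n = 1, 2, 3, ...
    # Sum the measures of the first `terms` pieces; the tail is negligible.
    return sum(mu(1.0/(n + b), 1.0/(n + a)) for n in range(1, terms + 1))

a, b = 0.1, 0.25   # same interval used in the simulation below
print("mu(E):", mu(a, b))
print("mu(preimage of E):", mu_preimage(a, b))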
Let's take as our set E an interval [a, b] and test via simulation whether the probability of landing in E after n iterations really is just the measure of E, independent of n.
We can't just generate points uniformly in the interval (0, 1]. We have to generate the points so that the probability of a point falling in a set E is μ(E). That means the PDF of the distribution must be p(x) = 1 / (log(2) (1 + x)). We use the inverse-CDF method to generate points with this density.
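To spell out the inverse-CDF step: integrating p gives the CDF F(x) = log(1 + x)/log 2, and solving u = F(x) for x gives x = 2^u - 1. So mapping uniform samples u to 2^u - 1 produces points with the desired density; that's what the sample function in the code below does.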
from math import log, floor
from random import random

def gauss_map(x):
    y = 1.0/x
    return y - floor(y)

# iterate gauss map n times
def iterated_gauss_map(x, n):
    while n > 0:
        x = gauss_map(x)
        n = n - 1
    return x

# measure mu( [a,b] )
def mu(a, b):
    return (log(1.0+b) - log(1.0+a))/log(2.0)

# Generate samples with Prob(x in E) = mu( E )
def sample():
    u = random()
    return 2.0**u - 1.0

def simulate(num_points, num_iterations, a, b):
    count = 0
    for _ in range(num_points):
        x = sample()
        y = iterated_gauss_map(x, num_iterations)
        if a < y < b:
            count += 1
    return count / num_points

# Change these parameters however you like
a, b = 0.1, 0.25
N = 1000000

exact = mu(a, b)
print("Exact probability:", exact)
print("Simulated probability after n iterations")
for n in range(1, 10):
    simulated = simulate(N, n, a, b)
    print("n =", n, ":", simulated)
Here's the output I got:
Exact probability: 0.18442457113742736
Simulated probability after n iterations
n = 1 : 0.184329
n = 2 : 0.183969
n = 3 : 0.184233
n = 4 : 0.184322
n = 5 : 0.184439
n = 6 : 0.184059
n = 7 : 0.184602
n = 8 : 0.183877
n = 9 : 0.184834
With 1,000,000 samples, we expect the results to be the same to about 3 decimal places, which is what we see above.
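To back up that claim: each simulated value is the mean of N Bernoulli(p) indicators, so its standard error is roughly sqrt(p(1 - p)/N). A quick calculation, using the exact probability computed above:

from math import sqrt

p = 0.18442457113742736  # exact probability from above
N = 1000000              # number of samples per estimate
print(sqrt(p*(1.0 - p)/N))  # about 0.0004, i.e. agreement to roughly 3 decimal places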
Related post: Irrational rotations are ergodic.
(A transformation f is ergodic if it is measure-preserving and the only sets E with f⁻¹(E) = E are those with measure 0 or full measure. Rational rotations are measure-preserving but not ergodic. The Gauss map above is ergodic.)