
Entropy of a Student t distribution

by John D. Cook

I was looking up the entropy of a Student t distribution and something didn't seem right, so I wanted to look at familiar special cases.

The Student t distribution with ν degrees of freedom has two important special cases: ν = 1 and ν = ∞. When ν = 1 we get the Cauchy distribution, and in the limit as ν → ∞ we get the normal distribution. The expression for entropy is simple in these two special cases, but it's not at all obvious that the general expression at ν = 1 and ν → ∞ gives the entropy for the Cauchy and normal distributions.

The entropy of a Cauchy random variable (with scale 1) is

$$\log 4\pi$$

and the entropy of a normal random variable (with scale 1) is

$$\tfrac{1}{2}\log 2\pi e$$

The entropy S(ν) of a Student t random variable with ν degrees of freedom is

$$S(\nu) = \frac{\nu+1}{2}\left(\psi\left(\frac{\nu+1}{2}\right) - \psi\left(\frac{\nu}{2}\right)\right) + \log\left(\sqrt{\nu}\, B\left(\frac{\nu}{2}, \frac{1}{2}\right)\right)$$

Here ψ is the digamma function, the derivative of the log of the gamma function, and B is the beta function. These two functions are implemented as psi and beta in Python's scipy.special module, and as PolyGamma and Beta in Mathematica. The equation for the entropy can be found on Wikipedia.

This post will show numerically and analytically that the general expression does have the right special cases. As a bonus, we'll prove an asymptotic formula for the entropy along the way.

Numerical evaluation

Numerical evaluation shows that the entropy expression with ν = 1 does give the entropy of a Cauchy random variable.

    from numpy import pi, log, sqrt
    from scipy.special import psi, beta

    # entropy of a Student t distribution with nu degrees of freedom
    def t_entropy(nu):
        S = 0.5*(nu + 1)*(psi(0.5*(nu + 1)) - psi(0.5*nu))
        S += log(sqrt(nu)*beta(0.5*nu, 0.5))
        return S

    cauchy_entropy = log(4*pi)
    print(t_entropy(1) - cauchy_entropy)

This prints 0.
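
As an independent sanity check, scipy.stats distributions have an entropy method, so we can compare the closed-form value against scipy's own computation. (Here t_dist and cauchy are scipy.stats distribution objects, not the functions defined above.)

    from scipy.stats import t as t_dist, cauchy

    # scipy's built-in differential entropy for a t distribution with
    # 1 degree of freedom should match the Cauchy entropy
    print(t_dist(1).entropy() - cauchy.entropy())

This difference should also be essentially zero.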

Experiments with large values of ν show that the entropy approaches the entropy of a normal distribution as ν grows. In fact, it seems the difference between the entropy of a t distribution with ν degrees of freedom and the entropy of a standard normal distribution is asymptotic to 1/ν.

    normal_entropy = 0.5*(log(2*pi) + 1)

    # compare t entropy to normal entropy for nu = 1, 10, 100, 1000, 10000
    for i in range(5):
        print(t_entropy(10**i) - normal_entropy)

This prints

    1.112085713764618
    0.10232395977100861
    0.010024832113557203
    0.0010002498337291499
    0.00010000250146458001

Analytical evaluation

There are tidy expressions for the ψ function at a few special arguments, including 1 and 1/2. And the beta function has a special value at (1/2, 1/2).

We have ψ(1) = -γ and ψ(1/2) = -2 log 2 - γ, where γ is the Euler-Mascheroni constant. So the first half of the expression for the entropy of a t distribution with 1 degree of freedom reduces to 2 log 2. Also, B(1/2, 1/2) = π. Adding these together we get 2 log 2 + log π, which is the same as log 4π.
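
These special values are easy to confirm with the same scipy.special functions used above; numpy's euler_gamma supplies γ, and each line below should print approximately zero.

    from numpy import log, pi, euler_gamma
    from scipy.special import psi, beta

    print(psi(1) + euler_gamma)               # psi(1) = -gamma
    print(psi(0.5) + 2*log(2) + euler_gamma)  # psi(1/2) = -2 log 2 - gamma
    print(beta(0.5, 0.5) - pi)                # B(1/2, 1/2) = pi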

For large z, we have the asymptotic series

$$\psi(z) \sim \log z - \frac{1}{2z} - \frac{1}{12z^2} + \cdots$$

See, for example, A&S 6.3.18. We'll also need the well-known fact that log(1 + z) ≈ z for small z. Applying these to the digamma part of the entropy expression, we find that

$$\frac{\nu+1}{2}\left(\psi\left(\frac{\nu+1}{2}\right) - \psi\left(\frac{\nu}{2}\right)\right) = \frac{1}{2} + \frac{3}{4\nu} + O\left(\frac{1}{\nu^2}\right)$$
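
Here's a quick numerical check of this expansion; the errors shrink roughly like 1/ν².

    from scipy.special import psi

    # error in the two-term expansion of the digamma part of the entropy
    for nu in [10, 100, 1000]:
        exact = 0.5*(nu + 1)*(psi(0.5*(nu + 1)) - psi(0.5*nu))
        print(nu, exact - (0.5 + 0.75/nu))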

Next we use the definition of the beta function as a ratio of gamma functions, the fact that Γ(1/2) = √π, and the asymptotic formula Γ(z + a)/Γ(z + b) ∼ z^(a-b) to find that

$$\log\left(\sqrt{\nu}\, B\left(\frac{\nu}{2}, \frac{1}{2}\right)\right) = \frac{1}{2}\log 2\pi + \frac{1}{4\nu} + O\left(\frac{1}{\nu^2}\right)$$
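
A similar check for the beta term, again with errors of order 1/ν²:

    from numpy import sqrt, log, pi
    from scipy.special import beta

    # compare log(sqrt(nu) * B(nu/2, 1/2)) with its two-term asymptotic form
    for nu in [10, 100, 1000]:
        exact = log(sqrt(nu)*beta(0.5*nu, 0.5))
        print(nu, exact - (0.5*log(2*pi) + 0.25/nu))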

Adding the two parts together shows that the entropy of a Student t random variable with ν degrees of freedom is asymptotically

$$S(\nu) = \frac{1}{2}\left(\log 2\pi + 1\right) + \frac{1}{\nu} + O\left(\frac{1}{\nu^2}\right)$$

for large ν. This shows that we do indeed get the entropy of a normal random variable in the limit, and that the difference between the Student t and normal entropies is asymptotically 1/ν, proving the conjecture inspired by the numerical experiment above.
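
As a final check, multiplying the entropy difference by ν should give values approaching 1, confirming the 1/ν rate. This reuses t_entropy and normal_entropy from the code above.

    # nu * (t entropy - normal entropy) -> 1 as nu -> infinity
    for i in range(1, 6):
        nu = 10**i
        print(nu*(t_entropy(nu) - normal_entropy))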
