KL divergence from normal to normal
The previous post looked at the best approximation to a normal density by a normal density with a different mean. Dan Piponi suggested in the comments that it would be good to look at the Kullback-Leibler (KL) divergence.
The previous post looked at the difference between the two densities from an analytic perspective, solving the problem that an analyst would find natural. This post takes an information theoretic perspective. Just as p-norms are natural in analysis, KL divergence is natural in information theory.
The Kullback-Leibler divergence between two random variables X and Y is defined as

\[ \mathrm{KL}(X \,||\, Y) = \int f_X(x) \log \frac{f_X(x)}{f_Y(x)} \, dx \]

where \(f_X\) and \(f_Y\) are the density functions of X and Y.
There are many ways to interpret KL(X || Y), such as the average surprise in seeing Y when you expected X.
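To make the definition concrete, here is a minimal Python sketch that approximates KL(X || Y) directly from the integral above by numerical quadrature with SciPy. The function name kl_numerical and the example parameters are my own choices for illustration.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def kl_numerical(mu_x, sigma_x, mu_y, sigma_y):
    # Approximate KL(X || Y) = E_X[log f_X(X) - log f_Y(X)] by quadrature.
    X, Y = norm(mu_x, sigma_x), norm(mu_y, sigma_y)
    integrand = lambda x: np.exp(X.logpdf(x)) * (X.logpdf(x) - Y.logpdf(x))
    value, _ = quad(integrand, -np.inf, np.inf)
    return value

# Example: X standard normal, Y with mean 1 and standard deviation 2.
print(kl_numerical(0, 1, 1, 2))
```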
Unlike the p-norm distance, the KL divergence between two normal random variables can be computed in closed form.
Let X be a normal random variable with mean \(\mu_X\) and variance \(\sigma^2_X\) and Y a normal random variable with mean \(\mu_Y\) and variance \(\sigma^2_Y\). Then

\[ \mathrm{KL}(X \,||\, Y) = \log\frac{\sigma_Y}{\sigma_X} + \frac{\sigma_X^2 + (\mu_X - \mu_Y)^2}{2\sigma_Y^2} - \frac{1}{2}. \]
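As a sanity check, here is the closed-form expression coded up (again a sketch; kl_normal is just my name for it). On the same example as the quadrature above it agrees to several decimal places.

```python
import numpy as np

def kl_normal(mu_x, sigma_x, mu_y, sigma_y):
    # Closed-form KL(X || Y) for X ~ N(mu_x, sigma_x^2), Y ~ N(mu_y, sigma_y^2).
    return (np.log(sigma_y / sigma_x)
            + (sigma_x**2 + (mu_x - mu_y)**2) / (2 * sigma_y**2)
            - 0.5)

# Same example as above: X standard normal, Y ~ N(1, 4).
print(kl_normal(0, 1, 1, 2))   # ~0.4431, matching the numerical integral
```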
If \(\mu_X = 0\) and \(\sigma_X = 1\), then for fixed \(\mu_Y\) the value of \(\sigma^2_Y\) that minimizes KL(X || Y) is

\[ \sigma^2_Y = 1 + \mu_Y^2. \]
KL divergence is not symmetric, hence we say divergence rather than distance. More on that here. If instead we solve the opposite problem, minimizing KL(Y || X), the optimal value of \(\sigma^2_Y\) is simply 1.
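Both optimization claims are easy to double-check numerically. The sketch below uses scipy.optimize.minimize_scalar with an arbitrary fixed mean of 0.7 for Y; kl_normal is the closed-form function from above, repeated so the snippet stands alone.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def kl_normal(mu_x, sigma_x, mu_y, sigma_y):
    # Closed-form KL(X || Y) for X ~ N(mu_x, sigma_x^2), Y ~ N(mu_y, sigma_y^2).
    return (np.log(sigma_y / sigma_x)
            + (sigma_x**2 + (mu_x - mu_y)**2) / (2 * sigma_y**2)
            - 0.5)

mu_y = 0.7  # an arbitrary fixed mean for Y; X is standard normal

# Minimize KL(X || Y) over sigma_Y: the optimal variance should be 1 + mu_Y^2.
res = minimize_scalar(lambda s: kl_normal(0, 1, mu_y, s), bounds=(0.01, 10), method="bounded")
print(res.x**2, 1 + mu_y**2)   # both ~1.49

# Minimize KL(Y || X) over sigma_Y: the optimal variance should be 1.
res = minimize_scalar(lambda s: kl_normal(mu_y, s, 0, 1), bounds=(0.01, 10), method="bounded")
print(res.x**2)                # ~1.0
```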