
KL divergence from normal to normal

by John from John D. Cook on (#6G6JV)

The previous post looked at the best approximation to a normal density by a normal density with a different mean. Dan Piponi suggested in the comments that it would be good to look at the Kullback-Leibler (KL) divergence.

The previous post looked at the difference between two densities from an analytic perspective, solving the problem that an analyst would find natural. This post takes an information theoretic perspective. Just as p-norms are natural in analysis, KL divergence is natural in information theory.

The Kullback-Leibler divergence between two random variables X and Y is defined as

KL(X \,\|\, Y) = \int_{-\infty}^{\infty} f_X(x) \log \frac{f_X(x)}{f_Y(x)} \, dx

There are many ways to interpret KL(X || Y), such as the average surprise in seeing Y when you expected X.
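
To make the definition concrete, here is a minimal numerical sketch (not from the original post) that evaluates the integral with SciPy's quadrature; the helper name kl_numeric and the example parameters are illustrative choices.

import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def kl_numeric(mu_x, sigma_x, mu_y, sigma_y):
    # KL(X || Y) by integrating f_X(x) log(f_X(x)/f_Y(x)) over a wide
    # finite interval; the tails beyond it are numerically negligible.
    f_x = norm(mu_x, sigma_x).pdf
    f_y = norm(mu_y, sigma_y).pdf
    integrand = lambda x: f_x(x) * np.log(f_x(x) / f_y(x))
    lo = min(mu_x, mu_y) - 10 * max(sigma_x, sigma_y)
    hi = max(mu_x, mu_y) + 10 * max(sigma_x, sigma_y)
    return quad(integrand, lo, hi)[0]

print(kl_numeric(0, 1, 1, 2))   # X = N(0, 1), Y = N(1, 4): about 0.4431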

Unlike the p-norm distance, the KL divergence between two normal random variables can be computed in closed form.

Let X be a normal random variable with mean μ_X and variance σ_X² and Y a normal random variable with mean μ_Y and variance σ_Y². Then

KL(X \,\|\, Y) = \log \frac{\sigma_Y}{\sigma_X} + \frac{\sigma_X^2 + (\mu_X - \mu_Y)^2}{2\sigma_Y^2} - \frac{1}{2}
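
As a quick sketch (again not part of the original post), the closed form can be coded directly; kl_closed is an illustrative name, and the example parameters match the quadrature check above.

import numpy as np

def kl_closed(mu_x, sigma_x, mu_y, sigma_y):
    # The closed-form expression displayed above.
    return (np.log(sigma_y / sigma_x)
            + (sigma_x**2 + (mu_x - mu_y)**2) / (2 * sigma_y**2)
            - 0.5)

print(kl_closed(0, 1, 1, 2))   # about 0.4431, agreeing with the numerical integral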

If μ_X = 0 and σ_X = 1, then for a fixed μ_Y the value of σ_Y² that minimizes KL(X || Y) is

\sigma_Y^2 = \mu_Y^2 + 1
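
Here is a sketch of a numerical sanity check, assuming SciPy's minimize_scalar; the value μ_Y = 1.5 is an arbitrary illustrative choice.

import numpy as np
from scipy.optimize import minimize_scalar

mu = 1.5   # a fixed, arbitrary mu_Y; X is N(0, 1)
# KL(X || Y) as a function of sigma_Y, from the closed form above
kl = lambda sigma: np.log(sigma) + (1 + mu**2) / (2 * sigma**2) - 0.5
res = minimize_scalar(kl, bounds=(0.01, 10), method="bounded")
print(res.x**2, mu**2 + 1)   # both about 3.25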

KL divergence is not symmetric, hence we say divergence rather than distance. More on that here. If we want to solve the opposite problem, minimizing KL(Y || X), the optimal value of σ_Y² is simply 1.
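
And the reverse direction, with the same caveats as the sketch above: minimizing KL(Y || X) over σ_Y lands at 1 regardless of μ_Y.

import numpy as np
from scipy.optimize import minimize_scalar

mu = 1.5   # same fixed mu_Y as above; X is N(0, 1)
# KL(Y || X) as a function of sigma_Y, from the closed form with the roles swapped
kl_rev = lambda sigma: -np.log(sigma) + (sigma**2 + mu**2) / 2 - 0.5
res = minimize_scalar(kl_rev, bounds=(0.01, 10), method="bounded")
print(res.x**2)   # about 1, independent of mu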
