
Conditional independence notation

by John D. Cook

Ten years ago I wrote a blog post that concludes with this observation:

The ideas of being relatively prime, independent, and perpendicular are all related, and so it makes sense to use a common symbol to denote each.

This post returns to that theme, particularly looking at independence of random variables.

History

Graham, Knuth, and Patashnik proposed using ⊥ for relatively prime numbers in their book Concrete Mathematics, at least by the second edition (1994). Maybe it was in their first edition (1988), but I don't have that edition.

Philip Dawid proposed a similar symbol, ⫫, for (conditionally) independent random variables in 1979 [1].

[Image: dawid_1979.png — excerpt from Dawid (1979)]

As explained here, independent random variables really are orthogonal in some sense, so it's a good notation.
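The orthogonality is in the L² sense: for independent random variables, E[XY] = E[X] E[Y], so the centered variables have inner product E[(X − E[X])(Y − E[Y])] = 0. A small pure-Python sketch (the two distributions here are made up for illustration) verifies this exactly for discrete variables:

```python
from fractions import Fraction as F

# Hypothetical independent discrete distributions: value -> probability.
X = {1: F(1, 4), 2: F(3, 4)}
Y = {-1: F(1, 2), 5: F(1, 2)}

EX = sum(x * p for x, p in X.items())
EY = sum(y * p for y, p in Y.items())

# Under independence the joint distribution is the product measure,
# so the expectation of the product factors.
EXY = sum(x * y * px * py for x, px in X.items() for y, py in Y.items())

# E[XY] = E[X] E[Y] means the centered variables are orthogonal in L².
print(EXY == EX * EY)  # True
```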

Typography

The symbol ⫫ (Unicode U+2AEB, DOUBLE TACK UP) may or may not show up in your browser; it's an uncommon character and your font may not have a glyph for it.

There's no command in basic LaTeX for the symbol. You can enter the Unicode character in XeTeX, and there are several other alternatives discussed here. A simple work-around is to use

 \perp\!\!\!\perp

This says to take two perpendicular symbols, and kern them together by inserting three negative spaces between them.
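If you use the symbol often, it's convenient to wrap the kerned pair in a macro. The macro name \indep below is my own choice, not a standard command; \mathrel tells TeX to space it as a binary relation:

```latex
\documentclass{article}
% Hypothetical macro name; \mathrel gives relation spacing.
\newcommand{\indep}{\mathrel{\perp\!\!\!\perp}}
\begin{document}
$X \indep Y \mid Z$
\end{document}
```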

The package MnSymbol has a command \upmodels to produce ⫫. Why "upmodels"? Because it is a 90° counterclockwise rotation of the \models symbol ⊨ from logic.

To put a strike through ⫫ in LaTeX to denote dependence, you can use \nupmodels from the MnSymbol package, or if you're not using a package you could use the following.

 \not\!\perp\!\!\!\perp
Graphoid axioms

As an example of where you might see the ⫫ symbol used for conditional independence, the table below gives the graphoid axioms for conditional independence. (They're theorems, not axioms, but they're called axioms because you could think of them as axioms for working with conditional independence at a higher level of abstraction.)

[Image: graphoid_axioms.svg — table of the graphoid axioms]

Note that the independence symbol ⫫ has higher precedence than the conditioning symbol |. That is, X ⫫ Y | Z means X is independent of Y, once you condition on Z.
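For discrete distributions, X ⫫ Y | Z means P(x, y | z) = P(x | z) P(y | z) whenever P(z) > 0. Here is a small pure-Python sketch checking that factorization exactly; the function name and the example distribution are invented for illustration:

```python
from fractions import Fraction as F
from itertools import product

def is_cond_indep(joint):
    """Check X ⫫ Y | Z for a discrete joint pmf given as {(x, y, z): prob}.

    Uses the equivalent cross-multiplied form
    P(x, y, z) P(z) == P(x, z) P(y, z), which avoids division.
    """
    xs = {x for x, _, _ in joint}
    ys = {y for _, y, _ in joint}
    zs = {z for _, _, z in joint}
    pz  = {z: sum(joint.get((x, y, z), 0) for x in xs for y in ys) for z in zs}
    pxz = {(x, z): sum(joint.get((x, y, z), 0) for y in ys) for x in xs for z in zs}
    pyz = {(y, z): sum(joint.get((x, y, z), 0) for x in xs) for y in ys for z in zs}
    return all(
        joint.get((x, y, z), 0) * pz[z] == pxz[(x, z)] * pyz[(y, z)]
        for x, y, z in product(xs, ys, zs)
        if pz[z] > 0
    )

# Hypothetical example: given Z, X and Y are independent coin flips
# whose biases depend on Z, so X ⫫ Y | Z holds by construction.
pz  = {0: F(1, 2), 1: F(1, 2)}
px1 = {0: F(1, 3), 1: F(3, 4)}   # P(X=1 | Z=z)
py1 = {0: F(1, 5), 1: F(2, 3)}   # P(Y=1 | Z=z)
joint = {
    (x, y, z): pz[z] * (px1[z] if x else 1 - px1[z]) * (py1[z] if y else 1 - py1[z])
    for x in (0, 1) for y in (0, 1) for z in (0, 1)
}
print(is_cond_indep(joint))  # True
```

Using exact fractions rather than floats makes the equality test reliable; with floats you would compare up to a tolerance instead.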

The axioms above are awfully dense, but they make sense when expanded into words. For example, the symmetry axiom says that if knowledge of Z makes Y irrelevant to X, it also makes X irrelevant to Y. The decomposition axiom says that if knowing Z makes the combination of Y and W irrelevant to X, then knowing Z makes Y alone irrelevant to X.
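Written out in the kerned notation, the five graphoid axioms (in their standard form, following Pearl's formulation) are:

```latex
\begin{align*}
\text{Symmetry:}      &\quad X \perp\!\!\!\perp Y \mid Z \;\Rightarrow\; Y \perp\!\!\!\perp X \mid Z \\
\text{Decomposition:} &\quad X \perp\!\!\!\perp YW \mid Z \;\Rightarrow\; X \perp\!\!\!\perp Y \mid Z \\
\text{Weak union:}    &\quad X \perp\!\!\!\perp YW \mid Z \;\Rightarrow\; X \perp\!\!\!\perp Y \mid ZW \\
\text{Contraction:}   &\quad X \perp\!\!\!\perp Y \mid Z \;\wedge\; X \perp\!\!\!\perp W \mid ZY
                        \;\Rightarrow\; X \perp\!\!\!\perp YW \mid Z \\
\text{Intersection:}  &\quad X \perp\!\!\!\perp W \mid ZY \;\wedge\; X \perp\!\!\!\perp Y \mid ZW
                        \;\Rightarrow\; X \perp\!\!\!\perp YW \mid Z
\end{align*}
```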

The intersection axiom requires strictly positive probability distributions, i.e. distributions in which no event has probability zero.

More on conditional probability

[1] A. P. Dawid. Conditional Independence in Statistical Theory. Journal of the Royal Statistical Society, Series B (Methodological), Vol. 41, No. 1 (1979), pp. 1–31.
