Defining zero factorial
Things are defined the way they are for good reasons. This seems blatantly obvious now, but it was eye-opening when I learned it my first year in college. Our professor, Mike Starbird, asked us to go home and think about how convergence of a series should be defined. Not how it is defined, but how it should be defined. We were not to look up the definition but to think about what it should be. The next day we proposed our definitions. In good Socratic fashion Starbird showed us the flaws of each and led us to the standard definition.
This exercise gave me confidence that mathematical definitions were created by ordinary mortals like myself. It also began my habit of examining definitions carefully to understand what motivated them.
One question that comes up frequently is why zero factorial equals 1. The pedantic answer is "Because it is defined that way." This answer alone is not very helpful, but it does lead to the more refined question: Why is 0! defined to be 1?
The answer to the revised question is that many formulas are simpler if we define 0! to be 1. If we defined 0! to be 0, for example, countless formulas would have to add disqualifiers such as "except when n is zero."
For example, the binomial coefficients are defined by
C(n, k) = n! / (k! (n - k)!).
The binomial coefficient C(n, k) tells us how many ways one can take a set of n things and select k of them. For example, the number of ways to deal a hand of five cards from a deck of 52 is C(52, 5) = 52! / (5! 47!) = 2,598,960.
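To see the formula in action, here is a minimal Python sketch (the binom helper below is just an illustration, not a standard library function) that evaluates C(n, k) directly from the definition above:

    from math import factorial

    # C(n, k) = n! / (k! (n - k)!)
    def binom(n, k):
        return factorial(n) // (factorial(k) * factorial(n - k))

    print(binom(52, 5))  # 2598960 possible five-card hands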
How many ways are there to deal a hand of 52 cards from a deck of 52 cards? Obviously one: the deck is the hand. But our formula says the answer is
C(52, 52) = 52! / (52! 0!),
and the formula is only correct if 0! = 1. If 0! were defined to be anything else, we'd have to say "The number of ways to deal a hand of k cards from a deck of n cards is C(n, k), except when k = 0 or k = n, in which case the answer is 1." (See [1] below for picky details.)
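Continuing the same sketch, the edge case goes through only because Python's math.factorial(0) follows the convention 0! = 1:

    from math import factorial

    def binom(n, k):
        return factorial(n) // (factorial(k) * factorial(n - k))

    print(factorial(0))   # 1, by definition
    print(binom(52, 52))  # 1: the deck is the hand, no special case needed
    # If 0! were defined to be 0, the denominator 52! * 0! would be 0
    # and the formula would break down at k = 0 and k = n.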
The example above is certainly not the only one where it is convenient to define 0! to be 1. Countless theorems would be more awkward to state if 0! were defined any other way.
Sometimes people appeal to the gamma function for justification that 0! should be defined to be 1. The gamma function extends factorial to real numbers, and the gamma function value associated with 0! is 1. (In detail, n! = Γ(n+1) for positive integers n and Γ(1) = 1.) This is reassuring, but it raises another question: Why should the gamma function be authoritative?
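As a quick numerical illustration (not a proof of anything), Python's math.gamma shows the agreement at positive integers:

    from math import gamma, factorial

    print(gamma(1))  # 1.0, the value associated with 0!
    for n in range(1, 6):
        print(n, factorial(n), gamma(n + 1))  # n! equals Gamma(n + 1)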
Indeed, there are many ways to extend factorial to non-integer values, and historically many ways were proposed. However, the gamma function won and its competitors have faded into obscurity. So why did it win? Analogous to the discussion above, we could say that the gamma function won because more formulas work out simply with this definition than with others. That is, you can very often replace n! with Γ(n + 1) in a formula true for positive integer values of n and get a new formula valid for real or even complex values of n.
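For example, replacing each factorial in the binomial coefficient with the gamma function gives a version defined for non-integer arguments. The binom_real helper below is only a sketch of that substitution:

    from math import gamma

    # C(x, y) = Gamma(x + 1) / (Gamma(y + 1) Gamma(x - y + 1))
    def binom_real(x, y):
        return gamma(x + 1) / (gamma(y + 1) * gamma(x - y + 1))

    print(binom_real(52, 5))     # approximately 2598960, matching the integer case
    print(binom_real(2.5, 1.3))  # also defined for non-integer arguments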
There is another reason why gamma won, and that's the Bohr-Mollerup theorem. It says that if you're looking for a function f(x) defined for x > 0 that satisfies f(1) = 1 and f(x+1) = x f(x), then the gamma function is the only log-convex solution. Why should we look for log-convex functions? Because factorial is log-convex, and so this is a natural property to require of its extension.
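Here is a small numerical spot-check of that log-convexity, using math.lgamma for log Γ; checking one midpoint is of course an illustration, not a proof:

    from math import lgamma

    # Log-convexity: log Gamma at a midpoint is at most the average
    # of log Gamma at the endpoints.
    x, y = 0.7, 3.2
    print(lgamma((x + y) / 2) <= (lgamma(x) + lgamma(y)) / 2)  # True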
Update: Occasionally I hear someone say that the gamma function (shifting its argument by 1) is the only analytic function that extends factorial to the complex plane, but this isn't true. For example, if you add sin(πx) to the gamma function, you get another analytic function that takes on the same values as gamma for positive integer arguments.
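You can see this numerically; in floating point sin(πn) is only approximately zero, but the point comes through:

    from math import gamma, sin, pi

    def g(x):
        return gamma(x) + sin(pi * x)

    for n in range(1, 5):
        print(gamma(n), g(n))   # agree at positive integers (up to rounding)
    print(gamma(2.5), g(2.5))   # differ away from the integers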
Related posts:
- Why are empty products 1?
- Why are natural logarithms natural?
- Another reason natural logarithms are natural
===
[1] Theorems about binomial coefficients have to make some restrictions on the arguments. See these notes for full details. But in the case of dealing cards, the only necessary constraints are the natural ones: we assume the number of cards in the deck and the number we want in a hand are non-negative integers, and that we're not trying to draw more cards for a hand than there are in a deck. Defining 0! as 1 keeps us from having to make any unnatural qualifications such as "unless you're dealing the entire deck."