
Beta reduction: The difference typing makes

by John D. Cook

Beta reduction is essentially function application. If you have a function described by what it does to x and apply it to an argument t, you rewrite the xs as ts. The formal definition of beta reduction is more complicated than this in order to account for free versus bound variables, but this informal description is sufficient for this blog post. We will first show that beta reduction holds some surprises, then explain how these surprises go away when you add typing.

Suppose you have an expression (λx.x + 2)y, which means apply to y the function that takes its argument and adds 2. Then beta reduction rewrites this expression as y + 2. In a more complicated expression, you might be able to apply beta reduction several times. When you do, does the process always stop? And if you apply beta reduction to parts of an expression in a different order, can you get different results?
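As a concrete illustration (a sketch of my own, not from the post), the same reduction can be carried out in Python, whose lambdas apply to arguments much like lambda calculus terms:

    # Beta reduction as ordinary function application, sketched in Python.
    # (λx.x + 2) y  →  y + 2
    add_two = lambda x: x + 2   # plays the role of λx.x + 2
    y = 5                       # an illustrative argument
    print(add_two(y))           # prints 7, i.e. y + 2 with y = 5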

Failure to normalize

You might reasonably expect that if you apply beta reduction enough times you eventually get an expression you can't reduce any further. Au contraire!

Consider the expression (λx.xx)(λx.xx). Beta reduction says to replace each x in the body of the first function with the argument, which is again (λx.xx). But when we do that, we get the original expression (λx.xx)(λx.xx) back. Beta reduction gets stuck in an infinite loop.
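You can watch this happen in Python, which, like untyped lambda calculus, places no restriction on applying a function to itself. This is a small sketch of my own; Python gives up with a RecursionError rather than looping forever, but the point stands: evaluation never produces a value.

    # omega = (λx.xx)(λx.xx): self-application applied to itself
    omega = lambda x: x(x)
    try:
        omega(omega)   # each step reproduces the same application; it never returns
    except RecursionError:
        print("no normal form: beta reduction never terminates here")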

Next consider the expression L = (λx.xxy)(λx.xxy). Applying beta reduction the first time gives (λx.xxy)(λx.xxy)y, or Ly. Applying beta reduction again yields Lyy. Beta "reduction" doesn't reduce the expression at all but makes it bigger.
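Writing the first step out makes the growth explicit: the body of the function is xxy, and substituting the argument (λx.xxy) for x gives

    (λx.xxy)(λx.xxy)y  =  Ly

which still contains L as a redex, so each further step just appends another y: Lyy, Lyyy, and so on.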

The technical term for what we've seen is that beta reduction is not normalizing. A rewrite system is strongly normalizing if applying the rules in any order eventually terminates. It's weakly normalizing if there's at least some sequence of applying the rules that terminates. Beta reduction is neither strongly nor weakly normalizing in the context of (untyped) lambda calculus.
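To see the gap between the two notions, here is a standard example that isn't in the post. Write Ω for (λx.xx)(λx.xx) and consider

    (λx.z) Ω

Reducing the outer application first discards Ω and terminates immediately in z, while reducing inside Ω just reproduces the same expression forever. That expression is weakly but not strongly normalizing. The expressions above are worse: no order of reduction terminates at all, which is why untyped beta reduction fails even weak normalization.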

Types to the rescue

In simply typed lambda calculus, we assign a type to every variable, and functions have to take the right type of argument. This additional structure prevents examples such as those above that fail to normalize. If x is a function that takes an argument of type A and returns a value of type B, then you can't apply x to itself: x takes something of type A, not something of type function from A to B. You can prove that not only does this rule out specific examples like those above, it rules out all possible examples that would prevent beta reduction from terminating.
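Put as an equation (this formalization is mine, not the post's): for x x to be well-typed, x would need both a function type A → B and the corresponding argument type A, which forces

    A = A → B

No simple type satisfies this, since A would have to contain itself as a proper part; that is the formal content of the informal argument above.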

To summarize, beta reduction is not normalizing, not even weakly, in the context of untyped lambda calculus, but it is strongly normalizing in the context of simply typed lambda calculus.

Confluence

Although beta reduction is not normalizing for untyped lambda calculus, the Church-Rosser theorem says it is confluent. That is, if an expression P can be transformed by beta reduction in two different ways into expressions M and N, then there is an expression T such that both M and N can be reduced to T. This implies that if beta reduction does terminate, then it terminates in a unique expression (up to alpha conversion, i.e. renaming bound variables). Simply typed lambda calculus is confluent as well, so in that context we can say beta reduction always terminates in a unique expression (again up to alpha conversion).
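For a small illustration (my example, not the post's), take P = (λx.x + 2)((λy.y)3). There are two redexes to choose from:

    outer redex first:  P → ((λy.y)3) + 2 → 3 + 2
    inner redex first:  P → (λx.x + 2)3   → 3 + 2

Either order reaches the common expression 3 + 2, just as Church-Rosser promises.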
