Category theory for programmers made easier
I imagine most programmers who develop an interest in category theory do so after hearing about monads. They ask someone what a monad is, and they're told that if they really want to know, they need to learn category theory.
Unfortunately, there are a couple of unnecessary difficulties that anyone wanting to understand monads etc. is likely to face immediately. One is some deep set theory.
"A category is a collection of objects ..."
"You mean like a set?"
"Ah, well, no. You see, Bertrand Russell showed that ..."
There are reasons for such logical niceties, but they don't matter to someone who wants to understand programming patterns.
Another complication is morphisms.
"As I was saying, a category is a collection of objects and morphisms between objects ..."
"You mean like functions?"
"Well, they might be functions, but more generally ..."
Yes, Virginia, morphisms are functions. It's true that they might not always be functions, but they will be functions in every example you care about, at least for now.
Category theory is a framework for describing patterns in function composition, and so that's why things like monads find their ultimate home in category theory. But doing category theory rigorously requires some setup that people eager to get into applications don't have to be concerned with.
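The two axioms a category imposes on its arrows — composition is associative, and every object has an identity arrow — are things a programmer can check directly when the arrows are functions. A minimal sketch in Python (the helper names `compose` and `identity` are mine, not standard library):

```python
def compose(g, f):
    """Morphism composition: (g . f)(x) = g(f(x))."""
    return lambda x: g(f(x))

def identity(x):
    """The identity arrow on any object."""
    return x

# Three composable "arrows" between the same object (int)
f = lambda n: n + 1
g = lambda n: n * 2
h = lambda n: n - 3

# Associativity: h . (g . f) == (h . g) . f
assert compose(h, compose(g, f))(10) == compose(compose(h, g), f)(10)

# Identity laws: id . f == f == f . id
assert compose(identity, f)(10) == f(10) == compose(f, identity)(10)
```

Nothing here requires knowing what a "collection of objects" formally is; the laws are statements about how functions plug together.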
Patrick Honner posted on Twitter recently that his 8-year-old child asked him what area is. My first thought on seeing that was that a completely inappropriate answer would be that this is a deep question that wasn't satisfactorily settled until the 20th century using measure theory. My joking response to Patrick was
Well, first we have to define σ-algebras. They're kinda like topologies, but closed under countable unions and intersections instead of arbitrary unions and finite intersections. Anyway, a measure is a ...
It would be ridiculous to answer a child this way, and it is nearly as ridiculous to burden a programmer with unnecessary logical nuance when they're trying to find out why something is called a functor, or a monoid, or a monad, etc.
I saw an applied category theory presentation that began with "A category is a graph ..." That sweeps a lot under the rug, but it's not a bad conceptual approximation.
So my advice to programmers learning category theory is to focus on the arrows in the diagrams. Think of them as functions; they probably are in your application [1]. Think of category theory as a framework for describing patterns. The rigorous foundations can be postponed, perhaps indefinitely, just as an 8-year-old child doesn't need to know measure theory to begin understanding area.
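To make "focus on the arrows" concrete: a functor is simply a pattern in how arrows transform, one that preserves identity and composition. Here is an illustrative Python sketch of the two functor laws for the list functor, i.e. mapping a function over a list (`compose` and `fmap` are my names for this sketch, not a standard API):

```python
def compose(g, f):
    """Morphism composition: (g . f)(x) = g(f(x))."""
    return lambda x: g(f(x))

def fmap(f):
    """Lift an arrow a -> b to an arrow [a] -> [b]."""
    return lambda xs: [f(x) for x in xs]

f = lambda n: n + 1
g = lambda n: n * 2
xs = [1, 2, 3]

# Functor law 1: mapping the identity is the identity
assert fmap(lambda x: x)(xs) == xs

# Functor law 2: fmap(g . f) == fmap(g) . fmap(f)
assert fmap(compose(g, f))(xs) == compose(fmap(g), fmap(f))(xs)
```

Checking these laws on lists requires no set theory at all, which is the point: the pattern lives entirely in how the arrows compose.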
[1] The term "contravariant functor" has unfortunately become deprecated. In more modern presentations, all functors are covariant, but some are covariant in an opposite category. That does make the presentation more slick, but at the cost of turning around arrows that used to represent functions and now don't really. In my opinion, category theory would be more approachable if we got rid of all "opposite categories" and said that functors come in two flavors, covariant and contravariant, at least in introductory presentations.
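The distinction in the footnote also shows up directly in code. A contravariant functor reverses arrows: it composes a function in front rather than behind. Predicates are the classic example — from an arrow a → b you get a map from predicates on b to predicates on a. An illustrative Python sketch (the name `contramap` follows common usage, but the helpers here are mine):

```python
def contramap(f):
    """Pull a predicate on b back along an arrow f : a -> b,
    yielding a predicate on a. Note the reversal of direction."""
    return lambda pred: lambda x: pred(f(x))

is_even = lambda n: n % 2 == 0   # predicate on int
str_len = lambda s: len(s)       # arrow: str -> int

# Pulling is_even back along str_len gives "has even length" on strings.
has_even_length = contramap(str_len)(is_even)

assert has_even_length("abcd") is True
assert has_even_length("abc") is False
```

The arrow `str_len` goes from strings to integers, but `contramap(str_len)` goes the other way, from integer predicates to string predicates — that reversal is all "contravariant" means here.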
The post Category theory for programmers made easier first appeared on John D. Cook.