Detecting fraud with the GRIM test
The latest episode of Erik Seligman's podcast is entitled "The Grim State of Modern Pizza." Although you might not realize it from the title, the episode is about fraud detection.
GRIM stands for Granularity-Related Inconsistency of Means. In a nutshell, the test looks for means (averages) that are not possible on number-theoretic grounds. If you average n integers, the result must be a multiple of 1/n. Ridiculously simple, but effective.
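To make the test concrete, here is a minimal Python sketch (my own code, not from the episode or the GRIM paper; the function name grim_consistent and the string interface are ad hoc). It asks whether any sum of n integers could round to the reported mean, using the reported string to infer how many decimal places were kept.

    def grim_consistent(reported: str, n: int) -> bool:
        """Could `reported` be the rounded mean of n integers?

        `reported` is the mean exactly as published, e.g. "0.21",
        so its string form tells us how many decimal places were kept.
        """
        decimals = len(reported.partition(".")[2])
        mean = float(reported)
        # The exact mean must be S/n for an integer S, and S must be
        # near n * mean, so only a few candidate sums need checking.
        # String formatting (round to nearest) stands in for whatever
        # rounding convention the paper used.
        nearest = round(mean * n)
        return any(
            f"{s / n:.{decimals}f}" == reported
            for s in (nearest - 1, nearest, nearest + 1)
        )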
As with any test for fraud, it also catches errors and omissions that are not fraudulent. For example, suppose someone reports averaging 19 integers and getting a result ending in .22. No multiple of 1/19 rounds to 0.22. It's possible that the fractional part was 4/19 = 0.2105… and they mistakenly rounded up rather than down. Or maybe they actually computed the average of 18 integers (4/18 = 0.2222… does round to 0.22), having removed a value for some reason not made explicit in the paper.
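Running the sketch above against both readings of this example confirms them:

    print(grim_consistent("0.22", 19))  # False: no multiple of 1/19 rounds to 0.22
    print(grim_consistent("0.21", 19))  # True:  4/19 = 0.2105... rounds to 0.21
    print(grim_consistent("0.22", 18))  # True:  4/18 = 0.2222... rounds to 0.22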
I've made these kinds of observations occasionally, as many people have, but only when they jumped out at me. I've noticed things like a reported average of five integers ending in .3, which of course cannot happen (multiples of 1/5 end in .0, .2, .4, .6, or .8), but it's not so obvious with larger denominators. Even though the GRIM test is trivial, the authors deserve credit for applying the idea systematically and effectively.
Kent Beck, creator of the "extreme programming" methodology for software development, would ask "Have you tried the simplest thing that might possibly work?" Often the answer is no. We assume simple things won't work without giving them a try first.
It takes experience to see the value in asking very simple questions. It also takes some confidence/courage because you risk looking naive if you ask a simple question that doesn't reveal a problem. And it takes some diplomacy since you risk embarrassing someone else if your simple question does reveal a problem.