The valley of medium reliability
Last evening my electricity went out and this morning it was restored. This got me thinking about systems that fail occasionally [1]. Electricity goes out often enough that we prepare for it. We have candles and flashlights, my work computer is on a UPS, etc.
A residential power outage is usually just an inconvenience, especially if the power comes back on within a few hours. A power outage at a hospital could be disastrous, and so hospitals have redundant power systems. The problem is in between: power that is reliable enough that you don't expect it to go out, but where the consequences of an outage are serious [2].
If a system fails occasionally, you prepare for that. And if it never fails, that's great. In between is the problem, a system just reliable enough to lull you into complacency.
Dangerously reliable systems

For example, GPS used to be unreliable. It made useful suggestions, but you wouldn't blindly trust it. Then it got a little better and became dangerous as people trusted it when they shouldn't. Now it's much better. Not perfect, but less dangerous.
For another example, people who live in flood plains have flood insurance. Their mortgage company requires it. And people who live on top of mountains don't need flood insurance. The people at most risk are in the middle. They live in an area that could flood, but since it hasn't flooded yet they don't buy flood insurance.
So safety is not always an increasing function of reliability. It might dip down before going up. There's a valley between unreliable and highly reliable where people are tempted to take unwise risks.
Artificial intelligence risks

I expect we'll see a lot of this with artificial intelligence. Clumsy AI is not dangerous; pretty good AI is dangerous. Moderately reliable systems in general are dangerous, but this especially applies to AI.
As in the examples above, the better AI becomes, the more we rely on it. But there's something else going on. As AI failures become less frequent, they also become weirder.
Adversarial attacks

You'll see stories of someone putting a tiny sticker on a stop sign and now a computer vision algorithm thinks the stop sign is a frog or an ice cream sundae. In this case, there was a deliberate attack: someone knew how to design a sticker to fool the algorithm. But strange failures can also happen unprompted.
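To give a sense of how such attacks are constructed, here is a minimal sketch of the fast gradient sign method (FGSM), one standard technique for generating adversarial perturbations. The toy model, random "image," and epsilon value below are stand-ins for illustration, not the setup from any particular stop-sign attack:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in classifier; real attacks target trained vision networks.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in "stop sign" image
label = torch.tensor([0])                             # its true class

# Take the gradient of the loss with respect to the input pixels,
# not with respect to the model weights.
loss = F.cross_entropy(model(image), label)
loss.backward()

# FGSM: nudge every pixel a small step in the direction that increases
# the loss, then clamp back to valid pixel values.
epsilon = 0.05
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)
```

The point of the sketch is that the perturbation is tiny and structured: each pixel moves by at most epsilon, which is why the altered image can look unchanged to a human while flipping the model's prediction.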
Unforced errors

Amazon's search feature, for example, is usually very good. Sometimes I'll get every word in a book title wrong and yet it will figure out what I meant. But one time I was searching for the book Universal Principles of Design.
I thought I remembered a "25" in the title. The subtitle turns out to be "125 Ways to Enhance Usability …". I searched on "25 Universal Design Principles" and the top result was a massage machine that will supposedly increase the size of a woman's breasts. I tried the same search again this morning. The top result is a book on design. The next five results are
- a clip-on rear view mirror
- a case of adult diapers
- a ratchet adapter socket
- a beverage cup warmer, and
- a folding bed.
The book I was after, and whose title I remembered pretty well, was nowhere in the results.
Because AI is literally artificial, it makes mistakes no human would make. If I went to a brick-and-mortar bookstore and told a clerk, "I'm looking for a book. I think the title is something like '25 Universal Design Principles,'" the clerk would not say, "Would you like to increase your breast size? Or maybe buy a box of diapers?"
In this case, the results were harmless, even entertaining. But unexpected results in a mission-critical system would not be so entertaining. Our efforts to make systems fool-proof have been based on experience with human fools, not artificial ones.
[1] This post is an elaboration on what started as a Twitter thread.
[2] I'm told that electrical power in Norway is very reliable, but the country is also very dependent on electricity, including for heating. Alternative sources of fuel such as propane are hard to find.