2015-01-09

The Memory Mavens, Part 4: The Analytical Power of Failure

This work is licensed under a Creative Commons Attribution 4.0 International License.

by Tim Widowfield

Another lifetime ago, back when I was a U.S. Air Force field training detachment commander, one of our instructors came into my office with a worried look. He told me he had been teaching basic circuitry to a group of enlisted students. “Lieutenant,” he asked, “when you were in school what did they teach you about the flow of electricity? That it goes from the negative terminal to the positive, right?”

When I agreed, he continued, “Well, I’ve got this squid in my class, and he said in the Navy they taught him it goes from positive to negative!” He was flummoxed. (At the time our detachment on Beale AFB was the only certified DoD training facility from Sacramento up through Oregon, so we often played host to reservists and military members from other branches.)

I said, “But the math works both ways, right? I mean in circuit models it doesn’t really matter.” He found the whole thing terribly unsettling. It was as if I’d told him up was down and down was up.

Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful.

— George E. P. Box

Statistician George E. P. Box (1919-2013)

All models are wrong

Often, while trying to understand how a process works, we build representational mental constructs, or “models,” that make it more tractable. These models don’t correspond identically to the real world; instead, they’re subsets of the world — small enough to fit inside our brains. Our models of simple electronics are like that.

What can we learn from our little story above? First, the fact that we can swap the logical current flow in a circuit diagram and still make it “work” (for our purposes) might suggest that our model doesn’t fully correspond with reality. It’s just a representational subset, after all. It’s fiction. But that’s all right, as long as our model gives us the answers we need.
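The instructor’s puzzle dissolves once you treat the direction of current as a sign convention rather than a fact about the circuit. A minimal sketch (with illustrative values, not from the original anecdote) showing that reversing the assumed direction flips signs but leaves the measurable physics untouched:

```python
# Simple series circuit: a 9 V source across a 3-ohm resistor.
# (Illustrative numbers only.)
V = 9.0   # volts
R = 3.0   # ohms

# Conventional current: positive charge assumed to flow from + to -.
I_conventional = V / R          # 3.0 A

# Electron-flow convention: reverse the assumed direction, which
# flips the sign of both the voltage drop and the current.
I_electron = (-V) / R           # -3.0 A

# Either way, observable quantities agree: same magnitude of current,
# same power dissipated in the resistor.
P_conventional = I_conventional ** 2 * R
P_electron = I_electron ** 2 * R
assert P_conventional == P_electron == 27.0
```

Both conventions are “wrong” in the sense that neither is a picture of what electrons actually do, yet both yield identical predictions, which is all the model is asked to provide.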

Sometimes a model we know is wrong around the edges can still serve us adequately in general circumstances. We’ve refined the standard model of gravitation quite a bit since Newton’s day. However, if our only task is to launch a projectile at a castle wall, then the older, simpler model will probably suffice. On the other hand, if we want to launch and maintain an array of geosynchronous satellites for precise global positioning, we’re going to have to take into account the effects of relativity — trading in Newton for Einstein, so to speak.
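The GPS case can be made concrete with a back-of-the-envelope calculation. This sketch (standard textbook constants; real GPS corrections also account for Earth’s rotation and orbital eccentricity) estimates how far a satellite clock drifts per day if we ignore relativity:

```python
# Rough relativistic clock drift for a GPS satellite, in microseconds per day.
import math

GM = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
c = 2.99792458e8       # speed of light, m/s
R_earth = 6.371e6      # mean Earth radius, m
r_gps = 2.656e7        # GPS orbital radius (~20,200 km altitude), m
SECONDS_PER_DAY = 86400

# General relativity: a clock higher in the gravity well runs faster.
gr_us = GM * (1 / R_earth - 1 / r_gps) / c**2 * SECONDS_PER_DAY * 1e6

# Special relativity: the satellite's orbital speed slows its clock.
v = math.sqrt(GM / r_gps)                       # roughly 3.9 km/s
sr_us = -(v**2 / (2 * c**2)) * SECONDS_PER_DAY * 1e6

net_us = gr_us + sr_us   # net drift, microseconds per day
```

The net drift comes out to roughly +38 microseconds per day; at the speed of light that corresponds to positioning errors accumulating at kilometers per day, which is why Newton’s model, adequate for the cannonball, fails for the satellite.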

Whenever we use a scientific or mathematical model to help us make real-world predictions, we need to be aware of its limits. We need to know the range of conditions within which it works reliably. And we need to know whether and how its performance degrades as it approaches those limits.

Actually, we can apply that last lesson to the real world, too. That’s why car manufacturers slam their vehicles into walls. We can’t fully understand a system’s range of acceptable behavior until we find the points at which it fails. Moreover, we can learn a great deal from discovering where and how a system begins to degrade. We don’t smash cars because we want their safety systems to fail; we do it to find out where those failure points are.