When I heard about IBM’s free iPad app on the history of math, Minds of Modern Mathematics, it reinforced my strong belief that, in our Twitter-driven world, we must understand the roots of our technology.
History places us in time. The computer has altered the human experience, changing the way we work, how we play, and even how we think. A hundred years from now, generations whose lives have been unalterably changed by the impact of automatic computing will wonder how it all happened – and who made it happen. If we lose that history, we will lose our cultural heritage.
Compared to historians in other fields, we have an advantage with computers: our subject is new, and many of our pioneers are still alive. Imagine if someone had recorded a video interview with Michelangelo just after he painted the Sistine Chapel. We can do that. Generations from now, the thoughts, memories, and voices of those at the dawn of computing will be just as valuable.
But we also have a disadvantage: history is easier to write when the participants are dead and will not contest your version. For us, fierce disagreements rage among people who were there about who did what, who did it when, and who did it first. There are monumental ego clashes and titanic grudges. But that’s fine, because it creates a rich goldmine of information that we, and historians who come after us, can study. Nobody said history was supposed to be easy.
It’s important to preserve the “why” and the “how,” not just the “what.” Modern computing is the result of thousands of human minds working simultaneously on solving problems. It’s a form of parallel processing, a strategy we later borrowed for the computers themselves. Ideas combined in unexpected ways as people built on one another’s work.
Even “simple” historical concepts aren’t really simple. What’s an invention? Breakthrough ideas sometimes seem to be “in the air,” sensed by everyone at once. Take the integrated circuit. At least two teams invented it independently, and each produced a working model. They were working thousands of miles apart, and they had never met. It was “in the air.”
Often the process and the result are accidental. “I wasn’t trying to invent an integrated circuit,” Bob Noyce, one of its co-inventors, said of the breakthrough. “I was trying to solve a production problem.” The history of computing is the history of open, inquiring minds solving big, intractable problems, even when they weren’t trying to.
Besides, computer history can be fun. An elegantly designed classic machine or a well-written software program embodies a kind of truth and beauty that gives the appreciative viewer an aesthetic thrill. As Albert Einstein observed, “The best scientists are also artists.”
Engineers have applied incredible creativity to solve the knotty problems of computing. Some of their ideas have worked. Some haven’t. That’s more than okay; it’s worth celebrating.
Silicon Valley understands that innovation thrives when it has a healthy relationship with failure. Technical innovation is lumpy. It’s non-linear. Long periods of the doldrums are smashed by bursts of insight and creativity. And, like artists, successful engineers are open to happy accidents.
In other cultures, failure can be shameful. Business failure can even send you to prison. But here, failure is viewed as a possible prelude to success. Many great technology breakthroughs are inspired by crazy ideas that bombed. We need to study failures, and learn from them to create our future successes.
Leonard J. Shustek is co-founder and Chairman of the Board of Trustees of the Computer History Museum. This blog posting was adapted from a previously published article.