The 4 Most Mind-Boggling Computer Blunders in History (and How Not to Repeat Them)
As we hand over more and more responsibilities and workloads to computer systems we consider trustworthy and reliable, it's worth scrolling back through the history books to remind ourselves that no computer or software product is any more reliable than the fallible humans who program it. Sometimes a computer bug is as simple as a missing piece of punctuation, yet it can lead to devastating disasters costing hundreds of millions of dollars and even human lives. These phenomenal blunders highlight the importance of checking, double-checking, and triple-checking your work, and then having a coworker look it over once more.
1. The Mars Climate Orbiter
Unfortunately, nobody was around to barbecue for the most expensive fireworks show in history.
In 1998, the US sent a $327.6 million space probe to Mars to orbit the planet and study its atmosphere. The mission, run out of NASA's Jet Propulsion Laboratory, involved multiple teams working on different aspects of the design, the launch, and the spacecraft's insertion into orbit around Mars. Though the teams checked and rechecked the math, they never confirmed that everyone was using the same system of measurement.
One team did its calculations in English pound-force seconds, while another worked in metric newton-seconds. Since one pound-force second equals about 4.45 newton-seconds, the navigation software misjudged the effect of each thruster firing by more than a factor of four, steering the probe far too deep into the Martian atmosphere. The outcome was an extraordinarily expensive firecracker over Mars, much to the chagrin of the engineers and the American taxpayers.
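The failure pattern is easy to reproduce in a few lines. The sketch below uses hypothetical numbers and function names, not NASA's actual flight software; it simply shows how a raw number crossing a module boundary without its unit invites exactly this mistake:

```python
# One pound-force second expressed in newton-seconds.
LBF_S_TO_N_S = 4.448222

def thruster_impulse():
    """Hypothetical ground software: reports impulse in pound-force seconds."""
    return 100.0  # lbf*s

def update_trajectory(impulse_n_s):
    """Hypothetical navigation software: expects newton-seconds."""
    return impulse_n_s  # trajectory math would consume this value

# The bug: the raw number crosses the interface with no unit conversion.
as_received = update_trajectory(thruster_impulse())
as_intended = update_trajectory(thruster_impulse() * LBF_S_TO_N_S)

print(as_intended / as_received)  # off by a factor of ~4.45
```

Modern codebases often guard against this by attaching units to values with a library such as Pint, so mixing incompatible units raises an error instead of silently corrupting the math.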
2. Toyota Prius
Smart cars are only as smart as their programming.
The Toyota Prius, for example, began having some decidedly dumb problems in 2005: warning lights came on for no apparent reason, and the gasoline engine occasionally stalled, also for no discernible reason. Mechanically, the vehicles were completely sound; a computer error was causing the problem. The episode ended with the recall of about 160,000 cars and a chunk of change out of Toyota's pocket. As developers work on even smarter cars that drive themselves, drawing on new tools like Hadoop and NoSQL, let's hope they get the data right and don't introduce an even more serious glitch into the programming.
3. Soviet Gas Pipeline
Most of the boo-boos on this list were unintentional. This one, if the stories are true, was quite deliberate. At the height of the Cold War, the Soviets began secretly purchasing U.S. technologies, and stealing what they couldn't manage to buy. In 1982, the CIA reportedly got word that the Soviets were acquiring plans for a gas pipeline from a Canadian source, and made sure the plans contained errors that would pass Soviet inspection but fail in actual operation.
Whether it was an intentional act by the CIA or something else entirely (and as yet undiscovered), the result was a massive explosion in Siberia that caused concern all the way back in Washington, D.C. Fortunately, the incident caused no human casualties.
4. AT&T Network Outage
What happens when you introduce a bug that introduces a bug? AT&T programmers faced this particular obstacle in 1990, when a new software update caused their long-distance switches to crash every time a neighboring switch sent a message saying it had just recovered from a crash. Predictably, one switch crashed, announced its recovery to its neighbors when it came back up, and thereby crashed those switches in turn. About 60,000 AT&T customers were left without phone service for roughly nine hours while engineers scrambled to fix the problem. In the end, they had to reload the previous version of the software to get the systems back up and running.
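A toy simulation makes the feedback loop obvious. This is nothing like AT&T's actual switch code, just a minimal sketch of the dynamic: if handling a neighbor's "I recovered" message crashes the receiver, every recovery breeds more crashes and the cascade never dies out on its own:

```python
from collections import deque

def simulate_cascade(n_switches, max_events=50):
    """Ring of switches; switch 0 recovers first. Returns total crash count."""
    crashes = 0
    announcements = deque([0])  # switches that just recovered
    handled = 0
    while announcements and handled < max_events:
        handled += 1
        recovered = announcements.popleft()
        # Each neighbor receives the "recovered" message...
        for neighbor in ((recovered - 1) % n_switches,
                         (recovered + 1) % n_switches):
            # ...and the buggy handler crashes it. On reboot it will
            # announce its own recovery, feeding the loop.
            crashes += 1
            announcements.append(neighbor)
    return crashes

# Even with only four switches, the cascade runs until we cut it off.
print(simulate_cascade(4))  # 100 crashes from a single recovery message
```

The key property is that each handled message produces two new crashes and two new recovery announcements, so the queue never empties; only rolling back the buggy handler (as AT&T did) breaks the loop.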
As big data becomes more widely used, the potential for even more significant disasters is very real. Programmers, engineers, and data scientists need to dot their i's, cross their t's, and make sure everyone is on the same page. Care and attention to detail, from the offloading process through the analysis, will help ensure that your disaster isn't on a list like this one a couple of decades from now.