No technology -- not even Tesla Autopilot -- can be completely safe.


There's no such thing as 100% safe when it comes to technology.

As the company would very much like you to note, self-driving cars are doing better, statistically speaking, than human drivers. Tesla says Autopilot was used for 130 million miles of driving before this crash, its first fatality; human-driven cars in the U.S. average 1.08 deaths per 100 million vehicle miles traveled.
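As a back-of-the-envelope check on that comparison, here's the arithmetic, assuming exactly one fatality across the 130 million Autopilot miles Tesla reports (a single event, so treat the result accordingly):

```python
# Rough comparison of the two fatality rates quoted above.
# Assumption: exactly one fatality in Tesla's reported 130M Autopilot miles.
AUTOPILOT_MILES = 130e6
HUMAN_RATE_PER_100M = 1.08  # U.S. deaths per 100 million vehicle miles (NHTSA)

autopilot_rate_per_100m = (1 / AUTOPILOT_MILES) * 100e6
print(f"Autopilot: {autopilot_rate_per_100m:.2f} deaths per 100M miles")   # 0.77
print(f"Human drivers: {HUMAN_RATE_PER_100M:.2f} deaths per 100M miles")   # 1.08
```

With only one event in the numerator, the uncertainty around that 0.77 is enormous, so the comparison is suggestive rather than conclusive.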

But statistics aren't the whole story. What a car crash and a nuclear power plant have in common is an engineering conundrum shared by many other complex technological systems: you can engineer a system to be safer, but you can't engineer it to be completely safe. It's like casual sex, but with even more moving parts.

The safer you make a technological system — the more obvious, known risks you eliminate — the more likely it is that you've accidentally introduced unlikely, unpredictable failure points that could, someday, come back to bite you in the ass. All the more so because the people operating the system don't expect the failure.

That seems to be what happened in the Tesla crash, based on what we know so far. A 40-year-old man named Joshua Brown was killed in May on a divided highway when a tractor-trailer made a left turn in front of his car at an intersection and Brown's car didn't stop, perhaps because it read the trailer as a highway sign. Autopilot is programmed to ignore highway signs that hang above the road; nobody wants a car that slams on the brakes in the middle of the interstate.
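To make that failure mode concrete, here's a hypothetical sketch (not Tesla's actual code; the class, labels, and logic are invented for illustration) of how a "never brake for overhead signs" rule interacts with a perception mistake:

```python
# Hypothetical illustration only -- not Tesla's implementation. The labels,
# class, and rule here are invented for the sake of the example.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str                # classifier output, e.g. "vehicle" or "overhead_sign"
    closing_speed_mps: float  # how fast we are approaching the object

def should_brake(obj: DetectedObject) -> bool:
    # Anti-phantom-braking rule: overhead signs span the roadway but pose no
    # collision risk, so the planner deliberately ignores them.
    if obj.label == "overhead_sign":
        return False
    return obj.closing_speed_mps > 0

# Normal cases: brake for a car ahead, ignore a real sign overhead.
print(should_brake(DetectedObject("vehicle", 30.0)))        # True
print(should_brake(DetectedObject("overhead_sign", 30.0)))  # False

# The edge case: if perception misreads a high, white trailer side as an
# overhead sign, the planner receives exactly the same input as for a real
# sign, and the "ignore" rule fires on a genuine obstacle.
print(should_brake(DetectedObject("overhead_sign", 30.0)))  # False, but it's a trailer
```

The rule itself is reasonable; the new failure point lives upstream, in the classifier, which is exactly the kind of unlikely, unexpected breakdown described above.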
