
How Tesla's autopilot learns...


Stashed in: Software!, Self-driving Cars, Technology, My Cold Dead Fingers, Tesla, Tesla!, Accelerating Returns, GPU


This is actually way cooler than I imagined. I'm looking forward to an autopilot world!

While most car companies might not be building learning systems like this, Google’s self-driving cars operate in much the same manner as Tesla’s.

In that way, Tesla’s cars are more similar to smart connected gadgets like Nest’s learning thermostat (now owned by Google’s Alphabet) than they are to traditional cars. Using sensors and algorithms, Nest’s thermostat learns its owner’s behavior over time, offers increasingly useful services through software updates, and even informs Nest’s decisions about its next generation of hardware.

So, how does Tesla’s autopilot system, and its cars in general, learn? It all starts with data.

Companies building these types of driver-assistance services, as well as full-blown self-driving cars like Google’s, need to teach a computer how to take over key parts (or all) of driving using digital sensor systems instead of a human’s senses. To do that, companies generally start by training algorithms on a large amount of data.
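
To make that concrete, here's a minimal, hypothetical sketch of what "training algorithms on a large amount of data" can look like in practice: a small neural network learning to predict a steering angle from camera frames. The PyTorch setup, the network shape, and the random tensors standing in for recorded driving logs are all illustrative assumptions, not Tesla's or Nvidia's actual pipeline.

```python
# Hypothetical sketch: supervised training of a driving model.
# Camera frames (inputs) are paired with the steering angles a human
# driver chose (targets); random tensors stand in for real driving logs.
import torch
import torch.nn as nn

# Toy convolutional network that maps a camera frame to a steering angle.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),  # pool to a fixed size regardless of frame resolution
    nn.Flatten(),
    nn.Linear(32, 1),         # predicted steering angle
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-ins for miles of recorded camera frames and human steering decisions.
frames = torch.randn(64, 3, 66, 200)   # batch of RGB frames
angles = torch.randn(64, 1)            # steering angles chosen by a human driver

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(frames), angles)  # gap between model and human steering
    loss.backward()                        # learn from the mistakes
    optimizer.step()
```

In a real system the frames would come from hundreds of thousands of miles of driving, and a network like this would be one small piece of a much larger perception-and-control stack.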

You can think of it like how a child learns through constant experiences and replication, explained Danny Shapiro, Nvidia’s senior director of automotive, in an interview with Fortune. Nvidia sells high-performance chips that enable computers to process large amounts of data, and it recently started selling a computing system, called Drive PX, for self-driving cars and driver-assist applications.

To create a self-driving car, companies feed hundreds of thousands, or even millions, of miles of driving video and data into a computer model to build up a massive vocabulary around driving. The algorithms use computer vision techniques to break down the videos and understand them. The goal is that when something unexpected happens, like a ball rolling into the street, the car recognizes the pattern and reacts accordingly (slowing down, because a child could be running into the street after it).
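
The "recognize the pattern and react" step is essentially a mapping from what the perception model thinks it sees to a cautious driving response. Here's a deliberately oversimplified sketch of that idea; the pattern names and responses are made-up placeholders, and real systems use learned planners rather than a lookup table.

```python
# Oversimplified illustration of "recognize a pattern, react accordingly."
# The pattern labels and responses below are hypothetical placeholders.
from typing import List

# Hypothetical mapping from recognized patterns to cautious driving responses.
RESPONSES = {
    "ball_in_road": "slow_down",                # a child may be chasing it
    "pedestrian_near_curb": "yield",
    "brake_lights_ahead": "increase_following_distance",
}

def react(detected_patterns: List[str]) -> List[str]:
    """Return the driving responses triggered by recognized patterns."""
    return [RESPONSES[p] for p in detected_patterns if p in RESPONSES]

# Example: the vision model reports a ball rolling into the street.
print(react(["ball_in_road"]))  # ['slow_down']
```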

Nvidia loads this “driving dictionary,” as Shapiro calls it, onto powerful but compact computing hardware that can run in the car. After that, companies like Google and Tesla feed in additional data from different sources to keep refining the model over time.
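
Put together, the workflow Shapiro describes looks roughly like a deploy-then-keep-learning loop: train the model offline, load it onto the car's compact onboard computer, collect new driving data from the fleet, retrain, and push an update. The sketch below is an assumed outline of that loop, not Tesla's, Google's, or Nvidia's actual deployment code.

```python
# Assumed outline of the deploy-then-keep-learning loop: export the trained
# model ("driving dictionary") to the car, then periodically fold in new
# fleet data and ship an updated model over the air.
import torch

def export_to_car(model: torch.nn.Module, path: str = "driving_model.pt") -> None:
    """Serialize the trained model for the car's onboard computer."""
    torch.save(model.state_dict(), path)

def fleet_update_cycle(model, new_fleet_data, retrain_fn) -> None:
    """One iteration of the ongoing loop: learn from fleet data, redeploy."""
    retrain_fn(model, new_fleet_data)  # fold newly collected miles into the model
    export_to_car(model)               # ship the improved model over the air
```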

I'm surprised:

1. Tesla and Google aren't collaborating on this.

2. Google hasn't bought Nvidia for its GPU chip technology.
