
How to create an AI startup: Convince some humans to be your training set.


Stashed in: @reidhoffman, Microsoft, Awesome, AI, Self-driving Cars, Machine Learning, Software is eating the world., Artificial Intelligence, Training, Deep Learning


Jeff Leek describes the typical trajectory for an AI business:

  1. Get a large collection of humans to perform some repetitive but possibly complicated behavior (play thousands of games of Go, or answer requests from people on Facebook messenger, or label pictures and videos, or drive cars.)
  2. Record all of the actions the humans perform to create a training set.
  3. Feed these data into a statistical model with a huge number of parameters - made possible by having a huge training set collected from the humans in steps 1 and 2.
  4. Apply the algorithm to perform the repetitive task and cut the humans out of the process.
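
To make the pipeline concrete, here is a minimal sketch of steps 2 through 4 in Python with scikit-learn. The inputs, labels, and model choice are placeholders standing in for whatever repetitive task the humans performed, not any particular company's system.

```python
# A rough sketch of steps 2-4: the human-generated records become (input, action)
# pairs, a model is fit to them, and the fitted model then stands in for the humans.
# The data and model here are illustrative placeholders.
from sklearn.linear_model import LogisticRegression

# Step 2: each row is a situation a human saw; each label is the action they took.
human_inputs = [[0.2, 1.0], [0.9, 0.1], [0.4, 0.8], [0.7, 0.3]]
human_actions = [0, 1, 0, 1]

# Step 3: estimate the model's parameters from the human-generated training set.
model = LogisticRegression()
model.fit(human_inputs, human_actions)

# Step 4: new situations are now handled by the model instead of the humans.
new_inputs = [[0.3, 0.9], [0.8, 0.2]]
predicted_actions = model.predict(new_inputs)
```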

The question is: how do you get the humans to perform the task for you? One option is to collect data from humans who are using your product (think Facebook image tagging). The other, more recent approach is to farm the task out to a large number of contractors (think gig-economy jobs like driving for Uber, or responding to queries on Facebook).

The interesting thing about the latter case is that in the short term it produces a market for gigs for humans. But in the long term, by performing those tasks, the humans are putting themselves out of a job. This played out in a relatively public way just recently with a service called GoButler that used its employees to train a model and then replaced them with that model.

Jeff Leek says the latest trend in data science is artificial intelligence:

It has been all over the news for tackling a bunch of interesting questions, for example voice recognition, image recognition, playing Go, and driving cars.

Almost all of these applications are based (at some level) on variations of neural networks and deep learning. These models are used like any other statistical or machine learning model. They involve a prediction function that is based on a set of parameters. Using a training data set, you estimate the parameters. Then when you get a new set of data, you push it through the prediction function using those estimated parameters and make your predictions.
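
As a minimal illustration of that estimate-then-predict pattern (not tied to any of the applications above), here is a least-squares fit in NumPy: the parameters of a linear prediction function are estimated from a training set, then new data is pushed through the same function. The numbers are made up purely for illustration.

```python
import numpy as np

# Training data: inputs X and observed outputs y (generated by y = x1 + 2*x2).
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]])
y = np.array([5.0, 4.0, 11.0, 10.0])

# Estimate the parameters of the prediction function f(x) = w1*x1 + w2*x2 + b
# by ordinary least squares (an intercept column of ones is appended to X).
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)

# Push new data through the prediction function using the estimated parameters.
X_new = np.array([[2.0, 2.0]])
prediction = np.c_[X_new, np.ones(len(X_new))] @ w   # roughly 6.0
```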

So why does deep learning do so well on problems like voice recognition, image recognition, and other complicated tasks? The main reason is that these models involve hundreds of thousands or millions of parameters, which allow the model to capture even very subtle structure in large-scale data sets. This type of model can be fit now because (a) we have huge training sets (think all the pictures on Facebook or all voice recordings of people using Siri) and (b) we have fast computers that allow us to estimate the parameters.
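
To give a sense of how quickly those parameter counts grow, here is a small PyTorch sketch: even a modest fully-connected network over medium-sized images already has tens of millions of parameters. The layer sizes are arbitrary and chosen only for illustration.

```python
# Count the parameters of a small fully-connected network over 224x224 RGB images.
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),                    # 224 * 224 * 3 = 150,528 input values per image
    nn.Linear(224 * 224 * 3, 256),   # ~38.5 million weights in this layer alone
    nn.ReLU(),
    nn.Linear(256, 10),              # 10 output classes
)

n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} parameters")    # roughly 38.5 million
```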

Almost all of the high-profile examples of “artificial intelligence” we are hearing about involve this type of process. This means that the machine is “learning” from examples of how humans behave. The algorithm itself is a way to estimate subtle structure from collections of human behavior.

Here's the Microsoft Azure AI cheat sheet:

[Image: humans, training set, machine learning]

In addition:

First off, let’s talk about training data. There’s a reason that those big players I mentioned above open-sourced their algorithms without worrying too much about giving away any secrets: it’s because the actual secret sauce isn’t the algorithm, it’s the data. Just think about Google. They can release TensorFlow without a worry that someone else will come along and create a better search engine because there are over a trillion searches on Google each year.

Those searches are training data and that training data comes from people; no algorithm can learn without data. After all, it’s not that machine learning models are smarter than people, it’s that they can parse and learn from near unfathomable amounts of data. But those models can’t figure out what to do with new data or how to make judgments on it without training data, created by humans, to actually inform their learning process.

In other words, machines learn from the data humans create. Whether it’s you tagging your friends in images on Facebook, filling out a CAPTCHA online, or keying in a check amount at the ATM, it all ends up in a dataset that a machine learning algorithm will be trained on. Machine learning simply can’t exist without this data.

Source: http://insidebigdata.com/2016/01/11/human-in-the-loop-is-the-future-of-machine-learning/

OK, maybe the singularity IS near: @DeepMindAI Neural Net helps 2nd Neural Net learn faster

HT: @jeremyphoward

#2MA

http://arxiv.org/pdf/1606.04474v1.pdf

Source: https://twitter.com/erikbryn/status/743524144449466372
