Computers Are Getting a Dose of Common Sense
MetaMind’s approach combines two forms of memory with an advanced neural network fed large quantities of annotated text. The first is a kind of database of concepts and facts; the second is a short-term, or “episodic,” memory. When asked a question, the system, which the company calls a dynamic memory network, searches for relevant patterns in the text it has learned from; after finding associations, it uses its episodic memory to return to the question and look for further, more abstract patterns. This enables it to answer questions that require connecting several pieces of information.
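The multi-pass idea can be illustrated with a toy sketch. This is a loose simplification, not MetaMind’s implementation: a real dynamic memory network uses learned neural attention, whereas here “attention” is just word overlap, and the facts and question are invented examples.

```python
# Toy sketch of multi-pass ("episodic") attention over stored facts.
# Each pass picks the fact most relevant to the current focus, then
# widens the focus with that fact's words before the next pass.

def tokens(sentence):
    """Lowercased word set for a sentence, punctuation stripped."""
    return set(sentence.lower().rstrip(".?").split())

def gather_facts(facts, question, passes=2):
    """Return the facts selected over several episodic passes."""
    focus = tokens(question)
    gathered = []
    for _ in range(passes):
        # Attend to the not-yet-gathered fact that overlaps the focus most.
        best = max((f for f in facts if f not in gathered),
                   key=lambda f: len(tokens(f) & focus))
        gathered.append(best)
        focus |= tokens(best)  # episodic update: refine the query
    return gathered

facts = ["Mary went to the hallway",
         "John picked up the football",
         "John went to the garden"]
hops = gather_facts(facts, "Where is the football?")
```

The first pass finds the fact mentioning the football; the second pass, now focused on “John,” finds where John went — the two-hop connection the question requires.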
Because Socher’s system was trained using data sets that covered sentiment and structure, the system can answer questions about the sentiment, or emotional tone, of text, as well as basic questions about its structure. “The cool thing is that it learns that from example,” Socher says. “It wires the episodic memory for itself.”
MetaMind tested its system using a data set released by Facebook for measuring machine performance at question-and-answer tasks. The startup’s software outperformed Facebook’s own algorithms by a narrow margin.
Making computers better at understanding everyday language could have significant implications for companies such as Facebook. It could provide a much easier way for users to find or filter information, allowing them to enter requests written as normal sentences. It could also enable Facebook to glean meaning from the information its users post on their pages and those of their friends. This could offer a powerful way to recommend information, or to place ads alongside content more thoughtfully.
The work is a sign of ongoing progress toward giving machines better language skills. Much of this work now revolves around an approach known as deep learning, which involves feeding vast amounts of data into a system that performs a series of calculations to help identify abstract features in, say, an image or an audio file.
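That “series of calculations” reduces, at its core, to stacked layers, each applying a weighted sum followed by a nonlinearity. The sketch below uses fixed toy weights purely for illustration; in a real deep learning system the weights are learned from the training data.

```python
import math

def layer(inputs, weights):
    # One layer: a weighted sum per unit, passed through a tanh nonlinearity.
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

x = [0.5, -1.0]                          # raw input values (toy example)
h = layer(x, [[1.0, 0.5], [-0.5, 1.0]])  # first layer: low-level features
y = layer(h, [[1.0, -1.0]])              # second layer: a more abstract feature
```

Stacking more such layers is what lets the network represent progressively more abstract features of an image or audio file.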
“One thing that’s promising here is that the architecture clearly separates modules for ‘episodic’ and ‘semantic’ memory,” says Noah Smith, an associate professor at Carnegie Mellon University who studies natural language processing. “That’s been a shortcoming of many neural-network-based architectures, and in my view it’s great to see a step in the direction of models that can be inspected to allow engineering of further improvements.”