Google's DeepMind AI can now learn from its own memory independently.
DeepMind, the artificial intelligence (AI) lab owned by Google's parent company, Alphabet, has announced that its latest system can intelligently build on what's already inside its memory.
Their new hybrid system – called a Differentiable Neural Computer (DNC) – pairs a neural network with the vast data storage of conventional computers, and the AI is smart enough to navigate and learn from this external data bank.
What the DNC is doing is effectively combining external memory (like the external hard drive where all your photos get stored) with the neural network approach of AI, where a massive number of interconnected nodes work dynamically to simulate a brain.
"These models... can learn from examples like neural networks, but they can also store complex data like computers," write DeepMind researchers Alexander Graves and Greg Wayne in a blog post.
At the heart of the DNC is a controller that constantly optimises its responses, comparing its output with the desired, correct answers. Over time it becomes more and more accurate, while also figuring out how to use its memory data banks.
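To make the idea concrete, here is a minimal sketch – not DeepMind's actual code – of the mechanism that lets a controller "use" external memory at all: reads are a soft, similarity-weighted blend over memory rows rather than a hard lookup, so the error signal from comparing outputs with correct answers can be back-propagated to teach the controller where to read. The memory contents and query below are invented for illustration.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def content_read(memory, key, sharpness=5.0):
    """Read a blend of memory rows weighted by their similarity to `key`."""
    # Cosine similarity between the query key and each memory row.
    sims = memory @ key / (np.linalg.norm(memory, axis=1)
                           * np.linalg.norm(key) + 1e-8)
    # Soft addressing: every row contributes, so the operation stays
    # differentiable and trainable by gradient descent.
    weights = softmax(sharpness * sims)
    return weights @ memory

memory = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
key = np.array([0.9, 0.1, 0.0])     # query most resembling row 0
read = content_read(memory, key)
```

Because the addressing is a smooth function of the key, training nudges the controller's keys toward whichever memory rows reduce its error, which is how it "figures out" memory use rather than being programmed with it.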
Take a family tree: after being told about certain relationships, the DNC was able to figure out other family connections on its own – writing, rewriting, and optimising its memory along the way to pull out the correct information at the right time.
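The kind of deduction involved can be shown in a few lines. This sketch is illustrative only – the DNC learns such inferences from examples, whereas the rule and the names below are hard-coded and hypothetical:

```python
# Stored facts, analogous to entries written into the DNC's memory.
parent = {
    "Alice": ["Bob"],            # Alice is a parent of Bob
    "Bob": ["Carol", "Dan"],     # Bob is a parent of Carol and Dan
}

def grandchildren(person):
    """Derive a relationship that was never stored explicitly."""
    return [gc for child in parent.get(person, [])
               for gc in parent.get(child, [])]

print(grandchildren("Alice"))    # -> ['Carol', 'Dan']
```

The point of the DNC result is that it reaches answers like this without anyone writing the grandparent rule – it discovers how to chain stored relationships on its own.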
Another example the researchers give is a public transit system, like the London Underground. Once it's learned the basics, the DNC can figure out more complex relationships and routes without any extra help, relying on what it's already got in its memory banks.
In other words, it's functioning like a human brain, taking data from memory (like tube station positions) and figuring out new information (like how many stops to stay on for).
Of course, any smartphone mapping app can tell you the quickest way from one tube station to another, but the difference is that the DNC isn't pulling this information out of a pre-programmed timetable – it's working out the information on its own, and juggling a lot of data in its memory all at once.
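Working a route out from stored connections alone looks roughly like this. The sketch below uses breadth-first search over a toy network (the connections are simplified, not the real Underground map) and is not the DNC's learned algorithm – it just shows what "computing the answer from memory" rather than "looking up a timetable" means:

```python
from collections import deque

# Toy network: each station maps to its directly connected neighbours.
edges = {
    "Oxford Circus": ["Bond Street", "Piccadilly Circus"],
    "Bond Street": ["Oxford Circus", "Baker Street"],
    "Piccadilly Circus": ["Oxford Circus", "Green Park"],
    "Baker Street": ["Bond Street"],
    "Green Park": ["Piccadilly Circus", "Victoria"],
    "Victoria": ["Green Park"],
}

def fewest_stops(start, goal):
    """Return the minimum number of hops from start to goal, or None."""
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        station, stops = queue.popleft()
        if station == goal:
            return stops
        for nxt in edges.get(station, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, stops + 1))
    return None

print(fewest_stops("Oxford Circus", "Victoria"))   # -> 3
```

Nothing here was pre-computed: the answer emerges from traversing the stored connections, which is analogous to the DNC reading station facts out of its memory and combining them.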
The approach means a DNC system could take what it learned about the London Underground and apply parts of its knowledge to another transport network, like the New York subway.
The system points to a future where artificial intelligence could answer questions on new topics, by deducing responses from prior experiences, without needing to have learned every possible answer beforehand.