Everything You Know About Artificial Intelligence is Wrong
Geege Schuman stashed this in AI
Good article. Artificial intelligence does not necessarily imply consciousness.
Reality: A common assumption about machine intelligence is that it’ll be conscious—that is, it’ll actually think the way humans do. What’s more, critics like Microsoft co-founder Paul Allen believe that we’ve yet to achieve artificial general intelligence (AGI), i.e., an intelligence capable of performing any intellectual task that a human can, because we lack a scientific theory of consciousness. But as Imperial College London cognitive roboticist Murray Shanahan points out, we should avoid conflating these two concepts.
“Consciousness is certainly a fascinating and important subject—but I don’t believe consciousness is necessary for human-level artificial intelligence,” he told Gizmodo. “Or, to be more precise, we use the word consciousness to indicate several psychological and cognitive attributes, and these come bundled together in humans.”
It’s possible to imagine a very intelligent machine that lacks one or more of these attributes. Eventually, we may build an AI that’s extremely smart, but incapable of experiencing the world in a self-aware, subjective, and conscious way. Shanahan said it may be possible to couple intelligence and consciousness in a machine, but that we shouldn’t lose sight of the fact that they’re two separate concepts.
And just because a machine passes the Turing Test—in which a computer’s conversational responses are indistinguishable from a human’s—that doesn’t mean it’s conscious. To us, an advanced AI may give the impression of consciousness, but it may be no more aware of itself than a rock or a calculator.
Well I'm not sure we understand consciousness in humans that awesomely either. For example, researchers have found that people who suffer injuries to their brains' emotional centers can also completely lose the ability to make decisions -- even 100% "rational-seeming" decisions -- because without emotions you don't have preferences, and without preferences you literally cannot give a damn. So, Buddhist doctrine to the contrary, desires might be the thing driving our species' problem-solving ability, aka intelligence.
For a lot of technologists, the key difference between AlphaGo and previous game-playing programs -- the thing that makes this advance both scarier and more poignant -- is that DeepMind couldn't just brute-force Go by grinding through the near-endless combinations to learn what works; there are far too many legal positions to enumerate. They had to make AlphaGo want to play Go AND WIN. It wants to win. It will tirelessly work and learn (by playing itself) to win.
Does it know that it wants to win? Not sure. But if you took out that part of the code, would it win? No Go program without that motivation ever reached this level. How is that different from taking out the human brain's desire feedback loops that we call consciousness?
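To make that concrete, here is a minimal sketch (in Python) of what "wanting to win" amounts to in code: an agent whose only motivation is a terminal reward of +1 for a win and -1 for a loss, learned purely by playing against itself. This is not AlphaGo's actual method -- AlphaGo pairs deep neural networks with Monte Carlo tree search -- it's just tabular self-play learning on the toy game of Nim (take 1-3 stones; whoever takes the last stone wins), and the names here (PILE, Q, choose) are purely illustrative. But the point survives the simplification: zero out the win reward and nothing ever gets learned.

# Minimal sketch, not DeepMind's system: tabular self-play learning on Nim.
# The agent's only "desire" is the terminal reward: +1 for winning, -1 for losing.
import random
from collections import defaultdict

PILE, ALPHA, EPSILON, GAMES = 15, 0.3, 0.1, 50_000
Q = defaultdict(float)  # Q[(stones_left, move)] -> learned value of that move

def choose(stones, explore=True):
    """Pick a move: mostly the best-known one, occasionally a random one."""
    moves = [m for m in (1, 2, 3) if m <= stones]
    if explore and random.random() < EPSILON:
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(stones, m)])

for _ in range(GAMES):
    stones, history = PILE, []           # history holds (state, move) per turn
    while stones > 0:
        move = choose(stones)
        history.append((stones, move))
        stones -= move
    # Whoever moved last took the final stone and won. Walk the game backwards,
    # nudging the winner's moves toward +1 and the loser's toward -1.
    reward = 1.0
    for state, move in reversed(history):
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
        reward = -reward                 # alternate turns, alternate outcomes

# Greedy policy after training: it should leave the opponent a multiple of 4
# stones, which is the known optimal strategy for this variant of Nim.
print({s: choose(s, explore=False) for s in range(1, 8)})

Run it and the greedy policy converges on leaving a multiple of four stones, the optimal Nim strategy; set the terminal reward to zero and the same loop just shuffles moves forever without ever getting better. That's the sense in which the "motivation" is doing the work.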
I see your point. The difference between the feedback loops seems to be small. Hmmm.