Elon Musk Worried About Artificial Intelligence
Unintended consequences
Artificial intelligences could be created with the best of intentions—to conduct scientific research aimed at curing cancer, for example. But when AIs become superhumanly intelligent, their single-minded realization of those goals could have apocalyptic consequences.
“The basic problem is that the strong realization of most motivations is incompatible with human existence,” Daniel Dewey, a research fellow at the Future of Humanity Institute, said in an extensive interview with Aeon magazine. “An AI might want to do certain things with matter in order to achieve a goal, things like building giant computers, or other large-scale engineering projects. Those things might involve intermediary steps, like tearing apart the Earth to make huge solar panels. A superintelligence might not take our interests into consideration in those situations, just like we don’t take root systems or ant colonies into account when we go to construct a building.”
Put another way by AI theorist Eliezer Yudkowsky of the Machine Intelligence Research Institute: “The AI does not love you, nor does it hate you, but you are made of atoms it can use for something else.”
According to theorists, once an AI is able to make itself smarter, it will quickly surpass human intelligence.
What would happen next? The consequences of such a radical development are inherently difficult to predict. But that hasn’t stopped philosophers, futurists, scientists and fiction writers from thinking very hard about some of the possible outcomes. The results of their thought experiments sound like science fiction—and maybe that’s exactly what Elon Musk is afraid of.
Eh, artificial intelligence is no match for natural stupidity.
The dire scenarios listed above are only the consequences of a benevolent AI, or at worst one that’s indifferent to the needs and desires of humanity. But what if there were a malicious artificial intelligence that not only wished to do us harm, but that retroactively punished every person who refused to help create it in the first place?
This theory is a mind-boggler, most recently explained in great detail by Slate, but it goes something like this: An omniscient evil AI that is created at some future date has the ability to simulate the universe itself, along with everyone who has ever lived. And if you don’t help the AI come into being, it will torture the simulated version of you—and, P.S., we might be living in that simulation already.
This thought experiment was deemed so dangerous by Eliezer “The AI does not love you” Yudkowsky that he deleted all mentions of it from LessWrong, the website he founded where people discuss these sorts of conundrums. His reaction, as highlighted by Slate, is worth quoting in full:
Listen to me very closely, you idiot.

YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.

You have to be really clever to come up with a genuinely dangerous thought.
Yes, very dangerous: thoughts are things!
...and genuinely dangerous thoughts manifest from really clever intentions.
It's the unintended consequences that really get us.
The main problem with AI is that more and more people will trust machines to make decisions. Thus an error in a machine can cause a systemic crisis; 2008 is an example.
Or the machines will reject you. "Her" is an example.
I'd rather have machines make bad decisions than reject us.
I'd rather have people use critical thinking to override the machine.