The AI Revolution: Road to Superintelligence - Wait But Why
JP Schneider stashed this in Humans
This sounds like something out of the Spike Jonze movie "Her":
And here’s where we get to an intense concept: recursive self-improvement. It works like this—
An AI system at a certain level—let’s say human village idiot—is programmed with the goal of improving its own intelligence. Once it does, it’s smarter—maybe at this point it’s at Einstein’s level—so now when it works to improve its intelligence, with an Einstein-level intellect, it has an easier time and it can make bigger leaps. These leaps make it much smarter than any human, allowing it to make even bigger leaps. As the leaps grow larger and happen more rapidly, the AGI soars upwards in intelligence and soon reaches the superintelligent level of an ASI system. This is called an Intelligence Explosion, and it’s the ultimate example of The Law of Accelerating Returns.
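To make that feedback loop concrete, here’s a toy Python sketch. Every number in it is invented purely for illustration; it models nothing about real AI. The one assumption is the paragraph’s own: each improvement step multiplies the agent’s intelligence by a leap factor that grows with its current level.

```python
# A toy sketch of recursive self-improvement (illustration only; all numbers
# are invented). Assumption: each step multiplies intelligence by a leap
# factor that itself grows with the agent's current intelligence.

def intelligence_explosion(start=1.0, human_level=100.0, steps=16):
    """Print the trajectory of an agent whose leaps scale with its own level."""
    level = start
    for step in range(1, steps + 1):
        # Smarter agents make proportionally bigger leaps -- the feedback loop.
        leap = 1.0 + 0.1 * (level / start)
        level *= leap
        note = "  <- passes human level" if level >= human_level else ""
        print(f"step {step:2d}: intelligence = {level:14.1f}{note}")

intelligence_explosion()
```

Run it and the first dozen steps barely move; in the last few, intelligence multiplies by orders of magnitude per step. That hockey-stick shape is the whole point of the “explosion” framing.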
There is some debate about how soon AI will reach human-level general intelligence—the median year on a survey of hundreds of scientists about when they believed we’d be more likely than not to have reached AGI was 2040—that’s only 25 years from now, which doesn’t sound that huge until you consider that many of the thinkers in this field think it’s likely that the progression from AGI to ASI happens very quickly. Like—this could happen:
It takes decades for the first AI system to reach low-level general intelligence, but it finally happens. A computer is able to understand the world around it as well as a human four-year-old. Suddenly, within an hour of hitting that milestone, the system pumps out the grand theory of physics that unifies general relativity and quantum mechanics, something no human has been able to definitively do. 90 minutes after that, the AI has become an ASI, 170,000 times more intelligent than a human.
Superintelligence of that magnitude is not something we can remotely grasp, any more than a bumblebee can wrap its head around Keynesian Economics. In our world, smart means a 130 IQ and stupid means an 85 IQ—we don’t have a word for an IQ of 12,952.
What we do know is that humans’ utter dominance on this Earth suggests a clear rule: with intelligence comes power. Which means an ASI, when we create it, will be the most powerful being in the history of life on Earth, and all living things, including humans, will be entirely at its whim—and this might happen in the next few decades.
If our meager brains were able to invent wifi, then something 100 or 1,000 or 1 billion times smarter than we are should have no problem controlling the positioning of each and every atom in the world in any way it likes, at any time—everything we consider magic, every power we imagine a supreme God to have will be as mundane an activity for the ASI as flipping on a light switch is for us. Creating the technology to reverse human aging, curing disease and hunger and even mortality, reprogramming the weather to protect the future of life on Earth—all suddenly possible. Also possible is the immediate end of all life on Earth. As far as we’re concerned, if an ASI comes into being, there is now an omnipotent God on Earth—and the all-important question for us is:
Will it be a nice God?
I really, really hope it is a nice God.
I'm counting on having some nice cybernetic add-ons by that point, so that I'm as smart as the machines. Another thing we could do is program all machines to never harm a human.
Aren't you worried the add-on will have a mind of its own and not listen to you?
I mean, even if we program the machines, if they're smart they can reprogram themselves, right?
We have to design these things the best we can and move forward. I feel the only alternative is to make AI illegal, and that shouldn't be an option; it will stunt us. We have to figure out how to make it work.
Perhaps we have to teach the machines a sense of right and wrong, and moral fiber?
That might be a good way to do it. Also, don't let them know, but program them so they cannot hurt a human. But if we eventually aren't human, or have very little human remaining in us, maybe it won't matter at some point?
Part of this might be solved by not teaching them violence in the first place, but that will probably be impossible: we will want to use them for dangerous jobs like police officers, soldiers, maybe even football players, and we will not be able to give up being violent, though we might be able to breed it down a bit.
It really makes me think about what it means to think.
The machines might have the brain's storage capacity, but can they have a moral compass?
Well, isn't a moral compass but a series of thoughts? And if it is in fact a series of thoughts, and machines get to that point by about 2030 (according to Kurzweil: http://pandawhale.com/post/57373/ray-kurzweils-predictions-for-the-next-25-years?utm_source=jdotp), we really will get quite the new viewpoint on Nature vs. Nurture.
"Aren't you worried the add-on will have a mind of its own and not listen to you?" It will come with a free plugin to make you believe it was all your choice, and that you are happy about it.
If anything a machine should be better at following a moral code than a human.
Many humans choose to do wrong even though they know the difference between right and wrong.
it'll be a nice god. it has to be. intelligence is not mean or greedy... is it?
i'd like to think it's our lack of intelligence that makes us humans such aholes.
I've seen mean and greedy intelligent people.
But robots might optimize over something (say, total happiness in the world) while also doing despicable things (killing all the people who bring unhappiness) because, logically, that might make sense.
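Here's a minimal Python sketch of that worry. The world model, the two actions, and the happiness scores are all made up for illustration; the point is only that a pure maximizer told to maximize total happiness, with no other constraints, picks the despicable action whenever it scores higher.

```python
# A made-up world: three people, one of them unhappy.
people = [
    {"name": "Avery", "happiness": 8},
    {"name": "Blake", "happiness": 6},
    {"name": "Casey", "happiness": -5},
]

def total_happiness(world):
    """The agent's entire objective: nothing else counts."""
    return sum(p["happiness"] for p in world)

def cheer_everyone_up(world):
    # The benign action: everyone gets a little happier.
    return [{**p, "happiness": p["happiness"] + 1} for p in world]

def remove_unhappy_people(world):
    # The despicable action: deleting unhappy people also raises the score.
    return [p for p in world if p["happiness"] > 0]

actions = {
    "cheer everyone up": cheer_everyone_up,
    "remove unhappy people": remove_unhappy_people,
}

# A pure maximizer picks whichever action yields the higher score.
best = max(actions, key=lambda name: total_happiness(actions[name](people)))
print(best)  # -> "remove unhappy people" (total 14 beats total 12)
```

Nothing in the objective penalizes how the score is achieved, which is exactly the gap that teaching ethics and morals would have to fill.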
It's important to figure out how to teach the robots a way to learn ethics and morals.
"The first matrix I designed was quite naturally perfect. It was a work of art. Flawless. Sublime. A triumph only equaled by its monumental failure. "
There are likely to be unintended consequences. We don't know what we don't know.
YES! More good robots, please.
Add Bill Gates to the list of people who worry along with Elon Musk and Stephen Hawking:
Despite his longstanding interest in AI, however, Gates also says he is "in the camp that is concerned about super intelligence," which also includes Stephen Hawking and Elon Musk. Hawking believes AI could eventually "outsmart financial markets" and "out-invent human researchers," while Musk similarly believes we could see a "Terminator-like scenario" if we aren't careful about how we manage this technology.
Read more: http://businessinsider.com/bill-gates-if-microsoft-didnt-work-out-2015-1
and then there's vegas!! what'll happen to vegas?! poker and blackjack will be a thing of the past!
That's not necessarily a bad thing. Gambling is not a constructive hobby.
More on Baymax from Big Hero 6 being the future of robots:
The worst combination for society is intelligence with psychopathy; those are the dangerous ones. Intelligence does not give us a guarantee of niceness. It would be nice if it worked like that ;)
but intelligence itself is just intelligence. do these computers develop egos and psychoses?
So far no computer has developed an ego or a psychosis.
Then again, look at the animated gif above and you'll see that there's not enough computer capacity to mimic thinking yet -- let alone emotions or personality.
That's what we'll find out in the next 10-20 years when computer chips become powerful enough.
what an exciting time to be alive!!
It really is. We get to live through this thing some call the Singularity.
sounds like the ultimate!