
Are the robots about to rise? Google's new director of engineering thinks so…




Just gotta hang in there 15 more years:

It's hard to know where to start with Ray Kurzweil. With the fact that he takes 150 pills a day and is intravenously injected on a weekly basis with a dizzying list of vitamins, dietary supplements, and substances that sound about as scientifically effective as face cream: coenzyme Q10, phosphatidylcholine, glutathione?

With the fact that he believes that he has a good chance of living for ever? He just has to stay alive "long enough" to be around for when the great life-extending technologies kick in (he's 66 and he believes that "some of the baby-boomers will make it through"). Or with the fact that he's predicted that in 15 years' time, computers are going to trump people. That they will be smarter than we are. Not just better at doing sums than us and knowing what the best route is to Basildon. They already do that. But that they will be able to understand what we say, learn from experience, crack jokes, tell stories, flirt. Ray Kurzweil believes that, by 2029, computers will be able to do all the things that humans do. Only better.

This is so BSG and Caprica to me, which maybe is what those shows were based on, and it is getting close to coming to pass. Don't let them get smarter than us! Or don't let them feel, or we'll be doomed! lol

It could be for the better. It doesn't have to be like the Terminator series.

If we could be one with them, enmeshed, maybe it would be OK? Otherwise, at some point they have the potential to see themselves as slaves, and you get into all the ethical questions: is it OK to treat a robot like a slave? Do they have no rights because they are machines?

Or... 

Maybe they DO have rights, and we learn to treat them like we treat other sentient beings: with respect.

He's got the brains, Google has the money:

Ray Kurzweil, who believes that we can live for ever and that computers will gain what looks a lot like consciousness in a little over a decade, is now Google's director of engineering. The announcement of this, last year, was extraordinary enough. To people who work with tech or who are interested in tech and who are familiar with the idea that Kurzweil has popularised of "the singularity" – the moment in the future when men and machines will supposedly converge – and know him as either a brilliant maverick and visionary futurist, or a narcissistic crackpot obsessed with longevity, this was headline news in itself.

But it's what came next that puts this into context. It's since been revealed that Google has gone on an unprecedented shopping spree and is in the throes of assembling what looks like the greatest artificial intelligence laboratory on Earth; a laboratory designed to feast upon a resource of a kind that the world has never seen before: truly massive data. Our data. From the minutiae of our lives.

Google has bought almost every machine-learning and robotics company it can find, or at least rates. It made headlines two months ago, when it bought Boston Dynamics, the firm that produces spectacular, terrifyingly life-like military robots, for an "undisclosed" but undoubtedly massive sum. It spent $3.2bn (£1.9bn) on smart thermostat maker Nest Labs. And this month, it bought the secretive and cutting-edge British artificial intelligence startup DeepMind for £242m.

And those are just the big deals. It also bought Bot & Dolly, Meka Robotics, Holomni, Redwood Robotics and Schaft, and another AI startup, DNNresearch. It hired Geoff Hinton, a British computer scientist who's probably the world's leading expert on neural networks. And it has embarked upon what one DeepMind investor told the technology publication Re/code two weeks ago was "a Manhattan project of AI". If artificial intelligence was really possible, and if anybody could do it, he said, "this will be the team". The future, in ways we can't even begin to imagine, will be Google's.

Wow, wow, wow, this is big! That's what they have been up to over there ;)

One of many things they've been up to. They're also very much into self-driving cars, for example.

Whoever gets the head start in advanced artificial intelligence is going to have a jump on the whole game; it seems like Google is going for a big push forward.

Yes. No one else is even close, unless you count universities like MIT and Stanford.

And MIT and Stanford do not have the budget that Google could commit to such endeavors.

By 2045 computers will be a billion times more powerful than human brains:

His critics point out that not all his predictions have exactly panned out (no US company has reached a market capitalisation of more than $1 trillion; "bioengineered treatments" have yet to cure cancer). But in any case, the predictions aren't the meat of his work, just a byproduct. They're based on his belief that technology progresses exponentially (as is also the case in Moore's law, which sees computers' performance doubling every two years). But then you just have to dig out an old mobile phone to understand that. The problem, he says, is that humans don't think about the future that way. "Our intuition is linear."
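Kurzweil's point about linear intuition can be made concrete with a few lines of arithmetic. A minimal sketch, assuming the two-year doubling period the article cites:

```python
# Compare exponential growth against "linear intuition" for capability
# that doubles every two years (Moore's-law-style, per the article).
years = 30
doubling_period = 2

exponential_gain = 2 ** (years / doubling_period)  # doublings compound
linear_gain = 1 + years / doubling_period          # one "step" per period

print(f"After {years} years, exponential: {exponential_gain:.0f}x, "
      f"linear intuition: {linear_gain:.0f}x")
```

Fifteen doublings over 30 years give a 32,768-fold gain, while the linear mindset predicts roughly 16-fold; that gap is exactly what you feel when you dig out an old mobile phone.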

When Kurzweil first started talking about the "singularity", a conceit he borrowed from the science-fiction writer Vernor Vinge, he was dismissed as a fantasist. He has been saying for years that he believes that the Turing test – the moment at which a computer will exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human – will be passed in 2029. The difference is that when he began saying it, the fax machine hadn't been invented. But now, well… it's another story.

"My book The Age of Spiritual Machines came out in 1999 and we had a conference of AI experts at Stanford, and we took a poll by hand about when you think the Turing test would be passed. The consensus was hundreds of years. And a pretty good contingent thought that it would never be done.

"And today, I'm pretty much at the median of what AI experts think and the public is kind of with them. Because the public has seen things like Siri [the iPhone's voice-recognition technology] where you talk to a computer, they've seen the Google self-driving cars. My views are not radical any more. I've actually stayed consistent. It's the rest of the world that's changing its view."

And yet, we still haven't quite managed to get to grips with what that means. The Spike Jonze film, Her, which is set in the near future and has Joaquin Phoenix falling in love with a computer operating system, is not so much fantasy, according to Kurzweil, as a slightly underambitious rendering of the brave new world we are about to enter. "A lot of the dramatic tension is provided by the fact that Theodore's love interest does not have a body," Kurzweil writes in a recent review of it. "But this is an unrealistic notion. It would be technically trivial in the future to provide her with a virtual visual presence to match her virtual auditory presence."

But then he predicts that by 2045 computers will be a billion times more powerful than all of the human brains on Earth. And the characters' creation of an avatar of a dead person based on their writings, in Jonze's film, is an idea that he's been banging on about for years. He's gathered all of his father's writings and ephemera in an archive and believes it will be possible to retro-engineer him at some point in the future.

This was spurred by watching Morgan Spurlock's Inside Man episode "Future"; very interesting.

Morgan Spurlock – Inside Man - CNN.com Blogs

I've never seen that show. Any good?

This was the first show I watched, very good, would definitely recommend this episode :)

I went looking for it on YouTube but couldn't find it.

Found this instead when I searched for Morgan Spurlock Inside Man:

Most of my professional life has been in the AI/machine learning realm of natural language processing, which is definitely converging with cognitive computing as the field matures. This excerpt from the article is apt:

> Language, he believes, is the key to everything. "And my project is ultimately to base search on really understanding what the language means. When you write an article you're not creating an interesting collection of words. You have something to say and Google is devoted to intelligently organising and processing the world's information. The message in your article is information, and the computers are not picking up on that. So we would like to actually have the computers read. We want them to read everything on the web and every page of every book, then be able to engage an intelligent dialogue with the user to be able to answer their questions."

Natural language is *the* way humans communicate, and our utterances pre-date any other kind of communication you can imagine except perhaps body language. It's so challenging to have a machine do it well because of all the context that's required to *really* *understand* even a small snippet of text and do something actionable with it. As amazing as search engine tech like Google's is, it's still mostly a document-oriented system where we use our human brains to disambiguate the page of document-oriented results, skim the documents, and find the snippet of context we need.
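To make the "document-oriented" point concrete, here's a toy sketch of keyword-overlap retrieval; the query and documents are made up for illustration, and real search engines layer vastly more sophisticated ranking signals on top of anything like this:

```python
# A minimal sketch of document-oriented retrieval: rank documents by raw
# keyword overlap with the query. No "understanding" is involved; the
# human still has to skim the top results for the actual answer.
def rank_documents(query, documents):
    query_terms = set(query.lower().split())
    scored = []
    for doc in documents:
        doc_terms = set(doc.lower().split())
        overlap = len(query_terms & doc_terms)  # shared terms only
        scored.append((overlap, doc))
    # Highest-overlap documents first.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored]

docs = [
    "kurzweil predicts the turing test will be passed in 2029",
    "google bought deepmind and boston dynamics",
    "recipes for banana bread",
]
print(rank_documents("when will the turing test be passed", docs))
```

The system surfaces the right document, but it has no idea what the answer *is*; extracting "2029" and handing it back directly is the qualitative leap general AI promises.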

This is why Google and any other company that is paying attention is so interested in a general artificial intelligence: imagine a typical experience where you ask a question and just get the answer you were expecting (even 60% of the time). That's essentially one plausible explanation of what it means to pass the Turing test in the context we're talking about... and the accuracy doesn't even need to be perfect. After all, you can ask another human being a question and they won't give you perfectly correct information as often as you'd think; the response may be incorrect, involve biases, and so on. The answer that the machine gives you just has to be "good enough" to be at parity with human judgment to make that next order-of-magnitude leap in "productivity" gain by not having to read the documents and skim for the answers. It's sort of like a Watson or a Wolfram Alpha that can operate on any domain.

Now, with all that said, skim this article, "Why Chinese Is So Damn Hard", written from the standpoint of humans learning the language, to temper the conversation a bit on automated understanding of natural language: http://pinyin.info/readings/texts/moser.html

Lengthy article, but well worth the read ;) I now understand why the Chinese language is so hard to learn.
