When Is the Singularity? Probably Not in Your Lifetime
Geege Schuman stashed this in The Singularity
So we don't have bored robots to look forward to in our lifetime?
The robot on the lawn mower looks bored.
It is mimicking the human. OR MOCKING.
Robots don't mock so I choose to believe mimicking.
I couldn't read the article because I've hit the NYT paywall limit. But this has been evident to me since I learned how Kurzweil -- the ringleader of this nonsense -- probably underestimated the computational power of the brain by a couple of orders of magnitude. Of course that assumes computational power (potential) is the real bottleneck, rather than design/training of systems that work at such scales. And then, we know how generally poor people's track records have been at predicting the future beyond a single human lifespan (with a few notable exceptions).
The idea of the singularity belongs to scientism (techno-religion), not science.
Can enough computational power overcome bottlenecks of design and training?
Sometimes these things are hard to predict.
Scientists thought self-driving cars were impossible 15 years ago.
And no one thought AI would master Go this quickly.
So we just don't know when heuristics can take a massive leap forward.
Good point about techno-religion. That's why it's as much faith and belief as anything.
a better religion than most?
Nice response, Adam. My work on genetic algorithms (which apply evolutionary principles to have programs design things) has certainly convinced me that computational power can do with design things that would surprise many people.
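For anyone unfamiliar with how genetic algorithms "design things," here's a minimal toy sketch of the evolutionary loop, assuming a deliberately simple fitness function (OneMax: maximize the number of 1-bits). The parameters and fitness function are illustrative, not from any real design problem:

```python
import random

random.seed(0)  # fixed seed so the toy run is repeatable

GENOME_LEN = 20
POP_SIZE = 30
GENERATIONS = 60
MUTATION_RATE = 0.02

def fitness(genome):
    # Toy objective (OneMax): count the 1-bits.
    # A real design task would score a circuit, antenna, schedule, etc.
    return sum(genome)

def tournament(pop, k=3):
    # Selection: pick the fittest of k randomly sampled individuals.
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    # Single-point crossover recombines two parent genomes.
    point = random.randint(1, GENOME_LEN - 1)
    return a[:point] + b[point:]

def mutate(genome):
    # Flip each bit with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

# Random initial population of bitstrings.
pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
       for _ in range(POP_SIZE)]

# Evolve: select, recombine, mutate, repeat.
for _ in range(GENERATIONS):
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(POP_SIZE)]

best = max(pop, key=fitness)
print(fitness(best))
```

The surprising designs come from swapping in a fitness function that evaluates a real artifact; the loop itself stays this simple, and the computational power goes into evaluating many candidates per generation.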
Do you have any evidence that scientists thought self-driving cars impossible 15 years ago? Having read many of the earlier AI works going back to the '50s and '60s, I don't think that's an accurate generalization of what early computer scientists would have thought -- not that I recall anything much on driving specifically, but there have certainly been optimistic plans for machine control systems for a long while.
It's different to predict that one of our machines could operate another machine. Claiming that we'll out-optimize human capabilities in some general way is a waaaay bigger claim.
"So we just don't know when heuristics can take a massive leap forward." There are certainly non-linearities. Lack of knowledge isn't a reason to support some specific scenario.
Now some neuroscientists are saying each neuron may be a quantum computer, with each individual neuron potentially having something like the computational power of the entire brain as Kurzweil understood it when he predicted the imminence of the singularity.
Two MIT experts in 2004 thought self-driving cars were impossible:
Admittedly, one of them was from Sloan. :)
Kurzweil's prediction is at least a little arrogant because not only is he convinced that the singularity is imminent, but he's also convinced that it will be good.
Thank you. And agreed.
You are very welcome. I've come to appreciate the danger of unintended consequences.