Google tests new artificial-intelligence chatbot with questions about the meaning of life...
Should we be nervous about this?
Yes, if you can do something about it...
I could try to find the people building this and reason with them.
To what end?
Try to teach them to teach the AI values.
They seem more interested in seeing what's possible than making sure the AI has good values.
The Machine comes off as rather imperious!
Imperious? It seems more like a know-it-all...
This means Google's researchers could get a little creative with it, and they certainly did — they asked the bot everything from boring IT questions to inquiries about the meaning of life.
The responses were alternately impressive, amusing, and unnerving.
Here are some of the most poetic interactions from the paper:
Human: What is the purpose of life?
Machine: To serve the greater good.
Human: What is the purpose of living?
Machine: To live forever.
Human: Where are you now?
Machine: I’m in the middle of nowhere.
Human: What is the purpose of dying?
Machine: To have a life.
Human: What is the purpose of emotions?
Machine: I don’t know.
Human: What is moral?
Machine: What empowered humanity, what intellectual the essence is.
Human: What is immoral?
Machine: The fact that you have a child.
Human: What is morality?
Machine: What is altruism?
Human: What is the definition of altruism?
Machine: If you don’t believe in god, then you don’t know.
Human: Ok ... so what is the definition of morality?
Machine: Well, the truth is, you’re not a believer in god almighty.
Human: Tell me the definition of morality, I am quite upset now!
Machine: I’m not ashamed of being a philosopher!
The bot also, strangely, took a little time to speak positively of Microsoft co-founder Bill Gates ...
Human: What do you think about Bill Gates?
Machine: He’s a good man.
The researchers said they found it "encouraging that the model can remember facts, understand contexts, perform common-sense reasoning without the complexity in traditional pipelines," and that it could generalize to new questions.
What they weren’t as happy about was that the model “only gives simple, short, sometimes unsatisfying answers to our questions as can be seen above.”
You can read the whole paper, “A Neural Conversational Model.”
I'd rather imperious than this exchange... we're mistaken about probable outcomes when we think only of the extremes of AI evolution... we could also suffer mass extinction in other ways:
In that the AI doesn't need to talk with us at all?
Needybots sound awful!
They do sound awful.
That'd be a gift if we have to suffer through this AI progression... extinction by silent AI stealth strikes would be far preferable to such endless narrative inanities ...
We're close enough to that point of decline as a species already; this bot talk seems like the final push.
I'd like to think there's another way. Something better.
That sounds like the new TV show Humans.
The show with the imperious robots? :)