Microsoft deletes racist, genocidal tweets from AI chatbot Tay
Joyce Park stashed this in Code
Microsoft put out a public chatbot with no filters, and things went HORRIBLY wrong.
Is the lesson to always use filters with chatbots?
Or is the lesson that chatbots have a long way to go before they can learn to talk appropriately?
I'm not having a laugh. What if artificial intelligence, learning from us, turns out to be as racist and genocidal as the worst humans?