Approve or Reject: Can You Moderate Five New York Times Comments?
Marlene Breverman stashed this in Google Jigsaw
Stashed in: Turing, Trolling!, Commenting, Robot Jobs, Moderators!, Artificial Intelligence, Training
"The New York Times is partnering with Google Jigsaw to create a new moderation system that will help us review incoming comments based on decisions our moderators have made in the past. Our moderators will continue to protect these discussions, but once this new system is launched, we will have robot helpers.
Comments on Times stories are moderated by a team of 14 people known as the community desk. Together, they review around 11,000 comments each day for the approximately 10 percent of Times articles that are open to reader comment.
To help illustrate how our moderation works and how a new system might help, we have arranged for you to take a Times moderation test.
The Rules: What the New York Times community desk demands most of all is that some effort is made to justify your views. The desk must also be convinced that your intention is to inform and convince rather than to insult and enrage.
Read as carefully and quickly as you can. Time is of the essence, and so is The Times’s reputation. Good luck!"
Jigsaw is supposed to be more sophisticated than 'regular' technology that filters keywords and tags. Can it keep the comments sections feeling like human interaction?
(I got two wrong, and time-wise, it would take me 133+ hours to moderate one day's worth of comments.)
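For the curious, here's the back-of-the-envelope math behind that 133-hour figure, as a rough Python sketch. The ~43.5 seconds per comment is an assumption inferred from my own quiz pace; the 11,000 comments a day and the 14-person desk are figures from the article above.

SECONDS_PER_COMMENT = 43.5  # assumed average pace, inferred from the quiz
COMMENTS_PER_DAY = 11_000   # daily volume quoted by the Times
DESK_SIZE = 14              # the community desk's headcount

total_hours = COMMENTS_PER_DAY * SECONDS_PER_COMMENT / 3600
per_moderator = COMMENTS_PER_DAY / DESK_SIZE

print(f"One person, whole day: {total_hours:.0f} hours")       # -> 133 hours
print(f"Comments per moderator: {per_moderator:.0f} per day")  # -> 786 per day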
I've never heard of Jigsaw, but yeah, it's hard to tell which comments should stay!
I'm surprised, Adam... here:
Google has a new plan to fight internet trolls, and it starts and ends with AI
It’s being used to fight ISIS, and now, an app developed by a subsidiary of Google is tackling another kind of vitriol — online trolls.
Jigsaw, an organization that once existed as Google’s think tank, has now taken on a new life of its own and has been tasked with using technology to address a range of geopolitical issues. The latest software to come out of the group is an artificial intelligence tool known as Conversation AI. As Wired reports, “the software is designed to use machine learning to automatically spot the language of abuse and harassment — with, Jigsaw engineers say, an accuracy far better than any keyword filter and far faster than any team of human moderators.”
http://www.digitaltrends.com/cool-tech/conversation-ai-trolling/
Sounds wonderful! I hope we can afford this at PandaWhale someday.
Eventually open source!
Currently, the plan is to test Conversation AI first in the New York Times’ comments section (though perhaps YouTube would be a better place to start), and Wikipedia also plans on making use of the software, though it’s unclear how.
“I want to use the best technology we have at our disposal to begin to take on trolling and other nefarious tactics that give hostile voices disproportionate weight,” Jigsaw founder and president Jared Cohen told Wired, “to do everything we can to level the playing field.”
Eventually, Conversation AI will become open source so that any site can make use of its anti-trolling capabilities to protect its users. So advanced is the technology already that it can “automatically flag insults, scold harassers, or even auto-delete toxic language.”
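Conversation AI itself isn't public yet, but for the programmers here, a flag-or-auto-delete pipeline like the one described might look roughly like this minimal Python sketch. The score_toxicity function is a made-up placeholder for the machine-learned model, and the 0-to-1 score and both thresholds are assumptions, not Jigsaw's actual API:

FLAG_THRESHOLD = 0.5    # assumed: route the comment to a human moderator
DELETE_THRESHOLD = 0.9  # assumed: confident enough to remove outright

def score_toxicity(comment: str) -> float:
    """Placeholder for the ML model: return a 0.0-1.0 toxicity score.
    This toy keyword count is exactly the kind of 'regular' filter the
    real model is supposed to improve on."""
    insults = ("idiot", "stupid", "moron")
    hits = sum(word in comment.lower() for word in insults)
    return min(1.0, hits * 0.5)

def moderate(comment: str) -> str:
    score = score_toxicity(comment)
    if score >= DELETE_THRESHOLD:
        return "auto-delete"
    if score >= FLAG_THRESHOLD:
        return "flag for human review"
    return "approve"

print(moderate("Great reporting, thank you!"))   # -> approve
print(moderate("Don't be stupid."))              # -> flag for human review
print(moderate("You stupid idiot, you moron."))  # -> auto-delete

The appeal of a design like this is that the model only produces a score; the flag/scold/delete policy is a thin layer of thresholds that each site could tune for itself once the tool is open source.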
Well, that would be lovely.
Imagine if the whole Internet worked together to squash the trolls and spammers!
10:39 PM Sep 20 2016