
Artificial Intelligence Has a ‘Sea of Dudes’ Problem


Stashed in: Women, Microsoft, Awesome, Singularity!, AI, XX, Cognitive Bias, Extraordinary People, Girls Who Code, Women in Tech, Machine Learning, Artificial Intelligence, Chatbots, Training, STEM


Machine learning can very quickly and unaccountably amplify relatively small biases in its reward structures and training sets. The problem is that the quite small set of people who are experts in machine learning is one of the most homogeneous groups you can imagine: white and Asian male geeks from a tiny set of elite universities.

Based on my own career, I truly believe there is no malice in most of these cases -- it is sheer lack of relevant life experience. Here's an example: a few days ago I sat in a brainstorming session for a very mass-market website -- one whose typical customer is a middle-aged woman from a suburb who has kids and health problems -- and the young, mostly male devs thought Google News was a more important way to reach their customers than local television news. That kind of disconnect is fairly easy to correct when you can see it and talk it out with a developer... but once machine learning is applied at scale, it quickly becomes almost impossible to correct individual biases in a wave of data.
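
To make the amplification point concrete, here is a rough, self-contained sketch with invented numbers (not data from any real system): a recommender that always serves the single most likely option turns a modest 55/45 skew in its training data into a 100/0 skew in what people actually see.

from collections import Counter

# Hypothetical click log: a modest 55/45 split between two news sources.
clicks = ["google_news"] * 55 + ["local_tv"] * 45

counts = Counter(clicks)

# A recommender that always serves the single most popular option turns the
# 55/45 split in the training data into a 100/0 split in what users see.
recommendation = counts.most_common(1)[0][0]

print(counts)          # Counter({'google_news': 55, 'local_tv': 45})
print(recommendation)  # google_news -- the 45% preference is never served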

I wonder if any companies will have a breakthrough by training machine learning on data drawn from genuinely diverse populations.

Kate Crawford on AI's white guy problem:

http://nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html

http://pandawhale.com/post/72408/nyt-opinion-artificial-intelligences-white-guy-problem-by-kate-crawford

Kate Crawford is a principal researcher at Microsoft and co-chairwoman of a White House symposium on society and A.I.

This:

Tay, Microsoft's chatbot released earlier this year, had very little diversity in what went into it. Within 24 hours of being exposed to the public, Tay took on a racist, sexist, homophobic personality. It did that because internet users realized that Tay would learn from its interactions, so they tweeted insulting, racist, nasty things at it. Tay incorporated that language into its mental model and started spewing out more of the same.
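
The failure mode is easy to reproduce in miniature. Below is a deliberately naive sketch (nothing like Tay's actual architecture, which Microsoft has not published) of a bot that learns by storing whatever users say and sampling its replies from that memory; a coordinated flood of hostile messages quickly dominates its output.

import random

# A deliberately naive bot: it "learns" by storing whatever users say and
# sampling replies from that memory, with no filtering or moderation.
class NaiveEchoBot:
    def __init__(self, seed_phrases):
        self.memory = list(seed_phrases)   # starts from a small curated seed

    def listen(self, message):
        self.memory.append(message)        # every user message is trusted

    def reply(self):
        return random.choice(self.memory)  # output mirrors the input mix

bot = NaiveEchoBot(["hello!", "nice to meet you"])

# A coordinated flood of hostile messages quickly dominates the memory,
# so nearly every reply is drawn from the flood, not the curated seed.
for _ in range(1000):
    bot.listen("<insulting message>")

replies = [bot.reply() for _ in range(100)]
print(sum(r == "<insulting message>" for r in replies))  # ~100 out of 100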

Companies like Microsoft are grappling with how to assemble better, more diverse data sets. "How do we make sure that the data sets we're training with think about gender?" asked Lili Cheng, who led the Microsoft team that developed Tay. "The industry as a whole, ourselves included, need to do a better job of classifying gender and other diversity signals in training data sets."

There's already evidence that gender disparity has crept into AI job listings. Textio is a startup that helps companies change job posting language to increase the number and diversity of people who apply. It performed an analysis of 1,700 AI employment ads and compared those with more than 70,000 listings spread across six other typical IT roles. The analysis found that AI job ads tend to be written in a highly masculine way, relative to other jobs.

An ad from Amazon.com for a software development engineer merited a high masculine score because it used language commonly associated with men, like "coding ninja," "relentlessly" and "fearlessly," said Textio CEO Kieran Snyder. Those words tend to lead to fewer women applying, and the ad also lacked any kind of equal opportunity statement, she said. "It's amazing how many companies are, on the one hand, disappointed with the representation of women in these roles and, on the other hand, happily pushing out hiring content like this," said Snyder, who is a former Amazon employee. Amazon declined to comment.
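
For a sense of how this kind of scoring can work, here is a toy, lexicon-based sketch. The word list and the scoring rule are invented for illustration; Textio's actual models are proprietary and certainly more sophisticated.

# Invented lexicon of "masculine-coded" words -- an illustration only,
# not Textio's proprietary model.
MASCULINE_CODED = {"ninja", "relentlessly", "fearlessly", "dominate", "rockstar"}

def masculine_score(ad_text):
    """Fraction of words in a job ad that appear in the masculine-coded lexicon."""
    words = [w.strip(".,!?\"'").lower() for w in ad_text.split()]
    if not words:
        return 0.0
    return sum(w in MASCULINE_CODED for w in words) / len(words)

ad = "We want a coding ninja who relentlessly ships and fearlessly owns hard problems."
print(round(masculine_score(ad), 3))  # 0.231 -- compare scores across many ads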

More on the sea of dudes:

Melinda Gates patiently waited for her husband to finish extolling the virtues of machines that can solve problems scientists haven't programmed them to know. Then it was her turn. "The thing I want to say to everybody in the room is: We ought to care about women being in computer science," she said. "You want women participating in all of these things because you want a diverse environment creating AI and tech tools and everything we're going to use." She noted that just 17 percent of computer science graduates today are women, down from a peak of 37 percent.

The figures are actually worse in AI. At one of 2015's biggest artificial intelligence conferences—NIPS, held in Montreal—just 13.7 percent of attendees were women, according to data the conference organizers shared with Bloomberg.

That's not so surprising, given how few women there are in the field, said Fei-Fei Li, who runs the computer vision lab at Stanford University. Among the Stanford AI lab's 15 researchers, Li is the only woman. She's also one of only five women professors of computer science at the university. "If you were a computer and read all the AI articles and extracted out the names that are quoted, I guarantee you that women rarely show up," she said. "For every woman who has been quoted about AI technology, there are a hundred more times men were quoted."

...

Much has been made of the tech industry's lack of women engineers and executives. But there's a unique problem with homogeneity in AI. To teach computers about the world, researchers have to gather massive data sets of almost everything. To learn to identify flowers, you need to feed a computer tens of thousands of photos of flowers so that when it sees a photograph of a daffodil in poor light, it can draw on its experience and work out what it's seeing.

If these data sets aren't sufficiently broad, then companies can create AIs with biases. Speech recognition software with a data set that only contains people speaking in proper, stilted British English will have a hard time understanding the slang and diction of someone from an inner city in America. If the people teaching computers to act like humans are all men, then the machines will have a view of the world that's narrow by default and, through the curation of data sets, possibly biased.
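
One practical consequence: a single overall accuracy number can hide exactly this kind of gap. The sketch below, with invented test results rather than any real system's numbers, breaks evaluation down by the group each test example came from.

from collections import defaultdict

def accuracy_by_group(examples):
    """examples: (group, was_correct) pairs from a held-out test set."""
    totals, correct = defaultdict(int), defaultdict(int)
    for group, ok in examples:
        totals[group] += 1
        correct[group] += ok
    return {g: correct[g] / totals[g] for g in totals}

# Invented results for a hypothetical speech recognizer tested on two dialects.
results = ([("scripted_british_english", True)] * 95 +
           [("scripted_british_english", False)] * 5 +
           [("us_inner_city_speech", True)] * 60 +
           [("us_inner_city_speech", False)] * 40)

print(accuracy_by_group(results))
# {'scripted_british_english': 0.95, 'us_inner_city_speech': 0.6}
# The single overall number (0.775) would hide the 35-point gap entirely.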

"I call it a sea of dudes," said Margaret Mitchell, a researcher at Microsoft. Mitchell works on computer vision and language problems, and is a founding member—and only female researcher—of Microsoft's "cognition" group. She estimates she's worked with around 10 or so women over the past five years, and hundreds of men. "I do absolutely believe that gender has an effect on the types of questions that we ask," she said. "You're putting yourself in a position of myopia."

There have already been embarrassing incidents based on incomplete or flawed data sets. Google developed an application that mistakenly tagged black people as gorillas and Microsoft invented a chatbot that ultimately reflected the inclinations of the worst the internet had to offer.

From Kate Crawford:

Take a small example from last year: Users discovered that Google’s photo app, which applies automatic labels to pictures in digital photo albums, was classifying images of black people as gorillas. Google apologized; it was unintentional.

But similar errors have emerged in Nikon’s camera software, which misread images of Asian people as blinking, and in Hewlett-Packard’s web camera software, which had difficulty recognizing people with dark skin tones.

This is fundamentally a data problem. Algorithms learn by being fed certain images, often chosen by engineers, and the system builds a model of the world based on those images. If a system is trained on photos of people who are overwhelmingly white, it will have a harder time recognizing nonwhite faces.
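
A basic audit of the training set is the obvious countermeasure. Here is a minimal sketch, with invented labels and counts, that compares the composition of the data against the population the system is actually meant to serve.

from collections import Counter

def composition_report(group_labels, reference_shares):
    """Compare who is in the training data against who the system will serve."""
    counts = Counter(group_labels)
    n = len(group_labels)
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / n
        flag = "UNDERREPRESENTED" if observed < 0.5 * expected else "ok"
        print(f"{group}: {observed:.1%} of training data "
              f"(vs {expected:.0%} of intended users) -> {flag}")

# Invented counts: 1,000 face photos, heavily skewed toward lighter skin tones.
photos = ["lighter_skin"] * 940 + ["darker_skin"] * 60
composition_report(photos, {"lighter_skin": 0.70, "darker_skin": 0.30})
# lighter_skin: 94.0% of training data (vs 70% of intended users) -> ok
# darker_skin: 6.0% of training data (vs 30% of intended users) -> UNDERREPRESENTED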

A very serious example was revealed in an investigation published last month by ProPublica. It found that widely used software that assessed the risk of recidivism in criminals was twice as likely to mistakenly flag black defendants as being at a higher risk of committing future crimes. It was also twice as likely to incorrectly flag white defendants as low risk.
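
The disparity ProPublica found lives in the error rates, not in overall accuracy, which is why it can go unnoticed. A rough sketch of the check, using made-up records rather than COMPAS data, looks like this:

def error_rates(records):
    """records: (group, predicted_high_risk, actually_reoffended) triples."""
    stats = {}
    for group, predicted, actual in records:
        s = stats.setdefault(group, {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
        if actual:
            s["pos"] += 1
            s["fn"] += (not predicted)   # labeled low risk, but did reoffend
        else:
            s["neg"] += 1
            s["fp"] += predicted         # labeled high risk, but did not reoffend
    return {g: {"false_positive_rate": s["fp"] / s["neg"],
                "false_negative_rate": s["fn"] / s["pos"]}
            for g, s in stats.items()}

# Made-up records shaped to mirror the pattern ProPublica reported.
records = ([("black", True, False)] * 40 + [("black", False, False)] * 60 +
           [("black", False, True)] * 30 + [("black", True, True)] * 70 +
           [("white", True, False)] * 20 + [("white", False, False)] * 80 +
           [("white", False, True)] * 50 + [("white", True, True)] * 50)

for group, rates in error_rates(records).items():
    print(group, rates)
# black {'false_positive_rate': 0.4, 'false_negative_rate': 0.3}
# white {'false_positive_rate': 0.2, 'false_negative_rate': 0.5}
# Black defendants are wrongly flagged as high risk twice as often, and white
# defendants are wrongly rated low risk more often, even though overall
# accuracy is identical (65%) for both groups in this toy data.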

The reason those predictions are so skewed is still unknown, because the company responsible for these algorithms keeps its formulas secret — it’s proprietary information. Judges do rely on machine-driven risk assessments in different ways — some may even discount them entirely — but there is little they can do to understand the logic behind them.

Police departments across the United States are also deploying data-driven risk-assessment tools in “predictive policing” crime prevention efforts. In many cities, including New York, Los Angeles, Chicago and Miami, software analyses of large sets of historical crime data are used to forecast where crime hot spots are most likely to emerge; the police are then directed to those areas.

At the very least, this software risks perpetuating an already vicious cycle, in which the police increase their presence in the same places they are already policing (or overpolicing), thus ensuring that more arrests come from those areas. In the United States, this could result in more surveillance in traditionally poorer, nonwhite neighborhoods, while wealthy, whiter neighborhoods are scrutinized even less. Predictive programs are only as good as the data they are trained on, and that data has a complex history.
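
The feedback loop is simple enough to simulate in a few lines. This deterministic toy, with invented numbers rather than any department's actual dispatch system, sends patrols wherever the recorded incident count is highest and only records incidents where patrols are present; a small initial gap compounds even though the underlying rates are identical.

# Identical true crime rates in two areas; a slightly uneven arrest history.
true_rate = {"area_a": 0.10, "area_b": 0.10}
recorded = {"area_a": 12, "area_b": 10}

for week in range(10):
    # "Predictive" dispatch: send the patrols wherever the data says crime is.
    hotspot = max(recorded, key=recorded.get)
    # Incidents are only recorded where patrols actually are: 50 encounters
    # in the hotspot each week, zero recorded anywhere else.
    recorded[hotspot] += round(50 * true_rate[hotspot])

print(recorded)  # {'area_a': 62, 'area_b': 10} -- the data now "confirms" the model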

Histories of discrimination can live on in digital platforms, and if they go unquestioned, they become part of the logic of everyday algorithmic systems. Another scandal emerged recently when it was revealed that Amazon’s same-day delivery service was unavailable for ZIP codes in predominantly black neighborhoods. The areas overlooked were remarkably similar to those affected by mortgage redlining in the mid-20th century. Amazon promised to redress the gaps, but it reminds us how systemic inequality can haunt machine intelligence.

And then there’s gender discrimination. Last July, computer scientists at Carnegie Mellon University found that women were less likely than men to be shown ads on Google for highly paid jobs. The complexity of how search engines show ads to internet users makes it hard to say why this happened — whether the advertisers preferred showing the ads to men, or the outcome was an unintended consequence of the algorithms involved.

Regardless, algorithmic flaws aren’t easily discoverable: How would a woman know to apply for a job she never saw advertised? How might a black community learn that it was being overpoliced by software?

We need to be vigilant about how we design and train these machine-learning systems, or we will see ingrained forms of bias built into the artificial intelligence of the future.
