July 4, 2016

The question of diversity within machine learning

picture courtesy of #WOCinTechChat

Written by Camille Eddy


 

In my role as a machine learning intern, I go to work every day and start my job. I turn on my computer and look at my next tasks. But one realization quickly became unavoidable: the field of machine learning is not very diverse. In this article, I hope to outline why, as a Black woman, helping to build the next intelligent robot matters so much, and why we need to bring more underrepresented groups into this ever more important field.

Artificial intelligence and machine learning are rapidly being incorporated into our daily lives, but there’s an underlying problem that’s not being addressed: the field of machine learning is not very diverse. Artificial intelligence is about teaching machines to think, come to conclusions, and execute actions on their own. But what information forms these actions? How does a machine decide between two different choices, and can the bias of a creator infiltrate a computer’s thinking?

The answer to that last question is yes.

One shocking example of this bias was when Google’s image service mistakenly categorized Black people as gorillas. Most artificial intelligence is tested on data provided by the researchers. While Google stated that its algorithm was tested with images of employees of different races, there was clearly not enough data to detect and distinguish darker skin tones from lighter ones. AI expert Vivienne Ming told the Wall Street Journal that some systems struggle to recognize non-white people because they are trained on Internet images that are overwhelmingly white. Would this have happened if more people of color were on the development and research teams? Diversity in the machine learning field may minimize these types of incidents by ensuring fully representative data sets are used during the most critical stages of AI development.
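The kind of imbalance Ming describes can be surfaced with a simple audit of a training set’s demographic composition before a model is ever trained. The sketch below is purely illustrative and assumes a hypothetical annotation field (`skin_tone`) and toy data; it is not any vendor’s actual tooling.

```python
from collections import Counter

def representation_report(samples, group_key):
    """Return each group's share of a labeled dataset.

    A minimal sketch of a pre-training audit: if one group's share
    is tiny, the model will likely perform worse on that group.
    """
    counts = Counter(sample[group_key] for sample in samples)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Hypothetical toy dataset of image annotations.
dataset = [
    {"skin_tone": "light"}, {"skin_tone": "light"},
    {"skin_tone": "light"}, {"skin_tone": "dark"},
]

shares = representation_report(dataset, "skin_tone")
for group, share in sorted(shares.items()):
    print(f"{group}: {share:.0%}")  # dark: 25%, light: 75%
```

Even this crude check would flag a 75/25 split as a warning sign that the underrepresented group needs more data before the model ships.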

Overall, diversity in artificial intelligence prevents the technology from becoming a tool that perpetuates biases and stereotypes. In a recent New York Times article, Kate Crawford [Principal Researcher at Microsoft and Visiting Professor at the MIT Center for Civic Media] stated that unless we are vigilant about how we design and train machine learning systems, we will “see ingrained forms of bias built into the artificial intelligence of the future.” And a Quartz Daily article on this subject reminds us that outdated ideas about gender and race tend to take root in places where there is a lack of diverse perspectives.

That brings us back to my story. I recognized that I have landed in a great spot to influence change in artificial intelligence, by bringing my diverse experience to the research I am working on and encouraging my peers to take an interest in the field as well. Additionally, Microsoft CEO Satya Nadella recently penned an article in Slate stating that “we need to stop predicting the future of AI and create it.” And that is what I hope to do in the years ahead of my career.

In summary

  • AI would greatly benefit from a diverse group of researchers who provide representative data sets to train and test on.
  • There is a lack of diversity, but we can and must change it to prevent stereotypes from taking root in the software applications we use every day.
Camille Eddy

Camille Eddy is a machine learning engineer at HP Labs in Palo Alto, helping to bring in the next generation of robotics.
