Diversity Alone Will Not Be The Solution To Bias In AI
AI ethics: Diverse teams are a great start but we need a wider cultural change in tech
AI ethics is a hot topic in the tech industry. As a result of work by pioneering researchers like Joy Buolamwini, we’re learning more about how algorithms can discriminate against underrepresented groups, most alarmingly ethnic and gender minorities.
While AI and machine learning hold great promise, many are concerned about the impact new technology will have on society. Giants of the tech industry like Google and Facebook, governments, and academia are all trying to figure out how to ensure AI is used ethically and responsibly, with varying degrees of success.
Increasing the diversity of the workforce in tech is regularly suggested as a way of mitigating AI bias. Representation of people of color and women in large tech companies is woeful. Less than 5% of employees in tech roles at Facebook and Google are black. At Twitter, 30% of leadership roles are held by women, but women make up only 15% of engineering roles. The UK also has a problem with underrepresentation, with only 8.5% of senior leaders coming from ethnic minority backgrounds.
Increasing diversity is a long overdue first step, but it is not enough to mitigate bias
Better diversity in tech is long overdue. However, diversity alone will not be the solution to AI bias, ethics, and irresponsible tech development.
We can’t expect people of color and gender minorities to bear the weight of being the moral and ethical center for tech companies. This expectation becomes a bigger problem if people from underrepresented backgrounds are only hired into entry or mid-level positions with little power over company culture or the overall direction of product development.
The tech industry also needs to be careful of ‘diversity washing’. This would be a scenario where the overall diversity statistics of tech companies improve, but people of color and gender minorities are not hired into senior roles and little change is made to non-inclusive working cultures.
Grace Callcott from Projects By IF summarised the problem perfectly on Twitter:
‘Diversity in tech’ is not the solution to biased AI. It’s too much to ask minorities and women to educate powerful white men about race & gender.
— Grace Annan Callcott (@GAnnanCallcott) February 6, 2019
We need a wider cultural change in the tech industry that supports openness, inclusivity, and ethical product development.
We need tech companies to think about the stress cases and unintended consequences of their products much earlier on.
Tech companies need to recognize the ethical trade-offs they are making throughout the product development cycle and get better at being open about them. Unethical tech is rarely the result of one big decision but a series of small ones made over time.
I’ve worked in product teams for the last couple of years. It’s easy to see how technologists like data scientists and software engineers can become preoccupied with getting a product into decent shape to ship rather than thinking about the long-term consequences of their work.
We need to ensure that the creators of technology have the time and space to critically reflect on what they are building.
We also need to create a culture in the tech industry where this reflection is actively encouraged and where people across specialisms and pay grades feel comfortable challenging the ethical consequences of decisions.
We need to involve product leads, engineers, and data scientists in the conversation about tech ethics
I’ve attended many good and thoughtful events about AI, but it is mostly social scientists and non-tech practitioners discussing ethics at them.
I’d love to see more software developers, data scientists, data engineers, and product leads at these events, sharing their own experiences building products and understanding the latest thinking in ethics.
Support communities and start-ups that are doing things differently
Finally, there’s an emerging group of organizations and communities working differently, embracing a culture of openness and beginning to change ways of working in tech.
At Wellcome Data Labs we’ve embedded an ethics and user researcher in our product team, helping us consider unintended consequences of our products and mitigate against them.
The energy company Bulb is one of the few startups I’ve seen take time to be open about the tech ethics and business trade-offs they’re making as their company grows.
Projects by IF and Comuzi are using design principles and prototyping to explore practical ways of informing people about automated decision making, and YSYS is creating a community of diverse start-up founders, creatives, and technologists.
We need to support organizations like these that are doing things differently and push for wider cultural change in tech, so that we see more inclusive, open, and ethical tech development in the future.