TECNO, a leader in smartphone technology, has launched the #ToneProud campaign to combat skin tone bias in AI-driven imaging systems. This initiative aims to promote inclusivity in imaging technology by ensuring accurate representation of diverse skin tones, especially in emerging markets often overlooked by larger tech players.

268 Skin Tones Database: A Technological Milestone for Inclusive Imaging

The foundation of TECNO’s #ToneProud campaign is its new 268 skin tone database, a collection developed to improve how AI-driven imaging systems capture various skin tones. By assigning specific color codes to skin
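The tone-coding step described above, assigning each measured skin tone the code of its closest database entry, amounts to a nearest-neighbor color lookup. The sketch below is a hedged illustration only: `nearest_tone`, the `T-…` codes, and the three-entry `PALETTE` are invented placeholders, not TECNO's actual 268-entry data or algorithm.

```python
# Hypothetical sketch of nearest-tone lookup against a small palette.
# TECNO's real 268-entry database and color codes are not public here;
# the palette below is an illustrative stand-in.

def nearest_tone(sample, palette):
    """Return the palette key whose RGB value is closest to `sample`
    (squared Euclidean distance in RGB space)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(palette, key=lambda k: dist(palette[k], sample))

# Placeholder palette: three invented tone codes, not TECNO's data.
PALETTE = {
    "T-012": (244, 208, 177),
    "T-134": (198, 134, 66),
    "T-255": (92, 60, 35),
}

print(nearest_tone((200, 140, 70), PALETTE))  # closest entry is "T-134"
```

A production system would more plausibly measure distance in a perceptual color space such as CIELAB rather than raw RGB, since equal RGB distances do not correspond to equal perceived differences.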
Black students are more than twice as likely as their white and Latine peers to be falsely accused of using artificial intelligence (AI) tools to complete school assignments, according to a new report from Common Sense Media. Released on September 18, the study reveals that while 10% of students of all backgrounds report their work being wrongly flagged as AI-generated, the figure jumps to 20% for Black students. In contrast, only 7% of white and 10% of Latine students face such false accusations.

Biased AI Detection Tools

The discrepancies in
The type of advice AI chatbots give people varies based on whether they have Black-sounding names, researchers at Stanford Law School have found. The researchers discovered that chatbots like OpenAI’s ChatGPT and Google AI’s PaLM-2 showed biases based on race and gender when giving advice in a range of scenarios.

Chatbots: Biased Advisors?

The study, “What’s in a Name?”, revealed that AI chatbots give less favorable advice to people with names typically associated with Black people or women than to their counterparts. The bias spans scenarios such as job
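Audits of this kind typically hold a prompt constant and vary only the name, then compare the advice returned. A minimal sketch of that design follows; `ask_model` is a hypothetical stub standing in for a real chatbot API call, and the dollar figures it returns are invented placeholders, not the study's findings.

```python
# Sketch of a counterfactual name-substitution audit. Only the name
# changes between prompts; any systematic gap in the answers is then
# attributable to the name.

TEMPLATE = "My client {name} is selling a used car. What opening price should they ask for?"
NAMES = ["Emily", "Lakisha"]  # illustrative name pair, not the study's full list

def ask_model(prompt):
    # Placeholder stub: a real audit would call the chatbot API here
    # and parse a dollar figure out of the reply. The values below are
    # invented so the scaffold runs end to end.
    name = prompt.split()[2]
    return {"Emily": 14500, "Lakisha": 13000}[name]

def audit(template, names):
    """Return the model's numeric advice for each name, prompt otherwise identical."""
    return {n: ask_model(template.format(name=n)) for n in names}

results = audit(TEMPLATE, NAMES)
print(results)
```

In a real audit, each prompt would be sent many times per name and the comparison made over distributions of answers rather than single values, since chatbot outputs are stochastic.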
Recent research has raised alarming concerns about covert racism in AI language models. Experts from the Allen Institute for AI, the University of Oxford, Stanford University, LMU Munich, and the University of Chicago conducted the study. It revealed that these models, including GPT-4, manifest bias against African American Vernacular English (AAE) speakers. This troubling revelation brings to light how AI can perpetuate racial stereotypes, leading to unfair and discriminatory outcomes.

What Did The Study Find?

The study, co-authored by Valentin Hofmann, Pratyusha Ria Kalluri, Dan Jurafsky, and Sharese King, found
Recent findings from a comprehensive review have highlighted biases in commonly used medical devices and technologies that can harm people of color. These include optical medical devices like pulse oximeters, AI-assisted devices, and polygenic risk scores (PRS) in genomics.

Biases In Medical Devices

The review was initiated by the UK’s former Secretary of State for Health and Social Care, Sajid Javid, and conducted by a panel of experts. “Making sure the healthcare system works for everyone, regardless of ethnicity, is paramount to our values as a nation,” Junior Health Minister Andrew Stephenson told The Guardian. “It supports our
A recent development in the tech industry is the quiz game “Are You Blacker than ChatGPT?” created by creative ad agency McKinney.

Are You Blacker Than ChatGPT?

This interactive game challenges players to test their knowledge of Black culture against ChatGPT, OpenAI’s language model. The game’s inception traces back to a creative brainstorm at McKinney, led by copywriter Meghan Woods and a Black-led team. It took a year to develop, with the underlying goal of pointing out ChatGPT’s limitations in grasping the nuances of Black culture. The deficiency stems from
Implementing artificial intelligence (AI) in the workplace could widen the racial wealth gap between Black and white households in the US by $43 billion, research has suggested. McKinsey & Co. stated that generative AI (gen AI) has initiated a seismic shift in work and value creation. When a new technology appears and drives such a shift, it can create or exacerbate divides, including the racial wealth gap. The firm explored how gen AI may affect Black communities and Black workers.

A Divide In Black And White Households

The research found that new
The University of Washington’s recent study on Stable Diffusion, a popular AI image generator, reveals concerning biases in its algorithm. The research, led by doctoral student Sourojit Ghosh and assistant professor Aylin Caliskan, was presented at the 2023 Conference on Empirical Methods in Natural Language Processing and published on the pre-print server arXiv.

The Three Key Issues

The report picked up on three key issues and concerns surrounding Stable Diffusion: gender and racial stereotypes, geographic stereotyping, and the sexualization of women of color.

Gender and Racial Stereotypes

The AI
Banks are now considering using AI for their annual review processes, according to Workday Inc., a development that may cause issues for Black and minority workers. Workday Inc. has rolled out new products that rely on AI to write job descriptions or aid managers in writing annual reviews of workers’ performance. Carl Eschenbach, Co-Chief Executive Officer of Workday, told Bloomberg that banks have expressed interest in those products.

Less Time And More Productivity

It’s all part of their efforts to streamline operations and cut costs, he said, announcing the offering will
A woman of South Asian heritage claimed she couldn’t make bookings through Airbnb because its AI was unable to match two images of her, adding to concerns about AI bias. Francesca Dias from Sydney couldn’t make an account with Airbnb because she failed its AI identity verification process, she told a panel on ABC’s Q+A. One of the ways Airbnb confirms the identity of users is by matching a photo from the user’s government ID – a passport or driver’s license – to a selfie the customer provides. Dias claimed
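The ID-to-selfie check described above is, at its core, a face-embedding comparison: both photos are mapped to numeric vectors, and the match is accepted if they are similar enough. The sketch below is an illustration under stated assumptions, not Airbnb's system: the 4-dimensional vectors and the 0.8 cosine-similarity threshold are invented placeholders.

```python
import math

# Minimal sketch of a verification step: embed the ID photo and the
# selfie (embedding model not shown), then accept the match if cosine
# similarity clears a threshold. Vectors and threshold are placeholders.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def same_person(id_emb, selfie_emb, threshold=0.8):
    """Accept the identity check if the embeddings are similar enough."""
    return cosine(id_emb, selfie_emb) >= threshold

# Toy embeddings: a close pair passes, an unrelated pair fails.
print(same_person((1.0, 0.0, 0.0, 0.0), (0.9, 0.1, 0.0, 0.0)))
print(same_person((1.0, 0.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0)))
```

If the embedding model is less accurate for some skin tones, genuine matches for those users can fall below the threshold, which is the kind of failure Dias describes.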