AI-driven hiring tools overwhelmingly prefer resumes with names associated with white men, a new University of Washington (UW) study has found. Resumes with white male names were selected 85% of the time, while those with female-associated names were chosen only 11% of the time. Resumes with names associated with Black men fared the worst of all: models passed them over in favor of other groups in nearly 100% of cases.

Biases in AI Resume Screening

AI-powered tools are becoming staples in the hiring process. For example, large language model
Ride-hailing apps like Uber and Lyft have helped mitigate racial discrimination against Black passengers, at least when it comes to wait times, a new study from Carnegie Mellon University (CMU) has revealed. Researchers found that the technology's ability to rapidly rematch canceled rides plays a key role in lessening the impact of discriminatory behavior.

A Complex Issue with Tech-Led Solutions

Historically, Black passengers hailing taxis faced rampant discrimination, often enduring longer wait times or outright rejections. An earlier academic study revealed a troubling pattern of discrimination by Uber and Lyft
Biases in current voice technologies have the potential to cause lower self-esteem and psychological and physical harm to non-white users, researchers have found. In a new study published in the Proceedings of the CHI Conference on Human Factors in Computing Systems, HCII Ph.D. student Kimi Wenzel and Associate Professor Geoff Kaufman identified six downstream harms caused by voice assistant errors. The pair also devised strategies to reduce these harms, and their work won a Best Paper award at the Association for Computing Machinery's conference.
OpenAI's ChatGPT chatbot shows racial bias when advising home buyers and renters, Massachusetts Institute of Technology (MIT) research has found. Today's housing policies are shaped by a long history of discrimination from the government, banks, and private citizens. This history has created racial disparities in credit scores, access to mortgages and rentals, eviction rates, and persistent segregation in US cities. If widely used for housing recommendations, AI models that demonstrate racial bias could potentially worsen residential segregation in these cities.

Racially biased housing advice

Researcher Eric Liu from MIT examined
Professor Rose-Margaret Ekeng-Itua has become the world's first Black woman to hold a PhD in Cybernetics, according to The Citizen.

A PhD in Cybernetics

Ekeng-Itua earned her doctorate in Cybernetics from the University of Reading in the UK under the supervision of Prof. Kevin Warwick, also known as Captain Cyborg. Cybernetics is the science of how information is communicated in machines and electronic equipment, compared with how information is communicated in the brain and nervous system. "Every challenge became fuel for my determination," she said in an
Hispanic and Latine employees feel pressured to conform to mainstream office norms at the expense of their authentic selves and cultural heritage, a new study by Coqual has revealed. Hispanic and Latine professionals represent a rapidly growing demographic in the U.S. workforce, yet they continue to navigate persistent stereotypes, colorism, and cultural invisibility. The Coqual study used a mixed-method approach, including surveys, focus groups, and expert interviews, involving more than 2,300 full-time professionals across the United States.

The pressure to assimilate

Key findings indicate that 68% of Hispanic and Latine professionals with sponsors
The type of advice AI chatbots give people varies based on whether they have Black-sounding names, researchers at Stanford Law School have found. The researchers discovered that chatbots like OpenAI's ChatGPT and Google AI's PaLM-2 showed biases based on race and gender when giving advice in a range of scenarios.

Chatbots: Biased Advisors?

The study "What's in a Name?" revealed that AI chatbots give less favorable advice to people with names typically associated with Black people or women than to their counterparts. This bias appears across various scenarios such as job
There is a large digital divide affecting low-income and Black or Indigenous majority schools, a recent report by Internet Safety Labs (ISL) has found.

Ads and trackers

The report "Demographic Analysis of App Safety, Website Safety, and School Technology Behaviors in US K-12 Schools" explores technological disparities in American schools, focusing mainly on marginalized demographics. This research expands on ISL's previous work on the safety of educational technology across the country and is supported by the Internet Society Foundation. It reveals how schools of different backgrounds use technology and the risks involved. One
Black women in teams with a larger number of white peers may have worse job outcomes, a new study has found. Elizabeth Linos, the Emma Bloomberg Associate Professor for Public Policy and Management, conducted the study along with colleagues Sanaz Mobasseri from Boston University and Nina Roussille from MIT.

Underrepresentation of People of Color in Leadership

According to the study, the underrepresentation of people of color in high-wage jobs, especially leadership positions, remains a persistent, unsolved problem. To better understand and reduce racial inequalities, researchers have often focused on
Some of the most high-profile AI chatbots generate responses that perpetuate false or debunked medical information about Black people, a new study has found. As large language models (LLMs) are integrated into healthcare systems, these models may advance harmful, inaccurate race-based medicine.

Perpetuating debunked race myths

A study by Stanford School of Medicine physicians assessed whether four AI chatbots responded with race-based medicine or misconceptions about race. They looked at OpenAI's ChatGPT, OpenAI's GPT-4, Google's Bard, and Anthropic's Claude. All four models used debunked race-based information when asked