Posts in Tag

Racial Bias

Byron Allen’s $10 billion lawsuit accusing McDonald’s of racial discrimination is heading to trial following a federal judge’s ruling, according to Variety. The media mogul alleges that the fast-food giant denied advertising opportunities to his Black-owned media outlets while reserving substantial budgets for general-market advertising.

Court Allows Jury to Decide

United States District Judge Fernando M. Olguin found sufficient grounds for the case to be heard by a jury. In his decision, he noted that this type of case benefits from a full hearing. Allen’s lawsuit claims that McDonald’s created…

AI-driven hiring tools overwhelmingly prefer resumes with names associated with white men, a new University of Washington (UW) study has found. Resumes with white male names were selected 85% of the time, while those with female-associated names were chosen only 11% of the time. Resumes with names associated with Black men fared worst of all, with models passing them over in favor of other groups in nearly 100% of cases.

Biases in AI Resume Screening

AI-powered tools are becoming staples in the hiring process. For example, large language model…

Hundreds of Americans have faced arrest after being linked to crimes by facial recognition software, according to a Washington Post investigation. However, the use of this technology is often not disclosed to defendants, depriving them of the opportunity to challenge its reliability in court. This finding is especially concerning for Black people, who have been disproportionately subjected to wrongful arrests because of facial recognition tech.

Lack of Transparency in Investigations

The investigation found that police departments in 15 states used facial recognition in over 1,000 criminal cases over the last…

The American Civil Liberties Union (ACLU) is intensifying its efforts to combat the use of facial recognition technology (FRT) by law enforcement in California, Maryland, and Minnesota. This move comes amid growing concerns over racial bias and wrongful arrests, particularly among Black communities.

Facial Recognition: A Threat to Civil Liberties?

In recent years, facial recognition technology has been embraced by police departments across the United States, described as a powerful tool for identifying suspects. However, the technology has also come under fire due to its potential for racial bias and…

The Mall of America is facing backlash following the implementation of its new facial recognition technology. The use of the technology has raised privacy concerns among lawmakers, civil liberties advocates, and the general public.

Concerns Over Privacy and Misuse

Minnesota State Senators Eric Lucero (Republican) and Omar Fateh (Democrat) have united in urging the Mall of America to halt its facial recognition operations. “Public policy concerns surrounding privacy rights and facial recognition technologies have yet to be resolved, including the high risks of abuse, data breaches, identity theft, liability and…

OpenAI’s ChatGPT chatbot shows racial bias when advising home buyers and renters, Massachusetts Institute of Technology (MIT) research has found. Today’s housing policies are shaped by a long history of discrimination from the government, banks, and private citizens. This history has created racial disparities in credit scores, access to mortgages and rentals, eviction rates, and persistent segregation in US cities. If widely used for housing recommendations, AI models that demonstrate racial bias could worsen residential segregation in these cities.

Racially biased housing advice

Researcher Eric Liu from MIT examined…

The type of advice AI chatbots give people varies based on whether they have Black-sounding names, researchers at Stanford Law School have found. The researchers discovered that chatbots like OpenAI’s ChatGPT and Google AI’s PaLM-2 showed biases based on race and gender when giving advice in a range of scenarios.

Chatbots: Biased Advisors?

The study, “What’s in a Name?”, revealed that AI chatbots give less favorable advice to people with names typically associated with Black people or women than to their counterparts. This bias spans various scenarios such as job…

AI tools fail to detect signs of depression in social media posts by Black Americans, a study published in the Proceedings of the National Academy of Sciences (PNAS) has revealed. This disparity raises concerns about the implications of using AI in healthcare, especially when these models lack data from diverse racial and ethnic groups.

The Study

The study, conducted by researchers from Penn’s Perelman School of Medicine and its School of Engineering and Applied Science, employed an “off the shelf” AI tool to analyze language in posts from 868 volunteers. These participants, comprising equal…

Amazon-owned Ring will stop allowing police departments to request user doorbell camera footage without a warrant or subpoena, following concerns over privacy and racial profiling.

Ring’s police partnerships

The Ring Doorbell Cam is a wire-free video doorbell that can be installed at people’s front doors. Amazon acquired Ring in 2018 for a reported $1 billion. In 2019, Amazon Ring partnered with police departments nationwide through its Neighbors app. Police could access Ring’s Law Enforcement Neighborhood Portal, which allowed them to view a map of the cameras’ locations and directly…

The University of Washington’s recent study of Stable Diffusion, a popular AI image generator, reveals concerning biases in its algorithm. The research, led by doctoral student Sourojit Ghosh and assistant professor Aylin Caliskan, was presented at the 2023 Conference on Empirical Methods in Natural Language Processing and published on the pre-print server arXiv.

The Three Key Issues

The report picked up on three key issues and concerns surrounding Stable Diffusion: gender and racial stereotypes, geographic stereotyping, and the sexualization of women of color.

Gender and Racial Stereotypes

The AI…
