This week on Techish, Michael and TechCrunch’s Dominic-Madori dive into how new technologies are affecting privacy, Apple’s take on AI limits, and new media’s role in politics with elections around the corner. They also chat about how consumer habits are shaking up companies like Starbucks and wrap up with a look at the reparations debate.

Doxing Strangers With Meta’s Smart Glasses (00:00)
Apple Dunks on Large Language Models (06:50)
The Podcast Election: How New Media Is Shaping the Trump-Harris Election (11:05)
Starbucks is struggling (23:35)
UK Prime Minister Says No
Meta is reintroducing facial recognition technology across Facebook and Instagram to fight scammers who use celebrity images in fraudulent ads. This move comes after the company abandoned facial recognition in 2021 amid privacy, accuracy, and racial bias concerns.

A Return to Controversial Technology

In its latest effort to combat fraudulent ads, Meta’s facial recognition system will compare flagged images with the profile pictures of celebrities on Facebook and Instagram. If a match is found, the ad will be automatically removed. Meta’s initial rollout focuses on 50,000 high-profile public
Hundreds of Americans have faced arrest after being linked to crimes by facial recognition software, according to a Washington Post investigation. However, the use of this technology is often not disclosed to defendants, depriving them of the opportunity to challenge its reliability in court. This finding is especially concerning for Black people, who have been disproportionately subjected to wrongful arrests because of facial recognition tech.

Lack of Transparency in Investigations

The investigation found that police departments in 15 states used facial recognition in over 1,000 criminal cases over the last
YouTube has announced new AI detection tools designed to protect creators from the unauthorized use of their faces and voices in AI-generated content. Expanding its existing Content ID system, YouTube will introduce tools that can identify when someone’s likeness or voice has been simulated by artificial intelligence, with a particular focus on music and facial simulations. This move comes as AI-generated media continues to rise, posing significant concerns for artists, actors, and musicians who risk having their work used without consent.

Tackling Unauthorized Use of AI-Generated Content

As AI technology
The American Civil Liberties Union (ACLU) is intensifying its efforts to combat the use of facial recognition technology (FRT) by law enforcement in California, Maryland, and Minnesota. This move comes amid growing concerns over racial bias and wrongful arrests, particularly among Black communities.

Facial Recognition: A Threat to Civil Liberties?

In recent years, facial recognition technology has been embraced by police departments across the United States, which describe it as a powerful tool for identifying suspects. However, the technology has also come under fire due to its potential for racial bias and
The Mall of America is facing backlash following the implementation of its new facial recognition technology. The use of the technology has raised privacy concerns among lawmakers, civil liberties advocates, and the general public.

Concerns Over Privacy and Misuse

Minnesota State Senators Eric Lucero (Republican) and Omar Fateh (Democrat) have united in urging the Mall of America to halt its facial recognition operations. “Public policy concerns surrounding privacy rights and facial recognition technologies have yet to be resolved, including the high risks of abuse, data breaches, identity theft, liability and
The city of Detroit has agreed to pay $300,000 to Robert Williams, a Black man wrongly arrested for shoplifting due to flawed facial recognition technology. As part of the settlement, the city will also make changes to how police use facial recognition software when making arrests.

A Case of Mistaken Identity

Robert Williams’ wrongful arrest stems from a misidentification by facial recognition software. According to The Guardian, the software incorrectly matched Williams’ driver’s license photo to a suspect seen in a 2018 security video from a Shinola watch store. Despite
Black British anti-knife crime activist Shaun Thompson, 38, has launched a legal challenge against the Metropolitan Police after officers detained him when live facial recognition technology wrongly identified him as a suspect.

‘Stop and search on steroids’

Thompson, who volunteers with the Street Fathers youth outreach group, described the system as ‘stop and search on steroids’ following his 20-minute detention at London Bridge station earlier this year. Returning from a volunteer shift in south London, Thompson was wrongly flagged as a suspect on the Met’s facial recognition database, leading
Pa Edrissa Manjang, a Black Uber Eats driver in Oxfordshire, UK, received a payout after facial-recognition checks prevented him from accessing the app, the BBC reported.

Racially Discriminatory Facial Recognition Checks

Initially, when Manjang began working for Uber Eats in November 2019, the Microsoft-powered app didn’t frequently request facial verification. However, as the app’s AI-driven checks increased, Manjang faced an unexpected hurdle. Manjang said he was asked to take photos of himself “multiple times a day” because the system failed to recognize him. He told Uber Eats: “Your algorithm, by the looks of things, is racist.”
A group of 18 senators sent the Department of Justice (DOJ) a letter raising concerns about the agency’s funding and oversight of what they called “frequently inaccurate” facial recognition software. The group highlighted that law enforcement has widely used facial recognition and other biometric technologies. However, they stated that these technologies can be unreliable and inaccurate, especially with respect to race and ethnicity. The senators, led by Senate Judiciary Committee Chair Dick Durbin and Sen. Raphael Warnock, suggested that DOJ funding for the deployment of the technology is potentially problematic. They also