Meta CEO Mark Zuckerberg and Chief Global Affairs Officer Joel Kaplan have announced the company is ending its fact-checking system in favor of a community notes model, much like the system on X. However, critics say the shift is simply an attempt to appease the incoming Trump administration and will cause real-world harm to already marginalized communities through increased hate speech and misinformation. Why is Meta scrapping the fact-based system? “In recent years, we’ve developed increasingly complex systems to manage content across our platforms, partly in response to societal and political
Meta has deleted its AI-powered Facebook and Instagram profiles following backlash over racist and offensive characters. The tech giant had quietly rolled out the AI-driven accounts, which could interact with other users via direct messages, in 2023 alongside its launch of Celebrity AI characters. Accounts like “Proud Black Queer Momma” lived under the radar until the Financial Times published a story on December 27 exploring Meta’s plans to further integrate user-generated AI profiles, which users can create and customize, into its social media platforms. Meta deleted its Celebrity AI
This week on Techish, Michael and TechCrunch’s Dominic-Madori dive into how new technologies are affecting privacy, Apple’s take on AI limits, and new media’s role in politics with elections around the corner. They also chat about how consumer habits are shaking up companies like Starbucks and wrap up with a look at the reparations debate. Doxing Strangers With Meta’s Smart Glasses (00:00) Apple Dunks on Large Language Models (06:50) The Podcast Election: How New Media Is Shaping the Trump-Harris Election (11:05) Starbucks is struggling (23:35) UK Prime Minister Says No
Meta is reintroducing facial recognition technology across Facebook and Instagram to fight against scammers who use celebrity images in fraudulent ads. This move comes after the company abandoned facial recognition in 2021 amid privacy, accuracy, and racial bias concerns. A Return to Controversial Technology In its latest effort to combat fraudulent ads, Meta’s facial recognition system will compare flagged images with the profile pictures of celebrities on Facebook and Instagram. If a match is found, the ad will be automatically removed. Meta’s initial rollout focuses on 50,000 high-profile public
Kenya’s Court of Appeal ruled that Meta, the parent company of Facebook, can be sued in Kenya for labor disputes involving outsourced content moderators, according to The Kenyan Wall Street. The ruling marks a major step in a long-standing case in which Meta sought to avoid legal responsibility for its operations in Kenya. The court dismissed Meta’s appeal, which argued that Kenya’s Employment and Labour Relations Court lacked jurisdiction over a foreign company like Meta. How Did We Get Here? The case centers around Daniel Motaung, a South African whistleblower,
Meta, the tech giant formerly known as Facebook, is under scrutiny for using public Instagram posts to train its generative AI model without notifying users in Latin America, according to Rest of the World. The company’s decision has particularly impacted artists in the region, who rely heavily on social media to showcase their work but cannot opt out of this data usage. Lack of Notification and Opt-Out Options On June 2, many Latin American artists discovered that Meta had not informed them about its plans to use their public posts
Data workers are exposing the severity of exploitation in the tech and AI industry through the Data Workers’ Inquiry. As part of the community-based research project, 15 data workers joined the Distributed AI Research (DAIR) Institute as community researchers to lead their own inquiries in their respective workplaces. Funded by DAIR, the Weizenbaum Institute, and Technische Universität Berlin, the project sheds light on labor conditions and widespread practices in the AI industry. The Plight of African Content Moderators Fasica Berhane Gebrekidan, an ex-content moderator for
Meta, the parent company of Facebook and Instagram, has launched an update enabling content creators in Nigeria and Ghana to monetize their content on its platforms. This new policy, which became effective June 27, 2024, marks an important change. Previously, Facebook excluded creators with Nigerian and Ghanaian addresses from monetization unless their page was managed from an eligible country. Expansion of Monetization Opportunities This policy shift follows an announcement by Meta’s President of Global Affairs, Nick Clegg, in March 2024, confirming the rollout of monetization features in June. “Monetization won’t
Meta’s ad algorithms show racial bias by disproportionately steering Black users toward more expensive for-profit colleges, a recent study found. Researchers from Princeton and the University of Southern California published “Auditing for Racial Discrimination in the Delivery of Education Ads,” a report presenting a third-party auditing method for evaluating racial bias in education ads, focusing on platforms like Meta. Algorithm Education Bias This method allows external parties to assess and demonstrate the presence or absence of bias in social media algorithms, an area previously unexplored in education. Prior audits revealed discriminatory practices
Meta recently announced the formation of an AI advisory council, made up entirely of white men, sparking backlash amid the discourse around diversity and inclusion in the tech industry. The Composition And Role Of The AI Council This AI advisory council is distinct from Meta’s board of directors and its Oversight Board, which have more diverse gender and racial representation. Unlike these bodies, the AI council was not elected by shareholders and holds no fiduciary duty. Meta told Bloomberg that the council would provide “insights and recommendations on technological advancements,