Megan Garcia has filed a lawsuit against Character.AI following the death of her 14-year-old son, Sewell Setzer III. Sewell, an Orlando, Florida, teen, reportedly grew attached to a chatbot he named “Dany,” modeled after Daenerys Targaryen from Game of Thrones. Garcia alleges that her son’s obsessive use of the chatbot, coupled with the app’s addictive design, contributed to his mental health struggles, ultimately leading to his suicide.

AI Chatbot “Daenerys” Became Son’s Closest Confidant

Character.AI, an interactive chatbot platform, lets users design or select lifelike personas with which to communicate.
Operation HOPE and OpenAI CEO Sam Altman have launched an AI Ethics Council to help ensure marginalized populations and people of color are included in AI development. The announcement comes amid criticism that OpenAI, Meta, and other major players in the AI space lack diversity on their boards and decision-making bodies.

A Historic Partnership

According to a press release, OpenAI’s partnership with Operation HOPE, a leading nonprofit dedicated to financial literacy for underserved communities, began with a listening tour at Clark Atlanta University in spring 2024. During this tour,
The type of advice AI chatbots give people varies based on whether they have Black-sounding names, researchers at Stanford Law School have found. The researchers discovered that chatbots like OpenAI’s ChatGPT and Google AI’s PaLM-2 showed biases based on race and gender when giving advice in a range of scenarios.

Chatbots: Biased Advisors?

The study “What’s in a Name?” revealed that AI chatbots give less favorable advice to people with names typically associated with Black people or women compared to their counterparts. This bias spans various scenarios such as job