Some of the most high-profile AI chatbots generate responses that perpetuate false or debunked medical information about Black people, a new study has found. As large language models (LLMs) are integrated into healthcare systems, these models may advance harmful, inaccurate race-based medicine.

Perpetuating debunked race myths

A study by researchers at the Stanford School of Medicine assessed whether four AI chatbots responded with race-based medicine or misconceptions about race. They examined OpenAI's ChatGPT, OpenAI's GPT-4, Google's Bard, and Anthropic's Claude. All four models reproduced debunked race-based information when prompted.