Black Teens’ Schoolwork Twice As Likely To Be Falsely Flagged As AI-Generated
Black students are more than twice as likely as their white and Latine peers to be falsely accused of using artificial intelligence (AI) tools on school assignments, according to a new report from Common Sense Media.
Released on September 18, the study found that while 10% of students overall report having had work wrongly flagged as AI-generated, the figure jumps to 20% for Black students. By comparison, only 7% of white students and 10% of Latine students report such false accusations.
Biased AI Detection Tools
The discrepancies in AI flagging may be a result of biases within the detection software itself, compounded by educators’ own unconscious prejudices, say experts.
“Humans come with preconceived notions, and AI tools reflect these biases, leading to further unfairness for students of color,” said Amanda Lenhart, head of research at Common Sense Media.
Research has already shown that AI detection tools often mislabel non-native English speakers’ work as AI-generated, underscoring the technology’s limitations.
Such issues are now extending to Black students, who are already disproportionately subjected to disciplinary actions in schools.
Widespread Use of AI Detection in Schools
The use of AI detection software is becoming more prevalent in schools, with 68% of teachers reporting regular use, according to a survey conducted by the Center for Democracy & Technology.
The software’s flaws are increasingly apparent, however: one study found that more than half of essays written by Chinese students were wrongly flagged as AI-generated, while essays by US students were almost always classified correctly, further evidence of the technology’s bias.
Educators and experts are urging schools to rethink how they use these flawed AI detection tools, advocating for clearer policies and open conversations with students before resorting to disciplinary action.