GPT detectors are biased against non-native English writers: Stanford University research


Every instructor should know by now that AI detectors don't work reliably: they have high false-positive rates, especially against non-native English speakers. As a result, many instructors are turning toward less formal methods of identifying AI cheating. Most of those introduce bias as well. Take two common approaches I have seen:

1) Punishing students who turn in obviously AI-written essays punishes only inept AI users, not AI use overall, since skilled AI use produces natural-sounding results that are not easily detected.

2) Punishing students whose writing improved the most, on the assumption that they must be using AI, means that those who were already cheating (through essay-writing services, etc.) go unpunished, while you single out the students who only recently gained the ability to cheat. And what do you do about the previous cheaters?

As instructors, we need to figure out how to work with students to reconstruct the value of homework in an age of AI, rather than just punishing them, especially given that cheating was already common.
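To see why false positives cluster on non-native writers, it helps to know that many detectors score text by how *predictable* it is under a language model, and flag highly predictable text as AI-generated. The sketch below is a toy illustration of that mechanism only, not any real detector: the word frequencies, threshold, and example sentences are all invented for demonstration. Writing built from common, simple word choices, which is typical of people working in a second language, scores as more predictable and is therefore more likely to be flagged.

```python
# Toy sketch of a predictability-based "AI detector" (illustrative only;
# the corpus, threshold, and examples are invented for this demo).
import math
from collections import Counter

# Stand-in background word frequencies (a real detector would use a
# large language model instead of this tiny hand-made corpus).
CORPUS = ("the quick study shows that the results are good and the "
          "method is good because the results show the method works").split()
FREQ = Counter(CORPUS)
TOTAL = sum(FREQ.values())

def surprisal(text: str) -> float:
    """Average negative log-probability per word; lower = more predictable."""
    words = text.lower().split()
    total = 0.0
    for w in words:
        # Add-one smoothing so unseen words get a small nonzero probability.
        p = (FREQ.get(w, 0) + 1) / (TOTAL + len(FREQ) + 1)
        total += -math.log(p)
    return total / len(words)

def flag_as_ai(text: str, threshold: float = 3.0) -> bool:
    """Flag low-surprisal (highly predictable) text as 'AI-written'."""
    return surprisal(text) < threshold

# Simple, common vocabulary -- the kind of phrasing a non-native writer
# might favor -- versus rarer, more varied vocabulary.
simple = "the method is good and the results are good"
varied = "serendipitously, the protocol yielded counterintuitive outcomes"

print(flag_as_ai(simple))   # the plainer sentence gets flagged
print(flag_as_ai(varied))   # the rarer vocabulary does not
```

The point of the toy is the failure mode, not the numbers: nothing about the flagged sentence indicates AI authorship; it is penalized purely for being written in common words.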

