Look out for AI generated essays

A couple of days ago, I was shocked to read a story by philosophy professor Darren Hick of Furman University. He shared publicly on Facebook an account of a student who turned in an AI-generated essay for college credit. The student used the recently released chatbot called ChatGPT. Dr. Hick started his story:

 

[Image: AI essay, part 1]

This software uses machine learning to successively improve its human-like responses. The fact that it's only three weeks old scares me. Since the technology improves with time, early versions of the learning algorithm will necessarily be immature, but as the system matures, it will likely get much better. Dr. Hick noted that the current version of the chatbot didn't do very well with a complex topic like "Hume and the paradox of horror."

Fortunately, there is some hope in the short term. As Dr. Hick notes:

As long as the GPT detector can stay ahead of ChatGPT's sophistication, then we (the faculty) have nothing to worry about. Right? Well, therein lies the problem. First, there's no guarantee that the GPT detector can continue to detect AI-generated text as the generator improves. Second, while the world is aware of ChatGPT because of its big splash, other AI systems might be released with less fanfare but with equally good results and no detection systems in place. Third, the detection system rates the likelihood that responses are fake and can't give definitive answers. While Dr. Hick was confident with the rating of 99.9% likelihood of fake, can we as faculty pursue academic integrity violations at lower confidence levels? What if it's only 95%, 90%, or 75%? As Dr. Hick noted, administrators will have to develop standards around this, and quickly.
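The standards question comes down to choosing a cutoff: the detector reports a likelihood, and the institution must decide what score is high enough to act on. A minimal sketch of that decision, with purely illustrative score and threshold values (no real detector API is assumed here):

```python
# Hypothetical sketch: turning a detector's "likelihood fake" score into a
# policy decision. The scores and the 0.999 threshold are illustrative only;
# real detectors report a probability, not a verdict, and the threshold is
# exactly the standard administrators would have to set.

def should_pursue(detector_score: float, threshold: float = 0.999) -> bool:
    """Return True only when the detector's confidence meets the
    institution's chosen threshold for an academic-integrity case."""
    return detector_score >= threshold

# A 99.9% rating clears a 99.9% threshold, but 95% or 75% does not.
print(should_pursue(0.999))  # True
print(should_pursue(0.95))   # False
print(should_pursue(0.75))   # False
```

Lowering the threshold catches more cheating but risks accusing innocent students, which is why the cutoff is a policy choice rather than a technical one.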

As Dr. Hick suggests, a short-term stopgap might be to require alternative oral exams when students are suspected of cheating in this way. This could be very time consuming, but it's a means to an end.
