Greetings knowledge seekers! In today’s episode of EduPapers, we will be diving into an empirical study that examines how university teachers assess AI-written texts versus human student texts. In other words, could a machine take your home exams? Could your teacher detect it?
The study focuses on university teachers and lecturers across four departments: philosophy, law, sociology, and education. These disciplines from the humanities and social sciences rely heavily on open-ended written exams or home examinations as part of their assessment.
With ChatGPT’s uncanny ability to generate human-like text, teachers now face a perplexing predicament. How can we detect if a student’s exam response is their own original work versus AI-written?
The key questions of the paper were:
- How well does ChatGPT perform on home exams?
- What is ChatGPT’s impact on teachers’ assessment practices?
To answer these questions, the researchers had teachers grade exam responses without knowing whether each one was written by a real student or by ChatGPT.
Shockingly, ChatGPT's responses passed at rates ranging from 38% in education up to 86% in philosophy. In other words, ChatGPT's answers were most likely to be accepted in philosophy and least likely in education, with sociology and law falling in between.
Here’s where it gets really interesting. In certain cases, participants suspected that a text might have been written by a chatbot rather than a student. Suspicion rates ranged from 14% to 23%, triggered by textual indicators such as strange or nonsensical words, a lack of personal opinion, an impersonal tone, and repetition across responses.
In focus groups, researchers discovered an intriguing bias. Teachers seemed to be more critical of student-written texts, even grading them more harshly than anonymous ChatGPT responses of equal quality. Several participants suspected that some of the student texts were bot-written, either because they judged certain errors and inaccuracies to be un-human-like or because they thought a response was too good to have been written by a real student. In fact, those texts were all written by human students.
The qualitative part of the paper used a post-phenomenological approach grounded in philosophy-of-technology research and mediation theory. This approach studies the relations between humans, technology, and the world, emphasizing how technological artifacts shape human experience and action.
While risks of AI exploitation are evident, the researchers ultimately promote measured optimism rather than reactive banning of AI. They advocate adaptation through partnerships with technology.
The key takeaways from this paper are:
- ChatGPT can produce passing responses to undergraduate home exams.
- Chatbots may increase mistrust between teachers and students.
- More research on AI’s impact on assessment integrity is vital.
While limited by sample size and experimental design, this study provides important insights into ChatGPT’s potential for better or worse.
If you’re intrigued, you can find the full paper below. Stay curious as we keep exploring the AI age!
This is Dr. Yogila signing off.