A cheating scandal involving the use of artificial intelligence (AI) has shaken Yonsei University. Evidence suggests that some students used generative AI tools such as ChatGPT during online exams, sparking an urgent debate across universities about AI ethics in education.

College students’ use of AI is increasing. Photo = Yonhap News Agency

“Confess or Face Suspension”: Professor Warns 600 Students in AI Course

The professor of the “Natural Language Processing (NLP) and ChatGPT” course for third-year students at Yonsei’s Sinchon campus recently announced that “multiple cases of cheating have been detected,” warning, “students who voluntarily confess will receive a zero, while those who deny wrongdoing may face suspension.”

The course, with about 600 students enrolled, held its midterm online on October 15. Some students allegedly used AI tools or ran multiple programs simultaneously to cheat during the exam.

Bypassing Camera Monitoring… “Sophisticated Cheating” in the Blind Spots

The test consisted of multiple-choice questions, and participants were required to record their hands, face, and screen throughout the exam.

However, some students manipulated their camera angles to create blind spots. Others opened overlapping programs, captured screens, or switched windows repeatedly during the test.

Teaching assistants reviewed all exam recordings, identified suspicious cases, and urged students to come forward.

“More Than Half Cheated?”… Strong Evidence of AI Use

The exact number of students involved is still unclear, but rumors spread among students that “more than half” had cheated.

In a poll on the university community app Everytime, 211 of 387 respondents (excluding those who said they did not take the exam) admitted to cheating, while 176 said they had completed the exam on their own.

One student said, “Almost everyone used ChatGPT during the exam. I felt at a disadvantage if I didn’t.” Another added, “Many students used AI to look up answers last semester, too.”

According to the university, “Forty students have voluntarily confessed, while ten others suspected of cheating have not come forward. If their actions are confirmed, disciplinary measures will follow.”

AI Becomes Routine, but Universities Lack Ethics Guidelines

While AI adoption has rapidly increased in higher education, institutional responses have failed to keep pace.

A survey by the Korea Research Institute for Vocational Education and Training found that 91.7% of 726 university students had used AI for assignments or research. Yet, according to the Korean Council for University Education, 71.1% of 131 universities still lacked official AI usage guidelines.

As AI tools grow more sophisticated, they are shifting from learning aids to essential tools for exams and assignments—yet universities remain largely unprepared.

“AI Is Like a Motorcycle—Students Must Learn to Walk First”

Experts warn that excessive dependence on AI could hinder students’ critical thinking.

Professor Jeong Gi-in of Kyung Hee University’s Department of Korean Language and Literature said, “Students who should be learning to walk are instead riding motorcycles,” emphasizing the need for awareness of AI’s risks.

Kim Myung-ju, director of the AI Safety Research Institute, suggested requiring students to write their own reflections alongside AI-generated results to foster critical thinking.

Choi Byung-ho, from Korea University’s AI Research Institute, also urged adopting new forms of evaluation such as face-to-face presentations and in-depth discussions.

By Choi Song-aㅣchoesonga627@gmail.com

Copyright © KMJ. Unauthorized reproduction and redistribution prohibited.