This story is a part of The Northern Light’s investigative series into AI in education, which follows different groups at UAA and their experience with AI.
The use of AI in education is at an all-time high, but how is AI changing the way we interact with these programs? The Northern Light spoke with UAA conduct administrator Trevor Gillespie, who addressed instances of AI-assisted cheating and the evolution of AI.
Gillespie reviews academic misconduct reports submitted by faculty. If a report indicates a potential academic integrity issue, Gillespie and other faculty meet with the student to discuss the allegation and give the student an opportunity to respond.
Gillespie explained that he has seen a rise in suspected cases of AI-related academic dishonesty.
Not only is AI use rising on campus, but Gillespie said that students’ cheating methods have also evolved. “I’m not seeing reports of people copying from a third-party source, which I would normally see. I’m seeing AI instead,” said Gillespie.
Gillespie noted that UAA does not currently have any campus-wide policies regarding AI, but he believes that “at some point we’ll have to have more specific policies around artificial intelligence, particularly when it comes to academic misconduct.”
The only AI policies at UAA are set by individual educators, whose syllabi explain which AI tools are approved and how they may be used, if AI is allowed at all.
College campuses throughout the nation have seen instances of students wrongfully accused of having AI write their papers.
Gillespie explained that the greatest difficulty in misconduct evaluations is determining whether a student or an AI wrote a given piece of work. “If there’s doubt, I can’t move forward. AI keeps evolving and tools keep evolving. I found a new tool yesterday using AI that will likely cause more problems for academic misconduct.”
Gillespie follows a tedious regimen when trying to determine whether a student used AI to cheat, and most cases involve writing projects.
“I have to re-create their solution in the AI and see if I could figure out how they got the solution similarly. It’s kind of a puzzle for me.”
Gillespie uses this process because he does not want student work sent through an online AI checker; he has no control over where that information might end up.
Writing-related conduct cases are often an obstacle for Gillespie: the shorter the suspected writing, the harder it is for him to confirm academic misconduct.
Most integrity cases at UAA involve writing assignments, though Gillespie noted that could reflect a confirmation bias, since it is difficult to prove AI was used to generate math answers or programming code.
As AI advances, Gillespie said, concerns about its use in academic settings often fly under most people’s radar. He explained that certain components of Grammarly, a program commonly used by students, now utilize AI without the user even knowing.
These changes in AI use can certainly cause concern, but there may be a solution to consider: adapting to AI as a community as it becomes ingrained in everyday technology.
“I think there’s still a lot of adaptation going on and figuring out how to do it. I think it’s going to be the future of how we use AI; some sort of incorporation, teaching students how to use AI as a tool to help shape their work versus telling it to write their work,” said Gillespie.