The Two Most Important A.I.s of Our Time: Artificial Intelligence and Academic Integrity
Written by Alex Fisher
Thumbnail & Banner Photo by Growtika on Unsplash
Academic integrity violations are among the most serious transgressions a student can commit during their academic career. An offence such as plagiarism can have serious consequences: from scoring a zero on an assignment to losing a degree if the violation is discovered after the fact. In general, academic offences are quite easy to avoid. By properly citing sources, not copying work from other students, and avoiding plagiarism (including self-plagiarism), a student is unlikely to accidentally violate academic integrity. However, there is one tool that can make these violations much easier to commit: artificial intelligence, commonly known as AI.
In an effort to understand the dangers, as well as the possible benefits, posed by AI tools such as ChatGPT, I interviewed three individuals within the Saint Mary’s faculty, each of whom was able to offer a unique and valuable perspective on the topic.
The first person I interviewed was Suzanne van den Hoogen, who is a University Librarian, Copyright Officer, and, most recently, the Senior Academic Integrity Officer for Saint Mary’s. Her roles include teaching students about information literacy, providing workshops on what academic violations are and how to avoid them, teaching both students and faculty about copyright compliance and proper citations, and ensuring that any academic integrity investigations are done fairly and accurately.
The second interviewee was Dr. Leslie Digdon, an Assistant Professor cross-appointed to the Division of Engineering and the Department of History. She has also been an Academic Integrity Officer since September of 2020, and is therefore responsible for investigating any potential academic offences that a student might commit.
The final person I interviewed was Julian L’Enfant, who works within The Studio for Teaching and Learning. As an Educational Developer in Teaching Support, his role is to develop and host workshops and training sessions for Saint Mary’s faculty on teaching methods, developing course curriculums, and using technology to aid with teaching; one key part of which is the usage of certain AI tools such as ChatGPT and Microsoft’s Copilot to help with these goals and more.
ChatGPT, by far the most prominent and widely used generative AI, was launched in November of 2022. Since then, the usage of generative AI has skyrocketed to the point of becoming a common tool for entertainment, education, business, and more. In light of this massive expansion in less than two years’ time, I asked my interviewees what they could tell me about student usage of AI over that timeframe. Suzanne explained that there has been a lot of ongoing discussion on the topic, mostly defined by two highly polarized sides: those largely opposed to its usage, and those in favour of developing expertise in generative AI and integrating it into courses. Overall, Suzanne said, it has been an interesting time, as many within the academic community work to figure out both the benefits and challenges created by increased AI usage.
Dr. Digdon’s perspective was more strongly influenced by her interactions with students. “It’s really demonstrated that while AI will probably be a useful tool in the future,” she began, “right now there isn’t enough understanding of what it can or can’t do effectively.” Dr. Digdon went on to point out that students using ChatGPT and tools like it have been over-relying on its knowledge, trusting it to have the correct information when it often doesn’t, or even fabricates information outright. This has led to students copying and pasting their answers from ChatGPT rather than using it as a tool to aid their research, which, according to Dr. Digdon, is not only easy to detect but is also a form of plagiarism (and ChatGPT itself draws on material from all across the internet). This is likely to affect how courses are constructed and run in the future, Dr. Digdon explained, with the two main effects being a shift back towards in-person, written exams rather than online ones, and a move away from the types of smaller projects that ChatGPT excels at. Finally, Dr. Digdon wanted to note that the tools used to check student work for ChatGPT usage can often produce false positives, putting unnecessary stress on students who may be wrongly accused of committing academic violations as a result.
Julian’s perspective was a much more positive one. He noted that students had already been using various forms of AI for a number of years prior to the launch of ChatGPT. Tools such as Grammarly, Quillbot, Microsoft’s Immersive Reader and Translator tools, Presenter Coach, and more have been widely used, and fully accepted, for many years now, with some university faculty even encouraging their use. He also pointed out Saint Mary’s University’s official policy on the usage of artificial intelligence tools such as ChatGPT. On the topic of ChatGPT specifically, Julian noted that more and more students have been using it in recent months, turning to it not to write their assignments for them but rather as a search engine, much like Google. “However,” Julian wrote in our correspondence, “I think there is a growing divide between those who know how to use it effectively, use it occasionally, and those who don’t.”
When I asked how student usage of AI has changed over the past decade, Dr. Digdon told me that, “five to ten years ago, it wasn’t on our radar at all.” She expanded on this by explaining that whether it wasn’t available in its current form or simply wasn’t as widely used, generative artificial intelligence was not a significant concern compared to assignment-sharing sites like Chegg or CourseHero. She also explained that tools like GrammarlyGo are going to become a much bigger issue as time progresses, because the way they operate makes them harder to detect. “It’s hard to tell exactly where things will go,” she said. “It’s like how people’s ability to spell properly decreased as a result of spellcheck being invented; what will be the long-term result of AI tools?”
Expanding on this point, Julian described how tools like Grammarly have been available for some time and have not caused nearly this level of fuss within the academic integrity space. There are even several such tools offered by Google and Microsoft that many people may not know about. Yet despite this low profile, Julian explained, there has been significant development in these tools, to the point that a number of them are being integrated into existing applications, such as the integration of Microsoft’s Copilot into the Windows 11 operating system and the Microsoft Edge browser.
Suzanne pointed out that prior to the launch of ChatGPT, there was some talk about generative AI in the library space, but that it was nowhere near as prevalent as it is now. Tools like Grammarly or predictive text in document writers such as Google Docs have entirely different roles, and discussion about them was, and is, fairly limited. In light of this changing and evolving usage, Suzanne asked me to highlight that the library offers a number of workshops which can help students develop skills such as researching, paraphrasing, and citing information, as well as direct research help through a number of channels. Finally, there is also the Academic Integrity Foundations course available on Brightspace. The course is free for all students, is consistently updated (including new modules on AI), and teaches how to avoid academic dishonesty and integrity offences.
You might be asking whether generative artificial intelligence is, overall, a good thing or a bad thing for students. While there is no definitive answer, each of my interviewees had a unique perspective to offer.
Both Suzanne and Dr. Digdon said that it’s somewhere in between. Suzanne pointed out the existence of ChatGPT “hallucinations”, which can lead to the program citing entirely non-existent articles or stating blatantly false information. She also highlighted that the information these tools are trained on is out of date: up to two years behind in some cases, though some models, such as GPT-4, have reduced this gap to under a year. According to Dr. Digdon, right now, generative AI is more trouble than it’s worth. She said it will “very much be a learning process” as people learn not to simply trust the information presented by the AI, and instead to use it in a way that makes things easier for students without simply doing their assignments for them.
Julian’s take on AI was much more positive. He pointed out the potential for generative AI to personalize the way people learn and teach to best suit their exact style of learning, which he said benefits everyone. “When used ethically and effectively, AI can enrich the learning experience, making education more accessible, personalized, and engaging for students across diverse learning environments,” Julian said. He went on to highlight some of the ways AI can help educate not just students but anyone: conducting practice interviews or role-plays, summarizing content, helping a student ask effective questions, or switching between languages entirely. However, for this to happen effectively, he added, students need to know how to check an AI’s output and consider any biases it might have. Overall, Julian noted that AI can help students think more critically, and at a higher level, more quickly than traditional teaching methods allow. It is not just a tool: it has the potential to be a teacher, a mentor, a collaborator, a tutor, and so much more.
At the end of each interview, I asked whether the interviewee had any additional comments for students. All three urged students to check their course syllabi to see whether usage of these tools is allowed, and, if the information isn’t there, to simply ask their instructors.
Julian offered some advice for interested students. “To really get to know the technology, try it out,” he said. Experimenting with it and refining how you use it is the most effective way to learn the basics.
“Please interrupt us,” was Suzanne’s advice. She wanted to assure students that asking questions, or asking for help, is never a bad thing: in fact, it is the job of those who work in the library to help students, and the ultimate goal of the entire Saint Mary’s community is for you, the student, to become a critical thinker and to succeed in your degree.
Finally, Dr. Digdon stated that “these tools are only as good as the information you put into them.” She asked that students apply themselves before turning to generative AI: ask it to aid you, don’t ask it to do the work for you.
Overall, generative AI tools like ChatGPT have both benefits and downsides. When using them (assuming your instructor allows it), it is important to stay mindful of how you are using them and to carefully check any information they provide. To avoid academic violations, always double-check what the tool tells you, and ensure you are not plagiarizing by directly copying its output or by failing to cite the information it gives you.
Julian put it best when he said, “there is no hiding from AI - we can’t put it back in the box and hope it goes away.” Artificial intelligence is here to stay, and it is important that every student learns how to use it effectively to aid their learning, making the most of the possibilities it presents without falling into academic dishonesty.