SRSU Profs Circle the Wagons on ChatGPT
By Brooke Manuel, Skyline Editor
ALPINE – Anticipating the negative effects ChatGPT could have on student learning, professors are brainstorming ways to counter the new artificial intelligence software.
The artificial intelligence that fuels software such as ChatGPT may have seemed like science fiction just a few years ago. But the rapid pace of development is disrupting almost every area of life, including education.
Sul Ross State University assistant professor of English Audrey Taylor delivered a presentation on ChatGPT and academia, offering resources and strategies for combating artificial intelligence-fueled cheating. These include AI detectors, running assignments through ChatGPT in advance, learning the mistakes the software typically makes and shifting the type of work professors assign.
Plagiarism by ChatGPT is harder to detect than other forms of plagiarism, but multiple detection tools have already been designed specifically with ChatGPT in mind. These include GPTZero, GLTR (Giant Language Model Test Room) and GPT-2 Output Detector, which was developed by OpenAI, the same artificial intelligence company that created ChatGPT itself.
It is important to note that OpenAI’s artificial intelligence plagiarism detector, GPT-2 Output Detector, is still in its demo phase.
Each of these detectors of artificial intelligence plagiarism, or what many are calling ‘AIgiarism,’ has a margin of error of roughly 2-5%, so none of them is foolproof.
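For the technically inclined, these detectors are essentially text classifiers that estimate how likely a passage is to be machine-written. The sketch below shows roughly how the GPT-2 Output Detector mentioned above could be queried from Python; the model identifier and the “Real”/“Fake” label names are assumptions about the publicly released checkpoint, not details from Taylor’s presentation.

```python
# Minimal sketch: scoring a passage with the GPT-2 Output Detector model.
# Assumes the public "roberta-base-openai-detector" checkpoint is available
# on the Hugging Face Hub; the "Real"/"Fake" label names are assumptions.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

essay = "The causes of the French Revolution were numerous and complex..."

# The classifier only reads a limited amount of text, so score an excerpt.
result = detector(essay[:1500])[0]
print(f"{result['label']}: {result['score']:.1%}")
# A high "Fake" score suggests machine-generated text, but as noted above,
# no detector is foolproof; treat the score as one signal among several.
```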
Professors shouldn’t rely solely on any one AIgiarism detector. Instead, they should combine forms of detection, such as running their assignment prompts through ChatGPT in advance and using the resulting answers to learn how to recognize the software’s work on their own.
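The simplest way to do this is to paste an assignment prompt into ChatGPT’s web interface and read what comes back. For instructors who want to check many prompts at once, the sketch below shows roughly how the same step could be scripted against OpenAI’s API; the model name and the example prompt are placeholders, and this is an illustration of the idea rather than anything demonstrated in the presentation.

```python
# Sketch of running an assignment prompt through ChatGPT ahead of time so an
# instructor can see what an AI-written submission might look like.
# Requires the openai package and an OPENAI_API_KEY environment variable;
# the model name below is a placeholder for whatever model is available.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

assignment_prompt = (
    "In 500 words, analyze the use of rhetorical voice in a personal essay "
    "of your choice and explain how it shapes the reader's response."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": assignment_prompt}],
)

sample_answer = response.choices[0].message.content
print(sample_answer)
# Reading a few of these samples reveals the patterns Taylor and Briseño
# describe: generic phrasing, thin citations and little personal voice.
```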
Although ChatGPT’s writing looks sophisticated at first glance, it falls short in several notable ways.
The artificial intelligence software often gets facts wrong, and it lacks rhetorical voice as well as critical thinking skills, said Rosemary Briseño, an English professor at Sul Ross State University.
In addition to Briseño’s findings, Taylor said that the writing produced by ChatGPT lacks emotion, and the software is unable to make deep connections within its writing.
ChatGPT also provides incorrect citations and tends to falsify articles, Taylor said.
To adapt to the ever-evolving digital age, college professors and other educators alike must learn to change their coursework to lessen the likelihood of students using artificial intelligence to cheat.
In her presentation, Taylor offered several suggestions for doing so: include tasks that require critical thinking, creativity and practical application; assign creative projects in which students build physical objects; have students conduct surveys or experiments and analyze the results; and ask them to solve realistic problems.
“It’s going to be very different depending on your discipline,” Taylor said. “Some disciplines, this is going to be a lot easier than for others.”
One of the most impactful things professors can do is shift the conversation about education and artificial intelligence-assisted cheating, Taylor said. Too often, she said, education is framed as an act of performance in which students are tested through a series of assignments, rather than as actual learning.
“In education, the student is the product, not the buyer of a product, because you’re shaping how they think, and how they are able to learn, and what they are able to do in the future,” Taylor said.
Professors, along with people in many other professions, need to be flexible and innovative to combat the impending effects of ChatGPT.
“ChatGPT, as we see it now, is the earliest version of something that they already have further versions, further better versions done…So actually, what we’re going to have to deal with is even more sophisticated than what they have shown so far,” Taylor said.