By Richard Burnett
Earlier this year, Dr. James Fleming was teaching a course in assistive technology when he “met” ChatGPT, thanks to an enterprising student. As Fleming watched, the student chatted with the artificial intelligence (AI) chatbot, researched topics, and generated realistic conversational dialogue.
Fleming was fascinated: “I told him, ‘Show me more!’” said the professor and chair of Beacon College’s business & technology department. “Then I started playing with it and I couldn’t stop. I found it a great starting point for doing research and compiling information you can springboard from into your work.”
In that moment, Fleming experienced firsthand a technology that, in the words of one national publication, “sent shockwaves across college campuses” last fall after its developer — San Francisco-based OpenAI — released a free version that went viral. Since then, educators have been looking at ways to harness its benefits, while maintaining safeguards against its drawbacks.
For some, the rise of ChatGPT (and similar AI tools) could represent as much a threat to education as an asset. They worry the technology could usher in a new age of high-tech plagiarism with its ability to generate research papers and other forms of writing that students could submit as original work.
That has put Beacon and other colleges on high alert as they work through the issues around ChatGPT/AI and fine-tune policies regarding the pros and cons of its use.
Well-prepared for challenges
“I feel we are well-prepared for the challenges of ChatGPT,” said Beacon Provost Dr. Shelly Chandler. “As a small college, we know our students; and our faculty members really get to know their writing skills. Through our writing center, we can track their progress from the first day they start classes at Beacon, so I’m confident we’ll recognize if anything looks suspicious.”
As a case in point, Chandler said a faculty member recently showed her a written assignment, submitted by a student in mid-March, that had clearly been plagiarized, possibly with the help of an AI program. The writing style was nothing like the student’s own.
“It just wasn’t very good at all,” she said. “And I knew without a doubt it wasn’t the student’s work. It was so disappointing. I know the student is capable, but maybe just took a shortcut under pressure to get the assignment done right before spring break. When this happens, we have a plan in place that is posted in the syllabus for every class, so the students know the consequences.”
Beacon also deploys third-party anti-plagiarism software as part of the college’s online learning management system. Assignments submitted through the system are scanned by the software, which flags authenticity problems and may also be effective at detecting work generated by AI programs like ChatGPT.
If that’s not enough to keep students from plagiarizing via ChatGPT, perhaps the red flags raised by its own developer, OpenAI, would give them pause. On its website, the company warns users about the system’s limitations:
“ChatGPT sometimes writes plausible-sounding, but incorrect or nonsensical answers… currently has no [ultimate] source of truth [for its answers]… and is often excessively verbose and overuses certain phrases,” the company says.
Translated, that means ChatGPT may sound authoritative while giving outright wrong or factually suspect answers in long, wordy phrasing. In short, it can be good at generating a crock of baloney, and it is certainly not something you can trust to cut and paste into a research paper.
“Yes, you have to be aware that it’s much like Wikipedia, where anyone can submit information that may or may not be accurate,” said Fleming, the business and technology professor. “When you insert the human aspect into the AI, you can sometimes get information that can lead you astray.”
On the positive side, there are clearly some legitimate uses of ChatGPT/AI that could benefit both faculty and students, Chandler said. For example, it could be used to develop lesson plans, brainstorm research project ideas, or do real-time research during class lectures or discussions.
She expects to sort it all out at the April faculty meeting and come up with a policy for the new landscape created by ChatGPT/AI.
“We hope to achieve some clarity at that meeting and have the new policy in place by next fall,” Chandler said. “Clearly, we need to have that kind of clarity and direction to be prepared for the future.”
The future is here – Awareness of ChatGPT
For many Beacon students, however, the future is now – as awareness of ChatGPT has spread from educational circles to popular culture.
That hit home recently for Dr. A.J. Marsden, associate professor of human services and psychology at Beacon. She heard some of her students all abuzz after seeing ChatGPT featured on “South Park,” the often-rowdy animated sitcom popular with many young adults.
“There was a lot of talk about the ‘South Park’ episode in which one of the characters used ChatGPT to text his girlfriend,” she said. “So, they are certainly more aware of ChatGPT now, and they are definitely playing with it.”
In addition to the potential for more plagiarism, Marsden is concerned that ChatGPT/AI could contribute to the overall decline in student writing skills, a problem that has grown for years amid the rise of texting and other tech-based forms of writing.
“I’ve seen that decline in writing skills over the years — the inability to stay on topic even in a five-page paper, to organize information and write coherently so it makes sense,” she said. “So, from that perspective, I think ChatGPT appears to be just another thing we have to look out for in the classroom, something that could potentially harm students’ progress in writing.”
Marsden is encouraged that Beacon leaders have put ChatGPT/AI on the front burner in terms of raising awareness, finding solutions and setting a policy.
“We need to know how to catch this and how to use it,” she said. “I know some academics may be shying away from it, but that’s not the way to deal with AI. If you ignore it, it will find a way into the classroom and will eventually broadside you.”