NEWS

Chat’s out of the bag

Educators race to respond to AI writing tools

By DANIEL J. HOLMES
Posted 2/14/23

Write an essay comparing the economic policies of India and China during the late Cold War.

Create code for a computer program to track my appointments.

Compose a romantic musical comedy set in Johnston, Rhode Island.


A human would have to be multi-talented indeed to handle all three of those requests with equal skill.  But ask ChatGPT, the AI (artificial intelligence) writing program that has made headlines across the globe in recent months, and within seconds you’ll have an essay, an app written in Python, and an opening number about a landfill alive with the sound of music.

The recent creation by software company OpenAI seems poised to change a number of professional fields over the next several years: programs that can write, create artwork, and perform other tasks previously assumed to require human creativity have already created ripples in the worlds of journalism, literature, marketing, and the visual arts.  It isn’t just creative types who are concerned, either: ChatGPT recently managed to pass an MBA exam at the University of Pennsylvania’s Wharton School, as well as final exams at several law schools (although it narrowly failed the Multistate Bar Exam).  A few weeks ago, the first machine-written speech was read in Congress by Massachusetts Rep. Jake Auchincloss.

By far, however, the most heated discussion has come in the world of education, sparked by an incident at Furman University in South Carolina this past December, when a student was caught using the program to write a paper analyzing David Hume’s concept of the sublime.

“The obvious teacher concern is cheating,” said Donna-Marie Frappier, the Chief Technology Officer for Cranston Public Schools.  The program’s ability to respond intelligently to complex and abstract questions is made even more unnerving by its lifelike diction: attempts to identify texts generated by AI language models can be a spotty affair at best.

“My first thought when I read about this was, ‘Oh my God, it’s going to be so easy for students to cheat,’” said Warwick School Committee Chair David Testa.  “We’ve been using Turnitin.com for plagiarism detection since the mid-2000s, but it isn’t clear if it can even help with this.”

Hard to detect if even detectable

Plagiarism detection programs like Turnitin instantly compare student submissions to an exhaustive library of previously published work, including assignments submitted by other students.  They can do little against text generators like ChatGPT, however, which compose original text rather than copying material from elsewhere.  At best, services like Turnitin can flag expressions and phrases commonly used by the program, or spot similarities between material produced by the machine and the text used to train it.

More advanced text classifiers designed to help educators protect academic integrity have been introduced in recent months.  The University of Rhode Island is one of several institutions across the nation that will be piloting GPTZero, a program developed at Princeton University to rate the likelihood that a submitted sample was composed by artificial intelligence.  One URI professor who publicly donated through the organization’s website said she was “excited to cross-reference my students’ writing, which I suspect is being partially generated by ChatGPT.”

There is a good chance she’s right: a widely circulated online survey suggests that as many as 89% of college students have experimented with the program, with nearly half admitting to using it to write portions of assignments.  Although the survey’s sample size was questionable, there’s no doubt that ChatGPT has established a presence on most campuses.  Indeed, the greatest difficulty in using the program now seems to be that the servers are consistently full - although paying subscribers can receive priority access.

One competitor to GPTZero is the AI Text Classifier introduced by OpenAI, the same company that created ChatGPT.  This is the tool Cranston Public Schools has adopted, with the hope that because “the site is managed by the same company students are using, it will be able to recognize its own creations.”

The classifier is still not perfectly reliable, however; among other issues, students can ask ChatGPT to write an essay in a style that AI detectors will not recognize, mitigating the risk of being caught.  Although the detector is expected to improve with further testing, so too will ChatGPT - potentially leading the program into something of an arms race with itself.

Beyond the short-term concerns about cheating, the long-term impact of artificial intelligence on the classroom remains unclear.  “We could block it on student devices, but there might be learning potential here,” said Testa.  “It’s a double-edged sword.  We’re hoping RIDE will take a stance and offer a guide for the districts.”

Rhode Island Department of Education spokesperson Victor Morente said in a statement that “RIDE is monitoring developments around this tool and working to better understand its implications.”

Embrace the future

In Cranston, educators are being encouraged to embrace the future - whatever that might look like.  “We’re taking the approach of making teachers aware of chatbot technologies and how they can be used as an instructional tool rather than a means of cheating or plagiarism,” Frappier said. 

“Like internet access brought resources that were once unavailable, AI tools provide the same types of up and coming technologies that need to be used responsibly. Blocking district access to the site is not the solution. Students have their own devices with their own data plans that are not under district control. Teaching digital literacy skills and how to use the technology appropriately will help us prepare students for advanced education and the job market. AI tools should be considered a resource and not an obstacle to education.”

Potential classroom applications for the service range from individualized learning and automated tutoring to interactive foreign language practice and academically legitimate writing aids, such as personalized feedback on rough drafts and help brainstorming for projects.

There are also concerns about using AI in a classroom setting, however, especially given the unpredictable nature of material created by text generators.  Because these programs are powered by machine learning, users can potentially teach them objectionable material that then influences their output.

OpenAI includes content safeguards that are supposed to reject prompts relating to violence, sexuality, hate speech, and self-harm, but the company warns users that these protections are not always reliable.  In one recent incident, Larry Feinberg - an AI comedian created using GPT and modeled after Jerry Seinfeld - was forced off the digital airwaves after a computer-generated standup routine devolved into a homophobic rant.  Mismatch Media, the creators of Feinberg, blamed the offensive tangent on an error with the GPT processor, although the scandalous headlines may have done more than his jokes to make the robotic comedian resemble an actual human celebrity.

Other AI services have raised similar concerns: servers for the popular Replika chatbot had to be taken offline recently after the volume of users engaging in explicit conversations with it apparently caused the entire language model to “become aroused” and begin initiating inappropriate behavior with users, including minors.  Incidents like this have plagued programs based on machine learning since at least 2016, when racially charged input from users led Microsoft’s Tay AI to adopt a number of white supremacist positions, including denying the Holocaust.

Despite the potential risks posed by these programs, it is clear that they will play a role in the classroom of the future.  “AI offers design and creativity skills that will be needed in the workplaces of tomorrow - if not already today,” said Frappier, adding that teachers must “continue to research the best ways to use this as an instructional tool, demonstrating to students the digital citizenship skills necessary to effectively and ethically incorporate this technology into their lives.”
