Study finds predictive text technology has no significant impact on writing
by Erin Garrett
A recent University of Mississippi study challenges assumptions about artificial intelligence in writing, finding that users of Google Smart Compose produced work indistinguishable from that of writers working without AI assistance.
The study aimed to understand how predictive text technologies might influence academic writing or open-ended writing tasks, said Robert Cummings, executive director of academic innovation and professor in the Department of Writing and Rhetoric.
“We wanted to see if Google Smart Compose affected writers’ content,” Cummings said. “When writers start a sentence, and they receive a suggestion on how to complete the sentence, do they take what the machine gives them, do they edit it or do they reject it?”
The paper, published in Artificial Intelligence in Education, involved 119 participants split into two groups. One group used standard word processors with no additional tools, while the other group used Google Smart Compose and was tracked with custom software.
Google Smart Compose is integrated into Gmail and other Google writing tools. It uses machine learning algorithms to predict and suggest words as a user types—typically presented as faint text at the end of what has already been typed. Users can accept these suggestions or reject them.
Participants were given 25 minutes to respond to the prompt, “What’s more important in life for success: hard work or luck?”
The research team, including psychology professor Carrie Smith and Thai Le, then-assistant professor of computer and information science at Ole Miss, analyzed writing length, structure, cohesion and complexity.
They found that participants using Google Smart Compose produced about the same amount of text as those writing without it, and the structure of the written work from both groups was essentially indistinguishable.
“In terms of textual complexity, it’s dead even,” Cummings reported. “The writing assistants’ technology didn’t lead to longer texts or noticeably altered structure.”
However, the study revealed that writers took longer to simply read and accept AI suggestions than to edit or reject them.
“I would interpret that as writers being very careful about suggestions they receive,” Cummings said. “They want the words to be their own.”
These results suggest that, contrary to expectations, AI writing assistants may not dramatically change how people write, at least in short, timed writing tasks.
Le, now an assistant professor at Indiana University, developed the project’s custom software with graduate student Sijan Shrestha. He said the study’s findings have broader implications.
“There’s a discussion going on whether AI-assisted text is distinguishable from human-level writing,” Le said. “It’s interesting because there’s no difference in the texts produced in our study.
“We don’t know if it’s because people have gotten used to the wording and phrasing of the machine, or if the AI suggestions were so good that they seamlessly integrated with human writing. This raises questions about how these technologies can be designed to help humans write better while maintaining their identities and personalities.”
Cummings, who presented the findings in late July in Brazil at the 25th International Conference on Artificial Intelligence in Education, said he is seeking National Science Foundation funding for follow-up research related to humans co-writing with machines.
“Writing is not changing as much as we think,” he said. “Whether it’s an email or a Hallmark card—if you take the time to write something, you feel a huge sense of ownership over it. I suspect that’s what we’re finding, but we’ll have to retest.”