Perhaps Academics Could Be Replaced by AI?! By James Reed

It seems that academics are beginning to use AI chatbots to churn out their articles: at present, about one percent of even scientific papers show signs of AI involvement in the writing. For the Arts/Humanities, that figure will only grow, as postmodern nonsense can be readily generated by a machine even now.

So the question arises: why pay academics high salaries when the money could go to feeding our homeless, whose ranks the universities' greed in the international student marketplace has swelled anyway? Simply replace academics with AI, and no one will notice the difference, even in lectures, which are increasingly online now.

https://www.scientificamerican.com/article/chatbots-have-thoroughly-infiltrated-scientific-publishing/

"One percent of scientific articles published in 2023 showed signs of generative AI's potential involvement, according to a recent analysis.

Researchers are misusing ChatGPT and other artificial intelligence chatbots to produce scientific literature. At least, that's a new fear that some scientists have raised, citing a stark rise in suspicious AI shibboleths showing up in published papers. Some of these tells—such as the inadvertent inclusion of "certainly, here is a possible introduction for your topic" in a recent paper in Surfaces and Interfaces, a journal published by Elsevier—are reasonably obvious evidence that a scientist used an AI chatbot known as a large language model (LLM). But "that's probably only the tip of the iceberg," says scientific integrity consultant Elisabeth Bik. (A representative of Elsevier told Scientific American that the publisher regrets the situation and is investigating how it could have "slipped through" the manuscript evaluation process.) In most other cases AI involvement isn't as clear-cut, and automated AI text detectors are unreliable tools for analyzing a paper.

Researchers from several fields have, however, identified a few key words and phrases (such as "complex and multifaceted") that tend to appear more often in AI-generated sentences than in typical human writing. "When you've looked at this stuff long enough, you get a feel for the style," says Andrew Gray, a librarian and researcher at University College London.

LLMs are designed to generate text—but what they produce may or may not be factually accurate. "The problem is that these tools are not good enough yet to trust," Bik says. They succumb to what computer scientists call hallucination: simply put, they make stuff up. "Specifically, for scientific papers," Bik notes, an AI "will generate citation references that don't exist." So if scientists place too much confidence in LLMs, study authors risk inserting AI-fabricated flaws into their work, mixing more potential for error into the already messy reality of scientific publishing.

Gray recently hunted for AI buzzwords in scientific papers using Dimensions, a data analytics platform that its developers say tracks more than 140 million papers worldwide. He searched for words disproportionately used by chatbots, such as "intricate," "meticulous" and "commendable." These indicator words, he says, give a better sense of the problem's scale than any "giveaway" AI phrase a clumsy author might copy into a paper. At least 60,000 papers—slightly more than 1 percent of all scientific articles published globally last year—may have used an LLM, according to Gray's analysis, which was released on the preprint server arXiv.org and has yet to be peer-reviewed. Other studies that focused specifically on subsections of science suggest even more reliance on LLMs. One such investigation found that up to 17.5 percent of recent computer science papers exhibit signs of AI writing.

Those findings are supported by Scientific American's own search using Dimensions and several other scientific publication databases, including Google Scholar, Scopus, PubMed, OpenAlex and Internet Archive Scholar. This search looked for signs that can suggest an LLM was involved in the production of text for academic papers—measured by the prevalence of phrases that ChatGPT and other AI models typically append, such as "as of my last knowledge update." In 2020 that phrase appeared only once in results tracked by four of the major paper analytics platforms used in the investigation. But it appeared 136 times in 2022. There were some limitations to this approach, though: It could not filter out papers that might have represented studies of AI models themselves rather than AI-generated content. And these databases include material beyond peer-reviewed articles in scientific journals."
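To make the kind of screening described in the excerpt concrete, here is a minimal Python sketch of an indicator-phrase search in the spirit of Gray's and Scientific American's analyses. It is only an illustration: the phrase and word lists below are examples taken from the article, not a validated detector, and the real investigations queried databases such as Dimensions rather than scanning raw text like this.

```python
# Minimal sketch (not Gray's or Scientific American's actual tooling):
# flag a paper's text if it contains "giveaway" chatbot phrases, and
# count how often AI-indicator buzzwords appear. The lists here are
# illustrative examples drawn from the article, not a validated set.
import re
from collections import Counter

# Phrases a chatbot tends to append verbatim ("giveaway" tells).
GIVEAWAY_PHRASES = [
    "certainly, here is a possible introduction for your topic",
    "as of my last knowledge update",
]

# Words Gray found disproportionately common in LLM output.
INDICATOR_WORDS = {"intricate", "meticulous", "commendable"}

def screen_text(text: str) -> dict:
    """Return giveaway phrases found and indicator-word counts."""
    lowered = text.lower()
    hits = [p for p in GIVEAWAY_PHRASES if p in lowered]
    words = re.findall(r"[a-z]+", lowered)
    counts = Counter(w for w in words if w in INDICATOR_WORDS)
    return {"giveaway_phrases": hits, "indicator_counts": dict(counts)}

if __name__ == "__main__":
    sample = ("Certainly, here is a possible introduction for your topic: "
              "the intricate and meticulous interplay of surfaces...")
    print(screen_text(sample))
```

As the article itself stresses, matching on such words is a rough stylistic heuristic, not proof of AI authorship, and automated AI text detectors remain unreliable.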

 
