Suara Malaysia
Monday, December 23, 2024

    Flood of ‘junk’: How AI is changing scientific publishing


    PARIS: An infographic of a rat with preposterously huge genitals. Another showing human legs with far too many bones. An introduction that begins: “Certainly, here is a possible introduction for your topic”.

    These are a few of the most egregious examples of AI-generated content that have recently made their way into scientific journals, shining a light on the wave of AI-generated text and images washing over the academic publishing industry.

    Several experts who track down problems in studies told AFP that the rise of AI has turbocharged the existing problems in the multi-billion-dollar sector.

    All the experts emphasised that AI programmes such as ChatGPT can be a helpful tool for writing or translating papers – if thoroughly checked and disclosed.

    But that was not the case for several recent papers that somehow slipped past peer review.

    Earlier this year, a clearly AI-generated graphic of a rat with impossibly huge genitals was shared widely on social media.

    It was published in a journal of academic giant Frontiers, which later retracted the study.

    Another study was retracted last month for an AI graphic showing legs with odd multi-jointed bones that resembled hands.

    While these examples were images, it is thought to be ChatGPT, a chatbot launched in November 2022, that has most changed how the world’s researchers present their findings.

    A study published by Elsevier went viral in March for its introduction, which was clearly a ChatGPT prompt that read: “Certainly, here is a possible introduction for your topic”.

    Such embarrassing examples are rare and would be unlikely to make it through the peer review process at the most prestigious journals, several experts told AFP.


    Tilting at paper mills

    It is not always so easy to spot the use of AI. But one clue is that ChatGPT tends to favour certain words.

    Andrew Gray, a librarian at University College London, trawled through millions of papers searching for the overuse of words such as meticulous, intricate or commendable.

    He determined that at least 60,000 papers involved the use of AI in 2023 – over 1% of the annual total.
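A screen like Gray’s can be approximated by comparing how often these marker words appear in a paper against a historical baseline rate. The sketch below is purely illustrative: the word list comes from the article, but the baseline rate and threshold are made-up assumptions, not Gray’s actual methodology.

```python
import re
from collections import Counter

# Words the article reports as overused in AI-generated text
MARKER_WORDS = {"meticulous", "intricate", "commendable"}

def marker_rate(text: str) -> float:
    """Fraction of word tokens that are 'marker' words."""
    tokens = re.findall(r"[a-z]+", text.lower())
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    return sum(counts[w] for w in MARKER_WORDS) / len(tokens)

def looks_suspicious(text: str,
                     baseline: float = 1e-4,
                     factor: float = 10.0) -> bool:
    """Flag text whose marker-word rate far exceeds a (hypothetical)
    pre-ChatGPT baseline rate; both thresholds are illustrative."""
    return marker_rate(text) > baseline * factor
```

A single flagged word proves nothing; the signal only emerges in aggregate, across many papers, which is why Gray worked from millions of them.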

    “For 2024 we are going to see very significantly increased numbers,” Gray told AFP.

    Meanwhile, more than 13,000 papers were retracted last year, by far the most in history, according to the US-based group Retraction Watch.

    AI has allowed the bad actors in scientific publishing and academia to “industrialise the overflow” of “junk” papers, Retraction Watch co-founder Ivan Oransky told AFP.

    Such bad actors include what are known as paper mills.

    These “scammers” sell authorship to researchers, pumping out vast amounts of very poor quality, plagiarised or fake papers, said Elisabeth Bik, a Dutch researcher who detects scientific image manipulation.

    Two percent of all studies are thought to be published by paper mills, but the rate is “exploding” as AI opens the floodgates, Bik told AFP.

    This problem was highlighted when academic publishing giant Wiley purchased troubled publisher Hindawi in 2021.

    Since then, the US firm has retracted more than 11,300 papers related to special issues of Hindawi, a Wiley spokesperson told AFP.

    Wiley has now introduced a “paper mill detection service” to detect AI misuse – which itself is powered by AI.


    ‘Vicious cycle’

    Oransky emphasised that the problem was not just paper mills, but a broader academic culture which pushes researchers to “publish or perish”.

    “Publishers have created 30% to 40% profit margins and billions of dollars in profit by creating these systems that demand volume,” he said.

    The insatiable demand for ever-more papers piles pressure on academics who are ranked by their output, creating a “vicious cycle”, he said.

    Many have turned to ChatGPT to save time – which is not necessarily a bad thing.

    Because nearly all papers are published in English, Bik said that AI translation tools can be invaluable to researchers – including herself – for whom English is not their first language.

    But there are also fears that the errors, inventions and unwitting plagiarism by AI could increasingly erode society’s trust in science.

    Another example of AI misuse came last week, when a researcher discovered that what appeared to be a ChatGPT-rewritten version of one of his own studies had been published in an academic journal.

    Samuel Payne, a bioinformatics professor at Brigham Young University in the United States, told AFP that he had been asked to peer review the study in March.

    After realising it was “100% plagiarism” of his own study – but with the text seemingly rephrased by an AI programme – he rejected the paper.

    Payne said he was “shocked” to find the plagiarised work had simply been published elsewhere, in a new Wiley journal called Proteomics.

    It has not been retracted. – AFP

    Wan