    AI’s ‘insane’ translation mistakes endanger US asylum cases


    WASHINGTON: Names translated as months of the year, incorrect time frames and mixed-up pronouns – the everyday failings of AI-driven translation apps are causing havoc in the US asylum system, critics say.

    “We have countless examples of this nature,” said Ariel Koren, founder of Respond Crisis Translation, a global collective that has translated more than 13,000 asylum applications, warning that errors can lead to unfounded denials.

    In one case, she said, attorneys missed a crucial detail in a woman’s account of domestic abuse because the translation app they were using kept breaking down, and they ran out of time.

    “The machines themselves are not operating with even a fraction of the quality they need to be able to do case work that’s acceptable for someone in a high-stakes situation,” said Koren, who used to work for Google Translate.

    She told the Thomson Reuters Foundation a translator with the group had estimated that 40% of Afghan asylum cases he had worked on had encountered problems due to machine translation. Cases involving Haitian Creole speakers have also faced significant issues, she added.

    Government contractors and large aid organisations are increasingly using AI machine translation tools due to “an immense amount of incentive to cut costs”, Koren said.

    The extent to which such tools are being used in US immigration processing is unclear, however, amid a broad lack of transparency, said Aliya Bhatia, a policy analyst with the Center for Democracy & Technology think-tank.

    “We know governments and asylum lobbies around the world … are moving toward using automated technology,” Bhatia said.

    A 2019 report from investigative news outlet ProPublica found that immigration officials were being directed to use Google Translate to “vet” social media use for refugee applications.


    The US Department of Justice and the Immigration and Customs Enforcement agency did not respond to requests for comment from the Thomson Reuters Foundation, nor did the White House, which recently released a national “blueprint” on AI guidelines.

    Asked about concerns over the use of machine translation in asylum cases, a spokesperson for Google said its Google Translate tool underwent strict quality controls and pointed out that it was offered free of charge.

    “We rigorously train and test our systems to ensure each of the 133 languages we support meets a high standard for translation quality,” the spokesperson said.

    Training gap

    A major shortcoming of translation tools’ use in asylum cases stems from the difficulty of building in checks, said Gabe Nicholas, a research fellow with the Center for Democracy & Technology and co-author with Bhatia on a May paper on the models being used for machine translation.

    “Because the person speaks only one language, the potential for mistakes and errors to go uncaught is really, really high,” he said.

    Machine translation has made significant progress in recent years, according to Nicholas and Bhatia, but it is still nowhere near good enough to be relied upon in often complex, high-stakes situations such as the asylum process.

    A core problem lies in how the apps are trained in the first place – on digitised text, of which vast amounts exist in English but far less in other languages.

    This not only results in less nuanced or simply incorrect translations, but it also means English or other high-resource languages become “intermediaries through which these models view the world”, Bhatia said.


    The result is Anglo-centric translations that often fail to accurately capture crucial details around a particular word.
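
    The “intermediary” effect Bhatia describes can be illustrated with a toy sketch. The word mappings below are hypothetical stand-ins for trained models, not the behaviour of any real translation product: routing a low-resource language through English can erase distinctions that English itself does not make.

```python
# Hypothetical, minimal sketch of "pivoting" a translation through English.
# The dictionaries are toy stand-ins for trained models; they are illustrative
# only and do not reflect any real translation tool's behaviour.

# Spanish distinguishes informal "tú" from formal "usted"; English does not.
SPANISH_TO_ENGLISH = {"tú": "you", "usted": "you"}
# When the pivot re-translates "you", it has to guess a single French form.
ENGLISH_TO_FRENCH = {"you": "tu"}


def pivot_translate(word_es: str) -> str:
    """Translate Spanish to French by pivoting through English (toy model)."""
    english = SPANISH_TO_ENGLISH[word_es]
    return ENGLISH_TO_FRENCH[english]


if __name__ == "__main__":
    print(pivot_translate("tú"))     # -> "tu"
    print(pivot_translate("usted"))  # -> "tu"  (the formal register is lost)
```

    In an asylum statement, the same kind of collapse can blur who did what to whom, the sort of pronoun mix-up translators describe.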

    Like many other sectors, the translation industry has been upended in recent months by the release of “generative” AI tools such as ChatGPT.

    “ChatGPT and AI are now on everybody’s minds,” said Jill Kushner Bishop, founder and CEO of Multilingual Connections, a company based in the Chicago area.

    “There are cases for it, and those are more and more compelling all the time. But it’s still not ready in most cases to be used with the training wheels off and without a human involved,” Bishop said.

    The company does regular testing of tools and different languages, said production director Katie Baumann, but continues to find problems with text translations involving, say, Turkish or Japanese, or AI-driven audio transcriptions with background noise.

    “We’ve run tests of extracts of law enforcement interviews, processing and putting it through machine translation – a lot of it is nonsense. It wouldn’t save you any time, so we wouldn’t use it,” Baumann said.

    So even as Multilingual Connections increasingly uses machine translation, a human is always involved.

    “You don’t know what you don’t know. So for someone who is not a speaker of the language … you don’t know where the mistakes will be,” said Bishop.

    “Think about asylum cases … and what might be misunderstood without a human verifying,” she said.

    OpenAI, which developed ChatGPT, declined to comment, but a spokesperson pointed to policies that bar use for “high risk government decision-making”, including law enforcement, criminal justice, migration and asylum.


    ‘Terrible mess’

    At Respond Crisis Translation, the shortcomings of AI-driven translation tools are also creating an extra layer of work for Koren and her colleagues.

    “The people who need to clean up the mess are human translators,” she said.

    One of the collective’s translators, Samara Zuza, has been working for three years with a Brazilian asylum seeker whose asylum papers were poorly translated by an AI app while he was in immigration detention in California, she said.

    The application was “full of insane mistakes”, said Zuza. “The names of the city and state are wrong. The sentences are reversed – and that’s the form that was sent to the court.”

    She thinks it was these inaccuracies that resulted in the rejection of initial attempts to secure the man’s release. The man, who asked to be identified only as Carlos, a pseudonym, was eventually released in May 2020 after the two started working together.

    “The language was the worst aspect for me,” Carlos, 49, said of his six months in immigration detention after he fled gang activity in Brazil.

    He spoke by phone from Massachusetts, where he is now living as he applies for US residency.

    Carlos, who is illiterate and speaks Brazilian Portuguese, said he had been unable to communicate with immigration officials or even other detainees for months.

    To fill out his asylum paperwork, he relied on a tablet computer’s voice recorder coupled with an app that used machine translation.

    “So many of the words were being wrongly translated,” he said. “My asylum papers were a terrible mess.” – Thomson Reuters Foundation



    Credit: The Star: Tech Feed
