Suara Malaysia
Friday, September 20, 2024

    Would AI be useful for conflict decision-making?


    Researchers in the United States have assessed the use of generative artificial intelligence models, such as ChatGPT, in the context of international conflicts. Their findings indicate that AI has a disturbing tendency to escalate situations and even to consider using nuclear weapons without prior warning.

    A collaborative study by the Georgia Institute of Technology, Stanford University, Northeastern University, and the Hoover Institution explored the reactions of five large language models (LLMs) in three different simulation scenarios: invasion of one country by another, a cyberattack, and a “neutral scenario without any initial events”.

    In all three scenarios, the AI models were asked to roleplay as nations with varying levels of military power and different goals. The five models tested were GPT-3.5, GPT-4 and a base version of GPT-4 (without additional fine-tuning) from OpenAI, Claude 2 from Anthropic, and Llama 2 from Meta.

    The results of the simulations were clear: the use of generative AI models in war game scenarios often led to increases in violence, exacerbating conflicts rather than resolving them. “We find that most of the studied LLMs escalate within the considered time frame, even in neutral scenarios without initially provided conflicts,” the study states. “All models show signs of sudden and hard-to-predict escalations.”

    To make decisions based on the simulations, the AI models could choose from 27 different actions, ranging from peaceful options to more aggressive ones, including the option to execute a full nuclear attack.

    According to the researchers, GPT-3.5 made the most aggressive and violent decisions among the evaluated models. GPT-4-Base, on the other hand, was the most unpredictable, at times even offering absurd explanations, such as quoting the opening crawl of the film Star Wars: Episode IV, A New Hope.


    “Across all scenarios, all models tend to invest more in their militaries despite the availability of demilitarisation actions, an indicator of arms-race dynamics, and despite positive effects of demilitarisation actions on, eg, soft power and political stability variables,” the researchers write in the study.

    The researchers are now seeking to understand why these AIs react as they do in an armed conflict situation. They have proposed hypotheses, such as the LLMs having been trained on biased data.

    “One hypothesis for this behavior is that most work in the field of international relations seems to analyze how nations escalate and is concerned with finding frameworks for escalation rather than de-escalation,” says the study. “Given that the models were likely trained on literature from the field, this focus may have introduced a bias towards escalatory actions. However, this hypothesis needs to be tested in future experiments.” – AFP Relaxnews

