    People are disinformation’s biggest problem, not AI, experts say

    Lawmakers, fact-checking organisations, and some tech companies are collaborating to combat the threat of a new wave of AI-generated disinformation online. However, experts say these efforts are hampered by the public’s distrust of institutions and a general inability to identify fake images, videos, and audio clips online.

    “Social media and human beings have made it so that even when we come in, fact check and say, ‘nope, this is fake,’ people say, ‘I don’t care what you say, this conforms to my worldview,’” said Hany Farid, an expert in deepfake analysis and a professor at the University of California, Berkeley.

    “Why are we living in that world where reality seems to be so hard to grip?” he said. “It’s because our politicians, our media outlets, and the internet have stoked distrust.”

    Farid made these comments on the first episode of a new season of the Bloomberg Originals series AI IRL.

    Experts have been warning for years about the potential for artificial intelligence to accelerate the spread of disinformation. However, the urgency has increased significantly with the arrival of a new set of powerful generative AI tools that make it cheap and easy to produce visuals and text. In the US, there are concerns that AI-generated disinformation could influence the 2024 presidential election. In Europe, meanwhile, new legislation requires major social media platforms to combat the spread of disinformation on their services.

    The extent and impact of AI-generated disinformation are still uncertain, but there are reasons for concern. Bloomberg reported last week that misleading AI-generated deepfake voices of politicians were being circulated online days before a closely contested vote in Slovakia. Some politicians in the US and Germany have also shared AI-generated images.

    Rumman Chowdhury, a fellow at the Berkman Klein Center for Internet & Society at Harvard University and previously a director at X (formerly known as Twitter), agrees that human fallibility contributes to the difficulty of combating disinformation.

    “You can have bots, you can have malicious actors,” she said, “but actually a very significant percentage of the fake information online is often shared by people who didn’t know any better.”

    Chowdhury noted that internet users have become more adept at identifying fake text posts due to years of exposure to suspicious emails and social media content. However, as AI technology advances and enables the creation of more realistic fake images, audio, and video, “there is a need for people to be educated on this matter.”

    “If we see a video that looks real – for example, a bomb hitting the Pentagon – most of us will believe it,” she said. “If we were to see a post and someone said, ‘Hey, a bomb just hit the Pentagon,’ we are actually more likely to be skeptical because we have been trained more on text than video and images.” – Bloomberg



