Suara Malaysia
Friday, November 22, 2024

    Simplified Politics with AI: Cost Reductions and Increased Risks


    It is a jarring political advertisement: Images of a Chinese attack on Taiwan lead into scenes of looted banks and armed soldiers enforcing martial law in San Francisco. A narrator insinuates that it’s all happening under President Joe Biden’s watch.

    Those visuals in the Republican National Committee’s ad aren’t real, and the scenarios are pretty obviously fictional. But thanks to the handiwork of artificial intelligence (AI), the images look like real life.

    Within days of the ad appearing online in April, Representative Yvette Clarke, a New York Democrat, introduced legislation to require disclosure of content produced by AI in political advertisements.

    “This is going too far,” she said in an interview. Tiny type in the RNC ad reads, “Built entirely with AI imagery.”

    Clarke’s bill is going nowhere in a legislature controlled by Republicans, but it illustrates the degree to which the rapid advance of AI has put Washington on the back foot.

    Voters in the United States and around the world are already inundated by AI-generated political content.

    Click on an email asking for donations, for example, and you may be reading a message drafted by a so-called large language model, political consultants say – the technology behind ChatGPT, the wildly popular chatbot from startup OpenAI.

    Politicians also increasingly use AI to hasten mundane but critical tasks like analysing voter rolls, assembling mailing lists and even writing speeches.

    As in many industries, AI is poised to increase political workers’ productivity – and probably eliminate more than a few of their jobs.

    It’s hard to say how many, but the business of politics is full of the sorts of roles that researchers believe are most vulnerable to disruption by generative AI, such as legal professionals and administrative workers.

    But even more ominously, AI holds the potential to supercharge the dissemination of misinformation in political campaigns.

    The technology is capable of quickly creating so-called deepfakes, or fake pictures and videos that some political operatives predict will soon be indistinguishable from real ones, enabling miscreants to literally put words in their opponents’ mouths.

    Deepfakes have plagued politics for years, but with AI, savvy editing skills are no longer required to create them.

    Put to its best use, AI could improve political communications.

    For instance, upstart campaigns with little cash could use the technology to inexpensively produce campaign materials with fewer staff.

    Some political consultants who traditionally work only with presidential and Senate campaigns are making plans to use AI to serve smaller campaigns, offering more services at a lower price point.


    And the tech industry is trying to combat deepfakes. Companies including Microsoft have pledged to embed digital watermarks in images created using their AI tools in order to distinguish them as fake.

    ‘Knife fight’

    In June, Florida governor Ron DeSantis’s presidential campaign posted an online ad featuring AI-generated images of former president Donald Trump hugging and kissing Anthony Fauci.

    The former director of the National Institute of Allergy and Infectious Diseases is a pariah among Republicans because of his public-health recommendations during the pandemic.

    A fact-checking note was appended to the DeSantis campaign’s tweet, saying that the images, mixed among real pictures and videos of Trump, were AI-created. DeSantis’s campaign didn’t initially identify them as fake.

    In Germany, a far-right party recently distributed AI-generated images of angry immigrants without telling viewers that they weren’t actual photographs.

    That one got flagged on X (formerly Twitter) as well, but the incident shows how quickly the technology is being adopted for political messaging and the inherent risks, said Juri Schnoller, the managing director of Cosmonauts and Kings, a German political communication firm.

    “AI can save or destroy democracy. It’s like a knife fight, right? You can kill someone, or you can make the best dinner,” Schnoller said.

    Mix in Russian and Chinese disinformation mills, and the concerns grow even more acute, misinformation experts say.

    Trolls and hackers in those nations already churn out propaganda and lies within their own borders and in countries around the world.

    Graphika, a misinformation-tracking firm based in the US, found a pro-Chinese influence operation spreading AI-generated video footage of fake news anchors promoting the interests of the Chinese Communist Party in February.

    Rob Joyce, director of cybersecurity at the National Security Agency, said both nation-state actors and cybercriminals have begun experimenting with ChatGPT-like text generation to trick people online.

    “That Russian-native hacker who doesn’t speak English well is no longer going to craft a crappy email to your employees,” Joyce said earlier this year.

    “It’s going to be native-language English, it’s going to make sense, it’s going to pass the sniff test.”

    In March, an anonymous X user posted an altered video that went viral, purporting to show Biden verbally attacking transgender people.

    Another one, circulated widely by a right-wing US pundit, appeared to show Biden ordering a nuclear attack on Russia and sending troops to Ukraine.

    Falling behind

    Washington is bad at keeping up with emerging technology, much less regulating it.

    Despite agreeing broadly that Big Tech is too powerful, the two parties have for years been unable to pass any comprehensive legislation to rein in the industry.


    Between 2021 and 2022, Congress held more than 150 hearings on technology, with little to show for it.

    In June, there was a briefing in the Senate called “What is AI?”

    The US doesn’t have a federal privacy law and hasn’t updated antitrust laws to account for the growing concentration in the tech industry.

    Lawmakers have been unable to agree on whether – or how – to regulate online speech.

    Last month, the Federal Election Commission deadlocked 3-3 on a request to develop rules for AI-generated political ads.

    Republicans on the panel, which is evenly divided between the parties and routinely finds itself at an impasse on controversial matters, said the agency didn’t have explicit authority for the regulations.

    Other countries are racing ahead on regulation, spurred into action by the ChatGPT craze.

    On June 14, the European Parliament voted to restrict the nascent technology’s most anxiety-inducing uses, such as biometric surveillance – AI that can identify people from their faces or bodies.

    The law, still up for debate, could also require companies to reveal more information about datasets used to train chatbots.

    European officials are separately pressing companies, including Alphabet’s Google and Meta Platforms, to label content and images generated by AI in order to help combat disinformation from adversaries like Russia.

    Chinese regulators are aggressively imposing new rules on technology companies to ensure Communist Party control over AI and related information available in the country.

    Every AI model must be submitted for government review before introduction into the market, and synthetically generated content must carry “conspicuous labels”, according to a Carnegie Endowment for International Peace paper.

    Cheaper campaigns

    In the best case, AI could make US political campaigns “a lot cheaper”, said Martin Kurucz, the CEO of Sterling Data Company, which works with Democrats.

    The technology is already being used to help write first drafts of speeches and op-eds, create ads, draw up lobbying campaigns and more, according to lobbyists, campaign and congressional staffers and political consultants.

    Art generators like Midjourney, an AI program that generates hyper-realistic images based on text prompts, have the potential to increase productivity or even replace the work of creative teams that can cost thousands of dollars.

    While the RNC has already made an attack ad using generative AI, the Democratic National Committee is still experimenting with the technology.

    A spokesperson said the committee has sent out AI-automated fundraising emails and is considering how to expand its use of AI in the future.


    On Capitol Hill, the House chief administrative officer’s digital services office handed out 40 licences for ChatGPT Plus in April, which House offices have used to help write emails, research briefs and even draft legislation – though writing full bills remains too complicated a task for generative AI.

    The House last month created new rules curtailing the use of ChatGPT in Congress, clarifying that staffers cannot put confidential information into the chatbot.

    There’s some indication lawmakers are taking the threat of AI more seriously than previous technologies that were poised to upend politics.

    After it became clear social media would play a vital role in politics, for example, lawmakers let a decade slide by before they summoned Mark Zuckerberg to testify at a hearing.

    OpenAI CEO Sam Altman testified on the Hill in May, less than a year after ChatGPT was opened to the public.

    He told lawmakers that his industry desperately needs regulation, and he’s worried about nefarious uses of AI.

    ‘Won’t know the truth’

    OpenAI has noticed an uptick in the use of ChatGPT for political purposes, an OpenAI spokesperson said, and has sought to get ahead of concerns that its product might be used to deceive voters.

    The company published new guidelines in March prohibiting “political campaigning or lobbying” using ChatGPT, including generating campaign materials targeted at particular demographics or producing “high volumes” of materials.

    The trust and safety teams at OpenAI are trying to identify political uses of the chatbot that violate the company’s policies, the spokesperson said.

    The American Association of Political Consultants last month condemned the use of deceptive generative AI in political advertisements, calling it a “threat to democracy”.

    The group said it plans to condemn and potentially sanction members who develop deepfake ads.

    But in a society where access to AI tools is widespread and carries little cost, the worst actors are unlikely to be members of a professional association.

    Frank Luntz, a veteran Republican strategist, said he fears that AI technology will foment voter confusion in the 2024 US presidential contest.

    “In politics, the truth is already in short supply,” he said. “Thanks to AI, even those who care about the truth won’t know the truth.” – Bloomberg


    Credit: The Star : Tech Feed

