
    Dall-E 3 is so good it’s stoking an artist revolt against AI scraping


    Dall-E 3, the latest image-generating software created by OpenAI, can produce a picture of almost anything. It can conjure a watercolor portrait of a mermaid, a personalised birthday greeting or a faux photograph of Spider-Man eating pizza, all based on just a few words of prompting.

    The new version of the tool, released in September, represents a “leap forward” in artificial intelligence-created images, OpenAI says. Dall-E 3 offers better detail and the ability to render text more reliably. It has also further stoked illustrators’ fears that they will be replaced by a computer program mimicking their work.

    Rapid improvements in image generation have spurred artists to push back on generative AI startups, which ingest vast troves of Internet data in order to generate content like pictures or text. It hasn’t helped much that OpenAI’s new process for artists who want to exclude their data from the system is time-consuming and complex. Some have sued generative AI companies. Others have turned to a growing number of digital tools allowing artists to monitor whether their work has been picked up by AI. And still others have resorted to mild sabotage.

    The goal is to resist losing business and commissions to machines that, as many in the art world see it, are copying their work. “My art has been egregiously violated,” said Kelly McKernan, an illustrator and watercolorist. “And I know of so many artists who feel the same.”

    ‘It feels like a charade’

    Some people are finding that they have limited recourse over how AI systems use their work. McKernan is part of a trio of visual artists suing image-generating startups Stability AI, Midjourney and DeviantArt – all of which, like Dall-E 3, generate detailed and often beautiful pictures. The lawsuit alleges that their work was used to train the AI image generators without permission or payment. The companies have denied wrongdoing. And traditionally, gathering online content for training AI software has been considered protected under the fair use doctrine of US copyright law. In late October, the judge in the case tossed several of the artists’ claims while allowing a copyright infringement claim to move forward.

    The challenges have added to the legal risks faced by AI companies, but it will likely be years before there’s closure on the issue.

    In the meantime, artists concerned that their material is being used to train Dall-E 3 can follow the process outlined by OpenAI itself. That means filling out a form requesting that images be excluded from the company’s datasets, so they won’t be used to train future AI systems.

    That opt-out process, which was recently introduced, has stoked controversy because it can be time-consuming and cumbersome to use and may not prevent programs from mimicking an artist’s style. When testing Dall-E 3 via ChatGPT Plus, Bloomberg News found the software would refuse to produce images for a prompt containing copyrighted characters, but would instead offer to create a more generic option – which could still yield an image that looked like the copyrighted character.

    For example, ChatGPT will decline to use Dall-E 3 to create an image of Spider-Man. But when Bloomberg asked, it offered to create a very similar character based on the prompt “spider-based superhero wearing a red and blue suit.” Similarly, while the tool will not create images in the style of living artists, it is possible to generate images that evoke certain styles using detailed descriptions.

    “It feels like a charade, a surface-level way to have the appearance of doing the right thing,” said Reid Southen, a concept artist and illustrator who has worked on films including The Hunger Games and The Matrix Resurrections.

    Southen said he won’t go through the opt-out process, estimating it would take him months to complete. The system asks artists to upload to OpenAI the images they’d like excluded from future training, along with a description of each piece. To Southen, it’s built to incentivise people not to remove their data from the company’s training processes.

    Asking people to give OpenAI copies of their work so that the company can avoid training on it in the future is “ridiculous,” said Calli Schroeder, senior counsel for the Electronic Privacy Information Center, or EPIC. She also doesn’t think artists will trust the company to keep its word. “Since they’re the ones benefiting from all this information, the burden should be on them to make sure that they actually legally and ethically can use that data for their training sets,” Schroeder said.

    Contacted for comment, OpenAI said it’s still evaluating the process to give people control over how their information is used, and would not say how many people had completed the opt-out process so far. “It’s early days, but we’re trying to collect feedback and we want to improve the experience,” a spokesperson said.

    A poison pill

    For artists unsatisfied with official channels, there are other options. One company, Spawning Inc, created a tool called “Have I Been Trained” to let artists see whether their work has been used to train some AI models, and it aims to help them opt out of future datasets. Another service, Glaze, alters the pixels in an image ever so slightly, making it appear to a computer to be a different style of art. Released in August, Glaze has been downloaded 1.5 million times, and an invite-only web-based version has about 2,300 accounts.

    Glaze’s creator is Ben Zhao, a professor at the University of Chicago, and his next project will go even further. In the coming weeks, Zhao plans to roll out a new tool called Nightshade, which will act as a kind of AI poison pill that he hopes artists will use to protect their work, while potentially thwarting AI models that train on such data.

    It will work by slightly modifying a picture so it will appear to an AI system to be something else entirely. For example, an image of a castle whose pixels have been tweaked via Nightshade will still appear, to a person, to depict that same castle – but an AI system training on the image would categorise it as something different, for example, a truck. The hope is to discourage rampant digital scraping by making some images harmful to the model, rather than helpful.
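    What Zhao describes is closely related to what machine-learning researchers call an adversarial perturbation: pixel changes bounded tightly enough to be invisible to a person, yet chosen to push a model’s output somewhere else. The sketch below is a toy, hypothetical illustration of that general mechanism – an iterated, targeted attack on a made-up linear classifier – and not Glaze’s or Nightshade’s actual method; the model, the “castle” and “truck” labels and the pixel budget are all invented for the example.

```python
# Toy sketch of an adversarial perturbation (PGD-style targeted attack).
# NOT Glaze's or Nightshade's real algorithm -- everything here is a
# stand-in invented to illustrate the general idea: tiny, budget-capped
# pixel edits that leave the image unchanged to a person but flip what
# a model sees.
import numpy as np

rng = np.random.default_rng(0)

CLASSES = ["castle", "truck"]           # illustrative labels from the article
W = rng.normal(size=(2, 64 * 64 * 3))   # stand-in "classifier": linear weights

def logits(x):
    return W @ x.ravel()

def grad_wrt_input(x, target):
    # Gradient of softmax cross-entropy toward `target`, derived
    # analytically for the linear model: dL/dx = (softmax(z) - onehot) @ W.
    z = logits(x)
    p = np.exp(z - z.max())
    p /= p.sum()
    return ((p - np.eye(2)[target]) @ W).reshape(x.shape)

image = rng.uniform(0, 1, size=(64, 64, 3))   # pretend this is the artwork
eps = 4 / 255                                 # imperceptible per-pixel budget

start = int(np.argmax(logits(image)))
target = 1 - start                            # push toward the *other* label

# Iteratively nudge pixels downhill on the loss for the wrong label,
# always projecting back into the eps-ball around the original image.
perturbed = image.copy()
for _ in range(10):
    perturbed -= (eps / 5) * np.sign(grad_wrt_input(perturbed, target))
    perturbed = np.clip(perturbed, image - eps, image + eps)
    perturbed = np.clip(perturbed, 0.0, 1.0)

print("max pixel change:", np.abs(perturbed - image).max())  # stays <= eps
print("model sees before:", CLASSES[start])
print("model sees after: ", CLASSES[int(np.argmax(logits(perturbed)))])
```

    Nightshade aims at the training process of generative models rather than a single classifier’s prediction, but the core trick is the same: edits small enough that the picture still looks like a castle to a person, while the numbers a model ingests say something else.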

    Zhao doesn’t think Nightshade is a solution to artists’ issues, but he hopes to give them a sense of control over their work online, and change the ways AI companies collect training data.

    “I’m not particularly malicious, looking to do damage to any company,” Zhao said. “I think a lot of places do good things. But it’s a question of coexistence and good behaviour.” – Bloomberg
