American researchers have developed a tool capable of "fooling" the artificial intelligence models used today for automatic image generation. The idea is to help artists prevent their works from being scraped by AI.
Named Nightshade, the tool lets artists make invisible changes to the pixels of their works. These altered images disrupt AI models, which often train on images used without authorization.
The creators describe it as a "poison pill" for these models, potentially causing serious malfunctions and rendering them unusable if the tool is widely adopted.
The models in question are the ones used by popular image-creating AIs like Midjourney and DALL-E. With Nightshade, these models would misinterpret images, mistaking a dog for a cat, a house for a cake, and so on.
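As a purely conceptual sketch, and not Nightshade's actual algorithm (which relies on carefully optimized, targeted perturbations rather than random noise), the snippet below illustrates the general idea of nudging pixel values by amounts too small for a human viewer to notice while still changing what a model ingests. The function name and parameters are hypothetical.

```python
import numpy as np

def add_invisible_perturbation(image: np.ndarray, strength: float = 2.0) -> np.ndarray:
    """Illustrative only: shift each pixel by a tiny amount (a few intensity
    levels out of 255) that is imperceptible to a human but alters the data
    an AI model would learn from. Not Nightshade's real method."""
    rng = np.random.default_rng(0)
    noise = rng.uniform(-strength, strength, size=image.shape)
    # Keep values in the valid 0-255 range and return the same integer format.
    return np.clip(image.astype(np.float64) + noise, 0, 255).astype(np.uint8)

# Example with a synthetic 64x64 RGB image: the change is at most `strength` levels.
original = np.full((64, 64, 3), 128, dtype=np.uint8)
poisoned = add_invisible_perturbation(original)
print(np.abs(poisoned.astype(int) - original.astype(int)).max())
```

In the researchers' approach, the perturbation is not random but is designed so that a model trained on many such images starts associating one concept with another, producing the dog-for-cat confusions described above.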
The University of Chicago researchers who developed Nightshade aim to address the concerns of artists whose works are being used as "models" for AI-generated images. The tool is meant to combat copyright infringement and protect artists' intellectual property rights.
Should Nightshade be deployed, it would complement Glaze, another solution developed by the same researchers. Glaze subtly alters images so that AI models cannot accurately read and imitate an artist's style.
However, the effectiveness of these tools depends on the models' inability to detect and undo such manipulations. Once AI becomes powerful enough to overcome them, new solutions will be needed to keep artists' original creations from being used to train generative AI models.
– AFP Relaxnews