Artificial intelligence (AI) tools that can generate artwork or realistic images from written commands are now being integrated into popular tools like Adobe Photoshop and YouTube. Tech companies are competing to bring text-to-image generators into mainstream use, but they face challenges in addressing copyright theft and problematic content.
Last year, advanced image generators such as Stable Diffusion, Midjourney, and OpenAI’s DALL-E found an early audience among hobbyists and enthusiasts. Businesses, however, were hesitant to adopt the technology, viewing it as an interesting curiosity rather than a practical tool.
Following this, a backlash ensued, including copyright lawsuits and calls for stricter regulations to prevent the misuse of generative AI technology. Although these issues have not been completely resolved, new image generators claim to be ready for business use.
For instance, Amazon plans to introduce text-to-image generation on its Fire TV screens, allowing users to generate personalised displays by simply speaking commands. Adobe, known for its graphics editor Photoshop, released an AI generator called Firefly earlier this year to address legal and ethical concerns. The tool uses Adobe’s own image collection and licensed content, ensuring legality and compensating contributors.
Competitors are taking note of such developments. OpenAI, the maker of ChatGPT, unveiled its third-generation image generator, DALL-E 3, which will integrate with ChatGPT and include safeguards that decline requests for images in the style of living artists. However, Truog of research firm Forrester points out that OpenAI has said nothing about compensating the authors whose work is used for training.
In separate announcements, Microsoft and YouTube also showcased their AI image-generation products. Microsoft demonstrated how it is incorporating DALL-E 3 into its design tools, Bing search engine, and chatbot, while YouTube introduced Dream Screen, which lets creators generate custom backgrounds for their videos.
To address concerns about AI-generated content, major AI providers, including Adobe and Stability AI, have agreed to voluntary safeguards set by President Joe Biden’s administration. These safeguards aim to establish methods like digital watermarking to distinguish AI-generated content and prevent its misuse.
Microsoft, for example, has implemented filters to monitor the imagery generated from text prompts on Bing. The company aims to prevent the tool from producing content such as hate imagery and has blocked certain text prompts accordingly.
Despite the progress made in addressing copyright and ethical concerns, there is still work to be done. Tech companies are striving to make AI image generators more user-friendly and trusted, ensuring that businesses and creative professionals can use them without legal repercussions.
Credit: The Star, Tech Feed