SAN FRANCISCO: YouTube announced on November 14 that it will soon introduce features allowing users to request the removal of artificial intelligence-generated imposter content, and that it will require labels on videos featuring realistic-looking “synthetic” content. The new rules respond to growing concern that AI-generated video can be misused to promote scams, spread misinformation, and create fake pornography.
The new functionality will allow users to request the removal of AI-generated or manipulated content that simulates an identifiable individual, including their face or voice. Removal decisions will take into account factors such as whether the content portrays a real person or is clearly intended as parody.
In addition, YouTube will start requiring creators to disclose, via labels, when realistic video content was created using AI, so that viewers are informed about the nature of what they are watching. This is especially important for videos discussing sensitive topics such as elections, conflicts, and public health crises.
Creators who violate the disclosure rule may have their content removed from YouTube or be suspended from the partner programme that shares ad revenue on the platform.
YouTube is also giving its music partners the ability to request the removal of AI-generated music content that mimics an artist’s unique singing or rapping voice.
In a related move, Meta announced that advertisers will soon have to disclose when AI or other software is used to create or alter imagery or audio in political ads on its platforms. This requirement will take effect globally at Facebook and Instagram at the beginning of next year.
Advertisers will be required to disclose when AI is used to create wholly fake yet realistic-looking people or events, and notices will be added to ads so viewers know the content is the product of software tools.
As the 2024 elections approach, concern is growing that authoritarian nation states could use AI and other new technologies to interfere in electoral processes. Microsoft’s chief legal officer Brad Smith and corporate vice president Teresa Hutson highlighted this in a recent blog post, warning of the potential impact on the integrity of electoral systems.
– AFP