YouTube targets AI-generated imposters in videos


SAN FRANCISCO: YouTube on Tuesday said it will soon allow users to request that AI-generated imposters be removed from the platform and will require tagging videos with realistic-looking “synthetic” content.

New rules targeting AI-generated video material will come into force in the coming months, as fears grow that the technology is being misused to promote fraud and misinformation, or even falsely portray people appearing in pornography.

“We will make it possible to request the removal of AI-generated or other synthetic or manipulated content that simulates an identifiable person, including their face or voice,” YouTube vice presidents of product management Emily Moxley and Jennifer Flannery O’Connor said in a blog post.


When considering removal requests, the Alphabet-owned site will consider whether the videos are parodies and whether the real people depicted can be identified.

YouTube also plans to require creators to disclose when realistic video content has been created using AI so viewers can be notified with tags.

“This could be an AI-generated video that realistically depicts an event that never happened, or content that shows someone saying or doing something they didn’t actually do,” Moxley and O’Connor said in the post.

“This is especially important where content discusses sensitive topics such as elections, ongoing conflicts and public health crises, or public officials.”

According to the platform, creators who violate the disclosure rule may have their content removed from YouTube or be suspended from its ad revenue-sharing partner program.

“We also offer our music partners the ability to request the removal of AI-generated music content that mimics an artist’s unique singing or rapping voice,” Moxley and O’Connor added.

Elsewhere online last week, Meta said advertisers will soon have to disclose on its platforms when artificial intelligence or other software is used to create or manipulate images or audio in political ads.

The mandate will go into effect globally on Facebook and Instagram early next year, the company said.

Advertisers will also need to disclose when AI is used to create completely fake but realistic people or events, according to Meta.

Meta will add notifications to ads to let viewers know what they’re seeing or hearing is the product of software tools, the company said.

“In 2024, the world may see multiple authoritarian nation-states attempting to interfere with election processes,” Brad Smith, Microsoft’s chief legal officer, and corporate vice president Teresa Hutson warned in a recent blog post. Microsoft is a major backer of OpenAI, the company behind ChatGPT.

“And by combining traditional techniques with artificial intelligence and other new technologies, they can threaten the integrity of election systems.”

