AI-generated videos will require disclosure on YouTube

Faced with an explosive rise in content generated by artificial intelligence, YouTube is taking steps to curb misleading publications. On November 16, the video platform announced that, starting next year, creators will have to disclose the use of AI tools in their videos.

Jennifer Flannery O'Connor and Emily Moxley, YouTube's VPs of Product, justified the decision by noting that powerful new forms of AI storytelling can be exploited to create content that misleads viewers, particularly when they are unaware that a video has been altered or synthetically generated.

This new requirement specifically concerns videos containing "realistic" content created by generative AI. In particular, YouTube is targeting content featuring fictitious events or people saying or doing things they haven't actually said or done.

Concretely, creators will have to tick a box when uploading a video, and a label will appear in the description panel. For sensitive subjects such as elections, armed conflicts and public health crises, the label will also be displayed directly on the video player, making it visible at all times.

YouTube emphasizes that disclosure of AI-generated content is not optional: creators who fail to comply face sanctions, including demonetization or account deletion.

In addition, YouTube will introduce a new feature enabling users to request the removal of "AI-generated or other synthetic or altered content simulating an identifiable person, including their face or voice". Satirical and parodic content will, however, be granted some latitude under the new rules.

The platform also commits to automatically labeling content created with its own generative AI tools, and will introduce a new form allowing record labels to request the removal of songs that imitate an artist's distinctive voice.

Source: L'Usine Digitale

Vanessa Ntoh
