BEST OF THE WEB

YouTube will require disclosure of AI content

YouTube is set to implement new policy changes next year, requiring creators to disclose the use of generative AI in videos, especially for content depicting sensitive topics such as politics and health issues.

These changes are a response to the rapidly advancing capabilities of generative AI in creating realistic-looking videos.

Under the new policies, YouTube will:
– Require creators to disclose if generative AI has been used to create scenes that depict fictional events or show real people saying things they did not actually say.
– Allow individuals to request the removal of content that simulates an identifiable person, including their face or voice. This removal request, however, will not be automatically granted, with a higher threshold for moderation applied to satire, parody, or content involving public figures.
– Establish a separate process for music industry partners to seek the removal of content that imitates an artist’s unique singing or rapping voice.
– Ensure full disclosure of any generative AI tools used in YouTube’s own content production.

The disclosure requirement is mandatory for creators, and failure to comply could lead to content removal or other penalties. YouTube emphasizes that while AI can enable powerful storytelling, it also has the potential to mislead viewers, particularly if they are not aware that the content has been altered or synthetically created.

The manner in which AI usage is disclosed to viewers will depend on the sensitivity of the content. For most videos, the disclosure will appear on the video’s description screen. However, for videos addressing sensitive topics like politics, military conflicts, and health issues, YouTube plans to make these labels more prominent.

YouTube also noted that all its standard content guidelines, including those governing violence and hate speech, will apply to AI-generated videos. This move by YouTube reflects a growing awareness of the ethical implications and potential risks associated with AI-generated content, particularly in the context of misinformation and the integrity of online information.

Sources include: Axios

IT World Canada Staff
http://www.itworldcanada.com/
The online resource for Canadian Information Technology professionals.

