How we're helping creators disclose altered or synthetic content
Mar 18, 2024 – [[read-time]] minute read
Generative AI is transforming the ways creators express themselves – from storyboarding ideas to experimenting with tools that enhance the creative process. But viewers increasingly want more transparency about whether the content they’re seeing is altered or synthetic.
That’s why today, we’re introducing a new tool in Creator Studio requiring creators to disclose to viewers when realistic content – content a viewer could easily mistake for a real person, place, scene, or event – is made with altered or synthetic media, including generative AI. We’re not requiring creators to disclose content that is clearly unrealistic, is animated, includes special effects, or uses generative AI for production assistance.
This builds on our approach to responsible AI innovation announced in November, which includes disclosure requirements and labels, an updated privacy request process, and a commitment to building responsibility into all our AI products and features.
The new label is meant to strengthen transparency with viewers and build trust between creators and their audience. Some examples of content that requires disclosure include:
Of course, we recognize that creators use generative AI in a variety of ways throughout the creation process. We won’t require creators to disclose if generative AI was used for productivity, like generating scripts, content ideas, or automatic captions. We also won’t require creators to disclose when synthetic media is clearly unrealistic or the changes are inconsequential. These cases include:
You can see a longer list of examples in our Help Center. For most videos, a label will appear in the expanded description, but for videos that touch on more sensitive topics — like health, news, elections, or finance — we’ll also show a more prominent label on the video itself.
You’ll start to see the labels roll out across all YouTube surfaces and formats in the weeks ahead, beginning with the YouTube app on your phone, and soon on your desktop and TV. And while we want to give our community time to adjust to the new process and features, in the future we’ll look at enforcement measures for creators who consistently choose not to disclose this information. In some cases, YouTube may add a label even when a creator hasn’t disclosed it themselves, especially if the altered or synthetic content has the potential to confuse or mislead people.
Importantly, we continue to collaborate across the industry to help increase transparency around digital content. This includes our work as a steering member of the Coalition for Content Provenance and Authenticity (C2PA).
In parallel, as we previously announced, we’re continuing to work towards an updated privacy process for people to request the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice. We’ll have more to share soon on how we’ll be introducing the process globally.
Creators are the heart of YouTube, and they’ll continue to play an incredibly important role in helping their audience understand, embrace, and adapt to the world of generative AI. This will be an ever-evolving process, and we at YouTube will continue to improve as we learn. We hope that this increased transparency will help all of us better appreciate the ways AI continues to empower human creativity.