YouTube is taking steps to introduce labels on videos as Artificial Intelligence (AI) becomes increasingly prevalent in visual media. In a new blog update, the video platform confirmed that while it won't ban AI usage, creators will have to disclose when they use it in videos in the "coming months."
The requirement would cover music videos and songs, as well as "especially important" cases like election coverage or "content showing someone saying or doing something they didn’t actually do."
“To address this concern, over the coming months, we’ll introduce updates that inform viewers when the content they’re seeing is synthetic,” YouTube’s statement read. “Specifically, we’ll require creators to disclose when they’ve created altered or synthetic content that is realistic, including using AI tools.”
The company added, “When creators upload content, we will have new options for them to select to indicate that it contains realistic altered or synthetic material. For example, this could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn’t actually do.”
YouTube also shared a preview of the AI label, which marks videos as "Altered or synthetic content," including on YouTube Shorts.
Read YouTube’s complete statement about AI here.