On Tuesday, YouTube outlined a new set of rules for AI content in a company blog post, requiring creators to disclose whether they used generative artificial intelligence to create realistic-looking videos. Under the latest AI-related policy, YouTube creators will have new options to indicate when they post AI-generated or altered videos — for example, content that realistically depicts an event that never happened or shows someone saying or doing something they never actually did.
Moreover, creators who consistently fail to disclose that they used AI tools to create altered or synthetic videos will face sanctions, including removal of their content or suspension from the platform’s revenue-sharing program.
This move comes after the Alphabet-owned company announced in September that election ads across its portfolio would require prominent disclosures if they contained content altered or created by AI – a rule expected to take effect in mid-November.
Related: The Latest Version of Google Magic Editor Refuses to Make Edits of ID Photos and More
Vice Presidents for Product Management Highlight the Importance of Balance in AI-Generated Content
As for why YouTube released the new rules, Jennifer Flannery O’Connor and Emily Moxley, vice presidents for product management, wrote in the blog post that the change would help balance creative opportunity with responsibility to the YouTube community.
In their own words, they stated: “Generative AI has the potential to unlock creativity on YouTube and transform the experience for viewers and creators on our platform. But just as important, we must balance these opportunities with our responsibility to protect the YouTube community. It is especially crucial in cases where the content discusses sensitive topics, such as elections, ongoing conflicts and public health crises, or public officials.”
Furthermore, they disclosed that the platform will deploy AI to remove content that violates its rules, and the company says the technology has already helped it detect new forms of abuse more quickly. YouTube’s privacy complaints process will also be updated to allow requests to remove AI-generated videos that simulate an identifiable person, including their face or voice.
Lastly, they have introduced rules allowing the platform’s music industry partners, such as record labels and distributors, to request the removal of AI-generated music content that mimics an artist’s unique singing or rapping voice. Not every takedown request will be granted, however: the company says it will weigh several factors when evaluating them, including whether the content is parody or satire and whether the person depicted is a public official or a well-known individual.
The New Rules Have Little Effect on Creators’ AI-Generated Podcasts
Notably, the strict rules apply only loosely to creators of AI-generated podcasts, such as The Joe Rogan AI Experience. As mentioned above, people can request that YouTube take down videos simulating an identifiable individual, including their face or voice, but it will ultimately be up to YouTube to decide whether to remove such a channel.
Emily Poler, an attorney who handles copyright infringement cases, said that YouTube constructed this new guideline without any established legal framework for dealing with AI-generated content.
“The new guideline does not have the weight of law, and it does not have the advantage of being done in the open. There will be situations where it is difficult for YouTube to make a correct decision, and those decisions will be entrusted to some reasonably low-level employee at YouTube. I do not think that that is a recipe for success.”
On top of that, YouTube spokesperson Malon acknowledged it is still unclear how YouTube will determine whether unlabeled videos were AI-generated, as the tools to accurately detect whether creators have fulfilled their disclosure requirements for synthetic or altered content do not yet exist. He added that the company will provide more detailed guidance with examples when the disclosure requirement rolls out next year, so it will be intriguing to see how enforcement plays out once the new AI rules are in place.
Also Read: Eric Schmidt, the Google Billionaire, Ventures Into AI Lab Assistants in 2023
How YouTube Creators Are Using AI to Improve Content Strategy
YouTube creators are pioneering new ways to use generative AI to streamline their content strategy and explore new creative boundaries. YouTube Shorts, in particular, is one area where creators are elevating their output. Let us take a look at one example below.
Take Los Wagners, a Brazilian family channel whose creators love what they do: it has amassed more than 6.6 million subscribers and 3.2 billion views on YouTube in just the two years since the channel launched. To keep supplying the audience with entertaining shorts, family challenges and experiments, one of the channel’s creators uses AI to expand her creativity.
“I use Gen AI as a tool to expand my creativity and create new video effects, such as converting images from my video into a sequence of images generated by artificial intelligence following the chosen theme through the prompt (Stable Diffusion), creating a unique experience for the viewer.”
To sum it up, YouTube adding a disclaimer to AI-generated videos will be a useful transparency feature for end users, who are already wading through a swamp of fake news and misinformation. However, ensuring compliance will be very difficult for YouTube, as we have seen with other rules it has released in the past. Creators know generative AI makes producing videos dramatically easier, and it has become something of a growth hack for newcomers. Stay tuned to our Facebook and Instagram pages for the latest updates on this topic and many more.