YouTube has rolled out a policy for people to lodge privacy violations if AI-generated content has been uploaded that simulates their face or voice.
👉 Background: YouTube is the video on-demand platform that launched back in 2005 with a video titled “Me at the Zoo”. Shortly after, it was acquired by Google for $1.65 billion USD. And it has grown into a video behemoth - with over 14 billion videos on YouTube.
👉 What happened: With so many videos being uploaded each second, YouTube needs to closely monitor its platform. Now, people can request the takedown of AI-generated content that simulates their face or voice by lodging a privacy complaint.
👉 What else: YouTube decides which takedown requests are approved. But AI-generated content is particularly concerning on tech platforms because the quality has become so good that it can appear completely real.
💡In the digital age, finding trustworthy sources has become harder and harder. AI-generated content risks spreading misinformation and eroding public trust in digital platforms.
💡The global AI video generator market is just getting started. It was estimated at $555 million USD in 2023 and is expected to grow at nearly 20% over the next 6 years. This means the tech companies that host this content have a huge obligation to ensure that what appears on their sites is authentic.
💡Companies like Meta and TikTok have already begun similar initiatives. But it looks like this is just the beginning.
Sign up for Flux and join 100,000 members of the Flux family