The backstory: Artificial intelligence (AI) deepfakes have been causing quite a stir lately. These are AI-fabricated videos, images, or audio; swapping one person's face onto another's body is one common form. Interest in this kind of tech skyrocketed over a year ago, driven largely by the excitement around OpenAI's ChatGPT.
To give you a clearer picture, deepfake algorithms are trained on tons of facial data, which lets them create convincing content of people doing or saying things they never actually did. The problem? Deepfakes can be used to spread misinformation or create harmful material, with potentially serious consequences. For instance, ahead of the recent US Democratic presidential primary in New Hampshire, fake robocalls mimicking President Joe Biden urged voters not to go to the polls.
More recently: To tackle this, tech companies are teaming up. The Content Authenticity Initiative led by Adobe is one such effort. They're pushing for things like digital watermarks and labels on AI-made content. President Biden even signed an executive order in October backing more AI regulation. Google is also on board and is planning to slap labels on AI-generated content on YouTube and other platforms.
The development: Meta announced on Tuesday that it would identify and label images created by other companies' AI services when they're shared on its platforms, Facebook and Instagram. How? By reading little invisible markers embedded in the files. The change is coming soon, the company says. Meta is also working on ways to flag AI-generated audio and video, which carry the same potential for deception.
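Meta hasn't published the exact scheme here (its approach builds on industry metadata standards), but the basic idea of an invisible marker in a file can be sketched with a toy example: writing and then reading back a `tEXt` metadata chunk in a PNG image. Everything below, including the function names and the `ai-generated` keyword, is illustrative, not Meta's actual implementation.

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def make_chunk(ctype: bytes, data: bytes) -> bytes:
    """Build one PNG chunk: length, type, data, CRC32 of type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data) & 0xFFFFFFFF))

def minimal_png() -> bytes:
    """A tiny valid-enough PNG: one 8-bit grayscale pixel."""
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)
    idat = zlib.compress(b"\x00\x00")  # filter byte + one pixel
    return (PNG_SIG + make_chunk(b"IHDR", ihdr)
            + make_chunk(b"IDAT", idat) + make_chunk(b"IEND", b""))

def add_marker(png: bytes, keyword: str, value: str) -> bytes:
    """Insert a tEXt chunk (our 'hidden marker') just before IEND.

    The image pixels are untouched; viewers ignore the extra chunk."""
    text = keyword.encode() + b"\x00" + value.encode()
    iend = make_chunk(b"IEND", b"")
    return png[:-len(iend)] + make_chunk(b"tEXt", text) + iend

def read_markers(png: bytes) -> dict:
    """Walk the chunk list and collect any tEXt key/value pairs."""
    markers, pos = {}, len(PNG_SIG)
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        if ctype == b"tEXt":
            k, _, v = png[pos + 8:pos + 8 + length].partition(b"\x00")
            markers[k.decode()] = v.decode()
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return markers

tagged = add_marker(minimal_png(), "ai-generated", "true")
print(read_markers(tagged))  # {'ai-generated': 'true'}
```

Real systems go further than this: metadata chunks are easy to strip, so companies pair them with watermarks woven into the pixels themselves, which survive cropping and re-encoding. But the detection side works the same way in spirit: scan the file for a known signal and label accordingly.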
Meta's president of global affairs, Nick Clegg, stressed the need for transparency around AI-made content. Meta already tags its own AI images, and it plans to do the same for images generated by other companies' tools, like Google's and Microsoft's. Clegg pointed out how important it is to know whether what you're seeing was made by AI or by a human, especially with more AI content floating around. He said Meta plans to tag content in all languages, particularly with the US elections around the corner.
"In the coming months, we'll introduce labels that inform viewers when the realistic content they're seeing is synthetic," YouTube CEO Neal Mohan said in a year-ahead blog post Tuesday.
"As the difference between human and synthetic content gets blurred, people want to know where the boundary lies," said Meta president of global affairs, Nick Clegg.
"It's kind of a signal that they're taking seriously the fact that generation of fake content online is an issue for their platforms," said Gili Vidan, an assistant professor of information science at Cornell University, adding that it could be "quite effective" in flagging a lot of AI-generated content made with commercial tools, but it won't likely catch everything.
"There's a lot that would hinge on how this is communicated by platforms to users," said Vidan. "What does this mark mean? With how much confidence should I take it? What is its absence supposed to tell me?"