Content produced with technology has become increasingly difficult to separate from reality, especially when it comes to deepfakes. A deepfake uses deep learning algorithms to generate synthetic text, audio, images, or video. The technology can make fabricated content look hyper-realistic, and it becomes a real problem when someone's likeness is swapped into a video, making it appear that they said or did something they never did.
Now China has issued the first rules of their kind governing deepfakes. The new regulations, called the Administrative Provisions on Deep Synthesis for Internet Information Services, require anyone using this kind of technology to obtain a person's consent before altering their voice or image. Announced by the Cyberspace Administration of China and taking effect on January 10, the rules are intended to protect people from unauthorized impersonation.
The Cyberspace Administration of China says the new regulations are meant to “provide powerful legal protection to ensure and facilitate the orderly development” of this new AI technology.
“Up to this point, we have not seen a single example of deepfake generation algorithms that can create realistic human hands and demonstrate the flexibility and gestures of a real human being,” said Siwei Lyu, a professor at the University at Buffalo in New York, speaking about techniques for distinguishing deepfakes from authentic footage.