The backstory: As artificial intelligence (AI) achieves huge breakthroughs, this emerging technology still carries significant security and safety risks. Generative AI (think ChatGPT), specifically, exists on that shaky ground. Because it uses collected data to create new content, generative AI often gathers personal data, raising data privacy concerns. Generative AI models can also give incorrect, misleading or unethical info or instructions, which could end up causing real harm. Along with those risks, people have seen AI-related trouble with cybersecurity, regulatory compliance, third-party relationships, legal obligations and intellectual property.
More recently: With AI evolving so quickly, it's been a challenge for governments to regulate. This tech is complicated and new, so many governments are still trying to understand it fully. Lawmakers all over the world have been trying to figure out how to deal with it, some focusing on consumer risks and others wanting to pull ahead in the global AI-tech race. Later this year, the EU is expected to officially adopt AI regulation policies, such as data, privacy and licensing requirements, along with rules for disclosing AI-generated content. Earlier this year, China's internet regulator also released a proposal for regulating generative AI systems like chatbots.
The development: Last Friday, US President Joe Biden announced that seven major AI tech companies, including OpenAI, Alphabet and Meta, had made voluntary commitments to the White House to apply measures to make AI technology safer and more secure. These companies committed to developing a system to "watermark" all forms of AI-generated content to keep everything more transparent for users. They also promised to focus on protecting user privacy and keeping the tech bias-free, steering it away from discriminating against vulnerable groups of people. On top of that, they committed to developing AI to address scientific problems like medical research and curbing climate change. Biden also said he's pulling together an executive order and bipartisan legislation on this developing tech.
"We must be clear-eyed and vigilant about the threats emerging technologies can pose - don't have to but can pose - to our democracy and our values," Biden said during remarks on Friday.
"As we develop new AI models, tech companies should be transparent about how their systems work and collaborate closely across industry, government, academia and civil society," said Nick Clegg, Meta's president of global affairs.
"We expect that other companies will see how they also have an obligation to live up to the standards of safety, security and trust. And they may choose – and we welcome them choosing – joining these commitments," a White House official said.
“The voluntary commitments announced today are not enforceable, which is why it’s vital that Congress, together with the White House, promptly crafts legislation requiring transparency, privacy protections, and stepped-up research on the wide range of risks posed by generative A.I.,” Paul Barrett, the deputy director of the Stern Center for Business and Human Rights at New York University, said in a statement.