The backstory: Ever since OpenAI’s ChatGPT sparked international interest in artificial intelligence (AI) about a year ago, there have been concerns about how this fast-developing technology might affect our future. For example, political and tech leaders around the world, including Elon Musk and OpenAI’s own CEO Sam Altman, have warned that AI poses catastrophic risks to humankind if it gets into the wrong hands. Many people also worry that AI could eventually become even smarter than humans.
All this worry and speculation has sent governments racing to come up with some sort of framework to regulate the tech. Meanwhile, tech companies and countries alike are competing to dominate the industry by building the most advanced AI models.
The thing is, even with all these swirling concerns, pretty much everyone agrees that AI could revolutionize just about every industry, from education and agriculture to finance, defense and health care. So the question is: how do we use it without putting ourselves at existential risk?
More recently: In May, OpenAI’s Altman spoke at a US Senate panel hearing, where he urged lawmakers to think carefully about how to regulate AI, calling this a “printing press moment” that still needs safeguards. One suggestion he made is a combo of licensing and testing requirements for companies working on advanced AI systems.
Governments across the globe, including the EU, China and the US, have been working on different potential regulations. In August, one of the first laws specifically targeting generative AI took effect in China. And just a few days ago, on Monday, US President Joe Biden signed an executive order laying out America’s first AI regulations.
The development: UK Prime Minister Rishi Sunak is hoping Britain can be a key leader in this global race and be a middle ground between the economies of the US, China and the EU. On Wednesday, the UK kicked off its two-day AI Safety Summit, where many countries were represented, including China. There was some pushback about China being invited, but Sunak made it clear that all of the world’s leading AI powers need to be part of the conversation.
King Charles III delivered a video address at the start of the event. Sunak also said he’d have a livestreamed conversation with Elon Musk on X on Thursday night, after the summit.
At the summit, the UK presented a document called the Bletchley Declaration, which warned of the risks of advanced “frontier” AI systems (essentially the most advanced ones currently being developed). Reps from 28 countries, including the US and China, signed the document, agreeing that there needs to be international cooperation on future AI regulation. Although it didn’t set any specific policy goals, getting everyone to agree on some global AI safety standards is a big step forward.
“There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models,” the Bletchley Declaration said. “Many risks arising from AI are inherently international in nature, and so are best addressed through international cooperation. We resolve to work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI.”
“We are witnessing one of the greatest technological leaps in the history of human endeavor,” King Charles III said in his video address. “There is a clear imperative to ensure that this rapidly evolving technology remains safe and secure.”
“For the first time, we now have countries agreeing that we need to look not just independently but collectively at the risk around frontier AI,” British digital minister Michelle Donelan told reporters.
“The Chinese side expressed willingness to work with all parties to strengthen communication and exchanges on AI safety governance and contribute wisdom to the formation of an international mechanism with universal participation and a governance framework with broad consensus,” said a report by Xinhua.
“We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” OpenAI CEO Sam Altman said in his opening remarks before a US Senate Judiciary subcommittee in May.