The EU AI Act – what businesses need to know now

The EU finalized years of work on Wednesday, with the European Parliament approving the EU AI Act.

European Union flags flutter outside the EU Commission headquarters in Brussels, Belgium, July 14, 2021. REUTERS/Yves Herman/File Photo

The backstory: Back in 2022, OpenAI stormed onto the tech scene with ChatGPT, an artificial intelligence (AI) chatbot that skyrocketed to 100 million users in just two months. This set off a frenzy among tech giants like Microsoft and Google, which scrambled to create their own AI tools. For example, Microsoft rolled out the Bing chatbot, while Google introduced Bard (later renamed Gemini). Businesses worldwide also hopped on the AI train, using these tools for tasks like summarizing documents and coding. 

But as AI kept advancing, governments faced a major challenge – figuring out how to regulate it. The EU is taking the lead here. It started with regulatory proposals in 2021, covering issues such as bans on using AI for mass surveillance and social credit scoring. When generative AI entered the scene, the EU knew it needed to update its approach. That led to the creation of the EU AI Act, building on those initial proposals. EU member states have been hashing out the details over the past few years.

More recently: Other countries are also working on regulations for the technology. Last year, China published new rules for generative AI, with one key requirement being that generative AI services must pass a security review before they can start operating. Meanwhile, US President Biden signed an executive order in October backing more AI regulation. Plus, lawmakers in different US states are crafting their own AI rules.

The development: The EU finalized years of work on Wednesday, with the European Parliament approving the EU AI Act. The law categorizes AI into different risk levels and focuses on transparency, setting guidelines for disclosing AI-generated content and registering the foundation models used. The primary goal is to safeguard consumers by evaluating AI applications according to their risks. For instance, low-risk applications like content suggestions will face lighter regulations, while AI in high-risk sectors like medical devices will face stricter ones. Some applications deemed "unacceptable" will be banned outright. Developers must also disclose summaries of their training data and follow EU copyright rules or face big fines. 

As for what's ahead, the EU AI Act still needs final sign-off, but it's expected to become law in a few weeks. Then, it will come into effect in phases starting next year. Many countries will be watching closely once the rules kick in, as Brussels carries a lot of influence as a tech regulator.

Key comments:

"The AI act is not the end of the journey but the starting point for new governance built around technology," said Dragos Tudorache, a lawmaker who oversaw EU negotiations on the agreement. 

"Parliament's priority is to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes," said a statement by the European Parliament last year. "Parliament also wants to establish a technology-neutral, uniform definition for AI that could be applied to future AI systems."

"The way China approaches AI regulation will likely be consistent with its approach to regulating other areas of prominent technology, such as internet or social media, where it operates strict censorship to control the flow of information," said Citi analysts in a research note last year.