OpenAI faces scrutiny from European regulators


The backstory: OpenAI's chatbot, ChatGPT, has been making waves with its impressive abilities. Users love its knack for explaining complex topics in simple terms and even writing poetry. In fact, it's become the fastest-growing app in history, with millions of users already on board, according to a UBS study. But some are worried about the potential downsides of AI. Tech figures like Tesla CEO Elon Musk have called for a pause on AI development, fearing the technology is getting out of hand.

More recently: Two weeks ago, Italy became the first Western country to ban ChatGPT, citing privacy violations and the absence of age restrictions. Italy's data protection authority, the Garante, was particularly concerned about how the chatbot could harm minors and vulnerable individuals, and it also raised doubts about the accuracy of the bot's responses. The ban followed a data breach that exposed conversations between ChatGPT and its users. If OpenAI fails to address the issues within 20 days, the company could face a fine of up to 20 million euros (US$21.68 million) or 4% of its annual global revenue.

The development: Now, OpenAI is responding to Italy's ban on ChatGPT, pledging to protect user data while continuing to advance AI technology. In a recent video conference, CEO Sam Altman promised greater transparency around user data and age verification. OpenAI will propose remedies to the Garante and await its evaluation. Other European privacy regulators are also weighing whether to adopt stronger measures for chatbots in the wake of Italy's ban.

Key comments:

There "appears to be no legal basis underpinning the massive collection and processing of personal data in order to 'train' the algorithms on which the platform relies," said Garante in a statement.

"We don't use data for selling our services, advertising, or building profiles of people," said OpenAI in a blog post. "We use data to make our models more helpful for people. ChatGPT, for instance, improves by further training on the conversations people have with it."

"Consumers are not ready for this technology. They don't realise how manipulative, how deceptive it can be. They don't realise that the information they get is maybe wrong," said Ursula Pachl, deputy director of the European Consumer Organisation (BEUC), to Euronews.