The backstory: ChatGPT, by the company OpenAI, has become a sensation in the AI world. Just to give you an idea of its growth, it took TikTok around nine months after its global launch to reach 100 million users; Instagram took 2.5 years. ChatGPT? Two months. But not everyone is cheering about the tech. Elon Musk, for example, and other AI experts have said they are worried about the risks associated with developing more powerful AI systems. Musk even signed an open letter calling for a six-month break in creating systems stronger than OpenAI's GPT-4.
Just last month, OpenAI's CEO, Sam Altman, testified before a Senate subcommittee, highlighting the need for regulation in the rapidly evolving field of AI. The three-hour hearing covered all sorts of things, like risks to society and job market impacts.
More recently: OpenAI has been caught up in some legal drama. Last November, the company got hit with a class action lawsuit alongside Microsoft and GitHub. The claim is that their coding assistant, GitHub Copilot, generates functionally similar bits of code without honoring the licensing or copyright protections of the original source material. Basically, the plaintiffs say it's copying code, even if there are slight alterations. Think of that age-old argument over Vanilla Ice copying a hook from musical legends Queen and David Bowie in his song “Ice Ice Baby.” But the defendants say these strings of adapted code aren’t infringing because they aren’t the same, and Copilot isn’t reproducing code verbatim from those sources.
Also, last month, OpenAI got slapped with a defamation lawsuit. A radio host in the US said that ChatGPT wrongly accused him of fraud. This case is important because a lot of people have been complaining about ChatGPT (and similar bots) generating false information.
The development: OpenAI is now dealing with a lawsuit filed by some unnamed individuals who are seeking class action status. And who else is caught up in this legal tangle? Microsoft, the tech heavyweight planning to invest US$13 billion in OpenAI. The plaintiffs are hiding their identities to avoid backlash, and they’re seeking damages of up to US$3 billion.
Essentially, they say OpenAI has been getting hold of massive amounts of personal info without consent. We're talking about 300 billion words scraped from the internet. They say that OpenAI is breaking privacy laws left and right, with the lawsuit even naming "civilizational collapse" as a potential risk.
According to the lawsuit, OpenAI allegedly got its hands on this private info through platforms that integrate its software, like Snapchat, Spotify, Stripe, Slack and Microsoft Teams. The plaintiffs argued that OpenAI had abandoned its original mission of using AI for the greater good, shifting its focus solely to raking in profits. They even estimated that ChatGPT alone would make US$200 million in revenue this year. But the people behind this suit aren’t just after cash. They're also pushing to stop OpenAI's products from being used commercially and developed further.
“Despite established protocols for the purchase and use of personal information, Defendants took a different approach: theft,” said the anonymous individuals who filed the lawsuit.
“When you put content on a social media site or any site, you’re generally granting a very broad license to the site to be able to use your content in any way,” said Katherine Gardner, an intellectual-property lawyer at Gunderson Dettmer. “It’s going to be very difficult for the ordinary end user to claim that they are entitled to any sort of payment or compensation for use of their data as part of the training.”
“All of that information is being taken at scale when it was never intended to be utilized by a large language model,” said Ryan Clarkson, managing partner of Clarkson, the law firm behind the suit.
“My worst fears are that we cause, we the field, the technology, the industry, cause significant harm to the world,” said Sam Altman, CEO of OpenAI, last month while testifying before members of a Senate subcommittee about AI regulation. “I think if this technology goes wrong, it can go quite wrong.”
“We think that regulatory intervention by governments will be critical to mitigating the risks of increasingly powerful models,” said Altman.