Google hits pause on Gemini AI due to historical inaccuracy backlash
Recently, Gemini has been dragged on social media for generating inaccurate images, specifically by replacing white people with people of color in historical imagery.
The backstory: When OpenAI first introduced ChatGPT, it caused quite a stir in the artificial intelligence (AI) world. The tool quickly gained traction, reaching 100 million users in just two months – faster than TikTok or Instagram ever did. Microsoft and Google weren't far behind, rolling out their own chatbots to jump on the AI train – Microsoft with its Bing bot and Google with Bard. Bard's debut was rocky, though: during a promotional demo in February 2023, the chatbot stumbled over a fact about the James Webb Space Telescope, briefly sending Alphabet's stock tumbling. Google then upped its game in late 2023, rebranding and upgrading Bard into Gemini, its most advanced AI yet.
While chatbots like ChatGPT and Bard were in the spotlight, there was another tech trend quietly brewing: text-to-image generation. Companies like OpenAI, Midjourney, Runway and Pika were leading the charge, with Google joining in with Gemini's image generator earlier this month.
More recently: Last week, Google's Gemini was in hot water after a viral post on X (formerly Twitter) showed the tool's response to the question "Is Modi a fascist?" – referring to India's Prime Minister Narendra Modi. Its answer drew a lot of backlash when compared with its more guarded responses to similar questions about former US President Donald Trump and current President Joe Biden, with people saying the tool was biased. India's technology minister said the incident broke India's laws, while a Google spokesperson said, "We've worked quickly to address this issue."
The development: Critics say Gemini's image generator shows bias, especially regarding historical figures or imagery – most notably, replacing white people with people of color in historical scenes. Social media also blew up with accusations that Google was pushing a certain agenda. While OpenAI's Dall-E has faced criticism for producing overly stereotypical images of race and ethnicity, it seems Google has overcorrected in the opposite direction, with some saying it's trying to be too "woke."
Facing the heat, Google admitted that Gemini's AI was off the mark with historical accuracy and vowed to fix it immediately. Then, the company confirmed on Thursday that it's pausing Gemini's image generation of people while it works on making it better. Jack Krawczyk, senior director for Gemini Experiences, stressed that Google is taking bias seriously and is hustling to make things right.
Key comments:
"We're already working to address recent issues with Gemini's image generation feature. While we do this, we're going to pause the image generation of people and will re-release an improved version soon," said Google on X.
"Gemini's AI image generation does generate a wide range of people. And that's generally a good thing because people around the world use it. But it's missing the mark here," said Jack Krawczyk, senior director for Gemini Experiences.
"We will continue to do this for open ended prompts (images of a person walking a dog are universal!)," said Krawczyk on X. "Historical contexts have more nuance to them and we will further tune to accommodate that."