The backstory: Last November, the AI chatbot race kicked off in earnest when OpenAI launched ChatGPT, an eye-opener for many into how far AI had come. A few months later, Microsoft launched an AI chatbot for its Bing search engine, and it later announced it would integrate AI capabilities across the work tools in its Office suite.
More recently: Google is responsible for much of the research behind these advances in AI, but it has lagged in launching its own user-ready AI tools. In March, the company launched an experimental version of its own AI chatbot, Bard, to compete with other tech giants. But in a promotional video for the service, Bard flubbed, giving an inaccurate answer to a question about the James Webb Space Telescope.
Google has reportedly been pushing to launch Bard so it can catch up to Bing's AI bot and OpenAI's ChatGPT. Meanwhile, Samsung is reportedly considering replacing Google with Bing as the default search engine on its smartphones, possibly because Google is still trailing in the AI race. So the company is under a lot of pressure to catch up.
The development: Now, 18 current and former Google workers have told Bloomberg that the competition has pushed Google to sideline ethics for the sake of launching Bard ASAP. Ethics teams at Google have reportedly tried to pump the brakes on Bard's release but have been overruled by Google's business goals.
These workers referred to Bard as "worse than useless," a "pathological liar," and "cringeworthy," and reported to their superiors that the bot may even be harmful. For example, when asked for tips on how to land a plane, Bard gave advice that would have led to a crash. Bard also allegedly gave scuba diving tips "which would likely result in serious injury or death." In an internal message group, one employee wrote in February, "Bard is worse than useless: please do not launch." Simply put, these insiders say Bard just isn't ready for the public.
“AI ethics has taken a back seat,” said Meredith Whittaker, former Google manager and the current president of the Signal Foundation, which supports private messaging. “If ethics aren’t positioned to take precedence over profit and growth, they will not ultimately work.”
“We are continuing to invest in the teams that work on applying our AI Principles to our technology,” said Brian Gabriel, a spokesperson for Google, to Bloomberg.