Why did Geoffrey Hinton, aka the “Godfather of AI,” leave Google?

Artificial intelligence pioneer Geoffrey Hinton speaks at the Thomson Reuters Financial and Risk Summit in Toronto, December 4, 2017. Reuters/Mark Blinch/File Photo

The backstory: Right now, it probably seems that no matter where we look – social media, friends and family, the news – everyone's talking about AI. It's progressing, gaslighting us, taking our jobs, something ... and it doesn't seem to be slowing down or stopping anytime soon. In fact, as we mentioned yesterday, IBM CEO Arvind Krishna said in a recent interview that the company will pause hiring for jobs it expects AI to replace. "I could easily see 30% of [non-customer-facing roles] getting replaced by AI and automation over a five-year period."

One of the pioneers who laid much of the foundation for the AI tech that bigger companies are building on today is Geoffrey Hinton. About a decade ago, Google spent around US$44 million to buy a company that Hinton started with two of his students in 2012, which had made huge leaps in areas like speech recognition. For the past 10 years or so, Hinton has worked part-time at Google, where he has become one of the biggest voices in the AI space – so much so that the 75-year-old is dubbed the "Godfather of AI."

The development: Now, Hinton has left Google, telling The New York Times that the world of AI is quite "scary" and that a part of him regrets his life's work. He also said that he left the company so he could speak more freely about the dangers of AI. Hinton fears that an escalating race among tech companies to build ever more capable AI could lead to the technology becoming smarter than humans. He also worries about the internet being flooded with fake images and text – so much so that most people won't be able to tell what's real and what's not.

“I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that,” he said. “I console myself with the normal excuse: If I hadn’t done it, somebody else would have.” Hinton later clarified, though, that he believes Google has acted responsibly.

Some other dangers Hinton warns about include nations worldwide advancing AI without any regulation. He’s also deeply against AI being used for warfare, or what he calls “robot soldiers,” but without global rules, it’s hard to say how far the tech will go. And, unlike with nuclear weapons, it’s much harder to detect “bad actors” working on AI technology in secret.

Key comments:

“Look at how it was five years ago and how it is now,” Geoffrey Hinton said to The New York Times of AI technology. “Take the difference and propagate it forwards. That’s scary.”

“In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly,” tweeted Hinton.

“I’ll miss him, and I wish him well!” wrote Jeff Dean, Hinton’s supervisor within Google Brain. “As one of the first companies to publish AI Principles, we remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly.”