AI 'godfather' quits Google and warns of dangers ahead
Dr Geoffrey Hinton, widely referred to as AI's "godfather," has confirmed in an interview with the New York Times that he has quit his job at Google -- to talk about the dangers of the technology he helped develop.
Hinton's pioneering work in neural networks -- for which he won the 2018 Turing Award alongside fellow researchers Yoshua Bengio and Yann LeCun -- laid the foundations for the current advancement of generative AI.
The lifelong academic and computer scientist joined Google in 2013, after the tech giant spent $44m to acquire a company founded by Hinton and two of his students, Ilya Sutskever (now chief scientist at OpenAI) and Alex Krizhevsky. Their neural network system ultimately led to the creation of ChatGPT and Google Bard.
But Hinton has come to partly regret his life's work, as he told the NYT. "I console myself with the normal excuse: If I hadn't done it, somebody else would have," he said. He decided to leave Google so that he could speak freely about the dangers of AI without having to consider how his warnings might impact the company itself.
In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.
-- Geoffrey Hinton (@geoffreyhinton) May 1, 2023
According to the interview, Hinton was prompted by Microsoft's integration of ChatGPT into its Bing search engine, which he fears will drive tech giants into a potentially unstoppable competition. This could result in a flood of fake photos, videos, and text to the extent that an average person won't be able to "tell what's true anymore."
But apart from misinformation, Hinton also voiced concerns about AI's potential to eliminate jobs and even write and run its own code, as it seems capable of becoming smarter than humans much sooner than expected.
The more companies improve artificial intelligence without control, the more dangerous it becomes, Hinton believes. "Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That's scary."
The need to control AI development
Geoffrey Hinton isn't alone in expressing fears over AI's rapid and uncontrolled development.
In late March, more than 2,000 industry experts and executives in North America signed an open letter, calling for a six-month pause in the training of systems more powerful than OpenAI's GPT-4, the model behind the latest version of ChatGPT.
The signatories -- including researchers at DeepMind, computer scientist Yoshua Bengio, and Elon Musk -- emphasised the need for regulatory policies, cautioning that "powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."
Across the Atlantic, ChatGPT's growth has stirred the efforts of EU and national authorities to efficiently regulate AI's development without stifling innovation.
Individual member states are trying to oversee the operation of advanced models. For instance, Spain, France, and Italy have opened investigations into ChatGPT over data privacy concerns -- with Italy becoming the first Western country to restrict the tool's use after imposing a temporary ban on the service.
The union as a whole is also moving closer to the adoption of the anticipated AI Act -- the world's first AI law by a major regulatory body. Last week, Members of the European Parliament agreed to advance the draft to the next stage, called trilogue, in which lawmakers and member states will work out the bill's final details.
According to Margrethe Vestager, the EU's tech regulation chief, the bloc is likely to agree on the law this year, meaning businesses should already start considering its implications.
"With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted. By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way," Vestager said when the bill was first announced.
Unless regulatory efforts in Europe and across the globe are sped up, we risk repeating the Oppenheimer approach Hinton is now sounding the alarm about: "When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success."