An artificial intelligence pioneer nicknamed the “Godfather of AI” has left his position at big tech firm Google to speak more openly about the technology’s potential dangers.
Before resigning, Dr. Geoffrey Hinton worked on machine learning algorithms at Google for more than a decade. He reportedly earned the nickname through his lifelong work on neural networks.
However, in a May 1 tweet, Hinton clarified that he has left his position at Google “so that I can speak about the dangers of AI.”
In today’s NYT, Cade Metz hints that I left Google to criticize Google. Actually, I left to talk about the dangers of AI without thinking about how it affects Google. Google has acted very responsibly.
— Geoffrey Hinton (@geoffreyhinton) May 1, 2023
In an interview with The New York Times, Hinton said his most immediate concern about AI was its use in flooding the internet with fake photos, videos and text, to the point where many “can no longer know what is true.”
Hinton’s other concerns include AI taking over jobs. He believes AI could eventually pose a threat to humanity as it learns unexpected behaviors from the vast amounts of data it analyses.
He also expressed concern about the ongoing AI arms race aimed at advancing the technology for use in lethal Autonomous Weapons Systems (LAWS).
Hinton also expressed partial regret over his life’s work:
“I console myself with the usual excuse: If I hadn’t done it, someone else would have.”
In recent months, regulators, legislators and tech industry executives have also expressed their concerns about the evolution of AI. In March, over 2,600 tech executives and researchers signed an open letter urging a temporary halt to AI development, citing “profound risks to society and humanity.”
A group of 12 European Union lawmakers signed a similar letter in April, and a recent EU draft law classifies AI tools according to their level of risk. The United Kingdom is also committing $125 million to a task force charged with developing “safe AI.”
AI is used in fake news campaigns and pranks
AI tools are already reportedly being used for disinformation, with recent examples of media outlets being tricked into publishing fake news, while a German company even used AI to fake an interview.
On May 1, Binance claimed to have been the victim of a ChatGPT-originated smear campaign, sharing evidence that the chatbot claimed its CEO Changpeng “CZ” Zhao was a member of a Chinese Communist Party youth organization.
To all the crypto and AI sleuths out there, here’s the ChatGPT thread in case anyone wants to dig in. As you will see, ChatGPT pulls this from a fake LinkedIn profile and a non-existent @Forbes article. We can find no evidence that this story or the LinkedIn page ever existed. pic.twitter.com/szLaix3nza
— Patrick Hillmann (@PRHillmann) May 1, 2023
The bot linked to a Forbes article and a LinkedIn page from which it claimed to have obtained the information, but the article does not appear to exist and the LinkedIn profile is not Zhao’s.
Last week, a group of pranksters also tricked several media outlets around the world, including the Daily Mail and The Independent.
Related: Scientists in Texas have developed a GPT-like AI system that can read minds
The Daily Mail published and later took down a story about an alleged Canadian actor named “Saint Von Colucci,” who supposedly died after undergoing cosmetic surgery to make him look more like a South Korean pop star.
The story stemmed from a press release about the actor’s death, sent out by an entity posing as a PR firm and apparently using AI-generated imagery.
In April, the German magazine Die Aktuelle published an interview that used ChatGPT to synthesize a conversation with former Formula 1 driver Michael Schumacher, who suffered a serious brain injury in a 2013 skiing accident.
It was reported that Schumacher’s family would take legal action over the article.
Magazine: AI Eye: “Biggest AI leap ever”, cool new tools, AIs are real DAOs