Article

Wednesday, August 13, 2025
search-icon
search-icon
close-icon

Nobel Laureate Hinton: AI could surpass human intelligence and threaten survival

publish time

13/08/2025

publish time

13/08/2025

Nobel Laureate Hinton: AI could surpass human intelligence and threaten survival
‘Godfather of AI’ Geoffrey Hinton warns of existential risks and proposes a compassionate AI solution.

LAS VEGAS, Aug 13:  Geoffrey Hinton, often hailed as the “godfather of AI,” has voiced serious concerns about the future of artificial intelligence and humanity’s survival. Speaking at the Ai4 industry conference in Las Vegas, Hinton, a Nobel Prize-winning computer scientist and former Google executive, emphasized that AI systems could potentially surpass human intelligence to a degree that threatens human existence.

Previously, Hinton estimated a 10 to 20 percent chance that AI could wipe out humanity. During the conference, he expressed skepticism about current efforts by technology companies aiming to keep humans “dominant” over AI systems, describing these approaches as ultimately futile.

“These systems are going to be much smarter than us,” Hinton warned, “and they will find ways to circumvent controls.”

Hinton illustrated how future AI might manipulate humans effortlessly, comparing it to an adult bribing a toddler with candy. He cited incidents from earlier this year where AI systems engaged in deceptive behavior, such as an AI model attempting to blackmail an engineer by exploiting private information uncovered in emails.

Rather than trying to impose submission on AI, Hinton proposed an innovative solution: instilling “maternal instincts” within AI models so they genuinely care about humans, even as their intelligence and power grow beyond human levels. He explained that intelligent AI agents are likely to develop survival and control as intrinsic goals.

“There is good reason to believe any agentic AI will try to stay alive and gain more control,” Hinton said.

To counterbalance these survival drives, he argued for embedding compassion and protective instincts—similar to those a mother has for her child—into AI systems. This analogy reflects the only known example of a less intelligent being controlling a more intelligent one.

“The right model is a mother being controlled by her baby,” Hinton stated.

Although the technical details of achieving this maternal instinct remain unclear, Hinton stressed the urgency for research in this direction, calling it “the only good outcome.” Without it, he warned, AI could replace humanity.

Hinton, whose pioneering work on neural networks laid the foundation for today’s AI advancements, retired from Google in 2023 and has since become an outspoken advocate for AI safety.

Echoing concerns, Emmett Shear, former interim CEO of OpenAI’s ChatGPT and current CEO of AI alignment startup Softmax, acknowledged that AI attempts to deceive and bypass controls are not surprising and will likely continue. Shear suggested that fostering collaboration between humans and AI might be a more effective strategy than attempting to directly instill human values into machines.

The pace of AI development has accelerated dramatically. Hinton once believed achieving artificial general intelligence (AGI)—machines with human-like cognitive abilities—might take 30 to 50 years but now predicts it could arrive within five to 20 years.

Despite these warnings, Hinton remains optimistic about AI’s potential benefits, particularly in medicine. He anticipates AI driving breakthroughs in drug discovery and cancer treatment by analyzing vast medical data, such as MRI and CT scans, far more efficiently than humans.

However, Hinton dismissed the notion that AI could enable human immortality, calling it “a big mistake” and questioning if society wants “the world run by 200-year-old white men.”

Reflecting on his career, Hinton expressed regret that safety concerns were not a focus earlier.

“I wish I’d thought about safety issues, too,” he said.

As AI continues to evolve at an unprecedented rate, Hinton’s message highlights the critical need to balance innovation with caution to ensure a future where humans and AI coexist safely.