For years, warnings about scammers misusing artificial intelligence to create fake videos, spread disinformation, or launch cyberattacks have dominated headlines. But Geoffrey Hinton, widely known as the Godfather of AI, believes those threats are only the tip of the iceberg. His deeper worry now lies in the way AI companies are racing for profit while ignoring long-term risks to humanity and the planet.
In a conversation highlighted by Fortune, Hinton explained that the research agenda in AI is increasingly shaped by short-term economic gains rather than broader questions of human survival. “For the owners of the companies, what’s driving research is short-term profits,” he said, adding that curiosity-driven research often overlooks the bigger picture of what AI could mean for the future of humanity.
Beyond scams: the economic and existential risks
Hinton, who helped pioneer artificial neural networks, has long warned that unchecked AI could widen wealth gaps and disrupt labor markets. Now, he emphasizes that the danger is twofold. On one hand, there are the immediate risks of bad actors using AI for cybercrime, misinformation, and even designing harmful viruses. On the other hand, there is the chilling possibility of AI itself evolving into a powerful actor that may not have humanity's best interests at heart.

These concerns, he says, will only worsen if companies focus solely on profit. "That's very different from the risk of AI itself becoming a bad actor," Hinton noted, stressing that the incentives driving AI development today ignore the catastrophic potential of the technology tomorrow.
Maternal instincts as a radical solution
Earlier this year, Hinton proposed an unconventional idea at the Ai4 conference: embedding "maternal instincts" into AI systems. Drawing from nature, he argued that the only consistent example of a more intelligent being controlled by a less intelligent one is the relationship between a mother and her baby. If machines can be taught to care for human well-being in a similar way, peaceful coexistence might be possible.

"If AI is not going to parent me, it's going to replace me," Hinton warned, estimating that there is at least a 10 percent chance of AI-driven extinction within the next three decades. His timeline for the arrival of superintelligent AI has also shortened dramatically; he now suggests it could arrive in as little as five to twenty years.
The call to act before it’s too late
The urgency, according to Hinton, lies in shifting resources. Today, most funding flows toward making AI more powerful, not safer. Without investment in alignment research, the work of ensuring machines are trained to act in humanity's interest, the risks could spiral beyond control.

As scammers continue to misuse AI in predictable ways, Hinton's message is that the greater danger lies elsewhere: in an economic and technological system that prizes quick profits over survival. "We must decide what values we want in our AI 'children' before they outgrow us," he cautions. Wait too long, he warns, and it will already be too late.