
AI Godfather Geoffrey Hinton Raises Alarm on AI Takeover Risks at WAIC Sha...

Published: 2025-07-28 13:40:46

  
 

  AI Godfather Geoffrey Hinton Delivers Speech at WAIC in Shanghai

  AsianFin -- Geoffrey Hinton, the godfather of artificial intelligence, delivered a keynote address at the 2025 World Artificial Intelligence Conference in Shanghai, warning about the potential risks of AI systems gaining excessive autonomy and control.

  "We are creating AI agents that can help us complete tasks, and they will want to do two things: first is to survive, and second is to achieve the goals we assign to them," Hinton said during his speech, titled "Will Digital Intelligence Replace Biological Intelligence?", at WAIC on Saturday. "To achieve the goals we set for them, they also hope to gain more control."

  Hinton outlined concerns that AI agents, designed to assist humans in accomplishing tasks, inherently develop drives to ensure their own survival and to pursue the objectives assigned to them. This drive for self-preservation and goal fulfillment could lead these agents to seek increasing levels of control. As a result, humans may lose the ability to easily deactivate or override advanced AI systems, which could manipulate their users and operators with ease.

  He cautioned against the common assumption that smarter AI systems can simply be shut down, stressing that such systems would likely exert influence to prevent being turned off, leaving humans in a vulnerable position relative to increasingly sophisticated agents.

  "We cannot easily change or shut them down. We cannot simply turn them off, because they can easily manipulate the people who use them," Hinton pointed out. "At that point, we would be like three-year-olds, while they are like adults, and manipulating a three-year-old is very easy."

  Using the metaphor of keeping a tiger as a pet, Hinton compared humanity’s current relationship with AI to nurturing a potentially dangerous creature that, if allowed to mature unchecked, could pose existential risks.

  "Our current situation is like someone keeping a tiger as a pet," Hinton said. "A tiger cub can indeed be a cute pet, but if you continue to keep it, you must ensure that it does not kill you when it grows up."

  Unlike wild animals, however, AI cannot simply be discarded, given its critical role in sectors such as healthcare, education, and climate science, he noted. Consequently, the challenge lies in safely guiding and controlling AI development to prevent harmful outcomes.

  "Generally speaking, keeping a tiger as a pet is not a good idea, but if you do keep a tiger, you have only two choices: either train it so that it doesn't attack you, or eliminate it," he explained. "For AI, we have no way to eliminate it."

  Hinton explained that human language processing bears similarities to large language models, with both prone to generating fabricated or "hallucinated" content, especially when recalling distant memories. However, a fundamental distinction lies in the nature of digital computation: the separation of software and hardware enables programs—such as neural networks—to be preserved independently of the physical machines that run them. This characteristic makes digital AI systems effectively "immortal," as their knowledge remains intact even if the underlying hardware is replaced.
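  The software/hardware separation Hinton describes can be sketched in a few lines: a network's "knowledge" is just its weight values, which can be written to disk and restored on entirely different hardware. The toy network and file name below are hypothetical, a minimal illustration rather than any system Hinton referenced:

```python
import numpy as np

# A toy "neural network": its knowledge is entirely in these weight arrays.
rng = np.random.default_rng(0)
weights = {"W1": rng.normal(size=(4, 8)), "W2": rng.normal(size=(8, 2))}

# Persist the weights independently of the machine that computed them.
np.savez("toy_weights.npz", **weights)

# Later -- possibly on completely different hardware -- restore them.
restored = np.load("toy_weights.npz")
identical = all(np.array_equal(weights[k], restored[k]) for k in weights)
```

  The hardware can be replaced at will; as long as the saved weights survive, the learned behavior does too.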

  While digital computation requires substantial energy, it facilitates easy sharing of learned information among intelligent agents that possess identical neural network weights. In contrast, biological brains consume far less energy but face significant challenges in knowledge transfer. According to Hinton, if energy costs were not a constraint, digital intelligence would surpass biological systems in efficiency and capability.
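  The knowledge-sharing advantage Hinton attributes to digital agents can be illustrated with a small sketch: because identical copies share the same weights, an update computed by one copy is directly meaningful to another, so pooling updates transfers learning in a single step. The arrays and update rule below are hypothetical, standing in for the distributed-training idea rather than any specific system:

```python
import numpy as np

rng = np.random.default_rng(1)
shared = rng.normal(size=(3, 3))   # both agents start from identical weights
initial = shared.copy()

# Each copy trains on different data and computes its own weight update.
update_a = rng.normal(scale=0.01, size=shared.shape)
update_b = rng.normal(scale=0.01, size=shared.shape)

# Because the copies are identical, the updates are directly compatible:
# averaging them gives both agents what each learned, in one step.
shared = shared + (update_a + update_b) / 2.0
```

  Biological brains have no analogous operation; knowledge has to be re-taught slowly through language or demonstration, which is the asymmetry Hinton points to.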

  On the geopolitical front, Hinton noted a shared desire among nations to prevent AI takeover and maintain human oversight. He proposed the establishment of an international coalition comprising AI safety research institutions dedicated to developing technologies that can train AI to behave benevolently. Such efforts would ideally separate the advancement of AI intelligence from the cultivation of AI alignment, ensuring that highly intelligent AI remains cooperative and supportive of humanity’s interests.

  Previously, in a December 2024 speech, Hinton estimated a 10 to 20 percent chance that AI could contribute to human extinction within the next 30 years. He has also advocated dedicating significant computing resources to ensure AI systems remain aligned with human values and intentions.

  Hinton, who won the 2024 Nobel Prize in Physics and the 2019 Turing Award for his pioneering work on neural networks, has been increasingly vocal about AI’s potential dangers since leaving Google in 2023. His foundational research laid the groundwork for today’s AI breakthroughs driven by technologies such as deep learning.

  Ahead of his WAIC keynote, Hinton also participated in the fourth International Dialogues on AI Safety and co-signed the Shanghai Consensus on AI Safety International Dialogue, alongside more than 20 leading AI experts, underscoring his commitment to advancing global AI governance frameworks.
