On March 21, about 200 protesters marched through the streets of San Francisco, stopping at the headquarters of frontier artificial intelligence labs OpenAI, Anthropic and xAI while carrying a banner that read 'Stop the AI Race'. The protest was organised by Stop the AI Race, a group led by former AI safety researcher Michael Trazzi.
Their ask: pause the development of AI systems over the significant risks they pose, including existential risk. This isn't the first such protest the AI epicentre has seen. Over the past two years, several groups such as Stop AI and Pause AI have emerged, driving a movement to pause the development of superintelligence that could potentially harm the human race, cause job losses, damage the environment and trigger an AI arms race. AI pioneers such as Stuart Russell, Geoffrey Hinton and Yoshua Bengio, founder and scientific advisor of Mila – Quebec Artificial Intelligence Institute, have supported these campaigns, signing an open letter to pause AI development. But few consider a ban a practical option. It is impossible to stop the development of AI systems, said AI researchers ET spoke with, adding that the focus should instead be on developing solutions that can tackle the ill effects of AI.
Dangers of AGI
Since ChatGPT changed the AI landscape in November 2022, the world has been moving towards superintelligence, where machines surpass human intelligence. Frontier model developers are deliberately building superintelligence, said Jonas Vollmer, chief operating officer at the non-profit AI Futures Project. “That could be wonderful but also extremely dangerous. It could mean that our world is going to be run by these systems that we don’t fully understand, that we don’t fully know how to steer. In this scenario, it ends with the extinction of humanity,” Vollmer told ET at his office in Berkeley, California, near San Francisco. “If you have an AI system that’s smarter than all humans and it concludes that humans are kind of inconvenient to have around for its goal, which is science advancement, then it might as well get rid of them (humans),” he said.
Several AI researchers currently peg the probability of an existential catastrophe caused by AI at 10-15%, a level that is significant and needs to be taken seriously, said Stuart Russell, computer science professor at the University of California, Berkeley. Organisations like the Future of Life Institute, Stop AI and Pause AI are trying to raise awareness about the dangers of these systems through protests and open letters, in order to garner public and government support.
Ban not practical
But it is not possible to stop the development, said Subbarao Kambhampati, professor at Arizona State University. “How are you going to stop the development of AI systems? Even if one government actually agrees with the ban, it is not like we control the entire world,” he said. According to him, the better way is to develop technical solutions to control rogue AI systems. “My biggest AI safety considerations are agentic systems.”
Currently, AI systems cannot execute plans on their own. But agentic systems will have API access to actions, those actions will operate in the real world, and that can cause far more damage. “Safety is important to make sure that you don’t actually execute a plan unless you know for sure that the probability that it will basically cause damage is extremely low and I’m doing research in that area,” Kambhampati said. Yoshua Bengio, founder and scientific advisor of Mila – Quebec Artificial Intelligence Institute, who spoke to ET on the sidelines of the India AI summit, pointed out that initiatives such as open letters and protests can increase awareness of the risks posed by AI systems. Bengio also founded LawZero, a not-for-profit startup focused on building technical solutions for safe AI systems. Russell, cited earlier, raises two major concerns about current AI systems: how safe they are and what level of risk is acceptable when deploying them. According to him, it is important that unsafe systems are not deployed, and that AI systems are subject to verification and licensing.