Former OpenAI safety researcher: the AGI race is a gamble with a huge downside

The rapid pace of AI development continues to spark intense debate, with safety researchers raising red flags about the risks of artificial general intelligence (AGI). Former OpenAI safety researcher Steven Adler recently announced his departure from the company after four years, citing serious concerns about the AGI race and its potential catastrophic consequences.
“IMO, an AGI race is a very risky gamble, with huge downside,” Adler shared in his announcement on X. “No lab has a solution to AI alignment today. And the faster we race, the less likely that anyone finds one in time.”
Adler also expressed personal fears about the future, questioning whether humanity can navigate the challenges of uncontrolled AI development. He noted that even well-intentioned labs face pressure to cut safety corners as competitors push forward at breakneck speed.
This isn’t the first time OpenAI has faced criticism for its approach to safety. Last year, Jan Leike, OpenAI’s former Head of Alignment, also left the company, citing disagreements over the company’s prioritization of next-generation models ahead of safety processes. Reports suggest that OpenAI rushed the release of GPT-4o, leaving the safety team with less than a week to run necessary tests. While an OpenAI spokesperson admitted the launch process was stressful, they denied cutting safety corners.
Adding to concerns is the $500 billion investment from OpenAI and SoftBank in the Stargate project, aimed at building advanced data centers across the U.S. to accelerate AI capabilities. Critics argue this relentless drive toward AGI comes at the expense of robust safety measures.
Anthropic CEO Predicts AI Could Extend Lifespans to 150 Years
While some warn of existential risks, others see AI as a transformative force for good. Speaking at the World Economic Forum in Davos, Anthropic CEO Dario Amodei expressed optimism about the potential of AI to revolutionize health and longevity.
“By 2026 or 2027, we will have AI systems that are broadly better than almost all humans at almost all things,” Amodei said.
Amodei believes AI could double human lifespans within the next decade, potentially extending life expectancy to 150 years by 2037. He highlighted AI’s ability to fast-track advancements in biology, predicting that 100 years’ worth of progress could be achieved in just five to ten years if “we really get this AI stuff right.”
“A doubling of the human lifespan is not at all crazy,” Amodei stated, emphasizing the potential for revolutionary breakthroughs in medicine and technology.
However, he acknowledged that predicting AI development is not an exact science, and the trajectory of progress could differ from current expectations.
The Race Toward AGI
Meanwhile, OpenAI CEO Sam Altman has shared his bold vision for AGI, claiming that his team knows how to build it and could achieve it sooner than anticipated with current hardware. Altman also hinted at a shift toward superintelligence, which he predicts could lead to a 10x increase in scientific breakthroughs every year, fundamentally reshaping industries and society.
As the race toward AGI accelerates, the divide between optimism and caution widens. While companies like Anthropic envision life-changing advancements, researchers like Adler and Leike are sounding the alarm about the risks of moving too fast without solving critical safety challenges.
The future of AI remains uncertain, but one thing is clear: the decisions made today will shape the course of human history for generations to come.
RS News (Research Snipers) focuses on technology news, with a special emphasis on mobile technology, tech companies, and the latest trends in the industry. RS News has extensive experience covering the latest stories in technology.