Steven Adler, a former AI safety researcher at OpenAI, has publicly voiced serious concerns about the rapid pace of the artificial general intelligence (AGI) arms race. Adler, who left the company at the end of last year after four years there, described the global race toward AGI development as a “very risky gamble” for humanity.
The AGI Race and Its Alarming Risks
Adler took to X, formerly known as Twitter, to share his apprehensions about the trajectory of AGI advancements. “An AGI race is a very risky gamble, with huge downside,” Adler stated in his post. “No lab has a solution to AI alignment today. And the faster we race, the less likely that anyone finds one in time.”
AI alignment, the process of ensuring AI systems adhere to human goals and values, remains unresolved across all AI labs. Adler’s statement underscores growing fears that accelerating AGI development without solving alignment issues could lead to catastrophic consequences.
Adler, who previously led safety research for OpenAI’s product launches and speculative AI systems, expressed his increasing unease about the pace of AI progress. “When I think about where I’ll raise a future family, or how much to save for retirement, I can’t help but wonder: Will humanity even make it to that point?” he confessed.
The Pressure Cooker of Global AI Competition
Adler’s concerns reflect a broader anxiety shared by AI experts globally. The AGI race, particularly between the U.S. and China, has pushed AI labs to prioritize speed over safety. The competitive environment creates a “bad equilibrium,” as Adler called it, where even labs committed to responsible AGI development feel compelled to cut corners just to keep up.
This cautionary perspective aligns with remarks from Stuart Russell, a professor of computer science at UC Berkeley. Russell previously warned, “The AGI race is a race towards the edge of a cliff.” Even some AI company leaders, including OpenAI CEO Sam Altman, have acknowledged the existential risks posed by AGI advancements.
OpenAI Faces Escalating Scrutiny
Adler’s resignation is not an isolated incident. OpenAI has experienced a notable exodus of safety researchers over the past few years, further fueling debates about its commitment to AI safety. Key figures such as Ilya Sutskever and Jan Leike, who previously co-led OpenAI’s Superalignment team, also departed after voicing concerns about the lab’s shifting priorities.
Leike, for instance, criticized OpenAI’s leadership for prioritizing product launches over safety protocols. “Safety culture and processes have taken a backseat to shiny products,” he wrote on X, following his resignation. Even former OpenAI governance researcher Daniel Kokotajlo highlighted that nearly half of OpenAI’s long-term AI risk staff had left the company.
OpenAI’s internal challenges came into stark focus in 2023, when CEO Sam Altman was temporarily ousted amid mounting controversies surrounding AI safety. Although he was reinstated within days, the incident left lingering doubts about the company’s direction and its ability to balance innovation with ethical responsibility.
Chinese Competition Heats Up
Adler’s warnings come at a time when the global AI rivalry is intensifying. On Monday, Chinese AI company DeepSeek made waves by reportedly developing an AI model that matches or surpasses leading U.S. models at a fraction of the cost. The announcement rattled U.S. tech leaders, prompting Altman to announce expedited OpenAI product releases to counter DeepSeek’s momentum.
Altman himself described the competition as “invigorating” and said OpenAI remains committed to advancing AGI. “We look forward to bringing the world AGI and beyond,” he declared, signaling OpenAI’s response to international challengers.
The Bigger Picture
Adler’s departure, and the broader exodus of AI safety researchers, has raised serious questions about the future of responsible AI development. His candid critique sheds light on a critical issue: the tension between the race for technological dominance and the need for safeguards against potentially unmanageable AI systems.
For now, Adler remains outspoken about the dangers of unchecked AGI development and the need for global cooperation on AI safety regulations. His stark warning serves as a wake-up call for an industry caught between exciting innovation and existential risk.
With the stakes higher than ever, one thing is clear: the global AI race is not just a competition; it is a gamble that could define the fate of humanity.