Leading AI companies are alarmingly unprepared for the potential dangers of human-level artificial intelligence, a new report warns. Despite their claims that Artificial General Intelligence (AGI) is achievable within the decade, none of the major developers scored above a C+ for overall safety, raising serious concerns about the industry's readiness for its own ambitious goals.
AI Giants Lack Robust Safety Plans for AGI
A recent report by the US non-profit Future of Life Institute (FLI) reveals a "deeply disturbing" lack of safety planning among the world's leading AI companies. The FLI's latest AI Safety Index, which grades top-tier AI firms on how they manage safety and security risks, found that despite their stated aim of achieving AGI within the next ten years, none of the benchmarked companies earned an overall safety grade higher than a C+.
This critical assessment highlights a significant gap between technological ambition and the necessary safety infrastructure.
Existential Safety: A Major Concern
One of the most alarming findings was the poor performance in "existential safety," which assesses companies' preparedness for managing extreme risks from future AI systems that match or exceed human capabilities. None of the seven firms benchmarked by the FLI managed to score above a D in this crucial category.
Worst Performers: Chinese-owned firms Zhipu AI and DeepSeek received the lowest possible safety grade, F, with major issues across all categories, including risk assessments, safety frameworks, and information sharing.
Lacking Strategies: Meta and OpenAI were found deficient in existential safety strategies, internal monitoring, technical safety research, and support for external research.
OpenAI's "Deteriorating Safety Culture": The FLI specifically noted a decline in OpenAI's safety culture, attributing it to the loss of most of its safety researchers over the past year, resulting in a failing grade for the company in this area.
Slightly Better: Anthropic was commended for its detailed and transparent research regime, while Google DeepMind's safety documentation was described as "well thought out, with a serious commitment to monitoring."
An Unregulated Industry
Max Tegmark, co-founder of FLI, likened the current situation to "building a gigantic nuclear power plant in New York City set to open next week – but there is no plan to prevent it having a meltdown." The report concludes that the industry is "fundamentally unprepared for its own stated goals," with competitive pressures and technological ambition far outpacing safety infrastructure and norms.

Another watchdog, SaferAI, echoed these concerns, labelling the current safety regimes at top AI companies as "weak to very weak" and their approach "unacceptable."
Key Takeaways
No major AI developer scored higher than a C+ in overall safety.
None of the seven firms scored above a D in existential safety, which assesses preparedness for extreme risks from human-level AI.
Chinese firms Zhipu AI and DeepSeek received an F grade for overall safety.
OpenAI's safety culture was noted as deteriorating, leading to a failing grade in existential safety.
Experts warn that the industry lacks a coherent, actionable plan for ensuring the safety and control of advanced AI systems.
The report highlights an unregulated industry where the pursuit of AGI outpaces the development of robust safety measures.