The AI Race to the Cliff: Only Anthropic Scores Above C, As FLI Report Reveals ‘Deeply Disturbing’ Lack of Safety Controls

[Image: A futuristic glowing AI brain made of digital circuitry hovers above a platform as researchers observe its expanding power, symbolizing the risks of unprepared AGI development.]

Major AI developers, including OpenAI and Meta, are rushing toward superintelligence without a “coherent, actionable plan” for control, prompting urgent calls for legally binding regulation.

A new, scathing independent assessment of the world’s leading AI companies reveals that their internal safety practices are critically underdeveloped, leaving humanity unprepared for the very systems they are rapidly developing.

According to the Summer 2025 AI Safety Index released by the Future of Life Institute (FLI), a panel of distinguished AI scientists found a “deeply disturbing” disconnect between the industry’s ambitions and its safety readiness. Despite firms racing toward Artificial General Intelligence (AGI) within the decade, none of the seven assessed companies scored above a D in Existential Safety planning.

The Index serves as a stark warning: if AI development is a race to build the fastest, most powerful car in the world, the developers are ignoring the brakes, seatbelts, and steering controls even as they approach what experts describe as a “cliff edge.”

Failure in Existential Preparedness

The most alarming finding of the report centers on the fundamental lack of preparation for managing catastrophic risks associated with smarter-than-human systems.

While companies like OpenAI and Anthropic are predicting AGI within 2-5 years, the report found they lack “anything like a coherent, actionable plan” for ensuring these systems remain safe and controllable. The scores in the Existential Safety domain were abysmal:

  • Anthropic was the highest, scoring a D.
  • OpenAI, DeepSeek, Zhipu AI, and Meta all received a failing F.

The index specifically measures concrete strategies for technical alignment, control plans, clear criteria for halting development, and post-AGI governance structures, all areas where major developers were found lacking.

The Safety Scoreboard: Anthropic Leads a Failing Class

The FLI Index evaluated seven major firms on 33 indicators spanning domains like Risk Assessment, Current Harms, and Governance & Accountability. Overall, the grades reinforce the view that self-regulation is failing:

Company | Overall Grade | Overall Score
Anthropic | C+ | 2.64
OpenAI | C | 2.10
Google DeepMind | C- | 1.76
xAI | D | 1.23
Meta | D | 1.06
Zhipu AI | F | 0.62
DeepSeek | F | 0.37

Anthropic, a Public Benefit Corporation (PBC), secured the best overall grade (C+), distinguishing itself by leading on risk assessments, conducting the only human-participant bio-risk trials, and committing not to train on user data.

OpenAI placed second (C) and was the only company to publish its full whistleblowing policy, although this occurred only after media reports highlighted restrictive non-disparagement clauses within the policy.

Transparency and Risk Assessment Gaps Widen

Beyond existential preparedness, the report finds that the gap between capabilities and risk management is widening, a trend it attributes to the inadequacy of voluntary pledges.

  • Limited Dangerous Testing: Only three of the seven firms (Anthropic, OpenAI, and Google DeepMind) reported substantive testing for dangerous capabilities linked to large-scale risks, such as bio- or cyber-terrorism. One reviewer noted having “very low confidence that dangerous capabilities are being detected in time to prevent significant harm.”
  • Whistleblowing Policy: Public whistleblowing policies are considered best practice in safety-critical industries, yet only OpenAI has published its full policy. The Index recommends that Google DeepMind, xAI, and Meta follow suit to meet this minimal transparency standard.
  • Chinese Firm Context: Chinese firms Zhipu AI and DeepSeek received failing overall grades. However, the report noted the difference in regulatory context: China already regulates advanced AI development, so Chinese firms rely less on the self-governance norms the Index measures than their counterparts based in the US and UK.

The Call for Regulation

The findings have intensified the pressure for mandatory, legally binding safety standards.

Max Tegmark, MIT Professor and FLI President, stated plainly that the results confirm that “self-regulation simply isn’t working.” He emphasized that legally binding safety standards, similar to those required for medicine or food, are the only responsible path forward, arguing that continued opposition to regulation while pursuing superintelligence is “pretty crazy.”

Stuart Russell, a Professor of Computer Science at UC Berkeley and a member of the review panel, concluded that while some companies are making “token efforts,” “none are doing enough,” stressing the need for a fundamental rethink of how AI safety is approached as the ongoing AI race shows no signs of slowing down.

Disclaimer: The views, information, and opinions expressed in our articles and community discussions are those of the authors and participants and do not necessarily reflect the official policy or position of Blockrora. Any content provided by our platform is for informational purposes only and should not be considered as financial, legal, or investment advice. Blockrora encourages readers to conduct their own research and consult with professionals before making any investment decisions.
