OpenAI may ‘adjust’ its safeguards if rivals release ‘high-risk’ AI

OpenAI Updates AI Safety Framework Amid Competitive Pressures

In a revision to its Preparedness Framework, the internal system OpenAI uses to evaluate the safety of AI models and determine the safeguards needed during development and release, the company announced it might “adjust” its requirements. The change could be triggered if a competing AI lab releases a “high-risk” system without comparable protective measures.

Competitive Landscape and Safety Standards

The policy change reflects mounting competitive pressure on commercial AI developers to deploy their models quickly. OpenAI has been accused of lowering safety standards to enable faster releases and of failing to deliver timely reports detailing its safety testing.

Assurances of Continued Vigilance

In what could be seen as an attempt to preemptively address potential criticism, OpenAI asserts that these policy adjustments would not be undertaken trivially. The company maintains it would keep its safeguards at “a more protective level.”

“Should another leading AI developer release a high-risk system lacking comparable safeguards, we might revise our requirements,” OpenAI stated in a blog post published Tuesday afternoon. “However, we would first rigorously ascertain that the risk environment has genuinely shifted, publicly declare any adjustment we are making, evaluate that the change does not substantially amplify the overall risk of severe harm, and still uphold safeguards at a more protective standard.”

Increased Reliance on Automated Evaluations

The updated Preparedness Framework also clarifies OpenAI’s growing dependence on automated evaluations to accelerate product advancement. The company indicates that while human-led testing has not been entirely discarded, it has developed “an expanding suite of automated evaluations” purportedly capable of “keeping pace with a quicker release schedule.”

Concerns Over Accelerated Timelines

Conflicting accounts have emerged regarding the intensity of testing. According to the Financial Times, OpenAI reportedly allocated testers less than one week for safety checks for an upcoming major model – a significantly condensed timeframe compared to previous launches. Sources cited by the publication also suggest that numerous safety tests at OpenAI are now conducted on earlier iterations of models than those released publicly.

In public statements, OpenAI has refuted the idea that it is compromising on safety.

Risk Categorization and Thresholds

Further modifications to OpenAI’s framework address how the company classifies models based on risk, including models capable of obscuring their abilities, circumventing safeguards, preventing shutdown, and even self-replication. OpenAI indicates it will now concentrate on whether models meet one of two benchmarks: “high” capability or “critical” capability.

Defining Capability Thresholds

OpenAI defines “high” capability as a model that could “magnify existing pathways to severe harm.” “Critical” capability refers to models that “introduce unprecedented new pathways to severe harm,” according to the company.

OpenAI stated in its blog post: “Covered systems that achieve high capability must possess safeguards that sufficiently minimize the associated risk of severe harm before deployment. Systems reaching critical capability also necessitate safeguards that adequately minimize associated risks during development.”

First Update Since 2023

These revisions represent the first updates OpenAI has implemented to its Preparedness Framework since 2023.
