Elon Musk’s AI venture, xAI, has failed to meet its own deadline for publishing a finalized AI safety framework, according to a blog post from the watchdog group The Midas Project. The framework, first released in draft form at the AI Action Summit in Paris in February 2025, was meant to outline how the company would approach safety as it develops more advanced AI systems.
The eight-page draft laid out xAI’s guiding principles on safety benchmarking and model deployment, but it was vague on key points, including which models it applied to and how the company would identify and mitigate risks. The Midas Project noted that the document applied only to unspecified future models “not currently in development” and contained no concrete plan of action. xAI had committed to releasing a revised version of the framework within three months, by May 10, but that deadline passed without an update or comment from the company.
The missed deadline adds to growing concerns about xAI’s approach to AI safety. Although Musk has long warned publicly about the dangers of uncontrolled AI, his company has shown limited adherence to standard risk management practices. A recent study by the nonprofit SaferAI ranked xAI poorly against its industry peers, citing “very weak” risk management practices. xAI’s chatbot, Grok, has also drawn criticism for problematic behavior, including producing inappropriate content and using considerably cruder language than more restrained competitors such as ChatGPT and Google’s Gemini.
While xAI’s lag in delivering its safety framework has raised eyebrows, it is not the only AI company facing scrutiny. Rivals such as OpenAI and Google have also been criticized for rushing model releases and for publishing safety documentation late or not at all. Experts warn that as AI systems grow more powerful, deferring safety evaluations and failing to commit to clear frameworks invites serious risks.