Google Faces Criticism Over Gemini 2.5 Pro Safety Report

Google is facing criticism from AI researchers and policy experts over its recently published technical report on Gemini 2.5 Pro, the company's most powerful AI model to date. Released weeks after the model's public launch, the report outlines results from internal safety evaluations but omits key information, making it difficult to assess the model's potential risks. Several experts say the report lacks depth and transparency, casting doubt on Google's commitment to AI safety.

Unlike some AI companies that release comprehensive reports—including evaluations of dangerous capabilities—Google publishes such data separately and only after a model has moved beyond the experimental phase. Critics argue that this fragmented approach hinders meaningful safety assessments and independent scrutiny. Notably, the Gemini 2.5 Pro report does not reference Google’s own Frontier Safety Framework, a system the company introduced to detect and manage AI features that could lead to serious harm.

Peter Wildeford, co-founder of the Institute for AI Policy and Strategy, said the report was sparse and delayed, making it nearly impossible to evaluate whether Google is meeting its public safety commitments. Thomas Woodside of the Secure AI Project echoed this concern, noting the lack of timely updates and pointing out that Google’s last published dangerous capability tests date back to June 2024.

The absence of a safety report for Google’s recently announced Gemini 2.5 Flash model adds to the concerns. Although a company spokesperson said a report is “coming soon,” critics are urging more consistent and detailed documentation—especially for models that have yet to be publicly released but could pose significant risks.

Google, which once led the way in proposing standardized AI reporting, now faces growing scrutiny alongside other major AI labs. Meta’s recent safety report for its Llama 4 model was also criticized for lacking depth, and OpenAI has not released a report for its GPT-4.1 series. Kevin Bankston of the Center for Democracy and Technology called the trend a “race to the bottom” in AI safety practices, warning that tech firms are prioritizing speed over responsibility as they push their AI models to market.
