On Thursday, weeks after launching its most powerful AI model yet, Gemini 2.5 Pro, Google published a technical report showing the results of its internal safety evaluations. However, the report is light on detail, experts say, making it difficult to determine which risks the model might pose.
Technical reports provide useful, and at times unflattering, information that companies don't always advertise widely about their AI. By and large, the AI community sees these reports as good-faith efforts to support independent research and safety evaluations.
Google takes a different approach to safety reporting than some of its AI rivals, publishing technical reports only once it considers a model to have graduated from the “experimental” stage. The company also doesn't include findings from all of its “dangerous capability” evaluations in these write-ups; it reserves those for a separate audit.
Several experts TechCrunch spoke with were still disappointed by the sparsity of the Gemini 2.5 Pro report, however, which they noted doesn't mention Google's Frontier Safety Framework (FSF). Google introduced the FSF last year in what it described as an effort to identify future AI capabilities that could cause “severe harm.”
“This report is very sparse, contains minimal information, and came out weeks after the model was already made available to the public,” Peter Wildeford, co-founder of the Institute for AI Policy and Strategy, told TechCrunch. “It's impossible to verify if Google is living up to its public commitments, and thus impossible to assess the safety and security of their models.”
Thomas Woodside, co-founder of the Secure AI Project, said that while he's glad Google released a report for Gemini 2.5 Pro, he's not convinced of the company's commitment to delivering timely supplemental safety evaluations. Woodside pointed out that the last time Google published the results of dangerous capability tests was in June 2024, for a model announced in February of that year.
Not inspiring much confidence, Google hasn't made available a report for Gemini 2.5 Flash, a smaller, more efficient model the company announced last week. A spokesperson told TechCrunch that a report for Flash is “coming soon.”
“I hope this is a promise from Google to start publishing more frequent updates,” Woodside told TechCrunch. “Those updates should include the results of evaluations for models that haven't been publicly deployed yet, since those models could also pose serious risks.”
Google may have been one of the first AI labs to propose standardized reports for models, but it's not the only one that's been accused of underdelivering on transparency lately. Meta released a similarly skimpy safety evaluation of its new Llama 4 open models, and OpenAI opted not to publish any report for its GPT-4.1 series.
Hanging over Google's head are assurances the tech giant made to regulators that it would maintain a high standard of AI safety testing and reporting. Two years ago, Google told the U.S. government it would publish safety reports for all “significant” public AI models “within scope.” The company followed up that promise with similar commitments to other countries, pledging to “provide public transparency” around AI products.
Kevin Bankston, a senior adviser on AI governance at the Center for Democracy and Technology, called the trend of sporadic and vague reports a “race to the bottom” on AI safety.
“Combined with reports that competing labs like OpenAI have shaved their safety testing time before release from months to days, this meager documentation for Google's top AI model tells a troubling story of a race to the bottom on AI safety and transparency as companies rush their models to market,” he told TechCrunch.
Google has said in statements that, while not detailed in its technical reports, it conducts safety testing and “adversarial red teaming” for models ahead of release.