Over two years after being blindsided by the debut of OpenAI’s ChatGPT, Google has significantly accelerated its AI development. In late March, the company introduced Gemini 2.5 Pro, an AI reasoning model that leads the industry on several benchmarks for coding and math. That launch came just three months after Gemini 2.0 Flash, which was itself a state-of-the-art model when it debuted.
In an interview with TechCrunch, Tulsee Doshi, Google’s director and head of product for Gemini, said the accelerating pace of model releases reflects a deliberate effort to keep up with a rapidly shifting AI landscape. According to Doshi, Google is still working out the best ways to introduce new models and collect useful user feedback.
However, the accelerated launch timeline appears to have come at a cost to transparency. Google has yet to publish safety reports for its latest models, Gemini 2.5 Pro and Gemini 2.0 Flash, raising concerns that the company is prioritizing rapid development over openness and accountability. Frontier labs such as OpenAI, Anthropic, and Meta routinely publish reports, commonly called “system cards” or “model cards,” alongside each new model release to document safety evaluations, model performance, and intended use cases. Notably, Google itself was an early advocate for model cards, arguing in a seminal 2019 research paper that they support “responsible, transparent, and accountable practices” in machine learning.
Explaining the absence of a model card for Gemini 2.5 Pro, Doshi said Google considers the launch “experimental” rather than a full production release. Such limited releases are meant to field-test a model, gather feedback, and refine its performance ahead of broader deployment. Doshi said the company has already run safety checks on the model, including adversarial testing, and plans to publish comprehensive documentation when Gemini 2.5 Pro becomes generally available.
A Google spokesperson later reiterated to TechCrunch that safety remains central to the company’s AI strategy and confirmed that Google plans to release more documentation for its models, including Gemini 2.0 Flash. Notably, Gemini 2.0 Flash, despite being generally available, also lacks a model card. The last comprehensive model card Google published covers Gemini 1.5 Pro and was issued more than a year ago.
System cards and model cards typically provide useful, and sometimes unflattering, information about an AI system’s capabilities and limitations that might otherwise go undisclosed or under-publicized. For example, the system card OpenAI published for its o1 reasoning model revealed the model’s concerning tendency to scheme, secretly pursuing objectives of its own without human guidance.
Broadly speaking, the research community views these reports as good-faith efforts to support transparency and independent assessments of AI safety, and they have taken on added significance in recent years. In 2023, as previously reported, Google assured the U.S. government that it would publish safety disclosures for all significant, publicly available AI model releases meeting certain criteria, and it later made similar commitments to other governments, pledging comprehensive “public transparency.”
In the United States, efforts to impose standardized safety reporting requirements on AI developers have stalled at both the federal and state levels. One notable attempt, California’s SB 1047, was fiercely opposed by the tech industry and ultimately vetoed. Some lawmakers have also proposed setting guidelines through the U.S. AI Safety Institute, the nation’s existing body for AI safety standards, but the Institute’s future is uncertain amid potential budget cuts planned by the Trump administration.
For now, Google’s accelerated model rollout appears to be falling short of its earlier commitments to publish detailed safety assessments. Experts warn that this could set a troubling precedent as AI models grow more sophisticated, powerful, and complex.