
A group of 60 U.K. lawmakers from multiple parties has accused Google DeepMind of violating international commitments to develop artificial intelligence safely, in a letter shared exclusively with TIME ahead of its public release. The letter, published on August 29 by the activist group PauseAI U.K., says that Google’s March launch of Gemini 2.5 Pro without accompanying details on safety testing “establishes a hazardous precedent.” Its signatories, who include digital rights advocate Baroness Beeban Kidron and former Defence Secretary Des Browne, urge Google to clarify its commitment.

Leading figures in AI, including Google DeepMind’s CEO, have long warned that artificial intelligence could pose catastrophic risks to public safety and security, for instance by helping would-be bio-terrorists engineer a new pathogen or hackers take down critical infrastructure. To address those risks, Google, OpenAI, and other companies signed the Frontier AI Safety Commitments at an international AI summit co-hosted by the U.K. and South Korean governments in May 2024. The signatories pledged to “publicly disclose” their systems’ capabilities and risk assessments, and to detail whether and how external parties, such as governments, were involved in testing. In the absence of binding regulation, the public and lawmakers have largely relied on information disclosed under these voluntary commitments to understand AI’s evolving risks.

Nevertheless, Google released Gemini 2.5 Pro on March 25, claiming it beat rival AI systems on industry benchmarks by “significant margins,” yet did not publish comprehensive safety test results for more than a month. The letter argues that this omission not only represents a “failure to uphold” Google’s international safety commitments but also undermines the fragile norms that encourage safe AI development. “Should prominent corporations such as Google regard these commitments as discretionary, we face the peril of a hazardous pursuit to implement increasingly potent AI without adequate protections,” Browne said in a statement accompanying the letter.

“We are honoring our public promises, including the Seoul Frontier AI Safety Commitments,” a Google DeepMind spokesperson told TIME in an emailed statement. “Our models are subjected to stringent safety reviews during their development, conducted by entities like UK AISI and various other third-party evaluators – and Gemini 2.5 adhered to this protocol,” the spokesperson added.

The public letter urges Google to set a clear timeline for publishing safety evaluation reports for future releases. Google first published the Gemini 2.5 Pro model card, the standard document for reporting safety test results, 22 days after the model’s release, and that eight-page document contained only a brief section on safety testing. Not until April 28, more than a month after the model became publicly available, was the model card updated with a 17-page document detailing specific evaluations, which concluded that Gemini 2.5 Pro showed “substantial,” though not yet dangerous, improvements in areas such as hacking. The updated document also noted the use of “third-party external testers” but did not identify them or say whether the U.K. AI Security Institute was among them, a point the letter cites as a further breach of Google’s commitment.

Google DeepMind did not initially respond to questions about whether Gemini 2.5 Pro had been shared with governments for safety testing, but a spokesperson later told TIME that the company had shared the model with the U.K. AI Security Institute as well as a “varied collection of outside specialists,” including Apollo Research, Dreadnode, and Vaultis. Google said, however, that it shared the model with the U.K. AI Security Institute only after Gemini 2.5 Pro’s public release on March 25.

On April 3, shortly after Gemini 2.5 Pro’s debut, Tulsee Doshi, Google’s senior director and product lead for Gemini, told TechCrunch that the missing safety report reflected the model’s status as an “experimental” release, while confirming that safety tests had already been performed. She said the purpose of such experimental releases is to put the model out in a limited way, gather user feedback, and improve it ahead of a full production launch, at which point the company would publish a model card describing the safety tests already completed. However, just days earlier, Google had made the model available to all of its free users, announcing in a post on X that “we want to get our most intelligent model into more people’s hands asap.”

The open letter asserts that “designating a publicly available model as ‘experimental’ does not exempt Google from its safety responsibilities,” and asks Google to define “deployment” in a way that matches how the term is commonly understood. “Corporations bear a significant public duty to thoroughly test new technologies and refrain from using the public for experimental purposes,” said Steven Croft, the Bishop of Oxford and a signatory of the letter. “Consider a car manufacturer releasing a vehicle with the statement, ‘we invite the public to experiment and provide feedback when accidents occur, or when they collide with pedestrians, or when the braking system fails.’”

Croft questioned whether Google genuinely lacks the capacity to provide safety reports at the time of release, reducing the core issue to a question of resource allocation: “What proportion of [Google’s] substantial AI investment is directed towards ensuring public safety and building trust, versus the amount allocated to immense computational power?”

Google is not the only major AI company that appears to have fallen short of its safety pledges. xAI has yet to publish a safety report for Grok 4, an AI model released in July. And unlike GPT-5 and other recent launches, OpenAI’s Deep Research tool, released in February, shipped without a safety report on launch day; the company says it conducted “rigorous safety testing,” but did not publish the report until 22 days later.

Joseph Miller, the director of PauseAI U.K., said his organization is concerned about other potential breaches as well, explaining that the focus on Google reflects its local presence. DeepMind, the AI lab Google acquired in 2014, remains headquartered in London. During his 2024 election campaign, the U.K.’s current Secretary of State for Science, Innovation and Technology said he would “compel” major AI firms to share safety test results. In February, however, the U.K.’s plans for AI regulation were reported to have been delayed as the government sought to align more closely with the Trump administration’s deregulatory approach. Miller says it is time to replace corporate pledges with “genuine regulation,” adding that “voluntary commitments are simply ineffective.”