UK Lawmakers Accuse Google of Breaking AI Safety Pledge with Gemini 2.5 Pro Release

ai-safety · ai-governance · transparency-failure · regulatory-violation · tech-accountability · frontier-ai-safety
2025-03-25 · 1 min read

Sixty U.K. lawmakers have accused Google DeepMind of violating international AI safety commitments by releasing Gemini 2.5 Pro without comprehensive public safety disclosures. The allegations center on Google's failure to 'publicly report' system capabilities and risk assessments, as pledged at a February 2024 international AI summit co-hosted by the U.K. and South Korea.

Key concerns include:

  • Releasing the model without detailed safety information
  • Failing to promptly clarify whether external bodies took part in testing
  • Treating safety commitments as optional rather than mandatory

Notable signatories, including Baroness Beeban Kidron and former Defence Secretary Des Browne, warned that such practices could set a dangerous precedent in AI development.

The accusations highlight a broader industry trend in which major AI companies appear to be retreating from comprehensive safety reporting, potentially undermining international AI governance efforts.