UK and EU Raise Concerns Over Grok AI Deepfake Content on X
The United Kingdom and European Union have raised formal concerns over the use of artificial intelligence tools on X, following reports that its in-platform chatbot, Grok, is being used to generate non-consensual sexualised images. The issue has triggered regulatory scrutiny across multiple jurisdictions and placed renewed pressure on the platform’s approach to AI safety and content moderation.
The controversy centres on Grok’s image-generation capabilities, which regulators say are being exploited to create deepfake images of women and girls without consent, raising questions about whether existing safeguards are sufficient under current digital safety laws.
UK Government and Ofcom Escalate Oversight
UK Technology Secretary Liz Kendall described the reported content as “absolutely appalling,” stating that platforms must act decisively to prevent the spread of degrading and abusive material. Britain’s media regulator, Ofcom, has confirmed it made “urgent contact” with X and its AI subsidiary, xAI, to assess what measures are in place to protect users.
Under the UK’s Online Safety Act, the creation and distribution of non-consensual intimate images, including AI-generated content, are classified as priority offences. Regulators have the authority to demand evidence of risk mitigation, enforce compliance measures, and impose penalties where platforms fail to act.
Sir Ed Davey, leader of the Liberal Democrats, has gone further, suggesting that if the allegations are substantiated, the National Crime Agency should consider launching a criminal investigation. He also raised the possibility of restricting access to the platform if safeguards are not strengthened.
European Union Signals Broader Enforcement
Concerns have extended beyond the UK. French authorities have reportedly referred the matter to prosecutors, characterising the content as “manifestly illegal.” European Commission officials have acknowledged awareness of Grok’s so-called “spicy mode,” which critics say enables the creation of sexualised imagery.
An EU spokesperson stated that technology companies must “put their own house in order,” reinforcing the bloc’s position that the era of lax enforcement around digital harms is over. The remarks align with the EU’s broader regulatory framework under the Digital Services Act, which places increased responsibility on platforms to prevent systemic risks linked to algorithmic and AI-driven tools.
X’s Public Response and User Reports
X has maintained that it removes illegal content and permanently suspends accounts involved in generating or sharing such material. In a statement from its Safety account, the company said users who prompt Grok to create illegal images face the same enforcement actions as those who upload prohibited content directly.
However, the platform’s public messaging has drawn criticism. In response to media queries from Reuters, X dismissed coverage with the comment “Legacy Media Lies,” while Elon Musk has appeared to downplay concerns in public posts.
User experiences paint a different picture. Dr. Daisy Dixon, who has been targeted by AI-generated sexualised images, told reporters that despite her repeatedly reporting the content, X responded that there was “no violation of X rules.” Dixon described the situation as dehumanising and said it has left her fearing for her personal safety.
What This Means for AI Governance
The Grok controversy highlights the growing gap between rapid AI deployment and regulatory expectations. As governments move to apply existing online safety and digital services laws to generative AI systems, platforms face increasing pressure to demonstrate not just reactive moderation, but proactive risk prevention.
While U.S. regulators have not yet commented publicly, the coordinated response from the UK and EU suggests that AI-generated deepfake abuse is becoming a central test case for how far governments are willing to go in holding platforms accountable for harms enabled by their own technologies.