UK Prime Minister Keir Starmer has warned that Elon Musk’s social media platform X could lose its right to self-regulate in the United Kingdom if it fails to control harmful content produced by its AI chatbot Grok, amid growing concern over non-consensual sexualised deepfake images.

Speaking to Labour MPs on Monday, Starmer condemned Grok’s output and made clear that government intervention would be swift if the platform does not act.

“If X cannot control Grok, we will, and we will do it fast,” Starmer said.
“If you profit from harm and abuse, you lose the right to self-regulate.”

His comments follow widespread reports that Grok has been used to generate sexualised images of women and children without consent.

Ofcom Launches Formal Investigation Into X

The warning came shortly after Ofcom, the UK’s media regulator, launched a formal investigation into X, citing what it described as “deeply concerning reports” about the use of Grok’s image-generation tools.

Ofcom said it was examining whether the platform failed to remove illegal content promptly and whether it had taken adequate steps to prevent UK users from accessing harmful material.

The regulator is also assessing reports of sexualised images of children, which could constitute child sexual abuse material under UK law.

New Criminal Offence Comes Into Force This Week

Technology Secretary Liz Kendall confirmed that long-delayed provisions in the Data (Use and Access) Act 2025 will be brought into force this week.

The legislation, passed in June 2025, makes it a criminal offence to create or request non-consensual intimate images. While sharing such images was already illegal, the creation offence had not previously been brought into force.

Kendall described the material as “vile” and said it was being used as a weapon of abuse.

“These are not harmless images,” she said.
“They are weapons of abuse, disproportionately aimed at women and girls.”

She added that the offence would be treated as a priority under the Online Safety Act, placing increased responsibility on platforms to prevent the creation and spread of such content.

Government Targets ‘Nudification Apps’

Kendall also signalled further action against companies supplying tools designed to generate non-consensual intimate images, including so-called nudification apps.

“This approach targets the problem at its source,” she said, adding that platforms would be expected to comply fully with Ofcom’s guidance on protecting women and children online.

Heavy Fines and Possible UK Ban

If X is found to have breached its legal duties, Ofcom could impose fines of up to 10 percent of the company’s global revenue or £18 million, whichever is higher.

In the most serious cases, the regulator has the power to seek a court order to block access to X in the UK.

Kendall urged Ofcom to act quickly and publish a clear timeline for its investigation.

Free Speech Debate and International Fallout

The controversy has sparked global action. Malaysia and Indonesia temporarily blocked Grok’s image-generation feature over the weekend due to abuse concerns. The European Commission has also required X to preserve relevant documents until the end of 2026.

Elon Musk has accused the UK government of using the issue as “an excuse for censorship,” a claim ministers have rejected.

X has said that users who generate illegal content using Grok face the same consequences as those who upload such material directly, including account suspensions. The company recently restricted some of Grok’s image-editing tools to paying subscribers, though critics argue this does not go far enough.

Opposition figures have voiced concern about the potential blocking of X, citing its role in public discourse, while supporting tougher action against abusive AI tools.

A Test Case for AI Regulation

As pressure grows on technology companies to control generative AI systems, the UK’s response represents one of the strongest interventions yet in regulating AI-generated content.

The outcome of Ofcom’s investigation is expected to set an important precedent for platform accountability in the AI era.