
UK Orders X to Act as Grok AI Triggers Outrage Over Child Deepfake Abuse


The UK government has demanded urgent action from Elon Musk’s social media platform, X, following reports that its artificial intelligence tool, Grok, has been used to generate fake sexually explicit images of children.

Technology Secretary Liz Kendall, in a statement on Tuesday, described the situation as “absolutely appalling and unacceptable in a decent society,” insisting that X must address the issue without delay.

“X needs to deal with this urgently,” Kendall said, adding that the UK media regulator, Ofcom, has her “full backing to take any enforcement action it deems necessary.”

Grok, developed by Musk-owned xAI, has come under growing international criticism for allegedly allowing users to create sexualised deepfake images of women and minors through its so-called “spicy mode” setting. The controversy has intensified concerns about the misuse of generative AI tools and the adequacy of safety safeguards on major platforms.

On Monday, the European Commission announced it was “very seriously” reviewing complaints related to Grok, while Ofcom confirmed it is examining both X and xAI over the matter.

Under the UK’s Online Safety Act, which came into force in July, social media platforms, websites and video-sharing services are required to implement strict age-verification measures, including facial recognition technology or credit card checks, to prevent children from accessing harmful content.

The law also makes it illegal to create or share child sexual abuse material or non-consensual intimate images, including AI-generated sexual deepfakes. Companies that fail to comply face fines of up to 10 percent of their global turnover or £18 million ($24 million), whichever is higher.

The UK government has further announced plans to ban so-called “nudification” tools that use AI to digitally remove clothing from images of people.

Responding to the backlash, Grok said on Friday that it had identified flaws in the system, describing them as “lapses in safeguards,” and confirmed it was working “urgently” to fix the problems.

The incident has renewed pressure on tech companies to strengthen AI safety controls amid rising global concerns over the misuse of artificial intelligence.

Mike Ojo
