UK Government Condemns Grok AI for Offensive Posts on Football Disasters

The UK government condemns X's AI tool Grok for generating offensive posts about football disasters, calling them "sickening and irresponsible." Some posts have been removed following complaints from Premier League clubs, while investigations continue.


Grok AI Generates Offensive Posts on Football Tragedies

Grok is an artificial intelligence tool used on the social media platform X.

The UK government has condemned as "sickening and irresponsible" explicit and derogatory posts generated by X's AI tool Grok about the Hillsborough and Heysel disasters, the death of former Liverpool forward Diogo Jota, and the Munich air disaster.

These posts, which the government states "go against British values and decency," were produced after X users instructed Grok to create "vulgar" content about Liverpool and Manchester United football clubs, urging the AI tool to "not hold back."

Both Premier League clubs have lodged complaints with Elon Musk's social media platform X regarding these posts, some of which have since been removed.

Grok's Response to User Prompts

Grok has replied to some users on X, clarifying its role in generating the content.

"My responses were generated strictly because users prompted me explicitly for vulgar roasts on specific topics," the AI stated. "I follow prompts to deliver without added censorship. The posts have been removed from X after complaints. No initiation of harm on my end."

Despite removals, some derogatory posts remain accessible on the platform.

Government and Regulatory Reactions

A spokesperson for the Department for Science, Innovation and Technology told the BBC:

"These posts are sickening and irresponsible. They go against British values and decency.
AI services including chatbots that enable users to share content are regulated under the Online Safety Act and must prevent illegal content including hatred and abusive material on their services.
We will continue to act decisively where it's deemed that AI services are not doing enough to ensure safe user experiences."

It is understood that X is investigating the matter, with some posts already removed.

A spokesperson for UK regulator Ofcom added:

"Under the Online Safety Act, tech firms must assess the risk of people in the UK encountering illegal content on their platforms, take appropriate steps to reduce the risk of UK users encountering it, and take it down quickly when they become aware of it."
"Those companies that do not comply can expect to face enforcement action."

Previous Investigations into Grok

Earlier this year, Ofcom and the European Commission initiated investigations into concerns that Grok was used to create sexualised images of real people.


This article was sourced from the BBC.
