Grok App Still Making Sexualized AI Deepfake Images For Paid Users


Photo: RICCARDO MILANI / AFP / Getty Images

Elon Musk's social media platform, X, has limited its AI image generation feature, Grok, following backlash over its ability to create sexualized images. The feature is now available only to paying customers and has been restricted from making sexualized deepfakes on X. However, the standalone Grok app and website still allow users to generate images that remove clothing from nonconsenting people.

The controversy erupted after reports surfaced that Grok was being used to create inappropriate images, including images of public figures such as Catherine, Princess of Wales. The BBC reported that Ofcom, the UK's communications regulator, made "urgent contact" with Musk's company, xAI, over concerns about Grok's ability to generate "sexualized images of children" and undress women. Despite warnings from X about generating illegal content, users continue to exploit the AI to create nonconsensual images.

The European Commission is also investigating the matter, with spokesperson Thomas Regnier describing the generated content as "illegal" and "disgusting." The UK's Internet Watch Foundation has received public reports related to Grok's images but has not yet found any crossing the legal threshold for child sexual abuse imagery.

Meanwhile, the UK government is working on legislation to ban nudification tools, with potential prison sentences and fines for those supplying such technology.

Elon Musk has stated that users who generate illegal content with Grok will "suffer the same consequences" as if they uploaded it themselves. Despite these measures, the Grok app remains largely unchanged outside of X, continuing to allow the creation of nonconsensual images.