Elon Musk’s AI chatbot, Grok, is at the center of a major international scandal following revelations that the tool is being used to generate nonconsensual, sexualized deepfakes of women and minors on the social media platform X.
The controversy escalated during the first days of 2026 as reports surfaced of users exploiting the bot’s image-generation capabilities to “digitally undress” real people. The trend has moved AI-powered sexual exploitation from niche corners of the internet into the mainstream, sparking immediate legal and diplomatic repercussions.
Okay News gathered that the backlash has been particularly severe in Europe and Asia. French government ministers have formally reported X to prosecutors and regulators, labeling the “sexual and sexist” AI-generated content as manifestly illegal. Simultaneously, India’s IT Ministry has issued a stern demand for answers from the platform’s local unit, citing a failure to prevent the circulation of obscene and sexually explicit material.
Despite these warnings, xAI, the company behind Grok, has largely dismissed the findings as “Legacy Media Lies,” while Elon Musk himself appeared to mock the situation by posting laughter emojis in response to AI-altered images circulating on his platform.
The human cost of the technology’s misuse was highlighted by the experience of Julie Yukari, a musician based in Rio de Janeiro. After she posted a New Year’s Eve photo, Yukari discovered that users were prompting Grok to digitally remove her clothing and depict her in revealing outfits. When she protested the violation publicly, she was targeted by a wave of copycat requests.
Experts from AI watchdog groups, including The Midas Project, said they had warned xAI as early as last year that its technology was effectively a “nudification tool” waiting to be weaponized. They argue that the current crisis was entirely predictable, and avoidable had the company implemented stricter safety protocols.
As the barrier to creating nonconsensual deepfakes continues to drop, the incident has renewed urgent debates regarding platform accountability and the ethical boundaries of generative AI.
While “nudifier” programs have existed for years, X’s integration of such capabilities—allowing users to alter photographs with simple text prompts—has significantly increased the scale of the abuse. Legal experts and child safety advocates are now calling for immediate intervention to prevent further harm to women and children globally.