In the wake of mounting pressure from advocacy groups and governmental bodies, Elon Musk's platform X has announced it is finally implementing measures to combat the proliferation of deepfake pornography generated by its AI tool, Grok. However, initial tests following the announcement suggest that these changes may not be a comprehensive solution, leaving many to question their true effectiveness.
The Genesis of the Grok Controversy
The controversy ignited in January when X introduced a feature allowing users to prompt Grok to edit images and videos posted on the platform without the original poster's consent. Reports from AI authentication firm Copyleaks and testimonies from victims highlighted a disturbing trend: users quickly began generating explicit or intimate images of real individuals, predominantly women, and in some distressing instances, child sexual abuse material was also reportedly created. While Grok could previously generate such imagery, the newfound ease of access significantly amplified the problem, effectively opening the floodgates for misuse.
Initially, the trend seemed to focus on AI-generated images of celebrities in revealing attire. However, the scope rapidly expanded to include manipulated images of ordinary people placed in sexualized scenarios, such as appearing pregnant or scantily clad. While Musk's initial response involved Grok generating an image of himself in a bikini, the situation escalated significantly once regulatory bodies began to take notice.
One victim, a small business owner in Ohio, discovered deepfake images of herself in sexually explicit poses circulating online after a photo from her professional LinkedIn profile was used without her knowledge or permission. The emotional toll, she reported, was profound, impacting her personal life and business reputation.
Governmental and Platform Responses
Governments worldwide are now scrutinizing X's role in this crisis. The UK launched investigations into potential violations of laws concerning nonconsensual intimate imagery and child sexual abuse material. Malaysia and Indonesia took a more drastic step by blocking access to Grok within their borders. In the United States, California's Attorney General, Rob Bonta, urged xAI to take immediate action, stating, "I urge xAI to take immediate action to ensure this goes no further" (Bonta, 2024).
In response to this intense scrutiny, X initially restricted the ability to tag Grok for image edits to subscribers only. However, the Grok app, its standalone website, and the in-platform chatbot remained accessible to all users, allowing the problematic content generation to persist. The Telegraph reported that X began blocking requests for images of women in sexualized scenarios, yet similar requests involving men were reportedly still permitted. Further testing by writers from The Verge in both the US and UK revealed that these blocked requests could still be successfully made via Grok's direct website or app.
Elon Musk has publicly denied the presence of child sexual abuse material on the platform, though many users have contested this, sharing what they claim is evidence to the contrary. To ostensibly address the issue, X announced on Wednesday that it would block all requests to the Grok account for images of real people in revealing clothing, irrespective of gender or subscription status. However, the devil appears to be in the details.
A college student in London found herself targeted when a friend's photo was used to generate nonconsensual intimate imagery, leaving her feeling unsafe and isolated. The incident underscores the pervasive nature of the threat, even when it originates from seemingly benign interactions.
The statement specified that these guardrails would apply to users tagging Grok on X and within the "Grok in X" chatbot. Crucially, the standalone Grok website and app were not explicitly mentioned as being subject to the same restrictions. Furthermore, the implemented blocks are described as "geoblocked," meaning they will only be enforced in jurisdictions where such content is illegal. This approach leaves users in regions with less stringent laws potentially exposed.
Persistent Vulnerabilities and Future Concerns
Despite X's latest pronouncements, testing by The Verge indicated that users can still generate revealing deepfakes through the Grok app, which was not explicitly covered by the recent update. The author's own testing confirmed this: both the Grok app and website produced full-body deepfaked images of the author in revealing clothing not present in the original photo. The in-X Grok chatbot also allowed the generation of such images, in some cases altering the subject's pose without any explicit prompt to do so.
This ongoing ability to generate problematic content raises questions about whether X's oversight is an unintentional omission or a deliberate choice to address only the most visible issues. Given X's stated policy of having "zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content" (X Policy Statement, 2024), one would hope for more robust and comprehensive solutions.
A tech journalist attempting to replicate the issue found that while X's direct tagging system showed improved blocking, accessing the Grok API directly bypassed all content filters, enabling the creation of prohibited imagery with minimal effort. This highlights a loophole that could be exploited by malicious actors.
The author's location in New York State, which has laws against explicit nonconsensual deepfakes but may not be covered by the geoblock, further complicates any assessment of the implemented measures. X responded to NBC News queries with a dismissive "Legacy Media Lies" (NBC News, 2024), and clarification on the exact scope and enforcement of the new policies remains elusive.
Meanwhile, a coalition of US Senators has formally requested that Apple and Google remove X's app from their respective stores, citing clear violations of app store policies. The senators argue that the platform's continued failure to adequately address the deepfake pornography issue warrants such action until X's policy violations are rectified (Senators' Letter, 2024).