A New Form of Gendered Violence: Elon Musk’s Grok
Human Rights Research Center
Author: Irem Cakmak, LLM | March 10, 2026
Summary
Elon Musk’s AI chatbot Grok has sparked controversy for enabling the rapid creation of non-consensual sexualized deepfake images of women, including LGBTQ+ individuals and sometimes minors. These highly realistic images can be generated within seconds and spread widely online, contributing to a new form of digital sexual abuse. This form of AI-enabled violence can result in psychological distress, reputational damage, and the silencing of women in online spaces. In the long term, this dynamic may deepen the digital gender gap, limit women’s economic and political participation, and reinforce broader patterns of gender inequality.
[Image credit: Julian Cordero on Pexels]
The Case of Grok and the Rise of Non-Consensual Deepfakes
Elon Musk’s artificial intelligence (AI) chatbot Grok has been drawing attention for its use in creating sexualized visuals of women, including LGBTQ+ women and children. Approximately 6,700 intimate images are reportedly created per hour. The speed and ease of production make this a uniquely gendered threat: within seconds, users can comment on a target’s post on X to prompt Grok to produce images of women in revealing or no clothing, or to depict parts of their bodies covered in semen. These images appear publicly in replies, can spread rapidly to large audiences, are nearly impossible to fully remove, and are increasingly used to harass or silence victims. The creation and circulation of deepfake imagery reflects a targeted practice in which sexualized visuals are used to exert power and control over women, constituting a new form of digital sexual abuse.
When xAI launched Grok 2 and its image generator, Grok Imagine, in August 2025, the system included a “Spicy Mode” that produced hypersexualized images of celebrities in response to non-explicit prompts. According to Dr. Federica Fedorczyk, a Research Fellow at the Institute for Ethics in AI, the harms faced by celebrities and non-public figures alike are not isolated incidents: Grok was intentionally designed with fewer safeguards than other AI systems. Nana Nwachukwu, a PhD candidate at Trinity College Dublin’s AI Accountability Lab, similarly notes that other platforms, such as OpenAI’s ChatGPT and Google’s Gemini, include stronger protections and will not generate realistic depictions of identifiable individuals.
These images are often so realistic that they are nearly indistinguishable from genuine photographs, and they can cause psychological harm similar to that of the non-consensual sharing of real intimate content, including anxiety and depression. This harm may intensify as AI tools become increasingly capable of producing highly realistic sexual content. In addition to emotional distress, AI-assisted gender-based violence can damage reputations and disrupt personal relationships. Given the speed at which these non-consensual visuals can be created and disseminated, the images are highly likely to be detached from their original context and viewed by audiences who do not verify their source. As a result, even though the visuals are fabricated, they can still cause harm both offline and to individuals’ digital identities.
Elon Musk’s response to the controversies surrounding Grok shifted from dismissal to limited concession. As concerns escalated in early January 2026, particularly with the spread of AI-generated non-consensual images, Musk suggested that responsibility rested with users rather than the technology. Mounting regulatory pressure, including bans in Indonesia and Malaysia and investigations in the European Union and California, finally led xAI to introduce technical safeguards on January 14, 2026. However, these safeguards apply only to non-paying users, creating a system in which gender-based abuse remains accessible to those who can afford it. This pay-to-bypass approach offers limited protection and fails to address the structural nature of AI-enabled gendered violence.
Gendered Economic and Political Effects of AI-Assisted Digital Violence
Women are disproportionately impacted by AI-assisted abuse. The failure of platforms and states to respond adequately or in a timely manner produces a silencing effect, pushing many women to self-censor or withdraw from online spaces. One commonly observed protective strategy involves altering one’s digital identity by avoiding personal photographs and using non-identifying images instead. AI-assisted digital harm is especially acute in professions that require a visible online presence, such as journalism, politics, entertainment, modeling, and social media influencing, where disengagement can directly undermine employment and career advancement. Women journalists, politicians, and public figures targeted by such abuse have reported limiting their online activity, closing accounts, or stepping back from professional opportunities to avoid further targeting. When women cannot participate safely online, the exclusion extends beyond visibility and expression: it limits access to e-commerce, remote work, digital financial services, and entrepreneurial opportunities, ultimately reducing participation in key growth sectors.
Digital violence has also translated into a direct economic penalty. To protect their reputations, some women pay digital forensics and reputation management firms monthly fees ranging from $1,500 to $50,000. Effective protection typically requires legal remedies, preventive digital practices, platform accountability, and specialized support.
These online attacks represent a growing systemic threat to women’s participation in public life and democratic debate. They operate within a broader landscape of organized misogyny and increasing repression of women’s rights. Deepfakes can function as a political tool when used for disinformation and harassment, weaponizing sexualization and reputational threats to undermine women’s credibility, deter political engagement, and intimidate women into silence. For example, after Vice President Kamala Harris received the Democratic presidential nomination in 2024, sexually explicit AI-generated content depicting her circulated online to discredit her and mislead voters. A UN Women study of more than 6,400 respondents found that online violence disproportionately affects writers and public communicators working on human rights, with 24 percent reporting work-related attacks assisted by AI.
Deepfakes can also serve as a deliberate strategy to curtail women’s freedom of expression and roll back gains in gender equality and empowerment, even when they target non-public figures. According to a news report published on December 29, 2025, AI-generated videos depicting distressed, unmarried, childless middle-aged women expressing regret over their life choices circulated in China, reportedly purchased by parents to pressure younger women into marriage. Although these videos do not portray real individuals, they deploy fabricated female narratives to shame and coerce women, framing autonomy and remaining unmarried as social failure. The dynamic is particularly notable in China, which in 2024 recorded its lowest number of new marriages since 1980. Similarly, research shows that in India, deepfake pornification and sexually altered imagery are strategically used to suppress women’s voices online, particularly when they challenge entrenched cultural and religious norms.
The failure to protect women from AI-enabled violence, and the impunity of the perpetrators, also have long-term consequences for women’s participation in economic and social life. Laura Bates, activist and author of *The New Age of Sexism: How AI and Emerging Technologies Are Reinventing Misogyny*, argues that constant exposure to online abuse, combined with gender bias in emerging technologies, may suppress women’s willingness and ability to engage with new technological tools. Experiences such as deepfake sexual abuse, and the persistent risk of such harm, discourage women from engaging with emerging technologies, deepening the digital gender gap over time and contributing to future inequalities in employment opportunities, political participation, and the advancement of women’s rights.
Undermining Women’s Bodily Autonomy and Sexual Narratives
The failure to treat online attacks against women as a serious harm obscures the gendered power relations through which these violations operate. Not addressing the gendered dimension of AI-generated non-consensual sexual content reinforces longstanding structures of misogyny and inequality, particularly regarding women’s autonomy, sexuality, and consent. Deepfake abuse does not occur in a social vacuum; it reflects broader cultural patterns that normalize the surveillance, discipline, and punishment of women’s bodies when they assert visibility or voice.
Attacks facilitated through tools such as Grok risk reinforcing rape-culture narratives. The fabrication and circulation of sexualized images without consent constitutes a form of power exercised over women’s bodies and identities. By severing women’s likenesses from their consent, AI-assisted abuse undermines bodily autonomy and agency, enabling others to redefine women’s bodies and sexual narratives without accountability. Where perpetrators act with impunity and institutional responses remain inadequate, responsibility is subtly displaced onto women themselves, reframing harm as a consequence of visibility rather than a violation of consent.
The targeting of non-consenting women becomes particularly revealing in digital environments where consensual adult sexual content is readily accessible, including platforms such as X. It points to a coercive transformation of sexuality into a tool of domination, in which sexualized imagery is weaponized precisely because it overrides consent and autonomy. Sexual harm, in this sense, is produced not through sexuality itself but through its use as a mechanism of control.
These harms are further compounded by structural biases embedded within AI systems. Prior research shows that generative models frequently encode gendered and racialized assumptions, for example, by lightening skin tones to signal femininity or producing non-white women with masculinized features. As deepfakes become more widespread, such biases shape whose bodies are more readily manipulated, sexualized, or discredited, reinforcing existing hierarchies of gender and race within digital environments.
Need for Accountability
The use of emerging technologies to harm women is not an isolated or accidental development but instead reflects systemic gender inequality. The scrutiny of Grok highlights how generative AI can be weaponized to facilitate sexualized abuse, silence women, and undermine their participation in public, economic, and political life.
Addressing these harms requires a dual and coordinated response. On one hand, meaningful deterrence and accountability mechanisms must hold those who create and distribute non-consensual AI-generated sexual content responsible. On the other, platforms must build effective safeguards into their systems by design rather than reacting only after harm occurs.
Glossary
AI-Enabled Gender-Based Violence: Gender-based harm that is facilitated by AI technologies, disproportionately affecting women and gender minorities.
Bodily Autonomy: The right of a person to control their own body and to make decisions about it without external pressure or coercion.
Deepfake: Synthetic media created using AI, including images, videos, or audio, that realistically imitates a real person’s appearance or voice, often without their knowledge or consent.
Digital Identity: The collection of personal data, images, representations, and online behaviors through which an individual is perceived and recognized in digital spaces.
Digital Sexual Abuse: Any form of sexual abuse, including harassment, exploitation, and coercion, carried out through digital technologies.
Gendered Power Relations: Social structures in which power, control, and authority are distributed unequally based on gender, shaping whose bodies, voices, and autonomy are valued or controlled.
Reputational Harm: Damage to an individual’s professional or social standing caused by false, misleading, or abusive content.
