Tennessee Minors Sue Musk’s xAI, Alleging Grok Generated Sexually Explicit Images of Them
xAI and Grok logos are seen in an illustration photo on Feb. 16, 2025. (Dado Ruvic/Reuters)
By Kimberly Hayek
March 18, 2026 | Updated: March 18, 2026

Three people from Tennessee, including two minors, sued Elon Musk’s xAI on March 16, alleging that it knowingly designed its Grok image generator to let people create sexually explicit content by using real photos of others.

Grok, the AI chatbot developed by xAI, allows X users to generate text and images or edit existing images by tagging the Grok account in a post on X or by opening a chat window via a dedicated icon.

The lawsuit, filed in federal court in San Jose, California, seeks class-action status for people in the United States who were “reasonably identifiable” in sexualized images or videos allegedly generated by Grok based on real images of themselves.

The lawsuit alleges that xAI failed to install safeguards to prevent its systems from generating sexual content involving minors. All three plaintiffs were minors at the time the images were allegedly generated.

Plaintiffs allege their real images were digitally altered into explicit content and then shared on online platforms, causing emotional distress and creating a public nuisance.

They are seeking unspecified damages, legal fees, and an injunction requiring xAI to halt the alleged practices.

“xAI ... saw a business opportunity: an opportunity to profit off the sexual predation of real people, including children,” the complaint states.

xAI, based in the San Francisco Bay Area, developed Grok as a generative AI tool marketed as a free-speech AI model and creative tool.

The Epoch Times reached out to xAI for comment but did not receive a response by publication time.

The company’s Safety account on X said in January that it works with governments to remove such content.

“We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary,” the X Safety account wrote in a Jan. 4 post on X. “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”

Later in January, the Safety account said it was imposing limits on Grok’s image editing capabilities to prevent the Grok account from editing images of real people to make them appear in revealing clothing, such as bikinis.

The Safety unit said that image creation and editing via the Grok account is now only available for paid subscribers. The unit said this was to add an “extra layer of protection by helping to ensure that individuals who attempt to abuse the Grok account to violate the law or our policies can be held accountable.”

“We now geoblock the ability of all users to generate images of real people in bikinis, underwear, and similar attire via the Grok account and in Grok in X in those jurisdictions where it’s illegal,” the statement said.

Musk said in a Jan. 14 post on X that he was not aware of “any naked underage images generated by Grok.”

“Literally zero. Obviously, Grok does not spontaneously generate images, it does so only according to user requests,” Musk said.

“When asked to generate images, it will refuse to produce anything illegal, as the operating principle for Grok is to obey the laws of any given country or state. There may be times when adversarial hacking of Grok prompts does something unexpected. If that happens, we fix the bug immediately.”

Victoria Friedman and Reuters contributed to this report.

Kimberly Hayek is a reporter for The Epoch Times. She covers California news and has worked as an editor and reported on the scene at the U.S.-Mexico border during the 2018 migrant caravan crisis.