Over the past few weeks, people on X, the Elon Musk–owned social media platform, have used the app’s chatbot, Grok, to generate sexual images of women and girls without their consent.
With a few simple instructions (“put her in a very transparent mini-bikini,” for example), Grok will digitally strip anyone down to a bikini.
A report by the nonprofit AI Forensics found that 2% of 20,000 randomly selected images generated by Grok over the holidays depicted a person who appeared to be 18 or younger, including 30 young or very young women or girls in bikinis or transparent clothing. Other images depict women and girls with black eyes, covered in liquid, and looking afraid.
Despite international backlash and regulatory probes in Europe, India and Malaysia, Musk initially mocked the situation by sharing an array of Grok-generated images, including one depicting himself in a bikini, alongside laughing-crying emojis.
By Jan. 3, Musk commented on a separate post: “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.” (We’ll explain what constitutes illegal content later.)
“What matters legally and morally is that a real person’s body and identity were used without consent to create a sexualized lie.”
– Rebecca A. Delfino, an associate professor of law who teaches generative AI and legal practice at Loyola Marymount University
Deepfake nudes are nothing new, but experts say it’s getting easier to create and publish them.
Deepfake nudes are nothing new. For years, apps like “DeepNude” have given people access to deepfake technology that allows them to digitally insert women into porn or strip them naked without their knowledge. (Of course, men have been victims of sexualized deepfakes as well, but the research indicates that men are more likely than women to perpetrate image-based abuse.)
Still, Grok’s use this week is different and arguably more alarming, said Carrie Goldberg, a victims’ rights attorney in New York City.
“The Grok story is unique because it’s the first time there’s a combining of the deepfake technology, Grok, with an immediate publishing platform, X,” she said. “The rapid publishing capability enables the deepfakes to spread at scale.”
“It should be underscored how bizarre it is that the world’s richest man not only owns the companies that create and publish deepfakes, but he’s also actively promoting and goading users on X to de-clothe innocent people,” Goldberg added. “Elon Musk feels entitled to strip people of their power, dignity, and clothes.”
What’s been happening the past few weeks is unfortunate, but none of it is a surprise to Riana Pfefferkorn, a policy fellow at the Stanford Institute for Human-Centered AI. Her take: This problem will get worse before it gets better.
“Every tech service that allows user-generated content will inevitably be misused to upload, store and share CSAM (child sex abuse material), as CSAM bad actors are very persistent,” she said.
The upshot is that AI companies must learn how best to implement robust safeguards against illegal imagery. Some companies may have a stronger culture of “CSAM/nonconsensual deepfake porn isn’t OK.”
Others will try to have it both ways, setting up loose guardrails for safety while also attempting to make money from permissible NSFW imagery, Pfefferkorn said.
“Sadly, while I don’t have any direct insight, x.AI doesn’t seem to have that strong of a corporate culture in that respect, going off Elon Musk’s dismissive response to the current scandal as well as past reporting from several months ago,” she said.
Victims of this kind of exploitation often feel powerless and unsure of what they can do to stop the images from proliferating. Women who are vocal online worry about the same thing happening to them.
Omny Miranda Martone, the founder of the Washington-based Sexual Violence Prevention Association, had deepfake nude videos and images of themselves posted online a few years back. As an advocate for legislation preventing digital sexual violence, Martone wasn’t exactly surprised to be a target.
“They also sent the deepfakes to my organization, in an attempt to silence me. I’ve seen this same tactic used on Twitter with Grok over the last week,” they said.
Martone said they’ve seen multiple instances of a woman sharing her opinion and men who disagree with her using Grok to create explicit images of her.
“In some cases, they’re using these images to threaten the women with in-person sexual violence,” they added.
One of the most persistent beliefs about deepfakes depicting nudity is that because an image is “fake,” the harm is somehow less real. That assumption is wrong, said Rebecca A. Delfino, an associate professor of law who teaches generative AI and legal practice at Loyola Marymount University.
“These images can cause serious and lasting damage to a person’s reputation, safety, and mental well-being,” she said. “What matters legally and morally is that a real person’s body and identity were used without consent to create a sexualized lie.”
While protections remain uneven, untested and often come too late for victims, Delfino said the law is slowly beginning to recognize that reality.
“Stories like what’s happening with Grok matter because public attention often drives the legal and regulatory responses that victims currently lack,” she said. “The law is finally starting to treat AI-generated nude images the same way it treats other forms of nonconsensual sexual exploitation.”
What can be done if an AI-generated nude is posted of you?
Preserve the evidence.
If you identify deepfake content of yourself, screen-capture it and report it immediately.
“The most practical advice is to act quickly and methodically,” Delfino said. “Preserve evidence (screenshots, URLs, timestamps) before content is altered or removed. Report the image to platforms clearly as nonconsensual sexual content and continue to follow up.”
If you’re under 18 in a nude or nudified image, platforms should take that very seriously, Pfefferkorn said. Sexually explicit imagery of children under 18 is illegal to create or share, and platforms are required to promptly remove such imagery when they learn of it and report it to the National Center for Missing & Exploited Children (NCMEC).
“Don’t be afraid to report a nude image to NCMEC that you took of yourself when you were underage: there is also a federal law saying you can’t be legally punished if you report it,” Pfefferkorn added.
And if a minor is involved, law enforcement should be contacted immediately.
“When possible, consulting with a lawyer early can help victims navigate both takedown efforts and potential civil remedies, even where the law is still evolving,” Delfino said.
Know that there’s growing legal recourse.
The Take It Down Act, signed into law last May, is the first federal law that limits the use of AI in ways that can harm people. (Ironically enough, Grok gave someone insight about the Take It Down Act when asked about the legal consequences of digitally undressing someone.)
This legislation did two things, Martone said. First, it made it a criminal offense to knowingly publish AI-generated explicit videos and images without the consent of the person depicted. Second, it required social media sites, search engines, and other digital platforms to create “report and remove procedures” by May of 2026, still several months away.
“In other words, all digital platforms must have a way for users to report that someone has posted an explicit video or image of them, whether it was AI-generated or not,” they said. “The platform must remove reported images within 48 hours. If they fail to do so, they face penalties from the Federal Trade Commission (FTC).”
Pfefferkorn noted that the law allows the Department of Justice to prosecute only those who publish or threaten to publish NCII (nonconsensual intimate images) of victims; it doesn’t allow victims to sue.
As it’s written, the Take It Down Act only covers explicit images and videos, which must include “the uncovered genitals, pubic area, anus, or post-pubescent female nipple of an identifiable individual; or the display or transfer of bodily sexual fluids.”
“A lot of the images Grok is creating right now are suggestive, and certainly harmful, but not explicit,” Martone said. “Thus, the case couldn’t be pursued in criminal court, nor would it be covered by the new report-and-remove procedure that will be created in May.”
There are also many state laws that the nonprofit consumer advocacy group Public Citizen tracks here.
Remember that you’re not alone.
If this has happened to you, know it’s not your fault and you are not alone, Martone said.
“I recommend immediately contacting a loved one. Ask them to come over or talk with you on the phone as you go through the process of finding the images and choosing how to take action,” they said.
Once you have a loved one helping you, reach out to your local rape crisis center, a victims’ rights attorney in your state, or an advocacy group to help you figure out your options and navigate these processes safely, Martone said.
“Because there are so many variations in state laws, a local expert will ensure you are receiving guidance that is accurate and applicable to your situation,” they said.
Need help? Visit RAINN’s National Sexual Assault Online Hotline or the National Sexual Violence Resource Center’s website.