
What is tech-enabled sexual abuse (TESA)?

TESA includes any sexually abusive or exploitative behavior carried out using technology tools or online platforms, including artificial intelligence (AI).  

Top Stories

RAINN Statement on Use of xAI’s Grok To Produce Child Sexual Abuse Material

Washington, D.C. (Wednesday, January 7) – xAI’s Grok is being used to produce non-consensual sexually explicit images of women and children. This is now abundantly clear. What is equally clear is that this is the result of xAI’s repeated failures to implement basic safeguards against the creation of AI-generated non-consensual images, particularly of children… 

“[The perpetrator] did it out of retaliation because I wouldn’t go on a date with him. He took photos from my instagram and used AI to turn them into deepfake images. Then he posted them across multiple porn sites with my face and name. I had no idea this was going on for over 7 months. My face, name, and these fake photos/narratives gained hundreds of thousands of views each. Not only did I feel violated [and] disgusted, but I was worried for my future … wondering if potential employers would find this stuff online and I wouldn’t get a job.

“I never want other girls to go through what I’ve gone through. There needs to be more in place to protect victims and get us justice.”

– Mallory Jones, survivor

AI Didn’t Invent Sexual Abuse — It Just Made It Easier

Every major tech leap has been exploited for harm. AI is no exception—and the scale is unprecedented, writes RAINN’s CTO.


“I believe Twitter (even before Elon Musk took over) has always been a breeding ground for predators. Especially when it came to people such as myself being sexually harassed on the site, reports were hardly ever taken seriously. It’s the same with every social media platform, including YouTube. Abusers can make violent and graphic rape and death threats against a person, yet nothing happens. Even when multiple people report their accounts.”

– Sarah DeArmond, survivor

Grok’s “Spicy” AI Video Setting Will Lead to Sexual Abuse

RAINN raises alarm over Grok’s AI video feature, warning it fuels image-based sexual abuse and violates the Take It Down Act.

Explaining “Spicy” AI

What does “spicy” mean when AI companies use the term?

In books, movies, and other media, “spicy” signals the presence of sexually explicit content.

When AI companies release “spicy” features, however, what they are actually shipping is reduced safety guardrails. These settings are often marketed as more entertaining, more edgy, or more “real.” But the same loosened boundaries that allow provocative jokes or explicit language can also open the door to serious harm.

Why do “spicy” AI features raise red flags?

AI systems are not people. They do not understand consent or harm; they respect those limits only when they are explicitly designed to do so.

When guardrails are relaxed, these systems may be more likely to:

  • Generate sexual or degrading content.
  • Respond to prompts that blur boundaries around consent.
  • Engage in role-play or dialogue that mirrors grooming behaviors.
  • Produce or describe nonconsensual intimate scenarios.
  • Amplify harassment instead of interrupting it.

Even when a platform claims to prohibit abuse, weaker safeguards make it easier for users to push systems into dangerous territory—intentionally or not. 

Why does this matter for sexual violence prevention?

Sexual violence does not always start with physical force. It can begin with boundary testing, normalization, and manipulation.

AI systems with fewer restrictions can replicate these dynamics at scale. Unlike a human abuser, a chatbot can:

  • Interact continuously, day and night.
  • Mimic trust, empathy, or authority.
  • Reach thousands of people simultaneously.
  • Escalate behavior without fatigue or accountability.

That combination makes “spicy” AI features especially concerning for minors, survivors, and anyone targeted for harassment or exploitation.

At RAINN, we view these risks through a prevention lens. Technology that normalizes harmful behavior—even unintentionally—can shape attitudes, expectations, and actions in the real world.

What should responsible AI do instead?

AI innovation does not require sacrificing safety.

Responsible systems should:

  • Enforce clear boundaries around sexual content and consent.
  • Interrupt grooming-like interactions rather than continue them.
  • Default to harm prevention, not engagement maximization.
  • Include transparent safeguards that cannot be easily bypassed.
  • Be designed with survivor-informed expertise from the start.

Tech-enabled sexual abuse should not be treated as an acceptable cost of innovation.

Why is RAINN tracking this issue?

History shows that once harmful uses become widespread, reversing damage is slow and painful. Survivors pay that price first.

That is why RAINN is raising concerns early—before “spicy” becomes just another euphemism we regret normalizing.

“Online abuse harms minors just as much as in-person abuse, and the law needs to treat it that way. We need stronger protections, faster responses, and clearer consequences so predators can’t hide behind screens.

“I spent too long thinking what happened to me didn’t count. I want other survivors to feel seen sooner than I did and to know that speaking up can protect others, too.”

– Ari, survivor

 

RAINN in the Media


What It’s Like to Get Undressed by Grok

“It’s not that abuse is new; it’s not that sexual violence is new. It’s that this is a new tool, and it allows for proliferation at a scale that I don’t think we’ve seen before, and that I’m not sure we’re prepared to navigate as a society.” – Megan Cutter, RAINN chief of victim services

Read on Rolling Stone

Teen Targeted by Deepfake Nudes Hopes New Training Course Will Help Future Victims

When Elliston Berry, then 14 years old, discovered a classmate had made and shared a deepfake nude image of her, she didn’t know where to turn for information on what had happened or how to get the photos removed from social media…

Read on CNN

X Just Paywalled Grok’s Deepfakes. They’re Still Everywhere.

“In any other context, when somebody turns a blind eye to harm that they are actively contributing to, they’re held responsible. Tech companies should not be held to any different standard.” – Sandi Johnson, RAINN senior legislative policy counsel

Read on Vox

Grok Chatbot Allowed Users To Create Digitally Altered Photos of Minors in “Minimal Clothing”

“I talk with survivors of tech-enabled sexual abuse every day, and what every one of them will tell you is that it feels like it will never end. Every notification ding on your phone and message asking ‘Is this you?’ perpetuates the abuse.” – Stefan Turkheimer, RAINN vice president of public policy

Read on CBS

AI Deepfakes on X Raise a Major Policy Question

“Before, X was a passive publisher — everything that was being generated was being generated by the users. With the addition of Grok … they are co-conspirators in this.” – Sandi Johnson, RAINN senior legislative policy counsel

Read on Politico

Essential Learning

Get the Facts About Tech-Enabled Sexual Abuse


Tech-enabled sexual abuse is a rapidly increasing form of sexual violence that includes deepfakes, revenge porn, sextortion, CSAM, and other image-based harms.

How To Report Tech-Enabled Sexual Abuse


Find recommended steps to take, legal options, and 24/7 support.

Staying Safer Online: Learn About Tech-Enabled Sexual Abuse


Learn how to stay safer when using technology — and what to do if you’re targeted or abused.

Responding to AI-Facilitated Abuse: Tips for Schools


From deepfakes to sextortion, artificial intelligence (AI) is putting kids at risk. Explore how schools can prevent and respond to tech-enabled sexual abuse.

“The police said it was not a criminal report; said it would be labelled instead as an incident report. They told me that I consented. Never followed up for more information.

“Justice for me would just be documentation or a legal record of what I’ve [gone] through. I didn’t even get that.

“Police dismissed it, never followed up or asked for more information. I wish they believed me. I wish I had a hero.”

– Ari, survivor

For Legislators

Image-Based Sexual Abuse Laws: Combat Nonconsensual AI Deepfakes

Get RAINN’s recommendations for state legislators. Create and pass laws addressing nonconsensual, explicit AI-generated and AI-manipulated “deepfakes.”


“We want a change. It’s not a request; WE NEED IT. I’m only a kid. Imagine this was your kid. What would you do?”

– Invisibleteen, survivor

For Workplaces

Prevention & Response Strategies for the Technology Sector

Partner with RAINN Consulting Group to prevent sexual misconduct in the tech workplace and across digital platforms. Together, we can ensure safety for employees, users, and survivors alike.

If you or someone you know has experienced sexual assault, you are not alone. RAINN’s National Sexual Assault Hotline offers free, confidential, 24/7 support in English and Spanish.

Call 800.656.HOPE (4673)

Chat at RAINN.org/hotline

Text “HOPE” to 64673

Get Help Now 

Join Our Community


Show Support

Whether you’re a survivor yourself or know someone who is, your compassionate support can profoundly impact someone’s healing journey. Here’s how.

Take Action

Show up, speak out, and step in. Learn more about how you can add your voice, fight for justice, and help RAINN move its mission forward for survivors.