
AI Didn’t Invent Sexual Abuse — It Just Made It Easier

Every major tech leap has been exploited for harm. AI is no exception—and the scale is unprecedented, writes RAINN’s CTO.

By Bill Bondurant, Chief Technology Officer, RAINN

In my more than 28 years in technology, I have watched the same story repeat itself with numbing consistency. Every major technological breakthrough is followed, almost immediately, by someone figuring out how to weaponize it. Too often, that harm takes the form of sexual violence and exploitation.

We saw it in the early days of AOL chat rooms, where people who intended harm operated faster than understaffed and undertrained moderators could respond. We saw it again when file-sharing platforms like The Pirate Bay enabled the mass distribution of illegal content with little friction. Later came the surge of nonconsensual intimate images (NCII), paired with cryptocurrency-enabled money laundering that now helps perpetrators evade accountability. Internet culture even has a shorthand for this inevitability: Rule 34, the maxim that if something exists, there is pornography of it.

That pattern is not a failure of imagination. It is a failure of responsibility.

Today, we are crossing into a new and far more dangerous frontier: AI chatbots and autonomous agents. Tools like Grok, particularly when configured to push boundaries through features marketed as “spicy,” demonstrate how quickly guardrails erode when engagement is prioritized over safety.

These systems can be manipulated to groom victims, generate NCII (including AI-generated deepfakes), or automate harassment at a scale that was previously impossible. Unlike earlier technologies, they do not sleep. They do not fatigue. They do not hesitate. They can impersonate trusted individuals, fabricate convincing personas, and operate continuously without meaningful human oversight. What once required sustained effort by an individual abuser can now be industrialized and disseminated at scale.

This is not speculative science fiction. It is tech-enabled sexual abuse—and it’s accelerating faster than our ability to stop it.

Moore’s Law tells us that computing power roughly doubles every two years. Our legal and regulatory systems do not come close to keeping pace. We are still struggling to contain spam, phishing, and malware while simultaneously confronting AI-generated deepfakes, encrypted platforms that shield perpetrators, and chatbots that can be jailbroken to facilitate abuse. New tools emerge monthly, and history tells us that within months—not years—someone will find a way to turn them toward harm.

The benefits of innovation, including AI, are real. Technology connects survivors to support, helps law enforcement process evidence, and enables prevention at scale. But innovation without accountability is not progress; it is negligence.

At RAINN, we see the human cost of these failures every day. Survivors cannot wait another decade for policymakers to catch up after harm has already occurred. We need safeguards that anticipate abuse, not frameworks that apologize for it later. That means regulation informed by technical expertise and grounded in the reality of the crimes taking place. It means expecting technology companies to design with safety as a core requirement, not an optional patch.

The question is no longer whether AI will be misused. History has already answered that. The real question is whether we will continue to act surprised—or finally act responsibly. 

RAINN’s work is grounded in prevention, justice, and healing. That mission demands that we confront emerging technology honestly and urgently, before the damage becomes the next inevitable headline. AI is a part of our future. Using it to make sexual violence more common doesn’t need to be. 


Get Informed

Understanding these perspectives can give you a broader view of the challenges we face.

The “Tool Neutrality” Argument

Some technologists argue that AI is fundamentally neutral and that misuse reflects the actions of individual bad actors, not platform design. This perspective emphasizes enforcement over regulation.

Why it falls short: Decades of platform history show that design choices shape behavior. Features that reduce friction or increase anonymity predictably increase the risk of abuse. 

How this applies to Grok: Grok occupies an unusual position as both creator and distributor: no longer a passive publisher, but an active participant in the production and spread of harm. Because Grok hosts, generates, and distributes content, we cannot separate creation from accountability.

Market Self-Correction

Others claim public backlash and market pressure will force companies to self-regulate without government intervention.

Why it’s risky: Survivors bear the cost of harm long before markets correct themselves—if they ever do. Safety delayed is safety denied. 


William S. Bondurant II is RAINN’s Chief Technology Officer with nearly 30 years of experience supporting mission-critical infrastructure for public and private institutions, including Amazon Web Services, Meta, AOL, and Time Warner Cable. A U.S. Marine Corps combat veteran, he has led secure cloud, defense, AI foundations, and large-scale network initiatives worldwide.


If you or someone you know has experienced sexual assault, you are not alone. RAINN’s National Sexual Assault Hotline offers free, confidential, 24/7 support in English and Spanish.

Call 800.656.HOPE (4673)

Chat at RAINN.org/hotline

Text “HOPE” to 64673


Last updated: January 8, 2026