A photo downloaded from Instagram. A $10 app. About 60 seconds. That’s all it takes for a teenage boy to generate a fake nude image of a classmate and share it with an entire school. A new investigation from WIRED and Indicator has mapped the full scale of what experts are calling a genuine global crisis — and the numbers are worse than almost anyone knew.
The joint investigation identified deepfake sexual abuse incidents at roughly 90 schools across 28 countries, affecting more than 600 documented student victims since 2023. Researchers caution that those figures capture only the cases that were publicly reported; the true total is almost certainly higher.
90 Schools, 28 Countries, One App Category
The pattern is consistent across continents. Teenage boys — typically in high school — pull ordinary photos from a classmate’s Instagram or Snapchat feed and run them through a class of AI tools known as “nudify” apps. These apps do exactly what the name suggests: they strip clothing from any person in a photo and generate a synthetic nude image that, to a casual viewer, looks real.
North America alone has seen nearly 30 reported cases since 2023, including one incident involving more than 60 alleged victims. Europe accounts for more than 20 cases. South America, Australia, and East Asia have each reported incidents as well. South Korea faced a particularly severe wave in 2024, when coordinated attacks targeted 500 schools through encrypted Telegram groups, a grim preview of what happens when this behavior scales without early intervention.
The Business Model Behind the Abuse
The apps fueling this aren’t obscure code shared in dark corners of the internet. Some “nudify” platforms operate openly, marketing themselves as entertainment tools, and earn their developers millions annually. They’ve been optimized for ease of use — low barrier to entry, fast results, accessible on a phone.
The images these tools produce are legally classified as child sexual abuse material when the subjects are minors. That classification carries serious weight, but it hasn’t translated into meaningful consequences for most perpetrators. WIRED’s investigation found that schools and law enforcement are frequently unprepared to respond, often handling incidents quietly or misclassifying them as standard disciplinary problems rather than sex crimes.
The Violation That Leaves No Evidence to Undo
What makes deepfake abuse distinct from other forms of image-based harassment is the absence of an original. Traditional non-consensual intimate image sharing — “revenge porn” — involves real photos that a victim once chose to take or share. There’s a traceable moment. Deepfakes require no such moment.
Victims never took a compromising photo. They never made a mistake. The violating image is entirely synthetic, yet the trauma and social fallout are entirely real. Students across documented cases have switched schools, dropped out, and entered therapy, all over an image depicting something that never happened. The fear that the images will surface again, years later, follows them out of adolescence.
A Federal Law With a Deadline Three Weeks Away
Congress didn’t ignore this. The bipartisan Take It Down Act was signed into law in May 2025, criminalizing the publication of non-consensual intimate images — including AI-generated deepfakes — and requiring covered platforms to remove flagged content within 48 hours of a victim’s request. The law’s criminal penalties took effect immediately. Its first conviction — an Ohio man who used AI to generate and distribute deepfake imagery — was secured in April 2026.
The platform compliance deadline, however — the date by which social media and app companies must have removal processes formally in place — is May 19, 2026. Three weeks from now.
What Schools Are Actually Equipped to Handle
The honest answer is: not much yet. School policies governing student conduct were written for a pre-AI era. Administrators dealing with a deepfake incident in 2026 are largely improvising — consulting lawyers, notifying parents, and hoping local law enforcement has heard of the Take It Down Act.
The WIRED investigation is notable partly because it’s the first systematic global review of these cases. Before this month, no one had counted. Schools were treating incidents as isolated aberrations. The data suggests they were wrong about that — and that the infrastructure to respond, from classrooms to courtrooms, is still catching up to the tools already in students’ pockets.