
Dark Web AI Tool ‘DIG’ Emerges, Capable of Generating CSAM and Weapons Instructions

Posted on 2025-12-19

A completely uncensored AI assistant is now helping criminals with some of their worst activities. It’s being used to generate illegal child abuse material, write malicious code, and even provide guides for making bombs.

This tool, called DIG AI, is openly marketed to cybercriminals and is already active on hidden parts of the internet. Security experts describe it as an alarming escalation in the weaponization of AI.

What’s DIG AI and How Does It Work?

Use of DIG AI has grown rapidly since it was first observed in late September of this year. Because it is hosted on the Tor network, users can remain anonymous, and no account is required, which lowers the barrier to entry and makes it easy to stay hidden.

Its creator, who goes by the pseudonym “Pitch,” claims the tool is built on ChatGPT Turbo, a commercial large language model whose safety features the operators have stripped away. Promotions for DIG AI appear in dark web markets that sell drugs and stolen payment data, confirming that its intended audience is criminals.

Researchers tested the tool with prompts tied to banned activities. The AI returned detailed guides for making explosives and illegal drugs, generated phishing messages and scam content at scale (a key tactic in the modern cybercriminal’s playbook for identity theft and fraud), and produced malicious scripts for attacking websites.

This automation lets criminals run complex cyber operations faster and with less technical skill. The researchers warn that it could create a new criminal market in which AI models are sold as a service, much as botnets are rented out today.

The Alarming Rise of Malicious AI

DIG AI is part of a larger, troubling pattern. Mentions of malicious AI tools on cybercrime forums more than tripled over the past year. Other infamous tools include FraudGPT and WormGPT.

These tools are used to create phishing emails and malware and to offer advice on committing payment fraud. Mainstream AI services such as ChatGPT and Google Gemini enforce strict rules and block requests involving hate speech or illegal activity.

Dark web services like DIG AI actively bypass all these controls. If regulated AI tools lock out bad actors, they simply move to unregulated spaces. The dark web provides the perfect hideout.

A Focus on AI-Generated Child Abuse Material

Perhaps the most serious finding involves child sexual abuse material (CSAM). DIG AI can help create hyper-realistic AI-generated CSAM. It can produce synthetic images or videos from text descriptions. It can also manipulate innocent photos of real children into explicit content.

Cybersecurity firm Resecurity worked with law enforcement as part of its investigation. They collected evidence of this activity. Even if creators label content “synthetic,” authorities still treat it as illegal material.

Laws are trying to catch up. The EU, UK, and Australia have all moved to criminalize fully synthetic AI-generated CSAM, closing a previous legal gap. Last year, U.S. authorities convicted a child psychiatrist for creating similar AI-generated material. This legal push aligns with intensified global law enforcement actions, as seen in a recent global sting that dismantled a dark web network sharing child exploitation material.

What’s Next for the Cyber Landscape?

Having observed tools like DIG AI in action, Resecurity has issued a stark warning for 2026, describing the coming year as “ominous.” Major global events could be prime targets.

The future may include criminal data centers dedicated to malicious AI, mirroring the “bulletproof hosting” that once served spam operations. The central question is becoming urgent: can laws and safeguards keep up with criminal innovation on the dark web?

The gap is clear: regulations like the EU’s AI Act target mainstream platforms but often do not reach anonymous services on Tor. As these AI tools grow more powerful and easier to replicate, closing that gap is the critical challenge for everyone.
