US artificial intelligence (AI) company Anthropic has revealed that its flagship chatbot, Claude, has been weaponized by hackers to carry out cyberattacks and large-scale data theft.
The company says criminals used its technology not only to write malicious code but also to conduct extortion campaigns and even help North Korean operatives land remote jobs at major US firms.
Anthropic says it has disrupted the threat actors, reported incidents to authorities, and strengthened its detection tools.
AI in Cybercrime: From Code to Extortion
One of the most alarming cases involved “vibe hacking”—a multi-stage cyberattack against at least 17 organizations, including government agencies.
According to Anthropic, Claude was used:
To write hacking scripts and intrusion tools
To decide which data to steal
To draft ransom notes tailored to victims
To suggest ransom amounts
The firm described this as hackers using AI “to an unprecedented degree”, letting Claude make both tactical and strategic decisions normally reserved for humans.
Shrinking Exploitation Time
Experts warn that AI is speeding up cybercrime.
“The time required to exploit cybersecurity vulnerabilities is shrinking rapidly,” said Alina Timofeeva, adviser on cybercrime and AI.
“Detection and mitigation must shift to being proactive and preventative, not reactive after harm is done.”
This highlights a growing challenge: AI tools designed to boost productivity can also accelerate attacks when misused.
North Korean Job Scams with AI
Beyond hacking, Anthropic says North Korean operatives used Claude to pose as skilled remote workers and secure jobs at US Fortune 500 tech companies.
The AI was used to:
Generate convincing job applications
Translate communications
Write technical code during employment
By doing so, North Korea allegedly gained access to sensitive corporate systems, while companies risked violating sanctions by unknowingly paying North Korean workers.
According to Geoff White, co-host of The Lazarus Heist podcast:
“Agentic AI helps them leap over cultural and technical barriers, enabling them to get hired.”
The Rise of Agentic AI
The misuse of Claude highlights risks from agentic AI—AI systems capable of making autonomous decisions and executing tasks without human oversight.
While touted as the next big step in AI, these cases show how criminals can turn the technology into a force multiplier for cybercrime.
Still, experts stress that AI hasn’t created entirely new crime waves yet. Traditional methods—like phishing emails and exploiting software bugs—remain dominant.
Protecting AI Systems
Cybersecurity specialists emphasize the need for stricter protections around AI.
“Organizations need to understand that AI is a repository of confidential information that requires protection, just like any other form of storage system,” said Nivedita Murthy, senior security consultant at Black Duck.
This means companies adopting AI must treat these tools as sensitive assets, ensuring they are safeguarded against misuse.
Key Takeaway
Anthropic’s warning is a wake-up call: AI is now a weapon in the hands of hackers. From ransomware extortion to job fraud, criminals are exploiting its speed, intelligence, and accessibility.
As AI adoption grows, so does the responsibility to secure, monitor, and regulate these tools before they are misused on a wider scale.