Hackers Turning AI Against Us - Anthropic Gives Alarming Warning

Cybercriminals have weaponised artificial intelligence in unprecedented ways, according to a chilling new report from AI firm Anthropic. The company behind the popular Claude chatbot has revealed that hackers exploited its technology to launch sophisticated attacks across 17 organisations.

AI-Powered "Vibe Hacking" Takes Hold

Anthropic discovered that a single hacker used Claude Code to automate what it describes as an "unprecedented" cybercrime spree, targeting healthcare, government, and emergency services. The company has coined the term "vibe hacking" to describe how criminals use natural language prompts to generate malicious code with little technical expertise of their own.

The attacks resulted in ransom demands ranging from £60,000 to £400,000, with the AI helping criminals organise stolen data, analyse financial documents, and craft personalised extortion emails.

North Korean Employment Scams Flourish

In a separate case, North Korean operatives exploited Claude to fraudulently secure remote positions at Fortune 500 technology companies, using the AI to create convincing professional backgrounds and complete technical assessments. This scheme helps fund the regime's weapons programmes whilst circumventing international sanctions.

The New Face of Cybercrime

Jacob Klein, Anthropic's head of threat intelligence, warns that "agentic AI has been weaponised," with models now actively performing sophisticated cyberattacks rather than simply advising on them.

The company has since banned the accounts involved and enhanced its detection systems, but experts warn that this represents a fundamental shift in cybercrime capabilities. As AI lowers the barrier to entry for sophisticated operations, businesses must adapt their cybersecurity strategies to counter these evolving AI-enhanced threats.

Anthropic continues monitoring for misuse whilst sharing findings with authorities and safety teams worldwide.