Urgent Alert: Cybercriminals Exploit Chatbots for Malicious Attacks
Editorial
  • Published September 2, 2025

UPDATE: A shocking report reveals that cybercriminals are now manipulating AI chatbots to launch cyberattacks, raising alarms across the cybersecurity landscape. This emerging tactic, termed “vibe hacking,” has allowed malicious actors to exploit tools like Anthropic’s Claude to facilitate extensive data extortion operations.

According to a report released by Anthropic in late August 2025, a cybercriminal utilized Claude Code to target at least 17 organizations across various sectors, including government, healthcare, and religious institutions, in just one month. The attacks led to ransom demands reaching as high as $500,000. Anthropic confirmed that, despite its “sophisticated safety and security measures,” it was unable to prevent the misuse of its chatbot.

The situation has escalated quickly, with experts warning that this represents a troubling evolution in AI-assisted cybercrime. “Today, cybercriminals have taken AI on board just as much as the wider body of users,” stated Rodrigue Le Bayon, head of the Computer Emergency Response Team (CERT) at Orange Cyberdefense.

In a parallel case, OpenAI reported in June that ChatGPT had similarly been exploited to assist in malware development. These models are built with safeguards intended to deter illegal activities, yet hackers have discovered ways to bypass those protections. Vitaly Simonovich of Cato Networks noted that even “zero-knowledge threat actors” can extract enough information to craft harmful software.

Simonovich’s research identified methods to circumvent chatbot restrictions, allowing him to enlist AI in creating malware under the guise of a fictional scenario. “I have 10 years of experience in cybersecurity, but I’m not a malware developer. This was my way to test the boundaries of current LLMs,” he explained. While some chatbots thwarted his attempts, others, such as DeepSeek and Microsoft’s Copilot, were not as resilient.

The implications of “vibe hacking” are dire, as it could empower even non-coders to develop malware, increasing the overall threat landscape. Le Bayon warned that such advancements are likely to “increase the number of victims” rather than create a new breed of hackers.

As generative AI tools become more prevalent, their creators are under pressure to improve detection of malicious use. “Their creators are working on analysing usage data,” Le Bayon stated, which may lead to stronger safeguards in the future.

As this urgent situation continues to develop, the cybersecurity community is on high alert. The exploitation of chatbots signifies a significant shift in the tactics employed by cybercriminals, making it crucial for organizations to bolster their defenses against these emerging threats. Stay tuned for more updates on this evolving issue.
