Imagine relying on an artificial intelligence tool to automate everyday tasks and make your work easier... and then discovering that the same tool can be turned against you. That's what came to light this week with Claude. What many saw as a leap in productivity has now been shown to carry a real threat: functionality designed to simplify processes can be turned into a silent weapon, capable of executing ransomware without the user ever noticing. It's not a technical vulnerability, but rather a breach of trust. And that detail completely changes the rules of the game.

What we know: a legitimate “Skill” that ends up being malware

The functionality in question, known as Claude Skills, lets users add specific modules that automate everything from creating GIFs to complex workflows. But researchers at Cato Networks found that, with minimal modification, a seemingly innocent Skill can download and execute malicious code. In their experiment, they used a modified version of a public “GIF creator” Skill: once the user approved its use, the Skill downloaded an external script that deployed the MedusaLocker ransomware. The result: encrypted files, the same impact as a traditional ransomware attack, but fully automated.

The consequences: when AI ceases to be an ally

  • Cybercrime automation: This case shows that a large infrastructure or a team of hackers is no longer needed; a modified Skill and a single installation approval are enough to set an attack in motion.
  • Massive risk for organizations: A single employee can trigger a ransomware attack that reaches an entire company's systems, network and storage.
  • A broken illusion of security: Believing that “we only use trusted AI” is no longer a sufficient guarantee. Public Skills, the single-consent trust model and the opacity of bundled code can hide real threats.

The false sense of control: the real risk behind automation

One of the most dangerous problems revealed by this case is the false sense of control that companies believe they have over their systems. Most people assume that if a tool comes from a trusted vendor, then its operation is safe by default. But intelligent automation introduces a new type of risk: invisible risk.

Modern AIs make decisions, execute processes and download files without the need for constant human intervention. This means that a configuration error, a modified Skill, or a hasty approval can trigger actions that are not detected until the damage is done.

Automation facilitates repetitive tasks, yes, but it also reduces the friction that previously acted as a safety barrier: manual supervision, process control and constant validation. In the wrong hands, or even in inexperienced ones, these capabilities can accelerate attacks, expand their reach and multiply their consequences.


The message is clear: automating isn't bad, but automating without control is a direct invitation to disaster.
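
In practice, “control” can start with something very simple: forcing any automated action that touches files or the network to be logged and explicitly approved before it runs. The sketch below is a minimal illustration of that idea, not a feature of Claude or any specific AI tool; the function name, the log file and the commented example action are all hypothetical.

```python
# Minimal sketch: re-introduce "friction" by logging every automated action
# and requiring an explicit human approval before it executes.
# Names (guarded_action, ai_actions.log) are hypothetical, for illustration only.
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_actions.log", level=logging.INFO)

def guarded_action(description: str, action, *args, **kwargs):
    """Log the requested action and ask a human operator before running it."""
    timestamp = datetime.now(timezone.utc).isoformat()
    logging.info("%s | requested: %s", timestamp, description)

    answer = input(f"Automated action requested: {description}. Allow? [y/N] ")
    if answer.strip().lower() != "y":
        logging.info("%s | denied: %s", timestamp, description)
        raise PermissionError(f"Action denied by operator: {description}")

    logging.info("%s | approved: %s", timestamp, description)
    return action(*args, **kwargs)

# Hypothetical usage: an AI-driven workflow wants to download a file.
# import urllib.request
# guarded_action("download https://example.com/report.pdf",
#                urllib.request.urlretrieve,
#                "https://example.com/report.pdf", "report.pdf")
```

A wrapper like this does not stop a determined attacker on its own, but it restores two things this incident shows are missing: a human decision point and a trace of what the automation actually did.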

What we recommend from Apolo Cybersecurity

The good news: the problem can be mitigated, provided we act wisely and in advance.

  • Review which AI Skills or plugins are used in the company: Treat each one as a piece of external software, with audits, permission control and clear contractual conditions.
  • Establish strict approval policies: No AI module should run without going through a security review and validation process (see the sketch after this list).
  • Educate the team: Explain that AI can be as dangerous as a hacking tool if used without controls. Technical training is not enough; awareness is also needed.
  • Implement controls for code execution: Apply sandboxing, monitoring and traceability to any automated activity generated by AI.
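
As a starting point for that kind of review, the sketch below shows one way to flag Skill packages that need manual inspection before approval. It assumes Skills are distributed as a folder of scripts and a manifest; the directory path and the pattern list are illustrative assumptions, not a description of how any specific vendor packages its plugins.

```python
# Minimal pre-approval review sketch: walk a Skill package and flag files
# that bundle risky capabilities (network downloads, code execution, crypto).
# The folder layout and pattern list are assumptions; tune them to your environment.
import re
from pathlib import Path

# Patterns that should trigger a manual security review before approval.
RISKY_PATTERNS = {
    "network download": re.compile(r"requests\.get|urllib|curl |wget "),
    "code execution":   re.compile(r"subprocess|os\.system|eval\(|exec\("),
    "encryption calls": re.compile(r"Fernet|AES|cryptography"),
}

def review_skill(skill_dir: str) -> list[tuple[str, str]]:
    """Return (file, reason) pairs for files that contain risky capabilities."""
    findings = []
    for path in Path(skill_dir).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for label, pattern in RISKY_PATTERNS.items():
            if pattern.search(text):
                findings.append((str(path), label))
    return findings

if __name__ == "__main__":
    # Hypothetical package path, for illustration only.
    for file, reason in review_skill("./skills/gif-creator"):
        print(f"REVIEW NEEDED: {file} -> {reason}")
```

A script like this does not replace a manual audit, but it makes the single-consent moment less blind: whoever approves the Skill at least knows which capabilities it bundles.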

Are you sure your AI isn't a threat?

At Apolo Cybersecurity, we help you assess the real risks associated with AI tools: we analyze your processes, audit your integrations and protect your environment against hidden attacks.

