
OpenAI has blocked ChatGPT accounts linked to cyber threat groups such as APT28 (Russia) and APT31 (China). The move not only responds to activity detected as part of disinformation operations; it also sends a clear message: misuse of generative artificial intelligence will be actively combated.
Organizations must understand that AI is not only an efficiency tool, but also a new risk vector. This news has direct implications for companies in all sectors:
Reports from Microsoft Threat Intelligence and OpenAI reveal that:
Although these uses were considered "basic", they represent a dangerous gateway if action is not taken in time.
According to OpenAI, they are actively working to “share threat intelligence, key indicators and best practices with other industry players and government bodies.”
However, this alliance does not guarantee full immunity. As AI-based offensive capabilities evolve, attackers are also refining their strategies to evade automated controls. This poses an ongoing risk for companies that have not yet adapted their internal policies or updated their technological defenses.
Actionable lessons you can apply today:
What OpenAI has done marks a turning point. It is no longer enough to protect traditional systems: workflows that rely on AI must be protected as well. Being prepared is a competitive advantage.
At Apolo Cybersecurity we help companies design policies for the responsible use of AI and shield their operations against new threats. Request a free audit now!