
The dangers of GPTs

September 1, 2025 | 5 min read


Highlights

GPT models like ChatGPT are being misused by cybercriminals for phishing, malware, and data leaks. This blog outlines real-world risks and what you can do to protect your business.

Generative Pre-trained Transformers (GPTs) like OpenAI's ChatGPT are revolutionising industries across the board. From writing emails to creating educational content, they're powerful tools built to understand and generate human-like text. But the same tech that makes GPTs useful also makes them risky, particularly for cybersecurity.

In February 2024, Microsoft and OpenAI spotted several state-backed hacking groups from Russia, North Korea, Iran, and China using GPTs to improve their exploitation tactics. The Strontium group, linked to Russian military intelligence, was found using large language models (LLMs) to research satellite communication protocols, radar imaging technologies, and other sensitive military information.

But GPTs can also be misused in everyday cybercrime and by employees or contractors who have access to sensitive data.

How GPTs can be weaponised in everyday cybercrime

  • Phishing: GPTs can generate convincing phishing emails that mimic real writing styles, making them harder to spot and harder for filters to block.
  • Social engineering: these models can be used in live chats, like customer support, to trick people into giving up sensitive information. Connected to text-to-speech tools, they could also be used in voice scams.
  • Malware code generation: even with filters in place, attackers can trick GPTs into writing malicious code.
  • Data leakage: when employees input sensitive company information into these models, that data may be stored by the provider and, if used for training, could resurface in responses to other users.
  • Misinformation: GPTs can 'hallucinate', presenting false information as fact. When spread, this can lead to real-world consequences such as political confusion or interference during a crisis.

Real-world proof this is happening

OpenAI, the company behind ChatGPT, has openly called out “SweetSpecter”, a China-based hacking gang that used ChatGPT to gather intel, research vulnerabilities, write attack scripts and more. In May 2024, the group even targeted OpenAI employees themselves in a spear-phishing campaign (a targeted email scam designed to trick someone into giving away access or data). OpenAI is believed to be the group's first publicly identified US-based target, after it had previously targeted only Middle Eastern, African and Asian political entities.

A report by OpenAI documents this and other real-world examples.

How you can protect your business's data

  1. Train your team: regularly train your employees to spot dodgy emails. Show them how to check for suspicious links and fake email addresses. Make sure they know to report anything fishy to your security team right away.
  2. Stay alert: remind everyone that someone emailing or calling to ask for money or personal information might not be who they claim to be. Always double-check by contacting the person through a different method before doing anything.
  3. Fact check: don’t trust everything you read, especially if it comes from AI tools like ChatGPT. Always verify information from several sources before sharing or using it.
  4. Control access to GPT tools: some companies block ChatGPT and similar services to stop sensitive data from leaking. If you don’t want to block them outright, set clear rules on how they can be used.
  5. Watch your data: keep an eye on how much data is leaving your company's environment. If there’s a big spike, like lots of info being sent to ChatGPT, that’s a red flag and needs checking out (see the sketch below).
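
Point 5 lends itself to simple automation. Below is a minimal Python sketch that sums outbound bytes per user to a handful of GPT-related domains in a proxy log and flags anyone over a daily threshold. The log format (a CSV with user, destination_host and bytes_sent columns), the domain list, and the 5 MB threshold are all illustrative assumptions, not a prescription; adapt them to whatever your proxy or DLP tooling actually records. A similar lookup against an approved-tools list could also back up the access rules in point 4.

```python
# Minimal sketch: flag unusually large outbound transfers to GPT-style
# endpoints in a proxy log. Log columns, domains and threshold are
# illustrative assumptions, not a standard format.
import csv
from collections import defaultdict

# Domains you have chosen to watch (assumes your proxy logs the destination host)
WATCHED_DOMAINS = {"chat.openai.com", "api.openai.com", "chatgpt.com"}

# Alert if a single user sends more than ~5 MB to watched domains in one day
DAILY_BYTES_THRESHOLD = 5 * 1024 * 1024


def bytes_sent_per_user(log_path: str) -> dict[str, int]:
    """Sum outbound bytes per user for requests to watched domains.

    Assumes a CSV proxy log with columns: user, destination_host, bytes_sent.
    """
    totals: dict[str, int] = defaultdict(int)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["destination_host"] in WATCHED_DOMAINS:
                totals[row["user"]] += int(row["bytes_sent"])
    return totals


if __name__ == "__main__":
    for user, total in bytes_sent_per_user("proxy_log.csv").items():
        if total > DAILY_BYTES_THRESHOLD:
            print(f"Red flag: {user} sent {total / 1024 / 1024:.1f} MB to GPT endpoints")
```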

Looking for support that actually fits your business?
Vorboss offers custom cybersecurity services that match how you operate. Speak to an expert.
