Cybersecurity

Every Model, Every Layer Is Risky

By primereports · February 21, 2026


When Hillai Ben Sasson and Dan Segev set out to hack AI infrastructure two years ago, they expected to find vulnerabilities — but they didn’t expect to compromise virtually every major AI platform they targeted.

The two researchers — who work in offensive and defensive research, respectively, at cloud-security firm Wiz — wanted to experiment with how they could attack the AI infrastructure being deployed as part of foundational models, AI services, and in-house AI projects. Yet, what started as simple attacks on the AI supply chain — such as abusing the widely used Pickle format to run arbitrary code — evolved into a comprehensive threat assessment spanning five distinct layers of the AI stack.

They plan to present the lessons learned over their two years of research at the upcoming RSAC Conference in March. Perhaps the most important lesson: Focus on the infrastructure used to train, run, and host AI services, and not on prompt-injection attacks, says Segev, a security architect in the Office of the CTO at Wiz.


“Don’t get me wrong — I think prompt injection is definitely a novel attack vector,” he says. “But technologies and services are introduced every day — MCP [model context protocol] is a good example — all those technologies are introduced with so many vulnerabilities on the infrastructure layer that, if you are not looking at the fundamentals of security [and] understanding the threat model … then you’re really missing out on the big picture.”

The presentation comes as businesses across every industry try to figure out how best to use AI, pushing ahead despite security concerns so as not to miss out on potential innovations and cost savings. An overwhelming majority (83%) of chief information security officers (CISOs) are worried about the level of access AI has to their company’s systems, especially because most (71%) believe AI has access to core business systems and have found unsanctioned AI tools running in their environments, according to the 2026 CISO AI Risk Report.

The rapid pace of AI development has resulted in companies rushing insecure products to market, repeating past mistakes of prioritizing speed over security, Segev says.

AI Security’s in a Pickle

Take the Pickle format, for example. Often used to store model weights, the format mixes data and code, so a malicious Pickle file can run malware on any system that loads it. Because many of these formats and much of the infrastructure came from data researchers, most design decisions did not involve threat modeling or a focus on security, says Hillai Ben Sasson, a senior security researcher at Wiz.


“We were really surprised to find out that AI models and AI model formats often have security vulnerabilities by design, like the Pickle format, which is a really, really popular way to store AI models,” he says. “We were really intrigued by this and we started thinking, what if we deploy malicious models to all the big AI providers and we see what happens?”
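The design flaw Ben Sasson describes fits in a few lines of Python. Pickle serializes instructions for rebuilding objects, and the `__reduce__` hook lets any object name a callable to invoke at load time; the class name below is illustrative and the payload is a harmless `print`, but an attacker distributing a "model" file could substitute any code:

```python
import pickle

# A "model" whose __reduce__ tells pickle to call a function on load.
# This is the hook malicious Pickle files abuse.
class RiggedModel:
    def __reduce__(self):
        # (callable, args) -- executed by pickle.loads(), not by us
        return (print, ("code executed during unpickling",))

payload = pickle.dumps(RiggedModel())
result = pickle.loads(payload)  # merely loading the file runs the call
```

Note that the victim never has to use the model; calling `pickle.loads()` (or a library that does so internally) is enough to trigger execution.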

In the end, the two researchers built up a threat model that has five layers, based on parts of the AI life cycle. The first is model training, during which data leakage is perhaps the biggest risk. In 2023, Wiz reported that an overly permissive file-sharing link allowed anyone to access a massive 38TB data store being used by Microsoft to train its models.

Next, at the inference stage, where users interact with the models, Wiz researchers discovered numerous vulnerabilities in production models, such as DeepSeek, and services, such as Ollama.

Vibe Coding’s Poor Security

The third layer, the application layer, includes prompt injection but also issues with vibe-coding platforms, such as Base44. Wiz researchers found a vulnerability that could have allowed attackers to gain access to any private enterprise application. In fact, vibe-coding platforms have a poor security record, Segev says.


“We don’t have exact numbers, but almost every vibe-coded app we set out to look for, we were able to hack in minutes,” he says. “The reality is that AI security is — I don’t want to say broken — but it’s really compromised at the infrastructure layer.”

The researchers expanded their model to two other layers as well. The AI clouds that host models and applications have their own set of vulnerabilities. “You can compromise the AI cloud and therefore compromise all the customers of that cloud,” Ben Sasson says.

The researchers even found vulnerabilities in the hardware and systems on which AI infrastructure runs. Wiz researchers found vulnerabilities in NVIDIA’s Triton Inference Server that could have been chained together to allow an unauthenticated attacker to gain complete access to the AI model.

“This was perhaps the craziest find of them all … because you find one vulnerability in this library and then everyone uses this library,” Ben Sasson says. “It’s like one vulnerability for every single cloud provider, every single AI application, every single step of the AI process. Everyone was vulnerable to this.”

There are no fast fixes for the current problems with AI security, especially because so many of the issues are in the hands of others, Segev says. Wiz currently uses a security agent to conduct regular security reviews that check any code, service, or application. Rather than “implement and forget,” security agents could bring regular compliance checks as pieces of the AI ecosystem are created, he says.

“Having the ability to close the loop is something that will be more common and is going to introduce better protocols, better standards, and more security,” he says. “Attackers are becoming so much more sophisticated that [companies] just won’t have the ability to stay exposed with some vulnerabilities for long. It’ll take minutes before it’ll be exploited.”


