When Hillai Ben Sasson and Dan Segev set out to hack AI infrastructure two years ago, they expected to find vulnerabilities — but they didn’t expect to compromise virtually every major AI platform they targeted.
The two researchers — who work in offensive and defensive research, respectively, at cloud-security firm Wiz — wanted to experiment with how they could attack the AI infrastructure being deployed as part of foundation models, AI services, and in-house AI projects. Yet what started as simple attacks on the AI supply chain — such as abusing the widely used Pickle format to run arbitrary code — evolved into a comprehensive threat assessment spanning five distinct layers of the AI stack.
They plan to present the lessons learned over their two years of research at the upcoming RSAC Conference in March. Perhaps the most important lesson: Focus on the infrastructure used to train, run, and host AI services, not on prompt-injection attacks, says Segev, a security architect in the Office of the CTO at Wiz.
“Don’t get me wrong — I think prompt injection is definitely a novel attack vector,” he says. “But technologies and services are introduced every day — MCP [model context protocol] is a good example — all those technologies are introduced with so many vulnerabilities on the infrastructure layer that, if you are not looking at the fundamentals of security [and] understanding the threat model … then you’re really missing out on the big picture.”
The presentation comes as businesses across every industry attempt to figure out how best to use AI without missing out on potential innovations and cost savings, moving ahead despite security concerns. An overwhelming majority (83%) of chief information security officers (CISOs) are worried about the level of access AI has to their companies’ systems, especially because most (71%) believe AI has access to core business systems and have found unsanctioned AI tools running in their environments, according to the 2026 CISO AI Risk Report.
The rapid pace of AI development has resulted in companies rushing insecure products to market, repeating past mistakes of prioritizing speed over security, Segev says.
AI Security’s in a Pickle
Take the Pickle format, for example. Often used to store model weights, the format mixes data and code, allowing malicious Pickle files to readily run malware on the systems that load them. Because many of these formats and much of the infrastructure came from data researchers, most design decisions did not include threat modeling or a focus on security, says Ben Sasson, a senior security researcher at Wiz.
“We were really surprised to find out that AI models and AI model formats often have security vulnerabilities by design, like the Pickle format, which is a really, really popular way to store AI models,” he says. “We were really intrigued by this and we started thinking, what if we deploy malicious models to all the big AI providers and we see what happens?”
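The by-design flaw is easy to demonstrate in a few lines. Python's pickle protocol lets any object define a `__reduce__` method naming a callable to invoke during deserialization, so merely loading a "weights" file executes attacker-chosen code. In this illustrative sketch (the `record` function and `MaliciousWeights` class are stand-ins; a real payload would call `os.system` or open a reverse shell):

```python
import pickle

# Records whether the payload ran; in a real attack the side effect
# would be malware execution, not a list append.
payload_log = []

def record(msg):
    payload_log.append(msg)

class MaliciousWeights:
    """Masquerades as a serialized model-weights object."""
    def __reduce__(self):
        # pickle consults __reduce__ when serializing; on load, it
        # calls the returned callable with the given arguments.
        return (record, ("arbitrary code executed during load",))

# Attacker side: craft the "model file."
blob = pickle.dumps(MaliciousWeights())

# Victim side: simply loading the weights triggers the callable —
# no method calls, no inference, no explicit code execution.
pickle.loads(blob)
assert payload_log == ["arbitrary code executed during load"]
```

This is why the ecosystem has been shifting model distribution toward formats such as Safetensors, which store only tensor data and metadata rather than executable serialization instructions.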
In the end, the two researchers built a threat model with five layers, each based on a part of the AI life cycle. The first is model training, where data leakage is perhaps the biggest risk. In 2023, Wiz reported that an overly permissive file-sharing link allowed anyone to access a massive 38TB data store used by Microsoft to train its models.
Next, at the inference stage, where users interact with the models, Wiz researchers discovered numerous vulnerabilities in production models, such as DeepSeek, and services, such as Ollama.
Vibe Coding’s Poor Security
The third layer, the application layer, includes prompt injection but also issues with vibe-coding platforms, such as Base44, where Wiz researchers found a vulnerability that could have allowed attackers to gain access to any private enterprise application built on the platform. In fact, vibe-coding platforms have a poor security record, Segev says.
“We don’t have exact numbers, but almost every vibe-coded app we set out to look for, we were able to hack in minutes,” he says. “The reality is that AI security is — I don’t want to say broken — but it’s really compromised at the infrastructure layer.”
The researchers expanded their model to two other layers as well. The AI clouds that host models and applications have their own set of vulnerabilities. “You can compromise the AI cloud and therefore compromise all the customers of that cloud,” Ben Sasson says.
The researchers even found vulnerabilities in the hardware and systems on which AI infrastructure is based. Wiz researchers found vulnerabilities in NVIDIA’s Triton Inference Server that could have been chained together to allow an unauthenticated attacker to gain complete access to the AI model.
“This was perhaps the craziest find of them all … because you find one vulnerability in this library and then everyone uses this library,” Ben Sasson says. “It’s like one vulnerability for every single cloud provider, every single AI application, every single step of the AI process. Everyone was vulnerable to this.”
There are no fast fixes for the current problems with AI security, especially because so many of the issues are in the hands of others, says Segev. Wiz currently uses a security agent to conduct regular security reviews, checking any code, service, or application. Rather than “implement and forget,” security agents could bring regular compliance checks as pieces of the AI ecosystem are created, he says.
“Having the ability to close the loop is something that will be more common and is going to introduce better protocols, better standards, and more security,” he says. “Attackers are becoming so much more sophisticated that [companies] just won’t have the ability to stay exposed with some vulnerabilities for long. It’ll take minutes before it’ll be exploited.”
