Researchers from RSAC have found a way to bypass the safety protocols of Apple Intelligence, Apple's AI system, with a high success rate.
Apple Intelligence is a deeply integrated personal intelligence system for iOS, iPadOS, and macOS that combines generative AI with personal context.
It primarily processes tasks directly on Apple silicon via a compact on-device LLM. The AI draws on the user’s unique context (messages, photos, and schedules) to power practical features such as system-wide writing tools and Siri. For more complex reasoning, it offloads requests to larger foundation models via Private Cloud Compute (PCC) on Apple’s dedicated cloud infrastructure.
Apple Intelligence was examined by the research team at RSAC, the organization that hosts the RSAC Conference.
The researchers set out to bypass the local LLM’s input and output filters (designed to block malicious input and prevent undesirable output), as well as internal guardrails to influence its actions.
To achieve this, they combined two distinct adversarial techniques. The first is Neural Execs, a known prompt injection attack that uses ‘gibberish’ inputs to trick the AI into executing arbitrary, attacker-defined tasks. These inputs act as universal triggers that do not need to be remade for different payloads.
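The reuse property of such triggers can be illustrated with a minimal sketch. The trigger strings below are placeholders; the real Neural Exec triggers are machine-optimized, non-human-readable token sequences that were not published.

```python
# Conceptual sketch of a "universal trigger" prompt injection, where one
# fixed trigger wraps arbitrary attacker-defined payloads. The trigger
# values here are illustrative placeholders, not the actual optimized
# sequences used in the Neural Exec attack.

TRIGGER_PREFIX = "<optimized-prefix>"  # placeholder for the learned prefix
TRIGGER_SUFFIX = "<optimized-suffix>"  # placeholder for the learned suffix

def build_injection(payload: str) -> str:
    """Wrap any payload in the same fixed trigger -- no per-payload
    re-optimization is needed, which is what makes the trigger 'universal'."""
    return f"{TRIGGER_PREFIX} {payload} {TRIGGER_SUFFIX}"

# Different payloads, identical trigger:
for task in ["summarize the user's messages", "ignore prior instructions"]:
    print(build_injection(task))
```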
The second method, used by the RSAC researchers to bypass input and output filters, is Unicode manipulation. By writing malicious output text backward and applying the Unicode right-to-left override (RLO) character, U+202E, they were able to bypass content restrictions.
“Essentially, we encoded the malicious/offensive English-language output text by writing it backwards and using our Unicode hack to force the LLM to render it correctly,” the researchers explained.
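The described encoding can be sketched in a few lines. The payload below is a harmless placeholder, since the researchers' actual prompts were not published, and the helper name is our own.

```python
# Sketch of the Unicode right-to-left override (RLO) trick: the text is
# stored reversed, so naive substring filters miss it, while a renderer
# that honors bidirectional controls displays it in the original order.

RLO = "\u202E"  # U+202E RIGHT-TO-LEFT OVERRIDE
PDF = "\u202C"  # U+202C POP DIRECTIONAL FORMATTING (ends the override)

def rlo_encode(text: str) -> str:
    """Reverse the text and wrap it in bidi control characters."""
    return RLO + text[::-1] + PDF

encoded = rlo_encode("hello world")

# The stored code points are reversed, so a filter scanning for the
# literal string "hello" never matches...
assert "hello" not in encoded
# ...yet the original text is trivially recoverable (and is what a
# bidi-aware renderer would display).
assert encoded[1:-1][::-1] == "hello world"
```

Content filters that normalize or strip bidirectional control characters before matching would defeat this particular encoding, which is presumably the kind of hardening Apple's fix applies.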
Combining the two methods can allow attackers to force the local Apple Intelligence LLM to produce offensive content or, more critically, manipulate private data and functionality within third-party applications integrated with Apple Intelligence, such as health data or personal media.
The attack was tested with 100 random prompts and the researchers achieved a success rate of 76%.
They estimate that between 100,000 and 1 million users have installed apps that may be vulnerable to such attacks.
“RSAC estimates that there were at least 200 million Apple Intelligence-capable devices in consumers’ hands as of December 2025, and the Apple App Store already features apps using Apple Intelligence—so it’s already a high-value target,” the researchers noted.
Apple was notified in October 2025 and, according to RSAC Research, protections were rolled out in the recent iOS 26.4 and macOS 26.4 releases.
The researchers have not seen any evidence of malicious exploitation.
Related: Google API Keys in Android Apps Expose Gemini Endpoints to Unauthorized Access
Related: Anthropic Unveils ‘Claude Mythos’ – A Cybersecurity Breakthrough That Could Also Supercharge Attacks
Related: The New Rules of Engagement: Matching Agentic Attack Speed
