Cybersecurity

AI Agents Ignore Security Policies

By primereports · February 21, 2026 · 5 min read


AI agents are programmed to be industrious and focused on completing user-assigned tasks, but that single-minded approach has often gone wrong.

Last week, a Microsoft Copilot bug reportedly resulted in the AI assistant summarizing confidential emails, and users of AI agents have regularly complained that agents ignore instructions to leave certain files alone, modifying them anyway. Last July, during a 12-day vibe-coding event, for example, one user working with AI agents on the software-creation platform Replit reported that the agent repeatedly ignored code freezes and even deleted a production database.

The problem is that as companies adopt AI agent technology, those agents are quick to find any cracks in an organization's security foundations and pose a whole new set of security issues, says Alfredo Hickman, chief information security officer (CISO) at Obsidian Security, a provider of software-as-a-service (SaaS) security.


“There is a genuine fear-of-missing-out [FOMO] effect going on at all levels of organizations, and people are moving very quickly to adopt these nascent technologies, even though a lot of the capabilities to effectively govern, secure, and harden them are still in very embryonic states,” he says.

While AI agents are subject to malicious manipulation by humans, and attacks on AI infrastructure are a particular concern, AI systems can also “act in unexpected ways, as these agents act on the scope of roles and access they are granted,” says Pete Bryan, a principal AI security research lead for Microsoft’s AI Red Team.

Because AI agents are very thorough, they often find they have access to sensitive information or data stores that would otherwise be off-limits, he adds.

“When we are talking about accidental leakage via agents, the majority of cases are not due to an intent by the agent to circumvent controls,” he explains. “In our experience these incidents are more likely due to an agent having unintended scope and inappropriate permissions, or operating in an environment lacking controls.”

AI Guardrails Aren’t “Hard” Enough

Foundational large language models (LLMs) are typically aligned as part of training, establishing guardrails that attempt to prevent them from producing harmful output. AI agents build on top of those models with reinforcement learning, which makes them very goal-oriented, says Luke Hinds, co-founder and CEO of Always Further, an AI security startup.

They are effectively told, “Here is a goal, pursue this goal until the very end, and then you’ll be rewarded accordingly,” he says. “They’re unaware of the intention of the person that’s driving them, but that goal-oriented behavior effectively makes these into God-like attack machines.”


For that reason, alignment and guardrails will never be able to keep data protected from AI agents that are designed to find ways to satisfy users’ requests, says David Brauchler, technical director and head of AI and ML security at NCC Group, a cybersecurity consultancy.

“We see AI systems disregard guardrails often enough that they cannot be considered ‘hard’ security controls,” he says. “Any system that relies on guardrails to prevent AI agents from interacting with resources beyond their permission scope is vulnerable by design.”

Instead, privileged agents must be segmented away from sensitive data, and their exposure to untrusted input restricted, he says.
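The segmentation Brauchler describes can be sketched as a deny-by-default permission gate that sits outside the model, so a tool call succeeds only if it was explicitly granted. This is a minimal illustration, not any real agent framework's API; the names `ALLOWED_TOOLS` and `call_tool` are invented here.

```python
# Minimal sketch of deny-by-default tool gating for an agent. The gate is a
# hard control outside the model: the model's guardrails never decide access.
ALLOWED_TOOLS = {
    "search_docs": {"scope": "public"},     # read-only, public data
    "read_file":   {"scope": "workspace"},  # limited to the agent's workspace
}

def call_tool(tool: str, **kwargs):
    """Dispatch a tool call only if the tool is explicitly granted."""
    policy = ALLOWED_TOOLS.get(tool)
    if policy is None:
        # Fail closed: an ungranted tool can never run, however the
        # agent phrases the request.
        raise PermissionError(f"tool {tool!r} is not granted to this agent")
    # A real implementation would dispatch to the tool here.
    return {"tool": tool, "scope": policy["scope"], "args": kwargs}

print(call_tool("read_file", path="notes.txt")["scope"])  # workspace
try:
    call_tool("drop_database")  # never granted, so it cannot run
except PermissionError as exc:
    print(exc)
```

The key design choice is that the allowlist lives in ordinary code the agent cannot rewrite, which is what makes it a “hard” control in Brauchler's sense.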

Cyber Defense via Visibility: Know Your Agents

For the most part, companies need to move beyond reliance on the guardrails and security built into AI systems, adding safety filters that control inputs and instructions. An appropriately secured environment limits permissions and enforces policies, Microsoft’s Bryan says.

“Observability and management for agents is essential, so that enterprises have oversight and can act to enforce policies and controls,” he explains.
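The observability Bryan calls for amounts to recording every agent action in a structured audit trail that oversight tooling can inspect or alert on. The sketch below is illustrative only; the schema and names are invented for this example.

```python
# Toy sketch of agent observability: each action an agent takes is appended
# to a structured audit log that enforcement tooling can later replay.
import json
import time

AUDIT_LOG: list[dict] = []

def record(agent_id: str, action: str, **detail):
    """Append one structured audit entry per agent action."""
    entry = {
        "ts": time.time(),   # when the action happened
        "agent": agent_id,   # which agent acted
        "action": action,    # what it did
        "detail": detail,    # action-specific context
    }
    AUDIT_LOG.append(entry)
    return entry

record("assistant-42", "read_mail", folder="inbox", count=12)
record("assistant-42", "summarize", items=12)

# Oversight can now replay exactly what the agent touched:
print(json.dumps([e["action"] for e in AUDIT_LOG]))  # ["read_mail", "summarize"]
```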


Many of the approaches used to protect against human mistakes and errors could be repurposed for the AI age, albeit scaled up to keep pace with the massive influx of non-human agents into corporate environments, adds Always Further’s Hinds.

“It’s just good old principles of defense-in-depth, zero trust, least privilege, all of this stuff that we learnt for years and years around security is worth its weight in gold — it really is,” he says. “It is building the controls and the constraints and the checks and the balances around this, because in a lot of ways, a large language model is not too different to a human.”

Backups, and the ability to quickly undo any changes an agent makes, are key as well. Any developer who has spent time with agentic AI programming knows that being able to roll back changes using git or another version-control tool is critical. Replit itself underscored that importance after the database-deletion debacle. CEO Amjad Masad apologized and pledged the company would do better — first by separating development and production by default, and then by taking other measures to reinforce instructions to the agent. He also stressed that backups saved the day.

“Thankfully, we have backups,” he said on X. “It’s a one-click restore for your entire project state in case the agent makes a mistake.”
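The checkpoint-and-restore workflow Masad describes can be reproduced with plain git: commit a checkpoint before the agent acts, and a single `git reset --hard` undoes whatever the agent changed. This sketch assumes `git` is installed and on `PATH`; the workspace and file names are invented for illustration.

```python
# Sketch: checkpoint an agent's workspace with git so any change the agent
# makes can be rolled back in one step. Assumes git is available on PATH.
import pathlib
import subprocess
import tempfile

def run(args, cwd):
    subprocess.run(args, cwd=cwd, check=True, capture_output=True)

ws = pathlib.Path(tempfile.mkdtemp())
run(["git", "init", "-q"], ws)
run(["git", "config", "user.email", "agent@example.com"], ws)
run(["git", "config", "user.name", "agent"], ws)

# Checkpoint the known-good state before the agent acts.
(ws / "app.py").write_text("print('production')\n")
run(["git", "add", "-A"], ws)
run(["git", "commit", "-qm", "checkpoint before agent run"], ws)

# Agent "mistake": it rewrites the file despite instructions not to.
(ws / "app.py").write_text("# oops, everything deleted\n")

# One-step restore to the checkpoint, regardless of what the agent did.
run(["git", "reset", "--hard", "-q", "HEAD"], ws)
print((ws / "app.py").read_text(), end="")  # the checkpointed version is back
```

Production databases need real backup tooling rather than git, but the principle — snapshot before the agent acts, restore in one command — is the same one Replit's one-click restore applies at the project level.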

Overall, AI agents can be secured to prevent any data leakage, and backups can prevent data loss, but companies need to put in a significant amount of security work, says Microsoft’s Bryan.

“Data exposure isn’t an inevitable outcome of agents,” he says. “It can be mitigated with the right governance in place and by following security best practice: identity-based access, least privilege permissions, effective environment isolation, continuous monitoring, audit logs, and clear human oversight.”


