
Security agencies, like all major organisations, are wrestling with how to integrate AI into their work to boost effectiveness.
But they will need to think not just about how to improve the execution of individual tasks, which is AI’s current strength, but also about how AI can help bind rather than isolate the people who make up a large organisation. To adopt AI well, security organisations will also need to invest in systems that improve communication, organisational knowledge, accountability and collective judgement.
AI is improving rapidly, with unpredictable implications, so much of the commentary is coloured by fear and anxiety. A more productive conversation would ask what becomes more important as AI takes on more of the narrow tasks.
For the foreseeable future, major organisations, including security agencies, will continue to revolve around people interacting, cooperating, communicating and learning from one another.
AI is remarkably good at tasks such as drafting, summarising, analysing and building. Tasks that once required significant expertise and time can now be executed rapidly by AI systems. This is genuinely transformative. The efficiency gains are real and will reshape how work gets done.
But as AI is integrated, major organisations need to focus on three key areas: sharing knowledge and understanding through communication; keeping humans in the loop; and fostering the ability to exercise judgement as people move through their careers. No organisations better exemplify these needs than security agencies, which face particular expectations of accountability for decision-making and must exercise fine judgement with little tolerance for error.
Organisations don’t just run on tasks. They run on shared understanding: the ability to reconstruct why a decision was made, what was agreed and what context shaped the thinking. They run on communication and coordination across teams, across time and across the messy reality of humans working together.
That coordination layer – the infrastructure through which knowledge persists and stays accessible – is where we should direct our thinking in relation to AI, rather than focusing on features or interfaces.
This is where communication systems become central.
Most communication tools are optimised for transmitting and receiving messages, not for preserving organisational knowledge over time or capturing the complex networks of relationships within which work actually happens. The result is familiar to every professional: context fragments, decisions become impossible to reconstruct and enormous effort goes into re-explaining what different people understand about the same thing.
AI doesn’t necessarily fix this. It operates within communication environments, often built on legacy systems or legacy assumptions, and depends on them for context. If, for example, organisational knowledge was never structured to persist, AI risks simply executing faster on incomplete understanding. Indeed, a very real scenario is that AI papers over the fundamental flaws in our communications ecosystems and hides the real problems beneath a facade of efficiency.
The second dimension that deserves attention is ensuring that humans make key decisions and that those decisions are explainable. As AI takes on more execution, humans need to remain genuinely embedded in thinking and decision-making. Legal accountability requires identifiable human decision-makers. Regulatory compliance demands human oversight. And effective decisions under ambiguity require human judgement about trade-offs, something we cannot delegate to AI.
Our communication tools aren’t peripheral to the AI story; they’re a crucial part of it.
Finally, we should be asking whether AI is making us better at communicating and thinking, or quietly creating dependency without building capability.
I started my career doing unglamorous work: writing draft briefs for others to speak to; taking the notes that others used to track decisions; and sitting in meetings feeling completely out of my depth.
I got more confident and competent only with practice. I had my draft policy work rewritten and learned from that. At times I was completely convinced I had the right answer in a brief, only to have it changed because I hadn’t considered the broader context or nuance.
Looking back, that was how I developed judgement. There’s no shortcut to that. Experience builds perspective, and perspective builds the kind of judgement that matters when the stakes are high.
If AI does that entry-level work instead, where does judgement come from? How do we develop the next generation of people who can make genuinely good decisions under ambiguity and pressure if they never have the experience of doing the thing, making the mistakes and learning what matters?
And judgement doesn’t develop in isolation. It develops through real human communication experiences. Through the discomfort of being in the room and not knowing enough. Through getting things wrong and understanding why. Through building the kind of relationships where honest feedback is possible. If we lose those experiences, we don’t just lose the judgement; we lose the human communication layer that was developing it.
Even if AI ends up communicating on our behalf, that doesn’t solve the problem. It doesn’t address why we communicate as human beings in the first place. Communication isn’t just information transfer. It’s how we build relationships, earn trust and make sense of the world together. An AI doing that for us isn’t a solution; it’s a loss.
Organisations need to understand and use AI, and leaders need to show the way. But those that navigate this transformation well won’t necessarily be those adopting AI fastest. They’ll be the ones investing in the infrastructure that makes coordination, memory and human judgement more resilient.