First Contact
We’re making first contact with a different kind of intelligence - one that bears some resemblance to things we know, but is alien enough that our existing playbooks don’t quite fit.
There’s another meaning of “first contact” I keep thinking about. In military doctrine, there’s an adage: the plan never survives first contact with the enemy. First contact with reality. The moment when preparation meets the world.
I used to be in the Army. Intelligence Corps, working with the Parachute Regiment and special operations units. A lot of my work was people management, though I was constantly involved in the orders process - the structured approach to planning operations that’s drilled into officers at Sandhurst.
Three concepts from that world apply surprisingly well to working with AI agents.
Mission command
The core principle: tell subordinates what you want, not how to do it. State the end state, the assets available, the constraints. Then let them figure out the approach.
This is a NATO-wide doctrine, and it exists because micromanagement doesn’t scale. A commander who spends their cycles dictating how every subordinate executes their tasks will bottleneck the entire operation. Devolving agency to lower levels brings risk - you might have underperformers making bad calls. But it also brings unexpected successes from people who’d have been constrained by rigid instructions.
The doctrine has been refined over decades. The gamble overwhelmingly pays off.
The parallel to AI agents is direct. Telling a model exactly which framework to use, how to structure its code, what patterns to follow - this probably won’t get better results than stating what you want and how you’ll verify it works.
These days, models are good at making implementation decisions. If you’re doing anything interesting, the software landscape is vast enough that you have blind spots. A great way to uncover them is to let models propose what they want to do, rather than dictating your proposal.
Leave room for being pleasantly surprised.
The orders process
At Sandhurst, you spend an enormous amount of time on planning. But under strict time constraints that make everything feel slightly rushed. And you’re required to leave at least a third of your available time for those beneath you - because they need to plan too, and the cascade has to work.
Two things follow from this:
First, planning tasks are basically bottomless. You could always do more analysis, more contingency preparation, more coordination. The time constraint forces you to prioritise ruthlessly.
Second, the plan never survives first contact with the enemy. You go in with a strong plan but a weak commitment to it. When surprises arrive - and they always do - you adapt.
This maps well to software. You need some idea where you’re going, especially in enterprise settings. But once you get moving, the usual agile instincts kick in. Estimates have always been fraught because writing the code is the only way to find out how long writing the code will take.
What’s changed: implementation is becoming extremely cheap. Agents can churn through code at a pace that makes direction the scarce resource. Maintaining a clear mental model of what you’re building - being able to hold the plan even as you adapt it - is now the difference between staying in control and letting things run amok.
Plan thoroughly. Hold the plan loosely. But hold it.
Inkblot strategy
The inkblot strategy is a counterinsurgency approach: establish small safe areas, then let them spread and merge. It was used in Malaya, Vietnam, and more recently in Afghanistan.
The relevance to AI adoption is about how agency spreads.
In any organisation, there’ll be a small number of people who intuitively grasp the significance of AI early. They start tinkering, become local experts, and because AI applications are so domain-specific, they become genuine experts fast - just by spending their days working on problems in their area with these tools.
These are your inkblots. They’re exercising agency - choosing to adopt, choosing to learn, choosing to change how they work. You can’t mandate that. Trying to force adoption is like trying to force mission command: it defeats the purpose.
Find these people before you try to change the organisation. Let agency spread from there. The inkblots merge; the territory shifts.
Why this matters
I’m not suggesting the military is the best model for everything. There’s plenty I wouldn’t generalise from that world - it can seem like a myopic subculture in many ways.
As an officer - I spent most of my career as a captain - you’re essentially junior clergy for the doctrine of the institution. There’s a spectrum of how indoctrinated you become. I was borderline heretical; I never really got institutionalised. But the ideas above stayed with me in spite of that.
There’s a skin-in-the-game aspect worth noting. During operational periods - and the West has had an intense one since 9/11 - there’s a forcing function. Command and control practices that don’t work get people killed. The selection pressure is real. Whatever emerges from that process has been tested in ways that most organisational theory hasn’t.
The military is a hierarchical structure with practices at each level that have been worn smooth by use. Practices that reflect different levels of delegation, different spans of control, different relationships between planning and execution.
As AI agents scale the abstraction ladder - as we find ourselves delegating more, managing more autonomous systems, trusting more black boxes - these ideas become more relevant, not less.
The section commander’s job was a comfortable model for the first year of AI coding tools. I suspect the platoon commander’s job is a better model for what’s coming next.