Collapsing Cycles
Years ago I read MCDP-1: Warfighting, a US Marine Corps doctrinal publication from 1997. I’m a software engineer, not a military strategist, but something in that document stuck with me.
The Marines describe three levels of war. The tactical level is about the immediate fight — how to take a specific hill. The operational level asks which hills matter across an entire theater. The strategic level links the whole business of war to the policy objectives of the government. These levels form a hierarchy, but as the manual puts it, “there are no sharp boundaries between the levels. Rather, they merge together and form a continuum.”
Historically, these levels operated at different clock speeds. Strategic decisions unfolded over months or years. Operational planning over weeks. Tactical execution in hours or minutes. The difference in tempo meant the levels could function somewhat independently, coordinating at a comfortable pace.
MCDP-1 observed that this was changing. The levels were compressing. In Somalia, “strategic (i.e., political) effects could result from the actions of squads or even individuals.” A soldier making a split-second tactical call on a street corner could trigger consequences that reached the policy level. And conversely, generals had to stay grounded in tactical realities that shifted faster than their planning cycles could accommodate.
The Marines’ response to this compression is interesting. They didn’t centralize control or build better command hierarchies. They went the other direction — toward what they call commander’s intent: a clearly communicated strategic purpose that enables people to act autonomously. If everyone in the chain understands why they’re doing what they’re doing, they can make good decisions without waiting for orders. Not less coordination, but a different kind. Shared understanding rather than explicit instruction.
I’ve been thinking about this framework because I see the same compression happening outside of warfare.
My father was born in the 1950s. He trained as an electrical engineer and rode that education through his entire career — a single, coherent professional path. He’s probably among the last generation for whom that was a reliable option. By the time I was studying and entering the workforce about twenty years ago, the cycles were already faster. I work in software, and I could see that I’d probably need to reinvent myself three, maybe four major times over the course of my career. The skills I started with would not be the skills I’d finish with.
My children will face something steeper. I suspect the very idea of a stable, predictable career path is becoming an artifact. The reinvention won’t happen three or four times — it’ll be more or less continuous.
What interests me is what this acceleration does to how people work together.
When I trained as a software engineer, we had the obligatory project management courses. Waterfall planning. Gantt charts. Formal methodologies for managing complexity. When I entered the actual workforce, most of this felt either antiquated or religious — people pushing frameworks like Agile with the fervor of true believers, complete with the cargo-cult rituals that inevitably followed. Standups performed as daily theater. Story points debated with theological precision. And if you deviated from the prescribed method, you had a “Scrum-but” problem — a term used like an accusation.
Eventually a lot of people gave up on formal structure altogether, which, honestly, was tolerable for a while. The cycles were still slow enough that experienced people could keep things coordinated through informal means — a meeting here, a conversation over beers there. If you had enough seasoned people who knew instinctively what needed to be communicated and to whom, you could get by without rigid processes.
I think that window is closing.
With AI-assisted work, the speed at which individuals can produce is increasing dramatically. It used to be that only an occasional standout engineer moved fast enough that others struggled to keep up. Now AI is putting that kind of productivity within reach of far more people — which is genuinely good, but it creates a coordination problem that is qualitatively different from what came before.
The current generation of AI tooling is focused on the individual. AI assistants running on your local machine, helping you write code, debug, iterate. The productivity gains are real — I experience them daily. But when you push the code and close the laptop, the context that informed all those decisions is either locked in cache files on your machine or simply gone. The reasoning, the tradeoffs considered, the options rejected — none of that travels with the output unless someone makes a deliberate effort to capture it.
And historically, engineers have resisted that effort. Writing documentation, updating plans, maintaining shared context — much of this has been viewed as make-work, an overhead imposed by non-technical people who don’t understand that the real work is the code. I’ve felt that pull myself. The temptation to just ship and move on is strong, especially when you’re moving fast.
The tools for sharing context already exist, of course. Linear, Notion, Confluence, pull requests, design docs. The problem isn’t absence of tooling — it’s that the value proposition has never been clear enough. If I write something up and nobody reads it, what was the point? And if I don’t see intrinsic value in the act of writing it — if it doesn’t help me think or work better — then it’s hard to motivate myself to do it at all. The effort is real and the payoff is theoretical.
I suspect this is partly a cultural problem specific to engineering. In other fields, writing is the work. In diplomacy and foreign service, from what I understand, people write constantly — reports, cables, assessments. Not because someone mandated documentation as a process overhead, but because the writing itself is the mechanism that creates value for stakeholders. The report is the product, not a side effect of the product. In software engineering, the code is the product. Documentation feels like something grafted on afterward, for someone else’s benefit, disconnected from the thing that actually matters.
If engineering cultures genuinely valued shared context — if producing good documentation were recognized and rewarded, even just socially — I think we’d see a lot less of this problem. But that’s a cultural shift, not a tooling one.
AI is changing the equation, though not in a simple direction. Producing context artifacts is becoming cheaper — summaries, documentation, rationale capture. But the volume of change is increasing too, creating more context that needs sharing. The more interesting shift may be on the consumption side: AI can absorb large amounts of shared context and surface what’s relevant to whatever you’re working on right now. Abundance becomes an asset when it can be reshaped on demand.
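The consumption-side idea can be sketched in a few lines. Everything here is invented for illustration — real tools would rank with embeddings or a language model rather than word overlap — but the principle is the same: a large pool of context notes becomes useful the moment it can be filtered against the task at hand.

```python
# Toy sketch of context surfacing: rank shared notes by how much they
# overlap with a description of the current task. The notes and the
# scoring are made up for illustration; production systems would use
# semantic similarity, not raw word overlap.

def tokens(text):
    return set(text.lower().split())

def rank_context(notes, current_task, top_n=2):
    """Return the notes most relevant to the current task."""
    task_words = tokens(current_task)
    def score(note):
        return len(tokens(note) & task_words)
    return sorted(notes, key=score, reverse=True)[:top_n]

notes = [
    "Rejected Redis for session storage because of ops overhead",
    "Customer sentiment shifting toward self-serve onboarding",
    "Chose Postgres advisory locks for the job queue",
]

print(rank_context(notes, "evaluating storage options for sessions"))
```

The point of the sketch is that abundance stops being a liability once retrieval is cheap: nobody has to read every note, because the relevant ones can be surfaced on demand.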
But — and this is the part that nags at me — making it easy to produce documentation doesn’t mean the documentation will be good. There’s a new failure mode emerging that’s almost the inverse of the old one. We used to have too little documentation because it was too expensive to write. Now we risk too much documentation that nobody cared about when producing it. AI-generated slop that technically satisfies the “share context” checkbox but captures nothing of substance. The ease of production becomes an excuse for thoughtlessness.
The quality of shared context depends on something no tool can provide: the intention of the person creating it. Did they think about who would need this information? Did they care whether it would be useful? That’s not a technology problem.
The challenge I see forming is how to integrate entire organizations into shared context. Not just engineering teams, but the full chain: the person talking to customers who picks up on a shift in sentiment, the product thinker who translates that into an initiative, the teams who plan and execute, the stakeholders who need visibility into what’s happening and why. Today these are mostly separate streams connected by meetings, documents, and the occasional Slack thread. The connections are lossy. Important signals degrade as they pass through.
Most organizations handle this through layers of management that translate strategic context downward and tactical reality upward. That works when the layers have time to process and relay. When cycles compress, the translation chain becomes a bottleneck. By the time the signal reaches its destination, the situation has already changed. This is the exact problem MCDP-1 identified — the levels collapsing into each other, requiring people at every level to understand more of the full picture.
There’s a reasonable counterargument to all of this: maybe the answer isn’t more coordination but less. Smaller teams. Fewer dependencies. The two-pizza team taken to its extreme — one person and an army of AI agents, shipping autonomously. If a single person can do what a team of ten used to, maybe you just need fewer people in the loop.
I think that’s partly right. Smaller, more autonomous units do reduce internal coordination overhead. But they don’t eliminate the coordination problem — they shift it. The moment a second person is involved, communication becomes necessary, because we still can’t read each other’s thoughts. And most things worth building involve more than one person. Even a solo operator needs to coordinate with users, partners, and the market. The question isn’t whether to coordinate but how — and at what speed.
MCDP-1 has a line I think about often. The manual warns against “a tendency to seek technological solutions to what are essentially human problems.”
That was written in 1997, about warfare. It applies uncomfortably well to what I see in tech today.
I believe the tools will sort themselves out. They always do. The integrations will get better, the AI will get better at capturing and sharing context, the platforms will converge. The technology side of this problem is solvable and will be solved, probably faster than most expect.
What I’m less confident about is the human side.
My intuition — and I want to be honest that this is an intuition, not a conclusion — is that the bottleneck is empathy. Not empathy in the soft, sentimental sense, but as a practical capacity: the ability to model what other people need to know and the willingness to act on that understanding. When you’re moving fast and focused on your own work, it takes a conscious effort to step back and consider who else is affected by what you’re doing, what context they’re missing, what they need from you to do their own work well.
This can’t be an individual virtue alone. One or two empathetic people in an organization that doesn’t value transparency will burn out or give up. It has to be cultural — a shared expectation that proactive communication is part of the work, not overhead on top of it. And the culture has to actually believe this, not just mandate it, because mandated context-sharing without genuine care behind it is how you get the AI-generated slop I mentioned earlier.
We’ll never fully close the coordination gap. Like friction and uncertainty in MCDP-1, it’s a permanent feature of working with other humans in a complex environment. Attempts to eliminate it entirely — through surveillance, over-engineering, or totalizing process — create their own fragility. The goal isn’t perfection. It’s getting progressively better, asymptotically, while accepting that some friction is the price of working with other people.
But when a military doctrine from 1997 describes levels of activity collapsing under the pressure of increasing speed, and I recognize the same dynamic in how software teams struggle to coordinate in 2026, it’s hard to dismiss as coincidence. The Marines figured out that when the tempo outpaces the hierarchy, you need shared intent. You need people who understand the bigger picture well enough to act wisely on their own.
I suspect we need something similar. And I suspect it starts with paying attention to the people around us.
