The Absorption Gap
Technology deploys into institutions, not vacuums.
This is easy to forget when the technology is moving as fast as AI is right now. Every week brings a new capability demo, a new benchmark cleared, a new thing that wasn’t possible six months ago. But production is only half the story. The other half is absorption — the messy, slow, organizational work of actually restructuring around what’s now possible.
These two rates are diverging. AI capability is advancing faster than most organizations can reorganize to use it. The result is a growing gap between what’s technically feasible and what’s actually deployed. This is the absorption gap, and I think it’s the defining dynamic of this period.
The pattern isn’t new. The electric dynamo was commercially available in the 1880s. But factories didn’t just swap electric motors for steam engines and call it done — or rather, they did, and the productivity gains were marginal. It took roughly forty years for manufacturers to redesign their factories around the actual properties of electricity: distributed power, flexible layout, independent machine operation. The technology was ready long before the institutions were.
We’re in a similar moment. The question isn’t whether AI works. It does. The question is what happens in the gap between what it can do and what we actually do with it.
I’ve been trying to think about this through scenario planning — not to predict what happens, but to enumerate the mechanisms at work and see where they lead. Herman Kahn, the Cold War strategist who pioneered this approach, argued that the goal isn’t to be right. It’s to be usefully wrong in a structured way. I don’t know what happens next. But I can describe the paths I see.
The absorption gap generates at least four distinct scenarios.
The first is broad absorption. Organizations restructure rapidly, AI gains are realized widely, and the economy adjusts. This is the implicit assumption behind most optimistic AI commentary — “AI will transform everything.” It’s possible. It’s also historically unusual. The institutions that need to change have their own cycle times — political, structural, cognitive — that don’t accelerate just because the underlying technology does.
The second is concentrated gains. The gap between what’s possible and what’s deployed becomes a moat. A small number of actors — companies, teams, individuals — actually absorb AI into their workflows, restructure around it, and capture enormous value. Most organizations bolt AI onto existing processes without rethinking those processes. They get a marginal productivity bump and declare victory. The result is a widening distance between the few who transformed and the many who bolted on.
The third is a demand-side wall. AI accelerates production, but the human capacity to consume, evaluate, and act on that production doesn’t keep pace. We can generate more strategy documents, more content, more code, more analysis than ever before. But someone still has to read the strategy document and decide if it’s right. Someone still has to evaluate whether the code solves the actual problem. The decision cycle doesn’t compress just because the production cycle did.
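To make that asymmetry concrete, here’s a back-of-the-envelope sketch. The rates are invented for illustration; the point is the shape, not the numbers.

```python
# Hypothetical rates, chosen only to illustrate the demand-side wall.
produced_per_week = 40   # AI-accelerated output: drafts, analyses, pull requests
evaluated_per_week = 10  # human capacity to seriously review and decide

backlog = 0
for week in range(1, 9):
    backlog += produced_per_week - evaluated_per_week
    print(f"week {week}: unreviewed backlog = {backlog} items")
```

Doubling the production rate just steepens the line. The binding constraint is the evaluation rate, and nothing in the production stack touches it.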
The fourth is paradigm displacement, and it emerged from thinking about arms race dynamics. Competitive pressure drives massive investment into the current paradigm — GPUs, large language models, ever-bigger training runs. The logic is straightforward: invest or fall behind. But technology is recursive by nature. W. Brian Arthur’s work on technological evolution shows that technologies are built from other technologies — they branch and recombine, evolving along specific lineages. Any branch can be superseded when a better solution emerges downstream, and when it collapses, the empires built on it collapse with it. The arms race intensifies the eventual waste — the more you invest in a branch, the more you lose when it gets cut.
This fourth scenario sits in tension with the second. If concentrated gains reward aggressive investment in the current paradigm, but paradigm displacement punishes exactly that investment, which instinct do you follow? My sense is that they’re sequential. Concentration happens first. The few who invest aggressively capture value. Then the cycle turns. Carlota Perez maps a similar arc: an installation phase driven by speculative investment, followed by a crash, followed by a deployment phase where the technology actually embeds into society.
Nokia is the example I keep coming back to. They invested heavily in mobile research. They saw smartphones coming. They hired Clayton Christensen — the man who literally wrote The Innovator’s Dilemma — to consult on how to handle disruption. And they still got displaced by Apple and Android in roughly five years. The problem wasn’t informational. Nokia knew. The problem was structural. Knowing what’s coming and being able to reorganize around it are not the same thing.
That gap — between knowing and absorbing — is the same gap this entire post is about, just at the scale of a single company.
I don’t think these are competing predictions. The more I look, the more I think all four are playing out simultaneously, in different sectors and at different speeds. Which raises a question: what determines which one you get in a given context?
My intuition is that the primary variable is feedback loop tightness — how quickly you can evaluate whether the AI output is actually good.
In software development, some feedback loops are very tight. Does the code compile? Do the tests pass? Does the function return the right output? You know within seconds. AI-assisted coding has been absorbed relatively fast precisely because the evaluation is immediate and objective. You can tell if it’s working.
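To make “immediate and objective” concrete, here’s a minimal sketch. The slugify function is hypothetical, not drawn from any real project; what matters is that the checks run in milliseconds.

```python
def slugify(title: str) -> str:
    """Convert a title into a URL-safe slug (hypothetical example)."""
    return "-".join(title.lower().split())

# Whether a human or an AI wrote the function, these checks answer
# "does it work?" the moment they run:
assert slugify("The Absorption Gap") == "the-absorption-gap"
assert slugify("  broad   absorption ") == "broad-absorption"
print("all checks passed")
```

There’s no one-line assert for “was this the right market analysis.” That’s the difference the next paragraph is about.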
In strategy consulting, the loops are loose. Was this the right market analysis? Was the recommended direction correct? You might not know for a year. AI can generate plausible-sounding strategic analysis at extraordinary speed, but the speed of evaluation hasn’t changed. This is where you hit the demand-side wall — more output than you can meaningfully assess.
Within a single software project, there’s a gradient. “Does it compile?” is a tight loop. “Did we build the right thing?” is a loose one. The same team, working on the same product, experiences different absorption dynamics at different levels of the work. This is part of what makes the transition feel uneven even within a single organization.
There’s a related question: does the current paradigm plateau?
LLMs are probabilistic language models. Tokens in, tokens out. They’re remarkably capable within that frame, and the capabilities are expanding. But my intuition — and it is an intuition, not a certainty — is that they follow an S-curve rather than an unbounded exponential. Most technologies do. Rapid improvement, then diminishing returns, then a plateau.
There’s a credible counterargument. Scaling-law research suggests that model performance continues to improve predictably with more compute and data, and that we haven’t yet hit meaningful diminishing returns. People who hold this view see no plateau coming, at least not soon. I take the argument seriously. But the history of technology is full of S-curves that looked exponential in their steep phase. The fact that we’re on the steep part now doesn’t tell us how far the curve extends.
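Here’s a quick numerical sketch of that point, with arbitrary parameters rather than any real capability metric: a logistic curve and a pure exponential matched at the starting point are nearly indistinguishable through the steep phase, then diverge completely.

```python
import math

def s_curve(t, ceiling=100.0, rate=1.0, midpoint=6.0):
    """Logistic curve: exponential-looking early, flat at the ceiling."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def exponential(t, rate=1.0):
    """Pure exponential matched to the S-curve's value at t = 0."""
    return s_curve(0) * math.exp(rate * t)

for t in range(0, 11, 2):
    s, e = s_curve(t), exponential(t)
    print(f"t={t:2d}  s_curve={s:8.2f}  exponential={e:9.2f}  ratio={s/e:.2f}")
```

Through roughly the midpoint the ratio stays near 1.0 and the two curves are hard to tell apart; only afterward does the ceiling show. That’s exactly why being on the steep part tells you so little.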
If LLMs do follow an S-curve, scenario four becomes more relevant sooner. A plateauing paradigm creates stronger pressure for architectural alternatives. Ideas that have been dormant — and AI has no shortage of those, given that many foundational concepts sat unused for decades before conditions made them practical — could activate quickly once the current approach hits limits.
I don’t know which of these scenarios will dominate. I’m not sure that’s even the right question. They’re all operating at once, in different contexts, at different speeds.
The scenarios might be too neat. The feedback loop variable might be downstream of something more fundamental that I’m not seeing. But the alternative to structured thinking about the future isn’t no thinking — it’s unstructured thinking, driven by whichever narrative is loudest.
I’d rather be usefully wrong in a structured way.
