Five years ago, AI wasn’t part of how anyone wrote software. The tools varied, the languages varied, but you wrote the code yourself.

Now there’s a spectrum - and it’s getting wider.

The spectrum

Steve Yegge and Gene Kim, in their recent book on vibe coding, distinguish between inner-loop and outer-loop agentic coding. Inner loop is tight: quick suggestions, autocomplete, the AI filling in what you were about to type. Outer loop is loose: describing what you want and letting the agent figure out how to build it.

Different domains sit at different points on this spectrum. Some work is already fully outer-loop - developers shipping features by describing them to Claude Code, barely touching the keyboard. Other work might only tolerate inner-loop assistance, or none at all.

As the models improve, I expect the spectrum to widen further. More domains will accommodate outer-loop work. But some might remain stubbornly inner-loop, or resist AI entirely. The distances between viable practices are growing.

Where people sit

Andrej Karpathy, October 2025, on his nanochat project: he described it as “intellectually intense code” where “everything has to be very precisely arranged.” AI was “net unhelpful.” He called the output “slop.”

Andrej Karpathy, December 2025:

“I’ve never felt this much behind as a programmer. The profession is being dramatically refactored.”

Two months apart. Something shifted for him.

DHH, summer 2025:

“Not letting AI write any code directly.”

DHH, January 2026:

“Half the resistance was simply that the models weren’t good enough yet… That has now flipped. Working with opencode has been a blast. Watching the thinking models nail a difficult bug is a revelation.”

Both work in web and enterprise domains - the part of the spectrum where outer-loop coding has arrived.

The other end

George Hotz, June 2023:

“GPT is great for quick generic scripts, but not for complex tasks.”

His work on tinygrad is low-level, performance-critical, novel. The kind of work that might stay inner-loop for a long time, if it ever accommodates AI at all.

ThePrimeagen, March 2025:

AI “often gets it wrong, especially with larger, more complex codes… lacks deep understanding for problem-solving and design.”

He sees some utility. Not transformative.

Jonathan Blow, January 2026:

“Current AIs can’t code. It’s clear by now that everyone who thinks they can are not good programmers themselves, and/or only ever do trivial problems.”

Game engines. Low-level optimisation. The stubborn end of the spectrum.

The expanding universe

I wrote a few days ago about crossing a threshold. For some domains, that threshold has been crossed - outer-loop coding works. For others, it hasn’t. Maybe it will eventually. Maybe it won’t.

We’re also still in what I’ve called the horseless carriage phase - new mediums imitating old ones until they figure out what they actually are. We haven’t had decades to stabilise on the right workflows. New practices are appearing at the fringes and in between, faster than anyone can keep track of. The field is moving so quickly that viable approaches multiply while we’re still learning the previous ones.

I look at scaling laws and feel fairly confident the tide will rise to cover every domain. But maybe I’m wrong. Maybe some work is genuinely stubborn. The skeptics might be seeing real limits rather than temporary gaps.

Either way, the spectrum is widening. A year from now, the gap between fully vibe-coded web development and manually-written game engine code will be larger than it is today. The conversation will fragment further. Practices that seem obviously correct in one domain will seem naive or irrelevant in another.

That’s the expanding universe. Not everyone converging on the same tools, but the number of viable practices growing - pulled apart at the fringes while new ones appear in the spaces between.