Bridge 77 on the Macclesfield Canal

On the Macclesfield Canal, there’s a type of structure called a roving bridge. When the towpath changes sides, the bridge lets the horse cross without being untethered from the barge. The horse doesn’t need to understand the bridge’s design. It just follows the path.

Yesterday, on SemiAnalysis’s Transistor Radio podcast, Doug O’Laughlin was talking about Claude Code with Opus 4.5. He’d been using it daily, hitting his usage limit every week. All the charts in his year-end outlook were made by Claude. “It one-shots everything,” he said. Then:

“It’s a skill issue now… for the first time in my life, there is nothing - it is completely on me. I deeply believe that.”

Task vs Purpose

In 2016, Geoffrey Hinton made a famous prediction:

“People should stop training radiologists now. It’s just completely obvious within five years deep learning is going to do better than radiologists.”

Hinton was right about the task. AI now powers essentially all radiology applications. But the number of radiologists increased.

Why? Jensen Huang, on No Priors the same day as Doug’s podcast, offered a framing:

“A job has tasks and has purpose. In the case of a radiologist, the task is to study scans. But the purpose is to diagnose disease.”

When radiologists could study more scans more deeply, hospitals became more productive. More patients, more revenue, more demand for radiologists. The task got automated. The purpose expanded.

Huang extended this to software engineers:

“The purpose of a software engineer is to solve known problems and to find new problems to solve. Coding is one of the tasks. If your purpose literally is coding - somebody tells you what to do, you code it - all right, maybe you’re gonna get replaced by the AI. But most of our software engineers, their goal is to solve problems.”

Building Bridges

At work, I’m on a skunk works team. We have a project called Briefcase - named after Will from The Inbetweeners - that uses git worktrees to parallelise small units of work across multiple LLM instances. Early on, it was chaos. Agents going off-piste, coming up with their own ideas, misunderstanding instructions. You could never get it to 100%.

So I wrote a standing “octopus” prompt in a local markdown file - a structured approach where a central LLM coordinates: creates worktrees, spawns worker instances, pulls branches back, runs tests. Deterministic scaffolding around unreliable agents.
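
For a sense of what that scaffolding looks like in code rather than prose, here’s a minimal sketch - not Briefcase itself. The worker command (`claude -p ...`), the branch naming, and the example tasks are illustrative assumptions; the deterministic part is just plain `git worktree` plumbing.

```python
# Minimal sketch of "octopus" scaffolding: deterministic git-worktree plumbing
# around non-deterministic worker agents. Worker command, branch names and
# tasks are illustrative, not the real Briefcase.
import subprocess
from pathlib import Path

REPO = Path(".").resolve()

def sh(*args, cwd=REPO):
    # Run a command and fail loudly - the deterministic part.
    subprocess.run(args, cwd=cwd, check=True)

def spawn_tentacle(name: str, task: str) -> subprocess.Popen:
    # Create an isolated worktree and launch a background worker agent in it.
    worktree = REPO.parent / f"briefcase-{name}"
    sh("git", "worktree", "add", str(worktree), "-b", f"briefcase/{name}")
    # "-p" is Claude Code's non-interactive mode; treat the exact flag as an assumption.
    return subprocess.Popen(["claude", "-p", task], cwd=worktree)

def gather(name: str) -> None:
    # Pull the finished branch back and run the tests before accepting it.
    sh("git", "merge", "--no-ff", f"briefcase/{name}")
    sh("pytest", "-q")
    sh("git", "worktree", "remove", str(REPO.parent / f"briefcase-{name}"))

if __name__ == "__main__":
    workers = {
        "auth": spawn_tentacle("auth", "Fix the flaky login test"),
        "docs": spawn_tentacle("docs", "Document the new CLI flags"),
    }
    for name, proc in workers.items():
        proc.wait()
        gather(name)
```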

Today I actually used it. I had two threads of work to parallelise. Claude Code’s first instinct was to dive in and start exploring the codebase - but I stopped it. “I wonder if you’re already stepping in to do work that tentacles could take on,” I said.

The response: “You’re right. I’m being a tentacle when I should be the octopus.”

It spun up two worktrees, launched background agents into each, and came back with a status table. One thread finished while we talked. The other was still running - bigger scope, still exploring.

The bridge worked. But here’s the thing: I didn’t need to prompt for any of it. The behaviour was already built into the tool. The elaborate prompt file was mostly pointing at capabilities that existed anyway.

This is bridge-building. The horse can’t understand how to cross the canal on its own, so you build an elegant structure that routes it where it needs to go. Except, apparently, it can now.

The Crystallisation Hierarchy

There’s an old idea from Alistair Cockburn about preferring richer communication channels. Paper is worse than email, email is worse than a call, a call is worse than being in the room together. Higher bandwidth, faster feedback, fewer misunderstandings.

I’ve started thinking about AI workflows the same way:

  1. Manual: Use the browser yourself. Click around. Understand what’s happening.
  2. Agentic: Have a coding agent use browser automation tools - Playwright MCP, Chrome DevTools MCP. It figures things out dynamically.
  3. Deterministic: Crystallise what matters into a script or test, as sketched below. If you liked it, then you shoulda put a test on it - the Beyoncé rule.
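
To make the third step concrete: here’s a minimal sketch of what crystallising a browser flow into a test can look like, using Playwright’s Python API. The login flow, URL and selectors are hypothetical placeholders - the point is that the flow a human or an agent once explored ad hoc gets pinned down deterministically.

```python
# Hypothetical crystallised check: a flow that once needed someone (or some
# agent) poking around the browser, pinned down as a deterministic test.
from playwright.sync_api import sync_playwright

def test_login_reaches_dashboard():
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://app.example.com/login")     # placeholder URL
        page.fill("#email", "test@example.com")        # placeholder selectors
        page.fill("#password", "correct-horse-battery-staple")
        page.click("button[type=submit]")
        page.wait_for_url("**/dashboard")              # if this breaks, you hear about it
        browser.close()
```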

Early on at work, I tried using LLMs to handle deployment tasks. It was okay-ish at first, but became untenable as things grew - the odds get worse as the surface area expands. The models would come up with their own ideas about how things could be better, or misunderstand instructions, and you could never get it to 100%. And it mattered.

So I wrote a script. Reliably wrong at first, therefore reliably right once fixed.
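
A sketch of the shape of that script, assuming a hypothetical build-and-push pipeline rather than my actual deployment: every step either succeeds or stops the run, with nothing left to improvisation.

```python
# Hypothetical deterministic deploy script: no improvisation, no "better ideas".
# Each step either succeeds or halts the whole deployment.
import subprocess
import sys

IMAGE = "registry.example.com/app:latest"    # placeholder image name

STEPS = [
    ["git", "diff", "--quiet"],              # refuse to deploy uncommitted changes
    ["pytest", "-q"],                        # the crystallised checks live here
    ["docker", "build", "-t", IMAGE, "."],
    ["docker", "push", IMAGE],
]

for step in STEPS:
    print("->", " ".join(step))
    if subprocess.run(step).returncode != 0:
        sys.exit(f"step failed: {' '.join(step)}")
```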

The temptation is to skip straight to agentic. Everyone’s excited about agents managing things dynamically. But you probably want to start with hands-on understanding before escalating - and that exploratory stage covers both humans clicking around and agents figuring things out ad hoc. Both are non-deterministic. Both are how you learn. And if something matters - if you’d be upset when it breaks - you shouldn’t skip the crystallisation step.

The Horse Is Getting Smarter

Here’s the tension, though. As models get better at tool calling and figuring things out on the fly, I’m becoming more comfortable doing things the agentic way. The elaborate scaffolding starts to feel unnecessary.

The roving bridge was a clever workaround for a tethered horse. You wouldn’t build it today. If the horse could just hop across the canal, the bridge would be a curiosity - beautiful, perhaps, but antiquated.

I think we’re in an awkward middle period. The “skill issue” framing is real: there’s genuine alpha in mastering the current tooling, building the bridges, understanding how to route the horse. But the half-life of that knowledge is shrinking.

The bridges I’m building now may be obsolete in weeks. The octopus prompt, the worktree orchestration, the crystallised scripts - all workarounds for limitations that are actively being eroded. The bitter lesson keeps proving itself.

Maybe the real skill isn’t building bridges. Maybe it’s knowing when to stop.