Last spring, running a terminal coding agent without stopping to check every tool call and code change was considered slightly heretical. The capability was hidden behind flags like --dangerously-skip-permissions and --yolo that weren’t always documented, or were added reluctantly when users kept asking for them.
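
For the unfamiliar, the invocations look roughly like this - the flag names are real, though whether to reach for them is exactly the judgment the rest of this piece is about:

    # Claude Code: skip the per-action permission prompts
    claude --dangerously-skip-permissions

    # Gemini CLI: auto-approve tool calls
    gemini --yolo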

Now it’s becoming standard. The bite point moved.

The shifting horizon

YOLO implies uncertainty. You only live once - there’s a kind of valour in charging ahead as if things were certain when they aren’t. In an enterprise setting this can send the wrong signals. But everything has tradeoffs, and the calculus changes when the risky thing stops failing.

For a while YOLO was contentious. Plenty of people used it but didn’t admit to it.

We’re now at a stage where the models are hyper-reliable at the level YOLO mode originally described - individual tool calls and code changes. I know people who review at the PR level rather than the commit level. Human intervention has been pushed up the stack. There’s still oversight, but it moved.

What happens when dangerous becomes safe?

The expression YOLO implies being a bit loose. But what happens when the dangerous things become safe? The label has to move to wherever the new horizon is.

It’s widely said that you want to build AI products for where the models will be in a year or eighteen months, so your product isn’t dead on arrival. But how do you work out where that will be?

Chase whatever feels like YOLO right now. That’s probably where to build. Look for what people do but won’t admit to.

The things that feel reckless today - the things hidden behind warning flags, the things people do but don’t admit to - those are the things that will be standard in a year. The bite point moves. If you can find it, you know where to aim off.