The Quiet Ratchet
A colleague mentioned they’d been surprised by their own code from a few months ago. It wasn’t up to the standard they’d come to expect of themselves. The strange part: nothing about their explicit beliefs had changed. They hadn’t read a book or adopted a new methodology. The code just looked worse than it used to.
This is how standards work. They’re not things you consciously form. They’re things that happen to you.
The new workplace effect
You arrive at a new job. The codebase is cleaner than what you’re used to. The tests are more thorough. The PR reviews are more exacting. You don’t sit down and decide to up your game. You just… do. The environment sets the bar and you rise to meet it.
Look back at your work from before you joined and it feels like someone else wrote it. In a sense, someone else did. You’re a product of your context.
What’s changing
Right now we’re watching LLMs code at a level above where many of us typically operated. We review their output. We play arbiter of taste. And in doing so, we’re being recalibrated.
In Artifice, I quoted G Fodor:
“The average programmer vastly overestimates their ability to avoid stupid mistakes or waste time on things that don’t matter. Arguably the bulk of work in the field has been entirely about preventing human programmers from doing things AI programmers won’t.”
If humans were the chaos all along, then watching AI code well is teaching us what “good” actually looks like. Our priors are shifting. The ratchet turns quietly.
The lag
The colleague’s reaction is early evidence. Their standards moved before their self-image caught up. They expected their old code to still feel fine, and it didn’t.
This will happen more. We’ll look at code we were proud of six months ago and wonder what we were thinking. Not because we learned something new. Because we watched something better.