Is/Ought
The rule that you should say “fewer” for countable nouns and “less” for uncountable ones was invented by the grammarian Robert Baker in 1770. Before that, people had used them interchangeably for centuries. An observation about formal usage was hardened into a prescription about correctness. We do this constantly - not the philosopher’s is/ought exactly, but the same shape: sliding from describing how things are to insisting that’s how they ought to be.
I think this is happening right now with AI in the software lifecycle, and it’s going to bite a lot of teams.
Your beliefs about what the models can and can’t do are going stale faster than you realise. Even if you use them daily - especially if you’ve been using them for a while. There’s a new state-of-the-art every four to six weeks, and constant minor releases crossing thresholds nobody anticipated. You tried something in September; it didn’t work; you learned not to bother. Except now it works, and you’re not going to find out, because you’ve already filed it under “doesn’t work.”
The slide is subtle. “This is what the models can do” becomes “this is what the models should be able to do” and you don’t notice. Descriptive hardens into prescriptive. And then you’re the person confidently explaining why something won’t work while a newcomer just… does it.
This isn’t a flaw in experienced people - it’s how we all work. You learn something expansively, open to possibility, and then you compress it into routine so you can get things done. You have to. You can’t re-examine everything from first principles every time. But the compression is lossy, and the routine resists update. That’s fine when the world holds still. Right now, it doesn’t.
So how do you staff a team when everyone’s hard-won intuitions are actively getting in the way? I keep coming back to Winnie the Pooh characters as a frame.
Eeyore - The experienced pessimist. “It’ll never work. Have we even tested these fifteen edge cases?” Ultra-conservative, catalogues failure modes, has decades of scar tissue. Incredibly valuable. Keeps you from shipping garbage.
Tigger - The naive optimist. “Let’s try everything!” Wrong most of the time. Except when the models have moved on and the naive bet pays off. They don’t know what’s supposed to be impossible, so they try it anyway.
The dangerous place is the middle. Experienced, moderately optimistic. Knows the tools well. Has opinions. The problem is that those opinions are six months out of date and their holder doesn’t know it. Confidently wrong is worse than naively wrong, because the confident person doesn’t check.
The Tiggers try things the Eeyores would never approve. The Eeyores catch the failures the Tiggers would ship. That combination is robust. The middle gives you neither.
So if you’re building a small team right now - and teams should be small, Brooks’s law cuts harder than ever - consider a barbell. Don’t hire three people from the middle of the distribution. Get someone who’ll try things that seem stupid. Get someone who’ll tell you all the ways it’ll break. Let them argue.
The funhouse mirror point still holds: AI tools are stretching out the differences between users. Skill matters less than freshness of assumptions and willingness to be wrong. The models keep moving. The question is whether you’re updating fast enough to notice.