Show, Don't Tell

LLMs are unreasonably good at associative thinking. Almost nobody is talking about it.
The discovery
I’m a particular kind of frugal. Happy to spend money where there’s a payoff, but I think about cost efficiency over decades rather than skimping on individual purchases. Prosumer territory - the products respected but not favoured by professionals, because professionals are paid to worry about the final 5%, which I never want to care about.
This can become pathological. There’s a risk of getting caught in an optimization loop, hunting for failure modes and trade-offs, moving in circles because no product is perfect. So I’m mindful of when I let myself go deep. Most categories, I satisfice with the first good-enough option. But when the category matters - something professional, something that affects quality of life - I double down.
Recently I needed something in that second category.
My process: talk to an LLM, present the problem, get suggestions, chase questions about each suggestion until I find a caveat, pose the caveat back, repeat. Two hours later I’d convinced myself three separate products were correct before finding the failure mode in each. The fourth stuck.
Good decision. Bought it.
Then I had a thought: the model knew nothing about me other than my immediate problem.
What if I’d just shown it examples of past purchases - my Withings ScanWatch, my Kamei roof box, my Makita power tools - without explaining why I’d bought them?
So I tried it. New chat, no history. Shared the need. Listed some past purchases without commentary. Asked for one recommendation.
It immediately suggested the product I’d reached after two hours of iteration.
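If you want to try the same thing programmatically, here’s roughly what the second prompt amounted to - a minimal sketch assuming the OpenAI Python client, with a placeholder model name and a placeholder for the actual need; it’s an illustration of the pattern, not a transcript of my chat.

```python
# Minimal sketch of the "show, don't tell" prompt: state the need,
# list past purchases with no commentary, ask for one recommendation.
# Assumes the OpenAI Python client; model name and NEED are placeholders.
from openai import OpenAI

client = OpenAI()

NEED = "I need a <product in the category that matters>."  # placeholder

prompt = (
    f"{NEED}\n"
    "Past purchases: Withings ScanWatch, Kamei roof box, Makita power tools.\n"
    "Recommend one product."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

No explanation of why those purchases, no stated preferences - the list itself carries the pattern.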
What’s happening here
The model inferred my decision-making style from examples I never explained. It drew out a pattern I hadn’t articulated - the “premium mediocre” territory Venkatesh Rao wrote about, products respected but not favoured by professionals, the bite point between cost and value.
I know humans can do this. It feels like a very human thing. But most people I know would struggle with it - partly because it requires staying up to date on products, partly because it requires depth across unrelated domains.
The models have that depth. And they’re unreasonably good at drawing parallels across it.
Why this matters
The obvious application is marketing - anticipate good decisions and present them before people get lost in two-hour loops. Fine.
But I think that’s thinking too small.
What if a model proactively chased questions that weren’t being asked? What if it suggested to an aging martial artist with joint issues that they consider cycling - even though it seems like a category error - because the concerns they’ve raised are actually addressed well by that unexplored territory?
This connects to the golden retriever problem. You don’t necessarily want a model that accommodates you. You might want one that’s a little obstructive - like friends who shake you by the shoulders when you’re missing the point.
The diva with the entourage. The dictator surrounded by yes-men. I wonder if there’s a shade of this in what’s been called “LLM psychosis” - people drifting into strange places when they only talk to systems designed to agree with them.
Maybe one antidote is models that are better at associative thinking - models that can raise the idea that what seems like a category error might actually be the point.
Still figuring it out
I’m still working out the full implications - things are moving fast. But the core observation feels solid: LLMs have a latent talent for drawing parallels across domains, and we’re not talking about it enough.
Show, don’t tell. It works on them too.