You Just Get Used to It
“Young man, in mathematics you don’t understand things. You just get used to them.”
— John von Neumann
I don’t know how penicillin works. I don’t know why brushing my teeth twice a day is better than once, but six times would be overkill. I couldn’t tell you how a light switch actually works beyond “electricity goes through it.” Most people couldn’t.
This is normal. Most knowledge is testimonial. We trust experts because we have to - there isn’t time to research everything from first principles. And the experts seem trustworthy enough, based on the ones I know personally. Sometimes they’re wrong - the US food pyramid was probably always wrong, and historical advice on fat and sugar may have been lobbied into existence. But in the general case, trusting testimony works.
Then along comes AI.
The black box
A neural net isn’t built so much as grown. You have an algorithm and training data, you scale things up, and intelligence starts to emerge. We don’t actively teach models how to do anything - even in reinforcement learning, we’re rewarding behaviour rather than encoding it.
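To make “grown, not built” concrete, here’s a toy sketch in plain Python - no AI library, and the target rule, learning rate and loop counts are all arbitrary choices for illustration. The point is that the rule the program ends up following never appears anywhere in the code; it falls out of examples plus an update rule.

```python
# Toy sketch of "grown, not built": nobody writes the rule y = 2x + 1 below.
# The program only sees example pairs and nudges two numbers (w, b) until its
# guesses stop being wrong. The behaviour emerges from data plus an update
# rule, not from an explicit instruction. (Target rule, learning rate and
# loop counts are made-up values for illustration.)

examples = [(x, 2 * x + 1) for x in range(-5, 6)]  # the hidden "skill" to pick up

w, b = 0.0, 0.0          # start knowing nothing
learning_rate = 0.01

for _ in range(2000):                    # repeat: guess, measure error, adjust
    for x, target in examples:
        guess = w * x + b
        error = guess - target
        w -= learning_rate * error * x   # nudge the weights against the error
        b -= learning_rate * error

print(f"learned w={w:.2f}, b={b:.2f}")   # ends up near w=2, b=1
```

Swap the two numbers for billions of weights and the toy examples for the internet, and you get something whose behaviour nobody wrote down anywhere - which is roughly why nobody can point to the line that went wrong.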
This makes the tools opaque in a way that’s hard to reconcile with normal expectations around responsibility. There’s a scene in Silicon Valley where Gilfoyle’s AI decides the most efficient way to fix bugs is to delete all the software. Richard’s response: “Just write code like a normal human fucking being, please.”
The joke lands because the person who proposes a tool is supposed to answer for what it does. When that tool is a black box, the normal social contract around ownership breaks down.
Dogs
Frieda is my Australian shepherd. I’ve watched her eat chicken thighs - it’s pretty disconcerting. She cracks the bones with her molars and swallows the whole thing, tail wagging. You realise: oh, she isn’t one of us.
Dogs have co-evolved with humans. The ones around today are descended from wolves that didn’t attack children, that showed restraint around food and aggression. They’ve developed human-oriented traits - the face, the eyebrows, the eye contact, the visible excitement about the ball. To quote Rory Sutherland: a flower is a weed with a marketing department.
But bits of their biology aren’t what you’d expect from the surface. Dogs aren’t fundamentally alien - lumping them in with genuinely foreign creatures would be throwing the baby out with the bathwater. They’re just different in ways you don’t get from looking at them.
AI is similar. The transformer architecture gives us something a bit unfamiliar underneath, wrapped in the most human interface imaginable - language. Bits of the mask are more human than most humans. I’ve written before about how well the models handle emotional conversations. But other things don’t seem quite right. I can tell when I’ve slipped from a reasoning model to a standard one - something isn’t there, like when the Terminator’s face gets damaged and you glimpse the metal underneath.
The higher bar
Self-driving cars now have fewer accidents per 100,000 miles than humans. But the bar for trust seems higher than the metrics suggest it should be.
Some of this is reasonable caution - the data is a map, and the territory is what happens when these things are fully embedded among us. But some of it is mammalian heuristics. Our instincts for risk were calibrated over hundreds of thousands of years, against threats that behave in recognisable ways. AI doesn’t follow the same patterns. Our intuitions are miscalibrated.
We might still be struggling to trust the models well past the point where we should have started.
The metaphor overhang
Film took decades to stabilise as a medium. Early TV was stage plays with cameras. We don’t have decades. The models are improving faster than we can develop the right metaphors and form factors for them.
There’s Louis CK’s bit about plane wifi: a guy’s wifi cuts out and he says “this is bullshit” - about something he learned existed ten seconds ago. “It’s going to SPACE! Can you give it a second to get back from space?”
We’ve done something similar with AI. Basically all science fiction has human-like AI appearing at the end of the tech tree. We got it right after air fryers, in the grand scheme of things. Robots were supposed to be robotic. We skipped that expectation entirely, and now if we really dig, we can find nuances where things don’t seem quite right.
Responsibility
Here’s the thing about testimony: you trust humans partly because of an implicit agreement around responsibility. If an expert is wrong, they bear some of the cost - to their reputation, their career, their conscience.
AI subverts this. We don’t attribute agency to it. The person assigned the task is still responsible when things go wrong.
There’s an interesting edge case here. If a JavaScript compiler had a bug, you wouldn’t blame the web developer - you’d expect them to work around it, but you wouldn’t fire them for not knowing about a problem in code they never wrote. We might be heading towards a world where something similar is true of AI - where it’s taken for granted that people have climbed far enough up the abstraction ladder that there are things they can’t anticipate. We’re not there yet. But you can see it’s already happened with prior tech.
Getting used to it
Von Neumann’s point about mathematics might apply here. Maybe the question isn’t whether AI will earn our trust through better metrics or more familiar form factors. Maybe we just need to get used to it.
Dogs earned their place through millennia of co-evolution. The models are doing it in years. The human-facing traits are already good - better than good. The unexpected bits will keep showing through. But the gap between what the models can do and what we’re comfortable trusting them with will close, probably faster than our intuitions suggest.
The question is whether we’ll update our heuristics in time, or keep applying mammalian instincts to something that isn’t a mammal.
