Gateway Drug
A caveat before we start: the same word can be a pointer to completely different concepts. “AGI” is a particularly overloaded term - it points to many things depending on who’s using it. What follows is one thread through that space.
It starts innocently. You try GitHub Copilot one summer and realise that an awful lot of logic can be expressed in plain English as a block comment - even though that’s not how it’s intended to be used. You’re prompting before prompting is a thing.
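The pattern looked something like this - a hypothetical sketch, where the block comment carries the whole specification and the tool fills in the body (the function name and log format here are invented for illustration):

```python
# Hypothetical sketch: you write the comment, Copilot drafts the function.
from collections import Counter

# Read the log file at `path`, keep only lines containing "ERROR",
# and return a dict mapping hour of day to the number of errors.
def error_counts_by_hour(path):
    counts = Counter()
    with open(path) as f:
        for line in f:
            if "ERROR" in line:
                # assumes timestamps like "2021-07-04 13:05:22"
                hour = line.split()[1].split(":")[0]
                counts[hour] += 1
    return dict(counts)
```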
Then ChatGPT appears. You’re already aware of how useful it can be if you channel it correctly, so your interest is piqued. You find out about scaling laws. Before you know it, you’re listening to the Dwarkesh Patel podcast.
Cut forward a few months. You’re listening to conversations about the brain. Suddenly you know more than you ought to about the hippocampus.
Adam Marblestone’s recent episode is a good example: “How does the genome encode abstract reward functions?” It’s the kind of thing I’d never have encountered without AI as a gateway.
The anthropocentric anchor
Guillaume Verdon (also known as Beff Jezos) put it well on Lex Fridman’s podcast:
I dislike the term AGI, Artificial General Intelligence. I think it’s very anthropocentric that we call a human-like or human-level AI, Artificial General Intelligence… I’ve spent my career exploring notions of intelligence that no biological brain could achieve.
We’re going through a moment right now similar to when we went from Geocentrism to Heliocentrism - but for intelligence. We realized that human intelligence is just a point in a very large space of potential intelligences. It’s both humbling for humanity, and a bit scary, that we’re not at the center of this space.
I find this persuasive. I used to think of the things LLMs are good at - active listening, empathy, synthesis - as uniquely human. Now I see them as things humans instantiate pretty well, but are no longer even the best at, in most cases.
Large language models feel like a form of discovery - not just engineering, but uncovering something that was always possible.
And disembodied cognition forces you to ask what’s left for the brain.
Agency as filler
The most obvious way this comes out is people talking in terms of “agency.” As roon put it:
“Agency” is a bland bourgeois replacement for the high romantic language of “courage”. Too many priests and not enough warriors.
There’s something to this. “Courage” and “agency” - and Samo Burja’s “live player” - have the same reference but different senses: they all point to being an active force in the world, but with different connotations. Maybe that’s what’s left to fight over: the words we use for what feels essentially human, as the territory keeps shrinking.
In another sense - like “general” in “artificial general intelligence” - “agency” is filler for the things humans do that AI doesn’t yet do. The word shifts as the capabilities shift. What’s uniquely human keeps retreating.
The question becomes: is there a core that won’t retreat, or are we just watching the territory shrink until we find out nothing’s actually left?
I don’t know. But I didn’t expect Copilot to lead here.