Ad Machinum
Argumentum ad hominem: attacking the person making an argument rather than the substance of the argument itself.
Someone submits some work. You find out they completed it using AI. How does that affect their credit for the work? Their culpability if it turns out not to be up to scratch?
I wrote about whether this is cheating before. But there’s a different angle here: whether the origin of an idea should affect how we evaluate it.
The weird middle ground
If you outsourced writing a personal email to a third party, that would be gross. But with models, you’re outsourcing to some weird mixture of Mechanical Turk and calculator. Different people have drastically different mental models around this. Some 45% of people think ChatGPT looks up exact answers from a database. Others understand it’s generative, but disagree wildly about whether the models are actually reasoning. The same output gets judged differently depending on what you think produced it.
Call it ad machinum: rejecting (or accepting) something because a machine made it, rather than evaluating the thing itself.
The compiler precedent
Early C compilers were buggy. They miscompiled code in ways that required painstaking debugging. You had to check their output, understand their quirks, and work around their limitations.
Most people don’t know this anymore. Compilers just work. We trust them. If a compiler produced subtly wrong code this year, would you reject its output out of hand next time? Would you chest-poke a developer for trusting it? Downgrade their work because it went through a compiler? Probably not. The hit rate is too high.
The question is going to get forced on us for language models quite soon. Opus 4.5, Gemini 3 Pro, and GPT 5.2 are all surprisingly good. We’re crossing the line where it’s obviously safer to use coding agents for certain tasks. Like commercial pilots using autopilot - once the automation is more reliable than the human at a given task, it stops being optional.
The other fallacy
Argumentum ad populum: accepting an argument because many people believe it, rather than because it’s true.
There’s an evil twin to ad machinum. Andrej Karpathy calls LLMs “people spirits” - they’re trained on human output, representing a kind of distribution over what people think. You could argue this makes them the wrong thing to ask if you want to be contrarian. If you want to do something great, you probably have to go against the crowd.
But reasoning models have largely solved this problem. We’re no longer constrained by what the model considers average. Yes, there’s sycophancy. But if you keep your head straight, models can follow you to new places and act as sparring partners to stress-test your ideas.
So: don’t reject model output because it came from a model. But don’t accept it uncritically either. That would just be ad populum with extra steps.
The phase shift
If we’re heading toward models that deserve to be considered AGI - and the question is starting to surface - I suspect the shift will be gradual. A sliding scale, with more people coming to agree over time. Along the way, the question of which entity produced the work becomes less relevant.
The social side is hardest to predict. We adjusted to trusting car drivers and pocket calculators. But those transitions took generations, and the outputs were less ambiguous. This is faster, and we’re negotiating trust with something that looks like it might be reasoning.
What tolerance for error do we have from something with such a high hit rate? The compilers won. The question is when the models will too.