Too Early?
I’ve spent most of my working hours for the last three years managing context windows. What goes in. What shouldn’t. When to reset. When to start a new conversation. What pollutes the model’s thinking versus what it can tolerate. All in service of reducing human effort up front.

This is the game: context windows accumulate pollution. Wrong turns, failed attempts, hallucinations. As the session grows, the model drifts - it starts looking at the transcript instead of the files.
So you optimise. Make windows longer. Improve retrieval. Improve reasoning across the window - though attention cost scales quadratically with window length, we’re told. Or you compact: periodically summarise, supposedly keeping most of the semantic content. A 100-word summary of a 10,000-word conversation, hoping most of it was disposable. It’s never perfect. It’s noticeably not great.
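For concreteness, here’s a deliberately naive sketch of that compaction step. Everything in it is illustrative: `count_tokens` and `llm_summarise` stand in for whatever tokenizer and model call you actually use, and the threshold is arbitrary.

```python
# Naive compaction sketch (illustrative names, arbitrary threshold).
MAX_TOKENS = 100_000

def maybe_compact(messages, count_tokens, llm_summarise):
    """Replace a too-long transcript with a short, lossy summary."""
    if count_tokens(messages) <= MAX_TOKENS:
        return messages
    summary = llm_summarise(messages)  # e.g. ~100 words standing in for thousands
    return [{"role": "system",
             "content": "Summary of the conversation so far:\n" + summary}]
```

Whatever the summary drops is gone for good.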
This is the status quo.
Ralph
The Ralph pattern - named after Ralph Wiggum - proposes something different. State lives in files, not the chat. The loop is dumb. Context is fresh every iteration.
Don’t worry about the model’s ability to reason across the totality of what’s going on. Don’t worry about recall within the window. Outsource memory to the file system.
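A minimal sketch of the shape of the loop, under assumptions: the CLI name, file names and done-marker are stand-ins, not the canonical Ralph setup. The structural point is that durable state lives on disk and every iteration starts a fresh agent process with an empty context window.

```python
import subprocess
from pathlib import Path

AGENT_CMD = ["my-agent-cli", "--prompt-file", "PROMPT.md"]  # hypothetical CLI
PROGRESS = Path("progress.md")  # the prompt tells the agent to log progress here

def task_complete() -> bool:
    # The prompt asks the agent to write "DONE" to progress.md when finished.
    return PROGRESS.exists() and "DONE" in PROGRESS.read_text()

while not task_complete():
    # Fresh process, fresh context. The only memory carried across iterations
    # is whatever the previous run wrote to the file system.
    subprocess.run(AGENT_CMD, check=False)
```

The loop itself contains no intelligence; the prompt and the files do all the work.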
I’ll be honest: when I first heard about Ralph, I dismissed it. A while loop with verification? We’ve known for ages that LLMs in verification loops will ladder-climb to better answers. That’s not new.
But that isn’t actually the point.
Rejecting the Premise
Ralph is a break with the context window optimisation game entirely. Instead of managing the window - expanding it, compacting it, improving retrieval into it - you just throw it away. Fresh start each iteration. The file system remembers so the model doesn’t have to.
I don’t know if Ralph will take off. There’s a frothiness to the attention it’s getting, and as a late adopter of most things, I find that suspicious.
But this is the first time I’ve had a sense that the premises of the last two or three years of LLM usage are being seriously questioned. That’s not proof of a meaningful shift. But it’s probably a necessary condition for one.
I’d gotten so used to the status quo - in my bubble of people who were early to this - that I hadn’t considered the possibility. Maybe the skills we built up were band-aids on a bigger issue. Maybe you could just reject the premise.