Interview Question
Here’s an interview task I came up with about a year ago:
Interview Task
Time: 45 minutes
Tools: Any language, stack, or AI tooling you like.

Your startup is trying to disrupt the stale world of calculator web apps with something more collaborative, more modern, and more memorable.
Moments ago, the non-technical founder stepped on stage and told the audience: “After the lunch break, we’ll be demoing our new calculator—for shared use between multiple people.”
This is the first you’ve heard of it. They also promised it would be distinctive—but didn’t say how.
You have 45 minutes to build a showcaseable prototype.
That’s it. No setup. No starter code. Just that brief, delivered on a call, along with the knowledge that candidates were expected to build something and could use any tools they wished.
The soundboard
What happens next is predictable. Candidates start asking about what level of reality we’re operating on.
“Can I use AI tools?” “Can I paste this into ChatGPT?” “Should I do this locally or use Lovable or whatever?” “Do you want me to actually build something or just describe what I’d build?”
We could almost have used a soundboard. The answer was always the same: yeah, whatever you like. It’s your job to impress us within the bounds of the task.
The idea was to socially signal that we were happy for them to use AI - by giving them the explicit opportunity to ask. And then to watch what they did with that permission.
What we were looking for
We were hiring for AI engineering roles - in the sense swyx coined it: people who know how to use AI to get stuff done, not people who train models or push the frontier. Musicians rather than audio engineers or guitar builders.
We wanted to see product taste. High agency. People who take pride in product ownership and care about user experience - not just technical correctness.
The task is deliberately bounded and silly. A calculator. Collaborative. Distinctive. 45 minutes. It’s obviously a prototype situation, not an enterprise architecture decision. We wanted candidates to understand they were being asked to pull a cat out of a bag, not design a cathedral.
What we actually saw
We still measured technical confidence. We asked pointed questions where we thought someone had palmed things off. But if they could answer sensibly given the constraints - and the constraints were broad, since they were basically told to prototype rather than build something robust - we were okay with it.
The whole idea: we wanted them to look us in the eye and explain that they’re conscious of how AI can help, they know how it works, and they’re using it deliberately. The time restriction leaves no room to bullshit. How they impress us, within that constraint, is up to them.
The pattern we typically saw: candidates would scrape around trying to do things “properly” - the old way - before realising they had 10-15 minutes left. Then they’d bang it into a coding harness and optimise for the minimal number of bugs.
Every so often, you’d get someone who understood immediately. Get a minimum viable working version first, so you can collect your thoughts - and then brainstorm with LLMs for how to make it special. Real panache around questions of user experience and delight. Not just meeting a basic spec, but showing they genuinely care about what it feels like to use. Those were the people we were looking for.
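To make that "minimum viable version first" instinct concrete, here's a purely illustrative sketch of the kind of core a strong candidate might stand up in the first ten minutes: shared expression state that several users can edit, evaluated safely without `eval()`. Every name here is hypothetical - the task never prescribed an implementation.

```python
# Hypothetical sketch of a "collaborative calculator" MVP core.
# Not from the interview - just one plausible first-ten-minutes shape:
# get shared state working, then spend the remaining time on delight.
import ast
import operator

# Whitelist of arithmetic operators the calculator supports.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr: str) -> float:
    """Evaluate basic arithmetic via the ast module, not eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -walk(node.operand)
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

class SharedCalculator:
    """One in-memory expression that multiple users build together."""
    def __init__(self):
        self.expression = ""
        self.history = []  # (user, token) pairs - the "collaborative" hook

    def press(self, user: str, token: str) -> str:
        self.expression += token
        self.history.append((user, token))
        return self.expression

    def equals(self) -> float:
        result = safe_eval(self.expression)
        self.expression = ""
        return result

calc = SharedCalculator()
calc.press("alice", "2")
calc.press("bob", "+")
calc.press("alice", "3*4")
print(calc.equals())  # → 14
```

The point of getting something like this working early is exactly what the strong candidates understood: once the boring core runs, all the remaining time can go on what makes it distinctive.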
Why it doesn’t work anymore
In March 2025, this felt almost cavalier. Now you can one-shot it with any modern coding harness.
Maybe that’s fine for some roles. For a product manager, it might still work - we’re testing product ownership, not technical stretch. But for our team, it no longer gives anyone room to demonstrate ability.
The half-life of interview questions is getting shorter.
The bigger point
There’s a big difference between someone who knows how to code and someone who knows how to create a great product. You could argue the latter is a superset of the former - especially when you shrink team sizes sufficiently.
Hiring is becoming a major concern as tools and workflows shift. What we’re looking for keeps changing. The field is becoming less specialised. The boundaries are fuzzing up.
“Calculator” used to be a job title. It’s now absolutely still a skill - but it isn’t rarefied in quite the same way. It isn’t set apart from everything else.
I suspect “software engineer” is heading somewhere similar. Not disappearing - but becoming less distinct. More of a baseline capability, less of a specialism.
We’re still figuring out what to look for. But I’m pretty sure it’s closer to product ownership than technical depth. At least for now.