What Software Is Actually For
I've been writing software for about forty years. That number still surprises me when I say it out loud. I started on a Commodore 64, graduated to an Apple IIe at school, hit a TRS-80 somewhere in the middle, and eventually arrived at the general-purpose personal computer in time to watch it become the thing everyone in the world carries in their pocket and mostly uses to look at pictures of food.
Forty years is enough time to have held a lot of opinions about software. Some of those opinions I no longer hold. A few I've held so long they've calcified into something I can't examine anymore. And a handful have evolved, slowly, into something I might tentatively call a philosophy — except I'm suspicious of software people with philosophies, so let's say a working model.
Here is my working model, arrived at after four decades: software is for reducing friction between a person and what they're trying to do.
That's it. That's the whole thing.
I realize that sounds obvious. Maybe it is obvious. But I've spent the better part of my career watching software add friction instead of remove it — through feature bloat, through abstractions that make simple things complicated, through interfaces designed to serve the product roadmap rather than the person using the product. So I hold onto the obvious thing, because the obvious thing is what people keep forgetting.
When I think about the software I'm most proud of having built or contributed to, it's almost always the software where the user friction vanished. RunPee is a weird example because it solves a problem that shouldn't exist — the movie is too long, you need to pee, you don't want to miss anything — but the friction it removes is real and the moment you remove it is real. Someone is sitting in a theater, uncomfortable, anxious, and then they're not. They know they have four minutes. They go.
That's it. That's the whole job.
I've worked on much more technically impressive things that did essentially nothing for the humans using them. Internal tools, enterprise systems, dashboards that measured KPIs nobody acted on. I've also worked on technically modest things — a form, a search box, a one-page utility — that saved people meaningful time every day for years. The correlation between technical impressiveness and actual usefulness has never been strong in my experience.
The hardest part of this model is that it requires you to think about the person, not the problem. Which sounds easy but isn't. The problem is legible; you can reason about it, spec it, estimate it, assign it. The person is fuzzy. They have context you don't have. They have workflows you didn't anticipate. They use your software wrong — meaning, they use it in the way that makes sense to them, which is different from the way you designed it, which means the design was wrong.
I've learned more about software from watching people use it than from any book or talk or conference. Watching someone look for a button that isn't where they expected it, watching the moment of slight confusion, watching them work around the thing I built in a way I didn't anticipate — that's data. That's the actual spec, after the fact. The question is whether you're paying attention.
Most of the time we're not paying attention. We ship the thing and move to the next thing. The person adapts. The friction becomes invisible because it's always been there. And then someone comes along three years later and moves the button and everyone is annoyed, because the wrong button was in the wrong place for long enough that it became the right button in the right place.
I want to say something about the current moment, because no software essay in 2026 is complete without acknowledging the AI-in-everything environment we're all navigating. I'll be brief.
The tools are genuinely useful. The friction-reduction potential is real — there are categories of tasks where a well-prompted LLM removes hours of work. I've used them and they've helped.
But I notice a trend in the demos and the announcements and the product launches that makes me uneasy: the emphasis on what the software can do, rather than what the person needs to do. Features added because they're technically possible. Assistants that proactively tell you things you didn't ask for. Intelligence layered onto interfaces that hadn't solved the basic problem yet.
Software can do a lot of things. That's never been the constraint. The constraint is understanding what the person actually needs and getting out of the way of everything else.
Here's a thing I've started doing, or trying to do: when I'm building something new, I write down the one sentence that describes what friction I'm removing and for whom. Not the feature list. Not the tech stack. One sentence, one friction, one person.
"Someone sitting in a dark theater needs to know if they can leave for two minutes without missing anything important."
If I can't write that sentence, I probably don't understand the problem well enough to start building. And if the thing I build doesn't reduce that friction — if it introduces new friction, or solves the wrong problem, or optimizes for the interesting technical challenge rather than the human outcome — then I've spent time and effort and, yes, someone else's money on something that doesn't matter.
That's the failure mode I care about most, after forty years. Not the crashes. Not the security bugs. Not the missed estimates. The failure mode where you build something real and complicated and it doesn't actually help anyone.
Software is for the person. That's the whole thing.