
You can describe an app in plain English and have a working prototype before your coffee goes cold. That's not hype. I do it regularly. I've spent the past year working AI into every stage of how I build software, from first conversation to deployment, and it's made me faster than I've ever been. What used to take a week now takes a day. Sometimes less.
I'm not a sceptic. I'm a believer with receipts.
But faster isn't the same as better. And the gap between "it works in the demo" and "it works in production" is just as wide as it ever was.
The Floor Dropped. The Ceiling Didn't.
AI has genuinely collapsed the barrier to getting something on screen. People who couldn't code before can now build working prototypes. Boilerplate that ate half your morning appears in seconds. This is a real shift and a good one.
But getting something on screen was never the hard part.
The hard part is what comes next. Real users, real data, edge cases nobody anticipated, scale nobody planned for, and the slow grind of requirements changing over time. Good software isn't code that runs. It's code that keeps running six months later when three people are working on it and the brief has changed twice.
AI hasn't changed that. It's changed how fast you reach the starting line. What you do from there still depends on what you know.
The 80/20
Here's what I've learned from going deep on this: AI is a multiplier, and multipliers amplify whatever's already there.
AI generates the first 80% at extraordinary speed. The last 20%, the bit that turns a prototype into something you'd actually put in front of users, still needs someone who knows what good looks like. Someone who's been burned by the specific ways software fails when it meets reality.
If you've got years of knowing which patterns hold up, which shortcuts backfire, and which questions to ask before writing a line of code, AI multiplies all of that. You move faster and in the right direction.
If you haven't got that foundation, AI multiplies the chaos. More code, more quickly, more confidence, less idea whether any of it is right.
The Stuff That Isn't Prompt-Shaped
Not too long ago, a client "remembered," three sessions into the project, that their process depended heavily on a spreadsheet. Manually updated each week. Everything else happened downstream of that file.
It was experience that gave me the hunch some dots weren't joining up. Patterns in what they said, gaps in what they didn't. No prompt was going to surface that. What would it do? Ignore the gaps? Invent a solution? Hallucinate a workflow that sounds plausible but doesn't match reality?
That's not an AI failure. It's a human problem. Asking the right questions, reading between the lines, knowing when the brief doesn't match the building. AI is brilliant at the work that follows. Someone still has to figure out what the work is.
This is what I do. I work with businesses to build and modernise their software, often bringing older systems into the AI era. The technical bit is table stakes now. The harder, more valuable bit is understanding the problem well enough to point the tools in the right direction.
Where This Leaves Us
The gap now isn't between people who use AI and people who don't. That distinction is already fading. It's between people who can steer it and people who can't tell whether it's heading somewhere useful.
AI makes it trivially easy to produce code. It doesn't make it any easier to produce good software. Those are different things, and the space between them is where the interesting work lives.
The floor dropped. The ceiling didn't move. And honestly, that's fine. The ceiling is where it was always going to get interesting.
I'm Steve. I help businesses build and modernise software, with AI baked into every part of the process. I write here about what that actually involves.