Distribute the Bottleneck or Become It

AI didn't just make engineers faster. It made the handoff between deciding what to build and building it too expensive to keep around.

Husam Machlovi, Melanie Plaza · April 29, 2026 · 6 min read

The phrase "product engineer" has been having a moment for about three years. Long enough that everyone's heard it, short enough that nobody quite agrees on what it means. Some people use it to mean an engineer who cares about UX. Some use it to mean a generalist. Some use it as recruiter bait for "we don't want to hire a real PM yet."

We mean something more specific. The reason we mean it is that we keep watching the alternative stop working.

The short version: the engineering job is changing because the part of it that used to absorb most of the time, translating someone else's specifications into working software, is exactly the part AI is collapsing fastest. What's left is a different job. The person doing it has to make judgment calls that, in the old world, lived several rungs up the ladder.

Calling that person a "product engineer" is fine. The label is doing less work than the underlying shift.

The handoff is the thing that broke

For most of the last twenty years, the standard engineering org was built around a handoff. Someone (a PM, a designer, a senior engineer wearing one of those hats) figured out what to build. Someone else built it. The ladder rewarded the second group for getting better at the building part. Each rung was about more technical depth, more system complexity, more things you could implement well.

The handoff was expensive even before AI. It produced rework, telephone-game errors, and the specific genre of meeting where everyone leaves with a different idea of what got decided. But the handoff was justified by economics: building was the slow, expensive part, so you specialized people on it.

AI inverts that. Building gets cheap. Sometimes embarrassingly cheap. The bottleneck moves to deciding what to build, in what order, and whether what got built is any good. Those are judgment problems, not implementation problems. And they are exactly the problems the handoff was set up to solve elsewhere: by the PM, the architect, the senior reviewer, the client.

If you keep the handoff in place after the building gets cheap, you are paying full price for a coordination overhead the work no longer needs. Worse, you are keeping the most interesting work behind a door the IC is not supposed to open.

An older argument with new force

This isn't a new worldview at AE. For years, Melanie has been giving product-focused engineering talks at company retreats, with slide decks built around the heretical claim that the product is the point, not the code quality. The clearest way to disarm engineers who thought technical poetry was the job, it turned out, was to quote Uncle Bob back at them. Robert Martin, properly read, was never preaching craft for its own sake. He was preaching craft in service of outcomes: value, evolvability, the user, the system that has to still work in five years. Engineers who built shrines to clean code were usually quoting only half the argument.

That talk got a polite reception for a long time. The economic case for it wasn't strong yet: shipping fast and shipping clean were comparably expensive in most cases. You could insist on outcome-orientation as a values claim. You couldn't insist on it as a cost claim.

AI changed the cost claim. Implementation got cheap. The marginal value of the next refactor stayed about the same; the marginal value of the next judgment call went up sharply. The argument we used to make on principle is now one the spreadsheet makes on its own.

What the role looks like at full altitude

The end state we are building toward looks something like this. The engineer works directly with the client, not at arm's length through a PM. They listen for the real problem underneath the requested feature. They form their own view about what should be built, why, and in what order, and they argue for that view in the room. They build it (with substantial AI leverage) and ship something visible to the client every week. When something is about to go wrong, they say so before it happens. They write their own tickets, or skip them entirely because they no longer need the overhead. They are accountable for the outcome, not the activity.

What is different about that description, compared to a senior engineer ten years ago, is not the technical work. It is that judgment and execution live in the same person. The old org chart kept those in different rows of the same table. The product engineer is the row where they merge.

Why the traditional ladder can't get you there

If you're at a company that still defines engineering levels primarily by technical depth (bigger systems, harder problems, more design docs), this is the part that should make you slightly uncomfortable.

The traditional ladder is a depth ladder. Every promotion is about going deeper into the craft of implementation. That ladder still works for the small number of roles where the technical problem genuinely is the bottleneck: distributed systems, ML infra, security, novel architectures. For most product engineering roles, it doesn't, and pretending it does produces a specific failure mode. You promote people for being excellent at the part of the job that's getting automated, and then wonder why your senior engineers are bottlenecked on decisions a junior PM could make in their sleep.

The newer ladder is a judgment ladder. Each rung is about handling more ambiguity, owning more outcomes, and being trusted with bigger decisions about what to build, not just how. Technical depth still matters. It is the floor, not the ceiling. But the differentiator at the top isn't who can build the hardest thing. It's who can decide the right thing to build, defend it, ship it, and tell you whether it worked.

This change is genuinely hard to manage. It rewires how you hire, how you promote, how you compensate, and what you ask of people in interviews. Most companies haven't done that work, which is why most companies cannot just promote their way into product engineers. The old criteria don't reach.

When execution speeds up, the bottleneck moves

There is a version of this argument that frames product engineering as a humane response to AI: the engineers got faster, so let's find them more interesting things to do. That framing is true but incomplete. It treats the role change as a courtesy. It isn't.

The real reason is structural. When you make execution faster, you don't eliminate bottlenecks. You move them. Work that used to queue behind "we need engineers to ship this" now queues behind "we need someone to decide what to ship, in what order, and whether it's right." Swap one bottleneck for a smaller version of itself and you've improved nothing. Distribute the new bottleneck to the people closest to the work, the people who now have capacity to take it on, and the work runs in parallel instead of single-file.

That distribution is only viable because of a second shift. AI is famously a jagged frontier: strong in some places, surprisingly weak in others. So are people. The pattern that keeps surprising us is what happens when a jagged person uses a jagged tool. They don't stay jagged in the same places. AI fills in the IC's weak spots in writing, in synthesis, in initial strategy framing, in client-facing communication. The IC keeps their depth and gains breadth they didn't have a year ago. A team of T-shaped people who used to need a layer above them to translate, decide, and coordinate increasingly doesn't.

The throughput gain from AI is real, but it isn't the main event. The main event is that the coordination layer becomes optional for more decisions than it used to be. Which means the question isn't whether to find work for your engineers. The question is whether you let them hold the work that's already piling up two desks over.

Where we actually are

We don't have a building full of fully realized product engineers yet. Nobody does. What we have is a spectrum.

Some of our ICs are most of the way there. They run their own client conversations, push back on assumptions, write the tickets nobody else thought to write, and ship work the client didn't know to ask for but immediately recognizes as right. Others are partway: strong on the building, still finding their voice on diagnosis, still learning when and how to say the hard thing. Others are earlier still: excellent technical work, but waiting for someone else to define what excellent technical work means in this context.

This is not a complaint. It is a description of what a transition looks like. The job changed. The people in it are catching up to the new shape of it at different rates, and a lot of what they're being asked to do (make recommendations, run client demos, push back on requests, name root causes out loud) was explicitly not part of the old job description. You can't expect someone to have years of practice at a thing nobody asked them to do until last year.

What we are explicit about is the direction. We tell people that the goal is the version of the role described above. We coach toward it. And because the old ladder doesn't reach, we built a deliberate development path around the specific sub-skills that separate the people who are most of the way there from the people still on their way.

Why this matters before you hire anyone

If you are a leader thinking about which AI consultancy to work with, or which engineering org to hire into, or how to redesign your own team, the product engineer question shows up earlier than you'd expect.

It shows up the first time you give an engineer an ambiguous problem and watch what they do. Do they ask three sharper questions and then propose a path? Or do they wait for a spec? It shows up in the demo, when something doesn't work and the engineer either says "the requirements were unclear" or says "here's what I think we should do differently." It shows up in the ticket queue, in whether anyone is writing tickets that wouldn't otherwise have gotten written.

The companies that spend 2026 figuring this out will spend 2027 hiring from the companies that didn't.

software development · product development
