
Writing code is no longer the bottleneck


Block just laid off over 4,000 people — nearly half the company — while reporting $2.87 billion in quarterly gross profit, up 24% year-over-year. Jack Dorsey didn’t blame a downturn. He blamed a shift. “Intelligence tools have changed what it means to build and run a company. A significantly smaller team, using the tools we’re building, can do more and do it better.”

In a recent video, Theo Browne (T3) unpacks what that shift actually looks like from the ground floor — not as a CEO writing a memo, but as someone building products every day with AI tools and watching the economics of software development change in real time. His central argument is simple and uncomfortable: writing code used to be the most expensive part of building software. It isn’t anymore. And everything downstream is broken because of it.

The funnel collapsed in the middle

Theo lays out the traditional development pipeline as a funnel. At the top: every user problem. At the bottom: shipped solutions. In between, a series of narrowing steps — describe the problem, identify a solution, scope the work, write the code, review it, test it, release it. Each step filtered out more work, and the most expensive filter was writing code. It was too costly to waste, so companies invested heavily in everything above it to make sure only the right work made it to engineering.

That filter is gone. You can now take a screenshot of a user’s bug report, paste it into an AI agent, and skip straight from problem to code. Theo describes doing exactly this — kicking off background agents against user reports and ending up with hundreds of pull requests just sitting there, because everything below the code-writing step didn’t get any easier.
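The loop Theo describes, a user report in and a pull request out, can be sketched in a few lines. This is a hypothetical illustration: `UserReport`, `dispatch_agent`, and the branch-naming scheme are all invented for the sketch, and a real version would call your agent's API or CLI where the placeholder sits.

```python
# Hypothetical sketch of the "report -> agent -> PR" loop.
# `dispatch_agent` stands in for whatever coding agent you use;
# its name and signature are invented for illustration.
from dataclasses import dataclass


@dataclass
class UserReport:
    id: int
    summary: str
    screenshot_path: str


def dispatch_agent(report: UserReport) -> str:
    """Placeholder: hand the report to a background coding agent and
    return the branch it pushed. A real implementation would invoke
    your agent's API or CLI here."""
    return f"agent/fix-report-{report.id}"


def triage(reports: list[UserReport]) -> list[str]:
    # Every report becomes a branch with near-zero human effort,
    # which is exactly how you end up with hundreds of open PRs.
    return [dispatch_agent(r) for r in reports]


branches = triage([
    UserReport(101, "Login button unresponsive", "bug101.png"),
    UserReport(102, "Upload stalls at 99%", "bug102.png"),
])
print(branches)  # → ['agent/fix-report-101', 'agent/fix-report-102']
```

The uncomfortable part is visible in the shape of the loop: nothing below `dispatch_agent` got cheaper, so every branch it returns still needs review, testing, and a release.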

The funnel doesn’t narrow at “write code” anymore. It narrows at “review code,” “test code,” and “ship code.” And if your team has more people who need to approve things, the funnel narrows even further. More engineers now means more PRs in review, more sign-offs required, and slower shipping.

Smaller teams, faster ships

The evidence for this is moving beyond anecdote. Theo built Lawn, an open-source alternative to Frame.io — a product category Adobe valued at $1.275 billion when they acquired it in 2021. He built it in two weeks, part-time, without writing a single line of code. He describes structuring APIs and application logic by telling the agent how things should work, watching it write a proposal, approving it, and moving on. The result isn’t slop — it’s a product his team uses daily that he says genuinely feels better than the thing it replaced.

Cloudflare’s Vinext project tells a similar story at a different scale. A single engineering manager, Steve Faulkner, rebuilt 94% of the Next.js public API surface in roughly one week using Claude Code. The cost: about $1,100 in API tokens. The output: 1,700+ unit tests, 380+ end-to-end tests, 4.4x faster builds, and 57% smaller bundles. The project README is refreshingly honest — “humans direct architecture, priorities, and design decisions, but have not reviewed most of the code line-by-line.”

OpenAI documented something similar internally. Their Harness Engineering experiment put three engineers on a project using Codex exclusively. They produced roughly one million lines of code in five months, averaging 3.5 PRs per engineer per day, with zero manually written lines. OpenAI estimates it was built in about a tenth of the time manual coding would have required.

The mythical man-month isn’t just validated — it’s amplified. Wes McKinney argues that AI agents are “probably the most powerful tool ever created to tackle accidental complexity,” but essential complexity — the genuinely hard design problems — remains unchanged. Brooks’ 1975 insight that small, focused teams build better software than large ones has never been more relevant.

The new bottleneck is everything around the code

The data backs up what Theo describes anecdotally. Faros AI’s productivity research across 10,000+ developers found that PR volume is up 98% — but PR review time is up 91%. PR sizes are 154% larger when AI is involved. And at the organizational level, DORA metrics — deployment frequency, lead time, mean time to restore, change failure rate — show no correlation with AI adoption. Code is being generated faster. It isn’t shipping faster.

AWS’s enterprise strategy team warned explicitly that AI coding assistants will overwhelm delivery pipelines built for lower volumes. The 2025 DORA report found that 90% of developers now use AI daily, but code review time has nearly doubled as PR volume surges. The bottleneck migrated downstream, and most organizations haven’t restructured to handle it.

There’s a deeper irony here. The METR randomized controlled trial — one of the most rigorous studies on AI-assisted development — found that experienced open-source developers were actually 19% slower when using AI tools, despite believing they were 24% faster. The perception-reality gap is striking. Developers feel more productive. The stopwatch disagrees. The likely explanation: AI compresses the typing but adds cognitive overhead in review, context-switching, and course-correction that isn’t accounted for.

Why big companies can’t keep up

Theo tells a pointed story about trying to use Gemini 3.1 Pro in GitHub’s Copilot CLI. He tagged a senior Microsoft employee, who looped in two more engineers. The response: “This model is only available in VS Code and not in the Copilot CLI. We are working on it, but not an easy thing.” Adding one model to a CLI — functionally one line of configuration — blocked by organizational complexity. Theo’s three-person team would have shipped it in minutes.

This is Dorsey’s thesis in microcosm. He had two options: cut gradually over months as the shift played out, or act decisively. Gradual cuts — taking a team from ten to eight — don’t change process. They just reshuffle assignments. Cutting to two forces rethinking everything: how work is scoped, who approves what, how releases happen, what cadence you ship at.

Wall Street agrees. Block’s stock surged 24% on the announcement. Axios noted that the market reaction may give other CEOs “permission, or even an incentive, to consider the same thing.” Dorsey himself predicted that “within the next year, the majority of companies will reach the same conclusion and make similar structural changes.”

The skeptics aren’t wrong to push back. Block tripled headcount during the COVID hiring binge — from 3,800 in 2019 to over 10,000 — and some of this is clearly a correction. Ethan Mollick at Wharton noted that “it is hard to imagine a firm-wide sudden 50%+ efficiency gain that justifies massive organizational cuts.” Sam Altman himself warned that some companies are “AI-washing” — using AI as cover for layoffs driven by other factors. But the structural argument — that large teams slow down shipping in a world where code generation is cheap — holds whether the layoffs were purely AI-motivated or not.

What actually matters now

Theo’s advice for developers is blunt: start talking to your users. Start getting involved in the release process. Start automating your own job, because through automation you’ll discover everything around it — and that’s where the value is shifting.

The developer role is moving from author to conductor. The skills that matter most now are the ones most developers have historically avoided: turning user problems into clear, actionable plans. Thorough QA and testing. Understanding your customers well enough to know when the AI got it wrong. Owning the release process end-to-end rather than throwing code over the wall.

Even Andrej Karpathy — who coined “vibe coding” in February 2025 and watched the term accumulate 4.5 million views — has moved on from the concept. He now calls it “passé,” replaced by “agentic engineering,” and hand-coded his latest project after finding that agents “just didn’t work well enough.” The point isn’t that AI coding tools are a fad. It’s that the craft is shifting from generating code to directing and verifying it.

Theo puts it sharply: “If the agent knows more about your customers than you do, you don’t have a job anymore.”

Practical takeaways

  • Invest in testing infrastructure. Tests are the one unambiguous feedback mechanism for AI-generated code. A failing test tells the agent exactly what’s wrong. Without tests, you’re reviewing every line manually — and the data shows that doesn’t scale.
  • Talk to your users. The ability to translate user problems directly into agent-ready specifications is becoming the core developer skill. You can’t do that if you don’t know your users.
  • Learn to review AI-generated code critically. CodeRabbit’s analysis of 470 pull requests found AI co-authored code contained 1.7x more major issues and 2.74x higher security vulnerability rates. The code gets written either way. Your job is catching what’s wrong with it.
  • Rethink team structure around ownership, not headcount. Theo and his co-developer take turns owning main — one person ships all day while the other works on other things. That model outpaces teams of twenty with approval chains and code ownership boundaries.
  • Treat slow-moving incumbents as opportunity. Any company that hasn’t rethought its engineering process by the end of this year is a company whose product a two-person team can outship. Theo rebuilt Frame.io in two weeks. Cloudflare rebuilt Next.js in one. The window is open.
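To make the first takeaway concrete, here is the kind of unambiguous feedback a test gives an agent. The function and values are invented for illustration; the point is that a failing assertion carries the expected value, the actual value, and the exact call that produced them, so the agent doesn't have to guess what "broken" means.

```python
# Invented example: a precise test is a machine-readable spec.

def apply_discount(price: float, percent: float) -> float:
    """Return price after a percentage discount, rounded to cents."""
    return round(price * (1 - percent / 100), 2)


def test_apply_discount():
    # If an agent's edit breaks this, the failure names the exact
    # inputs and expected outputs, no human interpretation needed.
    assert apply_discount(200.0, 15) == 170.0
    assert apply_discount(99.99, 0) == 99.99


test_apply_discount()
print("ok")  # → ok
```

Reviewing a 500-line AI-generated diff by eye doesn't scale; a suite of assertions like this runs on every PR and turns "looks plausible" into "provably still correct for these cases."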

We’re not six months away from AI taking developer jobs. We’re at the point where the job itself is becoming something different — less typing, more thinking, more testing, more talking to the people who actually use the thing. The companies and developers who figure that out first will have an enormous head start. The ones who don’t will wonder why their ten-person team can’t ship what someone else built over a weekend.


Written by

Daniel Dewhurst

Lead AI Solutions Engineer building with AI, Laravel, TypeScript, and the craft of software.