The most exciting time to be a developer
2026 is the most exciting time to be a software engineer. It's not even close. But it could also be the last time to be a software engineer.
Over winter break, while big tech was on a code freeze, Anthropic did something brilliant. They gave away their supply for free. Everyone got 2x tokens during the time of year when people finally have time to do personal projects.
And Anthropic's supply is real good. They know once someone tries it they'll come back for more.
As a result, Claude Code is having a moment.
And I want to share the general workflow I've landed on: using a team of Claude Code agents to handle personal productivity tasks, ship small personal projects, and build out podread.app.
Much has been written about the software development lifecycle. What I've put into practice is my preferred version of it, codified as markdown-defined agents. Each agent has a role. I have a Pathfinder that takes "I wonder if we could..." and produces actionable recommendations with file paths and trade-offs.
I have a Tech Lead, who is my primary interface with the team. The Tech Lead breaks down work, creates tasks and delegates to the rest of the team.
I have a Scout, who is similar to the Pathfinder but specializes in finding where the "thing" in question should live in the codebase. I also have a Researcher, who finds prior art and existing patterns in the codebase to avoid duplicating methods or introducing unnecessary new patterns.
The Tester writes failing tests. I've been a fan of test-driven development (TDD) ever since Jeff Casimir introduced the idea to me, and it's doubly valuable in the age of coding agents, which thrive when they have the verification loop TDD provides.
I also have an Implementer, and I deliberately keep test writing and code implementation as separate work streams. This mimics the pair programming paradigm: one developer writes tests, the other gets them to pass. Without that deliberate separation, I find Claude Code defaults to adding tests after the fact, which misses the whole point of the iterative verification loop.
I have a few types of reviewers on my team. The first is a normal Reviewer. The second is a Skeptical Reviewer, primed to doubt everything it sees. Then I also have a Pedantic Reviewer, essentially the mirror image of the Scout and Researcher, who ensures that everything in a new chunk of code has precedent in past work. In most cases the normal and Skeptical Reviewers are good enough.
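To make that concrete, here's a sketch of what one of these markdown role files can look like. The filename, headings, and wording below are illustrative rather than an exact prompt:

```markdown
<!-- agents/skeptical-reviewer.md (illustrative sketch, not an exact prompt) -->
# Skeptical Reviewer

## Role
Review the proposed change assuming it is broken until proven otherwise.

## Instructions
- Re-read the Tester's tests and confirm each one actually exercises the new behavior.
- Hunt for edge cases the brief mentions but the implementation skips.
- Leave style and pattern consistency to the Pedantic Reviewer.

## Output
A list of concrete issues, each with a file path and a one-line explanation. If nothing is wrong, say so explicitly.
```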
One pain point I've experienced is creating commands in one project and then porting them over to another. To avoid this, I've started running my Claude Code sessions from my agent-team directory. Claude can change directories and create worktrees, so I think of the agent-team directory as a conference room where I assemble the team to get started on the next big (or little) thing.
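On disk, the conference room looks roughly like this (an illustrative layout, not a prescribed one):

```
agent-team/           # (illustrative layout)
├── agents/           # markdown role definitions (tech-lead.md, tester.md, ...)
├── briefs/           # one brief per active project
├── journals/         # per-role notes that carry across projects
└── worktrees/        # git worktrees Claude creates for the projects it touches
```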
I typically don't run just one agent session at a time. I have multiple tmux windows open, each with an agent team thinking through a different problem with me.
Another pain point this agent team attempts to solve is context persistence. Running /compact always gives me a sinking feeling since I'm essentially concussing a teammate. To deal with this while the big AI labs work on a real solution, I combine a few record-keeping tools. First, I use Steve Yegge's beads project, which is the best agent to-do list manager I've encountered.
My agent team also has a documentation practice. Each team works from a brief: a structured document that states the goal of the project and has sections that each agent owns. When I have two agent teams working, each has its own brief. I also keep journals, which capture cross-project learning per role. Journals are probably the least important part of the system, and they may just get folded into the agent markdown specification in a future iteration.
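The brief itself is just another markdown file. A skeletal version looks something like this (the section names are illustrative, not a fixed schema):

```markdown
<!-- briefs/some-project.md (illustrative skeleton) -->
# Brief: <project name>

## Goal
One paragraph describing what "done" looks like.

## Design decisions           <!-- owned by the Tech Lead -->
## Prior art and placement    <!-- owned by the Scout and Researcher -->
## Test plan                  <!-- owned by the Tester -->
## Implementation notes       <!-- owned by the Implementer -->
## Review findings            <!-- owned by the Reviewers -->
```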
Let's look at a feature I recently added to podread.app: more granular status updates during episode creation. The project started with me opening a new Claude Code session in my agent-team directory. I used my customized /brainstorm command, which I forked from the obra/superpowers skill library, to work with my Tech Lead and hash out design decisions (data model, formula, UI behavior, and edge cases). The Tech Lead then captured the design in the brief and created beads for the team's individual tasks; in this case, an epic with 6 child beads and a reasonable dependency graph. The Tech Lead spun up parallel subagents (Implementers and Testers) for independent tasks and sequential agents for dependent ones.
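The real beads live in the tool, but the breakdown had roughly this shape (the task names below are a reconstruction for illustration, not the actual beads):

```markdown
<!-- reconstruction for illustration, not the actual beads -->
Epic: granular status updates during episode creation
1. Add a status field and allowed transitions to the episode model
2. Emit status updates from each step of the creation pipeline   (depends on 1)
3. Write failing tests for the status transitions                (depends on 1)
4. Expose the current status to the frontend                     (depends on 2)
5. Show the status in the UI during episode creation             (depends on 4)
6. Handle edge cases: retries, failures, partial progress        (depends on 2, 3)
```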
About 30 minutes later, I asked the Tech Lead to run multiple rounds of review, which produced additional beads and new agents spun up to fix the issues raised. Finally, I read the code as a PR on GitHub and gave feedback to the Tech Lead, who either fixed it immediately or created beads and delegated, depending on the size of the issue.
In the future, I'm thinking about better messaging between agents, more visibility into what each agent is doing, and a task-queuing system.
I remain convinced that I will not write a single line of code by hand in 2026.

You can listen to this post on PodRead, the app that takes your reading list and converts it to audio that shows up in your personal podcast feed.