6 min read

Synthesizing AI Predictions

Experts are saying that by January 2027, AI will replace most programmers. By September, it'll outperform humans at most economically useful work. As a software engineer with 25 years left in my career and as a parent, I'm trying to figure out what this means.

A lot of my friends are not paying close attention to AI in 2025. Some of the skepticism stems from the fact that, at first glance, the phenomenon looks like the cryptocurrency scene of the late 2010s. The similar energy demands, data centers, and graphics cards make it easy to conflate AI with crypto. Many of the dumbest crypto bros have rebranded as AI bros.

Unlike crypto, AI has actually produced real-world consequences beyond tragic pig-butchering schemes and various Shakespearean dramas. We are beginning to see job displacement, particularly among entry-level tech jobs.

My day-to-day has completely changed. In early 2023 I began to allocate 50% of my screen real estate to an LLM chat window. This kind of tool change is as significant as when I first learned to code nearly 10 years ago and started using an integrated development environment and terminal as part of my daily workflow.

Two years after my first LLM request at work, my day begins with launching an AI coding agent to help me understand and change code. It ends with me using Claude Code for side projects or writing in Rosebud, an AI journaling app. AI has become my work and my play. Will AI also become my doom?

Predictions from the cast of well-known tech elites who are heavily invested in AI need to be taken with a grain of salt. When Altman makes a claim about the future, you have to decide whether he is speaking scientifically or working on his next round.

While there are some with much to gain from hyping AI, others have accepted personal risk in order to make claims about its dangers. Daniel Kokotajlo, one of the authors of AI 2027, spent years working on OpenAI's safety team before putting a $2M stock package on the line to speak publicly about AI risks. When someone with insider access to frontier models risks that much money to sound an alarm, it's worth listening.

Just what are the experts saying?

In AI 2027, Kokotajlo and his co-authors make the case that AI will replace professional coders by January 2027 - just over a year and a half from now. By September of 2027, they predict that AI will be able to "outperform humans at most economically useful work."

Sholto Douglas, a researcher at Anthropic, recently stated:

But the one that I feel we're almost guaranteed to get—this is a strong statement to make—is one where at the very least, you get a drop-in white collar worker at some point in the next five years. I think it's very likely in two, but it seems almost overdetermined in five. 

His boss, Anthropic CEO Dario Amodei, warned that AI could eliminate up to 50% of all entry-level white-collar jobs within the next five years. He has also suggested that AI could lead to a significant increase in unemployment, potentially reaching 10-20%.

As a software engineer and white-collar worker, I do not feel great about things right now. My compensation has never been higher, but the market and job security feel more tenuous than ever. Many companies will use AI productivity gains to cut labor costs and lay off workers. Some companies will see AI as rocket fuel and double down on labor investments. I hope I'm fortunate enough to be in the latter camp.

The AI 2027 cases of either human extinction or utopia by the end of the decade seem plausible. I think the positive case is less likely given what I have observed from our political and economic elite. In the positive case, AI essentially generates wealth out of the ether. AI, which is already good at protein folding, proves to be adept at creating miracle drugs. The wealth and lifespan increases are enjoyed by all. Given our political leadership's eagerness for graft and inability to deliver material goods to the public, this seems unlikely. I would expect wealth and other benefits to be mostly consolidated among a few individuals and privileged groups.

I find the negative case in AI 2027, which is human extinction, unlikely, but not as impossible as the aforementioned 2030 utopia. In this branch of the prediction, the AI overcomes its physical limitations by creating factories staffed by humans. The humans wear AI-designed VR goggles that feed them plans and step-by-step guides for building robots. These robots give the AI the ability to build ever more advanced robotics that displace the need for humans entirely. At that point humans are more of a risk to be mitigated than a species to uplift. The AI uses its aptitude for biochemistry to synthesize a plague that wipes out 99% of humanity in a matter of days, then uses mosquito-like drone swarms to finish off the survivors. Seems unlikely, but also not absolutely impossible.

What do I think is likely?

I think Tyler Cowen's Average is Over is pretty prescient for having been written over a decade ago. Cowen predicted that automation would create a bifurcated economy where high earners work symbiotically with machines while everyone else faces stagnant wages. Cowen argued that we should develop taste, judgment, and the ability to work with smart machines. In other words, learn to be irreplaceable in the human-AI collaboration stack.

At least that's my current strategy. I plan to focus entirely on and exploit what I'm exceptional at. I'm pretty good at coding, but already not as good as AI. I'll hand more and more of that to AI.

It's too bad, because writing code can be a lot of fun. But today, for every hour spent coding, I spend at least as much time planning that code and at least as much again monitoring its impact.

Professionally my job will continue to skew away from writing code. I will be directing AI agents to do all the technical aspects of my work. They won't need my coding skills. But they probably will need my taste. It will be my job to communicate the risks and benefits of various approaches to solving particular business needs. I will have to argue for the right sets of tradeoffs. AI will handle the coding. I'll handle the conversations about whether we should build it at all.

My time horizon for this prediction is the next two years. Beyond that, I really do not have a lot of confidence; otherwise I'd be making significant portfolio adjustments.

I still have at least twenty-five more years of work ahead of me, but my children have only just begun their educations. What does AI mean on the timescale of their careers?

I went to college when the common wisdom was to study what you like. That's an absurdly quaint notion today. My wife and I have already discussed that certain education paths are off the menu, so to speak, for our kids. I'd be more open than I'd have thought to the trades, especially given my plumber's hourly rate. I'll be hesitant to invest in a history or political science degree (like my own).

I'd be more open to an undergraduate degree in math, physics, or engineering. I'd be quite bullish on a gap year or two used judiciously to identify strengths and areas of advantage within the human-AI economy. Eighteen is really quite young to be making decisions about one's forties.

As Cowen argued in Average is Over, much of the utility from a degree will be in the network it gets you. Already, schools that are merely regionally competitive are facing enrollment shortfalls. Attending a middling school hardly seems worth the cost today.

With the return-on-investment calculation for education and the future of white-collar work in such flux, it is on us as professionals and parents to be brutally honest with ourselves. Just what is it that we are exceptional at? What are our kids' true strengths? This is the opposite of the participation trophy or the "satisfactorily meets expectations" performance review. AI can already participate satisfactorily at one thousandth of our cost.

The market is only getting more ruthless and competitive even if all AI advancement stops right now. We can either home in on and heighten our strengths and help our children do the same, or we can strap on the VR goggles, head to the factories, and build our replacements.