June 2025 bakery

A record of what has caught my attention and ideas I've been thinking about this month.

Half baked

Some written thoughts, but not enough for their own post.

Last month's bakery vibes and my post about AI predictions were high on existential dread. You'll notice in the links below that the tone of this month's bakery is more optimistic. And that's because:

We're living through AI's Model T moment.

In 1920, any mechanic could pop the hood and understand how a car worked. Today's cars require proprietary diagnostic tools and software. The fundamentals of pistons, cylinders, and combustion are buried under layers of abstraction.

The same thing will happen with AI. Right now, LLMs feel magical, but they're actually doing something surprisingly simple: predicting the next token in a sequence using statistics. Today's models are the slowest, most expensive, and simplest we'll ever use.

This creates an opportunity. We're here early enough that "next token prediction" still explains how the magic works. That intuitive grasp of the mechanics will become increasingly valuable as AI systems grow more complex and opaque. Systems of agents will coordinate workflows, user interfaces will be created on the fly, and we will be able to give computers instructions just as we would a human assistant, but under the hood it is all token prediction.
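To make "next token prediction" concrete, here is a toy sketch. This is not how real models work internally (they use neural networks over subword tokens, not word counts), but it illustrates the core task: given a context, use statistics to score what comes next.

```python
from collections import Counter, defaultdict

# A tiny corpus of tokens (real models train on trillions).
corpus = "the cat sat on the mat the cat ate the food".split()

# Count how often each token follows each token (a bigram model).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(token):
    """Return the most likely next token and its probability."""
    follow = counts[token]
    total = sum(follow.values())
    best, n = follow.most_common(1)[0]
    return best, n / total

print(predict_next("the"))  # → ('cat', 0.5): "cat" follows "the" in 2 of 4 cases
```

Everything an LLM does, from chat to code generation, is this loop scaled up enormously: predict a token, append it to the context, predict again.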

The people who understand cars today learned the fundamentals when engines were simpler. The people who will understand AI tomorrow are learning those fundamentals right now.

Raw

Naked links

AI

Getting better at LLMs, with Zvi Mowshowitz
Patrick McKenzie and Zvi Mowshowitz share practical techniques for getting better results from AI tools, from writing effective system prompts to using LLMs as research partners and writing collaborators.

There is a lot of software to build and rebuild. This hopeful talk introduces the idea of software 3.0.

Learn how to use AI systems as a force multiplier.

A really thorough and easy-to-understand overview of modern LLMs covering pre-training, supervised fine-tuning, and reinforcement learning. I walked away with a better understanding of why LLMs hallucinate.

After months of coding with LLMs, I’m going back to using my brain • albertofortin.com

This developer is not convinced by current AI tooling. I'm not convinced this developer used the tools effectively. The LLM won't do it all on its own. Right now the LLM has neither great taste nor any memory. Without carefully scoping its work and applying your own taste for where things go, how they should be named, and how they should be structured, you will run up against its current weaknesses: the LLM will duplicate functionality and create solutions that don't actually solve the problem.

Politics

Work

Say “but yes”, not “yes but”
When you’re agreeing with someone but you have a caveat, don’t say “yes, but”. Instead, say “but yes”. For instance, if you’re happy with a suggested approach…

"yes, but..." sounds less positive than "..., but yes"

Music

I was there with my brother right up front. It. was. awesome.

Other

A weird guest for Tyler Cowen, but a cool conversation about art, video games, and media.