The Model Monsoon: How to Keep Building When Everything Keeps Changing

The Model Monsoon is the deluge of new AI tools, models, and frameworks that release every month, every week, every day, every hour; each one promising to change how you build, each one tempting you to toss away your current process.

By the time you finish reading this, another AI tool will have been announced. A new model, a new editor, a new agent loop, a new workflow…each one positioning itself as the new center of gravity. Each one subtly suggesting that whatever you were doing before is now wrong.

And if you’re in the middle of building something real: a Must-Express Project (MEP), a product at work, anything with actual stakes. That churn, while welcome, is also annoying…if not destabilizing.

The instinct is to chase. To swap editors, rewrite prompts, try the new agent framework, migrate to the hot model. And every time you do, you lose a week. Not because the new tool is bad, but because you rebuilt your process around the last one.

That’s the trap. The tool isn’t the system. The tool slots into the system. And if you don’t have a system, you’ll keep starting over.

I’ve shipped several projects through this environment, and I’ve landed on a shape that survives the churn. Here’s how it works.


The Shape: Looks Like → Bridge → Works Like

Every one of my MEPs has gone through a version of this phased approach. It’s not about one-shotting things. It’s about making something I can feel and experience first, then doing the work to make it actually function…with a critical translation step in between.

Looks Like

I start with the idealized user experience. Not a spec. Not a PRD. The question is: can someone react to this without me narrating it?

I work through a quick mini-flow. First, I’ll talk to an LLM, not to design the solution but to sharpen the problem. I’ll ask it to poke holes, surface assumptions, point out where I’m hand-waving. Half the value is just hearing the objections out loud.

From there, I’ll pull a “mood board” of sorts: screens, flows, products that feel adjacent. I’m not trying to be original. I’m trying to get concrete.

Then I’ll concept a few key screens using generative design tools. These matter more than they look. They quietly encode priorities: what’s first-class, what’s secondary, what doesn’t exist at all. That judgment ends up guiding everything downstream, including what the AI produces later.

Finally, I’ll develop an interactive prototype using whatever design-to-code tool feels right: Figma Make, Lovable, borrowed code from other projects, hardcoded values, pasted-together pieces. It’s rarely clean. It just needs to be believable enough to argue with.

At this stage, AI is helping me move faster and see more angles. It’s not deciding anything for me.

The Bridge

This is the part most people skip. And it’s the part that matters most.

The “Looks Like” prototype is a lie. A useful one — it shows what the experience should feel like — but it has no real architecture underneath. No schema. No API contracts. No separation of concerns. If you hand it to a coding agent and say “build this,” you’ll get spaghetti, because the agent has no idea what the system is. It only knows what it looks like.

The Bridge is where you decompose the prototype into the following buildable layers.

Functional specs: the entities, types, relationships, flows, and business logic underneath the screens. I use LLMs heavily here. When I have a lot of scattered artifacts and notes, tools that operate directly on the filesystem are useful for extracting structure from both the prototype and my head.

Tech specs: schema, API endpoints, core logical functions. I’ll generate these, then consistency-check them across multiple tools and sessions. This is where custom skills and iterative refinement earn their keep.

Blueprints: making everything “shovel-ready” for agents. This means creating a set of plain-text artifacts that act as the contract between me and the coding agents:

A roadmap that outlines phases, intent, and sequencing. Epic and sprint files that break the roadmap into executable chunks. Feature files, one per feature, containing the spec, acceptance criteria, edge cases, and an explicit definition of “done.” And a prompt file: a single, stable prompt that tells the agent how to operate, so I’m not rewriting instructions every session.
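For concreteness, here’s a minimal sketch of what that blueprint layout and one feature “contract” might look like on disk, with a tiny loader alongside. The file names, headings, and fields are illustrative assumptions, not a standard; the point is that every artifact is plain text an agent (or a script) can read.

```python
# A sketch, not a framework. Assumed layout:
#
#   blueprints/
#     roadmap.md            <- phases, intent, sequencing
#     sprints/sprint-01.md  <- executable chunks of the roadmap
#     features/login.md     <- one file per feature
#     agent-prompt.md       <- the single, stable operating prompt

from dataclasses import dataclass, field
from pathlib import Path


@dataclass
class FeatureSpec:
    name: str
    spec: str                       # what the feature does
    acceptance_criteria: list[str]  # observable checks for "done"
    edge_cases: list[str] = field(default_factory=list)


def load_feature(path: Path) -> FeatureSpec:
    """Parse a feature file whose sections are marked with '## ' headings."""
    sections: dict[str, list[str]] = {"spec": []}
    current = "spec"
    for line in path.read_text().splitlines():
        if line.startswith("## "):
            current = line[3:].strip().lower().replace(" ", "_")
            sections[current] = []
        else:
            sections[current].append(line)

    def bullets(key: str) -> list[str]:
        return [l.lstrip("- ").strip() for l in sections.get(key, []) if l.strip()]

    return FeatureSpec(
        name=path.stem,
        spec="\n".join(sections["spec"]).strip(),
        acceptance_criteria=bullets("acceptance_criteria"),
        edge_cases=bullets("edge_cases"),
    )
```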

This is the unsexy middle. But it’s where the leverage actually lives. Without it, you’re feeding vibes to an agent and hoping for structure. With it, you’re feeding structure to an agent and getting velocity.

Works Like

Now I can let the agents rip; but not blindly. I point them to the implementation plan. They find the next undone sprint, loop through its features, and work until the acceptance criteria are met. Then I step in.
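Here’s a rough sketch of that loop, assuming sprint files that list their features and mark themselves “status: done” when finished. run_agent and criteria_met are placeholders for whatever coding agent and acceptance checks you actually use; nothing about them is prescribed.

```python
from pathlib import Path


def next_undone_sprint(sprints_dir: Path) -> Path | None:
    """Find the first sprint file not yet marked done (an assumed convention)."""
    for sprint in sorted(sprints_dir.glob("sprint-*.md")):
        if "status: done" not in sprint.read_text():
            return sprint
    return None


def run_agent(prompt: str, feature_file: Path) -> None:
    """Placeholder: hand the stable prompt plus one feature file to your coding agent."""
    raise NotImplementedError("wire this up to your agent CLI or API")


def criteria_met(feature_file: Path) -> bool:
    """Placeholder: run tests / verify the feature's acceptance criteria."""
    return False


def work_sprint(sprint: Path, features_dir: Path, prompt_file: Path, max_passes: int = 3) -> None:
    prompt = prompt_file.read_text()
    # Assume each sprint file lists its features one per line, e.g. "- features/login.md".
    features = [features_dir / line.strip().removeprefix("- features/")
                for line in sprint.read_text().splitlines()
                if line.strip().startswith("- features/")]
    for feature in features:
        for _ in range(max_passes):
            run_agent(prompt, feature)
            if criteria_met(feature):
                break
        # anything still failing after max_passes waits for the human review pass
```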

I review. I refine. Usually I’ve scoped a feature too big or left the criteria too vague; that’s normal. The point isn’t perfection on the first pass, it’s that the system catches drift early.

After each sprint, I have the agents update a set of living artifacts: a work log (what got done), a learnings file (what broke, what surprised us, patterns to remember), and persistent agent instructions (conventions, constraints, things to avoid). These compound across sprints. They’re how the project gets smarter over time, even as individual agent sessions are stateless.
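A minimal sketch of that post-sprint bookkeeping, assuming three plain-text files at the repo root (the names are mine, not a convention any tool expects):

```python
from datetime import date
from pathlib import Path


def log_sprint(root: Path, sprint: str, done: list[str], learnings: list[str]) -> None:
    """Append what happened this sprint to the living artifacts."""
    stamp = date.today().isoformat()
    _append(root / "WORKLOG.md", f"## {sprint} ({stamp})", [f"- {item}" for item in done])
    _append(root / "LEARNINGS.md", f"## {sprint} ({stamp})", [f"- {item}" for item in learnings])
    # Durable conventions graduate from LEARNINGS.md into the persistent agent
    # instructions (e.g. an AGENTS.md) once they've proven out across sprints.


def _append(path: Path, heading: str, lines: list[str]) -> None:
    existing = path.read_text() if path.exists() else ""
    path.write_text(existing + "\n" + "\n".join([heading, *lines]) + "\n")
```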

Then I loop again. Next sprint, same shape.


How I Think About “MVP” Now

I don’t think in terms of MVPs, shipped / not-shipped, or versions anymore. I think in terms of who can use this thing. There’s a big difference between something I can use myself, something I can demo without apologizing, something a friend can figure out, and something strangers can trust.

This process reliably gets me to the second or third level. Beyond that, you’re solving a different class of problems…and that’s a different post.


On Tools

I’m deliberately not naming specific tools here. They’ll be outdated before you finish reading this. I’ve already swapped most of mine at least once.

What doesn’t change is the shape. Decompose work so AI can reason about each layer independently. Make the artifacts plain text and the contracts explicit. When tomorrow’s tool shows up (oh, and it will!), it slots in to help without breaking anything.

I treat models like teammates with different strengths and I swap freely. The process absorbs the change. That’s the whole point.


The Throughline

If you’re building with AI and feeling the monsoon…that constant sense that you’re already behind, and your setup is already obsolete…the problem probably isn’t your tools. It’s that your tools are your process.

Separate them. Create a shape that’s stable enough to survive churn but modular enough to absorb new capabilities. Assign tools to what they’re best at. Provide guardrails, whether that means stepping in manually or having agents quality-check each other’s work.

The AI landscape will keep moving. Your process doesn’t have to move with it.

Build the process. Swap the tools. Just keep shipping.
