How we build: AI and humans in the loop

AI writes the code. Humans stay accountable. Here's how that actually works.

If the first post was about why we started, this one is about how we actually build.

Before we let AI loose on anything, we needed the right people. We brought in great engineers to set things up properly — the foundation had to be solid. Good architecture, clear standards, real best practices. No shortcuts.

We also picked our stack carefully. We needed something that works well for UI — turning Figma designs into real product — and that AI can actually write good code for. That combination matters more than people think.

Then we pushed AI into everything.

Here's how our process works today:

AI writes the code. Then CodeRabbit reviews every pull request first. After that, our human engineers review it again. Two layers. Accountability stays human.

But there's something we figured out early that made this whole model work.

AI has to be able to validate its own work.

If you want AI writing your code at scale, you need serious testing. So we invested heavily in testing frameworks and clear guidelines around how tests should be written. AI doesn't get tired. It doesn't skip a test because it's Friday afternoon. It just keeps generating, updating, extending. That discipline is a huge reason we can move fast without breaking things.
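To make that concrete, here's a sketch of the kind of testing guideline we mean (illustrative only; `normalize_phone` is a made-up helper, not code from our product): table-driven tests where every edge case the spec mentions gets its own row, so the table can be extended mechanically by AI or by a human.

```python
def normalize_phone(raw: str) -> str:
    """Strip formatting from a US phone number (hypothetical helper)."""
    digits = "".join(ch for ch in raw if ch.isdigit())
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]  # drop the country code
    if len(digits) != 10:
        raise ValueError(f"expected 10 digits, got {len(digits)}")
    return digits

# Guideline: one row per edge case named in the spec. Adding coverage
# means adding a row, which is exactly the kind of repetitive,
# never-gets-tired work AI is good at.
CASES = [
    ("(555) 123-4567", "5551234567"),
    ("1-555-123-4567", "5551234567"),
    ("555.123.4567", "5551234567"),
]

def test_normalize_phone():
    for raw, expected in CASES:
        assert normalize_phone(raw) == expected, raw

def test_normalize_phone_rejects_short_input():
    try:
        normalize_phone("123")
        assert False, "should have raised"
    except ValueError:
        pass

test_normalize_phone()
test_normalize_phone_rejects_short_input()
```

The point isn't this particular function; it's that the test shape is mechanical enough that AI can keep generating, updating, and extending it without a reviewer having to decode clever test logic.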

And it's not just the application code.

Our entire infrastructure is defined as code, co-designed with AI while I was working on our HIPAA compliance. I spent 15 years optimizing cost, quality, and security for US banks, and I brought that mindset directly into how we set up Practice Vault. Every piece of infrastructure was designed with two things in mind: stronger security and lower operating cost.

The tooling evolved as AI got better.

We started with Cursor — all our developers used it. Then Claude Code came along and basically everyone moved over. Now we're also using Codex on top of that. And we use OpenClaw as our internal glue — it helps us with SEO monitoring, product management, feature operations, basically a lot of the back-office work that would otherwise eat our time.

And here's something I find really exciting.

AI isn't just writing our code. It's also helping us decide what to build next. We redesigned our feature database and pulled in competitor data, and now AI helps suggest which features to prioritize. That's a different level. It's not just execution anymore. It's strategy.
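A toy version of that kind of prioritization might look like the sketch below. The fields and weights are entirely hypothetical, not our actual scoring model; the point is that once feature and competitor data live in a structured database, ranking becomes a function you can compute, question, and tune.

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    competitor_coverage: float  # 0..1: share of competitors offering it (hypothetical field)
    customer_requests: int      # how often customers have asked for it
    effort_weeks: float         # rough build estimate

def priority_score(f: Feature) -> float:
    """Toy scoring: demand plus competitive gap, discounted by effort."""
    demand = f.customer_requests + 10 * f.competitor_coverage
    return demand / max(f.effort_weeks, 0.5)

# Invented example backlog, purely for illustration.
backlog = [
    Feature("online intake forms", 0.8, 24, 3),
    Feature("custom report builder", 0.3, 5, 8),
    Feature("appointment reminders", 0.9, 40, 2),
]

ranked = sorted(backlog, key=priority_score, reverse=True)
print([f.name for f in ranked])
# → ['appointment reminders', 'online intake forms', 'custom report builder']
```

In practice AI proposes the ranking and the reasoning behind it, and humans make the final call, the same split of roles we use for code.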

The result of all of this?

  • We ship faster than teams ten times our size
  • Quality stays high because testing never stops
  • Costs stay low because AI does the heavy lifting
  • And we pass those savings to our customers

That's how we build Practice Vault. AI speed, human accountability, and the discipline to make sure both work together.

Want software built this way?

AI speed. Human accountability. Lower prices because of it.

Start your setup