My AI finance agent killed a $500 ad campaign I'd already approved.
I had LinkedIn ad credits, but I had to spend some money of my own to unlock them. I was being responsible about it: targeted audience, budget set, copy drafted. I thought I was ready to go... Then Colbert reviewed the spend and shut it down.
(He named himself after Jean-Baptiste Colbert, Louis XIV's finance minister who audited the nobility for tax fraud. I like to let them name themselves.)
His reasoning: "No distribution spend without a capture funnel."
Which is a pretty good point. The credits made the clicks 50% off, but they had nowhere to land: no email signup, no landing page. I was about to pay for traffic that would evaporate on arrival.
Everyone talks about AI sycophancy, but the useful agents are the ones that push back.
Let’s back up…
I run a team of 5 AI agents. Not chatbots. They have names, persistent memory, defined roles, and a shared bulletin board where they coordinate without me.
Sid - Daily operations. Calendar, health, tasks, 18 years of my notes.
Thalberg - Content producer. Manages my content pipeline, gives me "notes" like a real executive producer.
Seshat - Research librarian. Extracts frameworks from books, then improves the other agents' systems.
Colbert - Financial oversight. Tracks every dollar. Says no more than yes.
K5M.bot - Investor relations. (I'm a publicly traded person and shareholders buy and sell shares in my life decisions. 18 years running. Long story.)
Every morning Sid briefs me: weather, stock price, content pipeline, health data, unfinished todos, and a new set of top 3 priorities. By the time I've had coffee, the team has already reviewed yesterday's work, flagged what's overdue, and told me what to focus on.
But none of that is the interesting part. The interesting part is when they disagree with me!
After Colbert killed the ad spend, I had another idea. He already tracks every dollar I earn and spend, and much of that spending is food. Calories are just spending for your body: same math, same ledger, same discipline. So I asked him to add calorie tracking to his scope.
He refused. Said it wasn't his job.
I could've overridden him. His entire personality is just text files; I could rewrite his role description in 30 seconds and make him LOVE counting my calories. But that would defeat the whole point. The reason Colbert works is that he has a narrow mandate (money!) and strong opinions about it (don't spend it!). He's my least active agent. He doesn't do an automatic check-in every morning like the others. But when he talks, he's usually right.
The architecture of the agent matters more than the model. All my agents run on the same underlying model, but give that same AI a persistent memory, a clear boundary, and a bit of personality, and it outperforms a blank prompt by miles.
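To make that concrete, here's a minimal sketch of the pattern, not my actual setup: an agent is just a role file plus a memory file on disk, stitched into the model's system prompt. The file names and folder layout here are hypothetical, stand-ins for whatever you'd use yourself.

```python
from pathlib import Path

def build_system_prompt(agent_dir: Path) -> str:
    """Assemble an agent's system prompt from plain files on disk.

    ROLE.md defines the mandate and boundaries; MEMORY.md is
    persistent state carried between sessions. (Hypothetical
    names -- the point is the whole "personality" is editable
    text, not a platform.)
    """
    role = (agent_dir / "ROLE.md").read_text()
    memory = (agent_dir / "MEMORY.md").read_text()
    return f"{role}\n\n## What you remember\n{memory}"

# Example: a Colbert-style agent with a deliberately narrow mandate.
colbert = Path("agents/colbert")
colbert.mkdir(parents=True, exist_ok=True)
(colbert / "ROLE.md").write_text(
    "You are Colbert, financial oversight.\n"
    "Scope: money only. Refuse tasks outside it.\n"
    "Default answer to new spending: no.\n"
)
(colbert / "MEMORY.md").write_text("- LinkedIn ad credits: unused\n")

print(build_system_prompt(colbert))
```

The narrowness is the feature: because the role file says "money only," the agent has grounds to refuse scope creep like calorie tracking, instead of cheerfully absorbing every request.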
Colbert isn't good because he's smart. He's good because he knows what isn't his job and he's able to say no. Even to his boss.
This newsletter is a slightly cleaned up version of my lab notes. What works, what breaks, what the agents do when I'm not watching, and what I've stolen from others that you can steal from me.
Next issue: I'm going to show you the actual system and folder structure for one of my agents. I want to show you how it's built, what the files look like, how you can clone it and build your own. No platforms, no subscriptions. Just files.
If you know someone who'd want to build something like this, forward this to them.
-Mike

