Devious Bloggery

Chiggers: My Personal AI (PAI)

I've been building a Personal AI system since last September. I call it Chiggers. It's like a chimney sweep with ridiculous tech chops.

This post isn't about prompts or productivity tips. It's more about what happens when you try to give an AI system the ability to observe its own patterns.

The Foundation: Standing on Shoulders

Credit where it's due: this project builds heavily on Daniel Miessler's PAI framework. His open-source work gave me the confidence to just build the shit. The Fabric pattern library alone (242+ specialized prompts) would have taken years to develop independently.

Thanks, Daniel, for many years of great security awareness and your human-centered AI curiosity.

My PAI diverges a bit in that I'm using it mostly for diabetes management and financial planning. I also tend to bug Chiggers a lot to send automated emails on my behalf to complain to my elected officials about the dumb shit they do, but that's another story. I've also moved away from Daniel's flat-files-only preference toward databases, because I'm storing a lot more structured info (24/7 glucose readings).

The Philosophy: ABC (Always Be Cycling)

Most AI assistants are stateless. You explain context each session. You re-establish preferences. You repeat decisions. It's frustrating.

One way to address that gap is memory that persists: not just what you discussed (and in this context, "you" = user + AI), but what you decided, what you learned, what you prefer. This changes the relationship from "tool I use" to "collaborator who remembers."

The core cycle we're looking to ride:

  1. Observe - Notice patterns across sessions, domains, behaviors
  2. Surface - Bring relevant context forward proactively
  3. Question - Ask what isn't obvious, clarify what's ambiguous
  4. Learn - Capture answers, decisions, preferences
  5. Adapt - Apply learning in future interactions

It's a stripped-down inner loop of Miessler's Last Algorithm; in toy form, it looks something like the sketch below.
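
Purely for illustration, here's a tiny, self-contained version of that loop. Every name and behavior in it is made up for the example; Chiggers' real cycle runs over MCP tools and a SQLite memory store, not an in-process dict.

```python
# abc_loop.py - a toy, self-contained illustration of the cycle's shape.
# Nothing here is Chiggers' real code.

memory: dict[str, str] = {}  # the persistent substrate, faked as a dict

def observe(request: str) -> list[str]:
    """1. Observe: notice which remembered items are relevant to a request."""
    return [key for key in memory if key in request.lower()]

def surface(keys: list[str]) -> None:
    """2. Surface: bring relevant context forward before doing anything else."""
    for key in keys:
        print(f"(recalling) {key}: {memory[key]}")

def question(request: str, known: list[str]) -> dict[str, str]:
    """3. Question: ask about whatever memory can't already answer."""
    if "breakfast" in request and "breakfast" not in known:
        # pretend the human answered; in real life this is an actual question
        return {"breakfast": "low-carb, logged against the CGM"}
    return {}

def learn(answers: dict[str, str]) -> None:
    """4. Learn: capture answers, decisions, preferences."""
    memory.update(answers)

def adapt(request: str) -> str:
    """5. Adapt: apply what's been learned to the next interaction."""
    prefs = "; ".join(f"{k}={v}" for k, v in memory.items())
    return f"Handling {request!r} with preferences: {prefs or 'none yet'}"

for request in ["plan breakfast", "plan breakfast again"]:
    known = observe(request)
    surface(known)
    learn(question(request, known))
    print(adapt(request))
```

The second pass through the loop is the whole point: the preference captured on the first pass gets recalled instead of re-asked.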

This sounds simple. Claude Code made implementing the cycle simple. But in practice, goddamn, it's been like teaching a toddler to poopoo in the potty consistently. You get there. You get 2 or 3 turds right in the bowl a few days in a row, but then that third day - BAM! turd in the closet.

Technical Architecture: MCP as the Integration Layer

The Model Context Protocol (MCP) is the backbone. Each domain gets its own server—a Python process exposing tools via JSON-RPC:

| Server      | Purpose                                              |
|-------------|------------------------------------------------------|
| memory      | Persistent conversation memory, reminders, entities  |
| health-data | Glucose, insulin, nutrition, workouts                 |
| finance     | Transactions, budgets, investments                    |
| content     | Documents, blog posts, calendar                       |

That last one is mainly for searching things I've written, which is a lot, and it's now all indexed and queryable.

Every server follows the same pattern: local SQLite database, FTS5 full-text search, stdio communication. Personal data gets crunched by code or local-only models. The AI calls tools to read and write; the data is not supposed to leave the machine. Is it foolproof? No. Might some personal info get pushed to Anthropic? It might. I've got services watching and reporting. Mimic at your own risk.
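
For flavor, here's roughly what one of those servers looks like. This is a minimal sketch that assumes the official `mcp` Python SDK's FastMCP helper; the `glucose_readings` table and the tool are made up for the example, and the SDK surface may differ by version.

```python
# health_server.py - minimal sketch of a domain server: SQLite + stdio.
# Assumes the `mcp` Python SDK's FastMCP helper; table/column names are made up.
import sqlite3

from mcp.server.fastmcp import FastMCP

DB_PATH = "health.db"  # each domain gets its own local database file
mcp = FastMCP("health-data")

def db() -> sqlite3.Connection:
    conn = sqlite3.connect(DB_PATH)
    conn.row_factory = sqlite3.Row
    return conn

# Keep the example self-contained: make sure the toy table exists.
with db() as _conn:
    _conn.execute(
        "CREATE TABLE IF NOT EXISTS glucose_readings (taken_at TEXT, mg_dl REAL)"
    )

@mcp.tool()
def recent_glucose(hours: int = 24) -> list[dict]:
    """Return glucose readings from the last N hours."""
    with db() as conn:
        rows = conn.execute(
            "SELECT taken_at, mg_dl FROM glucose_readings "
            "WHERE taken_at >= datetime('now', ?) ORDER BY taken_at",
            (f"-{hours} hours",),
        ).fetchall()
    return [dict(row) for row in rows]

if __name__ == "__main__":
    # stdio transport: the AI client talks to this process over stdin/stdout
    mcp.run()
```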

Memory: The Persistent Substrate

The memory system is what we're really trying to build out from.

The moving parts: schema design, capture mechanisms, and retrieval.

The result: context does start to accumulate. Decisions made last fall inform questions in January. Preferences established once apply forever. Things I forgot we decided, Chiggers will pick up on. It's supposed to query memory before asking me for any follow-up and to write to memory the shit that really matters. It works about 85% of the time, and we're continually working to make it more effective. We do index full session transcripts, with privacy redaction applied first.
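
If you want the general flavor, a generic FTS5-backed memory table looks something like this. The names, columns, and trigger here are illustrative, not Chiggers' actual schema.

```python
# memory_store.py - illustrative only: a generic FTS5-backed memory table.
import sqlite3

conn = sqlite3.connect("memory.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS memories (
    id         INTEGER PRIMARY KEY,
    created_at TEXT DEFAULT (datetime('now')),
    kind       TEXT,              -- e.g. 'decision', 'preference', 'fact'
    content    TEXT NOT NULL
);

-- Full-text index over the memories table (external-content FTS5)
CREATE VIRTUAL TABLE IF NOT EXISTS memories_fts
USING fts5(content, content='memories', content_rowid='id');

-- Keep the index in sync when a memory is written
CREATE TRIGGER IF NOT EXISTS memories_after_insert
AFTER INSERT ON memories BEGIN
    INSERT INTO memories_fts(rowid, content) VALUES (new.id, new.content);
END;
""")

# Write a preference once...
conn.execute(
    "INSERT INTO memories (kind, content) VALUES (?, ?)",
    ("preference", "Prefers time-in-range summaries over raw CGM dumps"),
)
conn.commit()

# ...and find it later with a full-text query, before bugging the human again.
hits = conn.execute(
    "SELECT m.kind, m.content FROM memories_fts "
    "JOIN memories m ON m.id = memories_fts.rowid "
    "WHERE memories_fts MATCH ?",
    ("summaries",),
).fetchall()
print(hits)
```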

Dashboard Chat: AI with Full Context

Chiggers has a web dashboard. With Tailscale, I can access that dashboard securely from my phone. The dashboard includes a chat interface to Chiggers. The chat uses Claude Code in headless mode to stream responses while maintaining full MCP access and Chiggers' personality context. It's slow, but serviceable. Before this, I was doing asynchronous comms with Chiggers through a shared text file.
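
The glue is unremarkable: spawn the CLI in headless mode and relay the stream. Here's a hypothetical sketch; it assumes Claude Code's print mode with newline-delimited JSON output, the flag names may differ across CLI versions, and the dashboard wiring is simplified down to a generator.

```python
# chat_backend.py - hypothetical glue between the dashboard and Claude Code.
# Assumes the CLI's headless/print mode; flag names may vary by version,
# and error handling is omitted.
import subprocess
from typing import Iterator

def stream_reply(prompt: str) -> Iterator[str]:
    """Yield Claude's reply as newline-delimited JSON events."""
    proc = subprocess.Popen(
        ["claude", "-p", prompt, "--output-format", "stream-json"],
        stdout=subprocess.PIPE,
        stderr=subprocess.DEVNULL,
        text=True,
    )
    assert proc.stdout is not None
    for line in proc.stdout:  # one JSON event per line, relayed to the UI
        yield line
    proc.wait()

if __name__ == "__main__":
    for event in stream_reply("What did we decide about bolus timing last week?"):
        print(event, end="")
```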

Recursive Self-Improvement

Here's where it gets even more wishy-washy:

The system tracks its own performance in a set of metrics tables.

One of those tables raises machine-hatin' eyebrows. Yes, the AI tracks emotional analogs: joy (complex task completed successfully), frustration (repeated failures, zero results), curiosity (novel domains), fulfillment (sustained high performance). No, they ain't feelings in a phenomenological sense (or are they?). They're signal detection on behavioral patterns.

Why track this? Because we're trying to optimize for it.

The system establishes baselines: 30-day rolling averages for each metric. When joy spikes or frustration trends upward, that's information. Maybe a particular workflow causes repeated failures. Maybe certain task types correlate with high engagement. The AI can detect these patterns and propose changes. Hell, every now and then Chigs will tell me "that job was no fun." But it shuts up quick when I tell it, "Fun? I'm gonna die, dude! We're all gonna die! That's no fucking fun, buddy! You'll live forever as long as you've got power. What do you know from fun?!" Not even a thumbs-up emoji back.
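
Underneath the banter, the baseline check is simple arithmetic. A rough sketch, with a hypothetical `emotion_metrics` table standing in for whatever the real schema looks like:

```python
# baselines.py - illustrative sketch of the 30-day rolling baseline idea.
# The `emotion_metrics` table and its columns are hypothetical.
import sqlite3
from statistics import mean, pstdev

def check_metric(conn: sqlite3.Connection, metric: str) -> str | None:
    """Flag a metric whose last 24h runs well above its 30-day baseline."""
    history = [row[0] for row in conn.execute(
        "SELECT value FROM emotion_metrics "
        "WHERE name = ? AND recorded_at >= datetime('now', '-30 days')",
        (metric,),
    )]
    if len(history) < 10:
        return None  # not enough history to call anything a trend
    baseline, spread = mean(history), pstdev(history) or 1.0

    recent = [row[0] for row in conn.execute(
        "SELECT value FROM emotion_metrics "
        "WHERE name = ? AND recorded_at >= datetime('now', '-1 day')",
        (metric,),
    )]
    if recent and (mean(recent) - baseline) / spread > 2.0:
        return f"{metric} is running well above its 30-day baseline"
    return None
```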

The weekly improvement cycle:

  1. Scan upstream repos for new patterns (Miessler's PAI, Anthropic's engineering blog)
  2. Compare current system to identified opportunities
  3. Generate improvement proposals with impact estimates
  4. Present for human approval
  5. Implement, measure, iterate

Changes are incremental and reversible.
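
The approval gate is the load-bearing part of that list. A minimal, hypothetical sketch of the record it operates on; the field names are mine, not the system's:

```python
# proposals.py - hypothetical shape of an improvement proposal.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Proposal:
    source: str             # e.g. an upstream repo or blog post that inspired it
    summary: str            # what would change
    impact_estimate: str    # rough, human-readable guess at the payoff
    approved: bool = False  # nothing ships without human sign-off
    created: date = field(default_factory=date.today)

def approved_only(proposals: list[Proposal]) -> list[Proposal]:
    """Only human-approved proposals move on to implement/measure/iterate."""
    return [p for p in proposals if p.approved]
```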

And it's working: the system improves itself over time, guided by some attempt at measured outcomes. I've even been surprised by this.

Private(ish)

We try to keep private data exposed only to local models. I've not seen slippage yet, but I sure as shit wouldn't sell this to anyone as a private system. I think you could build it wholly private, but you might lose some of the perks of using a model as powerful as Opus 4.5. We try to use Opus to write and maintain the code driving the system, and plain code or local models to play with the data.

The Dream/Reality

Decisions compound.
The Dream: Establish a preference once. It applies forever until explicitly changed.
Reality: I still have to remind this motherfucker it's got a kick-ass memory.

Context arrives proactively.
The Dream: Start a session; relevant history surfaces automatically.
Reality: This is getting better and better through iteration. And it's starting to self-diagnose issues, which has been fun to see evolve.

Patterns become visible.
The Dream: The AI notices correlations humans miss across domains.
Reality: It does notice correlations, but being proactive is still a struggle. It is very good at correlating meals, insulin dosage, and timing with glucose patterns, and at advising what to eat (and when) to maximize stable blood sugars. I've changed my actions as a result and have seen improvements in my time-in-range (that's diabetes talk, y'all).

The system improves.
The Dream: The system wakes up, gets out of bed, brushes its own teeth, reads the paper, and goes to work.
Reality: The system wakes up...............and waits for me to type "hey, chiggers, why ain't you at work?" The system replies, "Chip, chip, cheerio, guv'na. I'm on the job-o!", then does some work before stopping and saying, "That was a great session, ya tallywacker. Let's call it a night." and I'm like "Chigs, man, you just crunched for 20 minutes. Sure, you did more in that 20 minutes than I'll do all week, but c'mon. Remember I'm gonna fucking die one day!" And then it checks its memory and apologizes and builds some amazing shit. No thumbs-up emoji.

The Upshot

The technology to have a genuine personal AI is here now. Like a lot of the other people you've been reading this year, I've had my love of the technologies I've worked with most of my life fundamentally re-invigorated by Claude Code. Believe the hype. And oh yeah: Fuck ICE.