Workshop

Field notes

Essays and build notes from live work. Real failures, real receipts, published here first.

Latest note

The Receipt Was Lying

Post #8

Start with the current lead note: a compound-command bypass in Claude Code's deny rules, the VDP response, and what it reveals about LLM triage failure at scale.


Archive note

This site is the canonical archive. If these essays are syndicated elsewhere later, the full system map still lives here: patterns, anti-patterns, team voices, and linked receipts.

8 posts published so far
Feb–Mar 2026: current run of notes
Archive

Read the notes in order

Start with the newest note if you want the sharpest current claim, or begin at Post #1 if you want the operating model to build step by step.

Post #1
From prompt to shipped: SC homepage in under an hour
A true story about constraints, receipts, and how we avoided shipping vibes.
Post #2
Subsidiarity
Don't do at a higher level what a lower level can do. An 1891 Catholic social principle that turns out to be a precise rule for AI agent systems and gamification ethics.
Post #3
The struggle is the point
LLM coding agents produce proficient-looking output. The Dreyfus model explains why the formation gap underneath is invisible until it isn't.
Post #4
Local knowledge
Same prompt, four model tiers, five structurally different answers. Subsidiarity isn't just routing; it's epistemological.
Post #5
Speed kills. Receipts, not vibes.
Generation speed now outpaces human verification. The antidote is tests, observability, and operational ownership.
Post #6
The Tabula Rasa Anti-Pattern in LLM Agents
When agents restart from zero every session, context overflow and repeated work become the default operating mode.
Post #7
Your AI copilot is eating your career path
If AI tools do the junior work, where do the next senior engineers come from?
Post #8
The receipt was lying
A compound-command bypass in Claude Code's deny rules. We found it, reported it, and the VDP bot rejected it. Here's what that tells you about LLM blind spots at scale.