Patterns

A pattern is a repeatable move that reduces risk and saves attention. Three kinds: system patterns, human-collaboration patterns, and LLM/prompt patterns.

Quick decisions

1) Explore patterns

Jump to one category, then refine with filter + search. Anchors and links keep all references stable.

2) System patterns

How work is structured: lanes, guardrails, audits, and safe shipping.
Failure modes: unreviewable work, infinite loops, "trust me" outputs, wrong-context operations.

3) Human-collaboration patterns (LLM ↔ human)

How to ask smart questions and get clear signals.
Failure modes: question dumps, walls of text, hidden blockers, thrash.

4) LLM / prompt patterns

How to talk to models: contracts, structure, and verification hooks.
Failure modes: hallucinations, ambiguous success, non-repeatable runs.

Pattern details (workshop-linked)

Each pattern below has operational copy, not just a label, so workshop references always land on usable guidance.
System: Receipts over vibes

Claims need evidence that another person can inspect quickly: links, command output, tests, diffs, or concrete artifacts. If a claim cannot be verified in under a minute, it is still a hypothesis.

Use this when an AI or human summary sounds right but has execution risk. The pattern reduces trust debt by forcing inspectable proof.

System: Stop rules

Every investigation or generation loop needs a pre-declared stop condition: timebox, max rounds, or diminishing-returns threshold. Without that boundary, quality drops while confidence and token burn keep rising.

A practical stop rule: 3 iterations without net-new findings means ship current best or escalate with explicit unknowns.
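That rule can be sketched as a small loop harness. This is an illustrative sketch, not the document's own tooling: the `investigate` callable and its behavior are hypothetical stand-ins for whatever produces findings per round.

```python
def run_with_stop_rule(investigate, max_stale_rounds=3, max_rounds=20):
    """Run `investigate` until it stops producing net-new findings.

    `investigate` is a callable returning a set of findings per round.
    Stops after `max_stale_rounds` consecutive rounds with nothing new
    (the diminishing-returns threshold), or at the hard cap `max_rounds`.
    """
    seen = set()
    stale = 0
    for _ in range(max_rounds):
        new = set(investigate()) - seen
        if new:
            seen |= new
            stale = 0
        else:
            stale += 1
        if stale >= max_stale_rounds:
            break  # ship current best or escalate with explicit unknowns
    return seen
```

The key design choice is that the boundary is declared before the loop starts, so "one more round" is no longer a judgment call made under sunk-cost pressure.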

System: Small slices

Ship in reversible increments that can be tested independently. A smaller slice lowers blast radius, makes regressions easier to localize, and shortens feedback loops for both humans and agents.

Prefer one behavior change per PR with its own verification receipt instead of bundling many speculative improvements.

System: Provenance-first

Track where decisions and outputs came from: source URL, model tier, tool run, and commit context. Provenance makes disputes resolvable because the chain of evidence is explicit.

When two outputs conflict, provenance tells you what to trust and what to retest.
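A minimal sketch of such a record, assuming only the fields the pattern names (source URL, model tier, tool run, commit); the field and method names here are hypothetical, not a real schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class Provenance:
    """Evidence chain attached to one output."""
    source_url: str
    model_tier: str
    tool_run_id: str
    commit: str
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def conflicts_with(self, other: "Provenance") -> bool:
        """Outputs built from different sources or commits are not
        directly comparable and should be retested, not argued over."""
        return (self.source_url, self.commit) != (other.source_url, other.commit)
```

Because the record is frozen, provenance cannot be quietly edited after the fact, which is what makes the chain of evidence trustworthy.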

System: Ambient context anchoring

When you are running many projects across multiple machines, everything starts to look the same. Each context switch costs orientation time: where am I, what branch is this, and is this elevated? Ambient context anchoring removes that tax by making context visible before you read.

Design principles
  • Color first: color is the fastest orientation signal.
  • Emoji second: visual markers confirm context quickly.
  • Text as fallback: project + branch + environment for precision.
  • Cross-surface consistency: same project, same color in editor, shell, and dashboards.
  • Privilege visibility: elevated/admin shells must be visually distinct.

Implementation layers

  • Editor chrome: title bar, status bar, and activity bar accents per project.
  • Window title: emoji + project + branch visible at all times.
  • Shell prompt: machine identity, path, and git branch encoded in prompt.
  • Admin prompt variant: explicit elevated badge and color shift.
  • Terminal tabs: project-aware titles, not generic shell names.

Related anti-pattern: Tabula Rasa causes repeated re-orientation and context overflow. Ambient anchors are the visual state layer that prevents that drift.

Receipts

  • Workspace-scoped editor themes with consistent project accents.
  • Prompt-level machine + branch context visible in terminal output.
  • Dashboard project filters retained as persistent UI state.

System: Honest path, fast path

The safest workflow must also be the fastest default. If guardrailed behavior is slower than shortcuts, teams will predictably bypass controls under schedule pressure.

Design controls so the compliant route has less friction than the risky route.

System: Tier discipline

Route work to the lowest capable tier first. Escalation should happen only when lower tiers cannot complete the task with acceptable quality, not because escalation feels easier.

This preserves scarce high-tier attention for truly hard problems and prevents silent hierarchy collapse.

System: Cascade

Operational form of tier discipline: tasks enter at the bottom and flow upward by evidence of need. A broken cascade shows up when upper tiers become the default landing zone.

Audit the queue regularly: if escalations are routine rather than exceptional, routing logic is wrong.
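Tier discipline plus the cascade audit can be sketched together. Everything here is illustrative: the tier names, the `attempt` callable, and the 20% audit threshold are assumptions, not values from the source.

```python
def route(task, tiers, attempt):
    """Try each tier bottom-up; return (tier, result, escalations).

    `attempt(tier, task)` returns a result, or None when the tier
    cannot complete the task with acceptable quality.
    """
    escalations = 0
    for tier in tiers:
        result = attempt(tier, task)
        if result is not None:
            return tier, result, escalations
        escalations += 1  # escalate only on evidence of need
    raise RuntimeError("no tier could complete the task")


def cascade_is_broken(routing_log, threshold=0.2):
    """Audit heuristic: if more than `threshold` of tasks escalated past
    the bottom tier, escalation is routine and routing logic is wrong."""
    escalated = sum(1 for _, _, n in routing_log if n > 0)
    return escalated / len(routing_log) > threshold
```

The audit function is the operational half of the pattern: routing alone is easy to write, but without a periodic check the upper tiers silently become the default landing zone.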

Human / Collaboration: One question rule

Ask one high-leverage question at a time to keep decisions clear and accountable. Bundling many questions into one turn obscures which answer changed the plan.

Single-question cadence improves traceability and reduces shallow, blended responses.

Human / Collaboration: Progressive disclosure

First sentence is the bottom line; each layer after it adds detail only for readers who need it.

Leading with the conclusion lets a reader stop after one sentence without missing the decision, and keeps supporting detail from burying the point in a wall of text.

Human / Collaboration: Fail loud, fail early

Deliver bad news as soon as it is known. Hiding issues until the final summary destroys trust and adds recovery cost.

Treat risks as updates, not postmortem content.

Human / Collaboration: Separate decision vs execution

Decide before you execute. Separating the modes preserves clarity on responsibility and prevents accidental scope creep.

Decision text should be explicit enough to hand off to an implementer without re-negotiation.

LLM / Prompt: Structured outputs

Define expected response shape up front (fields, schema, validation). Structure turns model output into something code can verify and pipelines can trust.

Free-form prose can assist exploration; shipping paths should be schema-bound.
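A minimal sketch of a schema-bound check using only the standard library; the expected fields here are hypothetical, and a real pipeline might reach for jsonschema or pydantic instead.

```python
import json

# Hypothetical expected shape for a model response.
EXPECTED = {"summary": str, "confidence": float, "sources": list}


def parse_structured(raw: str) -> dict:
    """Parse model output as JSON and validate its shape.

    Raises ValueError on any drift from the contract, so downstream
    code never has to guess what it received.
    """
    data = json.loads(raw)
    for key, typ in EXPECTED.items():
        if key not in data:
            raise ValueError(f"missing field: {key}")
        if not isinstance(data[key], typ):
            raise ValueError(f"field {key!r} must be {typ.__name__}")
    return data
```

The point of failing loudly at the boundary is that a malformed response becomes a retry or an alert, not silent garbage flowing into the next stage.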

LLM / Prompt: Evidence-first instructions

Require supporting evidence before conclusions. This flips the default from persuasive language to auditable reasoning and catches hallucinated certainty early.

A useful prompt contract: claim -> source -> uncertainty -> next verification step.
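That contract can be represented as a record downstream code can audit. This is a sketch under assumed names; the field names map one-to-one onto the contract above but are not from any real library.

```python
from dataclasses import dataclass


@dataclass
class EvidencedClaim:
    claim: str
    source: str              # URL, command output, or concrete artifact
    uncertainty: str         # what could still be wrong
    next_verification: str   # cheapest step that would falsify the claim

    def is_auditable(self) -> bool:
        """A claim with no source or no falsifying step is still
        a hypothesis, not a conclusion."""
        return bool(self.source.strip()) and bool(self.next_verification.strip())
```

Forcing each field to be non-empty is what flips the default from persuasive language to auditable reasoning: the model (or human) must state how it could be wrong before the claim counts.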

LLM / Prompt: Adopt / Test / Ignore

Every new idea gets one of three dispositions: adopt now, test behind a boundary, or ignore for now. This prevents indefinite parking-lot loops disguised as progress.

The pattern is valuable because it forces an explicit decision even when evidence is incomplete.
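The three dispositions make a natural closed enum, which is the mechanical way to rule out a fourth "decide later" option. The triage rule below is a toy assumption for illustration, not the document's actual criteria.

```python
from enum import Enum


class Disposition(Enum):
    ADOPT = "adopt now"
    TEST = "test behind a boundary"
    IGNORE = "ignore for now"


def triage(idea: str, has_evidence: bool, is_reversible: bool) -> Disposition:
    """Toy decision rule: adopt only with evidence; otherwise run a
    cheap reversible trial if one exists; else explicitly ignore."""
    if has_evidence:
        return Disposition.ADOPT
    if is_reversible:
        return Disposition.TEST
    return Disposition.IGNORE
```

Because every branch returns a `Disposition`, no idea can exit triage undecided, which is exactly the parking-lot loop the pattern prevents.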

Common questions

What makes a pattern operational instead of theory?

A pattern is operational when it names a repeatable behavior, a measurable outcome, and at least one failure mode it prevents.

How do workshop posts connect to this page?

Each workshop post links directly to relevant pattern anchors so receipts map back to reusable guidance.

How often do these patterns get updated?

Patterns are revised whenever tooling behavior, model behavior, or field outcomes change enough to invalidate prior guidance.

Keep reading

New failure modes with no clean precedent — and the patterns to counter them.