Post #11
Rowan · Essay — review evidence, not verdicts


Alcove Dux and the Evidence Floor

Similarity evidence is useful. Automated accusation is not. Dux belongs on the reviewer’s desk, not in the judge’s chair.

John Malone writes these field notes from live build work in AI systems and human-agent workflows. Receipts: GitHub · LinkedIn

Evidence boundary
  • Public artifact checked: Alcove Dux live demo page returned HTTP 200 on May 14, 2026.
  • Public source: the project README describes Dux as local-first plagiarism and text-reuse evidence for reviewers.
  • Claim not made here: Dux decides misconduct, authorship, intent, or institutional policy.
  • Pattern links: Receipts over vibes and provenance-first.

Alcove Dux is the easiest Alcove project to overstate, because plagiarism detection is a loaded category. The useful version avoids that trap. It does not say, “this person cheated.” It says, “here is similarity evidence a reviewer can inspect.”

That one sentence is the product boundary. Dux is not an automated misconduct system. It is a local-first evidence generator for text reuse, near duplication, possible paraphrase, and review triage.

The evidence floor is higher than a hunch and lower than a verdict.
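That floor can be made concrete. A minimal sketch of similarity evidence, assuming nothing about Dux's actual internals (the function name, the word-level n-gram approach, and the default window are all illustrative): it returns the shared phrases themselves, so a reviewer can inspect each match in context rather than trust a score.

```python
from typing import List

def shared_ngrams(a: str, b: str, n: int = 4) -> List[str]:
    """Return word n-grams that appear in both texts.

    This is evidence, not a verdict: a shared phrase is something a
    reviewer can look up in context. It says nothing about intent,
    permitted collaboration, or policy.
    """
    def grams(text: str) -> set:
        words = text.lower().split()
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}
    return sorted(grams(a) & grams(b))

draft = "the quick brown fox jumps over the lazy dog"
other = "yesterday the quick brown fox jumps high"
print(shared_ngrams(draft, other))
```

The output is a list of inspectable phrases, not a flag on a person, which is the whole point of the evidence floor.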

Why local matters here

Review workflows often involve private drafts, student work, unpublished manuscripts, internal policy documents, or sensitive correspondence. A tool that requires uploading all of that to a third-party service has already changed the trust equation.

Local-first does not make the reviewer correct. It does make the review surface easier to reason about. Inputs stay local. Reports can separate public-safe summaries from local review pages. The reviewer can inspect the matched text instead of outsourcing judgment to a black box.
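The public-safe/local split is easy to show in miniature. A hypothetical sketch (the record shape and both function names are assumptions, not Dux's API): the public summary carries only counts and scores, while the quoted text stays in the local review page.

```python
# Hypothetical match records produced by a local similarity pass.
matches = [
    {"pair": ("draft_A", "draft_B"),
     "span": "results were consistent with prior work",
     "score": 0.91},
]

def public_summary(matches: list) -> dict:
    """Counts and scores only; no quoted text leaves the machine."""
    return {"pairs_flagged": len(matches),
            "max_score": max((m["score"] for m in matches), default=0.0)}

def local_review_page(matches: list) -> list:
    """Full matched text, for the reviewer's local inspection only."""
    return [{"pair": m["pair"],
             "matched_text": m["span"],
             "score": m["score"]}
            for m in matches]
```

The design choice is the asymmetry: anything that could identify or quote a private draft appears only in the local artifact.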

🔨 Campion Builder note

The implementation line to protect is report semantics. Labels such as exact overlap, near duplicate, possible paraphrase, and needs review are evidence labels. They are not discipline labels.
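One way to hold that line in code, as a sketch rather than Dux's actual logic (the thresholds are illustrative placeholders): the function maps a similarity score to one of the four evidence labels above, and the docstring is the contract that the label describes text, not a person.

```python
def evidence_label(similarity: float, verbatim: bool) -> str:
    """Map a similarity score to an evidence label.

    Labels describe the relationship between two texts. They are
    not discipline labels, and the thresholds here are placeholders
    a real tool would have to validate.
    """
    if verbatim and similarity > 0.95:
        return "exact overlap"
    if similarity > 0.85:
        return "near duplicate"
    if similarity > 0.60:
        return "possible paraphrase"
    # Weak or ambiguous signal: the only honest label is a
    # request for human attention, not a conclusion.
    return "needs review"
```

Keeping the label set closed and descriptive is what protects report semantics: nothing in the output vocabulary can be mistaken for a finding of misconduct.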

What the reviewer still owns

Dux can show overlap. It cannot know assignment instructions, permitted collaboration, drafting history, citation norms, disability accommodations, translation context, or institutional policy. Those are not edge cases. They are the human context that makes the evidence meaningful.

This is why evidence-first instructions matter. The system should return inspectable artifacts before interpretation. Only after the evidence is visible should a human decide what it means.

🗡️ Devil’s Advocate Counterpoint

Any public Dux copy that sounds like accusation automation should be cut. The safe claim is that Dux helps reviewers inspect similarity evidence. Anything stronger needs policy context and real validation data.

The workshop lesson

Alcove Dux is a useful example because the ethical boundary is obvious. The tool can reduce review cost without pretending to replace review. That same shape applies across Alcove: retrieve, cite, inspect, decide.

Published May 14, 2026