The Extended Frontier: Extensions Diffuse Through People

Daniel Griffin · Hypandra · 7 min read

Working Draft

This post is a working draft, developed collaboratively with Claude Opus 4.6 (1M context) in Claude Code (Anthropic) across multiple sessions, using a variety of extensions: persona-based reviews, citation verification, AI detection analysis (Pangram Labs), deep research reports, and dissertation search (qmd). The argument, evidence curation, and editorial direction are Daniel’s; much of the prose was initially generated by Claude and is being iteratively rewritten. A changelog tracks the development. We will continue to write about how we’re exploring this idea—the process is part of the argument.

Extensions don't emerge because a domain is 'technical.' They travel through people and their networks—the doctor who codes, the lawyer who builds tools, the technical writer who brings QA norms into documentation. The frontier smooths not because the AI improves but because someone carried verification practices from one domain into another.


Changelog
  • 2026-03-24 — Stub published.
*This is part of [The Extended Frontier](/2026/03/24/extended-frontier) series.*

The earlier posts in this series treated extensions as properties of practices and domains. But practices don't exist in the abstract. They're carried by people, and people move between domains. This post is about what happens when they do. Extensions are in practices, but practices are in people, and people carry extensions across boundaries in ways that reshape the frontier.

The cross-trained perspective

Most people working within a single domain don't see extensions clearly, because they're just "how the work is done." The extensions become visible when someone works across two domains with very different extension profiles. Cross-trained practitioners—doctors who also write software, lawyers who build tools, robotics engineers who move into legal tech—carry extensions from one domain into another, and the contrast makes the mechanism legible.

What's striking about these practitioners is that none of them explain AI's uneven performance as a capability difference in the model. They explain it as a difference in what the surrounding practice provides. They talk about verification speed, feedback loop structure, consequence architecture. They're describing extensions without using the theoretical language—which is, in some ways, the strongest confirmation that the framework captures something real about how work is organized.

Chukwuma Onyeije: medicine + software

Chukwuma Onyeije is a maternal-fetal medicine specialist who also develops software. His way of characterizing the difference between domains is temporal: "You can test your hypothesis in 30 seconds instead of waiting for morning labs." That's not a claim about what LLMs can do. It's a claim about what each practice gives you for catching mistakes. In software, the feedback loop between generation and verification is measured in seconds—write code, run it, see the result. In medicine, the loop is measured in hours or days. Morning labs. Imaging results. Patient follow-up.

Onyeije sees LLMs excelling at what he calls "imposing structure on complexity"—synthesis, prioritization, documentation—while clinicians retain the judgment calls. The transplant pattern he describes is treating software like a clinical toolchain: "prescribe" existing abstractions, use AI to eliminate syntax friction, keep domain intent in the driver's seat. Medical culture expects stability; software culture expects rapid iteration. The temporal structure of the extensions differs, and that temporal difference predicts where AI will be useful in each domain. Not the model's capability. The practice's speed of verification.

Chris Bridges: law + software

Chris Bridges, a lawyer who also develops software, makes the systems-level version of the argument. Legal work won't reach the same "vibe coding" state as software, he argues, because it "lacks comparable foundations." He isn't saying legal AI is worse. He's saying the practice environment around legal work doesn't provide the same infrastructure for rapid, cheap verification—the point Post 1 made about the compiler contrast, but arrived at independently through cross-domain experience.

Mark Barrett, who works across software, law, and healthcare, reinforces the point from a different angle: legal and healthcare are more similar to each other than to software because they require long-accumulated professional knowledge and operate under regulation and compliance structures that create a different kind of verification envelope. The question isn't just "can you check the output?" but "who is accountable, what standards must be met, and how are errors detected and sanctioned?" The institutional verification structure is itself an extension—one that shapes what AI use looks like in practice.

Otto von Zastrow: robotics → legal tech

Otto von Zastrow came from a robotics background and moved into legal technology. His formulation is blunt: "Make it so you can open the case and verify it." What he did was transplant the auditable-retrieval pattern from engineering into legal research. In engineering, you don't trust a system's output—you build in ways to inspect it. Von Zastrow applied the same logic to AI-generated legal research: move from plausible synthesis (the AI produces something that sounds right) to auditable retrieval plus synthesis (the AI produces something with citations you can follow to the primary source and spot-check).

This is a specific, transferable design move. The AI's output didn't change. What changed was the extension around it—the requirement that every claim be traceable to a verifiable source. Von Zastrow didn't bring legal expertise into robotics or robotics expertise into law. He brought an extension pattern: the practice of making outputs inspectable.
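The pattern is concrete enough to sketch in code. Below is a minimal, hypothetical illustration of auditable retrieval: every claim must carry a citation that resolves in a primary-source database, and anything unanchored is flagged for the human rather than trusted. The database, citation format, and claim structure here are all invented for illustration, not drawn from any real legal-tech system.

```python
# Hypothetical sketch of the auditable-retrieval pattern: require every
# AI-generated claim to carry a citation resolvable to a primary source.
# The case database and citation below are fictional placeholders.

CASE_DATABASE = {
    "123 F.3d 456": "Example Co. v. Illustration Corp. (fictional)",
}

def audit(claims):
    """Split claims into verifiable and unanchored.

    A claim is verifiable only if its citation resolves in the
    primary-source database; the human then spot-checks that source
    instead of trusting the synthesis itself."""
    verifiable, unanchored = [], []
    for claim in claims:
        cite = claim.get("citation")
        if cite and cite in CASE_DATABASE:
            verifiable.append((claim["text"], CASE_DATABASE[cite]))
        else:
            unanchored.append(claim["text"])
    return verifiable, unanchored
```

The design choice is that the output format, not the model, does the work: a claim without a path back to ground truth is treated as unusable by construction.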

How verification travels

What's interesting is that these practitioners are all doing versions of the same thing, though they wouldn't describe it that way.

Silva, a technical writer who is also an engineer, developed what he calls "Docs as Tests"—treating documentation as testable claims about product behavior. He borrowed this directly from software QA norms. The key distinction he draws is between deterministic tests (repeatable, stable signal) and probabilistic tools that "can't be trusted on their own." Without the test, the output is unanchored. His professional role becomes curation and verification, not production. That's a familiar move if you come from engineering. It's a strange one if you come from writing.
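To make the "testable claims" idea concrete, here is a minimal sketch (not Silva's actual tooling): a sentence in the documentation is rewritten as a deterministic test against the behavior it describes. The `slugify` function stands in for whatever product feature the docs cover.

```python
# Minimal sketch of "docs as tests": a documented claim about product
# behavior becomes a repeatable assertion. The product function below is
# a stand-in, not from any real docs pipeline.

def slugify(title):
    """The product behavior the documentation describes (stand-in)."""
    return title.lower().strip().replace(" ", "-")

# Documentation claim: "Titles are lowercased, trimmed, and spaces
# become hyphens." The test anchors the prose to observable behavior.
def test_doc_claim_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trimmed  ") == "trimmed"
```

If the claim drifts out of sync with the product, the test fails, which is exactly the stable signal a probabilistic drafting tool cannot provide on its own.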

Von Zastrow does something structurally similar but in a completely different domain—he forces legal AI output to include citations with links to the actual case database. Instead of asking the AI to be right, he builds the output format so the human can check whether it's right. You don't need legal expertise to follow a link and see if the cited case says what the AI claims it says. You need the system to give you the path back to ground truth. That's the robotics engineer's instinct: make it so you can open the case and verify it.

Cam Soper, a technical content engineer, puts it more bluntly: "Claude orchestrates; the script executes." If you need deterministic output, don't let a probabilistic tool handle the execution step. Use the AI for planning, structuring, drafting—then hand off to a deterministic system for the part that has to be right. The extension is the separation itself, and it's a design pattern Soper carried from software architecture into content engineering.
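The separation Soper describes can be sketched as a small pattern, assuming a made-up plan format (none of these names come from his actual setup): the model proposes a plan as structured data, and a deterministic executor validates each step against a whitelist before anything runs.

```python
# Hedged sketch of "the AI orchestrates; the script executes": the
# probabilistic layer emits a plan as data, and a deterministic function
# performs the step that has to be right. Plan format is invented.

def plan_from_model():
    """Stand-in for an LLM call that returns a structured plan,
    never raw commands to execute directly."""
    return [
        {"op": "rename", "src": "draft.md", "dst": "final.md"},
    ]

ALLOWED_OPS = {"rename", "copy"}

def execute(plan, dry_run=True):
    """Deterministic executor: validates each step against a whitelist,
    so the probabilistic layer cannot improvise new operations."""
    results = []
    for step in plan:
        if step["op"] not in ALLOWED_OPS:
            raise ValueError(f"unapproved operation: {step['op']}")
        results.append(f"{step['op']}: {step['src']} -> {step['dst']}")
        # real file operations would run here when dry_run is False
    return results
```

The extension is the boundary itself: the part that must be correct never passes through the part that is merely probable.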

Jason Crawford, a writer and former software engineer, found that AI agents behave like "junior engineers"—they skip basic professional practices like tests, branches, and development environments. His response was to "level up" the agents with modular "skills" documents encoding best practices from software: regression tests, TDD, branch-based changes. He's not teaching the agents engineering knowledge. He's reconstructing the extensions that a proper software environment would provide—encoding them into the agent's working context rather than hoping the model internalized them during training.

What these cases share, if you step back, is that each practitioner recognized that what makes AI work in one domain is the surrounding verification practice—and then carried that practice, or something analogous, into a domain that lacked it.

Who builds extensions and who can't

The cross-trained practitioners who transplant extensions are a specific population. They're technically fluent, often with formal training or deep experience in software, and they have access to multiple professional domains. Von Zastrow's robotics background gave him the auditable-retrieval pattern. Silva's engineering experience gave him testable-claim thinking. Crawford's software career gave him the TDD and branch-management practices he encoded for agents.

This raises the equity question that the extensions framework makes visible but doesn't resolve. Extensions diffuse through networks—professional networks, educational institutions, open-source communities, conference circuits. The cross-trained transplant pattern depends on having been in the right rooms. A solo practitioner without technical training, without access to the communities where these patterns circulate, faces a jagged frontier and has no mechanism for smoothing it. They're not less capable. They have fewer extensions to draw on.

Greg Lambert, who describes himself as a "librarian-lawyer-programmer," built his cross-domain identity around "connecting unrelated items into a process"—treating workflows as composable systems rather than isolated craft acts. That compositional thinking is itself an extension, one he accumulated across careers. The question for the extended frontier framework is whether that kind of knowledge can be made available to people who haven't had those careers.

This is where the argument gets structural. The jagged frontier isn't just a property of the model or even of the domain. It's a property of the practice ecology the person inhabits. And practice ecologies are unequally distributed. The people who will smooth the frontier first are the people who already have the richest extension networks. Everyone else waits—or gets the jagged version.

The Narayanan heuristic

Arvind Narayanan, via Simon Willison, offers a heuristic that's been widely cited: AI is helpful when "it is faster to verify the output than it is to do the work yourself." This is a good heuristic. It's also, within the extensions framework, a workflow claim rather than a benchmark claim. It predicts where AI will work based on the extensions available to the practitioner, not on the model's raw capability. A veterinary version of the same idea: use AI where "doing the work yourself is time consuming, but verifying is easy."

But the heuristic has a hidden assumption that the extensions framework surfaces. "Faster to verify" assumes you know how to verify. Verification is not a generic skill—it requires domain-specific extensions (Shepardizing for law, a configured compiler for code, clinical judgment for medicine), and those extensions are unevenly distributed.

*Shepardizing: checking whether a legal citation is still "good law," i.e. whether it has been overruled, distinguished, or otherwise affected by subsequent decisions (Wikipedia).*

The ability to verify is itself an extension. The Narayanan heuristic works—but only for people who already possess the verification extensions. For everyone else, the output looks plausible and the verification looks impossible. The frontier is smooth for the person who can check the work and jagged for the person who can't, even though they're looking at the same output from the same model.

What the cross-trained practitioners keep showing, from different directions, is that the frontier moves when the practice around the model changes. The model stays the same. Onyeije's 30-second feedback loop, von Zastrow's auditable citations, Silva's testable docs—none of these change what the AI can do. They change what you can catch, how fast you can catch it, whether there's a path back to ground truth. The jaggedness isn't in the model. It's in everything else.