This library entry is part of The Extended Frontier thesis. Entries are curated with AI assistance and human review; most initial entries were prepared with Claude (Anthropic), while individual entries may note other assisting systems. Metadata and annotations are editorial, not peer-reviewed. Entries flagged as unverified may contain placeholder dates, authors, or classifications.

From Junior to Senior: Allocating Agency and Navigating Professional Growth in Agentic AI-Mediated Software Engineering

Dana Feng, Bhada Yun, April Yi Wang · paper · source
Agency in software engineering is preconfigured at the organizational layer (policies, tooling defaults, CI guardrails) before individual preferences matter.

Three-phase mixed-methods study with 20 software engineers (10 junior, 10 senior) examining how agency is allocated between humans and agentic AI. Finds that organizational policies and norms preconfigure agency before individual preferences, with seniors maintaining control through delegation and juniors oscillating between over-reliance and resistance.

Classification

Role
field-observation
Domain
software
Source type
paper
Harness types
social-harness, ratification-harness, interface-harness
Validation position
before-action, post-deployment
Validation mode
social, empirical
Prescription stance
mixed
Relation to argument
institutions-shape-capability, first-mile-input-formation, observability-matters, breakdown-when-harness-absent, diffusion-adoption-bottleneck
Tags
agency, software-engineering, mentorship, junior-senior, agentic-ai, code-review, tacit-knowledge, cursor, organizational-policy, prompt-review

Extended capability commentary

Input legibility
Seniors shape inputs well — scoping, constraining, arriving with a plan. Juniors struggle because they do not know what to ask. The familiar/unfamiliar split (Figure 1) is essentially about whether the human can form a good first-mile input.
Task structure
Figure 1 frames the entire paper around familiar (well-structured) vs unfamiliar (poorly-structured) tasks and how agency allocation diverges across them.
Repairability
Covers recovery when the agent goes wrong. Seniors iterate and refine; juniors spiral. J6: "it started spiraling ... I just stopped it. The fix was a three-line change."
Observability
Prompt history review is a major theme. S7: "accept, accept, accept is a very different thing versus generating all that content and then not actually reading it." Seniors read diffs, reject bloat, cross-check with other models.
Institutional ratification
The headline finding. Agency is configured before the first prompt is sent: "company policies, security regimes, and mandates to use AI weekly set default loci of control." Three non-negotiables: interruptibility/override, legible provenance, small test-bounded diffs.

Why it matters

Empirical evidence that agency in AI-mediated software engineering is configured at the organizational layer — policies and norms — before individual tool use begins. Directly supports the Extended Frontier thesis that institutions shape capability and that the social harness (mentorship, code review, norms) is constitutive of effective AI use, not a post-hoc check.

Annotation

CHI '26 paper. Three-phase qualitative study (ACTA interviews, Cursor debugging task, blind senior review of junior artifacts) with 20 software engineers examining how agency — the distribution of decision authority and accountability — is allocated between humans and agentic AI tools.

The central finding is that agency in software engineering is preconfigured at the organizational layer before individual preferences matter: company policies, security regimes, tooling defaults, and norms set the boundaries. Within those bounds, seniors and juniors take divergent routes. Seniors maintain control through detailed delegation and iterative refinement in familiar contexts, and through strategic oversight in unfamiliar ones; juniors oscillate between over-reliance ('spamming the agent button') and defensive resistance.

Policies and norms

The paper surfaces both formal policies and informal norms as shaping agency. On the policy side: approved tool lists, data-sharing restrictions, mandates to use AI. On the norms side: senior code review practices (S2: 'I try and give concise comments on code and try and also point to documentation'), social signals about AI-generated code (S6: 'building things we don't need right now, you can just tell, nowadays ... it's probably AI'), and the emerging norm of 'constant vigilance' when reviewing junior PRs that may be AI-assisted.

The mentorship reframe

The senior role transforms from answering questions to asking them: seniors become 'Socratic guides and organizational anchors.' Junior growth is reframed as 'earning judgment through deliberate restraint': knowing when not to delegate, when to trust instincts, and when to seek human guidance. The traditional pipeline of gradual technical mastery is disrupted; juniors now engage with production systems immediately, but with accountability mechanisms such as the proposed Prompt & Code Reviews (PCRs).

Imposter syndrome and ownership

Juniors report that AI undermines ownership. J8: 'It has my name on it, but I have no idea why it works.' J3, after being praised for speed: 'Cursor did the work ... I just tried to find the problem.' J9 felt 'like a fraud' after a hackathon. The paper frames this as authorship without understanding: code emerges from prompts, and accountability becomes diffuse.

Open questions

  • How do PCRs scale? The paper acknowledges code review fatigue is already a problem.
  • The study is a snapshot from summer 2025 — how quickly do these dynamics shift as tools improve?
  • Does the junior experience of agency loss parallel deskilling concerns in other domains, or is software engineering structurally different?

Related entries

Overlap is computed on tags, relation-to-argument, and harness types — not on role or domain, because contrasts are often the most useful neighbours.