IDF · Alpha · 2026

Intent
Driven
Flow

Intent, Cycle, Iteration — a three-level model for AI-assisted delivery.

Intent → Cycle → Iteration
Intents survive flag go-live and close on signal observation
Agnostic — agents, spec-driven, or hybrid

Traditional SDLC was built for human bandwidth. This isn't.

IDF — Intent Driven Flow — is a delivery governance framework for teams with fast execution cycles. Whether that speed comes from AI agents, spec-driven automation, or a hybrid approach, the execution model is yours to choose. What IDF governs is the layer above: how outcomes are defined, how work is reviewed, and how features reach users.

Core idea 1 — Outcomes, not tasks
Define what success looks like. Let the team figure out how.

The person responsible for the product writes one statement: what needs to change for the client, and how you'll know it happened. The team works toward that outcome — not a list of tasks. The work isn't done when a feature ships. It's done when the outcome is confirmed.

Core idea 2 — Deploy first. Release when ready.
What's built and what users see are two separate decisions.

Every piece of work reaches production invisible to users, behind a switch. A responsible person reviews the completed work, then decides when — and whether — to turn it on. This keeps risky releases out of the equation: work can ship continuously without exposing anything until a human says it's ready.

Core idea 3 — Humans in control
AI assists the execution. Humans own every critical decision.

A human sets the goal. A human reviews the work and tries it before it moves forward. A human decides when clients see it. At every critical transition, a person owns the call. Speed comes from the execution layer. Control never leaves the team.

Core idea 4 — Iterate to the signal, not to perfection
The first attempt is directional. Evidence tells you when you're done.

AI makes execution cheap. The first attempt does not need to be right — it needs to be directional. Ship something working, observe what it tells you, and go again. Progress is measured by evidence, not plans.

Core idea 5 — The only block is "not ready for production"
Build around every constraint. Workarounds are temporary by design.

No external constraint justifies stopping work. If you don't have it, mock it. If it isn't designed, build it ugly. The quality bar is production-safe, not complete. Constraints are temporary; blocked work is permanent waste.

Every piece of work is an Intent (the outcome), a Cycle (one delivery unit), or an Iteration (one implementation pass). The sections that follow define each level, each gate, and each role in full.

Corporate perspective

IDF governs the delivery layer. For the corporate governance model — True North, the intent cascade from strategy to team, and the agentic org structure — see The Agentic Organization →

Strategy moves slowly. Delivery moves fast. Gates activate value safely.

IDF operates at two different tempos. The Slow Flow is where direction is set — understanding what clients need, deciding which outcomes matter, writing the intent. The Fast Flow is where delivery happens — building, reviewing, releasing. They run independently but serve the same goal: the Slow Flow decides what to pursue, the Fast Flow delivers it.

Slow Flow — strategy · weeks to months
Ideation
True North
Intent
True North sits at the top of a corporate cascade — IDF consumes the Intent that emerges from it, not the cascade itself
Intent enters Fast Flow at Gate 1
Fast Flow — execution · hours to days
Gate 1
Execute
Gate 2
Gate 3
Execute = Cycles and Iterations, spec-driven, AI-assisted, or fully agentic — whatever the team uses. Gates govern the transitions, not the method.
Gate 3 triggers monitoring — the flag is live, the signal window is open
Monitoring — continuous · triggered by Gate 3
Flag live · signal window open
PO watches signal
Signal observed?
↑ Slow Flow · signal achieved → intent closed or new intent begins
↺ Fast Flow · signal not achieved → new cycle begins
What are gates?

Gates are the three human checkpoints in the Fast Flow. Each one is a decision point — nothing moves to the next phase without a person making the call. Gate 1 aligns intent and scope before execution begins. Gate 2 reviews each implementation pass. Gate 3 authorizes the release and activates value for clients. The full definition of each gate is in §07–09.

Slow Flow
Where strategic goals become committed Intents.
⟶ Tempo: weeks to months
⟶ Produces: Team Intent — the outcome statement that enters Gate 1
Fast Flow
Where Intents become working software behind a feature switch.
⟶ Tempo: hours to days
⟶ Produces: feature ready for release — execution model is yours to choose
Monitoring
Where the flag is live and the outcome is being measured.
⟶ Triggered by: Gate 3 — flag goes Live-ON
⟶ Closes when: PO confirms signal achieved or initiates a new cycle
Corporate perspective

Intents arrive at Gate 1 from a corporate cascade — True North → Strategic Intent → Domain Intent → Team Intent. IDF governs from Team Intent downward. See The Agentic Organization →

Where Intent fits in the flow
Intent
Gate 1
Cycle
Gate 3
Monitoring
completed

An Intent is a commitment to a client outcome — not a feature, not a task.

Everything in IDF starts with an Intent. An Intent answers two questions: what needs to change for the client, and how will you know it happened? Both must be answered before any work begins.

Intent template
[Client outcome]  ·  by [mechanism]  ·  measured by [signal]

Every intent must define how the outcome will be measured. IDF calls that instrument a signal — a specific, observable fact declared before delivery begins, not chosen after. If you can name the outcome but not the signal, you don't have an intent yet.

Example
Intent: Reduce checkout abandonment at the payment step.
Signal: abandonment rate drops below 12% (currently at 22%).
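As an illustration only, the template maps naturally onto a small record. IDF does not prescribe a data model; the `Intent` class and its field names below are assumptions made for this sketch, not framework vocabulary:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Intent:
    """[Client outcome] · by [mechanism] · measured by [signal]."""
    outcome: str    # what needs to change for the client
    mechanism: str  # how the team proposes to get there
    signal: str     # observable fact, declared before delivery begins

    def __post_init__(self):
        # An outcome without a declared signal is not an intent yet.
        if not self.signal.strip():
            raise ValueError("no signal declared — this is not an intent yet")


checkout = Intent(
    outcome="Reduce checkout abandonment at the payment step",
    mechanism="simplifying card input",
    signal="abandonment rate drops below 12% (currently 22%)",
)
```

The constructor rejecting an empty signal is the point of the sketch: the rule "if you can name the outcome but not the signal, you don't have an intent yet" becomes enforceable, not aspirational.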
Outcome statement
What needs to change for the client

One sentence. Not what to build — what to achieve. The team commits to this statement, not to any specific implementation. How it gets built is the Fast Flow's job.

Measurement signal
The observable fact that confirms it

A number, a rate, a behaviour — something real that can be observed after the work ships. If the signal can't be measured, the intent can't be closed. Vague signals produce open-ended commitments.

Intent lifecycle
open → in-progress → monitoring → completed or abandoned
open → in-progress when Gate 1 passes · monitoring when the flag goes live · completed or abandoned by explicit PO declaration
An Intent is not closed when the feature ships. It enters monitoring when the flag goes live — that is when the measurement window begins. It closes only when the Product Owner observes the signal and explicitly declares it complete. No one else can close an Intent.
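The lifecycle above can be sketched as a transition table plus one ownership rule. This is a hypothetical encoding, not part of IDF; the `advance` helper and status strings simply mirror the states named in this section:

```python
# Allowed intent transitions; completion and abandonment are explicit PO actions.
TRANSITIONS = {
    "open":        {"in-progress"},                            # Gate 1 passes
    "in-progress": {"monitoring", "abandoned"},                # flag goes live / PO stops
    "monitoring":  {"in-progress", "completed", "abandoned"},  # new cycle / PO closes / PO stops
}


def advance(status: str, to: str, actor: str = "") -> str:
    """Return the new status, rejecting illegal transitions and non-PO closures."""
    if to not in TRANSITIONS.get(status, set()):
        raise ValueError(f"illegal transition {status} -> {to}")
    # Only the Product Owner can close or abandon an intent.
    if to in {"completed", "abandoned"} and actor != "PO":
        raise PermissionError("only the PO closes an intent")
    return to
```

Note that `open → completed` is simply absent from the table: shipping a feature can never close an intent without passing through monitoring.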

A feature is what you built.
An Intent is why you built it.
IDF tracks both — but only the Intent tells you whether the work was worth it.

Where Cycle fits in the flow
Intent
Gate 1
Cycle
Gate 2
Gate 3
Monitoring

One delivery unit. One controlled release. Starts at Gate 1, ends at Gate 3.

A Cycle is one end-to-end delivery run toward an Intent. It begins when the team aligns on scope at Gate 1 and ends when the Product Owner authorizes the release at Gate 3. Work deploys continuously to production behind a release switch — accumulated safely, released when the PO decides. Cycles are not time-boxed — they close when the work is done and authorized, not when a timer runs out.

Cycle — one delivery unit, one feature switch
Status lifecycle
planned → executing → gate-2 → gate-3 → completed
executing after Gate 1 · gate-2 after each iteration completes · gate-3 after Gate 2 approved · completed after Gate 3 authorized
An Intent contains one or more Cycles. A complex outcome may require several Cycles before the signal is achieved.
Release switch — how the Cycle's output reaches users
Pattern · What it means · Gate 3 action
Toggle
Deployed OFF, held in production. The default IDF pattern — continuous deployment without continuous exposure. Work accumulates safely until the PO is ready.
Flip ON for all users.
Canary
Deployed OFF, released to a controlled subset first — a percentage of users, a beta cohort, or a specific region. Monitoring opens on partial audience. The PO broadens the rollout as the signal builds confidence.
Authorize rollout scope. PO defines the audience before Gate 3 passes.
Always-on
No switch. Core system changes — infrastructure, database migrations, architectural refactors — cannot be toggled off without breaking the application. They deploy directly.
Authorize deployment. No flag to flip — Gate 3 approves the deployment itself.

The release pattern is agreed at Gate 1 — before execution begins. For canary releases, monitoring opens on the subset immediately after Gate 3. For always-on work, Gate 3 authorizes the deployment, not a flag flip. The governance is identical across all three patterns; only the mechanism differs.
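A minimal sketch of how the three patterns might behave at the point where a request checks the switch. The `is_enabled` function, its parameters, and the hash-bucket canary scheme are illustrative assumptions, not an IDF-mandated mechanism:

```python
import hashlib


def is_enabled(pattern: str, state: str, user_id: str = "", rollout_pct: int = 0) -> bool:
    """Evaluate a release switch under the three IDF patterns (sketch only)."""
    if pattern == "always-on":
        return True                 # no switch: deployed means live
    if state != "Live-ON":
        return False                # Pending-OFF / Dead-OFF: invisible to users
    if pattern == "toggle":
        return True                 # flipped ON for all users at Gate 3
    if pattern == "canary":
        # Stable hash buckets users into 0-99; the PO-authorized rollout
        # percentage defines which slice sees the feature.
        bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
        return bucket < rollout_pct
    raise ValueError(f"unknown pattern: {pattern}")
```

The governance claim from the text survives in code: the mechanism differs per pattern, but every path is gated by a state or a scope that only Gate 3 changes.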

Where Iteration fits in the flow
Cycle
Iteration
Gate 2
↺ rework → new Iteration
or
Gate 3
Monitoring

One implementation pass. Every rework is named, not hidden.

An Iteration is one complete execution run within a Cycle. The team builds, tests, and produces a gate report. That report goes to Gate 2, where a human reviews it and tries the feature. If it passes, the Cycle advances. If not, feedback is written down and a new Iteration begins. Rework is never informal — every pass is a recorded artifact with a reason.

Iteration — one implementation pass
Status lifecycle
pending → executing → completed
pending on creation · executing when the team starts · completed when the gate report is produced
Trigger — distinguishes first pass from rework
initial — first Iteration in the Cycle
gate-2-feedback — created from Gate 2 feedback, with the reviewer's written reason attached
A Cycle contains one or more Iterations. The first is always initial. Each rework pass is gate-2-feedback with the specific issue recorded.
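The trigger rule can be sketched in a few lines. The `Iteration` record and `rework` helper are hypothetical names; the point they encode is that a rework pass cannot exist without the reviewer's written reason:

```python
from dataclasses import dataclass


@dataclass
class Iteration:
    number: int
    trigger: str = "initial"   # "initial" or "gate-2-feedback"
    feedback: str = ""         # reviewer's written reason, required for rework
    status: str = "pending"    # pending -> executing -> completed


def rework(previous: Iteration, feedback: str) -> Iteration:
    """Create the next pass from Gate 2 feedback. Rework is never informal."""
    if not feedback.strip():
        raise ValueError("Gate 2 feedback text is required for a rework iteration")
    return Iteration(number=previous.number + 1,
                     trigger="gate-2-feedback",
                     feedback=feedback)
```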

Intent, Cycle, Iteration — assembled

Every piece from §03–05, in one view. An Intent contains one or more Cycles. Each Cycle contains one or more Iterations. Gate 3 opens the monitoring window. Monitoring closes the Intent — or starts a new Cycle if the signal isn't achieved.

flowchart LR
    classDef intent stroke:#7c5cbf,stroke-width:2px
    classDef cycle stroke:#4a7abf,stroke-width:2px
    classDef iter stroke:#3a9e78,stroke-width:2px
    classDef gate stroke:#e8734a,stroke-width:2px
    classDef monitor stroke:#bf8f3a,stroke-width:2px
    classDef done stroke:#5a9e4a,stroke-width:1px
    classDef abandon stroke:#6a6878,stroke-width:1px
    I([Intent]):::intent --> G1{Gate 1}:::gate
    G1 -->|approved| C[Cycle]:::cycle
    G1 -->|not approved| RV([↺ Review Intent]):::intent
    C --> IT[Iteration]:::iter
    IT --> G2{Gate 2}:::gate
    G2 -->|approved| G3{Gate 3}:::gate
    G2 -->|not approved| RI([↺ new Iteration]):::iter
    G3 -->|approved| M[Monitoring]:::monitor
    G3 -->|not approved| RC([↺ new Cycle]):::cycle
    M -->|signal achieved| DONE([completed]):::done
    M -->|abandoned| AB([abandoned]):::abandon
    M -->|not achieved| NC([↺ new Cycle]):::cycle
Example walkthrough

One intent, two cycles, one rework iteration

This is an example — not a new concept. The model is already complete. This walkthrough traces a single intent through its full lifecycle using everything from §03–05: two cycles, a Gate 2 failure with rework, and a signal observation that closes the intent.

Intent: Reduce checkout abandonment at the payment step. Signal: abandonment rate below 12% (currently 22%).

I
Intent opens
PO writes the intent and logs it
The Product Owner translates a client signal (22% abandonment rate at payment step) into a single outcome statement. Status: open. The intent enters the system but no work begins until Gate 1.
INTENT_LOG — new entry
Intent: Reduce abandonment at payment step by simplifying card input.
Signal: abandonment rate below 12% (currently 22%)
Status: open
G1
Gate 1 — Intent & Cycle Alignment
Cycle 1 begins — Intent status: in-progress
The PO and Craft Engineer review the intent and the proposed Cycle 1 scope. Is this the right outcome to pursue? Is the scope coherent and deliverable? Gate 1 approves. Intent status transitions from open to in-progress. Iteration 1 is created with trigger: initial.
C1
Cycle 1 — Iteration 1
Agents build · Guardian reports · Delivery Team reviews and uses it · Gate 2 approves
The Orchestrator decomposes the intent into a task list within Iteration 1. Builders execute. The Guardian runs automated checks and produces a gate report. The Delivery Team reads the report — tests pass, performance within budget, no security issues — then opens the deployed feature and uses it. The simplified card input works as intended. Gate 2 decision: APPROVED. Cycle status advances to gate-3.
GATE_REPORT — Cycle 1 / Iteration 1
What was built: Single-field card input, billing address deferred.
Tests: PASS · Security: CLEAN · Perf: within budget
Recommendation: APPROVED — ready for Gate 3.
G3
Gate 3 — Flag Authorization
PO authorizes flag. Cycle 1 closes. Intent: monitoring.
The PO reads the gate report and authorizes the flag flip. checkout_simplified_payment moves from Pending-OFF to Live-ON. Cycle 1 status: completed. Intent status: monitoring. The intent does not close here. The measurement window begins.
monitoring · waiting for abandonment signal to stabilize
M
Signal observation — 72 hours later
Signal at 15%. Target was below 12%. PO initiates Cycle 2.
The abandonment rate improved from 22% to 15% — meaningful but still above the 12% threshold. The intent remains in monitoring. The PO decides the outcome is worth pursuing further and initiates Cycle 2 under the same intent. Gate 1 fires again — this time reviewing only the Cycle 2 scope, not re-approving the intent itself (it is already in-progress).
C2
Cycle 2 — Iteration 2a (trigger: initial)
Gate 2 requests a rework iteration
Cycle 2 aims to improve the mobile checkout experience. Iteration 2a builds a keyboard scroll fix. The Guardian produces a gate report — automated tests pass. A Delivery Team member then opens the feature on a mobile device and uses it: the keyboard obscures the card input field on older iOS, making it impossible to complete the form. The automated tests couldn't catch this. Gate 2 decision: REQUEST ITERATION with written feedback from hands-on experience.
GATE_REPORT — Cycle 2 / Iteration 2a
What was built: Mobile keyboard scroll fix on card input.
Tests: PASS · Security: CLEAN · Perf: PASS
Recommendation: REQUEST ITERATION
Feedback: Keyboard overlaps input on iOS 15 Safari — needs scroll padding fix.
2b
Cycle 2 — Iteration 2b (trigger: gate-2-feedback)
Rework iteration — addresses the iOS keyboard issue
The Orchestrator creates Iteration 2b with trigger: gate-2-feedback and the Delivery Team's feedback text attached. Builders apply the scroll padding fix. Guardian re-runs automated checks. Gate 2 reviews again — this time the fix is clean. Gate 2 decision: APPROVED.
ITERATION — Cycle 2 / 2b
trigger: gate-2-feedback
feedback: "Keyboard overlaps input on iOS 15 Safari — needs scroll padding fix."
status: completed
G3
Gate 3 — Flag Authorization
PO authorizes. Cycle 2 closes. Intent back to monitoring.
The PO authorizes the Cycle 2 flag update. The flag (already Live-ON from Cycle 1) incorporates the improved mobile behaviour. Cycle 2 status: completed. Intent remains in monitoring. A second measurement window opens.
monitoring · second measurement window — targeting <12%
Signal observed — 48 hours later
Abandonment at 9.4%. Intent: completed.
The PO observes the abandonment rate at 9.4% — below the 12% target. The PO makes an explicit completion decision and records the observed signal in the Intent Log. Intent status: completed. The intent is closed.
INTENT_LOG — completion entry
Status: completed
Signal observed: abandonment 9.4% (target: <12%)
Cycles: 2 · Iterations: 3 (2 initial + 1 rework)
Closed by: Product Owner · 2026-04-13
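The walkthrough's hierarchy can be reassembled as plain data, a sketch only, to show how the three levels nest and how the closing summary counts fall out of the records rather than anyone's memory:

```python
# Hypothetical nesting of the walkthrough: one intent, two cycles, three iterations.
intent = {
    "statement": "Reduce checkout abandonment at the payment step",
    "cycles": [
        {"id": 1, "iterations": [{"trigger": "initial"}]},
        {"id": 2, "iterations": [{"trigger": "initial"},
                                 {"trigger": "gate-2-feedback"}]},
    ],
}

# Flatten the iterations and count rework passes by their recorded trigger.
iterations = [it for c in intent["cycles"] for it in c["iterations"]]
rework = sum(it["trigger"] == "gate-2-feedback" for it in iterations)
summary = (f"Cycles: {len(intent['cycles'])} · Iterations: {len(iterations)} "
           f"({len(iterations) - rework} initial + {rework} rework)")
# summary == "Cycles: 2 · Iterations: 3 (2 initial + 1 rework)"
```

Because every rework pass carries its trigger, the Intent Log's completion entry can always be derived from the iteration records.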

That's the complete model. The sections that follow define each element in detail — the three gates, all roles, the artifact set, and the reference zone for measurement, communication, and context management.

Intent & Cycle Alignment

Gate 1 runs before any execution begins. Its job is to confirm two things: that the intent is worth pursuing, and that the proposed cycle scope is coherent. Gate 1 has slightly different behaviour depending on whether this is the first cycle for an intent or a subsequent one.

Gate 1 — First Cycle
Intent + Scope Alignment
Trigger
New intent created. Cycle 1 scope proposed.
Participants
Product Owner (approves intent and scope) · Delivery Team (concern-or-pass)
Decision shape
Conversational. The PO may receive concerns and refine the intent before the cycle begins. Silence from reviewers = pass.
Outcomes
Approved → Intent status: in-progress. Iteration 1 created (trigger: initial). Execution begins.
Concern raised → PO refines intent and re-opens gate review.
Does not
Evaluate implementation quality or test results — that is Gate 2. Does not approve output, only direction.
Gate 1 — Subsequent Cycle
Cycle Scope Alignment
Trigger
Intent is in monitoring or in-progress. PO initiates a new cycle under the same intent.
Participants
Product Owner · Delivery Team
Decision shape
Lighter than first cycle. The intent is already approved and in-progress. Only the new cycle scope is reviewed.
Outcomes
Approved → New cycle begins. Iteration 1 created for this cycle (trigger: initial).
Concern raised → Scope is revised before cycle starts.
Does not
Re-approve the intent. The intent was already approved in the first Gate 1. This gate only validates the new cycle scope.

The Iteration Review Loop

Gate 2 is where human judgment meets execution output. It runs after every Iteration completes — not once per cycle. The decision has two inputs: the Guardian's automated report (what was built, test results, performance, security) and the reviewer's hands-on experience — they actually use the feature before deciding. APPROVED means both: technically sound, and serves the intent. Either signal can trigger a REQUEST ITERATION.

Iteration completes
Builder done · Iteration status: completed
Guardian runs automated checks
tests · security · perf · cost · client impact
↓ produces
Gate report (≤3 paragraphs)
what was built · automated signals · recommendation
↓ report + hands-on review
Delivery Team reviews — the team decides who, based on what the iteration needs
PO may join — especially for outcome or UX concerns
APPROVED
Technically sound + serves the intent. Cycle advances to Gate 3.
REQUEST ITERATION
Technical issue or intent mismatch. Feedback text written. Orchestrator creates new Iteration with trigger: gate-2-feedback.
↺ Loop repeats from "Iteration completes" until APPROVED
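The review loop above can be sketched as a single function. Here `build` stands in for the Builder pass plus the Guardian's gate report, and `review` for the human Gate 2 decision; all names, including the convergence limit, are illustrative assumptions:

```python
def run_cycle(build, review, max_passes=10):
    """Gate 2 loop: iterate until a human APPROVES, recording every rework reason."""
    trigger, feedback, history = "initial", "", []
    for n in range(1, max_passes + 1):
        report = build(n, trigger, feedback)   # Builder executes, Guardian reports
        decision, feedback = review(report)    # human reads report AND uses the feature
        history.append((n, trigger, decision))
        if decision == "APPROVED":
            return history                     # cycle advances to Gate 3
        trigger = "gate-2-feedback"            # next pass is named rework, reason attached
    raise RuntimeError("cycle did not converge — revisit scope at Gate 1")
```

The returned `history` is exactly what the Iteration Record preserves: every pass, its trigger, and the decision made on it, so rework is visible rather than folded into "still in progress".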

What the Gate 2 review involves

Step 1 — Read the gate report

Review what the Guardian automated checks found: test results, security scan, performance, architectural signals. Catch anything technically unsound before touching the feature.

Step 2 — Use the feature

A Delivery Team member accesses the deployed feature (behind its release switch) and uses it. Does it serve the intent? Does the interaction feel right? Are there hidden errors, awkward flows, or implementation choices that the automated report couldn't surface?

If approving

Write: APPROVED. Both conditions met: technically sound and serves the intent. Cycle advances to Gate 3.

If requesting another iteration

Write: REQUEST ITERATION followed by specific feedback — technical finding or experiential observation, either is valid. The feedback text becomes the feedback field in the new Iteration record. Vague feedback produces vague rework.

REQUEST ITERATION
Feedback: Keyboard overlaps card input on iOS 15 Safari — add scroll-padding-bottom to the form container.

REQUEST ITERATION
Feedback: The flow requires three taps where one would do. Billing address should default to saved address, not blank.

The PO may join Gate 2 to assess outcome fit or UX concerns — their perspective on whether it serves the client is valid input. Gate 2 decision authority sits with the Delivery Team. The team decides who reviews based on what the iteration needs — a developer for technical output, a UX designer for interaction quality, a BA for domain fit. The release decision belongs to the PO at Gate 3.

Flag Authorization

Gate 3 is binary and belongs exclusively to the Product Owner. It has one question: authorize this feature to go live? The PO reads the gate report and makes the call. There is no partial approval, no conditional pass. The release switch activates or it doesn't. The specific mechanism — toggle flip, canary rollout, always-on — was fixed at Gate 1. Gate 3 authorizes the activation, not the mechanism.

Gate 3 closes the cycle, not the intent. When the flag flips to Live-ON, the cycle completes and the intent enters the monitoring state. The intent stays open. Signal observation starts.

Gate 3 — Flag Authorization
Release Decision
Trigger
Gate 2 approved. Feature deployed and ready for release via the switch pattern authorized at Gate 1.
Participants
Product Owner only. No other role makes this decision.
Decision shape
Binary. Authorized or rejected. The PO reads the gate report and decides whether the feature is ready for client-visible release. No committee, no advisory — sole PO authority.
Authorized
Release switch activated. Cycle status: completed. Intent status: monitoring. PO begins signal observation.
Rejected
PO provides written reason. Orchestrator re-plans. Cycle may loop back to Gate 1 if scope needs revision, or back to execution if the issue is implementation-level.
Does not
Close the intent or confirm the outcome. Gate 3 authorizes the release — it does not verify that the outcome was achieved. That verification happens during monitoring, through the PO's signal observation, and is a separate explicit action.

Monitoring, signal observation, and explicit closure

The flag going Live-ON is not the end of the story. It is the beginning of the measurement window. An intent in monitoring state is an active commitment — the PO is watching signals and waiting for evidence that the outcome was actually achieved. This is the part of delivery that the 1:1 model skipped entirely.

Intent status lifecycle
open → in-progress → monitoring → completed
↳ from monitoring or in-progress: abandoned  (explicit PO decision)
open — intent created, no cycle started yet
in-progress — Gate 1 approved, at least one cycle executing
monitoring — latest cycle's flag is Live-ON, outcome signal not yet confirmed
completed — PO observes signal and explicitly declares outcome achieved
abandoned — PO decides the outcome is no longer worth pursuing

What monitoring means in practice

When an intent enters monitoring state, the PO takes responsibility for watching the measurement signal defined in the intent. This is not passive waiting — it is active observation. The PO should have a specific signal, a specific threshold, and a specific timeframe in mind. "Abandonment rate below 12% measured over a 72-hour window" is a clear monitoring criterion. "Seems to be working" is not.

If the signal moves in the wrong direction after the flag goes Live-ON, the PO has options: initiate another cycle under the same intent, flip the flag back to Pending-OFF, or abandon the intent and log the reason. None of these require closing and reopening the intent. The intent stays open and the decision is made within its lifecycle.
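One way to make that criterion concrete, as a sketch (the `MonitoringCriterion` name and its fields are assumptions for illustration, not IDF vocabulary):

```python
from dataclasses import dataclass


@dataclass
class MonitoringCriterion:
    signal: str          # e.g. "checkout abandonment rate"
    threshold: float     # target declared in the intent, e.g. 0.12
    window_hours: int    # measurement window, e.g. 72

    def achieved(self, observed: float, hours_elapsed: int) -> bool:
        # "Seems to be working" is not a criterion; a number inside a
        # completed window is.
        return hours_elapsed >= self.window_hours and observed < self.threshold


crit = MonitoringCriterion("checkout abandonment rate", 0.12, 72)
```

A reading of 9.4% after the full 72-hour window satisfies this criterion; the same reading after 24 hours does not, because the window has not closed.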

Monitoring is not a waiting room

The monitoring state does not pause the system. Other intents can be in-progress while one is monitoring. The PO may be running Cycle 2 on one intent while monitoring the signal from Cycle 1 on another. The monitoring state is a property of the intent — not of the team's capacity. Multiple intents can be in monitoring simultaneously.

Explicit completion — and why it matters

Completion is not automatic. When the PO observes that the signal was achieved, they make an explicit declaration and record the observed signal value in the Intent Log. This closes the intent. It creates a durable record: what was the outcome we committed to, what signal did we observe, and when did we confirm it.

This record is what separates outcome-driven delivery from feature factory delivery. It is the evidence that the work produced the result it was meant to produce — not just that code was shipped.

INTENT_LOG — completion entry
Status: completed
Signal observed: abandonment 9.4% (target: <12%)
Observed: 2026-04-13 · Closed by: Product Owner

The abandoned path

Not all intents complete. Sometimes the signal doesn't move despite multiple cycles. Sometimes the strategic context changes and the outcome is no longer relevant. Sometimes the cost of further pursuit exceeds the expected value. In all of these cases, the PO can abandon the intent — but the decision must be explicit and the reason must be logged. An abandoned intent is not a failure to be erased; it is information about what the team tried, what it observed, and why it stopped. The Drift Register carries this forward.

Governance roles and the Delivery Team

IDF separates governance authority from delivery skills from execution. Governance roles are IDF-defined and gate-bound — they cannot be delegated to an agent, collapsed, or renamed without changing what the framework says. The Delivery Team is assembled per product: IDF defines which responsibilities must be covered; the team decides who covers them. The execution layer follows the same principle — IDF defines four functions that must be covered in every team; how each is implemented (AI agent, human, or pipeline tool) depends on the execution model.

Governance layer — IDF-defined, gate-bound
Stakeholder · Portfolio horizon and strategic direction — stewards the Strategic Context from which Team Intents are derived
Product Owner · Full intent lifecycle; sole Gate 3 authority
Architect · Technical guardrails agents operate within
Delivery Team — responsibilities are IDF-defined, titles are not · who covers each is team-defined
Code craft and technical standards · Review iteration output for quality, maintainability, and intent fit
Quality and acceptance · Define gate criteria; validate before the release switch activates
Domain and requirements · Translate business context into cycle scope
Design and experience · Define what "serves the intent" looks like for the user
Delivery infrastructure · Own pipelines, environments, and the release switch mechanism
Execution layer — functions are IDF-defined, implementation is flexible · AI agent · human · pipeline tool · any combination
Decompose and route
Orchestrator
Break cycle scope into iteration tasks; route to executors; track completion
Execute tasks
Builder
Implement iteration tasks; produce code and tests; operate without cross-task state
Automated quality gate
Guardian
Run checks after each iteration; produce a gate report keyed to Cycle + Iteration for human review
Cross-team dependency tracking
Dependencies Broker
Monitor cross-team needs; route requests; track resolution so teams don't wait
Humans — Govern
Stakeholders · Portfolio horizon · human
Product Owner · Intent lifecycle + release authority · human
Delivery Team · Skills assembled per product · team
Architects / TLs · Technical guardrails · human
Intent flows down · Approval flows up
Execution layer · AI agent · human · pipeline tool · any combination
Orchestrator · Decompose & route · agent
Builder(s) · Execute tasks · agent
Guardian · Gate & quality check · agent
Dep. Broker · Cross-team signals · agent
Role map

Three ways to run IDF

IDF's execution layer is model-agnostic. The four functions must be covered in every team — what changes is who or what performs each one. The gate structure, the three-level hierarchy, and human governance at every gate are constant across all three models. All three models involve AI execution. The spectrum is about how much human specification work happens before agents begin.

Function
Fully Agentic
Hybrid
Spec-Driven
Decompose and route
Orchestrator
AI agent reads the intent, decomposes scope into tasks, creates iteration records, and routes work to Builders.
Human tech lead defines scope and task breakdown; AI may draft the iteration record or assist routing decisions.
Human writes a detailed spec — outcomes, scope boundaries, constraints, task breakdown, and verification criteria. An AI Coordinator agent reads the spec and decomposes it into sub-tasks routed to Implementor agents.
Execute tasks
Builder
AI builder agents implement tasks from the iteration record; stateless per task; code and tests produced as a unit.
Human developers use AI pair programming or code generation tools; the human owns the output.
AI Implementor agents execute tasks within the spec's constraints. Scope boundaries and verification criteria in the spec govern what agents build — and what they don't.
Automated quality gate
Guardian
AI quality agent runs the full check suite and produces a gate report per iteration for Delivery Team review.
CI/CD pipeline and test suite run automatically; AI may assist with report generation and summarisation.
AI Verifier agent checks output against the spec's verification criteria and produces the gate report. Delivery Team reviews alignment with spec intent before Gate 3.
Cross-team dependency tracking
Dependencies Broker
AI-assisted monitoring surfaces cross-team dependencies during decomposition; routing and escalation may still require human judgment. Full automation of this function is aspirational.
Project coordinator uses AI monitoring tools to surface dependencies; routing and tracking remain human-led.
Dependencies are surfaced at spec-writing time — the spec's scope boundaries define what's in and out. Cross-team needs are resolved before execution begins, reducing mid-cycle blockers.

Teams often start spec-driven and move toward hybrid or fully agentic as confidence in the tooling grows. IDF doesn't prescribe a starting point — it requires the four functions to be covered and the gates to be honored. The execution model can evolve without changing the framework.

Persistent records

The system's memory doesn't live in anyone's head. Some artifacts are governance records — they track what was decided and what happened. Two are specifically about agent continuity — they exist because AI agents have no memory between sessions.

Framework records — required by IDF regardless of execution model

Strategic Context

The upstream input that produces Team Intents. May take the form of a True North document, Domain Intent, OKRs, or equivalent — depending on the organization's governance model. IDF does not govern its format; it only requires that Team Intents are traceable to it.

Contains
Strategic objectives or Domain Intent · Product positioning and key constraints · What is explicitly out of scope · Update date and next review date
Steward: Stakeholders

Client signal log

A continuously updated record of real client feedback, usage patterns, support issues, and research findings. Without it, intents are written from memory or assumption — and the PO has nothing to read during monitoring to assess whether the released feature is moving the intended signal.

Contains
Signal date and source · Signal description · Signal type (complaint / request / behavioral / research) · Priority flag · Related intent ID if applicable
Steward: PO · Updated: continuously

Intent Log

The single source of truth for every intent the team has pursued — open, in-progress, monitoring, completed, or abandoned. Without it, the team loses track of whether outcomes were actually achieved; an intent that went Live-ON but was never closed is an unchecked assumption about delivery.

Contains — per entry
Intent statement · Status · Associated cycle IDs · Signal target and observed value · Closed by and closure date · Abandonment reason if applicable
Steward: Orchestrator (writes) · PO (owns monitoring entries and closure)
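The status values above form a small lifecycle, and the gate model implies which moves are legal: Gate 1 approval takes an intent to in-progress, Gate 3 authorization to monitoring, and Intent Completion closes it. A minimal sketch, with the assumption that abandonment is allowed from any active state:

```python
from enum import Enum

class IntentStatus(Enum):
    OPEN = "open"
    IN_PROGRESS = "in-progress"
    MONITORING = "monitoring"
    COMPLETED = "completed"
    ABANDONED = "abandoned"

# Transitions implied by the gate model. Abandonment from any active
# state is an assumption here, not an explicit IDF rule.
TRANSITIONS = {
    IntentStatus.OPEN: {IntentStatus.IN_PROGRESS, IntentStatus.ABANDONED},
    IntentStatus.IN_PROGRESS: {IntentStatus.MONITORING, IntentStatus.ABANDONED},
    IntentStatus.MONITORING: {IntentStatus.COMPLETED, IntentStatus.ABANDONED},
    IntentStatus.COMPLETED: set(),   # terminal
    IntentStatus.ABANDONED: set(),   # terminal
}

def advance(current: IntentStatus, target: IntentStatus) -> IntentStatus:
    """Reject moves the gate model does not allow, e.g. open -> completed."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.value} -> {target.value}")
    return target
```

An intent that jumps straight from open to completed would be exactly the "unchecked assumption about delivery" the log exists to prevent.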

Iteration Record

A record of one implementation pass within a Cycle — every Gate 2 rework produces a new one. Without it, rework is invisible; the Gate 2 reviewer's feedback exists only as oral instruction and the loop cannot be traced or learned from.

Contains
Cycle ID and iteration number · Trigger (initial / gate-2-feedback) · Feedback text if rework · Task list with status per task · Overall status (pending / executing / completed)
Steward: Orchestrator (creates and updates) · Builder (executes against)

Gate report

A structured report produced after each Iteration completes — the Delivery Team reads it alongside hands-on review to make the Gate 2 decision. Without it, Gate 2 decisions are made without evidence; the report also creates an audit trail for every evaluation made during a cycle.

Contains
What was built in this iteration · Automated check results (tests, security, performance, cost, client impact) · Recommendation: Approved or Request Iteration with specific reason
Steward: Guardian (produces) · Delivery Team (reviews and decides)

Feature Governance Registry

A record of every release switch in the product — its current state, the intent it serves, and its cleanup status. Without it, release switches accumulate silently; dead switches become untraceable technical debt and the audit trail for when each feature went live is lost.

Contains — per entry
Feature name and switch ID · Switch pattern (toggle / canary / always-on) · State (Pending-OFF / Live-ON / Dead-OFF) · Intent ID it serves · Activation date · Cleanup deadline for Dead-OFF entries
Steward: PO (release decision) · Builder (creates switch) · Tech Lead (dead switch cleanup)
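The switch states and cleanup deadline lend themselves to a small registry model. A sketch under assumed names; the `overdue_cleanups` helper is hypothetical, showing how the Tech Lead's cleanup queue could be derived from the registry:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import List, Optional

class SwitchState(Enum):
    PENDING_OFF = "Pending-OFF"   # deployed, not yet released to users
    LIVE_ON = "Live-ON"           # released
    DEAD_OFF = "Dead-OFF"         # retired, awaiting code cleanup

@dataclass
class RegistryEntry:
    feature_name: str
    switch_id: str
    pattern: str                  # toggle / canary / always-on
    state: SwitchState
    intent_id: str
    activation_date: Optional[date] = None
    cleanup_deadline: Optional[date] = None

def overdue_cleanups(registry: List[RegistryEntry], today: date) -> List[RegistryEntry]:
    """Dead-OFF switches past their cleanup deadline: the Tech Lead's queue."""
    return [e for e in registry
            if e.state is SwitchState.DEAD_OFF
            and e.cleanup_deadline is not None
            and e.cleanup_deadline < today]
```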

Drift register

A log of detected divergences between what was intended and what was built, with the correction taken and the root cause recorded. Without it, the same drift patterns recur silently — it is the primary mechanism for improving intent writing and agent training over time.

Contains
Date detected · Intent ID and Cycle ID · What was intended vs. what was built · Root cause (ambiguous intent / agent interpretation / scope creep / external change) · Correction taken · Pattern flag if the same class of drift has appeared before
Steward: PO · Updated: at drift checks, gate failures, and intent abandonment

Execution context — required for AI agents to operate without institutional decay

System Memory

The persistent record every agent reads at the start of every session to reconstruct project context. Without it, agents begin each session with no knowledge of prior decisions — every session is day one.

Contains
Tech stack and folder structure · Run and deploy commands · Architectural decisions and their reasons · Conventions the team follows · Running log of what was built and when
Steward: Orchestrator (writes) · Tech Lead (governs)

Capability Library

A collection of domain-specific instruction sets that encode how this team builds things. Without it, agents default to generic patterns — ignoring the conventions, constraints, and decisions the team has made.

Contains — one entry per domain
Domain name and scope · How to build in this domain (patterns, conventions, constraints) · What to avoid and why · Examples of correct output
Steward: Tech Lead (governs) · Orchestrator (reads, triggers harvest)

Reference — for teams implementing IDF

The sections below are technical reference material. Most readers won't need them on a first read.

Signal Events

IDF replaces scheduled ceremonies with event-driven triggers. A signal event fires when its condition is met — not because the calendar says so. Every signal event has a trigger, a specific output, and a clear owner. Intent Completion is a first-class signal event — the explicit PO declaration that an intent's outcome signal was observed.
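The trigger/owner/output shape can be sketched as a condition-driven dispatcher. The two sample events are drawn from the catalog that follows, but the condition functions and the repeat threshold of three are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class SignalEvent:
    name: str
    owner: str                           # role that owns the output
    condition: Callable[[Dict], bool]    # fires when true -- not on a calendar
    output: str

EVENTS = [
    SignalEvent("Client Signal Review", "PO",
                lambda s: s.get("new_feedback", False),
                "Updated client signal log"),
    SignalEvent("Capability Harvest", "Delivery Team",
                lambda s: s.get("repeated_pattern_count", 0) >= 3,  # threshold assumed
                "New or updated capability file"),
]

def fired(state: Dict) -> List[str]:
    """Return the signal events whose trigger condition is currently met."""
    return [e.name for e in EVENTS if e.condition(state)]
```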

Event · Cadence · Trigger + Participants · Output
Slow Loop
Strategic Alignment
Monthly / quarterly
Triggered by: calendar or strategic pivot
Participants: Stakeholders + POs
Updated Strategic Context. Any change propagates to client signal log within 1 cycle.
Client Signal Review
Weekly minimum
Triggered by: new feedback, usage data, support signals
Participants: PO (+ UX/Research if available)
Updated client signal log. Priority adjustments fed into next intent injection. Also feeds monitoring decisions — PO uses this to assess whether intent signals are moving.
Fast Loop — per Cycle
Context Integrity Gate
Per cycle
Triggered by: cycle start, before Orchestrator begins
Participants: Delivery Team
10-minute pre-flight. Delivery Team confirms System Memory and Capability Library reflect current reality. Logged as Context check: CLEAN / FLAGGED. FLAGGED triggers a scoped Context Reset before cycle proceeds.
Intent Injection
Per cycle
Triggered by: new intent or continuation of in-progress/monitoring intent
Participants: PO (+ Orchestrator reads output)
First cycle: PO writes intent (outcome + measurement signal). Template: [outcome] by [mechanism], measured by [signal]. Intent status: open → in-progress on Gate 1 approval. Subsequent cycle: intent already in-progress or monitoring. Gate 1 reviews cycle scope only.
Gate 2 — Iteration Review
Per iteration
Triggered by: Iteration completes
Participants: Guardian (automated checks) · Delivery Team (human decision)
Guardian produces gate report. Delivery Team reviews the report and uses the feature. Decision: Approved (technically sound + serves intent) → Cycle advances to Gate 3. Request iteration (technical or experiential feedback) → new Iteration created with trigger: gate-2-feedback; Cycle holds. Loop repeats until Approved. PO may join; final Gate 2 authority is the Delivery Team.
Gate 3 — Release Gate
Per cycle
Triggered by: Gate 2 approved, Deploy (flag OFF) complete
Participants: PO (sole decision authority)
Authorized: release switch activated. Cycle status: completed. Intent status: monitoring. Outcome NOT yet confirmed — that is Intent Completion, a separate event. Rejected: Orchestrator re-plans with Guardian notes.
Monitoring & Completion
Intent Completion
Event-triggered (post-monitoring)
Triggered by: PO observes measurement signal and determines outcome achieved (or decides to abandon)
Participants: PO
Intent status → completed (or abandoned). Observed signal value recorded in Intent Log. Completed: triggers next intent or cycle. Abandoned: reason logged in Drift Register.
Continuous & Exception
Context Reset
Event-triggered
Triggered by: detected context drift or staleness signal
Participants: Delivery Team (leads) + PO (acknowledges)
Pruned System Memory with a diff showing what was removed. Confirmed Capability Library accuracy. Dead-OFF release switches identified and removed.
Drift Check
Every 10–20 cycles
Triggered by: cycle count or PO concern
Participants: PO + Orchestrator
Drift register entry. Corrections fed back into System Memory and next cycle's intent. Pattern of drift informs Strategic Alignment agenda.
Capability Harvest
When pattern repeats
Triggered by: Guardian or Orchestrator detecting a repeated pattern
Participants: Delivery Team + Orchestrator
New or updated capability file added to the Capability Library. Reduces re-teaching same conventions each cycle.
Dependency Sync
Event-triggered
Triggered by: broker detecting a cross-team need
Participants: Affected POs + Tech Leads
Resolution path defined within same cycle. Blocker log updated. Teams work around while awaiting resolution.

Information flow and communication

Communication across the team is explicit, documented, and async by default. No role should require a synchronous meeting to receive information from another. These defined communication patterns are what make fast cycles possible without creating noise or missed signals.

Strategic → Product

Direction flows down as updates to the Strategic Context (a True North document or equivalent). POs read it before intent injection. No meeting required — the PO is responsible for tracking changes.

Trigger: Strategic Alignment output or strategic pivot
Format: Updated Strategic Context (e.g., True North document)
Frequency: Monthly or on strategic change
Product → Execution

Cycle intent travels as a written outcome statement. The Orchestrator converts it to an Iteration task list. The intent's lifecycle status travels back through the Intent Log — Orchestrator writes it, PO acts on it.

Trigger: Gate 3 closes previous cycle
Format: Intent statement in the Intent Log
Frequency: Per cycle
Execution → Product

Gate reports surface from Guardian to the Delivery Team. A Delivery Team member reads the report and uses the feature before deciding. Feedback — whether from the report or from hands-on experience — travels back as a REQUEST ITERATION with written text. The PO may join for outcome or UX assessment but is not the decision authority at Gate 2. The PO's release decision happens at Gate 3.

Trigger: Iteration completes
Format: Gate report (≤3 paragraphs) keyed to Cycle + Iteration
Frequency: Per iteration
Product → Strategic

Client signals and drift register entries escalate upward when they reveal a strategic concern. PO does not escalate individual features — only patterns or conflicts that require portfolio-level decisions. Intent abandonment reasons also surface here when they reveal a strategic gap.

Trigger: Drift check, persistent client signal, strategic conflict, intent abandonment
Format: Escalation note + supporting log entries
Frequency: As needed
Team A → Team B

All cross-team communication is mediated by the Dependencies Broker. There are no direct team-to-team coordination channels — direct channels create noise and missed signals.

Trigger: Orchestrator detects cross-team need during decompose
Format: Dependency alert via broker
Resolution: Affected POs within 1 cycle
Client → Product

Client signals travel to the PO continuously through whatever feedback channel exists. These signals are also what closes intents — the PO is explicitly looking for the measurement signal defined in the intent statement.

Trigger: Any client feedback, usage data, support pattern
Format: Entry in the Client signal log
Frequency: Continuous

What good looks like — in numbers

In an agent-driven system, execution effort is nearly free — what matters is how fast intent becomes client value, how stable releases are, and how well the system learns from each cycle. Intent completion rate is a key signal: are intents actually achieving their outcomes, or just shipping features?

DO NOT TRACK
Story points  ·  Sprint velocity  ·  Burndown charts  ·  Estimation accuracy
These are human-bandwidth proxies. Agents absorb execution bandwidth. Tracking effort-based metrics in IDF produces false signals.
Cycle throughput (was: Deployment Frequency)
Flags turned ON per day. Source: Feature Governance Registry.
Elite: > 3 / day · Warn: < 1 / week

Intent-to-flag time (was: Lead Time)
Intent written → flag ON, in hours. Source: INTENT_LOG.md timestamps.
Elite: < 2 hours · Warn: > 8 hours

Flag-OFF response (was: MTTR)
Issue detected → flag OFF, in minutes. Source: Guardian incident log.
Elite: < 5 minutes · Warn: > 30 minutes

Escape rate (was: Change Failure Rate)
Flags turned ON that later required flag OFF. Source: Feature Governance Registry ON→OFF events.
Elite: < 5% · Warn: > 20%
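As an illustration, escape rate falls out of the registry's ON→OFF events directly. The function names and the two-threshold classifier are assumptions mirroring the Elite/Warn bands above:

```python
def escape_rate(on_events: int, rollback_events: int) -> float:
    """Share of Live-ON activations that later required flag OFF.
    Counts would come from the Feature Governance Registry."""
    if on_events == 0:
        return 0.0
    return rollback_events / on_events

def classify(rate: float) -> str:
    """Map a rate onto the Elite (< 5%) / Warn (> 20%) bands above."""
    if rate < 0.05:
        return "elite"
    if rate > 0.20:
        return "warn"
    return "ok"
```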

Gate 2 first-pass rate

% iterations cleared without rework

Tracks how often an Iteration clears Gate 2 on the first pass. A declining rate signals either ambiguous intent or degraded capability files.

Target: >80% · Warning: <60%

Intent clarity rate

% cycles without PO clarification loop

Tracks how often the Orchestrator must escalate back to the PO before decomposing. High escalation is the most common IDF failure mode.

Target: >85% · Warning: <70%

Intent completion rate

% intents reaching completed vs. abandoned

The primary outcome signal in IDF. An intent that ships a flag but never reaches completed status means the outcome wasn't confirmed. This is the metric that separates delivery from outcomes.

Target: >70% completed · Review abandoned intents at drift check

Noise is the primary cause of agent failure

Technical debt accumulates silently and reveals itself as bugs. Context debt accumulates silently and reveals itself as agent confusion — correct behavior executed against the wrong understanding of the system. An agent running against a polluted System Memory does not know it is confused. It produces confident, wrong output.

System Memory Pollution

Stale entries that were true at a point in time but no longer reflect the system. The most dangerous because they are specific and authoritative — agents trust them.

  • Abandoned architectural decisions never removed
  • Resolved ambiguities documented but never cleaned
  • Dead-OFF feature context still referenced as if live
  • Contradictions between sections added at different times

Capability Library Staleness

Capability files that describe patterns the team no longer uses, or miss patterns the team always uses now. Causes Builders to produce output that passes automated checks but fails Craft Review.

  • Patterns superseded but not replaced in the library
  • Capability files with no usage signal in the last N cycles
  • Recurring Craft Review corrections not yet encoded

Context Debt audit checklist

Run at every Context Reset signal event. Each question is a pruning decision — if the answer is no, the entry is removed.

System Memory — per entry
  • Is this still true? Does the current codebase reflect this entry?
  • Is this still relevant? Would an agent need this for a cycle today?
  • Does this contradict any other entry? Contradictions must be resolved, not left coexisting.
  • Is any Dead-OFF flag still referenced here? Remove the entry — dead features are not context.
Capability Library — per file
  • Has this capability been used in the last N cycles?
  • Does this capability reflect current patterns? Compare against recent Builder output.
  • Are there recurring Craft Review corrections not yet encoded? Harvest them before closing the reset.
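The per-entry questions reduce to a pruning filter: any "no" answer removes the entry. A sketch with assumed field names; returning both halves supports the diff of removed entries that a Context Reset publishes:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MemoryEntry:
    text: str
    still_true: bool            # does the current codebase reflect this entry?
    still_relevant: bool        # would an agent need this for a cycle today?
    references_dead_flag: bool  # Dead-OFF features are not context

def prune(entries: List[MemoryEntry]) -> Tuple[List[MemoryEntry], List[MemoryEntry]]:
    """Apply the audit checklist to System Memory. Returns (kept, removed)."""
    kept, removed = [], []
    for e in entries:
        if e.still_true and e.still_relevant and not e.references_dead_flag:
            kept.append(e)
        else:
            removed.append(e)
    return kept, removed
```

Contradiction detection between entries is deliberately left out here; it needs human judgment, which is why the Context Reset is owned by the Delivery Team rather than automated.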

This framework is a living document

IDF is a working framework — not a finished specification. If you are using it in practice and find that something doesn't hold, or that the model needs to extend further, the right move is to surface it.

This page is hidden by design — direct URL only, not listed in the nav. It exists for review before any promotion decision. If the model holds up under your delivery scenario, raise a promotion request.

License

CC BY 4.0 — free to use, adapt, and share with attribution. creativecommons.org/licenses/by/4.0/

Author

Roberto Pillon Franco · LinkedIn