Intent, Cycle, Iteration — a three-level model for AI-assisted delivery.
IDF — Intent Driven Flow — is a delivery governance framework for teams with fast execution cycles. Whether that speed comes from AI agents, spec-driven automation, or a hybrid approach, the execution model is yours to choose. What IDF governs is the layer above: how outcomes are defined, how work is reviewed, and how features reach users.
The person responsible for the product writes one statement: what needs to change for the client, and how you'll know it happened. The team works toward that outcome — not a list of tasks. The work isn't done when a feature ships. It's done when the outcome is confirmed.
Every piece of work reaches production invisible to users, behind a switch. A responsible person reviews the completed work, then decides when — and whether — to turn it on. This keeps risky releases out of the equation: work can ship continuously without exposing anything until a human says it's ready.
A human sets the goal. A human reviews the work and tries it before it moves forward. A human decides when clients see it. At every critical transition, a person owns the call. Speed comes from the execution layer. Control never leaves the team.
AI makes execution cheap. The first attempt does not need to be right — it needs to be directional. Ship something working, observe what it tells you, and go again. Progress is measured by evidence, not plans.
No external constraint justifies stopping work. If you don't have it, mock it. If it isn't designed, build it ugly. The quality bar is production-safe, not complete. Constraints are temporary; blocked work is permanent waste.
Every piece of work is an Intent (the outcome), a Cycle (one delivery unit), or an Iteration (one implementation pass). The sections that follow define each level, each gate, and each role in full.
IDF governs the delivery layer. For the corporate governance model — True North, the intent cascade from strategy to team, and the agentic org structure — see The Agentic Organization →
IDF operates at two different tempos. The Slow Flow is where direction is set — understanding what clients need, deciding which outcomes matter, writing the intent. The Fast Flow is where delivery happens — building, reviewing, releasing. They run independently but serve the same goal: the Slow Flow decides what to pursue, the Fast Flow delivers it.
Gates are the three human checkpoints in the Fast Flow. Each one is a decision point — nothing moves to the next phase without a person making the call. Gate 1 aligns intent and scope before execution begins. Gate 2 reviews each implementation pass. Gate 3 authorizes the release and activates value for clients. The full definition of each gate is in §07–09.
Intents arrive at Gate 1 from a corporate cascade — True North → Strategic Intent → Domain Intent → Team Intent. IDF governs from Team Intent downward. See The Agentic Organization →
Everything in IDF starts with an Intent. An Intent answers two questions: what needs to change for the client, and how will you know it happened? Both must be answered before any work begins.
Every intent must define how the outcome will be measured. IDF calls that instrument a signal — a specific, observable fact declared before delivery begins, not chosen after. If you can name the outcome but not the signal, you don't have an intent yet.
One sentence. Not what to build — what to achieve. The team commits to this statement, not to any specific implementation. How it gets built is the Fast Flow's job.
A number, a rate, a behaviour — something real that can be observed after the work ships. If the signal can't be measured, the intent can't be closed. Vague signals produce open-ended commitments.
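As an illustration only — IDF does not prescribe a storage format — an intent and its declared signal might be recorded like this. All field names here are hypothetical; the example values come from the checkout walkthrough later in this document.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Signal:
    """A specific, observable fact declared before delivery begins."""
    name: str          # what is measured, e.g. an abandonment rate
    threshold: float   # the value that closes the intent
    window_hours: int  # how long it must be observed

@dataclass
class Intent:
    outcome: str       # one sentence: what to achieve, not what to build
    signal: Signal     # declared up front; without it, this is not an intent yet

# Example values taken from the walkthrough in this document:
intent = Intent(
    outcome="Reduce checkout abandonment at the payment step.",
    signal=Signal(name="checkout_abandonment_rate",
                  threshold=0.12, window_hours=72),
)
assert intent.signal.threshold < 0.22  # current rate is 22%; the signal must beat it
```

The point of the sketch is the constraint, not the schema: an `Intent` without a `Signal` cannot be constructed, which mirrors the rule that an outcome without a measurement is not an intent yet.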
A feature is what you built.
An Intent is why you built it.
IDF tracks both — but only the Intent tells you whether the work was worth it.
A Cycle is one end-to-end delivery run toward an Intent. It begins when the team aligns on scope at Gate 1 and ends when the Product Owner authorizes the release at Gate 3. Work deploys continuously to production behind a release switch — accumulated safely, released when the PO decides. Cycles are not time-boxed — they close when the work is done and authorized, not when a timer runs out.
The release pattern — plain toggle, canary, or always-on — is agreed at Gate 1, before execution begins. For canary releases, monitoring opens on the exposed subset immediately after Gate 3. For always-on work, Gate 3 authorizes the deployment itself, not a flag flip. The governance is identical across all three patterns; only the mechanism differs.
An Iteration is one complete execution run within a Cycle. The team builds, tests, and produces a gate report. That report goes to Gate 2, where a human reviews it and tries the feature. If it passes, the Cycle advances. If not, feedback is written down and a new Iteration begins. Rework is never informal — every pass is a recorded artifact with a reason.
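The rework rule above can be sketched as a record. The `trigger` values `initial` and `gate-2-feedback` and the `feedback` field are named in this document; everything else is an assumed shape, not a prescribed one.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Iteration:
    """One implementation pass within a Cycle. Rework is never informal:
    every pass is a recorded artifact with a reason."""
    number: int
    trigger: str                     # "initial" or "gate-2-feedback"
    feedback: Optional[str] = None   # the Gate 2 reviewer's written feedback, if rework

def request_iteration(previous: Iteration, feedback_text: str) -> Iteration:
    """Gate 2 REQUEST ITERATION: the written feedback becomes the
    feedback field of the new Iteration record."""
    return Iteration(number=previous.number + 1,
                     trigger="gate-2-feedback",
                     feedback=feedback_text)

first = Iteration(number=1, trigger="initial")
rework = request_iteration(first, "Scroll padding obscures the pay button.")
assert rework.trigger == "gate-2-feedback" and rework.feedback is not None
```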
The first Iteration's trigger is initial; each rework pass is gate-2-feedback with the specific issue recorded. Every piece from §03–05, in one view: an Intent contains one or more Cycles, and each Cycle contains one or more Iterations. Gate 3 opens the monitoring window. Monitoring closes the Intent, or starts a new Cycle if the signal isn't achieved.
This is an example — not a new concept. The model is already complete. This walkthrough traces a single intent through its full lifecycle using everything from §03–05: two cycles, a Gate 2 failure with rework, and a signal observation that closes the intent.
Intent: Reduce checkout abandonment at the payment step. Signal: abandonment rate below 12% (currently 22%).
Cycle 1, Iteration 1 runs with trigger: initial. Gate 2 returns REQUEST ITERATION with the Delivery Team's feedback text attached: a scroll padding issue. Iteration 2 runs with trigger: gate-2-feedback and the feedback text attached. Builders apply the scroll padding fix, the Guardian re-runs automated checks, and Gate 2 reviews again — this time the fix is clean. Gate 2 decision: APPROVED. At Gate 3, the PO authorizes the release: checkout_simplified_payment moves from Pending-OFF to Live-ON. Cycle 1 status: completed. Intent status: monitoring. The intent does not close here; the measurement window begins. That's the complete model. The sections that follow define each element in detail — the three gates, all roles, the artifact set, and the reference zone for measurement, communication, and context management.
Gate 1 runs before any execution begins. Its job is to confirm two things: that the intent is worth pursuing, and that the proposed cycle scope is coherent. Gate 1 has slightly different behaviour depending on whether this is the first cycle for an intent or a subsequent one.
Gate 2 is where human judgment meets execution output. It runs after every Iteration completes — not once per cycle. The decision has two inputs: the Guardian's automated report (what was built, test results, performance, security) and the reviewer's hands-on experience — they actually use the feature before deciding. APPROVED means both: technically sound, and serves the intent. Either input can trigger a REQUEST ITERATION.
Review what the Guardian automated checks found: test results, security scan, performance, architectural signals. Catch anything technically unsound before touching the feature.
A Delivery Team member accesses the deployed feature (behind its release switch) and uses it. Does it serve the intent? Does the interaction feel right? Are there hidden errors, awkward flows, or implementation choices that the automated report couldn't surface?
Write: APPROVED. Both conditions met: technically sound and serves the intent. Cycle advances to Gate 3.
Write: REQUEST ITERATION followed by specific feedback — technical finding or experiential observation, either is valid. The feedback text becomes the feedback field in the new Iteration record. Vague feedback produces vague rework.
The PO may join Gate 2 to assess outcome fit or UX concerns — their perspective on whether it serves the client is valid input. Gate 2 decision authority sits with the Delivery Team. The team decides who reviews based on what the iteration needs — a developer for technical output, a UX designer for interaction quality, a BA for domain fit. The release decision belongs to the PO at Gate 3.
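The two-input rule can be condensed into a small decision function. This is a sketch of the logic described above, not a prescribed implementation; the function name and signature are hypothetical.

```python
def gate2_decision(guardian_report_sound: bool,
                   serves_intent: bool,
                   feedback: str = "") -> str:
    """Gate 2 sketch: APPROVED only when BOTH conditions hold.
    Either input failing triggers REQUEST ITERATION, and written
    feedback is mandatory: vague feedback produces vague rework."""
    if guardian_report_sound and serves_intent:
        return "APPROVED"
    if not feedback:
        raise ValueError("REQUEST ITERATION requires specific written feedback.")
    return f"REQUEST ITERATION: {feedback}"

assert gate2_decision(True, True) == "APPROVED"
assert gate2_decision(True, False,
                      "Flow feels wrong on mobile").startswith("REQUEST ITERATION")
```

Note that a technically clean Guardian report (`guardian_report_sound=True`) is not sufficient on its own: the hands-on judgment carries equal veto power.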
Gate 3 is binary and belongs exclusively to the Product Owner. It has one question: authorize this feature to go live? The PO reads the gate report and makes the call. There is no partial approval, no conditional pass. The release switch activates or it doesn't. The specific mechanism — toggle flip, canary rollout, always-on — was fixed at Gate 1. Gate 3 authorizes the activation, not the mechanism.
Gate 3 closes the cycle, not the intent. When the flag flips to Live-ON, the cycle completes and the intent enters the monitoring state. The intent stays open. Signal observation starts.
The flag going Live-ON is not the end of the story. It is the beginning of the measurement window. An intent in monitoring state is an active commitment — the PO is watching signals and waiting for evidence that the outcome was actually achieved. This is the part of delivery that the 1:1 model skipped entirely.
When an intent enters monitoring state, the PO takes responsibility for watching the measurement signal defined in the intent. This is not passive waiting — it is active observation. The PO should have a specific signal, a specific threshold, and a specific timeframe in mind. "Abandonment rate below 12% measured over a 72-hour window" is a clear monitoring criterion. "Seems to be working" is not.
If the signal moves in the wrong direction after the flag goes Live-ON, the PO has options: initiate another cycle under the same intent, flip the flag back to Pending-OFF, or abandon the intent and log the reason. None of these require closing and reopening the intent. The intent stays open and the decision is made within its lifecycle.
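The intent lifecycle described in this section behaves like a small state machine. The state names below are taken from the statuses this document uses (open, in-progress, monitoring, completed, abandoned); the transition table is an interpretive sketch, assuming that a flag flip back to Pending-OFF returns the intent to in-progress.

```python
# Allowed transitions inferred from this document; an interpretive sketch.
ALLOWED = {
    "open":        {"in-progress"},                # Gate 1 aligns intent and scope
    "in-progress": {"monitoring", "abandoned"},    # Gate 3 flips the flag
    "monitoring":  {"in-progress",                 # signal wrong: new cycle or flag back off
                    "completed",                   # PO declares the signal achieved
                    "abandoned"},                  # PO stops, reason logged
}

def transition(state: str, new_state: str) -> str:
    if new_state not in ALLOWED.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

s = transition("open", "in-progress")
s = transition(s, "monitoring")
s = transition(s, "in-progress")   # signal moved the wrong way: another cycle, same intent
s = transition(s, "monitoring")
s = transition(s, "completed")     # explicit PO declaration closes the intent
```

The key property the sketch encodes: there is no path from monitoring back to open. The intent stays open throughout; decisions are made within its lifecycle, never by closing and reopening it.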
The monitoring state does not pause the system. Other intents can be in-progress while one is monitoring. The PO may be running Cycle 2 on one intent while monitoring the signal from Cycle 1 on another. The monitoring state is a property of the intent — not of the team's capacity. Multiple intents can be in monitoring simultaneously.
Completion is not automatic. When the PO observes that the signal was achieved, they make an explicit declaration and record the observed signal value in the Intent Log. This closes the intent. It creates a durable record: what was the outcome we committed to, what signal did we observe, and when did we confirm it.
This record is what separates outcome-driven delivery from feature factory delivery. It is the evidence that the work produced the result it was meant to produce — not just that code was shipped.
Not all intents complete. Sometimes the signal doesn't move despite multiple cycles. Sometimes the strategic context changes and the outcome is no longer relevant. Sometimes the cost of further pursuit exceeds the expected value. In all of these cases, the PO can abandon the intent — but the decision must be explicit and the reason must be logged. An abandoned intent is not a failure to be erased; it is information about what the team tried, what it observed, and why it stopped. The Drift Register carries this forward.
IDF separates governance authority from delivery skills from execution. Governance roles are IDF-defined and gate-bound — they cannot be delegated to an agent, collapsed, or renamed without changing what the framework says. The Delivery Team is assembled per product: IDF defines which responsibilities must be covered; the team decides who covers them. The execution layer follows the same principle — IDF defines four functions that must be covered in every team; how each is implemented (AI agent, human, or pipeline tool) depends on the execution model.
IDF's execution layer is model-agnostic. The four functions must be covered in every team — what changes is who or what performs each one. The gate structure, the three-level hierarchy, and human governance at every gate are constant across all three models. All three models involve AI execution. The spectrum is about how much human specification work happens before agents begin.
Teams often start spec-driven and move toward hybrid or fully agentic as confidence in the tooling grows. IDF doesn't prescribe a starting point — it requires the four functions to be covered and the gates to be honored. The execution model can evolve without changing the framework.
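One way to picture the model-agnostic execution layer: a mapping of functions to performers. The four function names below are borrowed from roles mentioned elsewhere in this document (Orchestrator, Builder, Guardian, Dependencies Broker) — which performer covers each function in each model is purely illustrative and a team choice, not something IDF fixes.

```python
# Illustrative only: the per-model assignments are hypothetical examples.
EXECUTION_MODELS = {
    "spec-driven":   {"orchestrator": "human",    "builder": "ai-agent",
                      "guardian": "pipeline-tool", "dependencies-broker": "human"},
    "hybrid":        {"orchestrator": "human",    "builder": "ai-agent",
                      "guardian": "ai-agent",      "dependencies-broker": "human"},
    "fully-agentic": {"orchestrator": "ai-agent", "builder": "ai-agent",
                      "guardian": "ai-agent",      "dependencies-broker": "ai-agent"},
}

REQUIRED = {"orchestrator", "builder", "guardian", "dependencies-broker"}

def covered(model: str) -> bool:
    """IDF's only requirement: every function is covered, whoever performs it."""
    return REQUIRED <= set(EXECUTION_MODELS[model])

assert all(covered(m) for m in EXECUTION_MODELS)
```

Evolving the execution model is then just changing performer values; the keys — and the gates above them — never change.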
The system's memory doesn't live in anyone's head. Some artifacts are governance records — they track what was decided and what happened. Two are specifically about agent continuity — they exist because AI agents have no memory between sessions.
The upstream input that produces Team Intents. May take the form of a True North document, Domain Intent, OKRs, or equivalent — depending on the organization's governance model. IDF does not govern its format; it only requires that Team Intents are traceable to it.
A continuously updated record of real client feedback, usage patterns, support issues, and research findings. Without it, intents are written from memory or assumption — and the PO has nothing to read during monitoring to assess whether the released feature is moving the intended signal.
The single source of truth for every intent the team has pursued — open, in-progress, monitoring, completed, or abandoned. Without it, the team loses track of whether outcomes were actually achieved; an intent that went Live-ON but was never closed is an unchecked assumption about delivery.
A record of one implementation pass within a Cycle — every Gate 2 rework produces a new one. Without it, rework is invisible; the Gate 2 reviewer's feedback exists only as oral instruction and the loop cannot be traced or learned from.
A structured report produced after each Iteration completes — the Delivery Team reads it alongside hands-on review to make the Gate 2 decision. Without it, Gate 2 decisions are made without evidence; the report also creates an audit trail for every evaluation made during a cycle.
A record of every release switch in the product — its current state, the intent it serves, and its cleanup status. Without it, release switches accumulate silently; dead switches become untraceable technical debt and the audit trail for when each feature went live is lost.
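A register entry might look like the sketch below. The switch name `checkout_simplified_payment` and the states Pending-OFF / Live-ON appear in this document; the field names, intent IDs, and cleanup statuses are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ReleaseSwitch:
    """One entry in the release-switch register (field names hypothetical)."""
    name: str
    state: str           # "Pending-OFF" or "Live-ON"
    intent_id: str       # the intent this switch serves (IDs invented here)
    cleanup_status: str  # e.g. "active", "ready-to-remove", "removed"

register = [
    ReleaseSwitch("checkout_simplified_payment", "Live-ON", "INT-014", "active"),
    ReleaseSwitch("old_promo_banner", "Live-ON", "INT-003", "ready-to-remove"),
]

# Dead switches become untraceable technical debt; a register makes them queryable.
stale = [s.name for s in register if s.cleanup_status == "ready-to-remove"]
assert stale == ["old_promo_banner"]
```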
A log of detected divergences between what was intended and what was built, with the correction taken and the root cause recorded. Without it, the same drift patterns recur silently — it is the primary mechanism for improving intent writing and agent training over time.
The persistent record every agent reads at the start of every session to reconstruct project context. Without it, agents begin each session with no knowledge of prior decisions — every session is day one.
A collection of domain-specific instruction sets that encode how this team builds things. Without it, agents default to generic patterns — ignoring the conventions, constraints, and decisions the team has made.
The sections below are technical reference material. Most readers won't need them on a first read.
IDF replaces scheduled ceremonies with event-driven triggers. A signal event fires when its condition is met — not because the calendar says so. Every signal event has a trigger, a specific output, and a clear owner. Intent Completion is a first-class signal event — the explicit PO declaration that an intent's outcome signal was observed.
Communication across the team is explicit, documented, and async by default. No role should require a synchronous meeting to receive information from another. These defined communication patterns are what make fast cycles possible without creating noise or missed signals.
Direction flows down as updates to the True North document. POs read it before intent injection. No meeting required — PO is responsible for tracking changes.
Cycle intent travels as a written outcome statement. The Orchestrator converts it to an Iteration task list. The intent's lifecycle status travels back through the Intent Log — Orchestrator writes it, PO acts on it.
Gate reports surface from Guardian to the Delivery Team. A Delivery Team member reads the report and uses the feature before deciding. Feedback — whether from the report or from hands-on experience — travels back as a REQUEST ITERATION with written text. The PO may join for outcome or UX assessment but is not the decision authority at Gate 2. The PO's release decision happens at Gate 3.
Client signals and drift register entries escalate upward when they reveal a strategic concern. PO does not escalate individual features — only patterns or conflicts that require portfolio-level decisions. Intent abandonment reasons also surface here when they reveal a strategic gap.
All cross-team communication is mediated by the Dependencies Broker. There are no direct team-to-team coordination channels — they create noise and missed signals.
Client signals travel to the PO continuously through whatever feedback channel exists. These signals are also what closes intents — the PO is explicitly looking for the measurement signal defined in the intent statement.
In an agent-driven system, execution effort is nearly free — what matters is how fast intent becomes client value, how stable releases are, and how well the system learns from each cycle. Intent completion rate is a key signal: are intents actually achieving their outcomes, or just shipping features?
Tracks how often an Iteration clears Gate 2 on the first pass. A declining rate signals either ambiguous intent or degraded capability files.
Tracks how often the Orchestrator must escalate back to the PO before decomposing. A high escalation rate is the most common IDF failure mode.
The primary outcome signal in IDF. An intent that ships a flag but never reaches completed status means the outcome wasn't confirmed. This is the metric that separates delivery from outcomes.
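The two rates above reduce to simple counting. A minimal sketch, assuming a cycle is represented as its ordered list of Gate 2 decisions and an intent as its final status string — both representations are hypothetical:

```python
def first_pass_approval_rate(cycles) -> float:
    """Fraction of cycles whose first iteration cleared Gate 2.
    A single-entry decision list means no rework was needed."""
    if not cycles:
        return 0.0
    return sum(1 for c in cycles if len(c) == 1) / len(cycles)

def intent_completion_rate(intent_statuses) -> float:
    """Fraction of intents that reached explicit 'completed' status.
    An intent that went Live-ON but was never closed does not count."""
    if not intent_statuses:
        return 0.0
    return sum(1 for s in intent_statuses if s == "completed") / len(intent_statuses)

assert first_pass_approval_rate([["APPROVED"],
                                 ["REQUEST ITERATION", "APPROVED"]]) == 0.5
assert intent_completion_rate(["completed", "monitoring",
                               "abandoned", "completed"]) == 0.5
```

Note the asymmetry in the second metric: both monitoring and abandoned intents lower the rate, which is the point — only a confirmed outcome counts as delivery.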
Technical debt accumulates silently and reveals itself as bugs. Context debt accumulates silently and reveals itself as agent confusion — correct behavior executed against the wrong understanding of the system. An agent running against a polluted System Memory does not know it is confused. It produces confident, wrong output.
Stale entries that were true at a point in time but no longer reflect the system. The most dangerous because they are specific and authoritative — agents trust them.
Capability files that describe patterns the team no longer uses, or miss patterns the team always uses now. Causes Builders to produce output that passes automated checks but fails Craft Review.
Run at every Context Reset signal event. Each question is a pruning decision — if the answer is no, the entry is removed.
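The pruning pass can be sketched as a filter over System Memory entries. The predicate stands in for the checklist questions; the entry strings are invented examples, not real System Memory content.

```python
def prune(entries, still_true):
    """Context Reset sketch: each entry gets a pruning decision.
    If the answer to the checklist question is no, the entry is removed."""
    return [e for e in entries if still_true(e)]

# Hypothetical System Memory entries:
memory = [
    "auth uses JWT",
    "payments call legacy gateway",     # stale: the gateway was replaced
    "flags live in config/flags.yaml",
]
still_reflects_system = lambda e: "legacy" not in e

assert prune(memory, still_reflects_system) == [
    "auth uses JWT",
    "flags live in config/flags.yaml",
]
```

The danger named above is visible here: the stale entry is specific and authoritative, so an agent would trust it until the reset removes it.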
IDF is a working framework — not a finished specification. If you are using it in practice and find that something doesn't hold, or that the model needs to extend further, the right move is to surface it.
This page is hidden by design — direct URL only, not listed in the nav. It exists for review before any promotion decision. If the model holds up under your delivery scenario, raise a promotion request.
CC BY 4.0 — free to use, adapt, and share with attribution. creativecommons.org/licenses/by/4.0/
Roberto Pillon Franco · LinkedIn