Workflow

Completion Gate

How Pulse AI checks task evidence before work can move into review.

The promise

The completion gate is Pulse AI's answer to "how do I know this work is actually ready to review?" It turns completion into a testable handoff instead of a chat claim.

For a normal task, the gate validates the board record, the Knowledge Item, the deliverables, the narration, the rubric, task-specific checks, and git proof. When it passes with promotion, the item moves to review. It does not move to done. The reviewer still decides whether the evidence is good enough.

What the gate checks

| Evidence area | What the reviewer should expect | Source of truth |
| --- | --- | --- |
| Lifecycle state | startedAt, completedAt, activeTimeMs, dates.completed, and review timing fields are present and coherent. | Task frontmatter and scripts/completion-gate.js |
| Role and rubric | The task is graded against the role's skills with evidence, not a generic score. | Task frontmatter plus .pulse/behavior/skills/rubrics/ |
| Knowledge Item | The task links to a KI with metadata.json, timestamps.json, overview.md, and artifacts. | .pulse/knowledge/ |
| Deliverables | The board item has a deliverables array and the artifacts exist in the KI. | scripts/finalize-deliverable.js and task frontmatter |
| Voice proof | Narration exists when voice preferences require it, and it sits beside the deck. | KI artifacts and voice settings |
| Task-specific checks | The Phase 1 manifest names runnable checks and the gate runs them. | .pulse/actions/tasks/.manifests/ |
| Git receipt | The closeout identifies the commit, pushed remote, branch, touched files, and clean-status caveat. | gitReceipt frontmatter and git output |

This is a review readiness gate. It is deliberately stricter than "the code compiled" because Pulse AI work is often content, documentation, research, package validation, or user-facing proof.
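To make the lifecycle-state row concrete, here is a minimal sketch of what a coherence check could look like. The field names (startedAt, completedAt, activeTimeMs) come from the table above, but the function name and return shape are assumptions; the real scripts/completion-gate.js may implement this differently.

```javascript
// Hypothetical lifecycle-state check, modeled on the evidence table.
// Returns { ok, problems } rather than throwing, so the gate can
// report every missing or incoherent field at once.
function checkLifecycle(frontmatter) {
  const problems = [];
  for (const field of ["startedAt", "completedAt", "activeTimeMs"]) {
    if (frontmatter[field] == null) problems.push(`missing ${field}`);
  }
  const started = Date.parse(frontmatter.startedAt);
  const completed = Date.parse(frontmatter.completedAt);
  if (!Number.isNaN(started) && !Number.isNaN(completed) && completed < started) {
    problems.push("completedAt precedes startedAt");
  }
  if (typeof frontmatter.activeTimeMs === "number" && frontmatter.activeTimeMs < 0) {
    problems.push("negative activeTimeMs");
  }
  return { ok: problems.length === 0, problems };
}
```

The point of the accumulate-everything style is the same as the gate's: one run should surface all evidence gaps, not fail on the first one.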

What a passing result does not prove

A passing gate does not prove the task is correct, polished, or launch-approved. It proves the reviewer has enough attached evidence to make that decision.

Review can still reject a gated task when the visible UI is wrong, the copy overclaims, the screenshots are stale, a Knowledge Item contradicts live source, or the wrong task was solved. The gate prevents missing evidence; review judges the evidence.

Example: from request to review

Use web-fix-reset-password-memory-leak as a concrete proof thread:

  1. The request was a small bug closeout: verify the reset-password auth listener is unsubscribed.
  2. Phase 1 created task-specific validation checks before claiming the fix.
  3. The implementation evidence pointed to the live reset-password page and the checks that verified the listener contract.
  4. scripts/finalize-deliverable.js packaged an HTML deck, narration, metadata, and overview into .pulse/knowledge/web-fix-reset-password-memory-leak/.
  5. The task record linked the KI through deliverables, added rubric grading, and documented the scope boundary.
  6. scripts/completion-gate.js became the final wall before the item could enter review.

The useful part is not the artifact count. The useful part is the chain: request, validation, current source, KI, deliverables, narration, board link, and review status all point at the same claim.

Where media belongs

Completion-gate media should be evidence, not decoration. Good media includes:

| Media need | Best source |
| --- | --- |
| Gate output | Terminal capture or saved validation log from the task's final run. |
| Board proof | Screenshot of the task drawer showing status, deliverables, and review history. |
| KI proof | Screenshot of the Knowledge Item artifact list or the deck cover slide. |
| Product proof | Screenshot or screen recording of the actual product surface changed by the task. |

If media is not available, say that plainly in the task evidence. Do not create fake screenshots or publish internal closeout checklists as public docs.

Reviewer use

Before approving a task, ask four questions:

  1. Does the gate output match the current task?
  2. Do the KI and deliverables prove the same scope as the request?
  3. Are the task-specific checks strong enough for the risk?
  4. Is anything important missing from the review trail?

If any answer is weak, use Needs Revision or Rejected and keep the finding on the same task.

Related links