Completion Gate
How Pulse AI checks task evidence before work can move into review.
The promise
The completion gate is Pulse AI's answer to "how do I know this work is actually ready to review?" It turns completion into a testable handoff instead of a chat claim.
For a normal task, the gate validates the board record, the Knowledge Item, the deliverables, the narration, the rubric, task-specific checks, and git proof. When the gate passes with promotion, the item moves to review. It does not move to done. The reviewer still decides whether the evidence is good enough.
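As a mental model, the gate behaves like a series of named, failable checks that must all pass before promotion is offered. The sketch below is illustrative only: the check names, the task shape, and the runGate result object are assumptions, not the real contract of scripts/completion-gate.js.

```js
// Minimal sketch of the gate's shape. The check names, the task fields,
// and the result object are assumptions for illustration, not the actual
// contract of scripts/completion-gate.js.
const checks = [
  function lifecycleFields(task) {
    const ok = Boolean(task.startedAt && task.completedAt && task.activeTimeMs >= 0);
    return { ok, reason: 'lifecycle fields missing or incoherent' };
  },
  function deliverablesListed(task) {
    const ok = Array.isArray(task.deliverables) && task.deliverables.length > 0;
    return { ok, reason: 'no deliverables array on the board item' };
  },
  function gitReceiptPresent(task) {
    const ok = Boolean(task.gitReceipt && task.gitReceipt.commit);
    return { ok, reason: 'gitReceipt does not identify a commit' };
  },
];

function runGate(task) {
  const failures = checks.map((check) => check(task)).filter((r) => !r.ok);
  // A pass promotes the item to review -- never to done.
  return failures.length === 0
    ? { promote: 'review' }
    : { promote: null, failures: failures.map((f) => f.reason) };
}
```

The point is not these specific checks but the shape: each evidence area fails independently with a reason, and a clean run promotes to review, never to done.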
What the gate checks
| Evidence area | What the reviewer should expect | Source of truth |
|---|---|---|
| Lifecycle state | startedAt, completedAt, activeTimeMs, dates.completed, and review timing fields are present and coherent. | Task frontmatter and scripts/completion-gate.js |
| Role and rubric | The task is graded against the role's skills with evidence, not a generic score. | Task frontmatter plus .pulse/behavior/skills/rubrics/ |
| Knowledge Item | The task links to a KI with metadata.json, timestamps.json, overview.md, and artifacts. | .pulse/knowledge/ |
| Deliverables | The board item has a deliverables array and the artifacts exist in the KI. | scripts/finalize-deliverable.js and task frontmatter |
| Voice proof | Narration exists when voice preferences require it, and it sits beside the deck. | KI artifacts and voice settings |
| Task-specific checks | The Phase 1 manifest names runnable checks and the gate runs them. | .pulse/actions/tasks/.manifests/ |
| Git receipt | The closeout identifies the commit, pushed remote, branch, touched files, and clean-status caveat. | gitReceipt frontmatter and git output |
This is a review-readiness gate. It is deliberately stricter than "the code compiled" because Pulse AI work is often content, documentation, research, package validation, or user-facing proof.
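To make the lifecycle row concrete, here is one plausible coherence rule, written as a sketch. The field names come from the table above; the specific rules (completion after start, active time bounded by the wall-clock window, dates.completed agreeing with completedAt) are assumptions about what "coherent" could mean, not the gate's documented behavior.

```js
// Sketch of a lifecycle coherence rule. Field names match the table above;
// the rules themselves are assumptions about what "coherent" could mean.
function lifecycleIsCoherent(frontmatter) {
  const started = Date.parse(frontmatter.startedAt);
  const completed = Date.parse(frontmatter.completedAt);
  if (Number.isNaN(started) || Number.isNaN(completed)) return false;

  // Completion cannot precede the start, and active time cannot exceed
  // the wall-clock window between the two timestamps.
  if (completed < started) return false;
  if (frontmatter.activeTimeMs > completed - started) return false;

  // dates.completed should name the same day as the completion timestamp.
  const completedDay = new Date(completed).toISOString().slice(0, 10);
  return frontmatter.dates?.completed === completedDay;
}
```

Task frontmatter and scripts/completion-gate.js remain the source of truth for the actual rules.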
What a passing result does not prove
A passing gate does not prove the task is correct, polished, or launch-approved. It proves the reviewer has enough attached evidence to make that decision.
Review can still reject a gated task when the visible UI is wrong, the copy overclaims, the screenshots are stale, a Knowledge Item contradicts live source, or the wrong task was solved. The gate prevents missing evidence; review judges the evidence.
Example: request to review
Use web-fix-reset-password-memory-leak as a concrete proof thread:
- The request was a small bug closeout: verify the reset-password auth listener is unsubscribed.
- Phase 1 created task-specific validation checks before claiming the fix.
- The implementation evidence pointed to the live reset-password page and the checks that verified the listener contract (a sketch of such a check follows this list).
- scripts/finalize-deliverable.js packaged an HTML deck, narration, metadata, and overview into .pulse/knowledge/web-fix-reset-password-memory-leak/.
- The task record linked the KI through deliverables, added rubric grading, and documented the scope boundary.
- scripts/completion-gate.js became the final wall before the item could enter review.
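As flavor for what a Phase 1 check can look like, the sketch below asserts the listener contract directly: mounting the page must subscribe, and the unmount path must remove exactly what was added. The auth client, event name, and mount helper here are hypothetical stand-ins; the real checks live in .pulse/actions/tasks/.manifests/.

```js
// Hypothetical Phase 1 check. The auth client, event name, and mount helper
// are stand-ins, not the project's real API; the contract under test is the
// task's claim that unmounting the reset-password page removes its listener.
const assert = require('node:assert');
const { EventEmitter } = require('node:events');

function checkListenerContract(authClient, mountResetPasswordPage) {
  const before = authClient.listenerCount('auth-state-changed');

  const unmount = mountResetPasswordPage(authClient);
  assert.ok(
    authClient.listenerCount('auth-state-changed') > before,
    'page never subscribed, so the check would be vacuous'
  );

  unmount();
  assert.strictEqual(
    authClient.listenerCount('auth-state-changed'),
    before,
    'auth listener leaked after unmount'
  );
}

// Exercise the check against a stand-in emitter and mount function.
const fakeAuth = new EventEmitter();
checkListenerContract(fakeAuth, (auth) => {
  const handler = () => {};
  auth.on('auth-state-changed', handler);
  return () => auth.off('auth-state-changed', handler);
});
```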
The useful part is not the artifact count. The useful part is the chain: request, validation, current source, KI, deliverables, narration, board link, and review status all point at the same claim.
Where media belongs
Completion-gate media should be evidence, not decoration. Good media includes:
| Media need | Best source |
|---|---|
| Gate output | Terminal capture or saved validation log from the task's final run. |
| Board proof | Screenshot of the task drawer showing status, deliverables, and review history. |
| KI proof | Screenshot of the Knowledge Item artifact list or the deck cover slide. |
| Product proof | Screenshot or screen recording of the actual product surface changed by the task. |
If media is not available, say that plainly in the task evidence. Do not create fake screenshots or publish internal closeout checklists as public docs.
Reviewer use
Before approving a task, ask four questions:
- Does the gate output match the current task?
- Do the KI and deliverables prove the same scope as the request?
- Are the task-specific checks strong enough for the risk?
- Is anything important missing from the review trail?
If any answer is weak, use Needs Revision or Rejected and keep the finding on the same task.