Review Queue
Use Pulse AI review to decide whether completed agent work moves to done, revision, or the next dispatch.
Operating model outline
| Page | Reader job | Unique section | Media need |
|---|---|---|---|
| Workday | Observe completed work entering review | Done with the day | Workday screenshot |
| Dispatch | Avoid relaunching review-pending work | Blocked-work safeguards | Dispatch diagram |
| Board | Preserve verdicts on the task record | Review movement on the board | Status flow visual |
| Review Queue | Make the verdict | Verdicts and movement | Evidence checklist |
Review changes the next dispatch
The Review Queue is where agent work becomes accountable. It gives the founder or lead a place to inspect the task, evidence, Knowledge Item, deliverables, and prior review history before anything moves to done.
Review is not ceremony. It is the human decision point that separates "an agent produced something" from "this work is accepted."
A review item is not unfinished just because it is not done. It is waiting for a verdict. Launch planning and Workday sizing should count review-pending work separately from open todos.
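Counting review-pending work separately can be sketched as a small grouping step. This is a hypothetical illustration: the status names (`todo`, `in_progress`, `review`, `done`) and the `size_workday` helper are assumptions for the sketch, not the product's actual schema.

```python
from collections import Counter

def size_workday(tasks):
    """Group tasks so review-pending work is not counted as open todos."""
    counts = Counter(t["status"] for t in tasks)
    return {
        # Open work the Workday still has to absorb.
        "open": counts.get("todo", 0) + counts.get("in_progress", 0),
        # Finished work waiting on a verdict -- tracked separately.
        "awaiting_verdict": counts.get("review", 0),
        "done": counts.get("done", 0),
    }

tasks = [
    {"id": "web-1", "status": "review"},
    {"id": "web-2", "status": "todo"},
    {"id": "web-3", "status": "done"},
]
sizing = size_workday(tasks)  # review-pending item is not an open todo
```

The point of the separation is that the one `review` item never inflates the open count, so launch planning sees one open todo, not two.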
What evidence belongs in review
| Review target | What to check |
|---|---|
| Original request | Did the task solve the actual user need and stay inside scope? |
| Completion notes | Do the notes explain what changed, what was excluded, and what remains? |
| Prior reviews | Are old findings resolved or still open? |
| Knowledge Item | Does the KI exist, set needsReview, and contain current artifacts? |
| Deliverables | Do the HTML, Markdown, screenshots/media, and narration match the closeout? |
| Validation | Did the task-specific checks, build, runtime proof, or source verification actually run? |
| Git proof | Does gitReceipt identify the pushed task-owned commit and note any dirty-tree caveat? |
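The table above can be sketched as a pre-verdict checklist function. This is a hedged sketch: the field names (`request`, `notes`, `ki`, `deliverables`, `validation_ran`, `gitReceipt`) are illustrative assumptions, not the product's real data model.

```python
def missing_evidence(item):
    """Return the checklist rows that lack evidence for this review item."""
    checks = {
        "original_request": bool(item.get("request")),
        "completion_notes": bool(item.get("notes")),
        # The KI must exist and carry an explicit needsReview flag.
        "knowledge_item": item.get("ki", {}).get("needsReview") is not None,
        "deliverables": bool(item.get("deliverables")),
        "validation": bool(item.get("validation_ran")),
        # Git proof means a pushed, task-owned commit in the receipt.
        "git_proof": bool(item.get("gitReceipt", {}).get("commit")),
    }
    return [name for name, ok in checks.items() if not ok]
```

A reviewer would only reach for a verdict once `missing_evidence(item)` comes back empty; anything in the list is a gap to resolve first.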
Read older review entries before adding a new one. Preserving review history matters because repeated reviews often reveal whether a task is improving or just cycling.
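The improving-versus-cycling question can be made concrete by comparing open findings across review entries. A minimal sketch, assuming each review entry carries an `open_findings` list (a hypothetical field for illustration):

```python
def recurring_findings(reviews):
    """Findings that reappear in a later review entry (oldest first)."""
    seen, recurring = set(), set()
    for review in reviews:
        current = set(review["open_findings"])
        # Anything already raised before is a sign of cycling, not progress.
        recurring |= current & seen
        seen |= current
    return recurring
```

An empty result suggests each review retired its findings; a non-empty one names exactly which findings keep coming back.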
Verdicts and movement
| Verdict | Use it when |
|---|---|
| Approve | The current source and artifacts satisfy the request, and any caveats are non-blocking. |
| Needs Revision | The direction is right, but a concrete fix is required before acceptance. |
| Reject | The submission solves the wrong problem, lacks proof, breaks scope, or cannot be trusted. |
Approval should not hide launch blockers. If follow-on work is real, name the owner task. If the task itself is wrong, keep the finding on that task instead of spawning a duplicate task that carries none of the history.
After a Needs Revision or Reject verdict, dispatch the existing owner when the same task can absorb the fix. Create a new owner only for a concrete missing prerequisite or follow-on that should not be hidden inside the current task.
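The verdict-to-movement rule can be sketched as a small routing function. This is a hypothetical illustration of the policy above; the verdict strings and return shape are assumptions, not a product API.

```python
def route(verdict, has_new_prerequisite=False):
    """Map a review verdict to the task's next status and dispatch target."""
    if verdict == "approve":
        # Accepted work moves to done; nothing is re-dispatched.
        return {"status": "done", "dispatch": None}
    if verdict not in ("needs_revision", "reject"):
        raise ValueError(f"unknown verdict: {verdict}")
    # The existing owner absorbs the fix unless a concrete missing
    # prerequisite justifies a new owner task.
    return {
        "status": "revision",
        "dispatch": "new_owner_task" if has_new_prerequisite else "existing_owner",
    }
```

Note the default: revision work goes back to the existing owner, and a new owner task is the exception that must be argued for, not the reflex.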
Example: review decision trail
In a request-to-evidence-to-review thread like web-fix-reset-password-memory-leak, the reviewer checks:
- The request: verify a reset-password listener cleanup.
- The evidence: task-specific checks and current source proof.
- The Knowledge Item: metadata.json, overview.md, HTML deck, and narration.
- The deliverables link on the board item.
- The scope boundary: the task did not claim the sibling recovery-mode fix.
- The verdict: approve only if the listener cleanup is proven and the scope boundary is truthful.
That pattern scales. Whether the task is content, UI, packaging, billing, or research, review follows the same chain: claim, evidence, artifact, current source, decision.
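The chain can be sketched as an ordered gate: each link must hold before the next is examined, and a failure names the broken link. The link names come from the chain above; the task shape is a hypothetical illustration.

```python
CHAIN = ["claim", "evidence", "artifact", "current_source", "decision"]

def first_broken_link(task):
    """Return the first unmet link in the review chain, or None if all hold."""
    for link in CHAIN:
        if not task.get(link):
            return link
    return None
```

Walking the links in order keeps the review honest: there is no point debating the decision when the evidence link is already broken.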
Media needs
Review education benefits from screenshots that show the decision surface:
| Screenshot | Why it helps |
|---|---|
| Task drawer in review | Shows where completion notes, deliverables, and review history appear. |
| KI review flag | Shows how needsReview and reviewed map to Knowledge review. |
| Deliverable deck | Shows what the reviewer opens before accepting. |
| Gate output | Shows the difference between validated review readiness and final approval. |
Use real product captures when available. If the surface is unavailable, document the missing media instead of inventing a mock proof state.