Product Surfaces

Review Queue

Use Pulse AI review to decide whether completed agent work moves to done, revision, or the next dispatch.

Operating model outline

| Page | Reader job | Unique section | Media need |
| --- | --- | --- | --- |
| Workday | Observe completed work entering review | Done with the day | Workday screenshot |
| Dispatch | Avoid relaunching review-pending work | Blocked-work safeguards | Dispatch diagram |
| Board | Preserve verdicts on the task record | Review movement on the board | Status flow visual |
| Review Queue | Make the verdict | Verdicts and movement | Evidence checklist |

Review changes the next dispatch

The Review Queue is where agent work becomes accountable. It gives the founder or lead a place to inspect the task, evidence, Knowledge Item, deliverables, and prior review history before anything moves to done.

Review is not ceremony. It is the human decision point that separates "an agent produced something" from "this work is accepted."

A review item is not unfinished just because it is not done. It is waiting for a verdict. Launch planning and Workday sizing should count review-pending work separately from open todos.
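One way to make that separation concrete is to tally review-pending items apart from open todos when sizing a Workday. A minimal sketch, assuming a simple `status` field on each task (the statuses and field names here are illustrative, not the product's actual schema):

```python
from collections import Counter

# Hypothetical task records; "review" is its own state, not "open" or "done".
tasks = [
    {"id": "t1", "status": "open"},
    {"id": "t2", "status": "review"},
    {"id": "t3", "status": "done"},
    {"id": "t4", "status": "review"},
]

counts = Counter(task["status"] for task in tasks)

open_todos = counts["open"]        # work not yet finished
review_pending = counts["review"]  # finished work waiting for a verdict

print(f"open: {open_todos}, awaiting review: {review_pending}")
```

Sizing against `open_todos` alone undercounts the day; the review-pending bucket still demands a human decision.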

What evidence belongs in review

| Review target | What to check |
| --- | --- |
| Original request | Did the task solve the actual user need and stay inside scope? |
| Completion notes | Do the notes explain what changed, what was excluded, and what remains? |
| Prior reviews | Are old findings resolved or still open? |
| Knowledge Item | Does the KI exist, set needsReview, and contain current artifacts? |
| Deliverables | Do the HTML, Markdown, screenshots/media, and narration match the closeout? |
| Validation | Did the task-specific checks, build, runtime proof, or source verification actually run? |
| Git proof | Does gitReceipt identify the pushed, task-owned commit and note any dirty-tree caveat? |
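The evidence checks above can be expressed as a readiness gate over the Knowledge Item record. A minimal sketch, assuming a hypothetical metadata shape (the field names `needsReview`, `gitReceipt`, and `deliverables` follow the checklist, but the actual KI schema may differ):

```python
# Hypothetical Knowledge Item metadata; illustrative shape only.
ki = {
    "needsReview": True,
    "gitReceipt": {"commit": "abc1234", "dirtyTree": False},
    "deliverables": ["overview.md", "deck.html", "narration.mp3"],
}

def review_ready(ki: dict) -> list[str]:
    """Return the evidence gaps that block a verdict; empty means ready."""
    gaps = []
    if not ki.get("needsReview"):
        gaps.append("KI does not set needsReview")
    receipt = ki.get("gitReceipt") or {}
    if not receipt.get("commit"):
        gaps.append("gitReceipt does not identify a pushed commit")
    if not ki.get("deliverables"):
        gaps.append("no deliverables attached")
    return gaps

print(review_ready(ki))  # → [] when every check passes
```

A non-empty gap list is a reason to withhold the verdict, not a verdict in itself; the reviewer still reads the request, notes, and prior reviews.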

Read older review entries before adding a new one. Preserving review history matters because repeated reviews often reveal whether a task is improving or just cycling.

Verdicts and movement

| Verdict | Use it when |
| --- | --- |
| Approve | The current source and artifacts satisfy the request, and any caveats are non-blocking. |
| Needs Revision | The direction is right, but a concrete fix is required before acceptance. |
| Reject | The submission solves the wrong problem, lacks proof, breaks scope, or cannot be trusted. |

Approval should not hide launch blockers. If follow-on work is real, name the owner task. If the task itself is wrong, record the finding on that task instead of spawning a duplicate that obscures it.

After a Needs Revision or Rejected verdict, dispatch the existing owner when the same task can absorb the fix. Create a new owner only for a concrete missing prerequisite or follow-on that should not be hidden inside the current task.
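The routing rule above can be sketched as a small decision function. The verdict names come from the table; the action strings and the `same_task_can_absorb_fix` flag are illustrative, not product API:

```python
from enum import Enum

class Verdict(Enum):
    APPROVE = "approve"
    NEEDS_REVISION = "needs_revision"
    REJECT = "reject"

def next_action(verdict: Verdict, same_task_can_absorb_fix: bool) -> str:
    """Route a verdict to the next dispatch, per the rules above."""
    if verdict is Verdict.APPROVE:
        return "move to done"
    # Needs Revision / Reject: prefer re-dispatching the existing owner.
    if same_task_can_absorb_fix:
        return "dispatch existing owner task"
    return "create new owner task for the missing prerequisite"

print(next_action(Verdict.NEEDS_REVISION, same_task_can_absorb_fix=True))
```

The default branch is deliberately the existing owner: a new task is the exception, reserved for a concrete prerequisite or follow-on.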

Example: review decision trail

In a request-to-evidence-to-review thread like web-fix-reset-password-memory-leak, the reviewer checks:

  1. The request: verify a reset-password listener cleanup.
  2. The evidence: task-specific checks and current source proof.
  3. The Knowledge Item: metadata.json, overview.md, HTML deck, and narration.
  4. The deliverables link on the board item.
  5. The scope boundary: the task did not claim the sibling recovery-mode fix.
  6. The verdict: approve only if the listener cleanup is proven and the scope boundary is truthful.
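The checks above can be read as a short gate chain in which any failed link blocks approval. A sketch, with illustrative task fields and check names (not the product's data model):

```python
def review(task: dict) -> str:
    """Walk the decision trail: any failed check blocks approval."""
    checks = [
        ("request", task.get("solves_stated_request", False)),
        ("evidence", task.get("checks_ran", False)),
        ("knowledge_item", task.get("ki_complete", False)),
        ("deliverables", task.get("deliverables_linked", False)),
        ("scope", task.get("scope_truthful", False)),
    ]
    failed = [name for name, ok in checks if not ok]
    return "approve" if not failed else f"needs revision: {', '.join(failed)}"

task = {
    "solves_stated_request": True,
    "checks_ran": True,
    "ki_complete": True,
    "deliverables_linked": True,
    "scope_truthful": True,
}
print(review(task))  # → approve
```

Missing fields default to a failed check, which mirrors the document's stance: absent evidence is a reason to withhold approval, not to assume it.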

That pattern scales. Whether the task is content, UI, packaging, billing, or research, review follows the same chain: claim, evidence, artifact, current source, decision.

Media needs

Review education benefits from screenshots that show the decision surface:

| Screenshot | Why it helps |
| --- | --- |
| Task drawer in review | Shows where completion notes, deliverables, and review history appear. |
| KI review flag | Shows how needsReview and reviewed map to Knowledge review. |
| Deliverable deck | Shows what the reviewer opens before accepting. |
| Gate output | Shows the difference between validated review readiness and final approval. |

Use real product captures when available. If the surface is unavailable, document the missing media instead of inventing a mock proof state.

Related links