Playbooks and Flows API

Playbooks and Flows are control-plane resources. They coordinate normal Kheish runs rather than replacing them.

Endpoint inventory

Playbooks:
  • GET /v1/playbooks
  • POST /v1/playbooks
  • POST /v1/playbooks/validate
  • GET /v1/playbooks/{playbook_id}
  • POST /v1/playbooks/{playbook_id}/publish
  • POST /v1/playbooks/{playbook_id}/revoke
Flows:
  • GET /v1/flows
  • POST /v1/flows
  • GET /v1/flows/{flow_id}
  • POST /v1/flows/{flow_id}/cancel
  • GET /v1/flows/{flow_id}/stream

Playbook manifest

POST /v1/playbooks/validate and POST /v1/playbooks both accept:
{
  "manifest": {
    "playbook_id": "ops-inspection",
    "version": "2026-04-23",
    "title": "Operator inspection",
    "objective": "Inspect a workspace and report what was verified.",
    "phases": [
      {
        "phase_id": "inspect",
        "objective": "Inspect the workspace root.",
        "acceptance_criteria": ["The run output states what was verified."]
      }
    ],
    "acceptance_criteria": ["A run is created and inspectable through the daemon."]
  }
}
The daemon computes a digest over the immutable manifest body. Creating the same playbook_id + version + digest again is idempotent; creating the same playbook_id + version with a different digest is rejected. Manifest fields such as tools, runtime_defaults, and scopes are stored as digest-covered governance hints. Flow start still executes a normal SubmitInputRequest; the daemon does not enforce a separate Playbook tool policy or credential scope at the Flow level.
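The create-time idempotency rule above can be sketched as follows. The daemon's actual canonicalization scheme and digest format are not specified here, so serializing with sorted keys and prefixing `sha256-` are assumptions for illustration only; the `check_create` helper is hypothetical, not a daemon API.

```python
import hashlib
import json

def manifest_digest(manifest: dict) -> str:
    # Assumed canonicalization: deterministic JSON with sorted keys.
    # The real daemon may canonicalize the manifest body differently.
    body = json.dumps(manifest, sort_keys=True, separators=(",", ":"))
    return "sha256-" + hashlib.sha256(body.encode("utf-8")).hexdigest()

def check_create(store: dict, manifest: dict) -> str:
    """Apply the documented rule: same playbook_id + version + digest is
    idempotent; same playbook_id + version with a new digest is rejected."""
    key = (manifest["playbook_id"], manifest["version"])
    digest = manifest_digest(manifest)
    prior = store.get(key)
    if prior is None:
        store[key] = digest
        return "created"
    return "idempotent" if prior == digest else "rejected"
```

Resubmitting an identical manifest is safe; editing any digest-covered field without bumping the version is what triggers the rejection.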

Publish and revoke

POST /v1/playbooks/{playbook_id}/publish accepts:
{
  "version": "2026-04-23",
  "digest": "sha256-digest",
  "status": "active",
  "evidence_refs": [
    {
      "kind": "run",
      "id": "run-1",
      "description": "real daemon validation"
    }
  ]
}
Allowed publish statuses are verified, canary, and active. Publishing to one of those statuses requires evidence_refs. POST /v1/playbooks/{playbook_id}/revoke accepts:
{
  "version": "2026-04-23",
  "digest": "sha256-digest",
  "reason": "replaced by a safer version",
  "evidence_refs": []
}
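A client-side pre-check of the publish rules above can be sketched like this. The daemon remains authoritative and may enforce more than these two checks; `validate_publish` is a hypothetical helper, not part of the daemon.

```python
# Statuses the publish endpoint accepts, per the documentation above.
ALLOWED_PUBLISH_STATUSES = {"verified", "canary", "active"}

def validate_publish(body: dict) -> list[str]:
    """Return a list of rule violations for a publish request body."""
    errors = []
    if body.get("status") not in ALLOWED_PUBLISH_STATUSES:
        errors.append(
            f"status must be one of {sorted(ALLOWED_PUBLISH_STATUSES)}"
        )
    if not body.get("evidence_refs"):
        errors.append("publishing requires at least one evidence ref")
    return errors
```

An empty list means the body passes both documented checks; a non-empty list names each violated rule.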

Start a Flow

POST /v1/flows accepts:
{
  "flow_id": "flow-demo-1",
  "idempotency_key": "flow-demo-1",
  "playbook_ref": {
    "playbook_id": "ops-inspection",
    "version": "2026-04-23",
    "digest": "sha256-digest"
  },
  "session_id": "flow-demo",
  "request": {
    "provider": "openai",
    "source_plugin": "daemon",
    "source_kind": "api",
    "actor_id": "operator",
    "content": "Inspect the workspace root and summarize what you verified.",
    "input_items": [],
    "attachments": [],
    "generation": null,
    "metadata": null,
    "binding_keys": [],
    "reply_targets": [],
    "reply_plugin": null,
    "reply_address": null
  },
  "metadata": {},
  "evidence_refs": []
}
The request field is the normal SubmitInputRequest shape used by session run creation. The daemon validates and normalizes that request through the existing session-run path. Flow start writes daemon-owned kheish_flow metadata into the underlying run. Callers must not provide that metadata key themselves. The metadata includes the Flow id, Playbook id, version, digest, and a daemon-generated nonce used for recovery matching. Retries are idempotent only when the submitted flow_id, idempotency_key, Playbook reference, session id, input request, Flow metadata, and evidence refs match the existing Flow.
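The retry rule above can be sketched as a field-by-field comparison. This assumes structural equality over the documented fields; exactly how the daemon compares the Flow metadata (which includes daemon-generated values the caller never sees) is not pinned down here, so the top-level caller-supplied `metadata` field stands in for it. `flow_retry_matches` is a hypothetical helper.

```python
def flow_retry_matches(existing: dict, retry: dict) -> bool:
    """True when a retried POST /v1/flows body matches the stored Flow
    on every documented field, making the retry idempotent rather than
    a conflict."""
    keys = (
        "flow_id",
        "idempotency_key",
        "playbook_ref",   # Playbook id + version + digest
        "session_id",
        "request",        # the full SubmitInputRequest
        "metadata",       # caller-supplied metadata (assumption, see lead-in)
        "evidence_refs",
    )
    return all(existing.get(k) == retry.get(k) for k in keys)
```

Any single divergence, such as a changed session id, is enough to make the retry non-idempotent.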

Flow status

Flow status is derived from scoped daemon primitives, not from a persisted flow.status field. The scope starts with the root run and includes sidechain agents spawned from that run, runs in those child sessions, scoped session tasks, and approval/question ids from current run state and historical run events. FlowView.status uses Flow statuses such as succeeded; the embedded run, when present, is a normal RunView with normal run statuses such as completed. Any interrupted non-root scoped run projects the Flow to interrupted; an interrupted root can still project to succeeded when completed child work provides the useful terminal outcome.
  • no run reference yet -> pending
  • cancelled before run scheduling -> cancelled
  • failed scoped run or failed scoped task -> failed
  • cancelled scoped run or cancelled scoped task -> cancelled
  • waiting scoped run or blocked scoped task -> waiting
  • queued/running scoped run or pending/in-progress scoped task -> running
  • completed scoped work with no failed/cancelled/running/waiting work -> succeeded
  • interrupted root run with no more useful scoped outcome -> interrupted
  • missing referenced run -> unknown
Failure and cancellation are fail-closed: a scoped failed task prevents the Flow from appearing succeeded just because the root run produced a positive final answer. GET /v1/flows/{flow_id}/stream is a thin proxy over the referenced run stream. It does not create a second Flow event bus.
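The projection rules above can be sketched as a precedence check over scoped statuses. The tie-break order between rules is an assumption where the text does not pin it down (the list is followed top to bottom, with the non-root interrupted rule slotted after the fail-closed checks); `project_flow_status` is a hypothetical helper, not the daemon's implementation.

```python
def project_flow_status(runs: list[tuple[str, bool]], tasks: list[str]) -> str:
    """Project a Flow status from scoped primitives.

    runs:  (normal run status, is_root) pairs for every scoped run.
    tasks: statuses of scoped session tasks.
    """
    run_statuses = [s for s, _ in runs]
    if not runs and not tasks:
        return "pending"              # no run reference yet
    if "failed" in run_statuses or "failed" in tasks:
        return "failed"               # fail-closed
    if "cancelled" in run_statuses or "cancelled" in tasks:
        return "cancelled"
    if any(s == "interrupted" and not root for s, root in runs):
        return "interrupted"          # any interrupted non-root run
    if "waiting" in run_statuses or "blocked" in tasks:
        return "waiting"
    if any(s in ("queued", "running") for s in run_statuses) or any(
        s in ("pending", "in-progress") for s in tasks
    ):
        return "running"
    if "completed" in run_statuses or "completed" in tasks:
        return "succeeded"            # completed child work counts
    if any(s == "interrupted" and root for s, root in runs):
        return "interrupted"          # interrupted root, nothing more useful
    return "unknown"
```

Note how an interrupted root paired with a completed child still projects to succeeded, while a failed scoped task overrides a completed root, matching the fail-closed rule.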

Evidence Used

  • DTOs and status mapping: crates/kheish-daemon/src/playbooks.rs
  • API routes: crates/kheish-daemon/src/api/handlers.rs
  • Run scheduling path: crates/kheish-daemon/src/state/playbook_workflow.rs

Evidence Note

  • Code verified: crates/kheish-daemon/src/playbooks.rs, crates/kheish-daemon/src/state/playbook_workflow.rs, crates/kheish-daemon/src/api/handlers.rs, crates/kheish-daemon/src/cli/commands/playbooks.rs.
  • CLI/API verified: endpoint inventory and command names checked against implemented routes and CLI commands.
  • Daemon live tested for this note: no; deterministic API tests cover Flow start/get/list/cancel and scoped projection.
  • Provider-specific tested for this note: no; API projection logic is provider-neutral.