
Quickstart

This page is the shortest honest path from a fresh checkout to:
  • one working daemon
  • one created session
  • one completed run
You have two supported bootstrap paths:
  • Local binary: fastest when you are already working from source
  • Docker: same daemon model, but with the container boundary and file-backed secrets from the start
For deeper operator guidance after the first successful run, continue with Running the daemon, Docker and containers, and Security and auth.

What you need first

Run the commands from the repository root. For the examples below, you need one provider credential exported in your shell. The examples use OpenAI with the gpt-5.4 model. If you keep credentials in a repository-local .env, load it before either path:
set -a
source .env
set +a
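If you have not created that .env file yet, a minimal version for these examples looks like the sketch below. The variable name matches what the commands in both paths read; the value shown is a placeholder, not a real key.

```shell
# .env — loaded into the shell by the set -a / source / set +a sequence above.
# Keep this file out of version control.
OPENAI_API_KEY="sk-replace-with-your-real-key"
```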

Path A: local binary

Use this when you want the fastest path from a checkout to one real daemon.

1. Build the daemon

cargo build -p kheish-daemon

2. Create isolated local directories

mkdir -p .kheish-daemon-data .kheish-workspace

3. Start the daemon

./target/debug/kheish-daemon serve \
  --bind 127.0.0.1:4000 \
  --state-root .kheish-daemon-data \
  --workspace-root .kheish-workspace \
  --provider openai \
  --model gpt-5.4 \
  --api-key "$OPENAI_API_KEY" \
  --mcp-discovery disabled
Keep that process running in its own terminal. The explicit flags keep the quickstart deterministic on a fresh machine:
  • one explicit provider
  • one explicit model
  • one explicit API key
  • no ambient MCP import from $HOME/.codex
If you want named routes, daemon-managed secret slots, or account-backed auth instead of direct API-key startup, continue with Running the daemon.

4. Verify the daemon is healthy

In another terminal:
curl http://127.0.0.1:4000/healthz
curl http://127.0.0.1:4000/readyz
./target/debug/kheish-daemon runtime get
This quickstart binds to 127.0.0.1, so the daemon can stay in --http-auth-mode auto without explicit bearer tokens. Treat that as a localhost-only bootstrap path. If you expose the control plane beyond loopback, configure auth explicitly instead of copying this shape unchanged.
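If you script this step, the probes can be polled until the daemon answers instead of curling once and eyeballing the result. The sketch below is a local quickstart helper, not a kheish-daemon subcommand, and assumes curl is installed.

```shell
# Poll a URL until it answers successfully or the retry budget runs out.
# wait_for_ready is a quickstart helper, not a kheish-daemon subcommand.
wait_for_ready() {
  url="$1"
  tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      echo "ready: $url"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out waiting for $url" >&2
  return 1
}

# Example: wait_for_ready http://127.0.0.1:4000/readyz && echo "daemon is up"
```

Conventionally, /readyz signals readiness to accept traffic, which makes it the natural gate before creating the first session.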

5. Create one session

./target/debug/kheish-daemon sessions create demo

6. Submit one run and wait for it

./target/debug/kheish-daemon sessions input demo \
  --model openai/gpt-5.4 \
  --wait \
  "Do not use any tools. Reply with QUICKSTART_OK and one short sentence confirming that the daemon completed a real model run."
The CLI accepts model selectors in either of these shapes:
  • <model> — a bare model name, such as gpt-5.4
  • <route_id>/<model> — an explicit route and model, such as openai/gpt-5.4
The explicit <route_id>/<model> form is the better quickstart default because the selected route stays obvious in logs and saved requests.
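For scripts that accept either shape, normalizing to the explicit form is a one-liner. full_selector below is a local illustration helper, not part of the CLI, and openai is simply the route id used elsewhere in this quickstart.

```shell
# Normalize a model selector to the explicit <route_id>/<model> form.
# full_selector is a quickstart helper, not a kheish-daemon subcommand.
full_selector() {
  selector="$1"
  default_route="${2:-openai}"
  case "$selector" in
    */*) echo "$selector" ;;                # already route-qualified
    *)   echo "$default_route/$selector" ;; # bare model: prepend the default route
  esac
}

full_selector gpt-5.4          # prints: openai/gpt-5.4
full_selector openai/gpt-5.4   # prints: openai/gpt-5.4
```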

Path B: Docker

Use this when you want the same daemon flow, but with the container boundary, file-backed secrets, and health probes in place from the start, using the image built from this repository’s Docker assets. The repository ships the container assets used below.

1. Build the image

docker compose -f docker/compose.yaml build daemon

2. Prepare secret files

mkdir -p docker/secrets

docker run --rm \
  --entrypoint /usr/local/bin/kheish-daemon \
  kheish-daemon:local \
  secrets generate > docker/secrets/auth-store-master-key.txt

docker run --rm \
  --entrypoint /bin/sh \
  kheish-daemon:local \
  -lc 'tr -dc "A-Za-z0-9" < /dev/urandom | head -c 48' \
  > docker/secrets/admin-token.txt

chmod 600 docker/secrets/auth-store-master-key.txt docker/secrets/admin-token.txt
These stay on the host. Compose reads them and mounts them into the container as Docker-managed secret files under /run/secrets/.
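Before bootstrapping the route secret, it is worth a quick sanity check that both files are non-empty and owner-only. check_secret is a local helper sketch, not a daemon command, and assumes a find that supports -maxdepth (GNU or BSD).

```shell
# Verify a secret file exists, is non-empty, and has exactly chmod-600 permissions.
# check_secret is a quickstart helper, not a kheish-daemon subcommand.
check_secret() {
  f="$1"
  [ -s "$f" ] || { echo "missing or empty: $f" >&2; return 1; }
  [ -n "$(find "$f" -maxdepth 0 -perm 600)" ] \
    || { echo "loose permissions: $f" >&2; return 1; }
  echo "ok: $f"
}

# Example:
#   check_secret docker/secrets/auth-store-master-key.txt
#   check_secret docker/secrets/admin-token.txt
```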

3. Bootstrap the route secret into the daemon state volume

docker compose -f docker/compose.yaml run --rm \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  daemon \
  secrets set openai.prod \
    --offline \
    --state-root /var/lib/kheish/state \
    --provider openai \
    --from-env OPENAI_API_KEY

4. Start the daemon container

docker compose -f docker/compose.yaml up -d

5. Verify the daemon is healthy

curl http://127.0.0.1:4000/healthz
curl http://127.0.0.1:4000/readyz
curl -H "Authorization: Bearer $(cat docker/secrets/admin-token.txt)" \
  http://127.0.0.1:4000/v1/runtime
The shipped Compose file binds the control plane on 127.0.0.1:4000 and keeps bearer auth enabled. That is a better container default than relying on localhost-only auth auto-detection. It also wires the same admin token into the in-container CLI client, so the docker compose exec daemon kheish-daemon ... commands below authenticate against the daemon without extra flags.

6. Create one session

docker compose -f docker/compose.yaml exec daemon \
  kheish-daemon sessions create demo

7. Submit one run and wait for it

docker compose -f docker/compose.yaml exec daemon \
  kheish-daemon sessions input demo \
    --model openai/gpt-5.4 \
    --wait \
    "Do not use any tools. Reply with QUICKSTART_OK and one short sentence confirming that the daemon completed a real model run."

Inspect the result

Whichever path you used, these are the first commands to reach for after a successful run:
./target/debug/kheish-daemon runs list --session-id demo
./target/debug/kheish-daemon sessions get demo
./target/debug/kheish-daemon sessions events demo
If you used the Docker path and did not build the host binary, run the same commands through Compose instead:
docker compose -f docker/compose.yaml exec daemon kheish-daemon runs list --session-id demo
docker compose -f docker/compose.yaml exec daemon kheish-daemon sessions get demo
docker compose -f docker/compose.yaml exec daemon kheish-daemon sessions events demo
If a run pauses instead of completing, inspect:
  • approvals with approvals list --session-id demo
  • tasks with tasks list demo
  • session state with sessions get demo
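Those three checks can be emitted as ready-to-run command lines from one small helper. triage_commands is a local sketch for illustration; prefix each printed line with your host binary or the docker compose exec form shown earlier.

```shell
# Print the triage commands for a paused run, one per line.
# triage_commands is a quickstart helper, not a kheish-daemon subcommand.
triage_commands() {
  session="$1"
  printf '%s\n' \
    "approvals list --session-id $session" \
    "tasks list $session" \
    "sessions get $session"
}

triage_commands demo
```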
If you used the Docker path and are done with the quickstart daemon:
docker compose -f docker/compose.yaml down
That stops the daemon but keeps the named state and workspace volumes. Use docker compose -f docker/compose.yaml down -v only when you intentionally want to delete the quickstart state. For the full container topology, guardrails, secret handling, and probe behavior, continue with Docker and containers.

After the first successful run

Once the basic flow works, the next useful steps are: