# Semantic learning and procedural skills
Kheish has a separate daemon-owned learning plane in addition to normal session state and recovered run memory. That learning plane is intentionally split into two products:

- durable semantic learnings
- reviewed procedural learnings that can be promoted into daemon-owned reusable skills
## How this differs from recovered run memory
Recovered run memory is:

- derived from terminal runs
- compact and episodic
- prompt-bounded
- best-effort recovery data

Durable learning is different:

- it is stored as first-class records
- it is scoped to `session`, `persona`, `project`, or `workspace`
- it survives restart as daemon-owned state
- it supports review and lifecycle transitions such as publish, reject, revoke, and supersede
- it can be captured, judged, and published automatically through daemon policy
## Learning kinds
Kheish currently stores five durable learning kinds:

- `run_summary`
- `fact`
- `preference`
- `decision`
- `procedure`
### Prompt-eligible kinds
The runtime can inject these kinds back into later prompts when they are active, in scope, and prompt-visible:

- `fact`
- `preference`
- `decision`
### Stored but not auto-injected
These kinds are durably stored but excluded from the normal `learned_context` prompt section:

- `run_summary` is review material rather than standing memory
- `procedure` is durable procedure state, not free-form prompt memory
## Scopes
Every learning belongs to one durable scope:

- `session`: the current session
- `persona`: the bound persona, when one exists
- `project`: linked projects
- `workspace`: the daemon-wide workspace scope

For `workspace`, the stable scope id is always `default`.
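The scope rules above can be sketched as a small resolver. This is a minimal illustration, not Kheish's API: the function name and signature are assumptions.

```python
from typing import Optional

def visible_scope_ids(session_id: str,
                      persona_id: Optional[str],
                      project_ids: list[str]) -> list[tuple[str, str]]:
    """Hypothetical sketch: return the (scope_kind, scope_id) pairs
    a session could see, per the scope rules above."""
    scopes = [("session", session_id)]
    if persona_id is not None:               # persona scope exists only when bound
        scopes.append(("persona", persona_id))
    for pid in project_ids:                  # one scope per linked project
        scopes.append(("project", pid))
    scopes.append(("workspace", "default"))  # workspace scope id is always "default"
    return scopes

print(visible_scope_ids("s-1", None, ["p-9"]))
```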
## Retrieval model
Semantic retrieval is daemon-owned and bounded. On input submission, the daemon builds one `learned_context` bundle from visible scope records and attaches it as derived input state for the current run only.
Important prompt-visibility rules:

- `procedure` and `run_summary` stay out of `learned_context`
- revoked and superseded records are excluded
- expired records are excluded
- `publish_tier=provisional` is excluded
- `verification_status=failed` is excluded
- automatically published records are prompt-visible only when `verification_status=verified`
- `policy_decision=escalated` is excluded
- manually published active records remain prompt-visible without that automatic-only gate
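These rules reduce to a single predicate. The sketch below is illustrative, not Kheish source; it assumes flat record dicts using the metadata field names from this document, and it models revoked, superseded, and expired records as `status != "active"`.

```python
PROMPT_ELIGIBLE_KINDS = {"fact", "preference", "decision"}

def prompt_visible(rec: dict) -> bool:
    """Sketch of the learned_context visibility rules (illustrative only)."""
    if rec["kind"] not in PROMPT_ELIGIBLE_KINDS:   # procedure, run_summary stay out
        return False
    if rec["status"] != "active":                  # revoked/superseded/expired
        return False
    if rec["publish_tier"] != "active":            # provisional excluded
        return False
    if rec["verification_status"] == "failed":
        return False
    if rec["policy_decision"] == "escalated":
        return False
    if rec["policy_decision"] == "automatic":      # automatic-only verification gate
        return rec["verification_status"] == "verified"
    return True                                    # manual active records pass

rec = {"kind": "fact", "status": "active", "publish_tier": "active",
       "verification_status": "unverified", "policy_decision": "manual"}
print(prompt_visible(rec))  # True: manually published active fact
```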
## Inspecting the effective projection
The durable store and the prompt projection are related but not identical.

- `GET /v1/learnings` shows stored learning records
- `GET /v1/sessions/{session_id}/memory-context` shows the effective eligible projection for one session
- `GET /v1/sessions/{session_id}/memory-search` shows the session-scoped memory browse/search view

The memory-context projection reports:

- the effective capability scope
- the visible learning scopes
- the current `learned_context`
- the current `recovered_memory`
- the current `visible_skills`
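Because the store and the projection can legitimately disagree, a quick way to see which records were filtered out is to diff the two views client-side. This sketch assumes you have already fetched both payloads as JSON; the shapes are illustrative.

```python
def missing_from_projection(stored: list[dict], projected_ids: set[str]) -> list[str]:
    """Given records from GET /v1/learnings and the record-id set reported by
    GET /v1/sessions/{session_id}/memory-context, list the stored ids that
    did not survive the projection.  Illustrative shapes, not Kheish's schema."""
    return [r["id"] for r in stored if r["id"] not in projected_ids]

stored = [{"id": "l-1"}, {"id": "l-2"}, {"id": "l-3"}]
print(missing_from_projection(stored, {"l-2"}))  # l-1 and l-3 were filtered out
```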
## Review workflow
The learning workflow remains explicit even when automation is enabled.

- Create or capture a learning candidate.
- Review it automatically or manually.
- Publish it into the durable learning store, or reject it.
- Later revoke or supersede it if the knowledge changes.
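The steps above can be sketched as a small transition table. The transition names come from this document; the intermediate state names (`candidate`, `rejected`) are assumptions for illustration.

```python
# Hypothetical lifecycle table, not Kheish's implementation.
TRANSITIONS = {
    ("candidate", "publish"):  "active",
    ("candidate", "reject"):   "rejected",
    ("active",    "revoke"):   "revoked",
    ("active",    "supersede"): "superseded",
}

def apply(state: str, action: str) -> str:
    nxt = TRANSITIONS.get((state, action))
    if nxt is None:
        raise ValueError(f"illegal transition: {action} from {state}")
    return nxt

print(apply(apply("candidate", "publish"), "revoke"))  # candidate -> active -> revoked
```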
## Candidate origins
Every candidate retains one ingress origin:

- `api`: created explicitly through the public API or CLI
- `daemon`: created by a daemon-owned workflow
## Automatic capture
Kheish has two daemon-owned automatic capture paths.

### Run-summary capture
The daemon can create `run_summary` candidates automatically from terminal runs.
Important constraints:

- this is controlled by runtime policy
- `run_summary` remains review material even after publication
- public attempts to create `run_summary` candidates are rejected
- `run_summary` stays excluded from `learned_context`
### Semantic capture
The daemon can also extract conservative semantic candidates from completed runs. Current behavior:

- this path is controlled by `capture.semantic_candidates`
- it only proposes `fact`, `preference`, and `decision`
- extracted candidates are daemon-origin and session-scoped
- extraction uses a model-backed structured prompt when enabled
- there is a heuristic fallback for explicit `Preference:`, `Fact:`, and `Decision:` request labels
- the daemon attaches daemon-owned evidence references to extracted candidates
- each run-memory record carries a semantic-capture receipt
- unfinished capture work is replayed on boot
- duplicate extraction is suppressed for runs already captured or already in flight
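The heuristic fallback can be illustrated with a simple label scanner. The real extractor is model-backed when enabled; this sketch only demonstrates the explicit-label idea, and the output shape is an assumption.

```python
import re

# Map the explicit request labels to the learning kinds they propose.
LABELS = {"Preference": "preference", "Fact": "fact", "Decision": "decision"}
LABEL_RE = re.compile(r"^(Preference|Fact|Decision):\s*(.+)$", re.MULTILINE)

def heuristic_candidates(request_text: str) -> list[dict]:
    """Sketch of the heuristic label fallback (illustrative only)."""
    return [{"kind": LABELS[m.group(1)], "content": m.group(2).strip()}
            for m in LABEL_RE.finditer(request_text)]

text = ("Fact: the staging DB is read-only.\n"
        "Please deploy.\n"
        "Preference: use rebase merges.")
print(heuristic_candidates(text))
```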
## Automation modes
The current runtime policy has three modes:

- `manual_only`
- `shadow`
- `enabled`

The default mode is `shadow`.

Mode behavior:

- `manual_only`: the worker does not review candidates automatically
- `shadow`: the worker records its review but leaves the candidate pending
- `enabled`: the worker may reject, escalate, or publish automatically
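The difference between the three modes can be sketched in a few lines. Field names here (`state`, `automation_review`) are illustrative assumptions.

```python
def apply_review(mode: str, candidate: dict, review: dict) -> dict:
    """Sketch of the three automation modes (illustrative field names)."""
    if mode == "manual_only":
        return candidate                     # no automatic review at all
    out = dict(candidate, automation_review=review)
    if mode == "shadow":
        out["state"] = "pending"             # review recorded, candidate left pending
    elif mode == "enabled":
        out["state"] = review["action"]      # reject / escalate / publish
    return out

c = {"id": "l-1", "state": "pending"}
print(apply_review("shadow", c, {"action": "publish_provisional"})["state"])
```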
## Rule matching
Publication rules are evaluated in order. The first matching rule wins. If nothing matches, the daemon falls back to `publication.default_action`.
Rules can currently match on:

- `scope_kind`
- `scope_id`
- `kind`
- `sensitivity`
- `min_confidence`
- `require_evidence`
- `require_source_run`
- `require_source_session`

`require_evidence`, `require_source_run`, and `require_source_session` are checked against daemon-owned records: candidates can declare their own `source` and `evidence_refs`, but daemon automation does not trust those fields for rule matching.
There are also two important publication controls:

- `allow_api_origin_active_publication` defaults to false, so API-origin candidates are downgraded from automatic `publish_active` to `publish_provisional` unless the operator opts in.
- `quarantined_rule_names` lets operators disable named rules without deleting them from the policy.
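Putting first-match evaluation and the two controls together, a simplified evaluator might look like the following. It uses equality-only matching for brevity; real rules also carry thresholds such as `min_confidence`, and the policy/candidate shapes here are assumptions.

```python
def decide(candidate: dict, policy: dict) -> str:
    """Sketch of first-match rule evaluation plus the publication controls."""
    action = policy["default_action"]
    for rule in policy["rules"]:
        if rule["name"] in policy.get("quarantined_rule_names", []):
            continue                                  # disabled, not deleted
        if all(candidate.get(k) == v for k, v in rule["match"].items()):
            action = rule["action"]
            break                                     # first matching rule wins
    if (action == "publish_active"
            and candidate["origin"] == "api"
            and not policy.get("allow_api_origin_active_publication", False)):
        action = "publish_provisional"                # API-origin downgrade
    return action

policy = {
    "default_action": "manual_review",
    "rules": [{"name": "facts", "match": {"kind": "fact"},
               "action": "publish_active"}],
}
print(decide({"kind": "fact", "origin": "api"}, policy))  # downgraded
```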
## Model-backed judge
The daemon now has a configurable model-backed judge. The judge is:

- daemon-owned
- optional
- invoked only after deterministic policy evaluation
- bounded by its own optional model override and timeout
- clamped so it cannot expand automation beyond the policy envelope

Additional guarantees:

- it can only choose from actions allowed by the policy baseline
- it never writes learnings directly
- in `enabled` mode, judge failures fail closed to `manual_review`
- the final judge review is retained in `automation_review.judge`
## Automatic active versus provisional
Automatic publication can currently produce:

- `publish_provisional`
- `publish_active`

`publish_active` remains guarded.
Current constraints:

- `publication.default_action` cannot be `publish_active`
- `publish_active` rules must declare an explicit `kind`
- `procedure` learnings cannot be auto-published with the active tier
- when policy and the judge still select `publish_active`, the daemon verifies daemon-owned support for the candidate content
- for daemon-origin semantic `fact`, `preference`, and `decision` candidates, that verification can use the persisted run-memory record of the source run
- otherwise the daemon checks source-run debug artifacts referenced by the candidate evidence
- if verification fails, the daemon downgrades the result to `publish_provisional`
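The verification gate before an automatic `publish_active` can be sketched as a two-stage lookup. The store shapes, field names, and the substring-containment check below are all assumptions made for illustration.

```python
def verify_support(candidate: dict, run_memory: dict, debug_artifacts: dict) -> bool:
    """Sketch of where the daemon looks for daemon-owned support."""
    if (candidate["origin"] == "daemon"
            and candidate["kind"] in {"fact", "preference", "decision"}):
        record = run_memory.get(candidate["source_run"])   # persisted run memory
        if record is not None:
            return candidate["content"] in record["text"]
    # Fallback: source-run debug artifacts referenced by candidate evidence.
    artifacts = (debug_artifacts.get(ref) for ref in candidate["evidence_refs"])
    return any(a is not None and candidate["content"] in a for a in artifacts)

def final_action(candidate: dict, run_memory: dict, debug_artifacts: dict) -> str:
    """On verification failure, downgrade publish_active to publish_provisional."""
    return ("publish_active"
            if verify_support(candidate, run_memory, debug_artifacts)
            else "publish_provisional")

rm = {"r-1": {"text": "user said the staging DB is read-only"}}
cand = {"origin": "daemon", "kind": "fact", "source_run": "r-1",
        "content": "staging DB is read-only", "evidence_refs": []}
print(final_action(cand, rm, {}))
```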
## Publication tiers and verification
Published learnings carry more than one on/off flag. Current metadata includes:

- `status`
- `publish_tier`
- `verification_status`
- `policy_decision`
- `policy_actor`

Prompt-injection rules over that metadata:

- only records with `status=active` and `publish_tier=active` are eligible for `learned_context`
- `verification_status=failed` excludes a record from prompt injection
- `policy_decision=automatic` requires `verification_status=verified` before prompt injection
- `policy_decision=manual` active records can still be prompt-visible
- `policy_decision=escalated` stays out of prompt retrieval
## Procedural learning promotion
Reviewed `procedure` learnings can optionally be promoted into daemon-owned reusable skills.
Promotion is a separate step after publication. A stored procedure does not become a skill automatically.
### Current promotion constraints
The current production path is intentionally narrow:

- only `procedure` learnings can be promoted
- the source learning must be `active`
- the source learning must use `publish_tier=active`
- the source learning must use `workspace` scope
- promoted procedure skills must use `fork`
- promoted procedure skills must use the `verification` child-agent profile
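The source-learning preconditions reduce to a simple gate. Field names follow the metadata used earlier in this document; the function itself is illustrative. Note the last two constraints (`fork`, the `verification` profile) apply to the resulting skill's configuration rather than to the source learning, so they are out of scope for this check.

```python
def promotable(learning: dict) -> bool:
    """Sketch of the promotion preconditions on the source learning."""
    return (learning["kind"] == "procedure"
            and learning["status"] == "active"
            and learning["publish_tier"] == "active"
            and learning["scope_kind"] == "workspace")

ok = {"kind": "procedure", "status": "active",
      "publish_tier": "active", "scope_kind": "workspace"}
print(promotable(ok))  # True
```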
### Promotion rollout states
Promoted procedural skills keep their own rollout lifecycle:

- `draft`
- `verified`
- `canary`
- `active`
- `revoked`

`active` promoted skills are mounted into:

- `GET /v1/skills`
- `GET /v1/sessions/{session_id}/skills`
- runtime `list_skills`
- runtime `use_skill` discovery
## Runtime behavior of promoted procedures
Promoted procedure skills are designed to execute through a child-agent path. They can carry normal skill runtime metadata such as:

- `allowed_tools`
- `blocked_tools`
- `agent_profile`
- `provider`
- `model`
- `fallback_model`

Forked executions run under a dedicated worktree path:

`.kheish-procedural-worktrees/<safe-skill-name>/<tool-call-id>`

The child-agent spawn payload can include:

- `launch_run_id`
- the child snapshot
- canonical `final_output`

Consumers should prefer `spawn.final_output` over the raw `last_assistant_message` when it exists.
## Revocation and restart behavior
Semantic learnings and promoted procedure skills stay linked:

- revoking or superseding a promoted source learning also revokes the linked promoted skill
- promoted-skill records remain durable for audit
- only active promoted skills stay mounted in the visible catalog

On restart, the daemon:

- reloads persisted promoted-skill records
- repairs missing daemon-owned promoted skill files when needed
- refuses to silently rebind a promoted record to a foreign skill root
- replays pending semantic capture work from run-memory receipts
## Operational summary
Today, the intended production usage is:

- use durable learnings for stable facts, preferences, and decisions
- use `memory-context` to inspect what one session is currently eligible to receive before final prompt packing
- use `memory-search` to browse or query visible learnings, recovered runs, and skills
- use reviewed `procedure` learnings as candidates for reusable automation
- promote only workspace-scoped procedures that are safe to expose through the shared daemon skill catalog
