Agent system

Each agent is split into two objects:

  • AgentDefinition (src/agent/definition.py) — built from a markdown file with YAML frontmatter, cached for the lifetime of the gateway
  • AgentRun (src/agent/run.py) — per-invocation state, holds turn-level data, tool call history, streaming buffers
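
As a rough illustration of the split, the two objects might be shaped like the following sketch (field names beyond those mentioned above are assumptions, not the actual classes):

```python
from dataclasses import dataclass, field

# Illustrative sketch of the definition/run split; field names are assumptions.
@dataclass(frozen=True)
class AgentDefinition:
    """Immutable config, parsed once from the agent's markdown file and cached."""
    id: str
    name: str
    model: str
    tools: tuple

@dataclass
class AgentRun:
    """Mutable per-invocation state, created fresh for each request."""
    definition: AgentDefinition
    run_id: str
    messages: list = field(default_factory=list)       # turn-level data
    tool_calls: list = field(default_factory=list)     # tool call history
    stream_buffer: list = field(default_factory=list)  # streaming buffer
```

Freezing the definition keeps the cached object safe to share across concurrent runs, while each AgentRun owns its own mutable state.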

Definition file

Agent files live in examples/config/agents/<id>.md for the bundled defaults, and ~/.youkore/config/agents/<id>.md for user-edited or user-created ones.

```yaml
---
type: main # main | sub
name: My Assistant
display_name: My Assistant
abbreviation: Asst
description: Personal assistant.
enabled: true
spawnable: true
tags: [assistant]
provider: claude # provider id; see src/llm/ for the full list
model: claude-sonnet-4-5 # model name as the provider expects it
max_iterations: 400
tools:
- {name: bash, core: true, confirm: true}
- {name: memory, core: true}
- {name: web_browse, core: true}
- {name: spawn_agents, core: true}
---
```

You are <agent persona>. <Tone, constraints, behaviour, ...>.

The body below the frontmatter is the agent's identity prompt. The tools list is a per-tool config: each entry can carry flags such as core: true (always included) and confirm: true (require user approval before the tool runs).
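
One plausible shape for deserializing those entries is a small spec type like this (a hypothetical sketch; the flag names come from the example above, the defaults are assumptions):

```python
from dataclasses import dataclass

# Hypothetical shape for one entry of the tools list; flag names come from
# the frontmatter example, default values are assumptions.
@dataclass(frozen=True)
class ToolSpec:
    name: str
    core: bool = False     # always included in the tool catalog
    confirm: bool = False  # require user approval before each run

def parse_tools(entries: list[dict]) -> list[ToolSpec]:
    """Turn the raw frontmatter list into typed specs."""
    return [ToolSpec(**entry) for entry in entries]
```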

The agent's id is derived from the filename (without .md).
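
A minimal loader for this layout might look like the sketch below. It is illustrative only: the id comes from the filename as described, but real frontmatter is YAML, and the naive key: value parse here exists just to keep the example dependency-free.

```python
from pathlib import Path

def parse_agent_file(path: Path) -> tuple[str, dict, str]:
    """Sketch: derive the id from the filename, split frontmatter from body.

    The frontmatter is YAML in practice; this naive key: value parse
    (which skips list items and strips inline comments) is a stand-in.
    """
    agent_id = path.stem  # filename without .md
    _, frontmatter, body = path.read_text().split("---\n", 2)
    meta = {}
    for line in frontmatter.splitlines():
        if ":" in line and not line.lstrip().startswith("-"):
            key, _, value = line.partition(":")
            meta[key.strip()] = value.split("#")[0].strip()
    return agent_id, meta, body.strip()
```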

PromptBuilder

src/agent/prompt_builder.py composes the system prompt from these base sections, in order:

  1. identity — the markdown body of the agent file
  2. environment — date, OS, locale, gateway info
  3. system_instructions — global rules from config
  4. ai_team — other agents the user has, with their domains
  5. channels — currently connected integrations
  6. tool_usage — usage rules for tools
  7. tool_catalog — schemas of tools the agent has whitelisted
  8. skill_catalog — skills the agent can load_skill

Per-run dynamic sections (added on each turn): session, project, pending_tasks, active_skills.
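
The ordered composition could be sketched as follows (a hypothetical rendering; the section names come from the lists above, but the join format and heading style are assumptions):

```python
# Section order from the lists above; the "## name" join format is an assumption.
BASE_SECTIONS = [
    "identity", "environment", "system_instructions", "ai_team",
    "channels", "tool_usage", "tool_catalog", "skill_catalog",
]
DYNAMIC_SECTIONS = ["session", "project", "pending_tasks", "active_skills"]

def build_system_prompt(sections: dict[str, str]) -> str:
    """Concatenate known sections in a fixed order, skipping empty ones."""
    parts = []
    for name in BASE_SECTIONS + DYNAMIC_SECTIONS:
        content = sections.get(name)
        if content:
            parts.append(f"## {name}\n{content}")
    return "\n\n".join(parts)
```

Keeping the order in one list makes the prompt layout deterministic across turns, with the dynamic sections simply appended after the base ones.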

Run lifecycle

```
user input ──> AgentRun.execute()
├─ run_scope() sets ContextVars (agent_id, run_id, ...)
├─ build prompt (PromptBuilder)
├─ call LLM provider
├─ if tool_call: dispatch via ToolManager → result back into context
└─ stream tokens to client (WebSocket)
```
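
The tool-dispatch loop in the diagram could be sketched like this (a hypothetical implementation; the llm, tool_manager, and reply interfaces are assumed, not the actual APIs):

```python
async def execute(run, llm, tool_manager, send_token) -> None:
    """Hypothetical agent loop: call the model, dispatch tool calls back
    into the conversation until it produces a final answer, then stream."""
    for _ in range(run.max_iterations):
        reply = await llm.complete(run.messages)
        if reply.tool_call is not None:
            # Tool round-trip: run the tool, feed the result back as context.
            result = await tool_manager.dispatch(reply.tool_call)
            run.messages.append({"role": "tool", "content": result})
            continue
        for token in reply.tokens:
            await send_token(token)  # e.g. over the WebSocket
        return
```

The max_iterations frontmatter field would cap this loop, bounding how many tool round-trips a single run may make.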

run_scope (in src/agent/context.py) is an async context manager that binds per-run identifiers into the structured logger, so every log line carries agent_id and run_id automatically.
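
The idea behind run_scope can be reimplemented in a few lines with the standard library (a sketch, not the actual code; the variable names and the log_context helper are assumptions):

```python
import contextvars
from contextlib import asynccontextmanager

# Per-run identifiers; ContextVars propagate through awaits, so every
# coroutine inside the scope sees the same values without explicit plumbing.
agent_id_var = contextvars.ContextVar("agent_id", default=None)
run_id_var = contextvars.ContextVar("run_id", default=None)

@asynccontextmanager
async def run_scope(agent_id: str, run_id: str):
    """Bind per-run ids for the duration of the scope, then restore."""
    tokens = [agent_id_var.set(agent_id), run_id_var.set(run_id)]
    try:
        yield
    finally:
        run_id_var.reset(tokens[1])
        agent_id_var.reset(tokens[0])

def log_context() -> dict:
    """What a structured logger would attach to every log line."""
    return {"agent_id": agent_id_var.get(), "run_id": run_id_var.get()}
```

Resetting via the tokens (rather than setting back to None) keeps nested scopes well-behaved: an inner scope restores whatever the outer scope had bound.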