# Tools
A tool is a callable capability that agents use to perform actions. Tools are first-class resources: created, stored, and referenced by ID.
## Tool protocols

| Protocol | Description |
|---|---|
| `native` | Built-in or code-based. Implemented in the runtime (e.g. `std-weather`, `std-fetch-url`). |
| `http` | Calls an external URL. Config: `{ url, method }`. |
| `mcp` | Connects to an MCP (Model Context Protocol) server. |
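The three protocols can be sketched as configuration shapes. A minimal sketch in TypeScript, assuming only the fields named in the table above (`name`, `protocol`, `config` with `url`/`method`); the `serverUrl` field for MCP is an illustrative assumption, not a documented key:

```typescript
// Sketch of tool definitions per protocol. Only name/protocol and the
// http config fields { url, method } come from the docs; the rest is
// illustrative.
type ToolProtocol = "native" | "http" | "mcp";

interface ToolDefinition {
  name: string;
  protocol: ToolProtocol;
  config?: Record<string, unknown>;
}

const httpTool: ToolDefinition = {
  name: "get-users",
  protocol: "http",
  config: { url: "https://api.example.com/users", method: "GET" },
};

const mcpTool: ToolDefinition = {
  name: "filesystem",
  protocol: "mcp",
  config: { serverUrl: "http://localhost:3001" }, // hypothetical field name
};
```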
## Tool lifecycle

1. Create: `create_tool` with `name`, `protocol`, and optional `config` / `inputSchema`.
2. List: `list_tools` returns all tools with `id`, `name`, `protocol`.
3. Get: `get_tool(id)` returns full details (`config`, `inputSchema`, `outputSchema`).
4. Update: `update_tool` to change name, config, or schemas. Standard tools (`std-*`) can only update `inputSchema` / `outputSchema`.
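The update rule for standard tools can be made concrete with a small sketch; the record shapes mirror the fields named in the lifecycle steps, and the helper function is illustrative, not part of the runtime API:

```typescript
// Shapes mirroring the lifecycle steps above (illustrative).
interface ToolSummary { id: string; name: string; protocol: string }
interface Tool extends ToolSummary {
  config?: Record<string, unknown>;
  inputSchema?: object;
  outputSchema?: object;
}

// Hypothetical guard reflecting the std-* rule: standard tools only
// accept schema updates; other tools also accept name and config.
function allowedUpdateFields(toolId: string): string[] {
  return toolId.startsWith("std-")
    ? ["inputSchema", "outputSchema"]
    : ["name", "config", "inputSchema", "outputSchema"];
}
```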
## Tool classes

Tools fall into two classes by when and how the agent sees them:

| Class | When it applies | How the agent gets it | Example |
|---|---|---|---|
| Callable tools | Every request | The LLM receives tool definitions and decides whether to call one; when called, the tool runs and returns a result the LLM then sees. | `std-fetch-url`, `std-execute-code`, `get_workflow_context` (when shared context is on) |
| Context (prompt injection) | Workflow runs | Context is injected into the prompt (e.g. recent turns, summary, round) before the LLM chooses any tool. The agent does not "call" this; it is part of the input. | In workflows: "Recent turns" and partner output in the turn input. When a run uses no shared output (e.g. red-vs-blue), each agent's own prior turns are injected this way and the callable `get_workflow_context` is not offered; context is prompt-only. |
Use callable tools for actions (fetch, run code, ask the user). Use context when the agent should have information (e.g. its own history) in scope before deciding what to do, without spending a tool call.
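The distinction can be illustrated with a sketch of a single turn request; the shape is an illustrative assumption, not the platform's wire format:

```typescript
// Sketch contrasting the two classes. Context arrives inside the
// prompt; callable tools arrive as definitions the LLM may invoke.
const turnRequest = {
  // Context class: injected before the LLM chooses anything.
  prompt: [
    "Recent turns:",
    "(agent's own prior output...)",
    "Round 3. Your move:",
  ].join("\n"),
  // Callable class: the LLM decides per request whether to call one.
  tools: [
    { name: "std-fetch-url", description: "Fetch URL content" },
    { name: "std-execute-code", description: "Run code in a sandbox" },
  ],
};
```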
## How agents use tools

- Node agents reference tools in two ways:
  - Decision layer (`toolIds`): the LLM receives tool definitions and decides per request whether to call a tool or respond. The agent can only use tools in `toolIds` (agent-level or per decision node).
  - Tool nodes: unconditional tool calls in the graph (`parameters.toolId`).
- Code agents can call tools via the runtime API.
- Use IDs from `list_tools` when creating or updating agents.
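The two reference points can be sketched together; `toolIds` and `parameters.toolId` come from the docs above, while the surrounding agent/node shape is an illustrative assumption:

```typescript
// Sketch of the two ways a node agent references tools.
interface ToolNode {
  type: "tool";
  parameters: { toolId: string }; // unconditional call in the graph
}
interface DecisionNode {
  type: "decision";
  toolIds?: string[]; // optional per-node override of the agent-level list
}
interface NodeAgent {
  toolIds: string[]; // decision layer: LLM may call any of these
  nodes: Array<ToolNode | DecisionNode>;
}

const agent: NodeAgent = {
  toolIds: ["std-fetch-url", "std-weather"],
  nodes: [
    { type: "decision" }, // LLM decides; falls back to agent-level toolIds
    { type: "tool", parameters: { toolId: "std-fetch-url" } }, // always runs
  ],
};
```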
## Standard tools (built-in)

Agentron ships with standard tools such as:

- `std-weather`: weather data
- `std-fetch-url`: fetch URL content
- Others, depending on installation
## Import from OpenAPI

Create many HTTP tools at once from an OpenAPI 3.x (or Swagger) spec:

- In chat: tell the assistant "Add tools from this API: <URL>" or paste the spec. It uses `create_tools_from_openapi` with `specUrl` or `spec` to create one tool per operation. Each tool gets a stable id (e.g. `get_users`, `post_users`), the correct URL and method, and an `inputSchema` derived from parameters and request body.
- Attach to agents: after import, use `list_tools` to see the new ids, then `update_agent` with `toolIds`.
HTTP tools from OpenAPI support path parameters (e.g. `/users/{id}`), query parameters (GET), and request bodies (POST/PUT/PATCH). The runtime substitutes placeholders and sends query/body as appropriate.
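The substitution behavior can be illustrated with a small sketch. This is not the runtime's actual implementation, only an assumption about how it plausibly behaves: path placeholders are filled first, and leftover arguments become the query string (GET) or JSON body (POST/PUT/PATCH):

```typescript
// Illustrative sketch of placeholder substitution and query/body
// building for an OpenAPI-derived HTTP tool.
function buildRequest(
  urlTemplate: string,
  method: string,
  args: Record<string, string>
): { url: string; body?: string } {
  const used = new Set<string>();
  // Substitute /users/{id}-style path placeholders.
  let url = urlTemplate.replace(/\{(\w+)\}/g, (_, key) => {
    used.add(key);
    return encodeURIComponent(args[key] ?? "");
  });
  const rest = Object.entries(args).filter(([k]) => !used.has(k));
  if (method === "GET") {
    // Remaining arguments become the query string.
    const qs = new URLSearchParams(rest).toString();
    if (qs) url += `?${qs}`;
    return { url };
  }
  // POST/PUT/PATCH: remaining arguments become the JSON body.
  return { url, body: JSON.stringify(Object.fromEntries(rest)) };
}

const req = buildRequest("https://api.example.com/users/{id}", "GET", {
  id: "42",
  expand: "profile",
});
// req.url → "https://api.example.com/users/42?expand=profile"
```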
## Custom code tools

Create tools that run custom code (JavaScript, Python, or TypeScript):

- Create: in chat, ask the assistant to "create a tool that does X". It uses `create_code_tool` with `name`, `language`, and `source`. The tool is created with a default runner sandbox; attach it to agents via `update_agent` with `toolIds` (the returned `toolId` is e.g. `fn-<uuid>`).
- Improve: to change the code, call `get_tool(id)` to read `config.functionId`, then `get_custom_function(functionId)` to read the source, then `update_custom_function(id, { source: "..." })`. Optionally use `update_tool` for the name or `inputSchema`.

Code tools run in sandboxes (Podman/Docker). Use `list_custom_functions` to list all custom functions (id, name, language, description; source is not included in the listing).
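What a custom code tool's `source` might look like is sketched below in TypeScript. The handler name and its input/output contract are assumptions about the runner sandbox, not a documented signature:

```typescript
// Sketch of a custom code tool's source: a slugify tool.
// The handler(input) -> result contract is an assumed convention.
interface SlugifyInput { text: string }

function handler(input: SlugifyInput): { slug: string } {
  // Lowercase, replace runs of non-alphanumerics with dashes, trim dashes.
  const slug = input.text
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
  return { slug };
}
```

A tool like this would get an `inputSchema` describing `{ text: string }`, so the LLM knows what arguments to pass when it calls the tool.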
## OpenClaw gateway

Steer a local OpenClaw instance (personal AI assistant gateway) from chat:

- `send_to_openclaw`: send a message or command to OpenClaw.
- `openclaw_history`: get recent chat history.
- `openclaw_abort`: abort the current run.

Running OpenClaw in a container: use the `alpine/openclaw` image (Docker Hub). Create a sandbox with `alpine/openclaw:latest`, expose port 18789 with `bind_sandbox_port`, then set `OPENCLAW_GATEWAY_URL` or pass `gatewayUrl` in the tool calls. The flow is container-engine agnostic (Docker or Podman). See OpenClaw integration for full details.
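The container flow above can be sketched as an ordered plan of tool calls. `bind_sandbox_port`, `send_to_openclaw`, `gatewayUrl`, and port 18789 come from this page; the `create_sandbox` tool name and all argument field names are assumptions for illustration:

```typescript
// Illustrative sketch of the OpenClaw container setup sequence.
const setupPlan: Array<{ tool: string; args: Record<string, unknown> }> = [
  // 1. Create a sandbox from the OpenClaw image (Docker or Podman).
  { tool: "create_sandbox", args: { image: "alpine/openclaw:latest" } }, // tool name assumed
  // 2. Expose the gateway port.
  { tool: "bind_sandbox_port", args: { port: 18789 } },
  // 3. Talk to the gateway, passing gatewayUrl explicitly
  //    (alternatively, set OPENCLAW_GATEWAY_URL instead).
  {
    tool: "send_to_openclaw",
    args: { gatewayUrl: "http://localhost:18789", message: "hello" },
  },
];
```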
## Suggested user actions

| User wants… | Action |
|---|---|
| "Add a tool to my agent" | `list_tools`, then `update_agent` with `toolIds` including the chosen IDs |
| "Create a custom HTTP tool" | `create_tool` with `protocol: "http"` and `config: { url, method }` |
| "Create a tool that runs code" | `create_code_tool` with `name`, `language`, `source`; then `update_agent` with `toolIds` |
| "Add tools from an API spec" / "Import OpenAPI" | `create_tools_from_openapi` with `specUrl` or `spec`, then `update_agent` with `toolIds` |
| "What tools are available?" | `list_tools` |
| "Fix a tool" | `get_tool(id)` first; then `update_tool`. For code tools, use `get_custom_function` and `update_custom_function` to change the source |