Tools are the agent's hands. Every file read, every shell command, every memory write goes through a tool. agent-zero's tool registry is small enough to fit in your head — one ToolDef dataclass, one global registry, one dispatch function.
## Built-in tools
a-mini ships 19 tools across four families. a-full has more (30+) but the same shape.
| Family | Tools |
|---|---|
| Core | Read, Write, Edit, Bash, Glob, Grep, WebFetch, WebSearch |
| Memory | MemorySave, MemoryDelete, MemorySearch, MemoryList |
| Sub-agent | Agent, SendMessage, CheckAgentResult, ListAgentTasks, ListAgentTypes |
| Skill | Skill, SkillList |
The full table with key parameters is in the a-mini README; this page focuses on registering your own.
## ToolDef
The registry's contract:
```python
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class ToolDef:
    name: str
    schema: Dict[str, Any]             # JSON schema sent to the model
    func: Callable[[dict, dict], str]  # (params, config) -> string result
    read_only: bool = False
    concurrent_safe: bool = False
```
- `name` — unique, used in tool calls.
- `schema` — JSON schema with `name`, `description`, `input_schema`. The model uses this to know what params to send.
- `func` — pure function: take params + config, return a string. No streaming, no async (yet — a-full has streaming variants).
- `read_only` — when true, the tool always runs without prompting in `auto` permission mode.
- `concurrent_safe` — when true, the runtime can parallelise this tool with other tools in the same turn.
## Registering a tool
`tool_registry.py` exposes `register_tool(tool_def)`. Call it from any module imported by the entry point.
```python
# my_tools.py
from tool_registry import register_tool, ToolDef

def slugify(params: dict, config: dict) -> str:
    text = params.get("text", "")
    return text.lower().strip().replace(" ", "-")

register_tool(ToolDef(
    name="Slugify",
    schema={
        "name": "Slugify",
        "description": "Convert a string to a URL-safe slug.",
        "input_schema": {
            "type": "object",
            "properties": {
                "text": {"type": "string", "description": "Text to slugify."}
            },
            "required": ["text"],
        },
    },
    func=slugify,
    read_only=True,
    concurrent_safe=True,
))
```
Make sure `my_tools.py` is imported before the agent loop starts. The simplest path: import it from `tools.py` (or whatever module your entry point loads).
## A more useful example: AppMint repository read
A tool that reads an AppMint record using the agent's environment credentials:
```python
import os
import urllib.request

from tool_registry import register_tool, ToolDef

def appmint_get(params: dict, config: dict) -> str:
    datatype = params["datatype"]
    record_id = params["id"]
    base = os.environ.get("APPMINT_API_URL", "")
    org = os.environ.get("APPMINT_ORG_ID", "")
    token = os.environ.get("APPMINT_TOKEN", "")
    if not (base and org and token):
        return "Error: APPMINT_API_URL / APPMINT_ORG_ID / APPMINT_TOKEN missing"
    req = urllib.request.Request(
        f"{base}/repository/get/{datatype}/{record_id}",
        headers={
            "orgid": org,
            "Authorization": f"Bearer {token}",
            "Accept": "application/json",
        },
    )
    try:
        with urllib.request.urlopen(req, timeout=30) as resp:
            return resp.read().decode("utf-8")
    except Exception as e:
        return f"Error: {e}"

register_tool(ToolDef(
    name="AppMintGet",
    schema={
        "name": "AppMintGet",
        "description": "Fetch a single AppMint record by datatype and id.",
        "input_schema": {
            "type": "object",
            "properties": {
                "datatype": {"type": "string"},
                "id": {"type": "string"},
            },
            "required": ["datatype", "id"],
        },
    },
    func=appmint_get,
    read_only=True,
))
```
Notes:

- `read_only=True` — this is a GET, no mutation, so it auto-approves under `auto` permission mode.
- The function reads credentials from the environment. session-manager injects these for every spawned agent (AI-INSTRUCTIONS § 4).
- The result is a string. Any structure (JSON, plain text) is fine — the model parses it.
## Output truncation
`execute_tool` truncates results longer than `max_output` chars (default 32k). The truncation keeps the first half and last quarter of the budget, with a marker in between:
```
<first 16k chars>
[... 80000 chars truncated ...]
<last 8k chars>
```
This protects the context window from a runaway tool. If your tool can return huge output (a big file, a long log), either truncate inside the tool or accept the registry's truncation.
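The page doesn't show the truncation code itself; a minimal sketch of the behaviour described above (the function name and exact arithmetic are assumptions):

```python
def truncate_output(text: str, max_output: int = 32_000) -> str:
    # Hypothetical sketch: keep the first half and the last quarter of the
    # budget, with a marker noting how many characters were dropped.
    if len(text) <= max_output:
        return text
    head = max_output // 2   # first 16k by default
    tail = max_output // 4   # last 8k by default
    omitted = len(text) - head - tail
    return f"{text[:head]}\n[... {omitted} chars truncated ...]\n{text[-tail:]}"
```

Keeping both ends rather than just the head matters in practice: error summaries and exit codes tend to sit at the end of long tool output.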
## Concurrent execution
If the runtime sees multiple tools in one turn and they're all `concurrent_safe=True`, it runs them in parallel. Otherwise it runs them sequentially. Most file-system tools are not safe for concurrent execution (race conditions on the same file); read-only tools generally are.
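That policy can be sketched as follows. A hypothetical dispatcher, not the runtime's actual code; the all-or-nothing rule is the one the paragraph above describes:

```python
from concurrent.futures import ThreadPoolExecutor

def run_tool_calls(calls, registry, config):
    """Run [(name, params), ...]; parallelise only if every tool is safe."""
    defs = [registry[name] for name, _ in calls]
    if len(calls) > 1 and all(d.concurrent_safe for d in defs):
        with ThreadPoolExecutor(max_workers=len(calls)) as pool:
            futures = [pool.submit(d.func, params, config)
                       for d, (_, params) in zip(defs, calls)]
            return [f.result() for f in futures]  # results stay in call order
    # Any unsafe tool in the turn forces the whole turn sequential.
    return [d.func(params, config) for d, (_, params) in zip(defs, calls)]
```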
## Error handling
`execute_tool` wraps the function call in a try/except — exceptions become `Error executing <name>: <message>`. Don't raise from inside your tool unless you want a traceback in the conversation; instead, return an error string. The model handles `Error: ...` results gracefully and decides whether to retry, ask the user, or give up.
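A sketch of that wrapper (the real one lives in `tool_registry.py`; the unknown-tool branch and exact signature here are assumptions):

```python
def execute_tool(name, params, registry, config):
    # Unknown tool names and raised exceptions both come back as strings,
    # so the model sees an "Error ..." result instead of the loop crashing.
    tool = registry.get(name)
    if tool is None:
        return f"Error: unknown tool {name}"
    try:
        return tool.func(params, config)
    except Exception as e:
        return f"Error executing {name}: {e}"
```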
## Tool schemas in the API call
The schema you provide is forwarded to the LLM as part of the tool definition. Get it right:
- `description` should tell the model when to use the tool, not just what it does.
- `input_schema.properties[*].description` should describe each param.
- Mark required fields in `required`.
- Use `enum` for fixed sets of values.
Bad description: "Fetches data."
Good description: "Fetch a single AppMint record by datatype and id. Use when you need the full document, not just metadata."
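Putting those rules together, a schema for a hypothetical `SetLogLevel` tool (invented here purely for illustration) might look like:

```python
SET_LOG_LEVEL_SCHEMA = {
    "name": "SetLogLevel",
    # Says *when* to use the tool, not just what it does.
    "description": (
        "Change the agent's log verbosity. Use when the user asks for "
        "more or less logging detail, not for reading existing logs."
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "level": {
                "type": "string",
                "enum": ["debug", "info", "warning", "error"],  # fixed set
                "description": "New log level to apply.",
            },
        },
        "required": ["level"],
    },
}
```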
## Removing a tool
```python
from tool_registry import _registry
del _registry["Slugify"]
```
There's no public `unregister_tool` because it's rarely needed. The `clear_registry()` helper exists for tests.
## Permissions and tool gating in skills
Skills can restrict which tools they invoke via `allowed-tools` in the frontmatter (see Skills). The skill's executor passes a filtered registry to the sub-agent — even if a tool is registered, the skill can't call it.
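The filtering step can be sketched as a one-liner (hypothetical helper; the actual executor code may differ):

```python
def filter_registry(registry: dict, allowed_tools: list) -> dict:
    # The sub-agent only ever sees tools named in the skill's
    # allowed-tools frontmatter; everything else is invisible to it.
    allowed = set(allowed_tools)
    return {name: tool for name, tool in registry.items() if name in allowed}
```

Filtering at the registry level (rather than checking at call time) means disallowed tools never even appear in the sub-agent's API tool list.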
## Where to read the source
- Built-in tools: `a-mini/tools.py` (Read/Write/Edit/Bash/Glob/Grep/WebFetch/WebSearch), `a-mini/memory/tools.py`, `a-mini/multi_agent/tools.py`, `a-mini/skill/tools.py`.
- Registry: `a-mini/tool_registry.py` — short, readable, the canonical reference.