LLM scripts are YAML configuration files that define how to interact with large language models (LLMs) and what tools LLMs can use. Treat them like normal scripts. In particular, DO NOT run LLM scripts from unknown or untrusted sources: a script can easily download and run malicious code on your machine, or submit your secrets to some website.
For real examples, see generic-assistant.yaml and the files in the examples directory.
Basic Structure
```yaml
# Environment variables configuration (optional)
env:
  <var_name>:
    description: <description>  # Optional
    persistent: <boolean>       # Optional, default: false
    secret: <boolean>           # Optional, default: false

# MCP servers configuration (optional)
mcp:
  <server_name>:
    transport: "stdio" | "streamable_http"
    command: <command>          # For stdio
    args: [<arg1>, <arg2>]      # For stdio
    env:                        # Optional, environment variables for the server process
      <key>: <value>
    url: <url>                  # For streamable_http
    headers:                    # Optional, for streamable_http
      <key>: <value>
    auto_import_scope: none | shared | chat

# Shared configuration
shared:
  data:  # Optional shared variables accessible to all tools via ${key}
    <key>: <value_expression>
    # Each key becomes a top-level variable in the evaluation context
  tools:  # Optional shared tools that can be referenced from chat/cli/other tools
    # Single tool import
    - import_tool: <import_path>
      name: <tool_name>  # Optional
      # ... other tool parameters
    # Mass import from toolkit or MCP server
    - import_tools: <toolkit_or_mcp_server>
      prefix: <prefix>                           # Mandatory
      filter: [<pattern>, ...]                   # Optional
      ui_hints_for: [<pattern>, ...]             # Optional
      ui_hints_args: [<arg>, ...]                # Optional
      require_confirmation_for: [<pattern>, ...] # Optional
    # Custom tool definition
    - name: <tool_name>
      description: <description>  # Optional
      input:  # Required for custom tools
        - name: <param_name>
          description: <description>
          type: <type>
          default: <default_value>  # Optional
      tools:  # Optional, local tools available only within this custom tool
        - import_tool: <import_path>
        # ... or other tool definition statements
      confidential: <boolean>     # Optional
      return_direct: <boolean>    # Optional
      ui_hint: <template_string>  # Optional
      do:  # Required for custom tools
        <statement(s)>  # List of statements for more complex flows

# For interactive chat mode
chat:
  model_ref: <model_name>  # Optional, references model by name (fast/default/thinking). If not defined, uses "default".
  tools:  # Optional, tools available to the chat LLM. NOTE: shared tools are available unless listed here!
    - import_tool: <import_path>
    # ... or other tool definition statements
  user_banner: |  # Optional, markdown-formatted text displayed at the beginning of chat
    <banner text>
  system_message: |
    <system prompt>
  default_prompt: |  # Optional
    <default prompt>

# For command-line interface
cli:
  process_input: one_by_one | all_as_list  # How to process input arguments
  json_output: <boolean>  # Optional, default: false - output results as JSON
  tools:  # Optional, tools available for use in `do` section
    - import_tool: <import_path>
    # ... or other tool definition statements
  do:  # Required
    <statement(s)>  # List of statements for processing inputs
```
Environment Variables Section
The env section allows you to declare environment variables required by your script. Variables can be marked as persistent (saved to .env file) or transient (prompted each time the script loads).
Structure:
```yaml
env:
  VAR_NAME:
    description: "Description of what this variable is used for"
    persistent: true  # or false (default)
    secret: true      # or false (default)
```
Parameters:
- `description`: (Optional) Human-readable description shown when prompting the user
- `persistent`: (Optional, default: `false`)
  - `true`: Value is saved to the `.env` file and persists across sessions
  - `false`: Value is prompted each time the script is loaded (transient)
- `secret`: (Optional, default: `false`)
  - `true`: Input is hidden with asterisks when prompting (requires `prompt_toolkit` and a TTY)
  - `false`: Input is shown normally
  - Use for passwords, API keys, tokens, and other sensitive values
Behavior:
- Environment variables are inherited from the parent process and loaded from `.env` in the current directory (if it exists) or `~/.config/llm-workers/.env` (default)
- Variable statements are processed at startup
- If a variable is already set in the environment, no prompting occurs
- Persistent variables are saved back to the `.env` file used at startup
Example:
```yaml
env:
  API_KEY:
    description: "API key for external service"
    persistent: true
    secret: true   # Input will be hidden
  SESSION_TOKEN:
    description: "Temporary session token for this run"
    persistent: false
    secret: true   # Hidden but not saved
  DATABASE_URL:
    description: "PostgreSQL connection string"
    persistent: true
    secret: false  # Not sensitive, can be shown
```
When to use secret:
- ✓ API keys, authentication tokens
- ✓ Passwords, private keys
- ✓ OAuth secrets, session tokens
- ✗ Non-sensitive config like URLs, paths, usernames
MCP Servers Section
The mcp section allows you to connect to external MCP (Model Context Protocol) servers and use their tools alongside built-in tools. Tools from MCP servers are automatically prefixed with the server name to avoid conflicts.
Structure:
```yaml
mcp:
  server_name:  # Used as prefix for tools
    transport: "stdio" | "streamable_http"

    # For stdio transport (local subprocess)
    command: "command_to_run"
    args: ["arg1", "arg2"]
    env:  # Optional, environment variables for server process
      KEY: "${env.VAR_NAME}"  # Can reference env variables

    # For streamable_http transport (remote server)
    url: "http://localhost:8000/mcp"
    headers:  # Optional
      X-API-KEY: "${env.API_KEY}"

    # Auto-import scope (optional, default: "none")
    # Controls where tools from this server are automatically imported
    auto_import_scope: none | shared | chat
```
Auto-Import Scope
The auto_import_scope field controls where tools from an MCP server are automatically imported:
- `none` (default): Tools are not automatically imported. You must explicitly import them using `import_tool` or `import_tools` statements.
- `shared`: Tools are automatically imported into the `shared.tools` section, making them accessible across all agents.
- `chat`: Tools are automatically imported into the `chat.tools` section, making them available only in chat mode.
Example with auto-import:
```yaml
mcp:
  echo:
    transport: "stdio"
    command: "uvx"
    args: ["echo-mcp-server-for-testing"]
    auto_import_scope: chat  # All tools automatically available in chat

chat:
  system_message: "You are a helpful assistant."
  # No need to explicitly import echo tools - they're already available
```
Example without auto-import (manual import):
```yaml
mcp:
  github:
    transport: "streamable_http"
    url: "https://api.githubcopilot.com/mcp/"
    auto_import_scope: none  # Must import manually

chat:
  system_message: "You are a helpful assistant."
  tools:
    - import_tools: mcp:github  # Explicitly import GitHub tools
      prefix: gh_
      filter: ["!*delete*"]
```
Transport Types
Stdio Transport - For local MCP servers running as subprocesses:
```yaml
mcp:
  math:
    transport: "stdio"
    command: "uvx"
    args: ["mcp-server-math"]
    auto_import_scope: chat
```
HTTP Transport - For remote MCP servers accessible via HTTP:
```yaml
mcp:
  weather:
    transport: "streamable_http"
    url: "http://localhost:8000/mcp"
    headers:
      X-API-KEY: "${env.WEATHER_API_KEY}"
    auto_import_scope: none
```
Environment Variable Substitution
MCP server configurations support environment variable substitution using the ${env.VAR_NAME} syntax in both args and env fields:
```yaml
mcp:
  github:
    transport: "stdio"
    command: "npx"
    args:
      - "-y"
      - "@modelcontextprotocol/server-github"
      - "--config"
      - "${env.CONFIG_PATH}/github.json"    # In args
    env:
      GITHUB_TOKEN: "${env.GITHUB_TOKEN}"   # In env
      LOG_PATH: "/var/log/${env.USER}.log"  # Embedded substitution
```
Key Features:
- The `${env.VAR_NAME}` references are replaced with actual environment variable values at runtime
- Works in both `args` (list of arguments) and `env` (environment variables for the server process)
- Supports embedded substitutions: `"prefix_${env.VAR}_suffix"` → `"prefix_value_suffix"`
- Multiple variables can be used in one string: `"${env.VAR1} and ${env.VAR2}"`
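The substitution rules above can be sketched in Python; `expand_env` is a hypothetical helper written for illustration, not part of the llm-workers API:

```python
import os
import re

def expand_env(value: str, environ=os.environ) -> str:
    """Replace ${env.VAR} references with values from the environment.

    Raises KeyError for undefined variables, mirroring the documented
    behavior of failing at initialization.
    """
    def repl(match: re.Match) -> str:
        name = match.group(1)
        if name not in environ:
            raise KeyError(f"Environment variable {name!r} is not set")
        return environ[name]

    return re.sub(r"\$\{env\.([A-Za-z_][A-Za-z0-9_]*)\}", repl, value)

# Embedded and multiple substitutions work within one string:
env = {"USER": "alice", "VAR1": "a", "VAR2": "b"}
print(expand_env("/var/log/${env.USER}.log", env))    # /var/log/alice.log
print(expand_env("${env.VAR1} and ${env.VAR2}", env)) # a and b
```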
Tool Naming and Import Control
When importing tools from MCP servers (either via auto_import_scope or import_tools), you control the naming and behavior through the import statement:
With auto-import:
- Tools are imported with their original names from the MCP server
- The server name is used as a prefix: `server_name_tool_name`
- Example: `add` tool from the `math` server → `math_add`
With manual import (import_tools):
- You have full control via the `prefix` parameter
- You can filter, add UI hints, and require confirmation
- See the Mass Import section for details
Example with filtering and UI hints:
```yaml
mcp:
  github:
    transport: "stdio"
    command: "npx"
    args: ["-y", "@modelcontextprotocol/server-github"]
    auto_import_scope: none  # Must import manually

chat:
  tools:
    - import_tools: mcp:github
      prefix: gh_
      filter:
        - "!*write*"   # Exclude write operations
        - "!*delete*"  # Exclude delete operations
      force_ui_hints_for: ["*"]
      ui_hints_args: ["owner", "repo"]
      force_require_confirmation_for: ["*push*"]
```
Complete MCP Example
```yaml
env:
  GITHUB_TOKEN:
    description: "GitHub personal access token"
    persistent: true
  WEATHER_API_KEY:
    description: "API key for weather service"
    persistent: true

mcp:
  # Local math server with auto-import
  math:
    transport: "stdio"
    command: "uvx"
    args: ["mcp-server-math"]
    auto_import_scope: chat  # Tools automatically available in chat

  # GitHub server with manual import for fine-grained control
  github:
    transport: "stdio"
    command: "npx"
    args: ["-y", "@modelcontextprotocol/server-github"]
    env:
      GITHUB_TOKEN: "${env.GITHUB_TOKEN}"
    auto_import_scope: none  # Must import manually

  # Remote weather API with auto-import
  weather:
    transport: "streamable_http"
    url: "http://localhost:8000/mcp"
    headers:
      X-API-KEY: "${env.WEATHER_API_KEY}"
    auto_import_scope: chat

# Regular tools work alongside MCP tools
shared:
  tools:
    - import_tool: llm_workers.tools.fs.read_file

chat:
  system_message: "You are a helpful assistant with access to MCP tools."
  tools:
    # GitHub tools imported manually with fine-grained control
    - import_tools: mcp:github
      prefix: gh_
      filter:
        - "!*write*"
        - "!*delete*"
      force_ui_hints_for: ["*"]
      ui_hints_args: ["owner", "repo"]
      force_require_confirmation_for: ["*push*"]
    # Regular tool reference
    - read_file
    # math and weather tools are auto-imported, no need to list them
```
Error Handling
- If an MCP server fails to connect, the error is logged and the system continues with other servers
- If a tool name conflicts with an existing tool, the MCP tool is skipped with a warning
- Environment variables that don’t exist will raise an error during initialization
Shared Tools Section
The shared.tools section defines shared tools that can be used across agents and other tools. Tools can be:
- Imported from Python classes or functions (`import_tool`)
- Mass imported from toolkits or MCP servers (`import_tools`)
- Custom tools defined using statement composition (custom tool definition)
Structure:
```yaml
shared:
  tools:
    # Import single tool from Python class/function
    - import_tool: <import_path>
      name: <tool_name>           # Optional, can override the default name
      description: <description>  # Optional
      # ... other tool parameters

    # Import single tool from toolkit
    - import_tool: <toolkit_class>/<tool_name>
      # ... tool parameters

    # Import single tool from MCP server
    - import_tool: mcp:<server_name>/<tool_name>
      # ... tool parameters

    # Mass import from toolkit or MCP server
    - import_tools: <toolkit_class_or_mcp_server>
      prefix: <prefix>                           # Mandatory, can be empty ""
      filter: [<pattern>, ...]                   # Optional, default: ["*"]
      ui_hints_for: [<pattern>, ...]             # Optional, default: ["*"]
      ui_hints_args: [<arg>, ...]                # Optional, default: []
      require_confirmation_for: [<pattern>, ...] # Optional, default: []

    # Custom tool definition
    - name: <tool_name>
      description: <description>
      input:
        - name: <param_name>
          description: <param_description>
          type: <type>
      do:
        # ... statements
```
Example:
```yaml
shared:
  tools:
    # Single tool import from Python class
    - import_tool: llm_workers.tools.fetch.fetch_page_text
      name: _fetch_page_text

    # Single tool import from Python function
    - import_tool: llm_workers.tools.llm_tool.build_llm_tool
      name: _LLM

    # Mass import from toolkit with filtering
    - import_tools: llm_workers.tools.fs.FilesystemToolkit
      prefix: fs_
      filter:
        - "read_*"    # Include read operations
        - "!write_*"  # Exclude write operations
      ui_hints_for: ["*"]
      ui_hints_args: ["path"]
      require_confirmation_for: []

    # Custom tool definition
    - name: metacritic_monkey
      description: >
        Finds the Metacritic score for a given movie title and year.
        Returns either a single number or "N/A" if the movie is not found.
      input:
        - name: movie_title
          description: Movie title
          type: str
        - name: movie_year
          description: Movie release year
          type: int
      ui_hint: Looking up Metacritic score for movie "{movie_title}" ({movie_year})
      do:
        - call: _fetch_page_text
          params:
            url: "https://www.metacritic.com/search/{movie_title}/?page=1&category=2"
            xpath: "//*[@class=\"c-pageSiteSearch-results\"]"
        - call: _LLM
          params:
            prompt: >
              Find Metacritic score for movie "${movie_title}" released in ${movie_year}.
              To do so:
              - From the list of possible matches, choose the one matching the movie title and year and return its Metacritic score as a single number
              - If no matching movie is found, respond with just "N/A" (without quotes)
              - DO NOT provide any additional information in the response
              Possible matches:
              ${_}
```
In addition to defining tools in the shared.tools section, you can define tools inline within the tools configuration of the chat, CLI, and custom tool sections. This provides flexibility for single-use tools or when you need to customize tool behavior for a specific context. See the relevant sections below.
Common Tool Parameters
- `name`: Unique identifier for the tool, used to reference the tool in other parts of the script. Names should be unique within the script and should not contain spaces or special characters. Tools with names starting with `_` are considered "private": they are not available for LLM use unless added to a `tools` list explicitly.
- `description`: Brief description of the tool's purpose. Optional for imported tools; taken from the Python code if omitted.
- `return_direct`: Optional, defaults to `false`. If `true`, the tool's result is returned directly to the user without further processing by the LLM. This is useful for tools that provide direct answers or results.
- `confidential`: Optional, defaults to `false`. If `true`, the tool's result is considered sensitive and will not be used in subsequent LLM calls. This is useful for tools that return sensitive data, such as passwords or personal information. Implies `return_direct: true`.
- `require_confirmation`: Optional, defaults to the Python tool's logic (if defined, can be on a per-call basis) or `false`.
  - If `true`, the tool requires user confirmation before executing. This is useful for tools that perform actions with significant consequences, such as deleting files or sending emails.
  - If `false`, the tool does not require user confirmation before executing, even if the Python tool code does.
- `ui_hint`: Optional, defaults to the Python tool's logic (if defined), or empty. If defined, it is used to generate a UI hint for the tool. This can be a template string referencing the tool's parameters.
See Using Python Tools and Defining Custom Tools sections below for more details on how to define and use tools.
Shared Data Section
The shared.data section provides a way to define reusable variables that can be accessed by all custom tools in the script. This is useful for avoiding duplication of common values like API endpoints, prompts, or configuration settings.
Key Features:
- Each key in `shared.data` becomes a top-level variable in the evaluation context
- Values can be any JSON-serializable data
- Variables can reference other variables defined earlier in the `data` section
- Accessible in custom tools via the `${key}` template syntax
- Variables are read-only after initialization
- Supports the `!require` directive to load content from external files
Example:
```yaml
shared:
  data:
    prompts:
      test: "Yada-yada-yada"
    api_base: "https://api.example.com"
    api_version: "v1"
  tools:
    - name: demo_shared_access
      input:
        - name: query
          description: "Search query"
          type: str
      do:
        eval: "Query ${query} returned ${prompts['test']}"
```
Loading External Files with !require:
The !require directive allows you to load content from external files into shared data variables:
shared:
data:
# Load instructions from an external markdown file
reformat_Scala_instructions: !require reformat-Scala-instructions.md
# Load configuration from JSON
api_config: !require config.json
tools:
- name: process_with_instructions
input:
- name: code
type: str
do:
call: llm
params:
prompt: |
Follow these instructions:
${reformat_Scala_instructions}
Code to process:
${code}
This is particularly useful for:
- Keeping large prompt templates in separate files
- Sharing configuration across multiple scripts
- Maintaining complex instructions separately from the main script
- Version controlling prompts and instructions independently
Usage Notes:
- The `shared.data` section is optional and defaults to an empty dictionary
- Variables are accessible directly by their key name: `${api_base}`, `${prompts}`
- Use bracket notation for accessing nested values: `${prompts['test']}`
- Variables are evaluated in order, so later variables can reference earlier ones
- Changes to the shared section require reloading the script configuration
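A minimal sketch of this ordered evaluation (assuming Python-expression semantics inside `${...}`; `build_shared_data` is a hypothetical name, and the real implementation may differ):

```python
import re

def build_shared_data(raw: dict) -> dict:
    """Evaluate shared.data entries in order; later entries may reference
    earlier ones via ${expr}. Illustrative sketch only."""
    context: dict = {}
    pattern = re.compile(r"\$\{([^}]+)\}")
    for key, value in raw.items():
        if isinstance(value, str):
            # Resolve each ${...} expression against the keys defined so far
            value = pattern.sub(lambda m: str(eval(m.group(1), {}, context)), value)
        context[key] = value
    return context

data = build_shared_data({
    "api_base": "https://api.example.com",
    "api_version": "v1",
    "api_url": "${api_base}/${api_version}",  # references earlier keys
})
print(data["api_url"])  # https://api.example.com/v1

# Bracket notation reaches into nested values:
data2 = build_shared_data({"prompts": {"test": "Yada"}, "msg": "p=${prompts['test']}"})
print(data2["msg"])  # p=Yada
```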
Chat Section
Configuration for interactive chat mode:
- `model_ref`: References a model configured in `~/.config/llm-workers/config.yaml` (fast/default/thinking), defaults to "default"
- `system_message`: Instructions for the LLM's behavior
- `default_prompt`: Initial prompt when starting the chat, defaults to an empty string
- `user_banner`: Optional markdown-formatted text displayed at the beginning of the chat session, defaults to not shown
- `tools`: (Optional) List of tools to make available for this LLM. Defaults to all public tools (i.e. those whose names do not start with `_`).
Tools can be specified in multiple ways:

- By name - reference tools defined in the `tools` section:
  ```yaml
  tools:
    - read_file
    - write_file
  ```
- By pattern - use patterns to match multiple tools:
  ```yaml
  tools:
    - match: ["fs_*", "!fs_write*"]  # Include fs_* but exclude fs_write*
  ```
- Inline `import_tool` - import a single tool with full control:
  ```yaml
  tools:
    - import_tool: llm_workers.tools.fs.read_file
      require_confirmation: false
  ```
- Inline `import_tools` - mass import from a toolkit or MCP server:
  ```yaml
  tools:
    - import_tools: llm_workers.tools.fs.FilesystemToolkit
      prefix: fs_
      filter: ["read_*", "!write_*"]
  ```
- Inline custom tool - define a custom tool directly:
  ```yaml
  tools:
    - name: custom_processor
      description: "Process data"
      input:
        - name: data
          type: str
      do:
        - eval: "Processed: ${data}"
  ```
Example:
```yaml
chat:
  model_ref: thinking
  user_banner: |
    # Game Analytics Assistant
    Welcome to the mobile game analytics environment! I can help you investigate live game issues by:
    - Writing Python scripts to fetch and analyze data
    - Connecting data from various sources
    - Generating reports in JSON format
    Type your requests naturally and I'll get to work!
  system_message: |-
    You are AI assistant in a mobile game company.
    Your team is investigating issues in a live game. Your task is to help your team by writing
    and running Python scripts to fetch and connect data from various sources.
    If needed, preview text file content by reading first few lines from it.
    Unless explicitly asked by user, write script result to promptly named file in the current directory,
    output only progress information and file name. Prefer '.json' as output format.
    If unsure about the requirements, ask for clarification.
  default_prompt: |-
    Please run Python script to detect Python version.
  tools:
    # Reference by name
    - read_file
    # Pattern matching
    - match: ["fs_read*", "fs_list*"]
    # Inline tool import
    - import_tool: llm_workers.tools.unsafe.run_python_script
      name: run_python
      require_confirmation: true
    # Mass import from toolkit
    - import_tools: llm_workers.tools.fs.FilesystemToolkit
      prefix: safe_
      filter: ["read_*", "list_*"]
```
CLI Section
Configuration for command-line interface. Defines how to process command-line inputs using statement composition.
Structure:
```yaml
cli:
  process_input: one_by_one | all_as_list
  json_output: false  # Optional, default: false
  do:
    <statement(s)>  # Statements to execute
```
Parameters:
- `process_input`: (Required) How to process command-line inputs:
  - `one_by_one`: Execute the flow once for each input argument. Each input is available as the `${input}` variable.
  - `all_as_list`: Execute the flow once with all inputs as a list. The list is available as the `${input}` variable.
- `json_output`: (Optional, default: `false`) If `true`, outputs results as JSON. If `false`, outputs as plain text.
- `do`: (Required) Statements to execute for processing inputs. Can reference the `${input}` variable.
Input Sources:
- Command-line arguments: `llm-workers-cli script.yaml input1 input2 input3`
- Standard input: `echo -e "input1\ninput2" | llm-workers-cli script.yaml --`
See “Custom Tools” section for details on statements.
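The difference between the two modes can be sketched as follows; `run_cli` is a hypothetical helper, not the actual CLI implementation:

```python
def run_cli(inputs: list, process_input: str, do):
    """Dispatch CLI inputs according to process_input (illustrative sketch).

    one_by_one: run `do` once per input; all_as_list: run once with the full list.
    """
    if process_input == "one_by_one":
        return [do({"input": item}) for item in inputs]
    if process_input == "all_as_list":
        return [do({"input": inputs})]
    raise ValueError(f"Unknown process_input mode: {process_input!r}")

echo = lambda ctx: ctx["input"]
print(run_cli(["a", "b"], "one_by_one", echo))   # ['a', 'b'] -- two executions
print(run_cli(["a", "b"], "all_as_list", echo))  # [['a', 'b']] -- one execution
```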
Example with process_input: one_by_one:
````yaml
cli:
  process_input: one_by_one
  json_output: false
  do:
    - call: read_file
      params:
        path: "${input}"
        lines: 10000
    - call: llm
      params:
        prompt: |-
          You are a senior Scala developer. Your job is to reformat the given file according to the rules below.
          Rules:
          - Add types for local variables and method returns. Do not make assumptions about the type - if the type cannot be
            inferred from the provided code alone, omit the type definition.
          - Break long statements up into several lines, use intermediate variables
          - When dealing with Java code, handle null values explicitly and as soon as possible
          - Don't add `null` checks unless they are really needed
          - Don't wrap `null`-s in `Option` unless you pass them further to Scala code
          - Handle the `null`-s immediately if possible; just wrapping them in `Option` pushes the responsibility to the receiver of the `Option`.
          - Get rid of Option-s as early as possible, prefer pattern matching over call chaining
          - Don't use infix or postfix notation, use dot notation with parentheses everywhere: `obj.method(args)`
          - Chain method calls with dots on new line
          - Always use braces () in method definition and calls
          - Use curly braces in method definitions
          - Prefer code readability over complex pure functional code
          - Prefer for comprehension over chaining async method calls
          - Don't use curly braces for if-else with a single statement:
            ```scala
            if (playerContribution <= 2)
              1 + Math.floor(titanStars / 2.0).toInt
            else
              1
            ```
          - Don't use curly braces for if with return:
            ```scala
            if (playerContribution <= 2) return 1 + Math.floor(titanStars / 2.0).toInt
            ```
          After reformatting, output just the file content without any additional comments or formatting.
          If no changes are needed, respond with string "NO CHANGES" (without quotes).
          Input file:
          ${_}
    - if: "${_ == 'NO CHANGES'}"
      then:
        eval: "${input}: NO CHANGES"
      else:
        - call: write_file
          params:
            path: "${input}"
            content: "${_}"
            if_exists: "overwrite"
        - eval: "${input}: FIXED"
````
Example with process_input: all_as_list:
```yaml
cli:
  process_input: all_as_list
  json_output: true
  do:
    - call: llm
      params:
        prompt: |-
          Analyze the following list of file names and categorize them by type.
          Return a JSON object with categories as keys and lists of files as values.
          Files: ${input}
```
In this example, all command-line inputs are passed as a list to the ${input} variable, and the result is output as JSON.
Using Tools
Referencing Tools
Tools can be referenced in different ways depending on the context:
In call statements (single tool reference):
- By name only: `call: read_file`
Important: As of recent versions, inline tool definitions in call statements are no longer supported. Tools must be defined in a tools section (either in shared.tools, chat.tools, custom tool’s tools, or CLI’s tools) before they can be referenced by name in call statements.
In tools sections (tool definitions):
- By name: `- read_file` (references a previously defined tool)
- By pattern: `- match: ["fs_*", "!fs_write*"]` (matches multiple tools)
- Import single tool: `- import_tool: llm_workers.tools.fs.read_file`
- Import from toolkit: `- import_tool: llm_workers.tools.fs.FilesystemToolkit/read_file`
- Import from MCP: `- import_tool: mcp:server_name/tool_name`
- Mass import: `- import_tools: llm_workers.tools.fs.FilesystemToolkit`
- Custom definition: `- name: my_tool ...`
For most projects, tools should be defined in the shared.tools section and referenced by name in call statements.
Importing Tools
LLM Workers provides three ways to import tools:
1. Import Single Tool (import_tool)
The import_tool statement imports a single tool with full control over its properties.
Import from Python class or function:
```yaml
tools:
  - import_tool: llm_workers.tools.fs.read_file
    name: _read_file                   # Optional, can override default name
    description: "Custom description"  # Optional
    require_confirmation: true         # Optional
```
Import single tool from toolkit:
You can import one specific tool from a toolkit using <toolkit_class>/<tool_name> syntax:
```yaml
tools:
  - import_tool: llm_workers.tools.fs.FilesystemToolkit/read_file
    name: my_read_file
    ui_hint: "Reading file: ${path}"
```
Import single tool from MCP server:
You can import one specific tool from an MCP server using mcp:<server_name>/<tool_name> syntax:
```yaml
tools:
  - import_tool: mcp:github/search_repositories
    name: gh_search
    require_confirmation: false
```
Supported import sources:
- A Python class extending `langchain_core.tools.base.BaseTool` (instantiated with config parameters)
- A Python factory function/method returning a `BaseTool` instance
- A single tool from a toolkit
- A single tool from an MCP server
Factory functions must conform to this signature:
```python
def build_tool(context: WorkersContext, tool_config: Dict[str, Any]) -> BaseTool:
    ...
```
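For illustration, a factory of this shape might look like the sketch below; `FakeBaseTool` is a minimal stand-in for `langchain_core.tools.base.BaseTool`, and `build_greeting_tool` is a hypothetical example, not part of the library:

```python
from typing import Any, Dict

class FakeBaseTool:
    """Minimal stand-in for langchain_core.tools.base.BaseTool (illustration only)."""
    def __init__(self, name: str, description: str):
        self.name = name
        self.description = description

def build_greeting_tool(context: Any, tool_config: Dict[str, Any]) -> FakeBaseTool:
    # A factory reads its settings from the tool's configuration in the script
    return FakeBaseTool(
        name=tool_config.get("name", "greeting"),
        description=tool_config.get("description", "Greets the user"),
    )

tool = build_greeting_tool(context=None, tool_config={"name": "hello"})
print(tool.name)  # hello
```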
2. Mass Import (import_tools)
The import_tools statement imports multiple tools at once with basic control over their properties.
Import from toolkit:
```yaml
tools:
  - import_tools: llm_workers.tools.fs.FilesystemToolkit
    prefix: fs_    # Mandatory prefix (can be empty "")
    filter:        # Optional, default: ["*"]
      - "read_*"   # Include read operations
      - "list_*"   # Include list operations
      - "!write_*" # Exclude write operations
    ui_hints_for: ["*"]                   # Optional, patterns for UI hints
    ui_hints_args: ["path"]               # Optional, args to show in UI hints
    require_confirmation_for: ["write_*"] # Optional, patterns requiring confirmation
```
Import from MCP server:
```yaml
tools:
  - import_tools: mcp:github
    prefix: gh_
    filter:
      - "!*delete*"  # Exclude delete operations
      - "!*force*"   # Exclude force operations
    ui_hints_for: ["*"]
    ui_hints_args: ["owner", "repo"]
    require_confirmation_for: ["*write*", "*push*"]
```
Pattern Matching:
The filter, ui_hints_for, and require_confirmation_for fields support Unix shell-style wildcards:
- `*` - matches everything
- `?` - matches any single character
- `[seq]` - matches any character in seq
- `[!seq]` - matches any character not in seq
- `!pattern` - negation (exclude matching tools)
Patterns are evaluated in order:
- Tools are included if they match any inclusion pattern
- Tools are excluded if they match any exclusion pattern (prefixed with `!`)
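This include/exclude logic can be approximated with Python's `fnmatch` module, which implements Unix shell-style wildcards; `apply_filter` is a hypothetical sketch of the documented semantics, not the actual implementation:

```python
from fnmatch import fnmatch

def apply_filter(names, patterns):
    """Keep names matching any inclusion pattern and not matching any
    '!'-prefixed exclusion pattern. Illustrative sketch; if only exclusions
    are given, everything else is assumed included (default filter is ["*"])."""
    includes = [p for p in patterns if not p.startswith("!")] or ["*"]
    excludes = [p[1:] for p in patterns if p.startswith("!")]
    return [
        n for n in names
        if any(fnmatch(n, p) for p in includes)
        and not any(fnmatch(n, p) for p in excludes)
    ]

tools = ["read_file", "write_file", "list_dir", "delete_file"]
print(apply_filter(tools, ["read_*", "list_*", "!write_*"]))
# ['read_file', 'list_dir']
print(apply_filter(tools, ["!*delete*"]))
# ['read_file', 'write_file', 'list_dir'] -- everything except delete_file
```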
Parameters:
- `prefix`: (Mandatory) Prefix for all imported tool names. Can be empty `""` for no prefix
- `filter`: (Optional, default: `["*"]`) Patterns to include/exclude tools
- `ui_hints_args`: (Optional, default: `[]`) Tool arguments to include in UI hints (empty means show tool name only)
- `force_ui_hints_for`: (Optional) Patterns to force UI hints for tools without ExtendedBaseTool (e.g., MCP tools)
- `force_no_ui_hints_for`: (Optional) Patterns to explicitly suppress UI hints even for ExtendedBaseTool
- `force_require_confirmation_for`: (Optional) Patterns to force confirmation for tools without ExtendedBaseTool (e.g., MCP tools)
3. Custom Tool Definitions
Tools without import_tool or import_tools are custom tool definitions. See Custom Tools for details.
Example:
```yaml
tools:
  - name: metacritic_monkey
    description: "Finds Metacritic score for a movie"
    input:
      - name: movie_title
        description: Movie title
        type: str
    do:
      - call: fetch_page_text
        params:
          url: "https://www.metacritic.com/search/{movie_title}"
```
For detailed documentation on built-in tools (web fetching, file operations, LLM tools, etc.), see Built-in Tools.
For documentation on creating custom tools with statement composition, see Custom Tools.