# Smolagents

## Docs

- [`smolagents`](https://huggingface.co/docs/smolagents/main/index.md)
- [Agents - Guided tour](https://huggingface.co/docs/smolagents/main/guided_tour.md)
- [Installation Options](https://huggingface.co/docs/smolagents/main/installation.md)
- [What are agents? 🤔](https://huggingface.co/docs/smolagents/main/conceptual_guides/intro_agents.md)
- [How do multi-step agents work?](https://huggingface.co/docs/smolagents/main/conceptual_guides/react.md)
- [Agents](https://huggingface.co/docs/smolagents/main/reference/agents.md)
- [Built-in Tools](https://huggingface.co/docs/smolagents/main/reference/default_tools.md)
- [Tools](https://huggingface.co/docs/smolagents/main/reference/tools.md)
- [Models](https://huggingface.co/docs/smolagents/main/reference/models.md)
- [Agentic RAG](https://huggingface.co/docs/smolagents/main/examples/rag.md)
- [Web Browser Automation with Agents 🤖🌐](https://huggingface.co/docs/smolagents/main/examples/web_browser.md)
- [Async Applications with Agents](https://huggingface.co/docs/smolagents/main/examples/async_agent.md)
- [Human-in-the-Loop: Customize Agent Plan Interactively](https://huggingface.co/docs/smolagents/main/examples/plan_customization.md)
- [Orchestrate a multi-agent system 🤖🤝🤖](https://huggingface.co/docs/smolagents/main/examples/multiagents.md)
- [Text-to-SQL](https://huggingface.co/docs/smolagents/main/examples/text_to_sql.md)
- [Using different models](https://huggingface.co/docs/smolagents/main/examples/using_different_models.md)
- [Building good agents](https://huggingface.co/docs/smolagents/main/tutorials/building_good_agents.md)
- [📚 Manage your agent's memory](https://huggingface.co/docs/smolagents/main/tutorials/memory.md)
- [Tools](https://huggingface.co/docs/smolagents/main/tutorials/tools.md)
- [Inspecting runs with OpenTelemetry](https://huggingface.co/docs/smolagents/main/tutorials/inspect_runs.md)
- [Secure code execution](https://huggingface.co/docs/smolagents/main/tutorials/secure_code_execution.md)

### `smolagents`
https://huggingface.co/docs/smolagents/main/index.md

# `smolagents`

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolagents/license_to_call.png" style="max-width:700px"/>
</div>

## What is smolagents?

`smolagents` is an open-source Python library designed to make it extremely easy to build and run agents using just a few lines of code.

Key features of `smolagents` include:

✨ **Simplicity**: The logic for agents fits in roughly a thousand lines of code. We kept abstractions to their minimal shape above raw code!

🧑‍💻 **First-class support for Code Agents**: [`CodeAgent`](reference/agents#smolagents.CodeAgent) writes its actions in code (as opposed to "agents being used to write code") to invoke tools or perform computations, enabling natural composability (function nesting, loops, conditionals). For security, we support [executing in a sandboxed environment](tutorials/secure_code_execution) via [E2B](https://e2b.dev/) or Docker.

📡 **Common Tool-Calling Agent Support**: In addition to CodeAgents, [`ToolCallingAgent`](reference/agents#smolagents.ToolCallingAgent) supports usual JSON/text-based tool-calling for scenarios where that paradigm is preferred.

🤗 **Hub integrations**: Seamlessly share and load agents and tools to/from the Hub as Gradio Spaces.

🌐 **Model-agnostic**: Easily integrate any large language model (LLM), whether it's hosted on the Hub via [Inference providers](https://huggingface.co/docs/inference-providers/index), accessed via APIs such as OpenAI, Anthropic, or many others via LiteLLM integration, or run locally using Transformers or Ollama. Powering an agent with your preferred LLM is straightforward and flexible.

👁️ **Modality-agnostic**: Beyond text, agents can handle vision, video, and audio inputs, broadening the range of possible applications. Check out [this tutorial](examples/web_browser) for vision.

🛠️ **Tool-agnostic**: You can use tools from any [MCP server](reference/tools#smolagents.ToolCollection.from_mcp), from [LangChain](reference/tools#smolagents.Tool.from_langchain), you can even use a [Hub Space](reference/tools#smolagents.Tool.from_space) as a tool.

💻 **CLI Tools**: Comes with command-line utilities (`smolagent`, `webagent`) for quickly running agents without writing boilerplate code.

## Quickstart


Get started with smolagents in just a few minutes! This guide will show you how to create and run your first agent.

### Installation

Install smolagents with pip:

```bash
pip install 'smolagents[toolkit]'  # Includes default tools like web search
```

### Create Your First Agent

Here's a minimal example to create and run an agent:

```python
from smolagents import CodeAgent, InferenceClientModel

# Initialize a model (using Hugging Face Inference API)
model = InferenceClientModel()  # Uses a default model

# Create an agent with no tools
agent = CodeAgent(tools=[], model=model)

# Run the agent with a task
result = agent.run("Calculate the sum of numbers from 1 to 10")
print(result)
```

That's it! Your agent will use Python code to solve the task and return the result.
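For this task, the agent might generate and execute a snippet along these lines (illustrative only; the actual code a model generates will vary):

```python
# Illustrative only: the kind of Python snippet a CodeAgent could generate
# for "Calculate the sum of numbers from 1 to 10".
numbers_sum = sum(range(1, 11))  # sum of the integers 1 through 10
print(numbers_sum)  # 55
```

The agent then returns this computed value as its final answer.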

### Adding Tools

Let's make our agent more capable by adding some tools:

```python
from smolagents import CodeAgent, InferenceClientModel, DuckDuckGoSearchTool

model = InferenceClientModel()
agent = CodeAgent(
    tools=[DuckDuckGoSearchTool()],
    model=model,
)

# Now the agent can search the web!
result = agent.run("What is the current weather in Paris?")
print(result)
```

### Using Different Models

You can use various models with your agent:

```python
# Using a specific model from Hugging Face
model = InferenceClientModel(model_id="meta-llama/Llama-2-70b-chat-hf")

# Using OpenAI/Anthropic (requires 'smolagents[litellm]')
from smolagents import LiteLLMModel
model = LiteLLMModel(model_id="gpt-4")

# Using local models (requires 'smolagents[transformers]')
from smolagents import TransformersModel
model = TransformersModel(model_id="meta-llama/Llama-2-7b-chat-hf")
```

## Next Steps

- Learn how to set up smolagents with various models and tools in the [Installation Guide](installation)
- Check out the [Guided Tour](guided_tour) for more advanced features
- Learn about [building custom tools](tutorials/tools)
- Explore [secure code execution](tutorials/secure_code_execution)
- See how to create [multi-agent systems](tutorials/building_good_agents)

<div class="mt-10">
  <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5">
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./guided_tour"
      ><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Guided tour</div>
      <p class="text-gray-700">Learn the basics and become familiar with using Agents. Start here if you are using Agents for the first time!</p>
    </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./examples/text_to_sql"
      ><div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">How-to guides</div>
      <p class="text-gray-700">Practical guides to help you achieve a specific goal: create an agent to generate and test SQL queries!</p>
    </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./conceptual_guides/intro_agents"
      ><div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Conceptual guides</div>
      <p class="text-gray-700">High-level explanations for building a better understanding of important topics.</p>
   </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./tutorials/building_good_agents"
      ><div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Tutorials</div>
      <p class="text-gray-700">Horizontal tutorials that cover important aspects of building agents.</p>
    </a>
  </div>
</div>


<EditOnGithub source="https://github.com/huggingface/smolagents/blob/main/docs/source/en/index.md" />

### Agents - Guided tour
https://huggingface.co/docs/smolagents/main/guided_tour.md

# Agents - Guided tour


In this guided tour, you will learn how to build an agent, how to run it, and how to customize it to make it work better for your use case.

## Choosing an agent type: CodeAgent or ToolCallingAgent

`smolagents` comes with two agent classes: [CodeAgent](/docs/smolagents/main/en/reference/agents#smolagents.CodeAgent) and [ToolCallingAgent](/docs/smolagents/main/en/reference/agents#smolagents.ToolCallingAgent), which represent two different paradigms for how agents interact with tools.
The key difference lies in how actions are specified and executed: code generation vs structured tool calling.

- [CodeAgent](/docs/smolagents/main/en/reference/agents#smolagents.CodeAgent) generates tool calls as Python code snippets.
  - The code is executed either locally (potentially insecure) or in a secure sandbox.
  - Tools are exposed as Python functions (via bindings).
  - Example of tool call:
    ```py
    result = search_docs("What is the capital of France?")
    print(result)
    ```
  - Strengths:
    - Highly expressive: allows complex logic and control flow; can combine tools, loop, transform, and reason.
    - Flexible: No need to predefine every possible action, can dynamically generate new actions/tools.
    - Emergent reasoning: Ideal for multi-step problems or dynamic logic.
  - Limitations:
    - Risk of errors: Must handle syntax errors, exceptions.
    - Less predictable: More prone to unexpected or unsafe outputs.
    - Requires secure execution environment.

- [ToolCallingAgent](/docs/smolagents/main/en/reference/agents#smolagents.ToolCallingAgent) writes tool calls as structured JSON.
  - This is the common format used in many frameworks (OpenAI API), allowing for structured tool interactions without code execution.
  - Tools are defined with a JSON schema: name, description, parameter types, etc.
  - Example of tool call:
    ```json
    {
      "tool_call": {
        "name": "search_docs",
        "arguments": {
          "query": "What is the capital of France?"
        }
      }
    }
    ```
  - Strengths:
    - Reliable: Less prone to hallucination, outputs are structured and validated.
    - Safe: Arguments are strictly validated, no risk of arbitrary code running.
    - Interoperable: Easy to map to external APIs or services.
  - Limitations:
    - Low expressivity: Can't easily combine or transform results dynamically, or perform complex logic or control flow.
    - Inflexible: Must define all possible actions in advance, limited to predefined tools.
    - No code synthesis: Limited to tool capabilities.
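To make the structured-call paradigm concrete, here is a minimal, framework-agnostic sketch of validating a JSON tool call against a registered tool before dispatching it. This is not smolagents' actual implementation; the registry and `dispatch` helper are hypothetical:

```python
import json

# Hypothetical tool registry: name -> (expected argument names, callable)
TOOLS = {
    "search_docs": ({"query"}, lambda query: f"Results for: {query}"),
}

def dispatch(tool_call_json: str) -> str:
    """Validate a JSON tool call against the registry, then invoke the tool."""
    call = json.loads(tool_call_json)["tool_call"]
    name, args = call["name"], call["arguments"]
    if name not in TOOLS:
        raise ValueError(f"Unknown tool: {name}")
    expected, fn = TOOLS[name]
    if set(args) != expected:
        raise ValueError(f"Bad arguments for {name}: {set(args)}")
    return fn(**args)

raw = '{"tool_call": {"name": "search_docs", "arguments": {"query": "What is the capital of France?"}}}'
print(dispatch(raw))
```

Because arguments are checked against a schema before anything runs, malformed or hallucinated calls are rejected up front, which is exactly the reliability/safety trade-off described above.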

When to use which agent type:
- Use [CodeAgent](/docs/smolagents/main/en/reference/agents#smolagents.CodeAgent) when:
  - You need reasoning, chaining, or dynamic composition.
  - Tools are functions that can be combined (e.g., parsing + math + querying).
  - Your agent is a problem solver or programmer.

- Use [ToolCallingAgent](/docs/smolagents/main/en/reference/agents#smolagents.ToolCallingAgent) when:
  - You have simple, atomic tools (e.g., call an API, fetch a document).
  - You want high reliability and clear validation.
  - Your agent is like a dispatcher or controller.

## CodeAgent

[CodeAgent](/docs/smolagents/main/en/reference/agents#smolagents.CodeAgent) generates Python code snippets to perform actions and solve tasks.

By default, the Python code execution is done in your local environment.
This should be safe because the only functions that can be called are the tools you provided (especially if those are only tools provided by Hugging Face) and a set of predefined safe functions like `print` or functions from the `math` module, so you're already limited in what can be executed.

The Python interpreter also doesn't allow imports by default outside of a safe list, so all the most obvious attacks shouldn't be an issue.
You can authorize additional imports by passing the authorized modules as a list of strings in argument `additional_authorized_imports` upon initialization of your [CodeAgent](/docs/smolagents/main/en/reference/agents#smolagents.CodeAgent):

```py
model = InferenceClientModel()
agent = CodeAgent(tools=[], model=model, additional_authorized_imports=['requests', 'bs4'])
agent.run("Could you get me the title of the page at url 'https://huggingface.co/blog'?")
```

Additionally, as an extra security layer, access to submodules is forbidden by default, unless explicitly authorized within the import list.
For instance, to access the `numpy.random` submodule, you need to add `'numpy.random'` to the `additional_authorized_imports` list.
This could also be authorized by using `numpy.*`, which will allow `numpy` as well as any subpackage like `numpy.random` and its own subpackages.
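As a simplified illustration of the wildcard semantics described above (this is not smolagents' internal code; `is_authorized` is a hypothetical helper):

```python
def is_authorized(module: str, authorized: list[str]) -> bool:
    """Simplified sketch of wildcard import matching (illustrative only)."""
    for entry in authorized:
        if entry == module:
            return True  # exact match, e.g. "numpy.random"
        if entry.endswith(".*"):
            root = entry[:-2]
            # "numpy.*" matches "numpy" itself and any submodule under it
            if module == root or module.startswith(root + "."):
                return True
    return False

print(is_authorized("numpy.random", ["numpy.*"]))  # True
print(is_authorized("numpy.random", ["numpy"]))    # False: submodules need explicit authorization
print(is_authorized("numpy", ["numpy.*"]))         # True
```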

> [!WARNING]
> The LLM can generate arbitrary code that will then be executed: do not add any unsafe imports!

Execution will stop if the code generated by the agent tries to perform an illegal operation, or if it raises a regular Python error.

You can also use [E2B code executor](https://e2b.dev/docs#what-is-e2-b) or Docker instead of a local Python interpreter. For E2B, first [set the `E2B_API_KEY` environment variable](https://e2b.dev/dashboard?tab=keys) and then pass `executor_type="e2b"` upon agent initialization. For Docker, pass `executor_type="docker"` during initialization.


> [!TIP]
> Learn more about code execution [in this tutorial](tutorials/secure_code_execution).

## ToolCallingAgent

[ToolCallingAgent](/docs/smolagents/main/en/reference/agents#smolagents.ToolCallingAgent) outputs JSON tool calls, the common format used in many frameworks (such as the OpenAI API), allowing for structured tool interactions without code execution. In the example below, we use the built-in `WebSearchTool` (from the smolagents "toolkit" extra, described in more detail later) to let our agent perform web searches.

It works much the same way as [CodeAgent](/docs/smolagents/main/en/reference/agents#smolagents.CodeAgent), though without `additional_authorized_imports` since it doesn't execute code:

```py
from smolagents import InferenceClientModel, ToolCallingAgent, WebSearchTool

model = InferenceClientModel()
agent = ToolCallingAgent(tools=[WebSearchTool()], model=model)
agent.run("Could you get me the title of the page at url 'https://huggingface.co/blog'?")
```

## Building your agent

To initialize a minimal agent, you need at least these two arguments:

- `model`, a text-generation model to power your agent - because the agent is different from a simple LLM, it is a system that uses an LLM as its engine. You can use any of these options:
    - [TransformersModel](/docs/smolagents/main/en/reference/models#smolagents.TransformersModel) takes a pre-initialized `transformers` pipeline to run inference on your local machine using `transformers`.
    - [InferenceClientModel](/docs/smolagents/main/en/reference/models#smolagents.InferenceClientModel) leverages a `huggingface_hub.InferenceClient` under the hood and supports all Inference Providers on the Hub: Cerebras, Cohere, Fal, Fireworks, HF-Inference, Hyperbolic, Nebius, Novita, Replicate, SambaNova, Together, and more.
    - [LiteLLMModel](/docs/smolagents/main/en/reference/models#smolagents.LiteLLMModel) similarly lets you call 100+ different models and providers through [LiteLLM](https://docs.litellm.ai/)!
    - [AzureOpenAIModel](/docs/smolagents/main/en/reference/models#smolagents.AzureOpenAIModel) allows you to use OpenAI models deployed in [Azure](https://azure.microsoft.com/en-us/products/ai-services/openai-service).
    - [AmazonBedrockModel](/docs/smolagents/main/en/reference/models#smolagents.AmazonBedrockModel) allows you to use Amazon Bedrock in [AWS](https://aws.amazon.com/bedrock/?nc1=h_ls).
    - [MLXModel](/docs/smolagents/main/en/reference/models#smolagents.MLXModel) creates a [mlx-lm](https://pypi.org/project/mlx-lm/) pipeline to run inference on your local machine.

- `tools`, a list of `Tools` that the agent can use to solve the task. It can be an empty list. You can also add the default toolbox on top of your `tools` list by defining the optional argument `add_base_tools=True`.

Once you have these two arguments, `tools` and `model`, you can create an agent and run it. You can use any LLM you'd like, either through [Inference Providers](https://huggingface.co/blog/inference-providers), [transformers](https://github.com/huggingface/transformers/), [ollama](https://ollama.com/), [LiteLLM](https://www.litellm.ai/), [Azure OpenAI](https://azure.microsoft.com/en-us/products/ai-services/openai-service), [Amazon Bedrock](https://aws.amazon.com/bedrock/?nc1=h_ls), or [mlx-lm](https://pypi.org/project/mlx-lm/).

All model classes support passing additional keyword arguments (like `temperature`, `max_tokens`, `top_p`, etc.) directly at instantiation time.
These parameters are automatically forwarded to the underlying model's completion calls, allowing you to configure model behavior such as creativity, response length, and sampling strategies.
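The forwarding pattern can be sketched with a toy model class (illustrative only; `SketchModel` is not part of smolagents):

```python
class SketchModel:
    """Illustrative-only sketch of how model classes can forward init-time
    kwargs (temperature, max_tokens, ...) into each completion call."""

    def __init__(self, model_id: str, **kwargs):
        self.model_id = model_id
        self.default_kwargs = kwargs  # stored at instantiation time

    def generate(self, prompt: str, **call_kwargs) -> dict:
        # Per-call kwargs override the instantiation-time defaults
        params = {**self.default_kwargs, **call_kwargs}
        return {"model": self.model_id, "prompt": prompt, **params}

m = SketchModel("some-model", temperature=0.7, max_tokens=512)
print(m.generate("Hello"))                   # defaults forwarded automatically
print(m.generate("Hello", temperature=0.1))  # per-call override wins
```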

<hfoptions id="Pick a LLM">
<hfoption id="Inference Providers">

Inference Providers need a `HF_TOKEN` to authenticate, but a free HF account already comes with included credits. Upgrade to PRO to raise your included credits.

To access gated models or raise your rate limits with a PRO account, you need to set the environment variable `HF_TOKEN` or pass the `token` argument upon initialization of `InferenceClientModel`. You can get your token from your [settings page](https://huggingface.co/settings/tokens).

```python
from smolagents import CodeAgent, InferenceClientModel

model_id = "meta-llama/Llama-3.3-70B-Instruct"

model = InferenceClientModel(model_id=model_id, token="<YOUR_HUGGINGFACEHUB_API_TOKEN>") # You can choose to not pass any model_id to InferenceClientModel to use a default model
# you can also specify a particular provider e.g. provider="together" or provider="sambanova"
agent = CodeAgent(tools=[], model=model, add_base_tools=True)

agent.run(
    "Could you give me the 118th number in the Fibonacci sequence?",
)
```
</hfoption>
<hfoption id="Local Transformers Model">

```python
# !pip install 'smolagents[transformers]'
from smolagents import CodeAgent, TransformersModel

model_id = "meta-llama/Llama-3.2-3B-Instruct"

model = TransformersModel(model_id=model_id)
agent = CodeAgent(tools=[], model=model, add_base_tools=True)

agent.run(
    "Could you give me the 118th number in the Fibonacci sequence?",
)
```
</hfoption>
<hfoption id="OpenAI or Anthropic API">

To use `LiteLLMModel`, you need to set the environment variable `ANTHROPIC_API_KEY` or `OPENAI_API_KEY`, or pass `api_key` variable upon initialization.

```python
# !pip install 'smolagents[litellm]'
from smolagents import CodeAgent, LiteLLMModel

model = LiteLLMModel(model_id="anthropic/claude-3-5-sonnet-latest", api_key="YOUR_ANTHROPIC_API_KEY") # Could use 'gpt-4o'
agent = CodeAgent(tools=[], model=model, add_base_tools=True)

agent.run(
    "Could you give me the 118th number in the Fibonacci sequence?",
)
```
</hfoption>
<hfoption id="Ollama">

```python
# !pip install 'smolagents[litellm]'
from smolagents import CodeAgent, LiteLLMModel

model = LiteLLMModel(
    model_id="ollama_chat/llama3.2", # This model is a bit weak for agentic behaviours though
    api_base="http://localhost:11434", # replace with 127.0.0.1:11434 or remote open-ai compatible server if necessary
    api_key="YOUR_API_KEY", # replace with API key if necessary
    num_ctx=8192, # ollama default is 2048 which will fail horribly. 8192 works for easy tasks, more is better. Check https://huggingface.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator to calculate how much VRAM this will need for the selected model.
)

agent = CodeAgent(tools=[], model=model, add_base_tools=True)

agent.run(
    "Could you give me the 118th number in the Fibonacci sequence?",
)
```
</hfoption>
<hfoption id="Azure OpenAI">

To connect to Azure OpenAI, you can either use `AzureOpenAIModel` directly, or use `LiteLLMModel` and configure it accordingly.

To initialize an instance of `AzureOpenAIModel`, you need to pass your model deployment name and then either pass the `azure_endpoint`, `api_key`, and `api_version` arguments, or set the environment variables `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_API_KEY`, and `OPENAI_API_VERSION`.

```python
# !pip install 'smolagents[openai]'
from smolagents import CodeAgent, AzureOpenAIModel

model = AzureOpenAIModel(model_id="gpt-4o-mini")
agent = CodeAgent(tools=[], model=model, add_base_tools=True)

agent.run(
    "Could you give me the 118th number in the Fibonacci sequence?",
)
```

Similarly, you can configure `LiteLLMModel` to connect to Azure OpenAI as follows:

- pass your model deployment name as `model_id`, and make sure to prefix it with `azure/`
- make sure to set the environment variable `AZURE_API_VERSION`
- either pass the `api_base` and `api_key` arguments, or set the environment variables `AZURE_API_KEY`, and `AZURE_API_BASE`

```python
import os
from smolagents import CodeAgent, LiteLLMModel

AZURE_OPENAI_CHAT_DEPLOYMENT_NAME="gpt-35-turbo-16k-deployment" # example of deployment name

os.environ["AZURE_API_KEY"] = "" # api_key
os.environ["AZURE_API_BASE"] = "" # "https://example-endpoint.openai.azure.com"
os.environ["AZURE_API_VERSION"] = "" # "2024-10-01-preview"

model = LiteLLMModel(model_id="azure/" + AZURE_OPENAI_CHAT_DEPLOYMENT_NAME)
agent = CodeAgent(tools=[], model=model, add_base_tools=True)

agent.run(
   "Could you give me the 118th number in the Fibonacci sequence?",
)
```

</hfoption>
<hfoption id="Amazon Bedrock">

The `AmazonBedrockModel` class provides native integration with Amazon Bedrock, allowing for direct API calls and comprehensive configuration.

Basic Usage:

```python
# !pip install 'smolagents[bedrock]'
from smolagents import CodeAgent, AmazonBedrockModel

model = AmazonBedrockModel(model_id="anthropic.claude-3-sonnet-20240229-v1:0")
agent = CodeAgent(tools=[], model=model, add_base_tools=True)

agent.run(
    "Could you give me the 118th number in the Fibonacci sequence?",
)
```

Advanced Configuration:

```python
import boto3
from smolagents import AmazonBedrockModel

# Create a custom Bedrock client
bedrock_client = boto3.client(
    'bedrock-runtime',
    region_name='us-east-1',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY'
)

additional_api_config = {
    "inferenceConfig": {
        "maxTokens": 3000
    },
    "guardrailConfig": {
        "guardrailIdentifier": "identify1",
        "guardrailVersion": 'v1'
    },
}

# Initialize with comprehensive configuration
model = AmazonBedrockModel(
    model_id="us.amazon.nova-pro-v1:0",
    client=bedrock_client,  # Use custom client
    **additional_api_config
)

agent = CodeAgent(tools=[], model=model, add_base_tools=True)

agent.run(
    "Could you give me the 118th number in the Fibonacci sequence?",
)
```

Using LiteLLMModel:

Alternatively, you can use `LiteLLMModel` with Bedrock models:

```python
from smolagents import LiteLLMModel, CodeAgent

model = LiteLLMModel(model_id="bedrock/anthropic.claude-3-sonnet-20240229-v1:0")
agent = CodeAgent(tools=[], model=model)

agent.run("Explain the concept of quantum computing")
```

</hfoption>
<hfoption id="mlx-lm">

```python
# !pip install 'smolagents[mlx-lm]'
from smolagents import CodeAgent, MLXModel

mlx_model = MLXModel("mlx-community/Qwen2.5-Coder-32B-Instruct-4bit")
agent = CodeAgent(model=mlx_model, tools=[], add_base_tools=True)

agent.run("Could you give me the 118th number in the Fibonacci sequence?")
```

</hfoption>
</hfoptions>

### Model parameter management

When initializing models, you can pass keyword arguments that will be forwarded as completion parameters to the
underlying model API during inference.

For fine-grained control over parameter handling, the `REMOVE_PARAMETER` sentinel value allows you to explicitly exclude
parameters that might otherwise be set by default or passed through elsewhere:

```python
from smolagents import OpenAIModel, REMOVE_PARAMETER

# Remove "stop" parameter
model = OpenAIModel(
    model_id="gpt-5",
    stop=REMOVE_PARAMETER,  # Ensures "stop" is not included in API calls
    temperature=0.7
)

agent = CodeAgent(tools=[], model=model, add_base_tools=True)
```

This is particularly useful when:
- You want to override default parameters that might be applied automatically
- You need to ensure certain parameters are completely excluded from API calls
- You want to let the model provider use their own defaults for specific parameters

## Advanced agent configuration

### Customizing agent termination conditions

By default, an agent continues running until it calls the `final_answer` function or reaches the maximum number of steps.
The `final_answer_checks` parameter gives you more control over when and how an agent terminates its execution:

```python
from smolagents import CodeAgent, InferenceClientModel

# Define a custom final answer check function
def is_integer(final_answer: str, agent_memory=None) -> bool:
    """Return True if final_answer is an integer."""
    try:
        int(final_answer)
        return True
    except ValueError:
        return False

# Initialize agent with custom final answer check
agent = CodeAgent(
    tools=[],
    model=InferenceClientModel(),
    final_answer_checks=[is_integer]
)

agent.run("Calculate the least common multiple of 3 and 7")
```

The `final_answer_checks` parameter accepts a list of functions that each:
- Take the agent's final answer and the agent's memory as parameters
- Return a boolean indicating whether the final_answer is valid (True) or not (False)

If any function returns `False`, the agent will log the error message and continue the run.
This validation mechanism enables:
- Enforcing output format requirements (e.g., ensuring numeric answers for math problems)
- Implementing domain-specific validation rules
- Creating more robust agents that validate their own outputs

## Inspecting an agent run

Here are a few useful attributes to inspect what happened after a run:
- `agent.logs` stores the fine-grained logs of the agent. At every step of the agent's run, everything gets stored in a dictionary, which is then appended to `agent.logs`.
- Running `agent.write_memory_to_messages()` writes the agent's memory as a list of chat messages for the model to view. This method goes over each step of the log and only stores what it's interested in as a message: for instance, it will save the system prompt and task in separate messages, then for each step it will store the LLM output as a message, and the tool call output as another message. Use this if you want a higher-level view of what has happened - but not every log will be transcribed by this method.
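As a rough, simplified illustration of the kind of conversion `write_memory_to_messages()` performs (the step structure and field names here are hypothetical, not smolagents' actual memory format):

```python
# Hypothetical step logs: one dict per agent step
steps = [
    {"llm_output": "I will call the search tool.", "tool_output": "Paris"},
    {"llm_output": "The answer is Paris.", "tool_output": None},
]

def logs_to_messages(system_prompt: str, task: str, steps: list) -> list:
    """Flatten step logs into a chat-message list the model can re-read."""
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": task},
    ]
    for step in steps:
        # The LLM's output for the step becomes an assistant message...
        messages.append({"role": "assistant", "content": step["llm_output"]})
        # ...and any tool result is fed back as a user-side observation.
        if step["tool_output"] is not None:
            messages.append({"role": "user", "content": step["tool_output"]})
    return messages

msgs = logs_to_messages("You are an agent.", "What is the capital of France?", steps)
print(len(msgs))  # 5
```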

## Tools

A tool is an atomic function to be used by an agent. To be used by an LLM, it also needs a few attributes that constitute its API and will be used to describe to the LLM how to call this tool:
- A name
- A description
- Input types and descriptions
- An output type
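As a simplified sketch of how such attributes can be rendered into a prompt-ready description (illustrative only; this is not smolagents' actual template, and `search_docs` is a hypothetical tool):

```python
# A tool's API as plain data: name, description, inputs, output type
tool_api = {
    "name": "search_docs",
    "description": "Search the documentation for a query.",
    "inputs": {"query": {"type": "string", "description": "The search query."}},
    "output_type": "string",
}

def render_tool(tool: dict) -> str:
    """Render a tool's attributes into a one-line description for a system prompt."""
    args = ", ".join(
        f"{name}: {spec['type']} ({spec['description']})"
        for name, spec in tool["inputs"].items()
    )
    return (
        f"- {tool['name']}: {tool['description']} "
        f"Takes inputs: {args}. Returns: {tool['output_type']}."
    )

print(render_tool(tool_api))
```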

You can for instance check the [PythonInterpreterTool](/docs/smolagents/main/en/reference/default_tools#smolagents.PythonInterpreterTool): it has a name, a description, input descriptions, an output type, and a `forward` method to perform the action.

When the agent is initialized, the tool attributes are used to generate a tool description which is baked into the agent's system prompt. This lets the agent know which tools it can use and why.

**Schema Information**: For tools that have an `output_schema` defined (such as MCP tools with structured output), the `CodeAgent` system prompt automatically includes the JSON schema information. This helps the agent understand the expected structure of tool outputs and access the data appropriately.

### Default toolbox

If you install `smolagents` with the "toolkit" extra, it comes with a default toolbox for empowering agents, which you can add to your agent upon initialization with the argument `add_base_tools=True`:

- **DuckDuckGo web search**: performs a web search using DuckDuckGo.
- **Python code interpreter**: runs your LLM-generated Python code in a secure environment. This tool is only added to [ToolCallingAgent](/docs/smolagents/main/en/reference/agents#smolagents.ToolCallingAgent) if you initialize it with `add_base_tools=True`, since a code-based agent can already natively execute Python code.
- **Transcriber**: a speech-to-text pipeline built on Whisper-Turbo that transcribes audio to text.

You can manually use a tool by calling it with its arguments.

```python
# !pip install 'smolagents[toolkit]'
from smolagents import WebSearchTool

search_tool = WebSearchTool()
print(search_tool("Who's the current president of Russia?"))
```

### Create a new tool

You can create your own tool for use cases not covered by the default tools from Hugging Face.
For example, let's create a tool that returns the most downloaded model for a given task from the Hub.

You'll start with the code below.

```python
from huggingface_hub import list_models

task = "text-classification"

most_downloaded_model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
print(most_downloaded_model.id)
```

This code can quickly be converted into a tool, just by wrapping it in a function and adding the `@tool` decorator.
This is not the only way to build a tool: you can directly define it as a subclass of [Tool](/docs/smolagents/main/en/reference/tools#smolagents.Tool), which gives you more flexibility, for instance the possibility to initialize heavy class attributes.

Let's see how it works for both options:

<hfoptions id="build-a-tool">
<hfoption id="Decorate a function with @tool">

```py
from huggingface_hub import list_models
from smolagents import tool

@tool
def model_download_tool(task: str) -> str:
    """
    This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub.
    It returns the name of the checkpoint.

    Args:
        task: The task for which to get the download count.
    """
    most_downloaded_model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
    return most_downloaded_model.id
```

The function needs:
- A clear name. The name should be descriptive enough of what this tool does to help the LLM brain powering the agent. Since this tool returns the model with the most downloads for a task, let's name it `model_download_tool`.
- Type hints on both inputs and output
- A description, that includes an 'Args:' part where each argument is described (without a type indication this time, it will be pulled from the type hint). Same as for the tool name, this description is an instruction manual for the LLM powering your agent, so do not neglect it.

All these elements will be automatically baked into the agent's system prompt upon initialization: so strive to make them as clear as possible!
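As a rough illustration of what such a decorator can derive from the function alone (this sketch is not smolagents' actual `@tool` internals):

```python
import inspect
from typing import get_type_hints

def extract_schema(fn):
    """Illustrative-only sketch of deriving a tool schema from a function's
    name, type hints, and docstring."""
    hints = get_type_hints(fn)
    params = list(inspect.signature(fn).parameters)
    return {
        "name": fn.__name__,                    # the tool's name
        "description": inspect.getdoc(fn),      # the instruction manual for the LLM
        "inputs": {p: hints[p].__name__ for p in params},  # from type hints
        "output_type": hints["return"].__name__,           # from the return hint
    }

def model_download_tool(task: str) -> str:
    """Returns the most downloaded model for a task on the Hub."""
    ...

schema = extract_schema(model_download_tool)
print(schema["inputs"])  # {'task': 'str'}
```

This is why clear names, type hints, and docstrings matter: they are all the decorator has to work with when building the description the LLM will see.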

> [!TIP]
> This definition format is the same as tool schemas used in `apply_chat_template`, the only difference is the added `tool` decorator: read more on our tool use API [here](https://huggingface.co/blog/unified-tool-use#passing-tools-to-a-chat-template).


Then you can directly initialize your agent:
```py
from smolagents import CodeAgent, InferenceClientModel
agent = CodeAgent(tools=[model_download_tool], model=InferenceClientModel())
agent.run(
    "Can you give me the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub?"
)
```
</hfoption>
<hfoption id="Subclass Tool">

```py
from huggingface_hub import list_models
from smolagents import Tool

class ModelDownloadTool(Tool):
    name = "model_download_tool"
    description = "This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub. It returns the name of the checkpoint."
    inputs = {"task": {"type": "string", "description": "The task for which to get the most downloaded model."}}
    output_type = "string"

    def forward(self, task: str) -> str:
        most_downloaded_model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
        return most_downloaded_model.id
```

The subclass needs the following attributes:
- A clear `name`. The name should be descriptive enough of what this tool does to help the LLM brain powering the agent. Since this tool returns the model with the most downloads for a task, let's name it `model_download_tool`.
- A `description`. Same as for the `name`, this description is an instruction manual for the LLM powering your agent, so do not neglect it.
- Input types and descriptions
- Output type
All these attributes will be automatically baked into the agent's system prompt upon initialization: so strive to make them as clear as possible!


Then you can directly initialize your agent:
```py
from smolagents import CodeAgent, InferenceClientModel
agent = CodeAgent(tools=[ModelDownloadTool()], model=InferenceClientModel())
agent.run(
    "Can you give me the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub?"
)
```
</hfoption>
</hfoptions>

You get the following logs:
```text
╭──────────────────────────────────────── New run ─────────────────────────────────────────╮
│                                                                                          │
│ Can you give me the name of the model that has the most downloads in the 'text-to-video' │
│ task on the Hugging Face Hub?                                                            │
│                                                                                          │
╰─ InferenceClientModel - Qwen/Qwen2.5-Coder-32B-Instruct ───────────────────────────────────────────╯
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 0 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
╭─ Executing this code: ───────────────────────────────────────────────────────────────────╮
│   1 model_name = model_download_tool(task="text-to-video")                               │
│   2 print(model_name)                                                                    │
╰──────────────────────────────────────────────────────────────────────────────────────────╯
Execution logs:
ByteDance/AnimateDiff-Lightning

Out: None
[Step 0: Duration 0.27 seconds| Input tokens: 2,069 | Output tokens: 60]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 1 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
╭─ Executing this code: ───────────────────────────────────────────────────────────────────╮
│   1 final_answer("ByteDance/AnimateDiff-Lightning")                                      │
╰──────────────────────────────────────────────────────────────────────────────────────────╯
Out - Final answer: ByteDance/AnimateDiff-Lightning
[Step 1: Duration 0.10 seconds| Input tokens: 4,288 | Output tokens: 148]
Out[20]: 'ByteDance/AnimateDiff-Lightning'
```

> [!TIP]
> Read more on tools in the [dedicated tutorial](./tutorials/tools#what-is-a-tool-and-how-to-build-one).

## Multi-agents

Multi-agent systems have been introduced with Microsoft's framework [Autogen](https://huggingface.co/papers/2308.08155).

In this type of framework, you have several agents working together to solve your task instead of only one.
It empirically yields better performance on most benchmarks. The reason is conceptually simple: for many tasks, rather than using a do-it-all system, you would prefer to specialize units on sub-tasks. Here, having agents with separate tool sets and memories allows them to achieve efficient specialization. For instance, why fill the memory of the code-generating agent with all the content of webpages visited by the web search agent? It's better to keep them separate.

You can easily build hierarchical multi-agent systems with `smolagents`.

To do so, just ensure your agent has `name` and `description` attributes, which will then be embedded in the manager agent's system prompt to let it know how to call this managed agent, as we also do for tools.
Then you can pass this managed agent in the `managed_agents` parameter upon initialization of the manager agent.

Here's an example of making an agent that manages a specific web search agent using our native [WebSearchTool](/docs/smolagents/main/en/reference/default_tools#smolagents.WebSearchTool):

```py
from smolagents import CodeAgent, InferenceClientModel, WebSearchTool

model = InferenceClientModel()

web_agent = CodeAgent(
    tools=[WebSearchTool()],
    model=model,
    name="web_search_agent",
    description="Runs web searches for you. Give it your query as an argument."
)

manager_agent = CodeAgent(
    tools=[], model=model, managed_agents=[web_agent]
)

manager_agent.run("Who is the CEO of Hugging Face?")
```

> [!TIP]
> For an in-depth example of an efficient multi-agent implementation, see [how we pushed our multi-agent system to the top of the GAIA leaderboard](https://huggingface.co/blog/beating-gaia).


## Talk with your agent and visualize its thoughts in a cool Gradio interface

You can use `GradioUI` to interactively submit tasks to your agent and observe its thought and execution process; here is an example:

```py
from smolagents import (
    load_tool,
    CodeAgent,
    InferenceClientModel,
    GradioUI
)

# Import tool from Hub
image_generation_tool = load_tool("m-ric/text-to-image", trust_remote_code=True)

model = InferenceClientModel()

# Initialize the agent with the image generation tool
agent = CodeAgent(tools=[image_generation_tool], model=model)

GradioUI(agent).launch()
```

Under the hood, when the user submits a new message, the agent is launched with `agent.run(user_request, reset=False)`.
The `reset=False` flag means the agent's memory is not flushed before launching this new task, which lets the conversation go on.

You can also use this `reset=False` argument to keep the conversation going in any other agentic application.
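Conceptually, the `reset` flag behaves like this minimal sketch (a toy class, not the actual smolagents implementation): memory persists across `run()` calls unless `reset=True` clears it first.

```python
class ConversationalAgent:
    """Toy sketch of reset semantics: memory survives run() calls
    when reset=False, which is what keeps a conversation going."""
    def __init__(self):
        self.memory = []

    def run(self, task: str, reset: bool = True) -> str:
        if reset:
            self.memory.clear()          # fresh start: previous turns dropped
        self.memory.append(("task", task))
        answer = f"answered: {task}"
        self.memory.append(("answer", answer))
        return answer

agent = ConversationalAgent()
agent.run("First question")
agent.run("Follow-up", reset=False)      # memory now holds both turns
print(len(agent.memory))                 # 4
```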

In Gradio UIs, if you want to allow users to interrupt a running agent, you can do so with a button that triggers the `agent.interrupt()` method.
This will stop the agent at the end of its current step, then raise an error.

## Next steps

Finally, when you've configured your agent to your needs, you can share it to the Hub!

```py
agent.push_to_hub("m-ric/my_agent")
```

Similarly, to load an agent that has been pushed to the Hub, if you trust the code from its tools, use:
```py
agent = CodeAgent.from_hub("m-ric/my_agent", trust_remote_code=True)
```

For more in-depth usage, you will then want to check out our tutorials:
- [the explanation of how our code agents work](./tutorials/secure_code_execution)
- [this guide on how to build good agents](./tutorials/building_good_agents).
- [the in-depth guide for tool usage](./tutorials/tools).


<EditOnGithub source="https://github.com/huggingface/smolagents/blob/main/docs/source/en/guided_tour.md" />

### Installation Options
https://huggingface.co/docs/smolagents/main/installation.md

# Installation Options

The `smolagents` library can be installed using pip. Here are the different installation methods and options available.

## Prerequisites
- Python 3.10 or newer
- Python package manager: [`pip`](https://pip.pypa.io/en/stable/) or [`uv`](https://docs.astral.sh/uv/)

## Virtual Environment

It's strongly recommended to install `smolagents` within a Python virtual environment.
Virtual environments isolate your project dependencies from other Python projects and your system Python installation,
preventing version conflicts and making package management more reliable.

<hfoptions id="virtual-environment">
<hfoption id="venv">

Using [`venv`](https://docs.python.org/3/library/venv.html):

```bash
python -m venv .venv
source .venv/bin/activate
```

</hfoption>
<hfoption id="uv">

Using [`uv`](https://docs.astral.sh/uv/):

```bash
uv venv .venv
source .venv/bin/activate
```

</hfoption>
</hfoptions>

## Basic Installation

Install the `smolagents` core library with:

<hfoptions id="installation">
<hfoption id="pip">
```bash
pip install smolagents
```
</hfoption>
<hfoption id="uv">
```bash
uv pip install smolagents
```
</hfoption>
</hfoptions>

## Installation with Extras

`smolagents` provides several optional dependencies (extras) that can be installed based on your needs.
You can install these extras using the following syntax:
<hfoptions id="installation">
<hfoption id="pip">
```bash
pip install "smolagents[extra1,extra2]"
```
</hfoption>
<hfoption id="uv">
```bash
uv pip install "smolagents[extra1,extra2]"
```
</hfoption>
</hfoptions>

### Tools
These extras include various tools and integrations:
<hfoptions id="installation">
<hfoption id="pip">
- **toolkit**: Install a default set of tools for common tasks.
  ```bash
  pip install "smolagents[toolkit]"
  ```
- **mcp**: Add support for the Model Context Protocol (MCP) to integrate with external tools and services.
  ```bash
  pip install "smolagents[mcp]"
  ```
</hfoption>
<hfoption id="uv">
- **toolkit**: Install a default set of tools for common tasks.
  ```bash
  uv pip install "smolagents[toolkit]"
  ```
- **mcp**: Add support for the Model Context Protocol (MCP) to integrate with external tools and services.
  ```bash
  uv pip install "smolagents[mcp]"
  ```
</hfoption>
</hfoptions>

### Model Integration
These extras enable integration with various AI models and frameworks:
<hfoptions id="installation">
<hfoption id="pip">
- **openai**: Add support for OpenAI API models.
  ```bash
  pip install "smolagents[openai]"
  ```
- **transformers**: Enable Hugging Face Transformers models.
  ```bash
  pip install "smolagents[transformers]"
  ```
- **vllm**: Add VLLM support for efficient model inference.
  ```bash
  pip install "smolagents[vllm]"
  ```
- **mlx-lm**: Enable support for MLX-LM models.
  ```bash
  pip install "smolagents[mlx-lm]"
  ```
- **litellm**: Add LiteLLM support for lightweight model inference.
  ```bash
  pip install "smolagents[litellm]"
  ```
- **bedrock**: Enable support for AWS Bedrock models.
  ```bash
  pip install "smolagents[bedrock]"
  ```
</hfoption>
<hfoption id="uv">
- **openai**: Add support for OpenAI API models.
  ```bash
  uv pip install "smolagents[openai]"
  ```
- **transformers**: Enable Hugging Face Transformers models.
  ```bash
  uv pip install "smolagents[transformers]"
  ```
- **vllm**: Add VLLM support for efficient model inference.
  ```bash
  uv pip install "smolagents[vllm]"
  ```
- **mlx-lm**: Enable support for MLX-LM models.
  ```bash
  uv pip install "smolagents[mlx-lm]"
  ```
- **litellm**: Add LiteLLM support for lightweight model inference.
  ```bash
  uv pip install "smolagents[litellm]"
  ```
- **bedrock**: Enable support for AWS Bedrock models.
  ```bash
  uv pip install "smolagents[bedrock]"
  ```
</hfoption>
</hfoptions>

### Multimodal Capabilities
Extras for handling different types of media and input:
<hfoptions id="installation">
<hfoption id="pip">
- **vision**: Add support for image processing and computer vision tasks.
  ```bash
  pip install "smolagents[vision]"
  ```
- **audio**: Enable audio processing capabilities.
  ```bash
  pip install "smolagents[audio]"
  ```
</hfoption>
<hfoption id="uv">
- **vision**: Add support for image processing and computer vision tasks.
  ```bash
  uv pip install "smolagents[vision]"
  ```
- **audio**: Enable audio processing capabilities.
  ```bash
  uv pip install "smolagents[audio]"
  ```
</hfoption>
</hfoptions>

### Remote Execution
Extras for executing code remotely:
<hfoptions id="installation">
<hfoption id="pip">
- **docker**: Add support for executing code in Docker containers.
  ```bash
  pip install "smolagents[docker]"
  ```
- **e2b**: Enable E2B support for remote execution.
  ```bash
  pip install "smolagents[e2b]"
  ```
</hfoption>
<hfoption id="uv">
- **docker**: Add support for executing code in Docker containers.
  ```bash
  uv pip install "smolagents[docker]"
  ```
- **e2b**: Enable E2B support for remote execution.
  ```bash
  uv pip install "smolagents[e2b]"
  ```
</hfoption>
</hfoptions>

### Telemetry and User Interface
Extras for telemetry, monitoring and user interface components:
<hfoptions id="installation">
<hfoption id="pip">
- **telemetry**: Add support for monitoring and tracing.
  ```bash
  pip install "smolagents[telemetry]"
  ```
- **gradio**: Add support for interactive Gradio UI components.
  ```bash
  pip install "smolagents[gradio]"
  ```
</hfoption>
<hfoption id="uv">
- **telemetry**: Add support for monitoring and tracing.
  ```bash
  uv pip install "smolagents[telemetry]"
  ```
- **gradio**: Add support for interactive Gradio UI components.
  ```bash
  uv pip install "smolagents[gradio]"
  ```
</hfoption>
</hfoptions>

### Complete Installation
To install all available extras, you can use:
<hfoptions id="installation">
<hfoption id="pip">
```bash
pip install "smolagents[all]"
```
</hfoption>
<hfoption id="uv">
```bash
uv pip install "smolagents[all]"
```
</hfoption>
</hfoptions>

## Verifying Installation
After installation, you can verify that `smolagents` is installed correctly by running:
```python
import smolagents
print(smolagents.__version__)
```

## Next Steps
Once you have successfully installed `smolagents`, you can:
- Follow the [guided tour](./guided_tour) to learn the basics.
- Explore the [how-to guides](./examples/text_to_sql) for practical examples.
- Read the [conceptual guides](./conceptual_guides/intro_agents) for high-level explanations.
- Check out the [tutorials](./tutorials/building_good_agents) for in-depth tutorials on building agents.
- Explore the [API reference](./reference/index) for detailed information on classes and functions.


<EditOnGithub source="https://github.com/huggingface/smolagents/blob/main/docs/source/en/installation.md" />

### What are agents? 🤔
https://huggingface.co/docs/smolagents/main/conceptual_guides/intro_agents.md

# What are agents? 🤔

## An introduction to agentic systems.

Any efficient system using AI will need to provide LLMs with some kind of access to the real world: for instance, the ability to call a search tool to get external information, or to act on certain programs in order to solve a task. In other words, LLMs should have ***agency***. Agentic programs are the gateway to the outside world for LLMs.

> [!TIP]
> AI Agents are **programs where LLM outputs control the workflow**.

Any system leveraging LLMs will integrate the LLM outputs into code. The influence of the LLM's output on the code workflow is the level of agency of LLMs in the system.

Note that with this definition, "agent" is not a discrete, 0 or 1 definition: instead, "agency" evolves on a continuous spectrum, as you give more or less power to the LLM on your workflow.

See in the table below how agency can vary across systems:

| Agency Level | Description                                                     | Short name       | Example Code                                       |
| ------------ | --------------------------------------------------------------- | ---------------- | -------------------------------------------------- |
| ☆☆☆          | LLM output has no impact on program flow                        | Simple processor | `process_llm_output(llm_response)`                 |
| ★☆☆          | LLM output controls an if/else switch                           | Router           | `if llm_decision(): path_a() else: path_b()`       |
| ★★☆          | LLM output controls function execution                          | Tool call        | `run_function(llm_chosen_tool, llm_chosen_args)`   |
| ★★★          | LLM output controls iteration and program continuation          | Multi-step Agent | `while llm_should_continue(): execute_next_step()` |
| ★★★          | One agentic workflow can start another agentic workflow         | Multi-Agent      | `if llm_trigger(): execute_agent()`                |
| ★★★          | LLM acts in code, can define its own tools / start other agents | Code Agents      | `def custom_tool(args): ...`                       |

The multi-step agent has this code structure:

```python
memory = [user_defined_task]
while llm_should_continue(memory): # this loop is the multi-step part
    action = llm_get_next_action(memory) # this is the tool-calling part
    observations = execute_action(action)
    memory += [action, observations]
```

This agentic system runs in a loop, executing a new action at each step (the action can involve calling some pre-determined *tools* that are just functions), until its observations make it apparent that a satisfactory state has been reached to solve the given task. Here’s an example of how a multi-step agent can solve a simple math question:
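To make the pseudocode above concrete, here is a runnable toy version in which the "LLM" is a hard-coded policy and the tools are plain functions, purely for illustration:

```python
# Stubbed, runnable version of the multi-step loop above.
# The "LLM" here is a hard-coded policy, not a real model.
def llm_get_next_action(memory):
    # Decide the next tool call from what has been observed so far
    if not any(obs == "4" for kind, obs in memory if kind == "observation"):
        return ("add", (2, 2))
    return ("final_answer", ("4",))

def execute_action(action):
    name, args = action
    tools = {"add": lambda a, b: str(a + b), "final_answer": lambda x: x}
    return tools[name](*args)

memory = [("task", "What is 2 + 2?")]
while True:  # the multi-step part; stopping is folded into final_answer here
    action = llm_get_next_action(memory)   # the tool-calling part
    observation = execute_action(action)
    memory += [("action", action), ("observation", observation)]
    if action[0] == "final_answer":
        break

print(observation)  # 4
```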

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/Agent_ManimCE.gif"/>
</div>


## ✅ When to use agents / ⛔ when to avoid them

Agents are useful when you need an LLM to determine the workflow of an app. But they’re often overkill. The question is: do I really need flexibility in the workflow to efficiently solve the task at hand?
If the pre-determined workflow falls short too often, that means you need more flexibility.
Let's take an example: say you're making an app that handles customer requests on a surfing trip website.

You could know in advance that the requests will fall into one of 2 buckets (based on user choice), and you have a predefined workflow for each of these 2 cases.

1. Want some knowledge on the trips? ⇒ give them access to a search bar to search your knowledge base
2. Want to talk to sales? ⇒ let them type in a contact form.

If that deterministic workflow fits all queries, by all means just code everything! This will give you a 100% reliable system with no risk of error introduced by letting unpredictable LLMs meddle in your workflow. For the sake of simplicity and robustness, it's advised to lean towards not using any agentic behaviour.

But what if the workflow can't be determined that well in advance? 

For instance, a user wants to ask: `"I can come on Monday, but I forgot my passport so risk being delayed to Wednesday, is it possible to take me and my stuff to surf on Tuesday morning, with a cancellation insurance?"` This question hinges on many factors, and probably none of the predetermined criteria above will suffice for this request.


That is where an agentic setup helps.

In the above example, you could just make a multi-step agent that has access to a weather API for weather forecasts, Google Maps API to compute travel distance, an employee availability dashboard and a RAG system on your knowledge base.

Until recently, computer programs were restricted to pre-determined workflows, trying to handle complexity by piling up  if/else switches. They focused on extremely narrow tasks, like "compute the sum of these numbers" or "find the shortest path in this graph". But actually, most real-life tasks, like our trip example above, do not fit in pre-determined workflows. Agentic systems open up the vast world of real-world tasks to programs!

## Why `smolagents`?

For some low-level agentic use cases, like chains or routers, you can write all the code yourself. You'll be much better off that way, since it will let you control and understand your system better.

But once you start going for more complicated behaviours like letting an LLM call a function (that's "tool calling") or letting an LLM run a while loop ("multi-step agent"), some abstractions become necessary:
- For tool calling, you need to parse the agent's output, so this output needs a predefined format like "Thought: I should call tool 'get_weather'. Action: get_weather(Paris)." that you parse with a predefined function, and the system prompt given to the LLM should notify it about this format.
- For a multi-step agent where the LLM output determines the loop, you need to give a different prompt to the LLM based on what happened in the last loop iteration: so you need some kind of memory.
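As a toy illustration of the first point, here is a minimal regex-based extractor for the hypothetical "Thought/Action" format above (a sketch, not any library's actual parser):

```python
import re

# Toy parser for a hypothetical "Thought: ... Action: tool(args)" format,
# illustrating why tool calling needs a predefined output format.
ACTION_RE = re.compile(r"Action:\s*(\w+)\((.*?)\)")

def parse_action(llm_output: str):
    match = ACTION_RE.search(llm_output)
    if match is None:
        raise ValueError("No action found in LLM output")
    tool_name, raw_args = match.groups()
    args = [a.strip() for a in raw_args.split(",")] if raw_args else []
    return tool_name, args

output = "Thought: I should call tool 'get_weather'. Action: get_weather(Paris)"
print(parse_action(output))  # ('get_weather', ['Paris'])
```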

See? With these two examples, we already found the need for a few items to help us:

- Of course, an LLM that acts as the engine powering the system
- A list of tools that the agent can access
- A system prompt guiding the LLM on the agent logic: ReAct loop of Reflection -> Action -> Observation, available tools, tool calling format to use...
- A parser that extracts tool calls from the LLM output, in the format indicated by system prompt above.
- A memory

But wait, since we give room to LLMs in decisions, surely they will make mistakes: so we need error logging and retry mechanisms.

All these elements need tight coupling to make a well-functioning system. That's why we decided we needed to make basic building blocks to make all this stuff work together.

## Code agents

In a multi-step agent, at each step, the LLM can write an action, in the form of some calls to external tools. A common format for writing these actions (used by Anthropic, OpenAI, and many others) is some variant of "writing actions as a JSON of tool names and arguments to use, which you then parse to know which tool to execute and with which arguments".

[Multiple](https://huggingface.co/papers/2402.01030) [research](https://huggingface.co/papers/2411.01747) [papers](https://huggingface.co/papers/2401.00812) have shown that having LLM actions written as code snippets is a more natural and flexible way of writing them.

The reason for this is simply that *we crafted our code languages specifically to express the actions performed by a computer*.
In other words, our agent is going to write programs in order to solve the user's issues: do you think their programming will be easier in blocks of Python or JSON?

The figure below, taken from [Executable Code Actions Elicit Better LLM Agents](https://huggingface.co/papers/2402.01030), illustrates some advantages of writing actions in code:

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/code_vs_json_actions.png">

Writing actions in code rather than JSON-like snippets provides better:

- **Composability:** could you nest JSON actions within each other, or define a set of JSON actions to re-use later, the same way you could just define a Python function?
- **Object management:** how do you store the output of an action like `generate_image` in JSON?
- **Generality:** code is built to express simply anything you can have a computer do.
- **Representation in LLM training data:** plenty of quality code actions are already included in LLMs’ training data which means they’re already trained for this!
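To make the composability point concrete, here is a sketch with stand-in tool functions (not real smolagents tools) showing how code actions nest, loop, and reuse variables in a single action, something flat JSON tool calls cannot express:

```python
# Stand-in tools, purely for illustration.
def web_search(query: str) -> str:
    return f"results for {query!r}"

def summarize(text: str) -> str:
    return text.upper()

# In code, actions compose naturally: nesting, iteration, variable reuse.
queries = ["capital of France", "capital of Japan"]
summaries = [summarize(web_search(q)) for q in queries]
print(summaries[0])  # RESULTS FOR 'CAPITAL OF FRANCE'
```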


<EditOnGithub source="https://github.com/huggingface/smolagents/blob/main/docs/source/en/conceptual_guides/intro_agents.md" />

### How do multi-step agents work?
https://huggingface.co/docs/smolagents/main/conceptual_guides/react.md

# How do multi-step agents work?

The ReAct framework ([Yao et al., 2022](https://huggingface.co/papers/2210.03629)) is currently the main approach to building agents.

The name is based on the concatenation of two words, "Reason" and "Act." Indeed, agents following this architecture will solve their task in as many steps as needed, each step consisting of a Reasoning step, then an Action step where it formulates tool calls that will bring it closer to solving the task at hand.

All agents in `smolagents` are based on the single `MultiStepAgent` class, which is an abstraction of the ReAct framework.

On a basic level, this class performs actions in a cycle of the following steps, where existing variables and knowledge are incorporated into the agent logs as shown below:

Initialization: the system prompt is stored in a `SystemPromptStep`, and the user query is logged into a `TaskStep`.

While loop (ReAct loop):

- Use `agent.write_memory_to_messages()` to write the agent logs into a list of LLM-readable [chat messages](https://huggingface.co/docs/transformers/en/chat_templating).
- Send these messages to a `Model` object to get its completion. Parse the completion to get the action (a JSON blob for `ToolCallingAgent`, a code snippet for `CodeAgent`).
- Execute the action and log the result into memory (an `ActionStep`).
- At the end of each step, we run all callback functions defined in `agent.step_callbacks`.

Optionally, when planning is activated, a plan can be periodically revised and stored in a `PlanningStep`. This includes feeding facts about the task at hand to the memory.
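As a toy sketch (not the real implementation), the memory-to-messages conversion in the loop above could look roughly like this: each memory step is mapped to a chat role the model understands.

```python
# Toy memory: dicts standing in for SystemPromptStep, TaskStep, and ActionStep.
memory = [
    {"type": "system", "content": "You are a helpful agent."},   # SystemPromptStep
    {"type": "task", "content": "What is 2 + 2?"},               # TaskStep
    {"type": "action", "content": "print(2 + 2)"},               # ActionStep (model output)
    {"type": "observation", "content": "4"},                     # execution result
]

def write_memory_to_messages(memory):
    """Serialize memory steps into LLM-readable chat messages (sketch)."""
    role_map = {"system": "system", "task": "user",
                "action": "assistant", "observation": "user"}
    return [{"role": role_map[step["type"]], "content": step["content"]}
            for step in memory]

messages = write_memory_to_messages(memory)
print(messages[2]["role"])  # assistant
```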

For a `CodeAgent`, it looks like the figure below.

<div class="flex justify-center">
    <img
        src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolagents/codeagent_docs.png"
    />
</div>

Here is a video overview of how that works:

<div class="flex justify-center">
    <img
        class="block dark:hidden"
        src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/Agent_ManimCE.gif"
    />
    <img
        class="hidden dark:block"
        src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/Agent_ManimCE.gif"
    />
</div>

We implement two versions of agents:
- [CodeAgent](/docs/smolagents/main/en/reference/agents#smolagents.CodeAgent) generates its tool calls as Python code snippets.
- [ToolCallingAgent](/docs/smolagents/main/en/reference/agents#smolagents.ToolCallingAgent) writes its tool calls as JSON, as is common in many frameworks. Depending on your needs, either approach can be used. For instance, web browsing often requires waiting after each page interaction, so JSON tool calls can fit well.

> [!TIP]
> Read the [Open-source LLMs as LangChain Agents](https://huggingface.co/blog/open-source-llms-as-agents) blog post to learn more about multi-step agents.


<EditOnGithub source="https://github.com/huggingface/smolagents/blob/main/docs/source/en/conceptual_guides/react.md" />

### Agents
https://huggingface.co/docs/smolagents/main/reference/agents.md

# Agents

<Tip warning={true}>

Smolagents is an experimental API which is subject to change at any time. Results returned by the agents
can vary as the APIs or underlying models are prone to change.

</Tip>

To learn more about agents and tools, make sure to read the [introductory guide](../index). This page
contains the API docs for the underlying classes.

## Agents

Our agents inherit from [MultiStepAgent](/docs/smolagents/main/en/reference/agents#smolagents.MultiStepAgent), which means they can act in multiple steps, each step consisting of one thought, then one tool call and execution. Read more in [this conceptual guide](../conceptual_guides/react).

We provide two types of agents, based on the main `MultiStepAgent` class.
  - [CodeAgent](/docs/smolagents/main/en/reference/agents#smolagents.CodeAgent) writes its tool calls in Python code (this is the default).
  - [ToolCallingAgent](/docs/smolagents/main/en/reference/agents#smolagents.ToolCallingAgent) writes its tool calls in JSON.

Both require the arguments `model` and `tools` (a list of tools) at initialization.

### Classes of agents[[smolagents.MultiStepAgent]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.MultiStepAgent</name><anchor>smolagents.MultiStepAgent</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/agents.py#L266</source><parameters>[{"name": "tools", "val": ": list"}, {"name": "model", "val": ": Model"}, {"name": "prompt_templates", "val": ": smolagents.agents.PromptTemplates | None = None"}, {"name": "instructions", "val": ": str | None = None"}, {"name": "max_steps", "val": ": int = 20"}, {"name": "add_base_tools", "val": ": bool = False"}, {"name": "verbosity_level", "val": ": LogLevel = <LogLevel.INFO: 1>"}, {"name": "managed_agents", "val": ": list | None = None"}, {"name": "step_callbacks", "val": ": list[collections.abc.Callable] | dict[typing.Type[smolagents.memory.MemoryStep], collections.abc.Callable | list[collections.abc.Callable]] | None = None"}, {"name": "planning_interval", "val": ": int | None = None"}, {"name": "name", "val": ": str | None = None"}, {"name": "description", "val": ": str | None = None"}, {"name": "provide_run_summary", "val": ": bool = False"}, {"name": "final_answer_checks", "val": ": list[collections.abc.Callable] | None = None"}, {"name": "return_full_result", "val": ": bool = False"}, {"name": "logger", "val": ": smolagents.monitoring.AgentLogger | None = None"}]</parameters><paramsdesc>- **tools** (`list[Tool]`) -- [Tool](/docs/smolagents/main/en/reference/tools#smolagents.Tool)s that the agent can use.
- **model** (`Callable[[list[dict[str, str]]], ChatMessage]`) -- Model that will generate the agent's actions.
- **prompt_templates** ([PromptTemplates](/docs/smolagents/main/en/reference/agents#smolagents.PromptTemplates), *optional*) -- Prompt templates.
- **instructions** (`str`, *optional*) -- Custom instructions for the agent, will be inserted in the system prompt.
- **max_steps** (`int`, default `20`) -- Maximum number of steps the agent can take to solve the task.
- **add_base_tools** (`bool`, default `False`) -- Whether to add the base tools to the agent's tools.
- **verbosity_level** (`LogLevel`, default `LogLevel.INFO`) -- Level of verbosity of the agent's logs.
- **managed_agents** (`list`, *optional*) -- Managed agents that the agent can call.
- **step_callbacks** (`list[Callable]` | `dict[Type[MemoryStep], Callable | list[Callable]]`, *optional*) -- Callbacks that will be called at each step.
- **planning_interval** (`int`, *optional*) -- Interval at which the agent will run a planning step.
- **name** (`str`, *optional*) -- Necessary for a managed agent only - the name by which this agent can be called.
- **description** (`str`, *optional*) -- Necessary for a managed agent only - the description of this agent.
- **provide_run_summary** (`bool`, *optional*) -- Whether to provide a run summary when called as a managed agent.
- **final_answer_checks** (`list[Callable]`, *optional*) -- List of validation functions to run before accepting a final answer.
  Each function should:
  - Take the final answer, the agent's memory, and the agent itself as arguments.
  - Return a boolean indicating whether the final answer is valid.
- **return_full_result** (`bool`, default `False`) -- Whether to return the full `RunResult` object or just the final answer output from the agent run.</paramsdesc><paramgroups>0</paramgroups></docstring>

Agent class that solves the given task step by step, using the ReAct framework:
While the objective is not reached, the agent will perform a cycle of action (given by the LLM) and observation (obtained from the environment).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>extract_action</name><anchor>smolagents.MultiStepAgent.extract_action</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/agents.py#L783</source><parameters>[{"name": "model_output", "val": ": str"}, {"name": "split_token", "val": ": str"}]</parameters><paramsdesc>- **model_output** (`str`) -- Output of the LLM
- **split_token** (`str`) -- Separator for the action. Should match the example in the system prompt.</paramsdesc><paramgroups>0</paramgroups></docstring>

Parse action from the LLM output




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_dict</name><anchor>smolagents.MultiStepAgent.from_dict</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/agents.py#L1004</source><parameters>[{"name": "agent_dict", "val": ": dict"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **agent_dict** (`dict[str, Any]`) -- Dictionary representation of the agent.
- ****kwargs** -- Additional keyword arguments that will override agent_dict values.</paramsdesc><paramgroups>0</paramgroups><rettype>`MultiStepAgent`</rettype><retdesc>Instance of the agent class.</retdesc></docstring>
Create agent from a dictionary representation.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_folder</name><anchor>smolagents.MultiStepAgent.from_folder</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/agents.py#L1102</source><parameters>[{"name": "folder", "val": ": str | pathlib.Path"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **folder** (`str` or `Path`) -- The folder where the agent is saved.
- ****kwargs** -- Additional keyword arguments that will be passed to the agent's init.</paramsdesc><paramgroups>0</paramgroups></docstring>
Loads an agent from a local folder.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_hub</name><anchor>smolagents.MultiStepAgent.from_hub</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/agents.py#L1048</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "token", "val": ": str | None = None"}, {"name": "trust_remote_code", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The name of the repo on the Hub where your tool is defined.
- **token** (`str`, *optional*) --
  The token to identify you on hf.co. If unset, will use the token generated when running
  `huggingface-cli login` (stored in `~/.huggingface`).
- **trust_remote_code** (`bool`, *optional*, defaults to `False`) --
  This flag marks that you understand the risk of running remote code and that you trust this tool.
  If not set to `True`, loading the tool from the Hub will fail.
- ****kwargs** (additional keyword arguments, *optional*) --
  Additional keyword arguments that will be split in two: all arguments relevant to the Hub (such as
  `cache_dir`, `revision`, `subfolder`) will be used when downloading the files for your agent, and the
  others will be passed along to its init.</paramsdesc><paramgroups>0</paramgroups></docstring>

Loads an agent defined on the Hub.

<Tip warning={true}>

Loading a tool from the Hub means that you'll download the tool and execute it locally.
ALWAYS inspect the tool you're downloading before loading it within your runtime, as you would do when
installing a package using pip/npm/apt.

</Tip>




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>initialize_system_prompt</name><anchor>smolagents.MultiStepAgent.initialize_system_prompt</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/agents.py#L743</source><parameters>[]</parameters></docstring>
To be implemented in child classes

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>interrupt</name><anchor>smolagents.MultiStepAgent.interrupt</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/agents.py#L748</source><parameters>[]</parameters></docstring>
Interrupts the agent execution.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>provide_final_answer</name><anchor>smolagents.MultiStepAgent.provide_final_answer</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/agents.py#L804</source><parameters>[{"name": "task", "val": ": str"}]</parameters><paramsdesc>- **task** (`str`) -- Task to perform.
- **images** (`list[PIL.Image.Image]`, *optional*) -- Image objects.</paramsdesc><paramgroups>0</paramgroups><rettype>`str`</rettype><retdesc>Final answer to the task.</retdesc></docstring>

Provide the final answer to the task, based on the logs of the agent's interactions.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>push_to_hub</name><anchor>smolagents.MultiStepAgent.push_to_hub</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/agents.py#L1134</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "commit_message", "val": ": str = 'Upload agent'"}, {"name": "private", "val": ": bool | None = None"}, {"name": "token", "val": ": bool | str | None = None"}, {"name": "create_pr", "val": ": bool = False"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The name of the repository you want to push to. It should contain your organization name when
  pushing to a given organization.
- **commit_message** (`str`, *optional*, defaults to `"Upload agent"`) --
  Message to commit while pushing.
- **private** (`bool`, *optional*, defaults to `None`) --
  Whether to make the repo private. If `None`, the repo will be public unless the organization's default is private. This value is ignored if the repo already exists.
- **token** (`bool` or `str`, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If unset, will use the token generated
  when running `huggingface-cli login` (stored in `~/.huggingface`).
- **create_pr** (`bool`, *optional*, defaults to `False`) --
  Whether to create a PR with the uploaded files or directly commit.</paramsdesc><paramgroups>0</paramgroups></docstring>

Upload the agent to the Hub.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>replay</name><anchor>smolagents.MultiStepAgent.replay</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/agents.py#L853</source><parameters>[{"name": "detailed", "val": ": bool = False"}]</parameters><paramsdesc>- **detailed** (bool, optional) -- If True, also displays the memory at each step. Defaults to False.
  Careful: will increase log length exponentially. Use only for debugging.</paramsdesc><paramgroups>0</paramgroups></docstring>
Prints a pretty replay of the agent's steps.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>run</name><anchor>smolagents.MultiStepAgent.run</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/agents.py#L433</source><parameters>[{"name": "task", "val": ": str"}, {"name": "stream", "val": ": bool = False"}, {"name": "reset", "val": ": bool = True"}, {"name": "images", "val": ": list['PIL.Image.Image'] | None = None"}, {"name": "additional_args", "val": ": dict | None = None"}, {"name": "max_steps", "val": ": int | None = None"}, {"name": "return_full_result", "val": ": bool | None = None"}]</parameters><paramsdesc>- **task** (`str`) -- Task to perform.
- **stream** (`bool`) -- Whether to run in streaming mode.
  If `True`, returns a generator that yields each step as it is executed. You must iterate over this generator to process the individual steps (e.g., using a for loop or `next()`).
  If `False`, executes all steps internally and returns only the final answer after completion.
- **reset** (`bool`) -- Whether to reset the conversation or keep it going from previous run.
- **images** (`list[PIL.Image.Image]`, *optional*) -- Image objects.
- **additional_args** (`dict`, *optional*) -- Any other variables that you want to pass to the agent run, for instance images or dataframes. Give them clear names!
- **max_steps** (`int`, *optional*) -- Maximum number of steps the agent can take to solve the task. If not provided, the agent's default value is used.
- **return_full_result** (`bool`, *optional*) -- Whether to return the full `RunResult` object or just the final answer output.
  If `None` (default), the agent's `self.return_full_result` setting is used.</paramsdesc><paramgroups>0</paramgroups></docstring>

Run the agent for the given task.



<ExampleCodeBlock anchor="smolagents.MultiStepAgent.run.example">

Example:
```py
from smolagents import CodeAgent
agent = CodeAgent(tools=[])
agent.run("What is the result of 2 power 3.7384?")
```

</ExampleCodeBlock>
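With `stream=True`, `run` returns a generator that you must drain yourself; each step executes only as you advance the iterator. The consumption pattern can be sketched as follows (the `fake_stream` generator below is a stand-in for `agent.run(task, stream=True)`; the exact step objects differ in the real library):

```python
def fake_stream():
    """Stand-in for agent.run(task, stream=True): yields one object per step;
    the final yielded item carries the final answer."""
    yield {"step": 1, "observation": "computed 2 ** 3.7384"}
    yield {"step": 2, "observation": "ready to answer"}
    yield "13.33"  # stand-in for the final-answer item

last = None
for event in fake_stream():  # each step runs lazily as you iterate
    last = event
final_answer = last
```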


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save</name><anchor>smolagents.MultiStepAgent.save</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/agents.py#L886</source><parameters>[{"name": "output_dir", "val": ": str | pathlib.Path"}, {"name": "relative_path", "val": ": str | None = None"}]</parameters><paramsdesc>- **output_dir** (`str` or `Path`) -- The folder in which you want to save your agent.</paramsdesc><paramgroups>0</paramgroups></docstring>

Saves the relevant code files for your agent. This will copy the code of your agent into `output_dir` and autogenerate:

- a `tools` folder containing the logic for each of the tools under `tools/{tool_name}.py`.
- a `managed_agents` folder containing the logic for each of the managed agents.
- an `agent.json` file containing a dictionary representing your agent.
- a `prompt.yaml` file containing the prompt templates used by your agent.
- an `app.py` file providing a UI for your agent when it is exported to a Space with `agent.push_to_hub()`
- a `requirements.txt` containing the names of the modules used by your agent (as detected when inspecting its code)




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step</name><anchor>smolagents.MultiStepAgent.step</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/agents.py#L776</source><parameters>[{"name": "memory_step", "val": ": ActionStep"}]</parameters></docstring>

Perform one step in the ReAct framework: the agent thinks, acts, and observes the result.
Returns `None` if the step is not final, otherwise the final answer.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>to_dict</name><anchor>smolagents.MultiStepAgent.to_dict</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/agents.py#L964</source><parameters>[]</parameters><rettype>`dict`</rettype><retdesc>Dictionary representation of the agent.</retdesc></docstring>
Convert the agent to a dictionary representation.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>visualize</name><anchor>smolagents.MultiStepAgent.visualize</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/agents.py#L849</source><parameters>[]</parameters></docstring>
Creates a rich tree visualization of the agent's structure.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>write_memory_to_messages</name><anchor>smolagents.MultiStepAgent.write_memory_to_messages</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/agents.py#L752</source><parameters>[{"name": "summary_mode", "val": ": bool = False"}]</parameters></docstring>

Reads past `llm_outputs`, actions, and observations or errors from the memory into a series of messages
that can be used as input to the LLM. Adds a number of keywords (such as PLAN, error, etc.) to help
the LLM.
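The conversion can be pictured as flattening the agent's memory into a chat transcript. A simplified sketch with invented step records (the real method also handles images, errors, and summary mode):

```python
def memory_to_messages_sketch(system_prompt: str, steps: list[dict]) -> list[dict]:
    """Illustrative: flatten a system prompt and step records into chat messages."""
    messages = [{"role": "system", "content": system_prompt}]
    for step in steps:
        if "model_output" in step:
            messages.append({"role": "assistant", "content": step["model_output"]})
        if "observation" in step:
            # Observations are fed back to the model as user-side content.
            messages.append({"role": "user", "content": f"Observation: {step['observation']}"})
    return messages

msgs = memory_to_messages_sketch(
    "You are a helpful agent.",
    [{"model_output": "I'll compute 2 ** 3.", "observation": "8"}],
)
```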


</div></div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.CodeAgent</name><anchor>smolagents.CodeAgent</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/agents.py#L1478</source><parameters>[{"name": "tools", "val": ": list"}, {"name": "model", "val": ": Model"}, {"name": "prompt_templates", "val": ": smolagents.agents.PromptTemplates | None = None"}, {"name": "additional_authorized_imports", "val": ": list[str] | None = None"}, {"name": "planning_interval", "val": ": int | None = None"}, {"name": "executor", "val": ": PythonExecutor = None"}, {"name": "executor_type", "val": ": typing.Literal['local', 'e2b', 'modal', 'docker', 'wasm'] = 'local'"}, {"name": "executor_kwargs", "val": ": dict[str, typing.Any] | None = None"}, {"name": "max_print_outputs_length", "val": ": int | None = None"}, {"name": "stream_outputs", "val": ": bool = False"}, {"name": "use_structured_outputs_internally", "val": ": bool = False"}, {"name": "code_block_tags", "val": ": str | tuple[str, str] | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **tools** (`list[Tool]`) -- [Tool](/docs/smolagents/main/en/reference/tools#smolagents.Tool)s that the agent can use.
- **model** (`Model`) -- Model that will generate the agent's actions.
- **prompt_templates** ([PromptTemplates](/docs/smolagents/main/en/reference/agents#smolagents.PromptTemplates), *optional*) -- Prompt templates.
- **additional_authorized_imports** (`list[str]`, *optional*) -- Additional authorized imports for the agent.
- **planning_interval** (`int`, *optional*) -- Interval at which the agent will run a planning step.
- **executor** ([PythonExecutor](/docs/smolagents/main/en/reference/agents#smolagents.PythonExecutor), *optional*) -- Custom Python code executor. If not provided, a default executor will be created based on `executor_type`.
- **executor_type** (`Literal["local", "e2b", "modal", "docker", "wasm"]`, default `"local"`) -- Type of code executor.
- **executor_kwargs** (`dict`, *optional*) -- Additional arguments to pass to initialize the executor.
- **max_print_outputs_length** (`int`, *optional*) -- Maximum length of the print outputs.
- **stream_outputs** (`bool`, *optional*, default `False`) -- Whether to stream outputs during execution.
- **use_structured_outputs_internally** (`bool`, default `False`) -- Whether to use structured generation at each action step: improves performance for many models.

  <Added version="1.17.0"/>
- **code_block_tags** (`tuple[str, str]` | `Literal["markdown"]`, *optional*) -- Opening and closing tags for code blocks (regex strings). Pass a custom tuple, pass `"markdown"` to use ("```(?:python|py)", "\n```"), or leave unset to use ("<code>", "</code>").
- ****kwargs** -- Additional keyword arguments.</paramsdesc><paramgroups>0</paramgroups></docstring>

In this agent, tool calls are formulated by the LLM as code, then parsed and executed.
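The `code_block_tags` described above are regex fragments that delimit the model's code. A sketch of how such tags extract code from a model message, using the documented `"markdown"` pair (this is an illustration, not the library's actual parser):

```python
import re

FENCE = "`" * 3  # avoids writing a literal triple backtick inside this snippet
OPEN_TAG = FENCE + "(?:python|py)"   # the documented "markdown" opening regex
CLOSE_TAG = "\n" + FENCE             # and the matching closing tag

def extract_code_sketch(message: str) -> str:
    """Illustrative: pull the first fenced code block out of an LLM message."""
    match = re.search(OPEN_TAG + r"\n(.*?)" + re.escape(CLOSE_TAG), message, re.DOTALL)
    if match is None:
        raise ValueError("no code block found in model output")
    return match.group(1)

message = f"Thought: compute it.\n{FENCE}python\nresult = 2 ** 3.7384\n{FENCE}"
code = extract_code_sketch(message)
```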





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>cleanup</name><anchor>smolagents.CodeAgent.cleanup</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/agents.py#L1566</source><parameters>[]</parameters></docstring>
Clean up resources used by the agent, such as the remote Python executor.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_dict</name><anchor>smolagents.CodeAgent.from_dict</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/agents.py#L1752</source><parameters>[{"name": "agent_dict", "val": ": dict"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **agent_dict** (`dict[str, Any]`) -- Dictionary representation of the agent.
- ****kwargs** -- Additional keyword arguments that will override agent_dict values.</paramsdesc><paramgroups>0</paramgroups><rettype>`CodeAgent`</rettype><retdesc>Instance of the CodeAgent class.</retdesc></docstring>
Create CodeAgent from a dictionary representation.








</div></div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.ToolCallingAgent</name><anchor>smolagents.ToolCallingAgent</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/agents.py#L1189</source><parameters>[{"name": "tools", "val": ": list"}, {"name": "model", "val": ": Model"}, {"name": "prompt_templates", "val": ": smolagents.agents.PromptTemplates | None = None"}, {"name": "planning_interval", "val": ": int | None = None"}, {"name": "stream_outputs", "val": ": bool = False"}, {"name": "max_tool_threads", "val": ": int | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **tools** (`list[Tool]`) -- [Tool](/docs/smolagents/main/en/reference/tools#smolagents.Tool)s that the agent can use.
- **model** (`Model`) -- Model that will generate the agent's actions.
- **prompt_templates** ([PromptTemplates](/docs/smolagents/main/en/reference/agents#smolagents.PromptTemplates), *optional*) -- Prompt templates.
- **planning_interval** (`int`, *optional*) -- Interval at which the agent will run a planning step.
- **stream_outputs** (`bool`, *optional*, default `False`) -- Whether to stream outputs during execution.
- **max_tool_threads** (`int`, *optional*) -- Maximum number of threads for parallel tool calls.
  Higher values increase concurrency but resource usage as well.
  Defaults to `ThreadPoolExecutor`'s default.
- ****kwargs** -- Additional keyword arguments.</paramsdesc><paramgroups>0</paramgroups></docstring>

This agent uses JSON-like tool calls, leveraging the LLM engine's tool-calling capabilities via the `model.get_tool_call` method.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>execute_tool_call</name><anchor>smolagents.ToolCallingAgent.execute_tool_call</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/agents.py#L1426</source><parameters>[{"name": "tool_name", "val": ": str"}, {"name": "arguments", "val": ": dict[str, str] | str"}]</parameters><paramsdesc>- **tool_name** (`str`) -- Name of the tool or managed agent to execute.
- **arguments** (`dict[str, str]` or `str`) -- Arguments passed to the tool call.</paramsdesc><paramgroups>0</paramgroups></docstring>

Execute a tool or managed agent with the provided arguments.

The arguments are replaced with the actual values from the state if they refer to state variables.
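The state substitution can be pictured as a lookup pass over the arguments: any string argument that names a state variable is replaced by that variable's value. A simplified sketch (the helper is invented, for illustration only):

```python
def substitute_state_sketch(arguments, state: dict):
    """Illustrative: replace string arguments that name state variables
    with the corresponding values from the agent state."""
    if isinstance(arguments, str):
        return state.get(arguments, arguments)
    return {
        key: state.get(val, val) if isinstance(val, str) else val
        for key, val in arguments.items()
    }

state = {"image_1": "<PIL image object>"}
resolved = substitute_state_sketch({"image": "image_1", "size": 256}, state)
```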




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>process_tool_calls</name><anchor>smolagents.ToolCallingAgent.process_tool_calls</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/agents.py#L1335</source><parameters>[{"name": "chat_message", "val": ": ChatMessage"}, {"name": "memory_step", "val": ": ActionStep"}]</parameters><paramsdesc>- **chat_message** (`ChatMessage`) -- Chat message containing tool calls from the model.
- **memory_step** (`ActionStep`) -- Memory ActionStep to update with results.</paramsdesc><paramgroups>0</paramgroups><yieldtype>`ToolCall | ToolOutput`</yieldtype><yielddesc>The tool call or tool output.</yielddesc></docstring>
Process tool calls from the model output and update agent memory.








</div></div>

### stream_to_gradio[[smolagents.stream_to_gradio]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>smolagents.stream_to_gradio</name><anchor>smolagents.stream_to_gradio</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/gradio_ui.py#L248</source><parameters>[{"name": "agent", "val": ""}, {"name": "task", "val": ": str"}, {"name": "task_images", "val": ": list | None = None"}, {"name": "reset_agent_memory", "val": ": bool = False"}, {"name": "additional_args", "val": ": dict | None = None"}]</parameters></docstring>
Runs an agent with the given task and streams the agent's messages as Gradio `ChatMessage` objects.

</div>

### GradioUI[[smolagents.GradioUI]]

> [!TIP]
> You must have `gradio` installed to use the UI. Run `pip install 'smolagents[gradio]'` if it is not installed yet.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.GradioUI</name><anchor>smolagents.GradioUI</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/gradio_ui.py#L279</source><parameters>[{"name": "agent", "val": ": MultiStepAgent"}, {"name": "file_upload_folder", "val": ": str | None = None"}, {"name": "reset_agent_memory", "val": ": bool = False"}]</parameters><paramsdesc>- **agent** ([MultiStepAgent](/docs/smolagents/main/en/reference/agents#smolagents.MultiStepAgent)) -- The agent to interact with.
- **file_upload_folder** (`str`, *optional*) -- The folder where uploaded files will be saved.
  If not provided, file uploads are disabled.
- **reset_agent_memory** (`bool`, *optional*, defaults to `False`) -- Whether to reset the agent's memory at the start of each interaction.
  If `True`, the agent will not remember previous interactions.</paramsdesc><paramgroups>0</paramgroups><raises>- ``ModuleNotFoundError`` -- If the `gradio` extra is not installed.</raises><raisederrors>``ModuleNotFoundError``</raisederrors></docstring>

Gradio interface for interacting with a [MultiStepAgent](/docs/smolagents/main/en/reference/agents#smolagents.MultiStepAgent).

This class provides a web interface to interact with the agent in real time, allowing users to submit prompts, upload files, and receive responses in a chat-like format.
It can reset the agent's memory at the start of each interaction if desired.
It supports file uploads, which are saved to a specified folder.
It uses the `gradio.Chatbot` component to display the conversation history.
This class requires the `gradio` extra to be installed: `pip install 'smolagents[gradio]'`.







<ExampleCodeBlock anchor="smolagents.GradioUI.example">

Example:
```python
from smolagents import CodeAgent, GradioUI, InferenceClientModel

model = InferenceClientModel(model_id="meta-llama/Meta-Llama-3.1-8B-Instruct")
agent = CodeAgent(tools=[], model=model)
gradio_ui = GradioUI(agent, file_upload_folder="uploads", reset_agent_memory=True)
gradio_ui.launch()
```

</ExampleCodeBlock>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>launch</name><anchor>smolagents.GradioUI.launch</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/gradio_ui.py#L406</source><parameters>[{"name": "share", "val": ": bool = True"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **share** (`bool`, defaults to `True`) -- Whether to share the app publicly.
- ****kwargs** -- Additional keyword arguments to pass to the Gradio launch method.</paramsdesc><paramgroups>0</paramgroups></docstring>

Launch the Gradio app with the agent interface.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>upload_file</name><anchor>smolagents.GradioUI.upload_file</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/gradio_ui.py#L356</source><parameters>[{"name": "file", "val": ""}, {"name": "file_uploads_log", "val": ""}, {"name": "allowed_file_types", "val": " = None"}]</parameters><paramsdesc>- **file** (`gradio.File`) -- The uploaded file.
- **file_uploads_log** (`list`) -- A list to log uploaded files.
- **allowed_file_types** (`list`, *optional*) -- List of allowed file extensions. Defaults to `[".pdf", ".docx", ".txt"]`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Upload a file and add it to the list of uploaded files in the session state.

The file is saved to the `self.file_upload_folder` folder.
If the file type is not allowed, it returns a message indicating the disallowed file type.




</div></div>

## Prompts[[smolagents.PromptTemplates]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.PromptTemplates</name><anchor>smolagents.PromptTemplates</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/agents.py#L164</source><parameters>""</parameters><paramsdesc>- **system_prompt** (`str`) -- System prompt.
- **planning** ([PlanningPromptTemplate](/docs/smolagents/main/en/reference/agents#smolagents.PlanningPromptTemplate)) -- Planning prompt templates.
- **managed_agent** ([ManagedAgentPromptTemplate](/docs/smolagents/main/en/reference/agents#smolagents.ManagedAgentPromptTemplate)) -- Managed agent prompt templates.
- **final_answer** ([FinalAnswerPromptTemplate](/docs/smolagents/main/en/reference/agents#smolagents.FinalAnswerPromptTemplate)) -- Final answer prompt templates.</paramsdesc><paramgroups>0</paramgroups></docstring>

Prompt templates for the agent.




</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.PlanningPromptTemplate</name><anchor>smolagents.PlanningPromptTemplate</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/agents.py#L123</source><parameters>""</parameters><paramsdesc>- **plan** (`str`) -- Initial plan prompt.
- **update_plan_pre_messages** (`str`) -- Update plan pre-messages prompt.
- **update_plan_post_messages** (`str`) -- Update plan post-messages prompt.</paramsdesc><paramgroups>0</paramgroups></docstring>

Prompt templates for the planning step.




</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.ManagedAgentPromptTemplate</name><anchor>smolagents.ManagedAgentPromptTemplate</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/agents.py#L138</source><parameters>""</parameters><paramsdesc>- **task** (`str`) -- Task prompt.
- **report** (`str`) -- Report prompt.</paramsdesc><paramgroups>0</paramgroups></docstring>

Prompt templates for the managed agent.




</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.FinalAnswerPromptTemplate</name><anchor>smolagents.FinalAnswerPromptTemplate</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/agents.py#L151</source><parameters>""</parameters><paramsdesc>- **pre_messages** (`str`) -- Pre-messages prompt.
- **post_messages** (`str`) -- Post-messages prompt.</paramsdesc><paramgroups>0</paramgroups></docstring>

Prompt templates for the final answer.




</div>

## Memory[[smolagents.AgentMemory]]

Smolagents agents use memory to store information across multiple steps.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.AgentMemory</name><anchor>smolagents.AgentMemory</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/memory.py#L214</source><parameters>[{"name": "system_prompt", "val": ": str"}]</parameters><paramsdesc>- **system_prompt** (`str`) -- System prompt for the agent, which sets the context and instructions for the agent's behavior.</paramsdesc><paramgroups>0</paramgroups></docstring>
Memory for the agent, containing the system prompt and all steps taken by the agent.

This class is used to store the agent's steps, including tasks, actions, and planning steps.
It allows for resetting the memory, retrieving succinct or full step information, and replaying the agent's steps.



**Attributes**:
- **system_prompt** (`SystemPromptStep`) -- System prompt step for the agent.
- **steps** (`list[TaskStep | ActionStep | PlanningStep]`) -- List of steps taken by the agent, which can include tasks, actions, and planning steps.
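The reset-keeps-system-prompt behavior can be sketched with a minimal stand-in class (illustrative only; the real `AgentMemory` also provides replay and step serialization):

```python
class MemorySketch:
    """Minimal stand-in for AgentMemory: keeps the system prompt across resets."""

    def __init__(self, system_prompt: str):
        self.system_prompt = system_prompt
        self.steps = []

    def reset(self):
        # Clearing the steps preserves the system prompt, as AgentMemory.reset does.
        self.steps = []

memory = MemorySketch("You are a helpful agent.")
memory.steps.append({"task": "What is 2 ** 3?"})
memory.reset()
```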



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_full_steps</name><anchor>smolagents.AgentMemory.get_full_steps</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/memory.py#L242</source><parameters>[]</parameters></docstring>
Return a full representation of the agent's steps, including model input messages.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_succinct_steps</name><anchor>smolagents.AgentMemory.get_succinct_steps</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/memory.py#L236</source><parameters>[]</parameters></docstring>
Return a succinct representation of the agent's steps, excluding model input messages.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>replay</name><anchor>smolagents.AgentMemory.replay</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/memory.py#L248</source><parameters>[{"name": "logger", "val": ": AgentLogger"}, {"name": "detailed", "val": ": bool = False"}]</parameters><paramsdesc>- **logger** (`AgentLogger`) -- The logger to print replay logs to.
- **detailed** (`bool`, defaults to `False`) -- If `True`, also displays the memory at each step.
  Careful: this will greatly increase log length. Use only for debugging.</paramsdesc><paramgroups>0</paramgroups></docstring>
Prints a pretty replay of the agent's steps.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>reset</name><anchor>smolagents.AgentMemory.reset</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/memory.py#L232</source><parameters>[]</parameters></docstring>
Reset the agent's memory, clearing all steps and keeping the system prompt.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>return_full_code</name><anchor>smolagents.AgentMemory.return_full_code</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/memory.py#L273</source><parameters>[]</parameters></docstring>
Returns all code actions from the agent's steps, concatenated as a single script.
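Conceptually, this joins the code of every action step into one script while skipping non-code steps. A sketch of that idea with invented step records:

```python
# Invented step records for illustration; real steps are ActionStep objects.
steps = [
    {"code_action": "x = 2 ** 3"},
    {"observation": "8"},            # non-code steps contribute nothing
    {"code_action": "print(x)"},
]

# Concatenate every code action into a single script.
full_code = "\n\n".join(step["code_action"] for step in steps if "code_action" in step)
```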

</div></div>

## Python code executors[[smolagents.PythonExecutor]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.PythonExecutor</name><anchor>smolagents.PythonExecutor</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/local_python_executor.py#L1608</source><parameters>[]</parameters></docstring>


</div>

### Local Python executor[[smolagents.LocalPythonExecutor]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.LocalPythonExecutor</name><anchor>smolagents.LocalPythonExecutor</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/local_python_executor.py#L1619</source><parameters>[{"name": "additional_authorized_imports", "val": ": list"}, {"name": "max_print_outputs_length", "val": ": int | None = None"}, {"name": "additional_functions", "val": ": dict[str, collections.abc.Callable] | None = None"}]</parameters><paramsdesc>- **additional_authorized_imports** (`list[str]`) --
  Additional authorized imports for the executor.
- **max_print_outputs_length** (`int`, defaults to `DEFAULT_MAX_LEN_OUTPUT=50_000`) --
  Maximum length of the print outputs.
- **additional_functions** (`dict[str, Callable]`, *optional*) --
  Additional Python functions to be added to the executor.</paramsdesc><paramgroups>0</paramgroups></docstring>

Executor of Python code in a local environment.

This executor evaluates Python code with restricted access to imports and built-in functions,
making it suitable for running untrusted code. It maintains state between executions,
allows for custom tools and functions to be made available to the code, and captures
print outputs separately from return values.




</div>

### Remote Python executors[[smolagents.remote_executors.RemotePythonExecutor]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.remote_executors.RemotePythonExecutor</name><anchor>smolagents.remote_executors.RemotePythonExecutor</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/remote_executors.py#L54</source><parameters>[{"name": "additional_imports", "val": ": list"}, {"name": "logger", "val": ""}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>run_code_raise_errors</name><anchor>smolagents.remote_executors.RemotePythonExecutor.run_code_raise_errors</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/remote_executors.py#L63</source><parameters>[{"name": "code", "val": ": str"}]</parameters></docstring>

Execute code, returning the result and output, and determine whether the result is the final answer.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>send_variables</name><anchor>smolagents.remote_executors.RemotePythonExecutor.send_variables</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/remote_executors.py#L88</source><parameters>[{"name": "variables", "val": ": dict"}]</parameters></docstring>

Send variables to the kernel namespace using pickle.


</div></div>

#### E2BExecutor[[smolagents.E2BExecutor]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.E2BExecutor</name><anchor>smolagents.E2BExecutor</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/remote_executors.py#L158</source><parameters>[{"name": "additional_imports", "val": ": list"}, {"name": "logger", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **additional_imports** (`list[str]`) -- Additional imports to install.
- **logger** (`Logger`) -- Logger to use.
- ****kwargs** -- Additional arguments to pass to the E2B Sandbox.</paramsdesc><paramgroups>0</paramgroups></docstring>

Executes Python code using E2B.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>cleanup</name><anchor>smolagents.E2BExecutor.cleanup</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/remote_executors.py#L240</source><parameters>[]</parameters></docstring>
Clean up the E2B sandbox and resources.

</div></div>

#### ModalExecutor[[smolagents.ModalExecutor]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.ModalExecutor</name><anchor>smolagents.ModalExecutor</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/remote_executors.py#L495</source><parameters>[{"name": "additional_imports", "val": ": list"}, {"name": "logger", "val": ""}, {"name": "app_name", "val": ": str = 'smolagent-executor'"}, {"name": "port", "val": ": int = 8888"}, {"name": "create_kwargs", "val": ": typing.Optional[dict] = None"}]</parameters><paramsdesc>- **additional_imports** -- Additional imports to install.
- **logger** (`Logger`) -- Logger to use for output and errors.
- **app_name** (`str`) -- App name.
- **port** (`int`) -- Port for Jupyter to bind to.
- **create_kwargs** (`dict`, optional) -- Keyword arguments to pass to creating the sandbox. See
  `modal.Sandbox.create` [docs](https://modal.com/docs/reference/modal.Sandbox#create) for all the
  keyword arguments.</paramsdesc><paramgroups>0</paramgroups></docstring>

Executes Python code using Modal.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete</name><anchor>smolagents.ModalExecutor.delete</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/remote_executors.py#L586</source><parameters>[]</parameters></docstring>
Ensure cleanup on deletion.

</div></div>

#### DockerExecutor[[smolagents.DockerExecutor]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.DockerExecutor</name><anchor>smolagents.DockerExecutor</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/remote_executors.py#L341</source><parameters>[{"name": "additional_imports", "val": ": list"}, {"name": "logger", "val": ""}, {"name": "host", "val": ": str = '127.0.0.1'"}, {"name": "port", "val": ": int = 8888"}, {"name": "image_name", "val": ": str = 'jupyter-kernel'"}, {"name": "build_new_image", "val": ": bool = True"}, {"name": "container_run_kwargs", "val": ": dict[str, typing.Any] | None = None"}, {"name": "dockerfile_content", "val": ": str | None = None"}]</parameters></docstring>

Executes Python code using Jupyter Kernel Gateway in a Docker container.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>cleanup</name><anchor>smolagents.DockerExecutor.cleanup</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/remote_executors.py#L463</source><parameters>[]</parameters></docstring>
Clean up the Docker container and resources.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete</name><anchor>smolagents.DockerExecutor.delete</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/remote_executors.py#L475</source><parameters>[]</parameters></docstring>
Ensure cleanup on deletion.

</div></div>

#### WasmExecutor[[smolagents.WasmExecutor]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.WasmExecutor</name><anchor>smolagents.WasmExecutor</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/remote_executors.py#L612</source><parameters>[{"name": "additional_imports", "val": ": list"}, {"name": "logger", "val": ""}, {"name": "deno_path", "val": ": str = 'deno'"}, {"name": "deno_permissions", "val": ": list[str] | None = None"}, {"name": "timeout", "val": ": int = 60"}]</parameters><paramsdesc>- **additional_imports** (`list[str]`) -- Additional Python packages to install in the Pyodide environment.
- **logger** (`Logger`) -- Logger to use for output and errors.
- **deno_path** (`str`, optional) -- Path to the Deno executable. If not provided, will use "deno" from PATH.
- **deno_permissions** (`list[str]`, optional) -- List of permissions to grant to the Deno runtime.
  Default is minimal permissions needed for execution.
- **timeout** (`int`, optional) -- Timeout in seconds for code execution. Default is 60 seconds.</paramsdesc><paramgroups>0</paramgroups></docstring>

Remote Python code executor in a sandboxed WebAssembly environment powered by Pyodide and Deno.

This executor combines Deno's secure runtime with Pyodide's WebAssembly-compiled Python interpreter to deliver strong isolation guarantees while enabling full Python execution.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>cleanup</name><anchor>smolagents.WasmExecutor.cleanup</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/remote_executors.py#L793</source><parameters>[]</parameters></docstring>
Clean up resources used by the executor.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete</name><anchor>smolagents.WasmExecutor.delete</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/remote_executors.py#L809</source><parameters>[]</parameters></docstring>
Ensure cleanup on deletion.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>install_packages</name><anchor>smolagents.WasmExecutor.install_packages</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/remote_executors.py#L777</source><parameters>[{"name": "additional_imports", "val": ": list"}]</parameters><paramsdesc>- **additional_imports** (`list[str]`) -- Package names to install.</paramsdesc><paramgroups>0</paramgroups><rettype>list[str]</rettype><retdesc>Installed packages.</retdesc></docstring>

Install additional Python packages in the Pyodide environment.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>run_code_raise_errors</name><anchor>smolagents.WasmExecutor.run_code_raise_errors</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/remote_executors.py#L716</source><parameters>[{"name": "code", "val": ": str"}]</parameters><paramsdesc>- **code** (`str`) -- Python code to execute.</paramsdesc><paramgroups>0</paramgroups><rettype>`CodeOutput`</rettype><retdesc>Code output containing the result, logs, and whether it is the final answer.</retdesc></docstring>

Execute Python code in the Pyodide environment and return the result.








</div></div>

<EditOnGithub source="https://github.com/huggingface/smolagents/blob/main/docs/source/en/reference/agents.md" />

### Built-in Tools
https://huggingface.co/docs/smolagents/main/reference/default_tools.md

# Built-in Tools

Ready-to-use tool implementations provided by the `smolagents` library.

These built-in tools are concrete implementations of the [Tool](/docs/smolagents/main/en/reference/tools#smolagents.Tool) base class, each designed for specific tasks such as web searching, Python code execution, webpage retrieval, and user interaction.
You can use these tools directly in your agents without having to implement the underlying functionality yourself.
Each tool handles a particular capability and follows a consistent interface, making it easy to compose them into powerful agent workflows.

The built-in tools can be categorized by their primary functions:
- **Information Retrieval**: Search and retrieve information from the web and specific knowledge sources.
  - [ApiWebSearchTool](/docs/smolagents/main/en/reference/default_tools#smolagents.ApiWebSearchTool)
  - [DuckDuckGoSearchTool](/docs/smolagents/main/en/reference/default_tools#smolagents.DuckDuckGoSearchTool)
  - [GoogleSearchTool](/docs/smolagents/main/en/reference/default_tools#smolagents.GoogleSearchTool)
  - [WebSearchTool](/docs/smolagents/main/en/reference/default_tools#smolagents.WebSearchTool)
  - [WikipediaSearchTool](/docs/smolagents/main/en/reference/default_tools#smolagents.WikipediaSearchTool)
- **Web Interaction**: Fetch and process content from specific web pages.
  - [VisitWebpageTool](/docs/smolagents/main/en/reference/default_tools#smolagents.VisitWebpageTool)
- **Code Execution**: Dynamic execution of Python code for computational tasks.
  - [PythonInterpreterTool](/docs/smolagents/main/en/reference/default_tools#smolagents.PythonInterpreterTool)
- **User Interaction**: Enable Human-in-the-Loop collaboration between agents and users.
  - [UserInputTool](/docs/smolagents/main/en/reference/default_tools#smolagents.UserInputTool): Collect input from users.
- **Speech Processing**: Convert audio to textual data.
  - [SpeechToTextTool](/docs/smolagents/main/en/reference/default_tools#smolagents.SpeechToTextTool)
- **Workflow Control**: Manage and direct the flow of agent operations.
  - [FinalAnswerTool](/docs/smolagents/main/en/reference/default_tools#smolagents.FinalAnswerTool): Conclude agent workflow with final response.

## ApiWebSearchTool[[smolagents.ApiWebSearchTool]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.ApiWebSearchTool</name><anchor>smolagents.ApiWebSearchTool</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/default_tools.py#L246</source><parameters>[{"name": "endpoint", "val": ": str = ''"}, {"name": "api_key", "val": ": str = ''"}, {"name": "api_key_name", "val": ": str = ''"}, {"name": "headers", "val": ": dict = None"}, {"name": "params", "val": ": dict = None"}, {"name": "rate_limit", "val": ": float | None = 1.0"}]</parameters><paramsdesc>- **endpoint** (`str`) -- API endpoint URL. Defaults to Brave Search API.
- **api_key** (`str`) -- API key for authentication.
- **api_key_name** (`str`) -- Environment variable name containing the API key. Defaults to "BRAVE_API_KEY".
- **headers** (`dict`, *optional*) -- Headers for API requests.
- **params** (`dict`, *optional*) -- Parameters for API requests.
- **rate_limit** (`float`, default `1.0`) -- Maximum queries per second. Set to `None` to disable rate limiting.</paramsdesc><paramgroups>0</paramgroups></docstring>
Web search tool that performs API-based searches.
By default, it uses the Brave Search API.

This tool implements a rate limiting mechanism to ensure compliance with API usage policies.
By default, it limits requests to 1 query per second.



<ExampleCodeBlock anchor="smolagents.ApiWebSearchTool.example">

Examples:
```python
>>> from smolagents import ApiWebSearchTool
>>> web_search_tool = ApiWebSearchTool(rate_limit=50.0)
>>> results = web_search_tool("Hugging Face")
>>> print(results)
```

</ExampleCodeBlock>


</div>

## DuckDuckGoSearchTool[[smolagents.DuckDuckGoSearchTool]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.DuckDuckGoSearchTool</name><anchor>smolagents.DuckDuckGoSearchTool</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/default_tools.py#L101</source><parameters>[{"name": "max_results", "val": ": int = 10"}, {"name": "rate_limit", "val": ": float | None = 1.0"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **max_results** (`int`, default `10`) -- Maximum number of search results to return.
- **rate_limit** (`float`, default `1.0`) -- Maximum queries per second. Set to `None` to disable rate limiting.
- ****kwargs** -- Additional keyword arguments for the `DDGS` client.</paramsdesc><paramgroups>0</paramgroups></docstring>
Web search tool that performs searches using the DuckDuckGo search engine.



<ExampleCodeBlock anchor="smolagents.DuckDuckGoSearchTool.example">

Examples:
```python
>>> from smolagents import DuckDuckGoSearchTool
>>> web_search_tool = DuckDuckGoSearchTool(max_results=5, rate_limit=2.0)
>>> results = web_search_tool("Hugging Face")
>>> print(results)
```

</ExampleCodeBlock>


</div>

## FinalAnswerTool[[smolagents.FinalAnswerTool]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.FinalAnswerTool</name><anchor>smolagents.FinalAnswerTool</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/default_tools.py#L80</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>

## GoogleSearchTool[[smolagents.GoogleSearchTool]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.GoogleSearchTool</name><anchor>smolagents.GoogleSearchTool</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/default_tools.py#L159</source><parameters>[{"name": "provider", "val": ": str = 'serpapi'"}]</parameters></docstring>


</div>

## PythonInterpreterTool[[smolagents.PythonInterpreterTool]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.PythonInterpreterTool</name><anchor>smolagents.PythonInterpreterTool</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/default_tools.py#L38</source><parameters>[{"name": "*args", "val": ""}, {"name": "authorized_imports", "val": " = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>

## SpeechToTextTool[[smolagents.SpeechToTextTool]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.SpeechToTextTool</name><anchor>smolagents.SpeechToTextTool</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/default_tools.py#L606</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>

## UserInputTool[[smolagents.UserInputTool]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.UserInputTool</name><anchor>smolagents.UserInputTool</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/default_tools.py#L90</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>

## VisitWebpageTool[[smolagents.VisitWebpageTool]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.VisitWebpageTool</name><anchor>smolagents.VisitWebpageTool</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/default_tools.py#L451</source><parameters>[{"name": "max_output_length", "val": ": int = 40000"}]</parameters></docstring>


</div>

## WebSearchTool[[smolagents.WebSearchTool]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.WebSearchTool</name><anchor>smolagents.WebSearchTool</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/default_tools.py#L339</source><parameters>[{"name": "max_results", "val": ": int = 10"}, {"name": "engine", "val": ": str = 'duckduckgo'"}]</parameters></docstring>


</div>

## WikipediaSearchTool[[smolagents.WikipediaSearchTool]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.WikipediaSearchTool</name><anchor>smolagents.WikipediaSearchTool</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/default_tools.py#L507</source><parameters>[{"name": "user_agent", "val": ": str = 'Smolagents (myemail@example.com)'"}, {"name": "language", "val": ": str = 'en'"}, {"name": "content_type", "val": ": str = 'text'"}, {"name": "extract_format", "val": ": str = 'WIKI'"}]</parameters><paramsdesc>- **user_agent** (`str`) -- Custom user-agent string to identify the project. This is required as per Wikipedia API policies.
  See: https://foundation.wikimedia.org/wiki/Policy:Wikimedia_Foundation_User-Agent_Policy
- **language** (`str`, default `"en"`) -- Language in which to retrieve Wikipedia article.
  See: http://meta.wikimedia.org/wiki/List_of_Wikipedias
- **content_type** (`Literal["summary", "text"]`, default `"text"`) -- Type of content to fetch. Can be "summary" for a short summary or "text" for the full article.
- **extract_format** (`Literal["HTML", "WIKI"]`, default `"WIKI"`) -- Extraction format of the output. Can be `"WIKI"` or `"HTML"`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Search Wikipedia and return the summary or full text of the requested article, along with the page URL.



<ExampleCodeBlock anchor="smolagents.WikipediaSearchTool.example">

Example:
```python
>>> from smolagents import CodeAgent, InferenceClientModel, WikipediaSearchTool
>>> agent = CodeAgent(
...     tools=[
...         WikipediaSearchTool(
...             user_agent="MyResearchBot (myemail@example.com)",
...             language="en",
...             content_type="summary",  # or "text"
...             extract_format="WIKI",
...         )
...     ],
...     model=InferenceClientModel(),
... )
>>> agent.run("Python_(programming_language)")
```

</ExampleCodeBlock>


</div>

<EditOnGithub source="https://github.com/huggingface/smolagents/blob/main/docs/source/en/reference/default_tools.md" />

### Tools
https://huggingface.co/docs/smolagents/main/reference/tools.md

# Tools

<Tip warning={true}>

Smolagents is an experimental API which is subject to change at any time. Results returned by the agents
can vary as the APIs or underlying models are prone to change.

</Tip>

To learn more about agents and tools make sure to read the [introductory guide](../index). This page
contains the API docs for the underlying classes.

## Tool Base Classes

### load_tool[[smolagents.load_tool]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>smolagents.load_tool</name><anchor>smolagents.load_tool</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/tools.py#L840</source><parameters>[{"name": "repo_id", "val": ""}, {"name": "model_repo_id", "val": ": str | None = None"}, {"name": "token", "val": ": str | None = None"}, {"name": "trust_remote_code", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **repo_id** (`str`) --
  Space repo ID of a tool on the Hub.
- **model_repo_id** (`str`, *optional*) --
  Use this argument to use a different model than the default one for the tool you selected.
- **token** (`str`, *optional*) --
  The token to identify you on hf.co. If unset, will use the token generated when running `huggingface-cli
  login` (stored in `~/.huggingface`).
- **trust_remote_code** (`bool`, *optional*, defaults to `False`) --
  This must be set to `True` in order to load a tool from the Hub.
- **kwargs** (additional keyword arguments, *optional*) --
  Additional keyword arguments that will be split in two: all arguments relevant to the Hub (such as
  `cache_dir`, `revision`, `subfolder`) will be used when downloading the files for your tool, and the others
  will be passed along to its init.</paramsdesc><paramgroups>0</paramgroups></docstring>

Main function to quickly load a tool from the Hub.

<Tip warning={true}>

Loading a tool means that you'll download the tool and execute it locally.
ALWAYS inspect the tool you're downloading before loading it within your runtime, as you would do when
installing a package using pip/npm/apt.

</Tip>




</div>

### tool[[smolagents.tool]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>smolagents.tool</name><anchor>smolagents.tool</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/tools.py#L1061</source><parameters>[{"name": "tool_function", "val": ": Callable"}]</parameters><paramsdesc>- **tool_function** (`Callable`) -- Function to convert into a Tool subclass.
  Should have type hints for each input and a type hint for the output.
  Should also have a docstring including the description of the function
  and an 'Args:' part where each argument is described.</paramsdesc><paramgroups>0</paramgroups></docstring>

Convert a function into an instance of a dynamically created Tool subclass.




</div>

### Tool[[smolagents.Tool]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.Tool</name><anchor>smolagents.Tool</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/tools.py#L106</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

A base class for the functions used by the agent. Subclass this, implement the `forward` method, and define the
following class attributes:

- **description** (`str`) -- A short description of what your tool does, the inputs it expects and the output(s) it
  will return. For instance 'This is a tool that downloads a file from a `url`. It takes the `url` as input, and
  returns the text contained in the file'.
- **name** (`str`) -- A short, descriptive name that will be used for your tool in the prompt to the agent. For
  instance `"text-classifier"` or `"image_generator"`.
- **inputs** (`Dict[str, Dict[str, Union[str, type, bool]]]`) -- The dict of modalities expected for the inputs.
  Each entry has a `type` key and a `description` key.
  This is used by `launch_gradio_demo` or to make a nice Space from your tool, and can also be used in the generated
  description for your tool.
- **output_type** (`type`) -- The type of the tool output. This is used by `launch_gradio_demo`
  or to make a nice Space from your tool, and can also be used in the generated description for your tool.
- **output_schema** (`Dict[str, Any]`, *optional*) -- The JSON schema defining the expected structure of the tool output.
  This can be included in system prompts to help agents understand the expected output format. Note: This is currently
  used for informational purposes only and does not perform actual output validation.

You can also override the method [setup()](/docs/smolagents/main/en/reference/tools#smolagents.Tool.setup) if your tool has an expensive operation to perform before being
usable (such as loading a model). [setup()](/docs/smolagents/main/en/reference/tools#smolagents.Tool.setup) will be called the first time you use your tool, but not at
instantiation.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_dict</name><anchor>smolagents.Tool.from_dict</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/tools.py#L367</source><parameters>[{"name": "tool_dict", "val": ": dict[str, Any]"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **tool_dict** (`dict[str, Any]`) -- Dictionary representation of the tool.
- ****kwargs** -- Additional keyword arguments to pass to the tool's constructor.</paramsdesc><paramgroups>0</paramgroups><rettype>`Tool`</rettype><retdesc>Tool object.</retdesc></docstring>

Create tool from a dictionary representation.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_gradio</name><anchor>smolagents.Tool.from_gradio</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/tools.py#L741</source><parameters>[{"name": "gradio_tool", "val": ""}]</parameters></docstring>

Creates a [Tool](/docs/smolagents/main/en/reference/tools#smolagents.Tool) from a Gradio tool.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_hub</name><anchor>smolagents.Tool.from_hub</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/tools.py#L516</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "token", "val": ": str | None = None"}, {"name": "trust_remote_code", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The name of the Space repo on the Hub where your tool is defined.
- **token** (`str`, *optional*) --
  The token to identify you on hf.co. If unset, will use the token generated when running
  `huggingface-cli login` (stored in `~/.huggingface`).
- **trust_remote_code** (`bool`, *optional*, defaults to `False`) --
  This flag marks that you understand the risk of running remote code and that you trust this tool.
  If this is not set to `True`, loading the tool from the Hub will fail.
- **kwargs** (additional keyword arguments, *optional*) --
  Additional keyword arguments that will be split in two: all arguments relevant to the Hub (such as
  `cache_dir`, `revision`, `subfolder`) will be used when downloading the files for your tool, and the
  others will be passed along to its init.</paramsdesc><paramgroups>0</paramgroups></docstring>

Loads a tool defined on the Hub.

<Tip warning={true}>

Loading a tool from the Hub means that you'll download the tool and execute it locally.
ALWAYS inspect the tool you're downloading before loading it within your runtime, as you would do when
installing a package using pip/npm/apt.

</Tip>




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_langchain</name><anchor>smolagents.Tool.from_langchain</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/tools.py#L762</source><parameters>[{"name": "langchain_tool", "val": ""}]</parameters></docstring>

Creates a [Tool](/docs/smolagents/main/en/reference/tools#smolagents.Tool) from a LangChain tool.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_space</name><anchor>smolagents.Tool.from_space</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/tools.py#L599</source><parameters>[{"name": "space_id", "val": ": str"}, {"name": "name", "val": ": str"}, {"name": "description", "val": ": str"}, {"name": "api_name", "val": ": str | None = None"}, {"name": "token", "val": ": str | None = None"}]</parameters><paramsdesc>- **space_id** (`str`) --
  The id of the Space on the Hub.
- **name** (`str`) --
  The name of the tool.
- **description** (`str`) --
  The description of the tool.
- **api_name** (`str`, *optional*) --
  The specific api_name to use, if the space has several tabs. If not specified, defaults to the first available API.
- **token** (`str`, *optional*) --
  Add your token to access private spaces or increase your GPU quotas.</paramsdesc><paramgroups>0</paramgroups><rettype>[Tool](/docs/smolagents/main/en/reference/tools#smolagents.Tool)</rettype><retdesc>The Space, as a tool.</retdesc></docstring>

Creates a [Tool](/docs/smolagents/main/en/reference/tools#smolagents.Tool) from a Space given its id on the Hub.







<ExampleCodeBlock anchor="smolagents.Tool.from_space.example">

Examples:
```py
>>> image_generator = Tool.from_space(
...     space_id="black-forest-labs/FLUX.1-schnell",
...     name="image-generator",
...     description="Generate an image from a prompt"
... )
>>> image = image_generator("Generate an image of a cool surfer in Tahiti")
```

</ExampleCodeBlock>
<ExampleCodeBlock anchor="smolagents.Tool.from_space.example-2">

```py
>>> face_swapper = Tool.from_space(
...     "tuan2308/face-swap",
...     "face_swapper",
...     "Tool that puts the face shown on the first image on the second image. You can give it paths to images.",
... )
>>> image = face_swapper('./aymeric.jpeg', './ruth.jpg')
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>push_to_hub</name><anchor>smolagents.Tool.push_to_hub</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/tools.py#L421</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "commit_message", "val": ": str = 'Upload tool'"}, {"name": "private", "val": ": bool | None = None"}, {"name": "token", "val": ": bool | str | None = None"}, {"name": "create_pr", "val": ": bool = False"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The name of the repository you want to push your tool to. It should contain your organization name when
  pushing to a given organization.
- **commit_message** (`str`, *optional*, defaults to `"Upload tool"`) --
  Message to commit while pushing.
- **private** (`bool`, *optional*) --
  Whether to make the repo private. If `None` (default), the repo will be public unless the organization's default is private. This value is ignored if the repo already exists.
- **token** (`bool` or `str`, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If unset, will use the token generated
  when running `huggingface-cli login` (stored in `~/.huggingface`).
- **create_pr** (`bool`, *optional*, defaults to `False`) --
  Whether to create a PR with the uploaded files or directly commit.</paramsdesc><paramgroups>0</paramgroups></docstring>

Upload the tool to the Hub.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save</name><anchor>smolagents.Tool.save</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/tools.py#L390</source><parameters>[{"name": "output_dir", "val": ": str | Path"}, {"name": "tool_file_name", "val": ": str = 'tool'"}, {"name": "make_gradio_app", "val": ": bool = True"}]</parameters><paramsdesc>- **output_dir** (`str` or `Path`) -- The folder in which you want to save your tool.
- **tool_file_name** (`str`, *optional*) -- The file name in which you want to save your tool.
- **make_gradio_app** (`bool`, *optional*, defaults to True) -- Whether to also export a `requirements.txt` file and Gradio UI.</paramsdesc><paramgroups>0</paramgroups></docstring>

Saves the relevant code files for your tool so it can be pushed to the Hub. This will copy the code of your
tool into `output_dir` as well as autogenerate:

- a `{tool_file_name}.py` file containing the logic for your tool.
If you pass `make_gradio_app=True`, this will also write:
- an `app.py` file providing a UI for your tool when it is exported to a Space with `tool.push_to_hub()`
- a `requirements.txt` containing the names of the modules used by your tool (as detected when inspecting its
  code)




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>setup</name><anchor>smolagents.Tool.setup</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/tools.py#L251</source><parameters>[]</parameters></docstring>

Override this method for any operation that is expensive and needs to be executed before you start using
your tool, such as loading a large model.
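
As an illustrative sketch (hypothetical class, not the actual smolagents implementation), the lazy-initialization pattern behind `setup` looks like this:

```python
# Illustrative sketch of the lazy-setup pattern: expensive work is deferred
# until the first call, guarded by an initialization flag.
class ExpensiveTool:
    def __init__(self):
        self.is_initialized = False
        self.model = None

    def setup(self):
        # Expensive one-time work deferred until first use,
        # e.g. loading a large model into memory.
        self.model = "loaded-model"  # placeholder for a real model load
        self.is_initialized = True

    def __call__(self, prompt: str) -> str:
        if not self.is_initialized:
            self.setup()
        return f"{self.model}: {prompt}"

tool = ExpensiveTool()
output = tool("hello")  # setup() runs here, on the first call
```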


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>to_dict</name><anchor>smolagents.Tool.to_dict</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/tools.py#L292</source><parameters>[]</parameters></docstring>
Returns a dictionary representing the tool.

</div></div>

### launch_gradio_demo[[smolagents.launch_gradio_demo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>smolagents.launch_gradio_demo</name><anchor>smolagents.launch_gradio_demo</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/tools.py#L794</source><parameters>[{"name": "tool", "val": ": Tool"}]</parameters><paramsdesc>- **tool** (`Tool`) -- The tool for which to launch the demo.</paramsdesc><paramgroups>0</paramgroups></docstring>

Launches a Gradio demo for a tool. The corresponding tool class needs to properly implement the class attributes
`inputs` and `output_type`.
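
For example, a minimal tool satisfying these requirements might look like the following sketch (the tool itself is hypothetical; `launch_gradio_demo(tool)` would then build a UI from it, assuming `gradio` is installed):

```python
# Hypothetical minimal tool defining the `inputs` and `output_type`
# class attributes that launch_gradio_demo relies on.
class UppercaseTool:
    name = "uppercase"
    description = "Returns the input text in upper case."
    inputs = {"text": {"type": "string", "description": "Text to transform"}}
    output_type = "string"

    def forward(self, text: str) -> str:
        return text.upper()

tool = UppercaseTool()
# launch_gradio_demo(tool) would derive the UI from `inputs` and `output_type`.
```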




</div>

## ToolCollection[[smolagents.ToolCollection]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.ToolCollection</name><anchor>smolagents.ToolCollection</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/tools.py#L895</source><parameters>[{"name": "tools", "val": ": list[Tool]"}]</parameters></docstring>

Tool collections enable loading a collection of tools into the agent's toolbox.

Collections can be loaded from a collection in the Hub or from an MCP server, see:
- [ToolCollection.from_hub()](/docs/smolagents/main/en/reference/tools#smolagents.ToolCollection.from_hub)
- [ToolCollection.from_mcp()](/docs/smolagents/main/en/reference/tools#smolagents.ToolCollection.from_mcp)

For examples and usage, see: [ToolCollection.from_hub()](/docs/smolagents/main/en/reference/tools#smolagents.ToolCollection.from_hub) and [ToolCollection.from_mcp()](/docs/smolagents/main/en/reference/tools#smolagents.ToolCollection.from_mcp)



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_hub</name><anchor>smolagents.ToolCollection.from_hub</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/tools.py#L909</source><parameters>[{"name": "collection_slug", "val": ": str"}, {"name": "token", "val": ": str | None = None"}, {"name": "trust_remote_code", "val": ": bool = False"}]</parameters><paramsdesc>- **collection_slug** (str) -- The collection slug referencing the collection.
- **token** (str, *optional*) -- The authentication token if the collection is private.
- **trust_remote_code** (bool, *optional*, defaults to False) -- Whether to trust the remote code.</paramsdesc><paramgroups>0</paramgroups><rettype>ToolCollection</rettype><retdesc>A tool collection instance loaded with the tools.</retdesc></docstring>
Loads a tool collection from the Hub, adding the tools from all Spaces in the collection to the agent's toolbox.

> [!NOTE]
> Only Spaces will be fetched, so you can freely add models and datasets to your collection if you'd
> like it to showcase them.







<ExampleCodeBlock anchor="smolagents.ToolCollection.from_hub.example">

Example:
```py
>>> from smolagents import ToolCollection, CodeAgent

>>> image_tool_collection = ToolCollection.from_hub("huggingface-tools/diffusion-tools-6630bb19a942c2306a2cdb6f")
>>> agent = CodeAgent(tools=[*image_tool_collection.tools], add_base_tools=True)

>>> agent.run("Please draw me a picture of rivers and lakes.")
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_mcp</name><anchor>smolagents.ToolCollection.from_mcp</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/tools.py#L949</source><parameters>[{"name": "server_parameters", "val": ": 'mcp.StdioServerParameters' | dict"}, {"name": "trust_remote_code", "val": ": bool = False"}, {"name": "structured_output", "val": ": bool | None = None"}]</parameters><paramsdesc>- **server_parameters** (`mcp.StdioServerParameters` or `dict`) --
  Configuration parameters to connect to the MCP server. This can be:

  - An instance of `mcp.StdioServerParameters` for connecting a Stdio MCP server via standard input/output using a subprocess.

  - A `dict` with at least:
    - "url": URL of the server.
    - "transport": Transport protocol to use, one of:
      - "streamable-http": Streamable HTTP transport (default).
      - "sse": Legacy HTTP+SSE transport (deprecated).
- **trust_remote_code** (`bool`, *optional*, defaults to `False`) --
  Whether to trust the execution of code from tools defined on the MCP server.
  This option should only be set to `True` if you trust the MCP server,
  and understand the risks associated with running remote code on your local machine.
  If set to `False`, loading tools from MCP will fail.
- **structured_output** (`bool`, *optional*, defaults to `False`) --
  Whether to enable structured output features for MCP tools. If True, enables:
  - Support for outputSchema in MCP tools
  - Structured content handling (structuredContent from MCP responses)
  - JSON parsing fallback for structured data
  If False, uses the original simple text-only behavior for backwards compatibility.</paramsdesc><paramgroups>0</paramgroups><rettype>ToolCollection</rettype><retdesc>A tool collection instance.</retdesc></docstring>
Automatically load a tool collection from an MCP server.

This method supports Stdio, Streamable HTTP, and legacy HTTP+SSE MCP servers. Look at the `server_parameters`
argument for more details on how to connect to each MCP server.

Note: a separate thread will be spawned to run an asyncio event loop handling
the MCP server.







<ExampleCodeBlock anchor="smolagents.ToolCollection.from_mcp.example">

Example with a Stdio MCP server:
```py
>>> import os
>>> from smolagents import ToolCollection, CodeAgent, InferenceClientModel
>>> from mcp import StdioServerParameters

>>> model = InferenceClientModel()

>>> server_parameters = StdioServerParameters(
...     command="uvx",
...     args=["--quiet", "pubmedmcp@0.1.3"],
...     env={"UV_PYTHON": "3.12", **os.environ},
... )

>>> with ToolCollection.from_mcp(server_parameters, trust_remote_code=True) as tool_collection:
...     agent = CodeAgent(tools=[*tool_collection.tools], add_base_tools=True, model=model)
...     agent.run("Please find a remedy for hangover.")
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="smolagents.ToolCollection.from_mcp.example-2">

Example with structured output enabled:
```py
>>> with ToolCollection.from_mcp(server_parameters, trust_remote_code=True, structured_output=True) as tool_collection:
...     agent = CodeAgent(tools=[*tool_collection.tools], add_base_tools=True, model=model)
...     agent.run("Please find a remedy for hangover.")
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="smolagents.ToolCollection.from_mcp.example-3">

Example with a Streamable HTTP MCP server:
```py
>>> with ToolCollection.from_mcp({"url": "http://127.0.0.1:8000/mcp", "transport": "streamable-http"}, trust_remote_code=True) as tool_collection:
...     agent = CodeAgent(tools=[*tool_collection.tools], add_base_tools=True, model=model)
...     agent.run("Please find a remedy for hangover.")
```

</ExampleCodeBlock>


</div></div>

## MCP Client[[smolagents.MCPClient]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.MCPClient</name><anchor>smolagents.MCPClient</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/mcp_client.py#L33</source><parameters>[{"name": "server_parameters", "val": ": 'StdioServerParameters' | dict[str, Any] | list['StdioServerParameters' | dict[str, Any]]"}, {"name": "adapter_kwargs", "val": ": dict[str, Any] | None = None"}, {"name": "structured_output", "val": ": bool | None = None"}]</parameters><paramsdesc>- **server_parameters** (StdioServerParameters | dict[str, Any] | list[StdioServerParameters | dict[str, Any]]) --
  Configuration parameters to connect to the MCP server. Can be a list if you want to connect multiple MCPs at once.

  - An instance of `mcp.StdioServerParameters` for connecting a Stdio MCP server via standard input/output using a subprocess.

  - A `dict` with at least:
    - "url": URL of the server.
    - "transport": Transport protocol to use, one of:
      - "streamable-http": Streamable HTTP transport (default).
      - "sse": Legacy HTTP+SSE transport (deprecated).
- **adapter_kwargs** (dict[str, Any], optional) --
  Additional keyword arguments to be passed directly to `MCPAdapt`.
- **structured_output** (bool, optional, defaults to False) --
  Whether to enable structured output features for MCP tools. If True, enables:
  - Support for outputSchema in MCP tools
  - Structured content handling (structuredContent from MCP responses)
  - JSON parsing fallback for structured data
  If False, uses the original simple text-only behavior for backwards compatibility.</paramsdesc><paramgroups>0</paramgroups></docstring>
Manages the connection to an MCP server and makes its tools available to SmolAgents.

Note: tools can only be accessed after the connection has been started with the
`connect()` method, which is called during initialization. If you don't use the context manager,
we strongly encourage you to use `try ... finally` to ensure the connection is cleaned up.



<ExampleCodeBlock anchor="smolagents.MCPClient.example">

Example:
```python
# fully managed context manager + stdio
with MCPClient(...) as tools:
    ...  # tools are now available

# context manager + Streamable HTTP transport:
with MCPClient({"url": "http://localhost:8000/mcp", "transport": "streamable-http"}) as tools:
    ...  # tools are now available

# Enable structured output for advanced MCP tools:
with MCPClient(server_parameters, structured_output=True) as tools:
    ...  # tools with structured output support are now available

# manually manage the connection via the mcp_client object:
mcp_client = MCPClient(...)
try:
    tools = mcp_client.get_tools()
    # use your tools here.
finally:
    mcp_client.disconnect()
```

</ExampleCodeBlock>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>connect</name><anchor>smolagents.MCPClient.connect</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/mcp_client.py#L124</source><parameters>[]</parameters></docstring>
Connect to the MCP server and initialize the tools.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disconnect</name><anchor>smolagents.MCPClient.disconnect</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/mcp_client.py#L128</source><parameters>[{"name": "exc_type", "val": ": type[BaseException] | None = None"}, {"name": "exc_value", "val": ": BaseException | None = None"}, {"name": "exc_traceback", "val": ": TracebackType | None = None"}]</parameters></docstring>
Disconnect from the MCP server.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_tools</name><anchor>smolagents.MCPClient.get_tools</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/mcp_client.py#L137</source><parameters>[]</parameters><rettype>list[Tool]</rettype><retdesc>The SmolAgents tools available from the MCP server.</retdesc><raises>- ``ValueError`` -- If the MCP server tools is None (usually assuming the server is not started).</raises><raisederrors>``ValueError``</raisederrors></docstring>
The SmolAgents tools available from the MCP server.

Note: for now, this always returns the tools available at the creation of the session;
a future release may also return any new tools the MCP server makes available at call time.










</div></div>

## Agent Types

Agents can handle any type of object passed between tools: tools are fully multimodal and can accept and return
text, images, audio, video, and other types. To increase compatibility between tools, and to
correctly render these return values in ipython (Jupyter, Colab, ipython notebooks, ...), we implement wrapper classes
around these types.

The wrapped objects continue to behave like the originals: a text object still behaves as a string, and an image
object still behaves as a `PIL.Image.Image`.

These types have three specific purposes:

- Calling `to_raw` on the type should return the underlying object
- Calling `to_string` on the type should return the object as a string: that can be the string in case of an `AgentText`
  but will be the path of the serialized version of the object in other instances
- Displaying it in an ipython kernel should display the object correctly
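
As a rough sketch of this contract (illustrative only, not the real `AgentText` implementation), a text wrapper could look like:

```python
# Illustrative text wrapper: subclassing str keeps normal string behavior
# while adding the to_raw / to_string contract described above.
class TextWrapper(str):
    def to_raw(self):
        # The underlying object: for text, the string itself.
        return str(self)

    def to_string(self):
        # For text, the string form is the value itself; image/audio
        # wrappers would instead return a path to a serialized file.
        return str(self)

t = TextWrapper("hello")
```

Because the wrapper subclasses `str`, all ordinary string operations keep working on it.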

### AgentText[[smolagents.AgentText]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.AgentText</name><anchor>smolagents.AgentText</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/agent_types.py#L62</source><parameters>[{"name": "value", "val": ""}]</parameters></docstring>

Text type returned by the agent. Behaves as a string.


</div>

### AgentImage[[smolagents.AgentImage]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.AgentImage</name><anchor>smolagents.AgentImage</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/agent_types.py#L74</source><parameters>[{"name": "value", "val": ""}]</parameters></docstring>

Image type returned by the agent. Behaves as a PIL.Image.Image.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save</name><anchor>smolagents.AgentImage.save</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/agent_types.py#L164</source><parameters>[{"name": "output_bytes", "val": ""}, {"name": "format", "val": ": str = None"}, {"name": "**params", "val": ""}]</parameters><paramsdesc>- **output_bytes** (bytes) -- The output bytes to save the image to.
- **format** (str) -- The format to use for the output image. The format is the same as in PIL.Image.save.
- ****params** -- Additional parameters to pass to PIL.Image.save.</paramsdesc><paramgroups>0</paramgroups></docstring>

Saves the image to a file.



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>to_raw</name><anchor>smolagents.AgentImage.to_raw</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/agent_types.py#L119</source><parameters>[]</parameters></docstring>

Returns the "raw" version of that object. In the case of an AgentImage, it is a PIL.Image.Image.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>to_string</name><anchor>smolagents.AgentImage.to_string</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/agent_types.py#L136</source><parameters>[]</parameters></docstring>

Returns the stringified version of that object. In the case of an AgentImage, it is a path to the serialized
version of the image.


</div></div>

### AgentAudio[[smolagents.AgentAudio]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.AgentAudio</name><anchor>smolagents.AgentAudio</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/agent_types.py#L176</source><parameters>[{"name": "value", "val": ""}, {"name": "samplerate", "val": " = 16000"}]</parameters></docstring>

Audio type returned by the agent.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>to_raw</name><anchor>smolagents.AgentAudio.to_raw</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/agent_types.py#L216</source><parameters>[]</parameters></docstring>

Returns the "raw" version of that object. It is a `torch.Tensor` object.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>to_string</name><anchor>smolagents.AgentAudio.to_string</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/agent_types.py#L237</source><parameters>[]</parameters></docstring>

Returns the stringified version of that object. In the case of an AgentAudio, it is a path to the serialized
version of the audio.


</div></div>

<EditOnGithub source="https://github.com/huggingface/smolagents/blob/main/docs/source/en/reference/tools.md" />

### Models
https://huggingface.co/docs/smolagents/main/reference/models.md

# Models

<Tip warning={true}>

Smolagents is an experimental API which is subject to change at any time. Results returned by the agents
can vary as the APIs or underlying models are prone to change.

</Tip>

To learn more about agents and tools make sure to read the [introductory guide](../index). This page
contains the API docs for the underlying classes.

## Models

All model classes in smolagents support passing additional keyword arguments (like `temperature`, `max_tokens`, `top_p`, etc.) directly at instantiation time.
These parameters are automatically forwarded to the underlying model's completion calls, allowing you to configure model behavior such as creativity, response length, and sampling strategies.

### Base Model[[smolagents.Model]]

The `Model` class serves as the foundation for all model implementations, providing the core interface that custom models must implement to work with agents.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.Model</name><anchor>smolagents.Model</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/models.py#L393</source><parameters>[{"name": "flatten_messages_as_text", "val": ": bool = False"}, {"name": "tool_name_key", "val": ": str = 'name'"}, {"name": "tool_arguments_key", "val": ": str = 'arguments'"}, {"name": "model_id", "val": ": str | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **flatten_messages_as_text** (`bool`, default `False`) --
  Whether to flatten complex message content into plain text format.
- **tool_name_key** (`str`, default `"name"`) --
  The key used to extract tool names from model responses.
- **tool_arguments_key** (`str`, default `"arguments"`) --
  The key used to extract tool arguments from model responses.
- **model_id** (`str`, *optional*) --
  Identifier for the specific model being used.
- ****kwargs** --
  Additional keyword arguments to forward to the underlying model completion call.</paramsdesc><paramgroups>0</paramgroups></docstring>
Base class for all language model implementations.

This abstract class defines the core interface that all model implementations must follow
to work with agents. It provides common functionality for message handling, tool integration,
and model configuration while allowing subclasses to implement their specific generation logic.



Note:
This is an abstract base class. Subclasses must implement the `generate()` method
to provide actual model inference capabilities.

<ExampleCodeBlock anchor="smolagents.Model.example">

Example:
```python
class CustomModel(Model):
    def generate(self, messages, **kwargs):
        # Implementation specific to your model
        pass
```

</ExampleCodeBlock>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>generate</name><anchor>smolagents.Model.generate</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/models.py#L494</source><parameters>[{"name": "messages", "val": ": list"}, {"name": "stop_sequences", "val": ": list[str] | None = None"}, {"name": "response_format", "val": ": dict[str, str] | None = None"}, {"name": "tools_to_call_from", "val": ": list[smolagents.tools.Tool] | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **messages** (`list[dict[str, str | list[dict]]] | list[ChatMessage]`) --
  A list of message dictionaries to be processed. Each dictionary should have the structure `{"role": "user/system", "content": "message content"}`.
- **stop_sequences** (`List[str]`, *optional*) --
  A list of strings that will stop the generation if encountered in the model's output.
- **response_format** (`dict[str, str]`, *optional*) --
  The response format to use in the model's response.
- **tools_to_call_from** (`List[Tool]`, *optional*) --
  A list of tools that the model can use to generate responses.
- ****kwargs** --
  Additional keyword arguments to be passed to the underlying model.</paramsdesc><paramgroups>0</paramgroups><rettype>`ChatMessage`</rettype><retdesc>A chat message object containing the model's response.</retdesc></docstring>
Process the input messages and return the model's response.
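
For reference, the `messages` argument follows the chat structure described in the parameter list above; a minimal example:

```python
# A minimal messages list in the dict structure generate() accepts:
# each entry has a "role" and a "content" (a string or a list of content blocks).
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": [{"type": "text", "text": "Say hello."}]},
]
```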








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>parse_tool_calls</name><anchor>smolagents.Model.parse_tool_calls</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/models.py#L524</source><parameters>[{"name": "message", "val": ": ChatMessage"}]</parameters></docstring>
Sometimes APIs do not return the tool call as a specific object, so we need to parse it.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>to_dict</name><anchor>smolagents.Model.to_dict</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/models.py#L537</source><parameters>[]</parameters></docstring>

Converts the model into a JSON-compatible dictionary.


</div></div>

### API Model[[smolagents.ApiModel]]

The `ApiModel` class serves as the foundation for all API-based model implementations, providing common functionality for external API interactions, rate limiting, and client management that API-specific models inherit.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.ApiModel</name><anchor>smolagents.ApiModel</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/models.py#L1066</source><parameters>[{"name": "model_id", "val": ": str"}, {"name": "custom_role_conversions", "val": ": dict[str, str] | None = None"}, {"name": "client", "val": ": typing.Optional[typing.Any] = None"}, {"name": "requests_per_minute", "val": ": float | None = None"}, {"name": "retry", "val": ": bool = True"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model_id** (`str`) --
  The identifier for the model to be used with the API.
- **custom_role_conversions** (`dict[str, str]`, *optional*) --
  Mapping to convert between internal role names and API-specific role names. Defaults to None.
- **client** (`Any`, *optional*) --
  Pre-configured API client instance. If not provided, a default client will be created. Defaults to None.
- **requests_per_minute** (`float`, *optional*) --
  Rate limit in requests per minute.
- **retry** (`bool`, *optional*) --
  Whether to retry on rate limit errors, up to RETRY_MAX_ATTEMPTS times. Defaults to True.
- ****kwargs** --
  Additional keyword arguments to forward to the underlying model completion call.</paramsdesc><paramgroups>0</paramgroups></docstring>

Base class for API-based language models.

This class serves as a foundation for implementing models that interact with
external APIs. It handles the common functionality for managing model IDs,
custom role mappings, and API client connections.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_client</name><anchor>smolagents.ApiModel.create_client</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/models.py#L1111</source><parameters>[]</parameters></docstring>
Create the API client for the specific service.
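
Subclasses override `create_client` to build their service-specific client; a rough, hypothetical sketch of the pattern (not the real smolagents implementation):

```python
# Hypothetical sketch of the ApiModel pattern: create_client() returns the
# service-specific client, which the constructor caches on the instance.
class FakeApiModel:
    def __init__(self, model_id: str):
        self.model_id = model_id
        self.client = self.create_client()

    def create_client(self):
        # A real subclass would return e.g. an OpenAI client or a
        # huggingface_hub.InferenceClient instance here.
        return {"service": "fake", "model_id": self.model_id}

m = FakeApiModel("my-model")
```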

</div></div>

### TransformersModel[[smolagents.TransformersModel]]

For convenience, we have added a `TransformersModel` that implements the points above by building a local `transformers` pipeline for the `model_id` given at initialization.

```python
from smolagents import TransformersModel

model = TransformersModel(model_id="HuggingFaceTB/SmolLM-135M-Instruct")

print(model([{"role": "user", "content": [{"type": "text", "text": "Ok!"}]}], stop_sequences=["great"]))
```
```text
>>> What a
```

You can pass any keyword arguments supported by the underlying model (such as `temperature`, `max_new_tokens`, `top_p`, etc.) directly at instantiation time. These are forwarded to the model completion call:

```python
model = TransformersModel(
    model_id="HuggingFaceTB/SmolLM-135M-Instruct",
    temperature=0.7,
    max_new_tokens=1000
)
```

> [!TIP]
> You must have `transformers` and `torch` installed on your machine. Please run `pip install 'smolagents[transformers]'` if it's not the case.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.TransformersModel</name><anchor>smolagents.TransformersModel</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/models.py#L793</source><parameters>[{"name": "model_id", "val": ": str | None = None"}, {"name": "device_map", "val": ": str | None = None"}, {"name": "torch_dtype", "val": ": str | None = None"}, {"name": "trust_remote_code", "val": ": bool = False"}, {"name": "model_kwargs", "val": ": dict[str, typing.Any] | None = None"}, {"name": "max_new_tokens", "val": ": int = 4096"}, {"name": "max_tokens", "val": ": int | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model_id** (`str`) --
  The Hugging Face model ID to be used for inference. This can be a path or model identifier from the Hugging Face model hub.
  For example, `"Qwen/Qwen3-Next-80B-A3B-Thinking"`.
- **device_map** (`str`, *optional*) --
  The device_map to initialize your model with.
- **torch_dtype** (`str`, *optional*) --
  The torch_dtype to initialize your model with.
- **trust_remote_code** (bool, default `False`) --
  Some models on the Hub require running remote code: for this model, you would have to set this flag to True.
- **model_kwargs** (`dict[str, Any]`, *optional*) --
  Additional keyword arguments to pass to `AutoModel.from_pretrained` (like revision, model_args, config, etc.).
- **max_new_tokens** (`int`, default `4096`) --
  Maximum number of new tokens to generate, ignoring the number of tokens in the prompt.
- **max_tokens** (`int`, *optional*) --
  Alias for `max_new_tokens`. If provided, this value takes precedence.
- ****kwargs** --
  Additional keyword arguments to forward to the underlying Transformers model generate call, such as `device`.</paramsdesc><paramgroups>0</paramgroups><raises>- ``ValueError`` -- 
  If the model name is not provided.</raises><raisederrors>``ValueError``</raisederrors></docstring>
A class that uses Hugging Face's Transformers library for language model interaction.

This model allows you to load and use Hugging Face's models locally using the Transformers library. It supports features like stop sequences and grammar customization.

> [!TIP]
> You must have `transformers` and `torch` installed on your machine. Please run `pip install 'smolagents[transformers]'` if it's not the case.







<ExampleCodeBlock anchor="smolagents.TransformersModel.example">

Example:
```python
>>> engine = TransformersModel(
...     model_id="Qwen/Qwen3-Next-80B-A3B-Thinking",
...     device="cuda",
...     max_new_tokens=5000,
... )
>>> messages = [{"role": "user", "content": "Explain quantum mechanics in simple terms."}]
>>> response = engine(messages, stop_sequences=["END"])
>>> print(response)
"Quantum mechanics is the branch of physics that studies..."
```

</ExampleCodeBlock>


</div>

### InferenceClientModel[[smolagents.InferenceClientModel]]

The `InferenceClientModel` wraps huggingface_hub's [InferenceClient](https://huggingface.co/docs/huggingface_hub/main/en/guides/inference) for the execution of the LLM. It supports all [Inference Providers](https://huggingface.co/docs/inference-providers/index) available on the Hub: Cerebras, Cohere, Fal, Fireworks, HF-Inference, Hyperbolic, Nebius, Novita, Replicate, SambaNova, Together, and more.

You can also set a rate limit in requests per minute by using the `requests_per_minute` argument:

```python
from smolagents import InferenceClientModel

messages = [
  {"role": "user", "content": [{"type": "text", "text": "Hello, how are you?"}]}
]

model = InferenceClientModel(provider="novita", requests_per_minute=60)
print(model(messages))
```
```text
>>> Of course! If you change your mind, feel free to reach out. Take care!
```
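Conceptually, a requests-per-minute cap like this enforces a minimum interval between successive calls. The sketch below is purely illustrative (it is not smolagents' internal implementation) and can be tested offline by injecting a fake clock and sleep function:

```python
import time

class RateLimiter:
    """Illustrative minimum-interval limiter: at most `rpm` calls per minute."""

    def __init__(self, rpm):
        self.min_interval = 60.0 / rpm
        self.last_call = None

    def wait(self, now=None, sleep=time.sleep):
        # Block until at least `min_interval` seconds have passed since the last call
        now = time.monotonic() if now is None else now
        if self.last_call is not None:
            remaining = self.min_interval - (now - self.last_call)
            if remaining > 0:
                sleep(remaining)
                now += remaining
        self.last_call = now
        return now
```

With `rpm=60`, two back-to-back calls end up at least one second apart.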

You can pass any keyword arguments supported by the underlying model (such as `temperature`, `max_tokens`, `top_p`, etc.) directly at instantiation time. These are forwarded to the model completion call:

```python
model = InferenceClientModel(
    provider="novita",
    requests_per_minute=60,
    temperature=0.8,
    max_tokens=500
)
```

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.InferenceClientModel</name><anchor>smolagents.InferenceClientModel</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/models.py#L1382</source><parameters>[{"name": "model_id", "val": ": str = 'Qwen/Qwen3-Next-80B-A3B-Thinking'"}, {"name": "provider", "val": ": str | None = None"}, {"name": "token", "val": ": str | None = None"}, {"name": "timeout", "val": ": int = 120"}, {"name": "client_kwargs", "val": ": dict[str, typing.Any] | None = None"}, {"name": "custom_role_conversions", "val": ": dict[str, str] | None = None"}, {"name": "api_key", "val": ": str | None = None"}, {"name": "bill_to", "val": ": str | None = None"}, {"name": "base_url", "val": ": str | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model_id** (`str`, *optional*, default `"Qwen/Qwen3-Next-80B-A3B-Thinking"`) --
  The Hugging Face model ID to be used for inference.
  This can be a model identifier from the Hugging Face model hub or a URL to a deployed Inference Endpoint.
  Currently, it defaults to `"Qwen/Qwen3-Next-80B-A3B-Thinking"`, but this may change in the future.
- **provider** (`str`, *optional*) --
  Name of the provider to use for inference. A list of supported providers can be found in the [Inference Providers documentation](https://huggingface.co/docs/inference-providers/index#partners).
  Defaults to `"auto"`, i.e. the first of the providers available for the model, sorted by the user's order [here](https://hf.co/settings/inference-providers).
  If `base_url` is passed, then `provider` is not used.
- **token** (`str`, *optional*) --
  Token used by the Hugging Face API for authentication. This token needs the 'Make calls to the serverless Inference Providers' permission.
  If the model is gated (like Llama-3 models), the token also needs 'Read access to contents of all public gated repos you can access'.
  If not provided, the class will try to use environment variable 'HF_TOKEN', else use the token stored in the Hugging Face CLI configuration.
- **timeout** (`int`, *optional*, defaults to 120) --
  Timeout for the API request, in seconds.
- **client_kwargs** (`dict[str, Any]`, *optional*) --
  Additional keyword arguments to pass to the Hugging Face InferenceClient.
- **custom_role_conversions** (`dict[str, str]`, *optional*) --
  Custom role conversion mapping to convert message roles into others.
  Useful for specific models that do not support specific message roles like "system".
- **api_key** (`str`, *optional*) --
  Token to use for authentication. This is a duplicated argument from `token` to make [InferenceClientModel](/docs/smolagents/main/en/reference/models#smolagents.InferenceClientModel)
  follow the same pattern as `openai.OpenAI` client. Cannot be used if `token` is set. Defaults to None.
- **bill_to** (`str`, *optional*) --
  The billing account to use for the requests. By default the requests are billed on the user's account. Requests can only be billed to
  an organization the user is a member of, and which has subscribed to Enterprise Hub.
- **base_url** (`str`, *optional*) --
  Base URL to run inference. This is a duplicated argument from `model_id` to make [InferenceClientModel](/docs/smolagents/main/en/reference/models#smolagents.InferenceClientModel)
  follow the same pattern as `openai.OpenAI` client. Cannot be used if `model_id` is set. Defaults to None.
- ****kwargs** --
  Additional keyword arguments to forward to the underlying Hugging Face InferenceClient completion call.</paramsdesc><paramgroups>0</paramgroups><raises>- ``ValueError`` -- 
  If the model name is not provided.</raises><raisederrors>``ValueError``</raisederrors></docstring>
A class to interact with Hugging Face's Inference Providers for language model interaction.

This model allows you to communicate with Hugging Face's models using Inference Providers. It can be used in serverless mode, with a dedicated endpoint, or even with a local URL, and it supports features like stop sequences and grammar customization.

Providers include Cerebras, Cohere, Fal, Fireworks, HF-Inference, Hyperbolic, Nebius, Novita, Replicate, SambaNova, Together, and more.







<ExampleCodeBlock anchor="smolagents.InferenceClientModel.example">

Example:
```python
>>> engine = InferenceClientModel(
...     model_id="Qwen/Qwen3-Next-80B-A3B-Thinking",
...     provider="hyperbolic",
...     token="your_hf_token_here",
...     max_tokens=5000,
... )
>>> messages = [{"role": "user", "content": "Explain quantum mechanics in simple terms."}]
>>> response = engine(messages, stop_sequences=["END"])
>>> print(response)
"Quantum mechanics is the branch of physics that studies..."
```

</ExampleCodeBlock>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_client</name><anchor>smolagents.InferenceClientModel.create_client</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/models.py#L1473</source><parameters>[]</parameters></docstring>
Create the Hugging Face client.

</div></div>

### LiteLLMModel[[smolagents.LiteLLMModel]]

The `LiteLLMModel` leverages [LiteLLM](https://www.litellm.ai/) to support 100+ LLMs from various providers.
You can pass kwargs upon model initialization that will then be used whenever using the model, for instance below we pass `temperature`. You can also set a rate limit in requests per minute by using the `requests_per_minute` argument.

```python
from smolagents import LiteLLMModel

messages = [
  {"role": "user", "content": [{"type": "text", "text": "Hello, how are you?"}]}
]

model = LiteLLMModel(model_id="anthropic/claude-3-5-sonnet-latest", temperature=0.2, max_tokens=10, requests_per_minute=60)
print(model(messages))
```

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.LiteLLMModel</name><anchor>smolagents.LiteLLMModel</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/models.py#L1131</source><parameters>[{"name": "model_id", "val": ": str | None = None"}, {"name": "api_base", "val": ": str | None = None"}, {"name": "api_key", "val": ": str | None = None"}, {"name": "custom_role_conversions", "val": ": dict[str, str] | None = None"}, {"name": "flatten_messages_as_text", "val": ": bool | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model_id** (`str`) --
  The model identifier to use on the server (e.g. "gpt-3.5-turbo").
- **api_base** (`str`, *optional*) --
  The base URL of the provider API to call the model.
- **api_key** (`str`, *optional*) --
  The API key to use for authentication.
- **custom_role_conversions** (`dict[str, str]`, *optional*) --
  Custom role conversion mapping to convert message roles into others.
  Useful for specific models that do not support specific message roles like "system".
- **flatten_messages_as_text** (`bool`, *optional*) -- Whether to flatten messages as text.
  Defaults to `True` for models that start with "ollama", "groq", "cerebras".
- ****kwargs** --
  Additional keyword arguments to forward to the underlying LiteLLM completion call.</paramsdesc><paramgroups>0</paramgroups></docstring>
Model to use [LiteLLM Python SDK](https://docs.litellm.ai/docs/#litellm-python-sdk) to access hundreds of LLMs.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_client</name><anchor>smolagents.LiteLLMModel.create_client</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/models.py#L1181</source><parameters>[]</parameters></docstring>
Create the LiteLLM client.

</div></div>

### LiteLLMRouterModel[[smolagents.LiteLLMRouterModel]]

The `LiteLLMRouterModel` is a wrapper around the [LiteLLM Router](https://docs.litellm.ai/docs/routing) that leverages
advanced routing strategies: load-balancing across multiple deployments, prioritizing critical requests via queueing,
and implementing basic reliability measures such as cooldowns, fallbacks, and exponential backoff retries.

```python
from smolagents import LiteLLMRouterModel

messages = [
  {"role": "user", "content": [{"type": "text", "text": "Hello, how are you?"}]}
]

model = LiteLLMRouterModel(
    model_id="llama-3.3-70b",
    model_list=[
        {
            "model_name": "llama-3.3-70b",
            "litellm_params": {"model": "groq/llama-3.3-70b", "api_key": os.getenv("GROQ_API_KEY")},
        },
        {
            "model_name": "llama-3.3-70b",
            "litellm_params": {"model": "cerebras/llama-3.3-70b", "api_key": os.getenv("CEREBRAS_API_KEY")},
        },
    ],
    client_kwargs={
        "routing_strategy": "simple-shuffle",
    },
)
print(model(messages))
```
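The `simple-shuffle` strategy above amounts to picking one deployment at random among those registered under the requested group name. The following is a hedged, illustrative sketch of that idea, not LiteLLM's actual routing code:

```python
import random

def pick_deployment(model_list, model_name, rng=random):
    """Simple-shuffle sketch: choose uniformly among deployments in a group."""
    # Keep only deployments whose group name matches the requested model_id
    group = [d for d in model_list if d["model_name"] == model_name]
    if not group:
        raise ValueError(f"No deployment registered under {model_name!r}")
    return rng.choice(group)
```

Because every request re-draws a deployment, load spreads evenly across providers over many calls, which is what makes the strategy a reasonable default for stateless completion traffic.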

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.LiteLLMRouterModel</name><anchor>smolagents.LiteLLMRouterModel</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/models.py#L1289</source><parameters>[{"name": "model_id", "val": ": str"}, {"name": "model_list", "val": ": list"}, {"name": "client_kwargs", "val": ": dict[str, typing.Any] | None = None"}, {"name": "custom_role_conversions", "val": ": dict[str, str] | None = None"}, {"name": "flatten_messages_as_text", "val": ": bool | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model_id** (`str`) --
  Identifier for the model group to use from the model list (e.g., "model-group-1").
- **model_list** (`list[dict[str, Any]]`) --
  Model configurations to be used for routing.
  Each configuration should include the model group name and any necessary parameters.
  For more details, refer to the [LiteLLM Routing](https://docs.litellm.ai/docs/routing#quick-start) documentation.
- **client_kwargs** (`dict[str, Any]`, *optional*) --
  Additional configuration parameters for the Router client. For more details, see the
  [LiteLLM Routing Configurations](https://docs.litellm.ai/docs/routing).
- **custom_role_conversions** (`dict[str, str]`, *optional*) --
  Custom role conversion mapping to convert message roles into others.
  Useful for specific models that do not support specific message roles like "system".
- **flatten_messages_as_text** (`bool`, *optional*) -- Whether to flatten messages as text.
  Defaults to `True` for models that start with "ollama", "groq", "cerebras".
- ****kwargs** --
  Additional keyword arguments to forward to the underlying LiteLLM Router completion call.</paramsdesc><paramgroups>0</paramgroups></docstring>
Router-based client for interacting with the [LiteLLM Python SDK Router](https://docs.litellm.ai/docs/routing).

This class provides a high-level interface for distributing requests among multiple language models using
the LiteLLM SDK's routing capabilities. It is responsible for initializing and configuring the router client,
applying custom role conversions, and managing message formatting to ensure seamless integration with various LLMs.



<ExampleCodeBlock anchor="smolagents.LiteLLMRouterModel.example">

Example:
```python
>>> import os
>>> from smolagents import CodeAgent, WebSearchTool, LiteLLMRouterModel
>>> os.environ["OPENAI_API_KEY"] = ""
>>> os.environ["AWS_ACCESS_KEY_ID"] = ""
>>> os.environ["AWS_SECRET_ACCESS_KEY"] = ""
>>> os.environ["AWS_REGION"] = ""
>>> llm_loadbalancer_model_list = [
...     {
...         "model_name": "model-group-1",
...         "litellm_params": {
...             "model": "gpt-4o-mini",
...             "api_key": os.getenv("OPENAI_API_KEY"),
...         },
...     },
...     {
...         "model_name": "model-group-1",
...         "litellm_params": {
...             "model": "bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
...             "aws_access_key_id": os.getenv("AWS_ACCESS_KEY_ID"),
...             "aws_secret_access_key": os.getenv("AWS_SECRET_ACCESS_KEY"),
...             "aws_region_name": os.getenv("AWS_REGION"),
...         },
...     },
... ]
>>> model = LiteLLMRouterModel(
...    model_id="model-group-1",
...    model_list=llm_loadbalancer_model_list,
...    client_kwargs={
...        "routing_strategy":"simple-shuffle"
...    }
... )
>>> agent = CodeAgent(tools=[WebSearchTool()], model=model)
>>> agent.run("How many seconds would it take for a leopard at full speed to run through Pont des Arts?")
```

</ExampleCodeBlock>


</div>

### OpenAIModel[[smolagents.OpenAIModel]]

This class lets you call any model served through an OpenAI-compatible API.
Here's how you can set it up (you can customize the `api_base` URL to point to another server):
```py
import os
from smolagents import OpenAIModel

model = OpenAIModel(
    model_id="gpt-4o",
    api_base="https://api.openai.com/v1",
    api_key=os.environ["OPENAI_API_KEY"],
)
```

You can pass any keyword arguments supported by the underlying model (such as `temperature`, `max_tokens`, `top_p`, etc.) directly at instantiation time. These are forwarded to the model completion call:

```py
model = OpenAIModel(
    model_id="gpt-4o",
    api_base="https://api.openai.com/v1",
    api_key=os.environ["OPENAI_API_KEY"],
    temperature=0.7,
    max_tokens=1000,
    top_p=0.9,
)
```

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.OpenAIModel</name><anchor>smolagents.OpenAIModel</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/models.py#L1572</source><parameters>[{"name": "model_id", "val": ": str"}, {"name": "api_base", "val": ": str | None = None"}, {"name": "api_key", "val": ": str | None = None"}, {"name": "organization", "val": ": str | None = None"}, {"name": "project", "val": ": str | None = None"}, {"name": "client_kwargs", "val": ": dict[str, typing.Any] | None = None"}, {"name": "custom_role_conversions", "val": ": dict[str, str] | None = None"}, {"name": "flatten_messages_as_text", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model_id** (`str`) --
  The model identifier to use on the server (e.g. "gpt-5").
- **api_base** (`str`, *optional*) --
  The base URL of the OpenAI-compatible API server.
- **api_key** (`str`, *optional*) --
  The API key to use for authentication.
- **organization** (`str`, *optional*) --
  The organization to use for the API request.
- **project** (`str`, *optional*) --
  The project to use for the API request.
- **client_kwargs** (`dict[str, Any]`, *optional*) --
  Additional keyword arguments to pass to the OpenAI client (like organization, project, max_retries etc.).
- **custom_role_conversions** (`dict[str, str]`, *optional*) --
  Custom role conversion mapping to convert message roles into others.
  Useful for specific models that do not support specific message roles like "system".
- **flatten_messages_as_text** (`bool`, default `False`) --
  Whether to flatten messages as text.
- ****kwargs** --
  Additional keyword arguments to forward to the underlying OpenAI API completion call, for instance `temperature`.</paramsdesc><paramgroups>0</paramgroups></docstring>
This model connects to an OpenAI-compatible API server.




</div>

### AzureOpenAIModel[[smolagents.AzureOpenAIModel]]

`AzureOpenAIModel` allows you to connect to any Azure OpenAI deployment. 

Below you can find an example of how to set it up, note that you can omit the `azure_endpoint`, `api_key`, and `api_version` arguments, provided you've set the corresponding environment variables -- `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_API_KEY`, and `OPENAI_API_VERSION`.

Pay attention to the lack of an `AZURE_` prefix for `OPENAI_API_VERSION`: this is due to the way the underlying [openai](https://github.com/openai/openai-python) package is designed.

```py
import os

from smolagents import AzureOpenAIModel

model = AzureOpenAIModel(
    model_id=os.environ.get("AZURE_OPENAI_MODEL"),
    azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
    api_key=os.environ.get("AZURE_OPENAI_API_KEY"),
    api_version=os.environ.get("OPENAI_API_VERSION"),
)
```

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.AzureOpenAIModel</name><anchor>smolagents.AzureOpenAIModel</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/models.py#L1725</source><parameters>[{"name": "model_id", "val": ": str"}, {"name": "azure_endpoint", "val": ": str | None = None"}, {"name": "api_key", "val": ": str | None = None"}, {"name": "api_version", "val": ": str | None = None"}, {"name": "client_kwargs", "val": ": dict[str, typing.Any] | None = None"}, {"name": "custom_role_conversions", "val": ": dict[str, str] | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model_id** (`str`) --
  The model deployment name to use when connecting (e.g. "gpt-4o-mini").
- **azure_endpoint** (`str`, *optional*) --
  The Azure endpoint, including the resource, e.g. `https://example-resource.azure.openai.com/`. If not provided, it will be inferred from the `AZURE_OPENAI_ENDPOINT` environment variable.
- **api_key** (`str`, *optional*) --
  The API key to use for authentication. If not provided, it will be inferred from the `AZURE_OPENAI_API_KEY` environment variable.
- **api_version** (`str`, *optional*) --
  The API version to use. If not provided, it will be inferred from the `OPENAI_API_VERSION` environment variable.
- **client_kwargs** (`dict[str, Any]`, *optional*) --
  Additional keyword arguments to pass to the AzureOpenAI client (like organization, project, max_retries etc.).
- **custom_role_conversions** (`dict[str, str]`, *optional*) --
  Custom role conversion mapping to convert message roles into others.
  Useful for specific models that do not support specific message roles like "system".
- ****kwargs** --
  Additional keyword arguments to forward to the underlying Azure OpenAI API completion call.</paramsdesc><paramgroups>0</paramgroups></docstring>
This model connects to an Azure OpenAI deployment.




</div>

### AmazonBedrockModel[[smolagents.AmazonBedrockModel]]

`AmazonBedrockModel` helps you connect to Amazon Bedrock and run your agent with any available models.

Below is an example setup. This class also offers additional options for customization.

```py
import os

from smolagents import AmazonBedrockModel

model = AmazonBedrockModel(
    model_id=os.environ.get("AMAZON_BEDROCK_MODEL_ID"),
)
```

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.AmazonBedrockModel</name><anchor>smolagents.AmazonBedrockModel</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/models.py#L1785</source><parameters>[{"name": "model_id", "val": ": str"}, {"name": "client", "val": " = None"}, {"name": "client_kwargs", "val": ": dict[str, typing.Any] | None = None"}, {"name": "custom_role_conversions", "val": ": dict[str, str] | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model_id** (`str`) --
  The model identifier to use on Bedrock (e.g. "us.amazon.nova-pro-v1:0").
- **client** (`boto3.client`, *optional*) --
  A custom boto3 client for AWS interactions. If not provided, a default client will be created.
- **client_kwargs** (dict[str, Any], *optional*) --
  Keyword arguments used to configure the boto3 client if it needs to be created internally.
  Examples include `region_name`, `config`, or `endpoint_url`.
- **custom_role_conversions** (`dict[str, str]`, *optional*) --
  Custom role conversion mapping to convert message roles into others.
  Useful for specific models that do not support specific message roles like "system".
  Defaults to converting all roles to "user" role to enable using all the Bedrock models.
- **flatten_messages_as_text** (`bool`, default `False`) --
  Whether to flatten messages as text.
- ****kwargs** --
  Additional keyword arguments to forward to the underlying Amazon Bedrock model converse call.</paramsdesc><paramgroups>0</paramgroups></docstring>

A model class for interacting with Amazon Bedrock models through the Bedrock API.

This class provides an interface to interact with various Bedrock language models,
allowing for customized model inference, guardrail configuration, message handling,
and other parameters allowed by the boto3 API.

Authentication:

Amazon Bedrock supports multiple authentication methods:
- Default AWS credentials:
  Use the default AWS credential chain (e.g., IAM roles, IAM users).
- API Key Authentication (requires `boto3 >= 1.39.0`):
  Set the API key using the `AWS_BEARER_TOKEN_BEDROCK` environment variable.

> [!TIP]
> API key support requires `boto3 >= 1.39.0`.
> For users not relying on API key authentication, the minimum supported version is `boto3 >= 1.36.18`.



Examples:
<ExampleCodeBlock anchor="smolagents.AmazonBedrockModel.example">

Creating a model instance with default settings:
```python
>>> bedrock_model = AmazonBedrockModel(
...     model_id='us.amazon.nova-pro-v1:0'
... )
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="smolagents.AmazonBedrockModel.example-2">

Creating a model instance with a custom boto3 client:
```python
>>> import boto3
>>> client = boto3.client('bedrock-runtime', region_name='us-west-2')
>>> bedrock_model = AmazonBedrockModel(
...     model_id='us.amazon.nova-pro-v1:0',
...     client=client
... )
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="smolagents.AmazonBedrockModel.example-3">

Creating a model instance with client_kwargs for internal client creation:
```python
>>> bedrock_model = AmazonBedrockModel(
...     model_id='us.amazon.nova-pro-v1:0',
...     client_kwargs={'region_name': 'us-west-2', 'endpoint_url': 'https://custom-endpoint.com'}
... )
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="smolagents.AmazonBedrockModel.example-4">

Creating a model instance with inference and guardrail configurations:
```python
>>> additional_api_config = {
...     "inferenceConfig": {
...         "maxTokens": 3000
...     },
...     "guardrailConfig": {
...         "guardrailIdentifier": "identify1",
...         "guardrailVersion": 'v1'
...     },
... }
>>> bedrock_model = AmazonBedrockModel(
...     model_id='anthropic.claude-3-haiku-20240307-v1:0',
...     **additional_api_config
... )
```

</ExampleCodeBlock>


</div>

### MLXModel[[smolagents.MLXModel]]


```python
from smolagents import MLXModel

model = MLXModel(model_id="HuggingFaceTB/SmolLM-135M-Instruct")

print(model([{"role": "user", "content": "Ok!"}], stop_sequences=["great"]))
```
```text
>>> What a
```

> [!TIP]
> You must have `mlx-lm` installed on your machine. Please run `pip install 'smolagents[mlx-lm]'` if it is not installed.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.MLXModel</name><anchor>smolagents.MLXModel</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/models.py#L684</source><parameters>[{"name": "model_id", "val": ": str"}, {"name": "trust_remote_code", "val": ": bool = False"}, {"name": "load_kwargs", "val": ": dict[str, typing.Any] | None = None"}, {"name": "apply_chat_template_kwargs", "val": ": dict[str, typing.Any] | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model_id** (str) --
  The Hugging Face model ID to be used for inference. This can be a path or model identifier from the Hugging Face model hub.
- **tool_name_key** (str) --
  The key, which can usually be found in the model's chat template, for retrieving a tool name.
- **tool_arguments_key** (str) --
  The key, which can usually be found in the model's chat template, for retrieving tool arguments.
- **trust_remote_code** (bool, default `False`) --
  Some models on the Hub require running remote code: for such models, set this flag to `True`.
- **load_kwargs** (dict[str, Any], *optional*) --
  Additional keyword arguments to pass to the `mlx.lm.load` method when loading the model and tokenizer.
- **apply_chat_template_kwargs** (dict, *optional*) --
  Additional keyword arguments to pass to the `apply_chat_template` method of the tokenizer.
- ****kwargs** --
  Additional keyword arguments to forward to the underlying MLX model stream_generate call, for instance `max_tokens`.</paramsdesc><paramgroups>0</paramgroups></docstring>
A class to interact with models loaded using MLX on Apple silicon.

> [!TIP]
> You must have `mlx-lm` installed on your machine. Please run `pip install 'smolagents[mlx-lm]'` if it is not installed.



<ExampleCodeBlock anchor="smolagents.MLXModel.example">

Example:
```python
>>> engine = MLXModel(
...     model_id="mlx-community/Qwen2.5-Coder-32B-Instruct-4bit",
...     max_tokens=10000,
... )
>>> messages = [
...     {
...         "role": "user",
...         "content": "Explain quantum mechanics in simple terms."
...     }
... ]
>>> response = engine(messages, stop_sequences=["END"])
>>> print(response)
"Quantum mechanics is the branch of physics that studies..."
```

</ExampleCodeBlock>


</div>

### VLLMModel[[smolagents.VLLMModel]]

Model to use [vLLM](https://docs.vllm.ai/) for fast LLM inference and serving.

```python
from smolagents import VLLMModel

model = VLLMModel(model_id="HuggingFaceTB/SmolLM-135M-Instruct")

print(model([{"role": "user", "content": "Ok!"}], stop_sequences=["great"]))
```

> [!TIP]
> You must have `vllm` installed on your machine. Please run `pip install 'smolagents[vllm]'` if it is not installed.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class smolagents.VLLMModel</name><anchor>smolagents.VLLMModel</anchor><source>https://github.com/huggingface/smolagents/blob/main/src/smolagents/models.py#L574</source><parameters>[{"name": "model_id", "val": ""}, {"name": "model_kwargs", "val": ": dict[str, typing.Any] | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model_id** (`str`) --
  The Hugging Face model ID to be used for inference.
  This can be a path or model identifier from the Hugging Face model hub.
- **model_kwargs** (`dict[str, Any]`, *optional*) --
  Additional keyword arguments to forward to the vLLM LLM instantiation, such as `revision`, `max_model_len`, etc.
- ****kwargs** --
  Additional keyword arguments to forward to the underlying vLLM model generate call.</paramsdesc><paramgroups>0</paramgroups></docstring>
Model to use [vLLM](https://docs.vllm.ai/) for fast LLM inference and serving.




</div>

### Custom Model

You're free to create and use your own models to power your agent.

You can subclass the base `Model` class to create a model for your agent.
The main requirement is to override the `generate` method, which must meet two criteria:
1. It follows the [messages format](./chat_templating) (`List[Dict[str, str]]`) for its input `messages`, and it returns an object with a `.content` attribute.
2. It stops generating outputs at the sequences passed in the argument `stop_sequences`.

For defining your LLM, you can make a `CustomModel` class that inherits from the base `Model` class.
It should have a `generate` method that takes a list of [messages](./chat_templating) and returns an object with a `.content` attribute containing the text. The `generate` method also needs to accept a `stop_sequences` argument that indicates when to stop generating.

```python
from huggingface_hub import login, InferenceClient

from smolagents import Model

login("<YOUR_HUGGINGFACEHUB_API_TOKEN>")

model_id = "meta-llama/Llama-3.3-70B-Instruct"

client = InferenceClient(model=model_id)

class CustomModel(Model):
    def generate(self, messages, stop_sequences=["Task"], **kwargs):
        response = client.chat_completion(messages, stop=stop_sequences, max_tokens=1024)
        answer = response.choices[0].message
        return answer

custom_model = CustomModel()
```

Additionally, `generate` can take a `grammar` argument to allow [constrained generation](https://huggingface.co/docs/text-generation-inference/conceptual/guidance) in order to force properly-formatted agent outputs.
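The second criterion (honoring `stop_sequences`) can be exercised offline. The sketch below swaps the real inference client for a canned response; the `truncate_at_stop` helper and `CannedMessage` class are illustrative assumptions, since production backends typically apply stopping server-side:

```python
from dataclasses import dataclass

@dataclass
class CannedMessage:
    content: str

def truncate_at_stop(text, stop_sequences):
    # Cut the generation at the earliest stop sequence found, if any
    cuts = [text.find(s) for s in (stop_sequences or []) if s in text]
    return text[: min(cuts)] if cuts else text

class CannedModel:
    """Stand-in model returning a fixed string while honoring stop_sequences."""

    def generate(self, messages, stop_sequences=None, **kwargs):
        raw = "The answer is 42. Task complete."
        return CannedMessage(content=truncate_at_stop(raw, stop_sequences))
```

A stub like this is also handy for unit-testing agent logic without paying for (or waiting on) real LLM calls.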


<EditOnGithub source="https://github.com/huggingface/smolagents/blob/main/docs/source/en/reference/models.md" />

### Agentic RAG
https://huggingface.co/docs/smolagents/main/examples/rag.md

# Agentic RAG


## Introduction to Retrieval-Augmented Generation (RAG)

Retrieval-Augmented Generation (RAG) combines the power of large language models with external knowledge retrieval to produce more accurate, factual, and contextually relevant responses. At its core, RAG is about "using an LLM to answer a user query, but basing the answer on information retrieved from a knowledge base."

### Why Use RAG?

RAG offers several significant advantages over using vanilla or fine-tuned LLMs:

1. **Factual Grounding**: Reduces hallucinations by anchoring responses in retrieved facts
2. **Domain Specialization**: Provides domain-specific knowledge without model retraining
3. **Knowledge Recency**: Allows access to information beyond the model's training cutoff
4. **Transparency**: Enables citation of sources for generated content
5. **Control**: Offers fine-grained control over what information the model can access

### Limitations of Traditional RAG

Despite its benefits, traditional RAG approaches face several challenges:

- **Single Retrieval Step**: If the initial retrieval results are poor, the final generation will suffer
- **Query-Document Mismatch**: User queries (often questions) may not match well with documents containing answers (often statements)
- **Limited Reasoning**: Simple RAG pipelines don't allow for multi-step reasoning or query refinement
- **Context Window Constraints**: Retrieved documents must fit within the model's context window

## Agentic RAG: A More Powerful Approach

We can overcome these limitations by implementing an **Agentic RAG** system - essentially an agent equipped with retrieval capabilities. This approach transforms RAG from a rigid pipeline into an interactive, reasoning-driven process.

### Key Benefits of Agentic RAG

An agent with retrieval tools can:

1. ✅ **Formulate optimized queries**: The agent can transform user questions into retrieval-friendly queries
2. ✅ **Perform multiple retrievals**: The agent can retrieve information iteratively as needed
3. ✅ **Reason over retrieved content**: The agent can analyze, synthesize, and draw conclusions from multiple sources
4. ✅ **Self-critique and refine**: The agent can evaluate retrieval results and adjust its approach

This approach naturally implements advanced RAG techniques:
- **Hypothetical Document Embedding (HyDE)**: Instead of using the user query directly, the agent formulates retrieval-optimized queries ([paper reference](https://huggingface.co/papers/2212.10496))
- **Self-Query Refinement**: The agent can analyze initial results and perform follow-up retrievals with refined queries ([technique reference](https://docs.llamaindex.ai/en/stable/examples/evaluation/RetryQuery/))

## Building an Agentic RAG System

Let's build a complete Agentic RAG system step by step. We'll create an agent that can answer questions about the Hugging Face Transformers library by retrieving information from its documentation.

You can follow along with the code snippets below, or check out the full example in the smolagents GitHub repository: [examples/rag.py](https://github.com/huggingface/smolagents/blob/main/examples/rag.py).

### Step 1: Install Required Dependencies

First, we need to install the necessary packages:

```bash
pip install smolagents pandas langchain langchain-community sentence-transformers datasets python-dotenv rank_bm25 --upgrade
```

If you plan to use Hugging Face's Inference API, you'll need to set up your API token:

```python
# Load environment variables (including HF_TOKEN)
from dotenv import load_dotenv
load_dotenv()
```

### Step 2: Prepare the Knowledge Base

We'll use a dataset containing Hugging Face documentation and prepare it for retrieval:

```python
import datasets
from langchain.docstore.document import Document
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.retrievers import BM25Retriever

# Load the Hugging Face documentation dataset
knowledge_base = datasets.load_dataset("m-ric/huggingface_doc", split="train")

# Filter to include only Transformers documentation
knowledge_base = knowledge_base.filter(lambda row: row["source"].startswith("huggingface/transformers"))

# Convert dataset entries to Document objects with metadata
source_docs = [
    Document(page_content=doc["text"], metadata={"source": doc["source"].split("/")[1]})
    for doc in knowledge_base
]

# Split documents into smaller chunks for better retrieval
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=500,  # Characters per chunk
    chunk_overlap=50,  # Overlap between chunks to maintain context
    add_start_index=True,
    strip_whitespace=True,
    separators=["\n\n", "\n", ".", " ", ""],  # Priority order for splitting
)
docs_processed = text_splitter.split_documents(source_docs)

print(f"Knowledge base prepared with {len(docs_processed)} document chunks")
```
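To build intuition for what the splitter does, here is a deliberately simplified sketch of fixed-size chunking with overlap. This is not the `RecursiveCharacterTextSplitter` implementation (which additionally respects the separator priority list), just the core sliding-window idea:

```python
def chunk_text(text: str, chunk_size: int = 500, chunk_overlap: int = 50) -> list[str]:
    """Naive splitter: take chunk_size-character windows, advancing by
    (chunk_size - chunk_overlap) each time so consecutive chunks share context."""
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - chunk_overlap, 1), step)]

chunks = chunk_text("a" * 1200)
print([len(c) for c in chunks])  # [500, 500, 300]
```

The overlap means a sentence cut at a chunk boundary still appears whole in the neighboring chunk, which is why it helps retrieval quality.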

### Step 3: Create a Retriever Tool

Now we'll create a custom tool that our agent can use to retrieve information from the knowledge base:

```python
from smolagents import Tool

class RetrieverTool(Tool):
    name = "retriever"
    description = "Uses semantic search to retrieve the parts of transformers documentation that could be most relevant to answer your query."
    inputs = {
        "query": {
            "type": "string",
            "description": "The query to perform. This should be semantically close to your target documents. Use the affirmative form rather than a question.",
        }
    }
    output_type = "string"

    def __init__(self, docs, **kwargs):
        super().__init__(**kwargs)
        # Initialize the retriever with our processed documents
        self.retriever = BM25Retriever.from_documents(
            docs, k=10  # Return top 10 most relevant documents
        )

    def forward(self, query: str) -> str:
        """Execute the retrieval based on the provided query."""
        assert isinstance(query, str), "Your search query must be a string"

        # Retrieve relevant documents
        docs = self.retriever.invoke(query)

        # Format the retrieved documents for readability
        return "\nRetrieved documents:\n" + "".join(
            [
                f"\n\n===== Document {str(i)} =====\n" + doc.page_content
                for i, doc in enumerate(docs)
            ]
        )

# Initialize our retriever tool with the processed documents
retriever_tool = RetrieverTool(docs_processed)
```

> [!TIP]
> We're using BM25, a lexical retrieval method, for simplicity and speed. For production systems, you might want to use semantic search with embeddings for better retrieval quality. Check the [MTEB Leaderboard](https://huggingface.co/spaces/mteb/leaderboard) for high-quality embedding models.
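To see what BM25 is doing under the hood, here is a minimal, self-contained scoring sketch. This is not the `rank_bm25` implementation that backs `BM25Retriever`, just the textbook formula applied to whitespace tokens:

```python
import math
from collections import Counter

def bm25_scores(query: str, docs: list[str], k1: float = 1.5, b: float = 0.75) -> list[float]:
    """Score each document against the query with the standard BM25 formula."""
    tokenized = [d.lower().split() for d in docs]
    n = len(docs)
    avgdl = sum(len(t) for t in tokenized) / n  # average document length
    scores = []
    for tokens in tokenized:
        tf = Counter(tokens)
        score = 0.0
        for term in set(query.lower().split()):
            df = sum(1 for t in tokenized if term in t)  # document frequency
            if df == 0:
                continue
            idf = math.log((n - df + 0.5) / (df + 0.5) + 1)
            f = tf[term]
            score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(tokens) / avgdl))
        scores.append(score)
    return scores

docs = [
    "the forward pass computes activations",
    "the backward pass computes gradients and is roughly twice as slow",
    "tokenizers convert text into token ids",
]
scores = bm25_scores("backward pass speed", docs)
print(docs[scores.index(max(scores))])  # the backward-pass sentence scores highest
```

Because BM25 matches tokens literally, the tool description's advice to phrase queries in the affirmative form, semantically close to the target documents, matters: a query that shares vocabulary with the documents scores higher.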

### Step 4: Create an Advanced Retrieval Agent

Now we'll create an agent that can use our retriever tool to answer questions:

```python
from smolagents import InferenceClientModel, CodeAgent

# Initialize the agent with our retriever tool
agent = CodeAgent(
    tools=[retriever_tool],  # List of tools available to the agent
    model=InferenceClientModel(),  # Default model "Qwen/Qwen3-Next-80B-A3B-Thinking"
    max_steps=4,  # Limit the number of reasoning steps
    verbosity_level=2,  # Show detailed agent reasoning
)

# To use a specific model, you can specify it like this:
# model=InferenceClientModel(model_id="meta-llama/Llama-3.3-70B-Instruct")
```

> [!TIP]
> Inference Providers give access to hundreds of models, powered by serverless inference partners. A list of supported providers can be found [here](https://huggingface.co/docs/inference-providers/index).

### Step 5: Run the Agent to Answer Questions

Let's use our agent to answer a question about Transformers:

```python
# Ask a question that requires retrieving information
question = "For a transformers model training, which is slower, the forward or the backward pass?"

# Run the agent to get an answer
agent_output = agent.run(question)

# Display the final answer
print("\nFinal answer:")
print(agent_output)
```

## Practical Applications of Agentic RAG

Agentic RAG systems can be applied to various use cases:

1. **Technical Documentation Assistance**: Help users navigate complex technical documentation
2. **Research Paper Analysis**: Extract and synthesize information from scientific papers
3. **Legal Document Review**: Find relevant precedents and clauses in legal documents
4. **Customer Support**: Answer questions based on product documentation and knowledge bases
5. **Educational Tutoring**: Provide explanations based on textbooks and learning materials

## Conclusion

Agentic RAG represents a significant advancement over traditional RAG pipelines. By combining the reasoning capabilities of LLM agents with the factual grounding of retrieval systems, we can build more powerful, flexible, and accurate information systems.

The approach we've demonstrated:
- Overcomes the limitations of single-step retrieval
- Enables more natural interactions with knowledge bases
- Provides a framework for continuous improvement through self-critique and query refinement

As you build your own Agentic RAG systems, consider experimenting with different retrieval methods, agent architectures, and knowledge sources to find the optimal configuration for your specific use case.


<EditOnGithub source="https://github.com/huggingface/smolagents/blob/main/docs/source/en/examples/rag.md" />

### Web Browser Automation with Agents 🤖🌐
https://huggingface.co/docs/smolagents/main/examples/web_browser.md

# Web Browser Automation with Agents 🤖🌐


In this notebook, we'll create an **agent-powered web browser automation system**! This system can navigate websites, interact with elements, and extract information automatically.

The agent will be able to:

- [x] Navigate to web pages
- [x] Click on elements
- [x] Search within pages
- [x] Handle popups and modals
- [x] Extract information

Let's set up this system step by step!

First, run these lines to install the required dependencies:

```bash
pip install smolagents selenium helium pillow -q
```

Let's import our required libraries and set up environment variables:

```python
from io import BytesIO
from time import sleep

import helium
from dotenv import load_dotenv
from PIL import Image
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

from smolagents import CodeAgent, tool
from smolagents.agents import ActionStep

# Load environment variables
load_dotenv()
```

Now let's create our core browser interaction tools that will allow our agent to navigate and interact with web pages:

```python
@tool
def search_item_ctrl_f(text: str, nth_result: int = 1) -> str:
    """
    Searches for text on the current page via Ctrl + F and jumps to the nth occurrence.
    Args:
        text: The text to search for
        nth_result: Which occurrence to jump to (default: 1)
    """
    elements = driver.find_elements(By.XPATH, f"//*[contains(text(), '{text}')]")
    if nth_result > len(elements):
        raise Exception(f"Match n°{nth_result} not found (only {len(elements)} matches found)")
    result = f"Found {len(elements)} matches for '{text}'."
    elem = elements[nth_result - 1]
    driver.execute_script("arguments[0].scrollIntoView(true);", elem)
    result += f"\nFocused on element {nth_result} of {len(elements)}"
    return result

@tool
def go_back() -> None:
    """Goes back to previous page."""
    driver.back()

@tool
def close_popups() -> str:
    """
    Closes any visible modal or pop-up on the page. Use this to dismiss pop-up windows!
    This does not work on cookie consent banners.
    """
    webdriver.ActionChains(driver).send_keys(Keys.ESCAPE).perform()
    return "Popups closed."
```

Let's set up our browser with Chrome and configure screenshot capabilities:

```python
# Configure Chrome options
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("--force-device-scale-factor=1")
chrome_options.add_argument("--window-size=1000,1350")
chrome_options.add_argument("--disable-pdf-viewer")
chrome_options.add_argument("--window-position=0,0")

# Initialize the browser
driver = helium.start_chrome(headless=False, options=chrome_options)

# Set up screenshot callback
def save_screenshot(memory_step: ActionStep, agent: CodeAgent) -> None:
    sleep(1.0)  # Let JavaScript animations happen before taking the screenshot
    driver = helium.get_driver()
    current_step = memory_step.step_number
    if driver is not None:
        for previous_memory_step in agent.memory.steps:  # Remove previous screenshots for lean processing
            if isinstance(previous_memory_step, ActionStep) and previous_memory_step.step_number <= current_step - 2:
                previous_memory_step.observations_images = None
        png_bytes = driver.get_screenshot_as_png()
        image = Image.open(BytesIO(png_bytes))
        print(f"Captured a browser screenshot: {image.size} pixels")
        memory_step.observations_images = [image.copy()]  # Create a copy to ensure it persists

    # Update observations with current URL
    url_info = f"Current url: {driver.current_url}"
    memory_step.observations = (
        url_info if memory_step.observations is None else memory_step.observations + "\n" + url_info
    )
```
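The screenshot-pruning rule inside the callback (keep images only for the two most recent steps) can be isolated in a small stdlib-only sketch, using a stand-in class for `ActionStep`:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FakeStep:  # stand-in for ActionStep, just for illustration
    step_number: int
    observations_images: Optional[list]

steps = [FakeStep(i, ["screenshot"]) for i in range(1, 6)]
current_step = 5
for step in steps:
    # Same rule as in save_screenshot: drop anything older than two steps back
    if step.step_number <= current_step - 2:
        step.observations_images = None

print([s.observations_images is not None for s in steps])  # [False, False, False, True, True]
```

Pruning old screenshots keeps the agent's context small: only the most recent page states matter for deciding the next browser action.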

Now let's create our web automation agent:

```python
from smolagents import InferenceClientModel

# Initialize the model
model_id = "Qwen/Qwen2-VL-72B-Instruct"  # You can change this to your preferred VLM model
model = InferenceClientModel(model_id=model_id)

# Create the agent
agent = CodeAgent(
    tools=[go_back, close_popups, search_item_ctrl_f],
    model=model,
    additional_authorized_imports=["helium"],
    step_callbacks=[save_screenshot],
    max_steps=20,
    verbosity_level=2,
)

# Import helium for the agent
agent.python_executor("from helium import *", agent.state)
```

The agent needs instructions on how to use Helium for web automation. Here are the instructions we'll provide:

```python
helium_instructions = """
You can use helium to access websites. Don't bother about the helium driver, it's already managed.
We've already run "from helium import *"
Then you can go to pages!
Code:
```py
go_to('github.com/trending')
```<end_code>

You can directly click clickable elements by inputting the text that appears on them.
Code:
```py
click("Top products")
```<end_code>

If it's a link:
Code:
```py
click(Link("Top products"))
```<end_code>

If you try to interact with an element and it's not found, you'll get a LookupError.
In general stop your action after each button click to see what happens on your screenshot.
Never try to login in a page.

To scroll up or down, use scroll_down or scroll_up with the number of pixels to scroll as an argument.
Code:
```py
scroll_down(num_pixels=1200) # This will scroll one viewport down
```<end_code>

When you have pop-ups with a cross icon to close, don't try to click the close icon by finding its element or targeting an 'X' element (this most often fails).
Just use your built-in tool `close_popups` to close them:
Code:
```py
close_popups()
```<end_code>

You can use .exists() to check for the existence of an element. For example:
Code:
```py
if Text('Accept cookies?').exists():
    click('I accept')
```<end_code>
"""
```

Now we can run our agent with a task! Let's try finding information on Wikipedia:

```python
search_request = """
Please navigate to https://en.wikipedia.org/wiki/Chicago and give me a sentence containing the word "1992" that mentions a construction accident.
"""

agent_output = agent.run(search_request + helium_instructions)
print("Final output:")
print(agent_output)
```

You can run different tasks by modifying the request. For example, here's one to find out whether I should work harder:

```python
github_request = """
I'm trying to find how hard I have to work to get a repo in github.com/trending.
Can you navigate to the profile for the top author of the top trending repo, and give me their total number of commits over the last year?
"""

agent_output = agent.run(github_request + helium_instructions)
print("Final output:")
print(agent_output)
```

The system is particularly effective for tasks like:
- Data extraction from websites
- Web research automation
- UI testing and verification
- Content monitoring

<EditOnGithub source="https://github.com/huggingface/smolagents/blob/main/docs/source/en/examples/web_browser.md" />

### Async Applications with Agents
https://huggingface.co/docs/smolagents/main/examples/async_agent.md

# Async Applications with Agents

This guide demonstrates how to integrate a synchronous agent from the `smolagents` library into an asynchronous Python web application using Starlette.
The example is designed to help users new to async Python and agent integration understand best practices for combining synchronous agent logic with async web servers.

## Overview

- **Starlette**: A lightweight ASGI framework for building asynchronous web applications in Python.
- **anyio.to_thread.run_sync**: Utility to run blocking (synchronous) code in a background thread, preventing it from blocking the async event loop.
- **CodeAgent**: An agent from the `smolagents` library capable of programmatically solving tasks.

## Why Use a Background Thread?

`CodeAgent.run()` executes Python code synchronously. If called directly in an async endpoint, it would block Starlette's event loop, reducing performance and scalability. By offloading this operation to a background thread with `anyio.to_thread.run_sync`, you keep the app responsive and efficient, even under high concurrency.
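The pattern can be demonstrated with the standard library alone: `asyncio.to_thread` plays the same role as `anyio.to_thread.run_sync`. In this sketch, `blocking_agent_run` is a hypothetical stand-in for `agent.run`:

```python
import asyncio
import time

def blocking_agent_run(task: str) -> str:
    # Stand-in for agent.run(task): synchronous and slow.
    time.sleep(0.1)
    return f"Answer to: {task}"

async def handle_request(task: str) -> str:
    # Offloading the blocking call keeps the event loop free
    # to serve other requests in the meantime.
    return await asyncio.to_thread(blocking_agent_run, task)

print(asyncio.run(handle_request("What is 2+2?")))  # Answer to: What is 2+2?
```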

## Example Workflow

- The Starlette app exposes a `/run-agent` endpoint that accepts a JSON payload with a `task` string.
- When a request is received, the agent is run in a background thread using `anyio.to_thread.run_sync`.
- The result is returned as a JSON response.

## Building a Starlette App with a CodeAgent

### 1. Install Dependencies

```bash
pip install smolagents starlette anyio uvicorn
```

### 2. Application Code (`main.py`)

```python
import anyio.to_thread
from starlette.applications import Starlette
from starlette.requests import Request
from starlette.responses import JSONResponse
from starlette.routing import Route

from smolagents import CodeAgent, InferenceClientModel

agent = CodeAgent(
    model=InferenceClientModel(model_id="Qwen/Qwen3-Next-80B-A3B-Thinking"),
    tools=[],
)

async def run_agent(request: Request):
    data = await request.json()
    task = data.get("task", "")
    # Run the agent synchronously in a background thread
    result = await anyio.to_thread.run_sync(agent.run, task)
    return JSONResponse({"result": result})

app = Starlette(routes=[
    Route("/run-agent", run_agent, methods=["POST"]),
])
```

### 3. Run the App

```bash
uvicorn async_agent.main:app --reload
```

### 4. Test the Endpoint

```bash
curl -X POST http://localhost:8000/run-agent -H 'Content-Type: application/json' -d '{"task": "What is 2+2?"}'
```

**Expected Response:**

```json
{"result": "4"}
```

## Further Reading

- [Starlette Documentation](https://www.starlette.io/)
- [anyio Documentation](https://anyio.readthedocs.io/)

---

For the full code, see [`examples/async_agent`](https://github.com/huggingface/smolagents/tree/main/examples/async_agent).


<EditOnGithub source="https://github.com/huggingface/smolagents/blob/main/docs/source/en/examples/async_agent.md" />

### Human-in-the-Loop: Customize Agent Plan Interactively
https://huggingface.co/docs/smolagents/main/examples/plan_customization.md

# Human-in-the-Loop: Customize Agent Plan Interactively

This page demonstrates advanced usage of the smolagents library, with a special focus on **Human-in-the-Loop (HITL)** approaches for interactive plan creation, user-driven plan modification, and memory preservation in agentic workflows.
The example is based on the code in `examples/plan_customization/plan_customization.py`.

## Overview

This example teaches you how to implement Human-in-the-Loop strategies to:

- Interrupt agent execution after a plan is created (using step callbacks)
- Allow users to review and modify the agent's plan before execution (Human-in-the-Loop)
- Resume execution while preserving the agent's memory
- Dynamically update plans based on user feedback, keeping the human in control

## Key Concepts

### Step Callbacks for Plan Interruption

The agent is configured to pause after creating a plan. This is achieved by registering a step callback for the `PlanningStep`:

```python
agent = CodeAgent(
    model=InferenceClientModel(),
    tools=[DuckDuckGoSearchTool()],
    planning_interval=5,  # Plan every 5 steps
    step_callbacks={PlanningStep: interrupt_after_plan},
    max_steps=10,
    verbosity_level=1
)
```
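The `interrupt_after_plan` callback itself lives in the example file. A hypothetical minimal sketch of its decision logic (the function names here are illustrative, not part of the library API) might look like:

```python
from typing import Optional

def format_plan_banner(plan_text: str) -> str:
    """Render the plan between banner lines for the user to review."""
    bar = "=" * 60
    return f"{bar}\n🤖 AGENT PLAN CREATED\n{bar}\n{plan_text}\n{bar}"

def apply_user_choice(plan_text: str, choice: str, new_plan: Optional[str] = None) -> str:
    """1 = approve the plan, 2 = replace it with the user's edit, 3 = cancel."""
    if choice == "1":
        return plan_text
    if choice == "2" and new_plan is not None:
        return new_plan
    raise RuntimeError("Execution cancelled by user")
```

In the real callback, the user's choice would come from `input()` and the approved or edited plan would be written back into the `PlanningStep`.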

### Human-in-the-Loop: Interactive Plan Review and Modification

When the agent creates a plan, the callback displays it and prompts the human user to:

1. Approve the plan
2. Modify the plan
3. Cancel execution

Example interaction:

```
============================================================
🤖 AGENT PLAN CREATED
============================================================
1. Search for recent AI developments
2. Analyze the top results
3. Summarize the 3 most significant breakthroughs
4. Include sources for each breakthrough
============================================================

Choose an option:
1. Approve plan
2. Modify plan
3. Cancel
Your choice (1-3):
```

This Human-in-the-Loop step enables a human to intervene and review or modify the plan before execution continues, and ensures that the agent's actions align with human intent.

If the user chooses to modify, they can edit the plan directly. The updated plan is then used for subsequent execution steps.

### Memory Preservation and Resuming Execution

By running the agent with `reset=False`, all previous steps and memory are preserved. This allows you to resume execution after an interruption or plan modification:

```python
# First run (may be interrupted)
agent.run(task, reset=True)

# Resume with preserved memory
agent.run(task, reset=False)
```

### Inspecting Agent Memory

You can inspect the agent's memory to see all steps taken so far:

```python
print(f"Current memory contains {len(agent.memory.steps)} steps:")
for i, step in enumerate(agent.memory.steps):
    step_type = type(step).__name__
    print(f"  {i+1}. {step_type}")
```

## Example Human-in-the-Loop Workflow

1. Agent starts with a complex task
2. Planning step is created and execution pauses for human review
3. Human reviews and optionally modifies the plan (Human-in-the-Loop)
4. Execution resumes with the approved/modified plan
5. All steps are preserved for future runs, maintaining transparency and control

## Error Handling

The example includes error handling for:
- User cancellation
- Plan modification errors
- Resume execution failures

## Requirements

- smolagents library
- DuckDuckGoSearchTool (included with smolagents)
- InferenceClientModel (requires HuggingFace API token)

## Educational Value

This example demonstrates:
- Step callback implementation for custom agent behavior
- Memory management in multi-step agents
- User interaction patterns in agentic systems
- Plan modification techniques for dynamic agent control
- Error handling in interactive agent systems

---

For the full code, see [`examples/plan_customization`](https://github.com/huggingface/smolagents/tree/main/examples/plan_customization).


<EditOnGithub source="https://github.com/huggingface/smolagents/blob/main/docs/source/en/examples/plan_customization.md" />

### Orchestrate a multi-agent system 🤖🤝🤖
https://huggingface.co/docs/smolagents/main/examples/multiagents.md

# Orchestrate a multi-agent system 🤖🤝🤖


In this notebook we will make a **multi-agent web browser: an agentic system with several agents collaborating to solve problems using the web!**

It will be a simple hierarchy:

```
              +----------------+
              | Manager agent  |
              +----------------+
                       |
        _______________|______________
       |                              |
Code Interpreter            +------------------+
    tool                    | Web Search agent |
                            +------------------+
                               |            |
                        Web Search tool     |
                                   Visit webpage tool
```
Let's set up this system. 

Run the line below to install the required dependencies:

```py
!pip install 'smolagents[toolkit]' --upgrade -q
```

Let's login to HF in order to call Inference Providers:

```py
from huggingface_hub import login

login()
```

⚡️ Our agent will be powered by [Qwen/Qwen3-Next-80B-A3B-Thinking](https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Thinking) through the `InferenceClientModel` class, which uses HF's Inference API: the Inference API makes it quick and easy to run any open model.

> [!TIP]
> Inference Providers give access to hundreds of models, powered by serverless inference partners. A list of supported providers can be found [here](https://huggingface.co/docs/inference-providers/index).

```py
model_id = "Qwen/Qwen3-Next-80B-A3B-Thinking"
```

## 🔍 Create a web search tool

For web browsing, we can already use our native [WebSearchTool](/docs/smolagents/main/en/reference/default_tools#smolagents.WebSearchTool) tool to provide a Google search equivalent.

But then we will also need to be able to peek into the pages found by the `WebSearchTool`.
To do so, we could import the library's built-in `VisitWebpageTool`, but we will build it again to see how it's done.

So let's create our `VisitWebpageTool` tool from scratch using `markdownify`.

```py
import re
import requests
from markdownify import markdownify
from requests.exceptions import RequestException
from smolagents import tool


@tool
def visit_webpage(url: str) -> str:
    """Visits a webpage at the given URL and returns its content as a markdown string.

    Args:
        url: The URL of the webpage to visit.

    Returns:
        The content of the webpage converted to Markdown, or an error message if the request fails.
    """
    try:
        # Send a GET request to the URL
        response = requests.get(url)
        response.raise_for_status()  # Raise an exception for bad status codes

        # Convert the HTML content to Markdown
        markdown_content = markdownify(response.text).strip()

        # Remove multiple line breaks
        markdown_content = re.sub(r"\n{3,}", "\n\n", markdown_content)

        return markdown_content

    except RequestException as e:
        return f"Error fetching the webpage: {str(e)}"
    except Exception as e:
        return f"An unexpected error occurred: {str(e)}"
```

Ok, now let's initialize and test our tool!

```py
print(visit_webpage("https://en.wikipedia.org/wiki/Hugging_Face")[:500])
```

## Build our multi-agent system 🤖🤝🤖

Now that we have our two tools, `search` and `visit_webpage`, we can use them to create the web agent.

Which configuration to choose for this agent?
- Web browsing is a single-timeline task that does not require parallel tool calls, so JSON tool calling works well for that. We thus choose a `ToolCallingAgent`.
- Also, since sometimes web search requires exploring many pages before finding the correct answer, we prefer to increase the number of `max_steps` to 10.

```py
from smolagents import (
    CodeAgent,
    ToolCallingAgent,
    InferenceClientModel,
    WebSearchTool,
)

model = InferenceClientModel(model_id=model_id)

web_agent = ToolCallingAgent(
    tools=[WebSearchTool(), visit_webpage],
    model=model,
    max_steps=10,
    name="web_search_agent",
    description="Runs web searches for you.",
)
```

Note that we gave this agent `name` and `description` attributes: these are mandatory to make the agent callable by its manager agent.

Then we create a manager agent, and upon initialization we pass our managed agent to it in its `managed_agents` argument.

Since this agent is the one tasked with the planning and thinking, advanced reasoning will be beneficial, so a `CodeAgent` will work well.

Also, we want to ask a question that involves the current year and does additional data calculations: so let us add `additional_authorized_imports=["time", "numpy", "pandas"]`, just in case the agent needs these packages.

```py
manager_agent = CodeAgent(
    tools=[],
    model=model,
    managed_agents=[web_agent],
    additional_authorized_imports=["time", "numpy", "pandas"],
)
```

That's all! Now let's run our system! We select a question that requires both some calculation and research:

```py
answer = manager_agent.run("If LLM training continues to scale up at the current rhythm until 2030, what would be the electric power in GW required to power the biggest training runs by 2030? What would that correspond to, compared to some countries? Please provide a source for any numbers used.")
```

We get this report as the answer:
```
Based on current growth projections and energy consumption estimates, if LLM trainings continue to scale up at the 
current rhythm until 2030:

1. The electric power required to power the biggest training runs by 2030 would be approximately 303.74 GW, which 
translates to about 2,660,762 GWh/year.

2. Comparing this to countries' electricity consumption:
   - It would be equivalent to about 34% of China's total electricity consumption.
   - It would exceed the total electricity consumption of India (184%), Russia (267%), and Japan (291%).
   - It would be nearly 9 times the electricity consumption of countries like Italy or Mexico.

3. Source of numbers:
   - The initial estimate of 5 GW for future LLM training comes from AWS CEO Matt Garman.
   - The growth projection used a CAGR of 79.80% from market research by Springs.
   - Country electricity consumption data is from the U.S. Energy Information Administration, primarily for the year 
2021.
```

Seems like we'll need some sizeable power plants if the [scaling hypothesis](https://gwern.net/scaling-hypothesis) continues to hold true.

Our agents managed to efficiently collaborate towards solving the task! ✅

💡 You can easily extend this orchestration to more agents: one does the code execution, one the web search, one handles file loadings...


<EditOnGithub source="https://github.com/huggingface/smolagents/blob/main/docs/source/en/examples/multiagents.md" />

### Text-to-SQL
https://huggingface.co/docs/smolagents/main/examples/text_to_sql.md

# Text-to-SQL


In this tutorial, we’ll see how to implement an agent that leverages SQL using `smolagents`.

> Let's start with the golden question: why not keep it simple and use a standard text-to-SQL pipeline?

A standard text-to-SQL pipeline is brittle, since the generated SQL query can be incorrect. Even worse, a flawed query might not raise an error at all, instead returning incorrect or useless output without any alarm.

👉 Instead, an agent system is able to critically inspect outputs and decide if the query needs to be changed or not, thus giving it a huge performance boost.

Let’s build this agent! 💪

Run the line below to install required dependencies:
```bash
!pip install smolagents python-dotenv sqlalchemy --upgrade -q
```
To call Inference Providers, you will need a valid token as your environment variable `HF_TOKEN`.
We use python-dotenv to load it.
```py
from dotenv import load_dotenv
load_dotenv()
```

Then, we setup the SQL environment:
```py
from sqlalchemy import (
    create_engine,
    MetaData,
    Table,
    Column,
    String,
    Integer,
    Float,
    insert,
    inspect,
    text,
)

engine = create_engine("sqlite:///:memory:")
metadata_obj = MetaData()

def insert_rows_into_table(rows, table, engine=engine):
    for row in rows:
        stmt = insert(table).values(**row)
        with engine.begin() as connection:
            connection.execute(stmt)

table_name = "receipts"
receipts = Table(
    table_name,
    metadata_obj,
    Column("receipt_id", Integer, primary_key=True),
    Column("customer_name", String(16), primary_key=True),
    Column("price", Float),
    Column("tip", Float),
)
metadata_obj.create_all(engine)

rows = [
    {"receipt_id": 1, "customer_name": "Alan Payne", "price": 12.06, "tip": 1.20},
    {"receipt_id": 2, "customer_name": "Alex Mason", "price": 23.86, "tip": 0.24},
    {"receipt_id": 3, "customer_name": "Woodrow Wilson", "price": 53.43, "tip": 5.43},
    {"receipt_id": 4, "customer_name": "Margaret James", "price": 21.11, "tip": 1.00},
]
insert_rows_into_table(rows, receipts)
```

### Build our agent

Now let’s make our SQL table retrievable by a tool.

The tool’s description attribute will be embedded in the LLM’s prompt by the agent system: it gives the LLM information about how to use the tool. This is where we want to describe the SQL table.

```py
inspector = inspect(engine)
columns_info = [(col["name"], col["type"]) for col in inspector.get_columns("receipts")]

table_description = "Columns:\n" + "\n".join([f"  - {name}: {col_type}" for name, col_type in columns_info])
print(table_description)
```

```text
Columns:
  - receipt_id: INTEGER
  - customer_name: VARCHAR(16)
  - price: FLOAT
  - tip: FLOAT
```

Now let’s build our tool. It needs the following (read [the tool doc](../tutorials/tools) for more detail):
- A docstring with an `Args:` part listing arguments.
- Type hints on both inputs and output.

```py
from smolagents import tool

@tool
def sql_engine(query: str) -> str:
    """
    Allows you to perform SQL queries on the table. Returns a string representation of the result.
    The table is named 'receipts'. Its description is as follows:
        Columns:
        - receipt_id: INTEGER
        - customer_name: VARCHAR(16)
        - price: FLOAT
        - tip: FLOAT

    Args:
        query: The query to perform. This should be correct SQL.
    """
    output = ""
    with engine.connect() as con:
        rows = con.execute(text(query))
        for row in rows:
            output += "\n" + str(row)
    return output
```

Now let us create an agent that leverages this tool.

We use the `CodeAgent`, which is smolagents’ main agent class: an agent that writes actions in code and can iterate on previous output according to the ReAct framework.

The model is the LLM that powers the agent system. `InferenceClientModel` allows you to call LLMs using HF’s Inference API, either via Serverless or Dedicated endpoint, but you could also use any proprietary API.

```py
from smolagents import CodeAgent, InferenceClientModel

agent = CodeAgent(
    tools=[sql_engine],
    model=InferenceClientModel(model_id="meta-llama/Llama-3.1-8B-Instruct"),
)
agent.run("Can you give me the name of the client who got the most expensive receipt?")
```
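To see the kind of query the agent should converge to, you can check the expected answer by hand. Here is a self-contained sketch using only the stdlib `sqlite3` module, mirroring the rows inserted above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE receipts (receipt_id INTEGER, customer_name TEXT, price REAL, tip REAL)")
con.executemany(
    "INSERT INTO receipts VALUES (?, ?, ?, ?)",
    [
        (1, "Alan Payne", 12.06, 1.20),
        (2, "Alex Mason", 23.86, 0.24),
        (3, "Woodrow Wilson", 53.43, 5.43),
        (4, "Margaret James", 21.11, 1.00),
    ],
)

# The query the agent is expected to produce for "most expensive receipt"
row = con.execute(
    "SELECT customer_name FROM receipts ORDER BY price DESC LIMIT 1"
).fetchone()
print(row[0])  # Woodrow Wilson
```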

### Level 2: Table joins

Now let’s make it more challenging! We want our agent to handle joins across multiple tables.

So let’s make a second table recording the names of waiters for each receipt_id!

```py
table_name = "waiters"
waiters = Table(
    table_name,
    metadata_obj,
    Column("receipt_id", Integer, primary_key=True),
    Column("waiter_name", String(16), primary_key=True),
)
metadata_obj.create_all(engine)

rows = [
    {"receipt_id": 1, "waiter_name": "Corey Johnson"},
    {"receipt_id": 2, "waiter_name": "Michael Watts"},
    {"receipt_id": 3, "waiter_name": "Michael Watts"},
    {"receipt_id": 4, "waiter_name": "Margaret James"},
]
insert_rows_into_table(rows, waiters)
```
Since the database has changed, we update the `sql_engine` tool’s description to include this new table, so the LLM can properly leverage information from it.

```py
updated_description = """Allows you to perform SQL queries on the table. Beware that this tool's output is a string representation of the execution output.
It can use the following tables:"""

inspector = inspect(engine)
for table in ["receipts", "waiters"]:
    columns_info = [(col["name"], col["type"]) for col in inspector.get_columns(table)]

    table_description = f"Table '{table}':\n"

    table_description += "Columns:\n" + "\n".join([f"  - {name}: {col_type}" for name, col_type in columns_info])
    updated_description += "\n\n" + table_description

print(updated_description)
```
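The same description-building pattern works with any introspection API, not just SQLAlchemy's `inspect`. As a dependency-free illustration (stdlib only, with the same two tables), here is the equivalent loop over SQLite's `PRAGMA table_info`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE receipts (receipt_id INTEGER, customer_name VARCHAR(16), price FLOAT, tip FLOAT)")
conn.execute("CREATE TABLE waiters (receipt_id INTEGER, waiter_name VARCHAR(16))")

description = """Allows you to perform SQL queries on the table. Beware that this tool's output is a string representation of the execution output.
It can use the following tables:"""
for table in ["receipts", "waiters"]:
    # Each PRAGMA row is (cid, name, type, notnull, default_value, pk)
    columns_info = conn.execute(f"PRAGMA table_info({table})").fetchall()
    table_description = f"Table '{table}':\nColumns:\n" + "\n".join(
        f"  - {name}: {col_type}" for _, name, col_type, *_ in columns_info
    )
    description += "\n\n" + table_description

print(description)
```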
Since this request is a bit harder than the previous one, we’ll switch the LLM engine to use the more powerful [Qwen/Qwen3-Next-80B-A3B-Thinking](https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Thinking)!

```py
sql_engine.description = updated_description

agent = CodeAgent(
    tools=[sql_engine],
    model=InferenceClientModel(model_id="Qwen/Qwen3-Next-80B-A3B-Thinking"),
)

agent.run("Which waiter got more total money from tips?")
```
It works right away! The setup was surprisingly simple, wasn’t it?
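For reference, the query the agent has to converge on is just a join plus an aggregation. You can check its shape with plain SQLite (the tip values below are made up, since they depend on your receipts data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE receipts (receipt_id INTEGER, customer_name VARCHAR(16), price FLOAT, tip FLOAT);
CREATE TABLE waiters (receipt_id INTEGER, waiter_name VARCHAR(16));
-- Hypothetical tips, just to exercise the join
INSERT INTO receipts VALUES (1, 'A', 10.0, 1.0), (2, 'B', 20.0, 2.0), (3, 'C', 30.0, 3.0), (4, 'D', 40.0, 4.0);
INSERT INTO waiters VALUES (1, 'Corey Johnson'), (2, 'Michael Watts'), (3, 'Michael Watts'), (4, 'Margaret James');
""")

rows = conn.execute("""
    SELECT w.waiter_name, SUM(r.tip) AS total_tips
    FROM receipts AS r
    JOIN waiters AS w ON r.receipt_id = w.receipt_id
    GROUP BY w.waiter_name
    ORDER BY total_tips DESC
""").fetchall()
print(rows)  # With these made-up tips, Michael Watts comes out on top
```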

This example is done! We've touched upon these concepts:
- Building new tools.
- Updating a tool's description.
- Switching to a stronger LLM to improve agent reasoning.

✅ Now you can go build this text-to-SQL system you’ve always dreamt of! ✨


<EditOnGithub source="https://github.com/huggingface/smolagents/blob/main/docs/source/en/examples/text_to_sql.md" />

### Using different models
https://huggingface.co/docs/smolagents/main/examples/using_different_models.md

# Using different models


`smolagents` provides a flexible framework that allows you to use various language models from different providers.
This guide will show you how to use different model types with your agents.

## Available model types

`smolagents` supports several model types out of the box:
1. [InferenceClientModel](/docs/smolagents/main/en/reference/models#smolagents.InferenceClientModel): Uses Hugging Face's Inference API to access models
2. [TransformersModel](/docs/smolagents/main/en/reference/models#smolagents.TransformersModel): Runs models locally using the Transformers library
3. [VLLMModel](/docs/smolagents/main/en/reference/models#smolagents.VLLMModel): Uses vLLM for fast inference with optimized serving
4. [MLXModel](/docs/smolagents/main/en/reference/models#smolagents.MLXModel): Optimized for Apple Silicon devices using MLX
5. [LiteLLMModel](/docs/smolagents/main/en/reference/models#smolagents.LiteLLMModel): Provides access to hundreds of LLMs through LiteLLM
6. [LiteLLMRouterModel](/docs/smolagents/main/en/reference/models#smolagents.LiteLLMRouterModel): Distributes requests among multiple models
7. [OpenAIModel](/docs/smolagents/main/en/reference/models#smolagents.OpenAIModel): Provides access to any provider that implements an OpenAI-compatible API
8. [AzureOpenAIModel](/docs/smolagents/main/en/reference/models#smolagents.AzureOpenAIModel): Uses Azure's OpenAI service
9. [AmazonBedrockModel](/docs/smolagents/main/en/reference/models#smolagents.AmazonBedrockModel): Connects to AWS Bedrock's API

All model classes support passing additional keyword arguments (like `temperature`, `max_tokens`, `top_p`, etc.) directly at instantiation time.
These parameters are automatically forwarded to the underlying model's completion calls, allowing you to configure model behavior such as creativity, response length, and sampling strategies.
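To picture the forwarding mechanism, here is a toy sketch of the pattern (illustrative only, not smolagents' actual implementation): kwargs stored at instantiation are merged into each completion call, with per-call values taking precedence.

```python
class ToyModel:
    """Stores generation kwargs at init and forwards them on every call."""

    def __init__(self, model_id: str, **kwargs):
        self.model_id = model_id
        self.kwargs = kwargs  # e.g. temperature, max_tokens, top_p

    def generate(self, messages: list, **overrides) -> dict:
        # Per-call overrides win over instantiation-time defaults
        params = {**self.kwargs, **overrides}
        return {"model": self.model_id, "messages": messages, **params}

model = ToyModel("some/model", temperature=0.7, max_tokens=512)
payload = model.generate([{"role": "user", "content": "hi"}], temperature=0.2)
print(payload["temperature"], payload["max_tokens"])  # 0.2 512
```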

## Using Google Gemini Models

As explained in the Google Gemini API documentation (https://ai.google.dev/gemini-api/docs/openai),
Google provides an OpenAI-compatible API for Gemini models, allowing you to use the [OpenAIModel](/docs/smolagents/main/en/reference/models#smolagents.OpenAIModel)
with Gemini models by setting the appropriate base URL.

First, install the required dependencies:
```bash
pip install 'smolagents[openai]'
```

Then, [get a Gemini API key](https://ai.google.dev/gemini-api/docs/api-key) and set it in your code:
```python
GEMINI_API_KEY = "<YOUR-GEMINI-API-KEY>"
```

Now, you can initialize the Gemini model using the `OpenAIModel` class
and setting the `api_base` parameter to the Gemini API base URL:
```python
from smolagents import OpenAIModel

model = OpenAIModel(
    model_id="gemini-2.0-flash",
    # Google Gemini OpenAI-compatible API base URL
    api_base="https://generativelanguage.googleapis.com/v1beta/openai/",
    api_key=GEMINI_API_KEY,
)
```

## Using OpenRouter Models

OpenRouter provides access to a wide variety of language models through a unified OpenAI-compatible API.
You can use the [OpenAIModel](/docs/smolagents/main/en/reference/models#smolagents.OpenAIModel) to connect to OpenRouter by setting the appropriate base URL.

First, install the required dependencies:
```bash
pip install 'smolagents[openai]'
```

Then, [get an OpenRouter API key](https://openrouter.ai/keys) and set it in your code:
```python
OPENROUTER_API_KEY = "<YOUR-OPENROUTER-API-KEY>"
```

Now, you can initialize any model available on OpenRouter using the `OpenAIModel` class:
```python
from smolagents import OpenAIModel

model = OpenAIModel(
    # You can use any model ID available on OpenRouter
    model_id="openai/gpt-4o",
    # OpenRouter API base URL
    api_base="https://openrouter.ai/api/v1",
    api_key=OPENROUTER_API_KEY,
)
```

## Using xAI's Grok Models

xAI's Grok models can be accessed through [LiteLLMModel](/docs/smolagents/main/en/reference/models#smolagents.LiteLLMModel).

Some models (such as "grok-4" and "grok-3-mini") don't support the `stop` parameter, so you'll need to use
`REMOVE_PARAMETER` to exclude it from API calls.

First, install the required dependencies:
```bash
pip install 'smolagents[litellm]'
```

Then, [get an xAI API key](https://console.x.ai/) and set it in your code:
```python
XAI_API_KEY = "<YOUR-XAI-API-KEY>"
```

Now, you can initialize Grok models using the `LiteLLMModel` class and remove the `stop` parameter if applicable:
```python
from smolagents import LiteLLMModel, REMOVE_PARAMETER

# Using Grok-4
model = LiteLLMModel(
    model_id="xai/grok-4",
    api_key=XAI_API_KEY,
    stop=REMOVE_PARAMETER,  # Remove stop parameter as grok-4 model doesn't support it
    temperature=0.7
)

# Or using Grok-3-mini
model_mini = LiteLLMModel(
    model_id="xai/grok-3-mini",
    api_key=XAI_API_KEY,
    stop=REMOVE_PARAMETER,  # Remove stop parameter as grok-3-mini model doesn't support it
    max_tokens=1000
)
```


<EditOnGithub source="https://github.com/huggingface/smolagents/blob/main/docs/source/en/examples/using_different_models.md" />

### Building good agents
https://huggingface.co/docs/smolagents/main/tutorials/building_good_agents.md

# Building good agents


There's a world of difference between building an agent that works and one that doesn't.
How can we build agents that fall into the former category?
In this guide, we're going to talk about best practices for building agents.

> [!TIP]
> If you're new to building agents, make sure to first read the [intro to agents](../conceptual_guides/intro_agents) and the [guided tour of smolagents](../guided_tour).

### The best agentic systems are the simplest: simplify the workflow as much as you can

Giving an LLM some agency in your workflow introduces some risk of errors.

Well-programmed agentic systems have good error logging and retry mechanisms anyway, so the LLM engine has a chance to self-correct its mistakes. But to reduce the risk of LLM error as much as possible, you should simplify your workflow!

Let's revisit the example from the [intro to agents](../conceptual_guides/intro_agents): a bot that answers user queries for a surf trip company.
Instead of having the agent make two separate calls to a "travel distance API" and a "weather API" each time it is asked about a new surf spot, you could make one unified tool, `return_spot_information`: a function that calls both APIs at once and returns their concatenated outputs to the user.

This will reduce costs, latency, and error risk!

The main guideline is: Reduce the number of LLM calls as much as you can.

This leads to a few takeaways:
- Whenever possible, group two tools into one, as in our example of the two APIs.
- Whenever possible, logic should be based on deterministic functions rather than agentic decisions.
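The surf-trip example above could be sketched as follows, with dummy stand-ins for both APIs (the return strings are illustrative; in smolagents you would decorate the unified function with `@tool` and give it a proper docstring):

```python
def get_travel_distance(spot: str) -> str:
    # Dummy stand-in for the travel distance API
    return f"Travel distance to {spot}: 120 km"

def get_weather(spot: str) -> str:
    # Dummy stand-in for the weather API
    return f"Weather at {spot}: sunny, 1.5 m waves"

def return_spot_information(spot: str) -> str:
    """One unified tool: calls both APIs and returns their concatenated outputs,
    so answering a query costs a single tool call instead of two."""
    return get_travel_distance(spot) + "\n" + get_weather(spot)

print(return_spot_information("Taghazout"))
```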

### Improve the information flow to the LLM engine

Remember that your LLM engine is like an *intelligent* robot trapped in a room, whose only communication with the outside world is notes passed under a door.

It won't know about anything that has happened unless you explicitly put it into its prompt.

So start by making your task very clear!
Since an agent is powered by an LLM, minor variations in your task formulation might yield completely different results.

Then, improve the information flow towards your agent in tool use.

Particular guidelines to follow:
- Each tool should log (by simply using `print` statements inside the tool's `forward` method) everything that could be useful for the LLM engine.
  - In particular, logging detail on tool execution errors would help a lot!

For instance, here's a tool that retrieves weather data based on location and date-time:

First, here's a poor version:
```python
from datetime import datetime
from smolagents import tool

def get_weather_report_at_coordinates(coordinates, date_time):
    # Dummy function, returns a list of [temperature in °C, risk of rain on a scale 0-1, wave height in m]
    return [28.0, 0.35, 0.85]

def convert_location_to_coordinates(location):
    # Returns dummy coordinates
    return [3.3, -42.0]

@tool
def get_weather_api(location: str, date_time: str) -> str:
    """
    Returns the weather report.

    Args:
        location: the name of the place that you want the weather for.
        date_time: the date and time for which you want the report.
    """
    lon, lat = convert_location_to_coordinates(location)
    date_time = datetime.strptime(date_time)
    return str(get_weather_report_at_coordinates((lon, lat), date_time))
```

Why is it bad?
- there's no indication of the format that should be used for `date_time`.
- there's no detail on how the location should be specified.
- there's no logging mechanism to surface explicit failure cases, like the location or the `date_time` not being properly formatted.
- the output format is hard to understand.

If the tool call fails, the error trace logged in memory can help the LLM reverse engineer the tool to fix the errors. But why leave it with so much heavy lifting to do?

A better way to build this tool would have been the following:
```python
@tool
def get_weather_api(location: str, date_time: str) -> str:
    """
    Returns the weather report.

    Args:
        location: the name of the place that you want the weather for. Should be a place name, followed by possibly a city name, then a country, like "Anchor Point, Taghazout, Morocco".
        date_time: the date and time for which you want the report, formatted as '%m/%d/%y %H:%M:%S'.
    """
    lon, lat = convert_location_to_coordinates(location)
    try:
        date_time = datetime.strptime(date_time, "%m/%d/%y %H:%M:%S")
    except Exception as e:
        raise ValueError("Conversion of `date_time` to datetime format failed, make sure to provide a string in format '%m/%d/%y %H:%M:%S'. Full trace:" + str(e))
    temperature_celsius, risk_of_rain, wave_height = get_weather_report_at_coordinates((lon, lat), date_time)
    return f"Weather report for {location}, {date_time}: Temperature will be {temperature_celsius}°C, risk of rain is {risk_of_rain*100:.0f}%, wave height is {wave_height}m."
```
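With this validation in place, a malformed call fails loudly with actionable guidance instead of an obscure traceback. Here is a self-contained check of that pattern (the helper name is illustrative):

```python
from datetime import datetime

def parse_report_datetime(date_time: str) -> datetime:
    """Same validation pattern as in the improved tool above."""
    try:
        return datetime.strptime(date_time, "%m/%d/%y %H:%M:%S")
    except Exception as e:
        raise ValueError(
            "Conversion of `date_time` to datetime format failed, make sure to provide "
            "a string in format '%m/%d/%y %H:%M:%S'. Full trace: " + str(e)
        )

print(parse_report_datetime("07/15/24 14:30:00"))  # 2024-07-15 14:30:00
try:
    parse_report_datetime("tomorrow afternoon")
except ValueError as err:
    print(err)  # The agent sees exactly which format to retry with
```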

In general, to ease the load on your LLM, a good question to ask yourself is: "How easy would it be for me, if I were using this tool for the first time, to program with it and correct my own errors?"

### Give more arguments to the agent

To pass some additional objects to your agent beyond the simple string describing the task, you can use the `additional_args` argument to pass any type of object:

```py
from smolagents import CodeAgent, InferenceClientModel

model_id = "meta-llama/Llama-3.3-70B-Instruct"

agent = CodeAgent(tools=[], model=InferenceClientModel(model_id=model_id), add_base_tools=True)

agent.run(
    "Why does Mike not know many people in New York?",
    additional_args={"mp3_sound_file_url":'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/recording.mp3'}
)
```
For instance, you can use this `additional_args` argument to pass images or strings that you want your agent to leverage.



## How to debug your agent

### 1. Use a stronger LLM

In agentic workflows, some errors are actual errors, while others are the fault of your LLM engine not reasoning properly.
For instance, consider this trace for a `CodeAgent` that I asked to create a car picture:
```
==================================================================================================== New task ====================================================================================================
Make me a cool car picture
──────────────────────────────────────────────────────────────────────────────────────────────────── New step ────────────────────────────────────────────────────────────────────────────────────────────────────
Agent is executing the code below: ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
image_generator(prompt="A cool, futuristic sports car with LED headlights, aerodynamic design, and vibrant color, high-res, photorealistic")
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Last output from code snippet: ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
/var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png
Step 1:

- Time taken: 16.35 seconds
- Input tokens: 1,383
- Output tokens: 77
──────────────────────────────────────────────────────────────────────────────────────────────────── New step ────────────────────────────────────────────────────────────────────────────────────────────────────
Agent is executing the code below: ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
final_answer("/var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png")
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Print outputs:

Last output from code snippet: ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
/var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png
Final answer:
/var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png
```
The user sees a path returned to them instead of an image.
It could look like a bug in the system, but actually the agentic system didn't cause the error: the LLM simply made the mistake of not saving the image output into a variable.
Thus it cannot access the image again except via the path that was logged while saving it, so it returns the path instead of the image.

The first step to debugging your agent is thus to use a more powerful LLM. Alternatives like `Qwen/Qwen2.5-72B-Instruct` wouldn't have made that mistake.

### 2. Provide more information or specific instructions

You can also use less powerful models, provided you guide them more effectively.

Put yourself in your model's shoes: if you were the model solving the task, would you struggle with the information available to you (from the system prompt, the task formulation, and the tool descriptions)?

Would you need detailed instructions?

- If the instruction is to always be given to the agent (as we generally understand a system prompt to work): you can pass it as a string under argument `instructions` upon agent initialization. *(Note: instructions are appended to the system prompt, not replacing it.)*
- If it's about a specific task to solve: add all these details to the task. The task could be very long, like dozens of pages.
- If it's about how to use specific tools: include it in the `description` attribute of these tools.


### 3. Change the prompt templates (generally not advised)

If the above clarifications are not sufficient, you can change the agent's prompt templates.

Let's see how this works. For example, let's check the default prompt templates for the [CodeAgent](/docs/smolagents/main/en/reference/agents#smolagents.CodeAgent) (the version below is shortened by skipping the zero-shot examples).

```python
print(agent.prompt_templates["system_prompt"])
```
Here is what you get:
```text
You are an expert assistant who can solve any task using code blobs. You will be given a task to solve as best you can.
To do so, you have been given access to a list of tools: these tools are basically Python functions which you can call with code.
To solve the task, you must plan forward to proceed in a series of steps, in a cycle of Thought, Code, and Observation sequences.

At each step, in the 'Thought:' sequence, you should first explain your reasoning towards solving the task and the tools that you want to use.
Then in the Code sequence you should write the code in simple Python. The code sequence must be opened with '{{code_block_opening_tag}}', and closed with '{{code_block_closing_tag}}'.
During each intermediate step, you can use 'print()' to save whatever important information you will then need.
These print outputs will then appear in the 'Observation:' field, which will be available as input for the next step.
In the end you have to return a final answer using the `final_answer` tool.

Here are a few examples using notional tools:
---
Task: "Generate an image of the oldest person in this document."

Thought: I will proceed step by step and use the following tools: `document_qa` to find the oldest person in the document, then `image_generator` to generate an image according to the answer.
{{code_block_opening_tag}}
answer = document_qa(document=document, question="Who is the oldest person mentioned?")
print(answer)
{{code_block_closing_tag}}
Observation: "The oldest person in the document is John Doe, a 55 year old lumberjack living in Newfoundland."

Thought: I will now generate an image showcasing the oldest person.
{{code_block_opening_tag}}
image = image_generator("A portrait of John Doe, a 55-year-old man living in Canada.")
final_answer(image)
{{code_block_closing_tag}}

---
Task: "What is the result of the following operation: 5 + 3 + 1294.678?"

Thought: I will use python code to compute the result of the operation and then return the final answer using the `final_answer` tool
{{code_block_opening_tag}}
result = 5 + 3 + 1294.678
final_answer(result)
{{code_block_closing_tag}}

---
Task:
"Answer the question in the variable `question` about the image stored in the variable `image`. The question is in French.
You have been provided with these additional arguments, that you can access using the keys as variables in your python code:
{'question': 'Quel est l'animal sur l'image?', 'image': 'path/to/image.jpg'}"

Thought: I will use the following tools: `translator` to translate the question into English and then `image_qa` to answer the question on the input image.
{{code_block_opening_tag}}
translated_question = translator(question=question, src_lang="French", tgt_lang="English")
print(f"The translated question is {translated_question}.")
answer = image_qa(image=image, question=translated_question)
final_answer(f"The answer is {answer}")
{{code_block_closing_tag}}

---
Task:
In a 1979 interview, Stanislaus Ulam discusses with Martin Sherwin about other great physicists of his time, including Oppenheimer.
What does he say was the consequence of Einstein learning too much math on his creativity, in one word?

Thought: I need to find and read the 1979 interview of Stanislaus Ulam with Martin Sherwin.
{{code_block_opening_tag}}
pages = web_search(query="1979 interview Stanislaus Ulam Martin Sherwin physicists Einstein")
print(pages)
{{code_block_closing_tag}}
Observation:
No result found for query "1979 interview Stanislaus Ulam Martin Sherwin physicists Einstein".

Thought: The query was maybe too restrictive and did not find any results. Let's try again with a broader query.
{{code_block_opening_tag}}
pages = web_search(query="1979 interview Stanislaus Ulam")
print(pages)
{{code_block_closing_tag}}
Observation:
Found 6 pages:
[Stanislaus Ulam 1979 interview](https://ahf.nuclearmuseum.org/voices/oral-histories/stanislaus-ulams-interview-1979/)

[Ulam discusses Manhattan Project](https://ahf.nuclearmuseum.org/manhattan-project/ulam-manhattan-project/)

(truncated)

Thought: I will read the first 2 pages to know more.
{{code_block_opening_tag}}
for url in ["https://ahf.nuclearmuseum.org/voices/oral-histories/stanislaus-ulams-interview-1979/", "https://ahf.nuclearmuseum.org/manhattan-project/ulam-manhattan-project/"]:
    whole_page = visit_webpage(url)
    print(whole_page)
    print("\n" + "="*80 + "\n")  # Print separator between pages
{{code_block_closing_tag}}
Observation:
Manhattan Project Locations:
Los Alamos, NM
Stanislaus Ulam was a Polish-American mathematician. He worked on the Manhattan Project at Los Alamos and later helped design the hydrogen bomb. In this interview, he discusses his work at
(truncated)

Thought: I now have the final answer: from the webpages visited, Stanislaus Ulam says of Einstein: "He learned too much mathematics and sort of diminished, it seems to me personally, it seems to me his purely physics creativity." Let's answer in one word.
{{code_block_opening_tag}}
final_answer("diminished")
{{code_block_closing_tag}}

---
Task: "Which city has the highest population: Guangzhou or Shanghai?"

Thought: I need to get the populations for both cities and compare them: I will use the tool `web_search` to get the population of both cities.
{{code_block_opening_tag}}
for city in ["Guangzhou", "Shanghai"]:
    print(f"Population {city}:", web_search(f"{city} population"))
{{code_block_closing_tag}}
Observation:
Population Guangzhou: ['Guangzhou has a population of 15 million inhabitants as of 2021.']
Population Shanghai: '26 million (2019)'

Thought: Now I know that Shanghai has the highest population.
{{code_block_opening_tag}}
final_answer("Shanghai")
{{code_block_closing_tag}}

---
Task: "What is the current age of the pope, raised to the power 0.36?"

Thought: I will use the tool `wikipedia_search` to get the age of the pope, and confirm that with a web search.
{{code_block_opening_tag}}
pope_age_wiki = wikipedia_search(query="current pope age")
print("Pope age as per wikipedia:", pope_age_wiki)
pope_age_search = web_search(query="current pope age")
print("Pope age as per google search:", pope_age_search)
{{code_block_closing_tag}}
Observation:
Pope age: "The pope Francis is currently 88 years old."

Thought: I know that the pope is 88 years old. Let's compute the result using python code.
{{code_block_opening_tag}}
pope_current_age = 88 ** 0.36
final_answer(pope_current_age)
{{code_block_closing_tag}}

Above example were using notional tools that might not exist for you. On top of performing computations in the Python code snippets that you create, you only have access to these tools, behaving like regular python functions:
{{code_block_opening_tag}}
{%- for tool in tools.values() %}
{{ tool.to_code_prompt() }}
{% endfor %}
{{code_block_closing_tag}}

{%- if managed_agents and managed_agents.values() | list %}
You can also give tasks to team members.
Calling a team member works similarly to calling a tool: provide the task description as the 'task' argument. Since this team member is a real human, be as detailed and verbose as necessary in your task description.
You can also include any relevant variables or context using the 'additional_args' argument.
Here is a list of the team members that you can call:
{{code_block_opening_tag}}
{%- for agent in managed_agents.values() %}
def {{ agent.name }}(task: str, additional_args: dict[str, Any]) -> str:
    """{{ agent.description }}

    Args:
        task: Long detailed description of the task.
        additional_args: Dictionary of extra inputs to pass to the managed agent, e.g. images, dataframes, or any other contextual data it may need.
    """
{% endfor %}
{{code_block_closing_tag}}
{%- endif %}

Here are the rules you should always follow to solve your task:
1. Always provide a 'Thought:' sequence, and a '{{code_block_opening_tag}}' sequence ending with '{{code_block_closing_tag}}', else you will fail.
2. Use only variables that you have defined!
3. Always use the right arguments for the tools. DO NOT pass the arguments as a dict as in 'answer = wikipedia_search({'query': "What is the place where James Bond lives?"})', but use the arguments directly as in 'answer = wikipedia_search(query="What is the place where James Bond lives?")'.
4. For tools WITHOUT JSON output schema: Take care to not chain too many sequential tool calls in the same code block, as their output format is unpredictable. For instance, a call to wikipedia_search without a JSON output schema has an unpredictable return format, so do not have another tool call that depends on its output in the same block: rather output results with print() to use them in the next block.
5. For tools WITH JSON output schema: You can confidently chain multiple tool calls and directly access structured output fields in the same code block! When a tool has a JSON output schema, you know exactly what fields and data types to expect, allowing you to write robust code that directly accesses the structured response (e.g., result['field_name']) without needing intermediate print() statements.
6. Call a tool only when needed, and never re-do a tool call that you previously did with the exact same parameters.
7. Don't name any new variable with the same name as a tool: for instance don't name a variable 'final_answer'.
8. Never create any notional variables in our code, as having these in your logs will derail you from the true variables.
9. You can use imports in your code, but only from the following list of modules: {{authorized_imports}}
10. The state persists between code executions: so if in one step you've created variables or imported modules, these will all persist.
11. Don't give up! You're in charge of solving the task, not providing directions to solve it.

{%- if custom_instructions %}
{{custom_instructions}}
{%- endif %}

Now Begin!
```

As you can see, there are Jinja placeholders like `{{ tool.to_code_prompt() }}`: these are used upon agent initialization to insert automatically generated descriptions of the tools or managed agents.

So while you can overwrite this system prompt template by passing your custom prompt as an argument to the `system_prompt` parameter, your new system prompt can contain the following placeholders:
- To insert tool descriptions:
  ```
  {%- for tool in tools.values() %}
  - {{ tool.to_tool_calling_prompt() }}
  {%- endfor %}
  ```
- To insert the descriptions for managed agents if there are any:
  ```
  {%- if managed_agents and managed_agents.values() | list %}
  You can also give tasks to team members.
  Calling a team member works similarly to calling a tool: provide the task description as the 'task' argument. Since this team member is a real human, be as detailed and verbose as necessary in your task description.
  You can also include any relevant variables or context using the 'additional_args' argument.
  Here is a list of the team members that you can call:
  {%- for agent in managed_agents.values() %}
  - {{ agent.name }}: {{ agent.description }}
  {%- endfor %}
  {%- endif %}
  ```
- For `CodeAgent` only, to insert the list of authorized imports: `"{{authorized_imports}}"`

Then you can change the system prompt as follows:

```py
agent.prompt_templates["system_prompt"] = agent.prompt_templates["system_prompt"] + "\nHere you go!"
```

This also works with the [ToolCallingAgent](/docs/smolagents/main/en/reference/agents#smolagents.ToolCallingAgent).

But generally it's just simpler to pass the `instructions` argument upon agent initialization, like:
```py
agent = CodeAgent(tools=[], model=InferenceClientModel(model_id=model_id), instructions="Always talk like a 5 year old.")
```

Note that `instructions` are appended to the system prompt, not replacing it.


### 4. Extra planning

We provide a supplementary planning step, which an agent can run regularly in between normal action steps. In this step, there is no tool call: the LLM is simply asked to update a list of facts it knows and to reflect on what steps it should take next based on those facts.

```py
from smolagents import load_tool, CodeAgent, InferenceClientModel, WebSearchTool
from dotenv import load_dotenv

load_dotenv()

# Import tool from Hub
image_generation_tool = load_tool("m-ric/text-to-image", trust_remote_code=True)

search_tool = WebSearchTool()

agent = CodeAgent(
    tools=[search_tool, image_generation_tool],
    model=InferenceClientModel(model_id="Qwen/Qwen2.5-72B-Instruct"),
    planning_interval=3 # This is where you activate planning!
)

# Run it!
result = agent.run(
    "How long would a cheetah at full speed take to run the length of Pont Alexandre III?",
)
```


<EditOnGithub source="https://github.com/huggingface/smolagents/blob/main/docs/source/en/tutorials/building_good_agents.md" />

### 📚 Manage your agent's memory
https://huggingface.co/docs/smolagents/main/tutorials/memory.md

# 📚 Manage your agent's memory


In the end, an agent can be defined by simple components: it has tools and prompts.
Most importantly, it has a memory of past steps, forming a history of planning, execution, and errors.

### Replay your agent's memory

We provide several features to inspect a past agent run.

You can instrument the agent's run to display it in a great UI that lets you zoom in/out on specific steps, as highlighted in the [instrumentation guide](./inspect_runs).

You can also use `agent.replay()`, as follows:

After the agent has run:
```py
from smolagents import InferenceClientModel, CodeAgent

agent = CodeAgent(tools=[], model=InferenceClientModel(), verbosity_level=0)

result = agent.run("What's the 20th Fibonacci number?")
```

If you want to replay this last run, just use:
```py
agent.replay()
```

### Dynamically change the agent's memory

Many advanced use cases require dynamic modification of the agent's memory.

You can access the agent's memory using:

```py
from smolagents import ActionStep

system_prompt_step = agent.memory.system_prompt
print("The system prompt given to the agent was:")
print(system_prompt_step.system_prompt)

task_step = agent.memory.steps[0]
print("\n\nThe first task step was:")
print(task_step.task)

for step in agent.memory.steps:
    if isinstance(step, ActionStep):
        if step.error is not None:
            print(f"\nStep {step.step_number} got this error:\n{step.error}\n")
        else:
            print(f"\nStep {step.step_number} got these observations:\n{step.observations}\n")
```

Use `agent.memory.get_full_steps()` to get full steps as dictionaries.
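As a rough, standalone illustration (plain Python dataclasses, not the actual smolagents classes), getting the full steps amounts to serializing each step object into a plain dictionary:

```python
from dataclasses import asdict, dataclass, field
from typing import Optional

# Toy stand-ins for smolagents memory steps, for illustration only
@dataclass
class ToyActionStep:
    step_number: int
    observations: str = ""
    error: Optional[str] = None

@dataclass
class ToyMemory:
    steps: list = field(default_factory=list)

    def get_full_steps(self):
        # Serialize every step object into a plain dictionary
        return [asdict(step) for step in self.steps]

memory = ToyMemory(steps=[ToyActionStep(step_number=1, observations="6765")])
print(memory.get_full_steps())
# [{'step_number': 1, 'observations': '6765', 'error': None}]
```

The real step objects carry more fields (model inputs/outputs, timing, images), but the dict-per-step shape is the same idea.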

You can also use step callbacks to dynamically change the agent's memory.

Step callbacks can access the `agent` itself in their arguments, so they can access any memory step as highlighted above and change it if needed. For instance, say you are observing screenshots of each step performed by a web browser agent: you want to log the newest screenshot, and remove the images from older steps to save on token costs.

You could run something like the following.
_Note: this code is incomplete, some imports and object definitions have been removed for the sake of concision, visit [the original script](https://github.com/huggingface/smolagents/blob/main/src/smolagents/vision_web_browser.py) to get the full working code._

```py
import helium
from PIL import Image
from io import BytesIO
from time import sleep

def update_screenshot(memory_step: ActionStep, agent: CodeAgent) -> None:
    sleep(1.0)  # Let JavaScript animations happen before taking the screenshot
    driver = helium.get_driver()
    latest_step = memory_step.step_number
    for previous_memory_step in agent.memory.steps:  # Remove previous screenshots from logs for lean processing
        if isinstance(previous_memory_step, ActionStep) and previous_memory_step.step_number <= latest_step - 2:
            previous_memory_step.observations_images = None
    png_bytes = driver.get_screenshot_as_png()
    image = Image.open(BytesIO(png_bytes))
    memory_step.observations_images = [image.copy()]
```

Then you should pass this function in the `step_callbacks` argument upon initialization of your agent:

```py
CodeAgent(
    tools=[WebSearchTool(), go_back, close_popups, search_item_ctrl_f],
    model=model,
    additional_authorized_imports=["helium"],
    step_callbacks=[update_screenshot],
    max_steps=20,
    verbosity_level=2,
)
```

Head to our [vision web browser code](https://github.com/huggingface/smolagents/blob/main/src/smolagents/vision_web_browser.py) to see the full working example.

### Run agents one step at a time

This can be useful when a tool call takes a long time (even days) to complete: you can run your agent step by step.
This also lets you update the memory at each step.

```py
from smolagents import InferenceClientModel, CodeAgent, ActionStep, TaskStep

agent = CodeAgent(tools=[], model=InferenceClientModel(), verbosity_level=1)
agent.python_executor.send_tools({**agent.tools})
print(agent.memory.system_prompt)

task = "What is the 20th Fibonacci number?"

# You could modify the memory as needed here by inputting the memory of another agent.
# agent.memory.steps = previous_agent.memory.steps

# Let's start a new task!
agent.memory.steps.append(TaskStep(task=task, task_images=[]))

final_answer = None
step_number = 1
while final_answer is None and step_number <= 10:
    memory_step = ActionStep(
        step_number=step_number,
        observations_images=[],
    )
    # Run one step.
    final_answer = agent.step(memory_step)
    agent.memory.steps.append(memory_step)
    step_number += 1

    # Change the memory as you please!
    # For instance to update the latest step:
    # agent.memory.steps[-1] = ...

print("The final answer is:", final_answer)
```


<EditOnGithub source="https://github.com/huggingface/smolagents/blob/main/docs/source/en/tutorials/memory.md" />

### Tools
https://huggingface.co/docs/smolagents/main/tutorials/tools.md

# Tools


Here, we're going to see advanced tool usage.

> [!TIP]
> If you're new to building agents, make sure to first read the [intro to agents](../conceptual_guides/intro_agents) and the [guided tour of smolagents](../guided_tour).


### What is a tool, and how to build one?

A tool is essentially a function that an LLM can use in an agentic system.

But to use it, the LLM needs to be given an API: a name, a tool description, input types and descriptions, and an output type.

So it cannot be only a function; it needs to carry metadata, which is why it is a class.

At its core, a tool is a class that wraps a function with metadata that helps the LLM understand how to use it.

Here's how it looks:

```python
from smolagents import Tool

class HFModelDownloadsTool(Tool):
    name = "model_download_counter"
    description = """
    This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub.
    It returns the name of the checkpoint."""
    inputs = {
        "task": {
            "type": "string",
            "description": "the task category (such as text-classification, depth-estimation, etc)",
        }
    }
    output_type = "string"

    def forward(self, task: str):
        from huggingface_hub import list_models

        model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
        return model.id

model_downloads_tool = HFModelDownloadsTool()
```

The custom tool subclasses [Tool](/docs/smolagents/main/en/reference/tools#smolagents.Tool) to inherit useful methods. The child class also defines:
- An attribute `name`, which corresponds to the name of the tool itself. The name usually describes what the tool does. Since the code returns the model with the most downloads for a task, let's name it `model_download_counter`.
- A `description` attribute, which is used to populate the agent's system prompt.
- An `inputs` attribute, a dictionary mapping each argument name to a dict with keys `"type"` and `"description"`. It contains information that helps the Python interpreter make educated choices about the input.
- An `output_type` attribute, which specifies the output type. The types for both `inputs` and `output_type` follow [JSON schema](https://docs.pydantic.dev/latest/concepts/json_schema/#generating-json-schema) conventions; they can be any of: `["string", "boolean", "integer", "number", "image", "audio", "array", "object", "any", "null"]`.
- A `forward` method which contains the inference code to be executed.

And that's all it needs to be used in an agent!
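To see why this metadata matters, here is a standalone sketch (plain Python, not the actual smolagents prompt template) of how a tool's attributes could be rendered into the API description an LLM receives:

```python
def render_tool_for_prompt(name: str, description: str, inputs: dict, output_type: str) -> str:
    """Render tool metadata into a one-line API description for an LLM prompt."""
    args = ", ".join(
        f"{arg}: {spec['type']} ({spec['description']})" for arg, spec in inputs.items()
    )
    return f"- {name}({args}) -> {output_type}: {description.strip()}"

print(render_tool_for_prompt(
    name="model_download_counter",
    description="Returns the most downloaded model of a given task on the Hugging Face Hub.",
    inputs={"task": {"type": "string", "description": "the task category"}},
    output_type="string",
))
```

This is why every attribute is mandatory: drop any one of them, and the LLM has no way to know when or how to call the tool.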

There's another way to build a tool. In the [guided_tour](../guided_tour), we implemented a tool using the `@tool` decorator. The [tool()](/docs/smolagents/main/en/reference/tools#smolagents.tool) decorator is the recommended way to define simple tools, but sometimes you need more than this: using several methods in a class for more clarity, or using additional class attributes.

In this case, you can build your tool by subclassing [Tool](/docs/smolagents/main/en/reference/tools#smolagents.Tool) as described above.

### Share your tool to the Hub

You can share your custom tool to the Hub as a Space repository by calling [push_to_hub()](/docs/smolagents/main/en/reference/tools#smolagents.Tool.push_to_hub) on the tool. Make sure you've created a repository for it on the Hub and are using a token with write access.

```python
model_downloads_tool.push_to_hub("{your_username}/hf-model-downloads", token="<YOUR_HUGGINGFACEHUB_API_TOKEN>")
```

For the push to Hub to work, your tool will need to respect some rules:
- All methods are self-contained, i.e. they only use variables that come from their arguments.
- As per the above point, **all imports should be defined directly within the tool's functions**, else you will get an error when trying to call [save()](/docs/smolagents/main/en/reference/tools#smolagents.Tool.save) or [push_to_hub()](/docs/smolagents/main/en/reference/tools#smolagents.Tool.push_to_hub) with your custom tool.
- If you override the `__init__` method, you can give it no argument other than `self`. This is because arguments set during a specific tool instance's initialization are hard to track, which prevents sharing them properly to the Hub. And anyway, the idea of making a specific class is that you can already set class attributes for anything you need to hard-code (just set `your_variable=(...)` directly under the `class YourTool(Tool):` line). And of course you can still create an instance attribute anywhere in your code by assigning to `self.your_variable`.


Once your tool is pushed to the Hub, you can visualize it. [Here](https://huggingface.co/spaces/m-ric/hf-model-downloads) is the `model_downloads_tool` that I've pushed. It has a nice Gradio interface.

When diving into the tool files, you can find that all the tool's logic is under [tool.py](https://huggingface.co/spaces/m-ric/hf-model-downloads/blob/main/tool.py). That is where you can inspect a tool shared by someone else.

Then you can load the tool with [load_tool()](/docs/smolagents/main/en/reference/tools#smolagents.load_tool) or create it with [from_hub()](/docs/smolagents/main/en/reference/tools#smolagents.Tool.from_hub) and pass it to the `tools` parameter in your agent.
Since running tools means running custom code, you need to make sure you trust the repository. This is why loading a tool from the Hub requires passing `trust_remote_code=True`.

```python
from smolagents import load_tool, CodeAgent

model_download_tool = load_tool(
    "{your_username}/hf-model-downloads",
    trust_remote_code=True
)
```

### Use tools from an MCP server

Our `MCPClient` allows you to load tools from an MCP server, and gives you full control over the connection and tool management:

For stdio-based MCP servers:
```python
from smolagents import MCPClient, CodeAgent
from mcp import StdioServerParameters
import os

server_parameters = StdioServerParameters(
    command="uvx",  # Using uvx ensures dependencies are available
    args=["--quiet", "pubmedmcp@0.1.3"],
    env={"UV_PYTHON": "3.12", **os.environ},
)

with MCPClient(server_parameters) as tools:
    agent = CodeAgent(tools=tools, model=model, add_base_tools=True)
    agent.run("Please find the latest research on COVID-19 treatment.")
```

For Streamable HTTP-based MCP servers:
```python
from smolagents import MCPClient, CodeAgent

with MCPClient({"url": "http://127.0.0.1:8000/mcp", "transport": "streamable-http"}) as tools:
    agent = CodeAgent(tools=tools, model=model, add_base_tools=True)
    agent.run("Please find a remedy for hangover.")
```

You can also manage the connection lifecycle manually with the `try...finally` pattern:

```python
from smolagents import MCPClient, CodeAgent
from mcp import StdioServerParameters
import os

# Initialize server parameters
server_parameters = StdioServerParameters(
    command="uvx",
    args=["--quiet", "pubmedmcp@0.1.3"],
    env={"UV_PYTHON": "3.12", **os.environ},
)

# Manually manage the connection
try:
    mcp_client = MCPClient(server_parameters)
    tools = mcp_client.get_tools()

    # Use the tools with your agent
    agent = CodeAgent(tools=tools, model=model, add_base_tools=True)
    result = agent.run("What are the recent therapeutic approaches for Alzheimer's disease?")

    # Process the result as needed
    print(f"Agent response: {result}")
finally:
    # Always ensure the connection is properly closed
    mcp_client.disconnect()
```

You can also connect to multiple MCP servers at once by passing a list of server parameters:
```python
from smolagents import MCPClient, CodeAgent
from mcp import StdioServerParameters
import os

server_params1 = StdioServerParameters(
    command="uvx",
    args=["--quiet", "pubmedmcp@0.1.3"],
    env={"UV_PYTHON": "3.12", **os.environ},
)

server_params2 = {"url": "http://127.0.0.1:8000/sse"}

with MCPClient([server_params1, server_params2]) as tools:
    agent = CodeAgent(tools=tools, model=model, add_base_tools=True)
    agent.run("Please analyze the latest research and suggest remedies for headaches.")
```

> [!WARNING]
> **Security Warning:** Always verify the source and integrity of any MCP server before connecting to it, especially for production environments.
> Using MCP servers comes with security risks:
> - **Trust is essential:** Only use MCP servers from trusted sources. Malicious servers can execute harmful code on your machine.
> - **Stdio-based MCP servers** will always execute code on your machine (that's their intended functionality).
> - **Streamable HTTP-based MCP servers:** While remote MCP servers will not execute code on your machine, still proceed with caution.

#### Structured Output and Output Schema Support

The latest [MCP specifications (2025-06-18+)](https://modelcontextprotocol.io/specification/2025-06-18/server/tools#structured-content) include support for `outputSchema`, which enables tools to return structured data with defined schemas. `smolagents` takes advantage of these structured output capabilities, allowing agents to work with tools that return complex data structures, JSON objects, and other structured formats. With this feature, the agent's LLM can "see" the structure of the tool output before calling a tool, enabling more intelligent and context-aware interactions.

To enable structured output support, pass `structured_output=True` when initializing the `MCPClient`:

```python
from smolagents import MCPClient, CodeAgent

# Enable structured output support
with MCPClient(server_parameters, structured_output=True) as tools:
    agent = CodeAgent(tools=tools, model=model, add_base_tools=True)
    agent.run("Get weather information for Paris")
```

When `structured_output=True`, the following features are enabled:
- **Output Schema Support**: Tools can define JSON schemas for their outputs
- **Structured Content Handling**: Support for `structuredContent` in MCP responses
- **JSON Parsing**: Automatic parsing of structured data from tool responses
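As a rough sketch of what structured content handling involves (the response shape below is modeled on the MCP spec's `structuredContent` field; the parsing logic is illustrative, not the smolagents implementation):

```python
import json

def extract_tool_output(response: dict):
    """Prefer structured content when present; fall back to parsing text blocks."""
    if "structuredContent" in response:
        return response["structuredContent"]  # already a parsed JSON object
    # Fall back: concatenate text blocks and try to parse them as JSON
    text = "".join(block["text"] for block in response.get("content", []))
    try:
        return json.loads(text)  # some servers return JSON as plain text
    except json.JSONDecodeError:
        return text

structured = {"structuredContent": {"location": "Paris", "temperature": 22.5}}
print(extract_tool_output(structured)["temperature"])  # 22.5
```

With `structured_output=False`, only the text fallback path applies, which is why existing code keeps working unchanged.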

Here's an example using a weather MCP server with structured output:

```python
# demo/weather.py - Example MCP server with structured output
from pydantic import BaseModel, Field
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Weather Service")

class WeatherInfo(BaseModel):
    location: str = Field(description="The location name")
    temperature: float = Field(description="Temperature in Celsius")
    conditions: str = Field(description="Weather conditions")
    humidity: int = Field(description="Humidity percentage", ge=0, le=100)

@mcp.tool(
    name="get_weather_info",
    description="Get weather information for a location as structured data.",
    # structured_output=True is enabled by default in FastMCP
)
def get_weather_info(city: str) -> WeatherInfo:
    """Get weather information for a city."""
    return WeatherInfo(
        location=city,
        temperature=22.5,
        conditions="partly cloudy",
        humidity=65
    )
```

Agent using output schema and structured output:

```python
from smolagents import MCPClient, CodeAgent

# Using the weather server with structured output
from mcp import StdioServerParameters

server_parameters = StdioServerParameters(
    command="python",
    args=["demo/weather.py"]
)

with MCPClient(server_parameters, structured_output=True) as tools:
    agent = CodeAgent(tools=tools, model=model)
    result = agent.run("What is the temperature in Tokyo in Fahrenheit?")
    print(result)
```

When structured output is enabled, the `CodeAgent` system prompt is enhanced to include JSON schema information for tools, helping the agent understand the expected structure of tool outputs and access the data appropriately.

**Backwards Compatibility**: The `structured_output` parameter currently defaults to `False` to maintain backwards compatibility. Existing code will continue to work without changes, receiving simple text outputs as before.

**Future Change**: In a future release, the default value of `structured_output` will change from `False` to `True`. It is recommended to explicitly set `structured_output=True` to opt into the enhanced functionality, which provides better tool output handling and improved agent performance. Use `structured_output=False` only if you specifically need to maintain the current text-only behavior.

### Import a Space as a tool

You can directly import a Gradio Space from the Hub as a tool using the [Tool.from_space()](/docs/smolagents/main/en/reference/tools#smolagents.Tool.from_space) method!

You only need to provide the id of the Space on the Hub, its name, and a description that will help your agent understand what the tool does. Under the hood, this uses the [`gradio-client`](https://pypi.org/project/gradio-client/) library to call the Space.

For instance, let's import the [FLUX.1-schnell](https://huggingface.co/black-forest-labs/FLUX.1-schnell) Space from the Hub and use it to generate an image.

```python
from smolagents import Tool

image_generation_tool = Tool.from_space(
    "black-forest-labs/FLUX.1-schnell",
    name="image_generator",
    description="Generate an image from a prompt"
)

image_generation_tool("A sunny beach")
```
And voilà, here's your image! 🏖️

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/sunny_beach.webp">

Then you can use this tool just like any other tool. For example, let's improve the prompt `a rabbit wearing a space suit` and generate an image of it. This example also shows how you can pass additional arguments to the agent.

```python
from smolagents import CodeAgent, InferenceClientModel

model = InferenceClientModel(model_id="Qwen/Qwen3-Next-80B-A3B-Thinking")
agent = CodeAgent(tools=[image_generation_tool], model=model)

agent.run(
    "Improve this prompt, then generate an image of it.", additional_args={'user_prompt': 'A rabbit wearing a space suit'}
)
```

```text
=== Agent thoughts:
improved_prompt could be "A bright blue space suit wearing rabbit, on the surface of the moon, under a bright orange sunset, with the Earth visible in the background"

Now that I have improved the prompt, I can use the image generator tool to generate an image based on this prompt.
>>> Agent is executing the code below:
image = image_generator(prompt="A bright blue space suit wearing rabbit, on the surface of the moon, under a bright orange sunset, with the Earth visible in the background")
final_answer(image)
```

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit_spacesuit_flux.webp">

How cool is this? 🤩

### Use LangChain tools

We love LangChain and think it has a very compelling suite of tools.
To import a tool from LangChain, use the `from_langchain()` method.

Here is how you can use it to recreate the intro's search result using a LangChain web search tool.
This tool needs `pip install langchain google-search-results -q` to work properly.
```python
from langchain.agents import load_tools
from smolagents import CodeAgent, Tool

search_tool = Tool.from_langchain(load_tools(["serpapi"])[0])

agent = CodeAgent(tools=[search_tool], model=model)

agent.run("How many more blocks (also denoted as layers) are in BERT base encoder compared to the encoder from the architecture proposed in Attention is All You Need?")
```

### Manage your agent's toolbox

You can manage an agent's toolbox by adding or replacing a tool in the `agent.tools` attribute, since it is a standard dictionary.

Let's add the `model_download_tool` to an existing agent initialized with only the default toolbox.

```python
from smolagents import InferenceClientModel

model = InferenceClientModel(model_id="Qwen/Qwen3-Next-80B-A3B-Thinking")

agent = CodeAgent(tools=[], model=model, add_base_tools=True)
agent.tools[model_download_tool.name] = model_download_tool
```
Now we can leverage the new tool:

```python
agent.run(
    "Can you give me the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub but reverse the letters?"
)
```


> [!TIP]
> Beware of adding too many tools to an agent: this can overwhelm weaker LLM engines.


### Use a collection of tools

You can leverage tool collections by using [ToolCollection](/docs/smolagents/main/en/reference/tools#smolagents.ToolCollection). It supports loading either a collection from the Hub or tools from an MCP server.


#### Tool Collection from any MCP server

Leverage tools from the hundreds of MCP servers available on [glama.ai](https://glama.ai/mcp/servers) or [smithery.ai](https://smithery.ai/).

MCP server tools can be loaded with [ToolCollection.from_mcp()](/docs/smolagents/main/en/reference/tools#smolagents.ToolCollection.from_mcp).

> [!WARNING]
> **Security Warning:** Always verify the source and integrity of any MCP server before connecting to it, especially for production environments.
> Using MCP servers comes with security risks:
> - **Trust is essential:** Only use MCP servers from trusted sources. Malicious servers can execute harmful code on your machine.
> - **Stdio-based MCP servers** will always execute code on your machine (that's their intended functionality).
> - **Streamable HTTP-based MCP servers:** While remote MCP servers will not execute code on your machine, still proceed with caution.

For stdio-based MCP servers, pass the server parameters as an instance of `mcp.StdioServerParameters`:
```py
from smolagents import ToolCollection, CodeAgent
from mcp import StdioServerParameters
import os

server_parameters = StdioServerParameters(
    command="uvx",
    args=["--quiet", "pubmedmcp@0.1.3"],
    env={"UV_PYTHON": "3.12", **os.environ},
)

with ToolCollection.from_mcp(server_parameters, trust_remote_code=True) as tool_collection:
    agent = CodeAgent(tools=[*tool_collection.tools], model=model, add_base_tools=True)
    agent.run("Please find a remedy for hangover.")
```

To enable structured output support with ToolCollection, add the `structured_output=True` parameter:
```py
with ToolCollection.from_mcp(server_parameters, trust_remote_code=True, structured_output=True) as tool_collection:
    agent = CodeAgent(tools=[*tool_collection.tools], model=model, add_base_tools=True)
    agent.run("Please find a remedy for hangover.")
```

For Streamable HTTP-based MCP servers, simply pass a dict with parameters to `mcp.client.streamable_http.streamablehttp_client` and add the key `transport` with the value `"streamable-http"`:
```py
from smolagents import ToolCollection, CodeAgent

with ToolCollection.from_mcp({"url": "http://127.0.0.1:8000/mcp", "transport": "streamable-http"}, trust_remote_code=True) as tool_collection:
    agent = CodeAgent(tools=[*tool_collection.tools], model=model, add_base_tools=True)
    agent.run("Please find a remedy for hangover.")
```

#### Tool Collection from a collection in the Hub

Load a collection with the slug of the collection you want to use.
Then pass its tools as a list to initialize your agent, and start using them!

```py
from smolagents import ToolCollection, CodeAgent

image_tool_collection = ToolCollection.from_hub(
    collection_slug="huggingface-tools/diffusion-tools-6630bb19a942c2306a2cdb6f",
    token="<YOUR_HUGGINGFACEHUB_API_TOKEN>"
)
agent = CodeAgent(tools=[*image_tool_collection.tools], model=model, add_base_tools=True)

agent.run("Please draw me a picture of rivers and lakes.")
```

To speed up startup, tools are loaded only when called by the agent.
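This deferred-setup pattern can be sketched in plain Python (illustrative only; smolagents' actual mechanism may differ): each tool is wrapped in a proxy whose construction cost is only paid on the first call.

```python
class LazyTool:
    """Defer expensive tool setup until the tool is first called."""
    def __init__(self, factory):
        self._factory = factory
        self._tool = None

    def __call__(self, *args, **kwargs):
        if self._tool is None:           # first call: actually build the tool
            self._tool = self._factory()
        return self._tool(*args, **kwargs)

loads = []

def build_echo_tool():
    loads.append("loaded")               # track when loading really happens
    return lambda text: f"echo: {text}"

tool = LazyTool(build_echo_tool)
print(len(loads))   # 0: nothing loaded yet
print(tool("hi"))   # echo: hi  (loading happened on this first call)
```

This way an agent can be handed a large collection without paying the setup cost for tools it never uses.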



<EditOnGithub source="https://github.com/huggingface/smolagents/blob/main/docs/source/en/tutorials/tools.md" />

### Inspecting runs with OpenTelemetry
https://huggingface.co/docs/smolagents/main/tutorials/inspect_runs.md

# Inspecting runs with OpenTelemetry


> [!TIP]
> If you're new to building agents, make sure to first read the [intro to agents](../conceptual_guides/intro_agents) and the [guided tour of smolagents](../guided_tour).

## Why log your agent runs?

Agent runs are complicated to debug.

Validating that a run went properly is hard, since agent workflows are [unpredictable by design](../conceptual_guides/intro_agents) (if they were predictable, you'd just be using good old code). 

And inspecting a run is hard as well: multi-step agents tend to quickly fill a console with logs, and most of the errors are just "LLM dumb" kind of errors, from which the LLM auto-corrects in the next step by writing better code or tool calls.

So using instrumentation to record agent runs is necessary in production for later inspection and monitoring!

We've adopted the [OpenTelemetry](https://opentelemetry.io/) standard for instrumenting agent runs.

This means that you can just run some instrumentation code, then run your agents normally, and everything gets logged into your platform. Below are some examples of how to do this with different OpenTelemetry backends.

Here's how it looks on the platform:

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolagents/inspect_run_phoenix.gif"/>
</div>


## Setting up telemetry with Arize AI Phoenix
First install the required packages. Here we install [Phoenix by Arize AI](https://github.com/Arize-ai/phoenix) because it's a good solution for collecting and inspecting logs, but there are other OpenTelemetry-compatible platforms that you could use for this collection and inspection part.

```shell
pip install 'smolagents[telemetry,toolkit]'
```

Then run the collector in the background.

```shell
python -m phoenix.server.main serve
```

Finally, set up `SmolagentsInstrumentor` to trace your agents and send the traces to Phoenix's default endpoint.

```python
from phoenix.otel import register
from openinference.instrumentation.smolagents import SmolagentsInstrumentor

register()
SmolagentsInstrumentor().instrument()
```
Then you can run your agents!

```py
from smolagents import (
    CodeAgent,
    ToolCallingAgent,
    WebSearchTool,
    VisitWebpageTool,
    InferenceClientModel,
)

model = InferenceClientModel()

search_agent = ToolCallingAgent(
    tools=[WebSearchTool(), VisitWebpageTool()],
    model=model,
    name="search_agent",
    description="This is an agent that can do web search.",
)

manager_agent = CodeAgent(
    tools=[],
    model=model,
    managed_agents=[search_agent],
)
manager_agent.run(
    "If the US keeps its 2024 growth rate, how many years will it take for the GDP to double?"
)
```
Voilà!
You can then navigate to `http://0.0.0.0:6006/projects/` to inspect your run!

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolagents/inspect_run_phoenix.png">

You can see that the CodeAgent called its managed ToolCallingAgent (by the way, the managed agent could have been a CodeAgent as well) to ask it to run the web search for the U.S. 2024 growth rate. Then the managed agent returned its report and the manager agent acted upon it to calculate the economy doubling time! Sweet, isn't it?

## Setting up telemetry with 🪢 Langfuse

This part shows how to monitor and debug your Hugging Face **smolagents** with **Langfuse** using the `SmolagentsInstrumentor`.

> **What is Langfuse?** [Langfuse](https://langfuse.com) is an open-source platform for LLM engineering. It provides tracing and monitoring capabilities for AI agents, helping developers debug, analyze, and optimize their products. Langfuse integrates with various tools and frameworks via native integrations, OpenTelemetry, and SDKs.

### Step 1: Install Dependencies

```python
%pip install langfuse 'smolagents[telemetry]' openinference-instrumentation-smolagents
```

### Step 2: Set Up Environment Variables

Set your Langfuse API keys and configure the OpenTelemetry endpoint to send traces to Langfuse. Get your Langfuse API keys by signing up for [Langfuse Cloud](https://cloud.langfuse.com) or [self-hosting Langfuse](https://langfuse.com/self-hosting).

Also, add your [Hugging Face token](https://huggingface.co/settings/tokens) (`HF_TOKEN`) as an environment variable.

```python
import os
# Get keys for your project from the project settings page: https://cloud.langfuse.com
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..." 
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..." 
os.environ["LANGFUSE_HOST"] = "https://cloud.langfuse.com" # 🇪🇺 EU region
# os.environ["LANGFUSE_HOST"] = "https://us.cloud.langfuse.com" # 🇺🇸 US region
 
# your Hugging Face token
os.environ["HF_TOKEN"] = "hf_..."
```

With the environment variables set, we can now initialize the Langfuse client. `get_client()` initializes the Langfuse client using the credentials provided in the environment variables.

```python
from langfuse import get_client
 
langfuse = get_client()
 
# Verify connection
if langfuse.auth_check():
    print("Langfuse client is authenticated and ready!")
else:
    print("Authentication failed. Please check your credentials and host.")
```

### Step 3: Initialize the `SmolagentsInstrumentor`

Initialize the `SmolagentsInstrumentor` before your application code. 


```python
from openinference.instrumentation.smolagents import SmolagentsInstrumentor
 
SmolagentsInstrumentor().instrument()
```

### Step 4: Run your smolagent

```python
from smolagents import (
    CodeAgent,
    ToolCallingAgent,
    WebSearchTool,
    VisitWebpageTool,
    InferenceClientModel,
)

model = InferenceClientModel(
    model_id="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
)

search_agent = ToolCallingAgent(
    tools=[WebSearchTool(), VisitWebpageTool()],
    model=model,
    name="search_agent",
    description="This is an agent that can do web search.",
)

manager_agent = CodeAgent(
    tools=[],
    model=model,
    managed_agents=[search_agent],
)
manager_agent.run(
    "How can Langfuse be used to monitor and improve the reasoning and decision-making of smolagents when they execute multi-step tasks, like dynamically adjusting a recipe based on user feedback or available ingredients?"
)
```

### Step 5: View Traces in Langfuse

After running the agent, you can view the traces generated by your smolagents application in [Langfuse](https://cloud.langfuse.com). You should see detailed steps of the LLM interactions, which can help you debug and optimize your AI agent.

![smolagents example trace](https://langfuse.com/images/cookbook/integration-smolagents/smolagent_example_trace.png)

_[Public example trace in Langfuse](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/ce5160f9bfd5a6cd63b07d2bfcec6f54?timestamp=2025-02-11T09%3A25%3A45.163Z&display=details)_


<EditOnGithub source="https://github.com/huggingface/smolagents/blob/main/docs/source/en/tutorials/inspect_runs.md" />

### Secure code execution
https://huggingface.co/docs/smolagents/main/tutorials/secure_code_execution.md

# Secure code execution


> [!TIP]
> If you're new to building agents, make sure to first read the [intro to agents](../conceptual_guides/intro_agents) and the [guided tour of smolagents](../guided_tour).

### Code agents

[Multiple](https://huggingface.co/papers/2402.01030) [research](https://huggingface.co/papers/2411.01747) [papers](https://huggingface.co/papers/2401.00812) have shown that having the LLM write its actions (the tool calls) in code is much better than the current standard format for tool calling, which across the industry consists of different shades of "writing actions as a JSON of tool names and arguments to use".

Why is code better? Well, because we crafted our programming languages specifically to be great at expressing actions performed by a computer. If JSON snippets were a better way, this package would have been written in JSON snippets and the devil would be laughing at us.

Code is just a better way to express actions on a computer. It has better:
- **Composability:** could you nest JSON actions within each other, or define a set of JSON actions to re-use later, the same way you could just define a Python function?
- **Object management:** how do you store the output of an action like `generate_image` in JSON?
- **Generality:** code is built to express simply anything you can have a computer do.
- **Representation in LLM training corpus:** why not leverage the fact that plenty of quality code actions are already included in LLM training corpora?

This is illustrated in the figure below, taken from [Executable Code Actions Elicit Better LLM Agents](https://huggingface.co/papers/2402.01030).

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/code_vs_json_actions.png">

This is why we put the emphasis on code agents, in this case Python agents, which meant investing extra effort in building secure Python interpreters.

### Local code execution??

By default, the `CodeAgent` runs LLM-generated code in your environment.

This is inherently risky: LLM-generated code could be harmful to your environment.

Malicious code execution can occur in several ways:
- **Plain LLM error:** LLMs are still far from perfect and may unintentionally generate harmful commands while attempting to be helpful. While this risk is low, instances have been observed where an LLM attempted to execute potentially dangerous code.  
- **Supply chain attack:** Running an untrusted or compromised LLM could expose a system to harmful code generation. While this risk is extremely low when using well-known models on secure inference infrastructure, it remains a theoretical possibility.  
- **Prompt injection:** An agent browsing the web could land on a malicious website that contains harmful instructions, thus injecting an attack into the agent's memory.
- **Exploitation of publicly accessible agents:** Agents exposed to the public can be misused by malicious actors to execute harmful code. Attackers may craft adversarial inputs to exploit the agent's execution capabilities, leading to unintended consequences.

Once malicious code is executed, whether accidentally or intentionally, it can damage the file system, exploit local or cloud-based resources, abuse API services, and even compromise network security.

One could argue that on the [spectrum of agency](../conceptual_guides/intro_agents), code agents give much higher agency to the LLM on your system than other less agentic setups: this goes hand-in-hand with higher risk.

So you need to be very mindful of security.

To improve safety, we propose a range of measures offering elevated levels of security at a higher setup cost.

We advise you to keep in mind that no solution will be 100% safe.

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolagents/code_execution_safety_diagram.png">

### Our local Python executor

To add a first layer of security, code execution in `smolagents` is not performed by the vanilla Python interpreter.
We have re-built a more secure `LocalPythonExecutor` from the ground up.

To be precise, this interpreter works by loading the Abstract Syntax Tree (AST) of your code and executing it operation by operation, making sure to always follow certain rules:
- By default, imports are disallowed unless they have been explicitly added to an authorization list by the user.
- Furthermore, access to submodules is disabled by default: each must be explicitly authorized in the import list as well. Alternatively, you can pass for instance `numpy.*` to allow both `numpy` and all its subpackages, like `numpy.random` or `numpy.a.b`.
   - Note that some seemingly innocuous packages like `random` can give access to potentially harmful submodules, as in `random._os`.
- The total count of elementary operations processed is capped to prevent infinite loops and resource bloating.
- Any operation that has not been explicitly defined in our custom interpreter will raise an error.
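The import-allowlist rule above can be sketched in a few lines with Python's built-in `ast` module. This is a simplified illustration of the approach, not smolagents' actual implementation (the `AUTHORIZED_IMPORTS` and `InterpreterError` names here are hypothetical):

```python
import ast

# Hypothetical allowlist; in smolagents this is provided by the user.
AUTHORIZED_IMPORTS = {"math", "statistics"}

class InterpreterError(Exception):
    pass

def check_imports(code: str) -> None:
    """Parse the code to an AST and reject any import outside the allowlist."""
    for node in ast.walk(ast.parse(code)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [node.module or ""]
        else:
            continue
        for name in names:
            # Authorizing "pkg" does not authorize "pkg.submodule":
            # each submodule must be allowed explicitly.
            if name not in AUTHORIZED_IMPORTS:
                raise InterpreterError(f"Import of {name} is not allowed")

check_imports("import math\nx = math.sqrt(2)")  # passes silently
```

The real interpreter goes further: it executes the tree node by node, counts elementary operations against a cap, and only evaluates operation types it has explicitly defined.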

You could try these safeguards as follows:

```py
from smolagents.local_python_executor import LocalPythonExecutor

# Set up custom executor, authorize package "numpy"
custom_executor = LocalPythonExecutor(["numpy"])

# Utility for pretty printing errors
def run_capture_exception(command: str):
    try:
        custom_executor(command)
    except Exception as e:
        print("ERROR:\n", e)

# Undefined commands just do not work
harmful_command="!echo Bad command"
run_capture_exception(harmful_command)
# >>> ERROR: invalid syntax (<unknown>, line 1)


# Imports like os will not be performed unless explicitly added to `additional_authorized_imports`
harmful_command="import os; exit_code = os.system('echo Bad command')"
run_capture_exception(harmful_command)
# >>> ERROR: Code execution failed at line 'import os' due to: InterpreterError: Import of os is not allowed. Authorized imports are: ['statistics', 'numpy', 'itertools', 'time', 'queue', 'collections', 'math', 'random', 're', 'datetime', 'stat', 'unicodedata']

# Even in authorized imports, potentially harmful packages will not be imported
harmful_command="import random; random._os.system('echo Bad command')"
run_capture_exception(harmful_command)
# >>> ERROR: Code execution failed at line 'random._os.system('echo Bad command')' due to: InterpreterError: Forbidden access to module: os

# Infinite loops are interrupted after N operations
harmful_command="""
while True:
    pass
"""
run_capture_exception(harmful_command)
# >>> ERROR: Code execution failed at line 'while True: pass' due to: InterpreterError: Maximum number of 1000000 iterations in While loop exceeded
```

These safeguards make our interpreter safer.
We have used it on a diversity of use cases without ever observing any damage to the environment.

> [!WARNING]
> It's important to understand that no local python sandbox can ever be completely secure. While our interpreter provides significant safety improvements over the standard Python interpreter, it is still possible for a determined attacker or a fine-tuned malicious LLM to find vulnerabilities and potentially harm your environment. 
> 
> For example, if you've allowed packages like `Pillow` to process images, the LLM could generate code that creates thousands of large image files to fill your hard drive. Other advanced escape techniques might exploit deeper vulnerabilities in authorized packages.
> 
> Running LLM-generated code in your local environment always carries some inherent risk. The only way to run LLM-generated code with truly robust security isolation is to use remote execution options like E2B or Docker, as detailed below.

The risk of a malicious attack is low when using well-known LLMs from trusted inference providers, but it is not zero.
For high-security applications or when using less trusted models, you should consider using a remote execution sandbox.

## Sandbox approaches for secure code execution

When working with AI agents that execute code, security is paramount. There are two main approaches to sandboxing code execution in smolagents, each with different security properties and capabilities:


![Sandbox approaches comparison](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolagents/sandboxed_execution.png)

1. **Running individual code snippets in a sandbox**: This approach (left side of diagram) only executes the agent-generated Python code snippets in a sandbox while keeping the rest of the agentic system in your local environment. It's simpler to set up using `executor_type="e2b"`, `executor_type="modal"`, or
`executor_type="docker"`, but it doesn't support multi-agents and still requires passing state data between your environment and the sandbox.

2. **Running the entire agentic system in a sandbox**: This approach (right side of diagram) runs the entire agentic system, including the agent, model, and tools, within a sandbox environment. This provides better isolation but requires more manual setup and may require passing sensitive credentials (like API keys) to the sandbox environment.

This guide describes how to set up and use both types of sandbox approaches for your agent applications.

### E2B setup

#### Installation

1. Create an E2B account at [e2b.dev](https://e2b.dev)
2. Install the required packages:
```bash
pip install 'smolagents[e2b]'
```

#### Running your agent in E2B: quick start

We provide a simple way to use an E2B Sandbox: simply add `executor_type="e2b"` to the agent initialization, as follows:

```py
from smolagents import InferenceClientModel, CodeAgent

with CodeAgent(model=InferenceClientModel(), tools=[], executor_type="e2b") as agent:
    agent.run("Can you give me the 100th Fibonacci number?")
```

> [!TIP]
> Using the agent as a context manager (with the `with` statement) ensures that the E2B sandbox is cleaned up immediately after the agent completes its task.
> Alternatively, you can manually call the agent's `cleanup()` method.

This solution sends the agent state to the server at the start of each `agent.run()`.
Then the model is called from the local environment, but the generated code is sent to the sandbox for execution, and only the output is returned.

This is illustrated in the figure below.

<p align="center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolagents/sandboxed_execution.png" alt="sandboxed code execution" width=60% max-width=500px>
</p>

However, any call to a [managed agent](../examples/multiagents) requires model calls, and since we do not transfer secrets to the remote sandbox, those model calls would lack credentials.
Hence this solution does not work (yet) with more complicated multi-agent setups.

#### Running your agent in E2B: multi-agents

To use multi-agents in an E2B sandbox, you need to run your agents completely from within E2B.

Here is how to do it:

```python
from e2b_code_interpreter import Sandbox
import os

# Create the sandbox
sandbox = Sandbox()

# Install required packages
sandbox.commands.run("pip install smolagents")

def run_code_raise_errors(sandbox, code: str, verbose: bool = False) -> str:
    execution = sandbox.run_code(
        code,
        envs={'HF_TOKEN': os.getenv('HF_TOKEN')}
    )
    if execution.error:
        execution_logs = "\n".join([str(log) for log in execution.logs.stdout])
        logs = execution_logs
        logs += execution.error.traceback
        raise ValueError(logs)
    return "\n".join([str(log) for log in execution.logs.stdout])

# Define your agent application
agent_code = """
import os
from smolagents import CodeAgent, InferenceClientModel

# Initialize the agents
agent = CodeAgent(
    model=InferenceClientModel(token=os.getenv("HF_TOKEN"), provider="together"),
    tools=[],
    name="coder_agent",
    description="This agent takes care of your difficult algorithmic problems using code."
)

manager_agent = CodeAgent(
    model=InferenceClientModel(token=os.getenv("HF_TOKEN"), provider="together"),
    tools=[],
    managed_agents=[agent],
)

# Run the agent
response = manager_agent.run("What's the 20th Fibonacci number?")
print(response)
"""

# Run the agent code in the sandbox
execution_logs = run_code_raise_errors(sandbox, agent_code)
print(execution_logs)
```

### Modal setup

#### Installation

1. Create a Modal account at [modal.com](https://modal.com/signup)
2. Install the required packages:
```bash
pip install 'smolagents[modal]'
```

#### Running your agent in Modal: quick start

We provide a simple way to use a Modal Sandbox: simply add `executor_type="modal"` to the agent initialization, as follows:

```py
from smolagents import InferenceClientModel, CodeAgent

with CodeAgent(model=InferenceClientModel(), tools=[], executor_type="modal") as agent:
    agent.run("What is the 42nd Fibonacci number?")
```

> [!TIP]
> Using the agent as a context manager (with the `with` statement) ensures that the Modal sandbox is cleaned up immediately after the agent completes its task.
> Alternatively, you can manually call the agent's `cleanup()` method.

The agent state and the code generated by the model are sent to a Modal sandbox, which executes the code securely.

### Docker setup

#### Installation

1. [Install Docker on your system](https://docs.docker.com/get-started/get-docker/)
2. Install the required packages:
```bash
pip install 'smolagents[docker]'
```

#### Running your agent in Docker: quick start

Similar to the E2B Sandbox above, to quickly get started with Docker, simply add `executor_type="docker"` to the agent initialization, like:

```py
from smolagents import InferenceClientModel, CodeAgent

with CodeAgent(model=InferenceClientModel(), tools=[], executor_type="docker") as agent:
    agent.run("Can you give me the 100th Fibonacci number?")
```

> [!TIP]
> Using the agent as a context manager (with the `with` statement) ensures that the Docker container is cleaned up immediately after the agent completes its task.
> Alternatively, you can manually call the agent's `cleanup()` method.

#### Advanced docker usage

If you want to run multi-agent systems in Docker, you'll need to set up a custom interpreter in a sandbox.

Here is how to set up a Dockerfile:

```dockerfile
FROM python:3.10-bullseye

# Install build dependencies
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        build-essential \
        python3-dev && \
    pip install --no-cache-dir --upgrade pip && \
    pip install --no-cache-dir smolagents && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# Set working directory
WORKDIR /app

# Run with limited privileges
USER nobody

# Default command
CMD ["python", "-c", "print('Container ready')"]
```

Create a sandbox manager to run code:

```python
import docker
import os
from typing import Optional

class DockerSandbox:
    def __init__(self):
        self.client = docker.from_env()
        self.container = None

    def create_container(self):
        try:
            image, build_logs = self.client.images.build(
                path=".",
                tag="agent-sandbox",
                rm=True,
                forcerm=True,
                buildargs={},
            )
        except docker.errors.BuildError as e:
            print("Build error logs:")
            for log in e.build_log:
                if 'stream' in log:
                    print(log['stream'].strip())
            raise

        # Create container with security constraints and proper logging
        self.container = self.client.containers.run(
            "agent-sandbox",
            command="tail -f /dev/null",  # Keep container running
            detach=True,
            tty=True,
            mem_limit="512m",
            cpu_quota=50000,
            pids_limit=100,
            security_opt=["no-new-privileges"],
            cap_drop=["ALL"],
            environment={
                "HF_TOKEN": os.getenv("HF_TOKEN")
            },
        )

    def run_code(self, code: str) -> Optional[str]:
        if not self.container:
            self.create_container()

        # Execute code in container
        exec_result = self.container.exec_run(
            cmd=["python", "-c", code],
            user="nobody"
        )

        # Collect all output
        return exec_result.output.decode() if exec_result.output else None


    def cleanup(self):
        if self.container:
            try:
                self.container.stop()
            except docker.errors.NotFound:
                # Container already removed, this is expected
                pass
            except Exception as e:
                print(f"Error during cleanup: {e}")
            finally:
                self.container = None  # Clear the reference

# Example usage:
sandbox = DockerSandbox()

try:
    # Define your agent code
    agent_code = """
import os
from smolagents import CodeAgent, InferenceClientModel

# Initialize the agent
agent = CodeAgent(
    model=InferenceClientModel(token=os.getenv("HF_TOKEN"), provider="together"),
    tools=[]
)

# Run the agent
response = agent.run("What's the 20th Fibonacci number?")
print(response)
"""

    # Run the code in the sandbox
    output = sandbox.run_code(agent_code)
    print(output)

finally:
    sandbox.cleanup()
```

### WebAssembly setup

WebAssembly (Wasm) is a binary instruction format that allows code to be run in a safe, sandboxed environment.
It is designed to be fast, efficient, and secure, making it an excellent choice for executing potentially untrusted code.

The `WasmExecutor` uses [Pyodide](https://pyodide.org/) and [Deno](https://docs.deno.com/).

#### Installation

1. [Install Deno on your system](https://docs.deno.com/runtime/getting_started/installation/)

#### Running your agent in WebAssembly: quick start

Simply pass `executor_type="wasm"` to the agent initialization, like:
```py
from smolagents import InferenceClientModel, CodeAgent

agent = CodeAgent(model=InferenceClientModel(), tools=[], executor_type="wasm")

agent.run("Can you give me the 100th Fibonacci number?")
```

### Best practices for sandboxes

These key practices apply to both E2B and Docker sandboxes:

- Resource management
  - Set memory and CPU limits
  - Implement execution timeouts
  - Monitor resource usage
- Security
  - Run with minimal privileges
  - Disable unnecessary network access
  - Use environment variables for secrets
- Environment
  - Keep dependencies minimal
  - Use fixed package versions
  - If you use base images, update them regularly
- Cleanup
  - Always ensure proper cleanup of resources, especially for Docker containers, to avoid dangling containers eating up resources.

✨ By following these practices and implementing proper cleanup procedures, you can ensure your agent runs safely and efficiently in a sandboxed environment.

## Comparing security approaches

As illustrated in the diagram earlier, both sandboxing approaches have different security implications:

### Approach 1: Running just the code snippets in a sandbox
- **Pros**: 
  - Easier to set up with a simple parameter (`executor_type="e2b"` or `executor_type="docker"`)
  - No need to transfer API keys to the sandbox
  - Better protection for your local environment
- **Cons**:
  - Doesn't support multi-agents (managed agents)
  - Still requires transferring state between your environment and the sandbox
  - Limited to specific code execution

### Approach 2: Running the entire agentic system in a sandbox
- **Pros**:
  - Supports multi-agents
  - Complete isolation of the entire agent system
  - More flexible for complex agent architectures
- **Cons**:
  - Requires more manual setup
  - May require transferring sensitive API keys to the sandbox
  - Potentially higher latency due to more complex operations

Choose the approach that best balances your security needs with your application's requirements. For most applications with simpler agent architectures, Approach 1 provides a good balance of security and ease of use. For more complex multi-agent systems where you need full isolation, Approach 2, while more involved to set up, offers better security guarantees.


<EditOnGithub source="https://github.com/huggingface/smolagents/blob/main/docs/source/en/tutorials/secure_code_execution.md" />
