gpt-oss not supporting OpenAI Agents SDK tools
gpt-oss currently does not support the built-in tools that the OpenAI Agents SDK provides for OpenAI's closed-source models. Instead, custom tools were built for it, as with other open-source agents: https://github.com/openai/gpt-oss/tree/main/gpt_oss/tools. I would really love for the SDK to support these tools they have already built.
Can you clarify what you are looking for specifically?
For instance, in the following example from openai-agents-python (https://github.com/openai/openai-agents-python/blob/main/examples/research_bot/agents/search_agent.py), it is convenient to instantiate an agent with the built-in WebSearchTool implemented in the Agents SDK. Since gpt-oss is clearly capable of performing web searches, I'd love for it to be able to use WebSearchTool within this SDK. As a minimal example, I adapted code from https://cookbook.openai.com/articles/gpt-oss/run-vllm. From my understanding, as of right now, this will not work:
```python
import asyncio

from openai import AsyncOpenAI
from agents import Agent, Runner, OpenAIResponsesModel, set_tracing_disabled, WebSearchTool

set_tracing_disabled(True)


async def main():
    agent = Agent(
        name="Assistant",
        instructions="You only respond in haikus.",
        model=OpenAIResponsesModel(
            model="openai/gpt-oss-120b",
            openai_client=AsyncOpenAI(
                base_url="http://localhost:8000/v1",
                api_key="EMPTY",
            ),
        ),
        tools=[WebSearchTool()],
    )

    result = await Runner.run(agent, "What's the weather in Tokyo?")
    print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())
```
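For context on why this fails: WebSearchTool is a hosted tool, meaning it is serialized into the Responses API request as a server-side tool spec rather than registered as a local Python function. A rough sketch of the payload shape involved (assumed from the Responses API tool format; exact fields sent by the SDK may differ):

```python
# Hosted tools are serialized into the Responses API request body as plain
# tool specs, not registered as client-side Python functions.
web_search_spec = {"type": "web_search_preview"}

request_body = {
    "model": "openai/gpt-oss-120b",
    "input": "What's the weather in Tokyo?",
    "tools": [web_search_spec],
}

# A backend that does not implement web_search_preview will reject or
# ignore this tool entry, which is why the example above fails.
print(request_body["tools"])
```

So the question is whether the backend (here, a local vllm server) executes the tool, not whether the client can describe it.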
The web search tool is a paid offering from OpenAI (https://platform.openai.com/docs/pricing#built-in-tools), so I don't think this would work without an API key.
I have the same problem. vllm supports these built-in tools (see my article), but the Agents SDK doesn't seem to allow them?
Nevermind, I solved my own problem. Here's how I did it, assuming vllm with --tool-server running the backend, as in my article:
"""
Self-contained example showing how to use OpenAI Agents with built-in tools.
This demonstrates the workaround for using code_interpreter and web_search_preview
with the OpenAI Agents SDK when connecting to custom backends (like vllm).
The key insight: CodeInterpreterTool accepts a tool_config parameter that gets
passed through to the backend as-is. We leverage this to inject custom tool
configurations for both code_interpreter and web_search_preview.
"""
import asyncio
from typing import Dict, Any, Optional
from openai import AsyncOpenAI
from agents import (
Agent,
Runner,
OpenAIResponsesModel,
CodeInterpreterTool,
)
class HostedWebSearchPreviewTool(CodeInterpreterTool):
"""
Custom web search tool that passes through the exact configuration
required by backends that support web_search_preview.
This subclasses CodeInterpreterTool because it has the infrastructure
to pass through arbitrary tool_config dicts to the backend.
"""
def __init__(
self,
*,
search_context_size: Optional[str] = None,
user_location: Optional[Dict[str, Any]] = None,
) -> None:
# Build the exact tool config structure
tool_config: Dict[str, Any] = {"type": "web_search_preview"}
# Add optional parameters if provided
if search_context_size:
tool_config["search_context_size"] = search_context_size
if user_location:
tool_config["user_location"] = user_location
# Pass to parent - SDK will send this config directly to the backend
super().__init__(tool_config=tool_config) # type: ignore
@property
def name(self) -> str:
return "web_search_preview"
def build_hosted_tools() -> list:
"""
Build the hosted tools with exact structure required by the backend.
Returns a list containing:
1. CodeInterpreterTool with container auto-mode
2. Web search preview tool
These tools are sent to the backend exactly as:
[
{
"type": "code_interpreter",
"container": {"type": "auto"}
},
{
"type": "web_search_preview"
}
]
"""
return [
# Code interpreter with auto container
CodeInterpreterTool(
tool_config={
"type": "code_interpreter",
"container": {"type": "auto"}
} # type: ignore
),
# Web search preview
HostedWebSearchPreviewTool(),
]
async def main():
"""
Example usage of an agent with built-in code_interpreter and web_search_preview tools.
This demonstrates using a local vllm backend running the gpt-oss-120b model
with support for OpenAI's Responses API including built-in tools.
"""
# Build the hosted tools
hosted_tools = build_hosted_tools()
# Create an agent with the hosted tools
agent = Agent(
name="Assistant",
instructions="""
You are a helpful AI assistant with access to:
- code_interpreter: Execute Python code safely in a sandboxed container
- web_search_preview: Browse and search the web, visit URLs, extract content
Use these tools directly to help users with:
- Calculations, data analysis, file operations -> use code_interpreter
- Web browsing, URL visits, web searches -> use web_search_preview
""",
model=OpenAIResponsesModel(
model="openai/gpt-oss-120b",
openai_client=AsyncOpenAI(
api_key="NOT_NEEDED",
base_url="http://localhost:8000/v1",
),
),
tools=hosted_tools,
)
# Example 1: Use code interpreter
print("=" * 80)
print("Example 1: Using code_interpreter")
print("=" * 80)
result1 = await Runner.run(
agent,
"Calculate the first 10 Fibonacci numbers and plot them."
)
print(result1.final_output)
# Example 2: Use web search
print("\n" + "=" * 80)
print("Example 2: Using web_search_preview")
print("=" * 80)
result2 = await Runner.run(
agent,
"Search for the latest news about AI agents."
)
print(result2.final_output)
# Example 3: Combined usage
print("\n" + "=" * 80)
print("Example 3: Using both tools")
print("=" * 80)
result3 = await Runner.run(
agent,
"Find the current price of Bitcoin and calculate how much 5 BTC would be worth."
)
print(result3.final_output)
def main_sync():
"""Synchronous wrapper for the main function."""
asyncio.run(main())
if __name__ == "__main__":
# Run the examples
main_sync()
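As a sanity check that the backend accepts these tool configs independently of the Agents SDK, it can help to assemble the raw Responses request payload yourself; the two entries below mirror the list described in build_hosted_tools. A minimal sketch (assuming the same local vllm --tool-server setup; endpoint path and behavior depend on that server):

```python
import json

# The exact tools list the SDK workaround above ends up sending.
payload = {
    "model": "openai/gpt-oss-120b",
    "input": "Calculate the first 10 Fibonacci numbers.",
    "tools": [
        {"type": "code_interpreter", "container": {"type": "auto"}},
        {"type": "web_search_preview"},
    ],
}

# POST this to http://localhost:8000/v1/responses (e.g. with curl or httpx);
# if the server answers without a tool-validation error, the backend supports
# the tools and only the SDK-side plumbing was missing.
print(json.dumps(payload, indent=2))
```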
Output sample:
(venv) zmarty@zmarty-aorus:/git/agent-experiments$ python ./example_builtin_tools.py
================================================================================
Example 1: Using code_interpreter
================================================================================
OPENAI_API_KEY is not set, skipping trace export
**First 10 Fibonacci numbers**
| n (term) | Fibonacci number |
|----------|-----------------|
| 1 | 0 |
| 2 | 1 |
| 3 | 1 |
| 4 | 2 |
| 5 | 3 |
| 6 | 5 |
| 7 | 8 |
| 8 | 13 |
| 9 | 21 |
| 10 | 34 |
**Plot**
Below is a simple line chart that visualises these values (term = n on the horizontal axis, Fibonacci number on the vertical axis).
*(plot image)*
================================================================================
Example 2: Using web_search_preview
================================================================================
OPENAI_API_KEY is not set, skipping trace export
**What’s happening right now with AI agents (October 2025)**
| Date | Source | Headline / What it’s about | Key take‑aways for AI agents |
|------|--------|----------------------------|------------------------------|
| **Oct 19 2025** | Business Insider | *OpenAI co‑founder Andrej Karpathy says functional AI agents are still a decade away* | Karpathy argues current agents “don’t work” – they lack multimodal ability, continual learning and real‑world computer use. He predicts we need about ten more years before truly autonomous agents are viable【1†L29-L64】. |
| **Oct 16 2025** | IBM press release | *IBM rolls out three new AI agents on the Oracle Fusion Applications AI Agent Marketplace* | Agents built with Oracle AI Agent Studio automate inter‑company agreement reviews, sales‑order entry, and requisition‑to‑contract conversion. IBM also promises more HR and supply‑chain agents built on watsonx Orchestrate【3†L7-L34】【3†L49-L57】. |
| **Oct 2025 (ongoing)** | IBM Think (feature article) | *AI agents in 2025: expectations vs. reality* | The piece notes the hype (“2025 is the year of the agent”) while tempering expectations: many developers are exploring agents, but ROI, reliability and truly autonomous behavior are still uncertain【4†L22-L34】【5†L36-L51】. |
| **Oct 17 2025** | MarketingProfs (AI Update) | *Round‑up of AI‑agent related news* | • **OpenAI + Walmart** – ChatGPT now can browse product catalogs and complete checkout, turning it into a conversational shopping **agent**【6†L7-L13】. <br>• **Slack** – Slackbot is being upgraded to a personal AI assistant that can answer queries, summarise channels and manage calendars – another enterprise‑focused **agent**【6†L20-L27】. <br>• **Google Gemini Enterprise** – Google launches a platform with pre‑built **agents** for data analysis and custom AI assistants for corporate users【6†L32-L38】. <br>• **Anthropic “Skills” for Claude** – lets Claude load task‑specific instruction bundles, effectively acting as a reusable **agent** for tasks such as slide creation or spreadsheet analysis【6†L44-L49】. <br>• **Microsoft Copilot in Windows 11** – AI **agents** (Copilot Vision, Voice, Actions) can perform local PC tasks, reinforcing the “AI PC” concept【6†L55-L60】. |
| **Oct 2025** | Reuters (via MarketingProfs) | *Google’s Gemini Enterprise agents* | The article highlights that Gemini Enterprise ships with a set of **pre‑built agents** that can query internal documents, run analyses and integrate with Google Workspace, positioning it as a direct competitor to Microsoft Copilot【6†L32-L38】. |
### Themes emerging from the coverage
1. **Skepticism vs. hype** – High‑profile AI leaders (e.g., Karpathy) caution that truly autonomous, multimodal agents are still far off, while industry press repeatedly touts 2025 as “the year of the agent.”
2. **Enterprise‑focused agent marketplaces** – IBM & Oracle are creating curated marketplaces where companies can pick plug‑and‑play agents for specific business processes (sales, contracts, HR, supply‑chain).
3. **Consumer‑oriented agents** – OpenAI’s partnership with Walmart embeds an e‑commerce checkout **agent** inside ChatGPT, turning a chat model into a shopping assistant.
4. **Platform‑level agent ecosystems** – Google Gemini Enterprise and Microsoft Windows 11 Copilot are releasing suites of built‑in agents that can be customized or extended by developers.
5. **Tool‑centric agent enhancements** – Anthropic’s “Skills” let the Claude model load reusable instruction sets, effectively turning it into a modular **agent** for repeated business tasks.
### What this means now
- **Developers and product teams** should expect a growing selection of ready‑made agents for specific workflows, but they still need to manage limitations around reliability, data safety and the lack of “continuous learning.”
- **Business decision‑makers** can experiment with enterprise agents (e.g., IBM‑Oracle marketplace) without building everything from scratch, but should align expectations with the current state‑of‑the‑art, which many experts say is still mostly assistive rather than fully autonomous.
- **Consumers and marketers** will see AI agents moving directly into user‑facing experiences (shopping in ChatGPT, AI‑enhanced Slack workspaces), opening new channels for engagement and data collection but also raising privacy and compliance considerations.
*In short, AI agents are buzzing everywhere—from hype‑driven headlines to concrete enterprise product releases—but the consensus among insiders is that truly autonomous, general‑purpose agents are still years away.*
================================================================================
Example 3: Using both tools
================================================================================
OPENAI_API_KEY is not set, skipping trace export
The current Bitcoin price shown on CoinDesk is **$108,959.38** per BTC【2†L13-L15】.
So, the value of **5 BTC** is:
\[
5 \times \$108{,}959.38 \;=\; \$544{,}796.90
\]
**5 BTC ≈ $544,796.90**.
OPENAI_API_KEY is not set, skipping trace export