# MCP Course

## Docs

- [Webhook Listener](https://huggingface.co/learn/mcp-course/unit3_1/webhook-listener.md)
- [MCP Client](https://huggingface.co/learn/mcp-course/unit3_1/mcp-client.md)
- [Quiz 2: Pull Request Agent Integration](https://huggingface.co/learn/mcp-course/unit3_1/quiz2.md)
- [Quiz 1: MCP Server Implementation](https://huggingface.co/learn/mcp-course/unit3_1/quiz1.md)
- [Conclusion](https://huggingface.co/learn/mcp-course/unit3_1/conclusion.md)
- [Build a Pull Request Agent on the Hugging Face Hub](https://huggingface.co/learn/mcp-course/unit3_1/introduction.md)
- [Creating the MCP Server](https://huggingface.co/learn/mcp-course/unit3_1/creating-the-mcp-server.md)
- [Setting up the Project](https://huggingface.co/learn/mcp-course/unit3_1/setting-up-the-project.md)
- [Welcome to the 🤗 Model Context Protocol (MCP) Course](https://huggingface.co/learn/mcp-course/unit0/introduction.md)
- [Building the Gradio MCP Server](https://huggingface.co/learn/mcp-course/unit2/gradio-server.md)
- [Using MCP with Local and Open Source Models](https://huggingface.co/learn/mcp-course/unit2/continue-client.md)
- [Building MCP Clients](https://huggingface.co/learn/mcp-course/unit2/clients.md)
- [Building an End-to-End MCP Application](https://huggingface.co/learn/mcp-course/unit2/introduction.md)
- [Local Tiny Agents with AMD NPU and iGPU Acceleration](https://huggingface.co/learn/mcp-course/unit2/lemonade-server.md)
- [Gradio as an MCP Client](https://huggingface.co/learn/mcp-course/unit2/gradio-client.md)
- [Building Tiny Agents with MCP and the Hugging Face Hub](https://huggingface.co/learn/mcp-course/unit2/tiny-agents.md)
- [The Communication Protocol](https://huggingface.co/learn/mcp-course/unit1/communication-protocol.md)
- [MCP SDK](https://huggingface.co/learn/mcp-course/unit1/sdk.md)
- [Hugging Face MCP Server](https://huggingface.co/learn/mcp-course/unit1/hf-mcp-server.md)
- [MCP Clients](https://huggingface.co/learn/mcp-course/unit1/mcp-clients.md)
- [Quiz 2: MCP SDK](https://huggingface.co/learn/mcp-course/unit1/quiz2.md)
- [Gradio MCP Integration](https://huggingface.co/learn/mcp-course/unit1/gradio-mcp.md)
- [Unit1 recap](https://huggingface.co/learn/mcp-course/unit1/unit1-recap.md)
- [Quiz 1: MCP Fundamentals](https://huggingface.co/learn/mcp-course/unit1/quiz1.md)
- [Understanding MCP Capabilities](https://huggingface.co/learn/mcp-course/unit1/capabilities.md)
- [Key Concepts and Terminology](https://huggingface.co/learn/mcp-course/unit1/key-concepts.md)
- [Introduction to Model Context Protocol (MCP)](https://huggingface.co/learn/mcp-course/unit1/introduction.md)
- [Get your certificate!](https://huggingface.co/learn/mcp-course/unit1/certificate.md)
- [Architectural Components of MCP](https://huggingface.co/learn/mcp-course/unit1/architectural-components.md)
- [Unit 3 Solution Walkthrough: Building a Pull Request Agent with MCP](https://huggingface.co/learn/mcp-course/unit3/build-mcp-server-solution-walkthrough.md)
- [Module 1: Build MCP Server](https://huggingface.co/learn/mcp-course/unit3/build-mcp-server.md)
- [Module 3: Slack Notification](https://huggingface.co/learn/mcp-course/unit3/slack-notification.md)
- [Unit 3 Conclusion: The CodeCraft Studios Transformation](https://huggingface.co/learn/mcp-course/unit3/conclusion.md)
- [Advanced MCP Development: Building Custom Workflow Servers for Claude Code](https://huggingface.co/learn/mcp-course/unit3/introduction.md)
- [Get your certificate!](https://huggingface.co/learn/mcp-course/unit3/certificate.md)
- [Module 2: GitHub Actions Integration](https://huggingface.co/learn/mcp-course/unit3/github-actions-integration.md)

### Webhook Listener
https://huggingface.co/learn/mcp-course/unit3_1/webhook-listener.md

# Webhook Listener

The webhook listener is the entry point for our Pull Request Agent. It receives real-time events from the Hugging Face Hub when discussions are created or updated, triggering our MCP-powered tagging workflow. In this section, we'll implement a webhook handler using FastAPI.

## Understanding Webhook Integration

Following the [Hugging Face Webhooks Guide](https://raw.githubusercontent.com/huggingface/hub-docs/refs/heads/main/docs/hub/webhooks-guide-discussion-bot.md), our webhook listener validates incoming requests and processes discussion events in real-time.

![Webhook Creation](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/webhooks-guides/001-discussion-bot/webhook-creation.png)

### Webhook Event Flow

Understanding the webhook flow is crucial for building a reliable listener:

1. **User Action**: Someone creates a comment in a model repository discussion
2. **Hub Event**: Hugging Face generates a webhook event
3. **Webhook Delivery**: Hub sends POST request to our endpoint
4. **Authentication**: We validate the webhook secret
5. **Processing**: Extract tags from the comment content
6. **Action**: Use MCP tools to create pull requests for new tags

> [!TIP]
> Webhooks are push notifications - the Hugging Face Hub actively sends events to your application rather than you polling for changes. This enables real-time responses to discussions and comments.
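Before any processing happens, the listener has to decide whether an incoming payload is a discussion-comment creation at all. That check reduces to two fields of the payload. A minimal sketch (field names follow the discussion-bot guide; the sample values are made up):

```python
# Illustrative subset of a webhook payload; real payloads carry more metadata
sample_payload = {
    "event": {"action": "create", "scope": "discussion.comment"},
    "comment": {"content": "Please add tags: pytorch", "author": {"id": "some-user"}},
    "discussion": {"title": "Missing tags", "num": 1},
    "repo": {"name": "some-user/some-model"},
}

def is_new_discussion_comment(payload: dict) -> bool:
    """Keep only freshly created discussion comments; ignore everything else."""
    event = payload.get("event", {})
    return event.get("action") == "create" and event.get("scope") == "discussion.comment"
```

The full handler below applies exactly this action/scope filter before queuing any work.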

## FastAPI Webhook Application

Let's build our webhook listener step by step, starting with the foundation and building up to the complete processing logic.

### 1. Application Setup

First, let's set up the basic FastAPI application with all necessary imports and configuration:

```python
import os
import json
from datetime import datetime
from typing import List, Dict, Any, Optional

from fastapi import FastAPI, Request, BackgroundTasks
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import JSONResponse
from pydantic import BaseModel
```

These imports give us everything we need to build a robust webhook handler. `FastAPI` provides the web framework, `BackgroundTasks` lets us defer slow work until after the response is sent, and `BaseModel` with the typing helpers lets us describe and validate the webhook payload.

Now let's configure our application:

```python
# Configuration
WEBHOOK_SECRET = os.getenv("WEBHOOK_SECRET")
HF_TOKEN = os.getenv("HF_TOKEN")

# Simple storage for processed operations
tag_operations_store: List[Dict[str, Any]] = []

app = FastAPI(title="HF Tagging Bot")
app.add_middleware(CORSMiddleware, allow_origins=["*"])
```

This configuration sets up:
- **Webhook secret**: For validating incoming webhooks
- **HF token**: For authenticating with the Hub API
- **Operations store**: In-memory storage for monitoring processed operations
- **CORS middleware**: Allows cross-origin requests for the web interface

> [!TIP]
> The `tag_operations_store` list keeps track of recent webhook processing operations. This is useful for debugging and monitoring, but in production you might want to use a database or limit the size of this list.
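One lightweight way to bound that memory without pulling in a database is a `collections.deque` with a maximum length, which evicts the oldest entries automatically. This is a hardening sketch of our own, not what the course code does:

```python
from collections import deque
from typing import Any, Dict

# Keep only the 500 most recent operations; older entries are dropped silently
tag_operations_store: deque = deque(maxlen=500)

for i in range(600):  # simulate a burst of processed webhooks
    tag_operations_store.append({"id": i})
```

A `deque` supports `append` like a list but not slicing, so an endpoint reading it would use `list(tag_operations_store)[-50:]`.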

### 2. Webhook Data Models

Based on the [Hugging Face webhook documentation](https://raw.githubusercontent.com/huggingface/hub-docs/refs/heads/main/docs/hub/webhooks-guide-discussion-bot.md), we need to understand the webhook data structure:

```python
class WebhookEvent(BaseModel):
    event: Dict[str, str]          # Contains action and scope information
    comment: Dict[str, Any]        # Comment content and metadata
    discussion: Dict[str, Any]     # Discussion information
    repo: Dict[str, str]           # Repository details
```

This Pydantic model helps us understand the webhook structure.

The key fields we care about are:
- `event.action`: Usually "create" for new comments
- `event.scope`: Usually "discussion.comment" for comment events
- `comment.content`: The actual comment text
- `repo.name`: The repository where the comment was made

### 3. Core Webhook Handler

Now for the main webhook handler - this is where the real work happens. Let's break it down into digestible pieces:

```python
@app.post("/webhook")
async def webhook_handler(request: Request, background_tasks: BackgroundTasks):
    """
    Handle incoming webhooks from Hugging Face Hub
    Following the pattern from: https://raw.githubusercontent.com/huggingface/hub-docs/refs/heads/main/docs/hub/webhooks-guide-discussion-bot.md
    """
    print("🔔 Webhook received!")
    
    # Step 1: Validate webhook secret (security)
    webhook_secret = request.headers.get("X-Webhook-Secret")
    if webhook_secret != WEBHOOK_SECRET:
        print("❌ Invalid webhook secret")
        return JSONResponse(status_code=400, content={"error": "incorrect secret"})
```

The first step is security validation. We check the `X-Webhook-Secret` header against our configured secret to ensure the webhook is legitimate.

> [!TIP]
> Always validate webhook secrets! Without this check, anyone could send fake webhook requests to your application. The secret acts as a shared password between Hugging Face and your application.
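The equality check in the handler works, but plain string comparison can in principle leak timing information about the secret. If you want to harden it, the standard library offers a constant-time comparison (our suggestion, not part of the course code):

```python
import hmac
from typing import Optional

def secret_is_valid(received: Optional[str], expected: Optional[str]) -> bool:
    """Constant-time comparison of the X-Webhook-Secret header value."""
    if received is None or expected is None:
        # Missing header or unconfigured secret: reject rather than compare
        return False
    return hmac.compare_digest(received, expected)
```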

Next, let's parse and validate the webhook data:

```python
    # Step 2: Parse webhook data
    try:
        webhook_data = await request.json()
        print(f"📥 Webhook data: {json.dumps(webhook_data, indent=2)}")
    except Exception as e:
        print(f"❌ Error parsing webhook data: {str(e)}")
        return JSONResponse(status_code=400, content={"error": "invalid JSON"})
    
    # Step 3: Validate event structure
    event = webhook_data.get("event", {})
    if not event:
        print("❌ No event data in webhook")
        return JSONResponse(status_code=400, content={"error": "missing event data"})
```

This parsing step handles potential JSON errors gracefully and validates that we have the expected event structure.

Now for the event filtering logic:

```python
    # Step 4: Check if this is a discussion comment creation
    # Following the webhook guide pattern:
    if (
        event.get("action") == "create" and 
        event.get("scope") == "discussion.comment"
    ):
        print("✅ Valid discussion comment creation event")
        
        # Process in background to return quickly to Hub
        background_tasks.add_task(process_webhook_comment, webhook_data)
        
        return {
            "status": "accepted",
            "message": "Comment processing started",
            "timestamp": datetime.now().isoformat()
        }
    else:
        print(f"ℹ️ Ignoring event: action={event.get('action')}, scope={event.get('scope')}")
        return {
            "status": "ignored",
            "reason": "Not a discussion comment creation"
        }
```

This filtering ensures we only process the events we care about - new discussion comments. We ignore other events like repository creation, model uploads, etc.

We use FastAPI's `background_tasks.add_task()` to process the webhook asynchronously. This allows us to return a response quickly (within seconds) while the actual tag processing happens in the background.

> [!TIP]
> Webhook endpoints should respond within 10 seconds, or the sending platform may consider them failed. Using background tasks ensures fast responses while allowing complex processing to happen asynchronously.

### 4. Comment Processing Logic

Now let's implement the core comment processing function that does the actual tag extraction and MCP tool usage:

```python
async def process_webhook_comment(webhook_data: Dict[str, Any]):
    """
    Process webhook comment to detect and add tags
    Integrates with our MCP client for Hub interactions
    """
    print("🏷️ Starting process_webhook_comment...")
    
    try:
        # Extract comment and repository information
        comment_content = webhook_data["comment"]["content"]
        discussion_title = webhook_data["discussion"]["title"]
        repo_name = webhook_data["repo"]["name"]
        discussion_num = webhook_data["discussion"]["num"]
        comment_author = webhook_data["comment"]["author"].get("id", "unknown")
        
        print(f"📝 Comment from {comment_author}: {comment_content}")
        print(f"📰 Discussion: {discussion_title}")
        print(f"📦 Repository: {repo_name}")
```

This initial section extracts all the relevant information from the webhook data. We get both the comment content and discussion title since tags might be mentioned in either place.

Next, we extract and process the tags:

```python
        # Extract potential tags from comment and title
        comment_tags = extract_tags_from_text(comment_content)
        title_tags = extract_tags_from_text(discussion_title)
        all_tags = list(set(comment_tags + title_tags))
        
        print(f"🔍 Found tags: {all_tags}")
        
        # Store operation for monitoring
        operation = {
            "timestamp": datetime.now().isoformat(),
            "repo_name": repo_name,
            "discussion_num": discussion_num,
            "comment_author": comment_author,
            "extracted_tags": all_tags,
            "comment_preview": comment_content[:100] + "..." if len(comment_content) > 100 else comment_content,
            "status": "processing"
        }
        tag_operations_store.append(operation)
```

We combine tags from both sources and create an operation record for monitoring. This record tracks the progress of each webhook processing operation.

> [!TIP]
> Storing operation records is crucial for debugging and monitoring. When something goes wrong, you can look at recent operations to understand what happened and why.

Now for the MCP agent integration:

```python
        if not all_tags:
            operation["status"] = "no_tags"
            operation["message"] = "No recognizable tags found"
            print("❌ No tags found to process")
            return
        
        # Get MCP agent for tag processing
        agent = await get_agent()
        if not agent:
            operation["status"] = "error"
            operation["message"] = "Agent not configured (missing HF_TOKEN)"
            print("❌ No agent available")
            return
        
        # Process each extracted tag
        operation["results"] = []
        for tag in all_tags:
            try:
                print(f"🤖 Processing tag '{tag}' for repo '{repo_name}'")
                
                # Create prompt for agent to handle tag processing
                prompt = f"""
                Analyze the repository '{repo_name}' and determine if the tag '{tag}' should be added.
                
                First, check the current tags using get_current_tags.
                If '{tag}' is not already present and it's a valid tag, add it using add_new_tag.
                
                Repository: {repo_name}
                Tag to process: {tag}
                
                Provide a clear summary of what was done.
                """
                
                response = await agent.run(prompt)
                print(f"🤖 Agent response for '{tag}': {response}")
                
                # Parse response and store result
                tag_result = {
                    "tag": tag,
                    "response": response,
                    "timestamp": datetime.now().isoformat()
                }
                operation["results"].append(tag_result)
                
            except Exception as e:
                error_msg = f"❌ Error processing tag '{tag}': {str(e)}"
                print(error_msg)
                operation["results"].append({
                    "tag": tag,
                    "error": str(e),
                    "timestamp": datetime.now().isoformat()
                })
        
        operation["status"] = "completed"
        print(f"✅ Completed processing {len(all_tags)} tags")
```

This section handles the core business logic:
1. **Validation**: Ensure we have tags to process and an available agent
2. **Processing**: For each tag, create a natural language prompt for the agent
3. **Recording**: Store all results for monitoring and debugging
4. **Error handling**: Gracefully handle errors for individual tags

The agent prompt is carefully crafted to instruct the AI on exactly what steps to take: check current tags first, then add the new tag if appropriate.
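Since that template is rendered once per tag, factoring it into a small helper keeps the loop readable and makes the prompt easy to unit-test. This is a refactoring sketch of our own; the course code inlines the f-string:

```python
def build_tag_prompt(repo_name: str, tag: str) -> str:
    """Render the per-tag instruction handed to the MCP agent."""
    return (
        f"Analyze the repository '{repo_name}' and determine if the tag "
        f"'{tag}' should be added.\n"
        "First, check the current tags using get_current_tags.\n"
        f"If '{tag}' is not already present and it's a valid tag, "
        "add it using add_new_tag.\n"
        "Provide a clear summary of what was done."
    )
```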


### 5. Health and Monitoring Endpoints

Besides the webhook handler, we need endpoints for monitoring and debugging. Let's add these essential endpoints:

```python
@app.get("/")
async def root():
    """Root endpoint with basic information"""
    return {
        "name": "HF Tagging Bot",
        "status": "running",
        "description": "Webhook listener for automatic model tagging",
        "endpoints": {
            "webhook": "/webhook",
            "health": "/health",
            "operations": "/operations"
        }
    }
```

The root endpoint provides basic information about your service and its available endpoints.

```python
@app.get("/health")
async def health_check():
    """Health check endpoint for monitoring"""
    agent = await get_agent()
    
    return {
        "status": "healthy",
        "timestamp": datetime.now().isoformat(),
        "components": {
            "webhook_secret": "configured" if WEBHOOK_SECRET else "missing",
            "hf_token": "configured" if HF_TOKEN else "missing",
            "mcp_agent": "ready" if agent else "not_ready"
        }
    }
```

The health check endpoint validates that all your components are properly configured. This is essential for production monitoring.

```python
@app.get("/operations")
async def get_operations():
    """Get recent tag operations for monitoring"""
    # Return last 50 operations
    recent_ops = tag_operations_store[-50:] if tag_operations_store else []
    return {
        "total_operations": len(tag_operations_store),
        "recent_operations": recent_ops
    }
```

The operations endpoint lets you see recent webhook processing activity, which is invaluable for debugging and monitoring.

> [!TIP]
> Health and monitoring endpoints are crucial for production deployments. They help you quickly identify configuration issues and monitor your application's activity without digging through logs.
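The slice-and-count logic behind `/operations` is small enough to pull into a pure function, which makes it trivial to test without spinning up the server (a sketch; the endpoint above inlines it):

```python
from typing import Any, Dict, List

def recent_operations(store: List[Dict[str, Any]], limit: int = 50) -> Dict[str, Any]:
    """Summarize the operation log the way the /operations endpoint does."""
    return {
        "total_operations": len(store),
        "recent_operations": store[-limit:],  # negative slice handles short lists too
    }
```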

## Webhook Configuration on Hugging Face Hub

Now that we have our webhook listener ready, let's configure it on the Hugging Face Hub. This is where we connect our application to real repository events.

### 1. Create Webhook in Settings

Following the [webhook setup guide](https://huggingface.co/docs/hub/webhooks-guide-discussion-bot):

![Webhook Settings](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/webhooks-guides/001-discussion-bot/webhook-creation.png)

Navigate to your [Hugging Face Settings](https://huggingface.co/settings/webhooks) and configure:

1. **Target Repositories**: Specify which repositories to monitor
2. **Webhook URL**: Your deployed application endpoint (e.g., `https://your-space.hf.space/webhook`)
3. **Secret**: Use the same secret from your `WEBHOOK_SECRET` environment variable
4. **Events**: Subscribe to "Community (PR & discussions)" events

> [!TIP]
> Start with one or two test repositories before configuring webhooks for many repositories. This lets you validate your application works correctly before scaling up.

### 2. Space URL Configuration

For Hugging Face Spaces deployment, you'll need to get your direct URL:

![Direct URL](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/webhooks-guides/001-discussion-bot/direct-url.png)

The process is:
1. Click "Embed this Space" in your Space settings
2. Copy the "Direct URL" 
3. Append `/webhook` to create your webhook endpoint
4. Update your webhook configuration with this URL

For example, if your Space URL is `https://username-space-name.hf.space`, your webhook endpoint would be `https://username-space-name.hf.space/webhook`.

## Testing the Webhook Listener

Testing is crucial before deploying to production. Let's walk through different testing approaches:

### 1. Local Testing

You can test your webhook handler locally using a simple script:

```python
# test_webhook_local.py
import requests
import json

# Test data matching webhook format
test_webhook_data = {
    "event": {
        "action": "create",
        "scope": "discussion.comment"
    },
    "comment": {
        "content": "This model needs tags: pytorch, transformers",
        "author": {"id": "test-user"}
    },
    "discussion": {
        "title": "Missing tags",
        "num": 1
    },
    "repo": {
        "name": "test-user/test-model"
    }
}

# Send test webhook
response = requests.post(
    "http://localhost:8000/webhook",
    json=test_webhook_data,
    headers={"X-Webhook-Secret": "your-test-secret"}
)

print(f"Status: {response.status_code}")
print(f"Response: {response.json()}")
```

This script simulates a real webhook request, allowing you to test your handler without waiting for real events.

### 2. Simulation Endpoint for Development

You can also add a simulation endpoint to your FastAPI application for easier testing:

```python
@app.post("/simulate_webhook")
async def simulate_webhook(
    repo_name: str, 
    discussion_title: str, 
    comment_content: str
) -> str:
    """Simulate webhook for testing purposes"""
    
    # Create mock webhook data
    mock_webhook_data = {
        "event": {
            "action": "create",
            "scope": "discussion.comment"
        },
        "comment": {
            "content": comment_content,
            "author": {"id": "test-user"}
        },
        "discussion": {
            "title": discussion_title,
            "num": 999
        },
        "repo": {
            "name": repo_name
        }
    }
    
    # Process the simulated webhook
    await process_webhook_comment(mock_webhook_data)
    
    return f"Simulated webhook processed for {repo_name}"
```

This endpoint makes it easy to test different scenarios through your application's interface.

> [!TIP]
> Simulation endpoints are incredibly useful during development. They let you test different tag combinations and edge cases without creating actual repository discussions.

## Expected Webhook Result

When everything is working correctly, you should see results like the discussion bot example:

![Discussion Result](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/webhooks-guides/001-discussion-bot/discussion-result.png)

This screenshot shows a successful webhook processing where the bot creates a pull request in response to a discussion comment.

## Next Steps

With our webhook listener implemented, we now have:

1. **Secure webhook validation** following Hugging Face best practices
2. **Real-time event processing** with background task handling
3. **MCP integration** for intelligent tag management
4. **Monitoring and debugging** capabilities

In the next section, we'll integrate everything into a complete Pull Request Agent that demonstrates the full workflow from webhook to PR creation.

> [!TIP]
> Always return webhook responses quickly (within 10 seconds) to avoid timeouts. Use background tasks for longer processing operations like MCP tool execution and pull request creation.


### MCP Client
https://huggingface.co/learn/mcp-course/unit3_1/mcp-client.md

# MCP Client

Now that we have our MCP server with tagging tools, we need to create a client that can interact with these tools. The MCP client serves as the bridge between our webhook handler and the MCP server, enabling our agent to use the Hub tagging functionality.

For the sake of this project, we'll build both an API and a Gradio app. The API will be used to test the MCP server and the webhook listener, and the Gradio app will be used to test the MCP client with simulated webhook events.

> [!TIP]
> For educational purposes, we will build the MCP Server and MCP Client in the same repo. In a real-world application, you would likely have a separate repo for the MCP Server and MCP Client. In fact, you might only build one of these components.

## Understanding the MCP Client Architecture

In our application, the MCP client is integrated into the main FastAPI application (`app.py`). It creates and manages connections to our MCP server, providing a seamless interface for tool execution.

![MCP Client Integration](https://huggingface.co/datasets/mcp-course/images/resolve/main/unit3/app.png)

## Agent-Based MCP Client

We use the `huggingface_hub` Agent class that has built-in MCP support. This provides both language model capabilities and MCP tool integration in a single component.

### 1. Agent Configuration

Let's start by setting up the agent configuration and understanding each component:

```python
import os
from typing import Optional, Literal

from huggingface_hub.inference._mcp.agent import Agent

# Configuration
HF_TOKEN = os.getenv("HF_TOKEN")
HF_MODEL = os.getenv("HF_MODEL", "microsoft/DialoGPT-medium")
DEFAULT_PROVIDER: Literal["hf-inference"] = "hf-inference"

# Global agent instance
agent_instance: Optional[Agent] = None
```

We start with the necessary imports and configuration. The global `agent_instance` variable ensures we create the agent only once and reuse it across multiple requests. This is important for performance since agent initialization can be expensive.

Now let's implement the function that creates and manages our agent:

```python
async def get_agent():
    """Get or create Agent instance"""
    print("🤖 get_agent() called...")
    global agent_instance
    if agent_instance is None and HF_TOKEN:
        print("🔧 Creating new Agent instance...")
        print(f"🔑 HF_TOKEN present: {bool(HF_TOKEN)}")
        print(f"🤖 Model: {HF_MODEL}")
        print(f"🔗 Provider: {DEFAULT_PROVIDER}")
```

The function starts by checking whether we already have an agent instance. This singleton pattern avoids re-creating the agent on every request and keeps its state consistent.

Let's continue with the agent creation:

```python
        try:
            agent_instance = Agent(
                model=HF_MODEL,
                provider=DEFAULT_PROVIDER,
                api_key=HF_TOKEN,
                servers=[
                    {
                        "type": "stdio",
                        "command": "python",
                        "args": ["mcp_server.py"],
                        "cwd": ".",
                        "env": {"HF_TOKEN": HF_TOKEN} if HF_TOKEN else {},
                    }
                ],
            )
            print("✅ Agent instance created successfully")
            print("🔧 Loading tools...")
            await agent_instance.load_tools()
            print("✅ Tools loaded successfully")
        except Exception as e:
            print(f"❌ Error creating/loading agent: {str(e)}")
            agent_instance = None
```

This is where the pieces come together. Let's break down the Agent configuration:

**Agent Parameters:**
- `model`: The language model that will reason about tool usage
- `provider`: How to access the model (Hugging Face Inference Providers)
- `api_key`: Hugging Face API key

**MCP Server Connection:**
- `type: "stdio"`: Connect to the MCP server via standard input/output
- `command: "python"`: Run our MCP server as a Python subprocess
- `args: ["mcp_server.py"]`: The script file to execute
- `env`: Pass the HF_TOKEN to the server process

> [!TIP]
> The `stdio` connection type means the agent starts your MCP server as a subprocess and communicates with it through standard input/output. This is perfect for development and single-machine deployments.

The `load_tools()` call is crucial - it discovers what tools are available from the MCP server and makes them accessible to the agent's reasoning engine.

This completes our agent management function with proper error handling and logging.
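One caveat with this lazy singleton: under concurrent webhooks, two coroutines can both observe `agent_instance is None` and build duplicate agents. An `asyncio.Lock` closes that race. This is a hardening sketch of our own with a placeholder `create_agent`, not part of the course code:

```python
import asyncio
from typing import Optional

_agent_lock = asyncio.Lock()
_agent: Optional[object] = None

async def create_agent() -> object:
    """Placeholder for the real Agent(...) construction and load_tools() call."""
    return object()

async def get_agent_safe() -> Optional[object]:
    """Create the agent at most once, even when called concurrently."""
    global _agent
    if _agent is None:
        async with _agent_lock:
            if _agent is None:  # re-check after acquiring the lock
                _agent = await create_agent()
    return _agent
```

The double check means the lock is only contended during the first creation; steady-state calls return the cached instance without waiting.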

## Tool Discovery and Usage

Once the agent is created and tools are loaded, it can automatically discover and use the MCP tools. This is where the real power of the Agent approach shines.

### Available Tools

The agent discovers our MCP tools automatically:
- `get_current_tags(repo_id: str)` - Retrieve existing repository tags
- `add_new_tag(repo_id: str, new_tag: str)` - Add new tag via pull request

The agent doesn't just call these tools blindly - it reasons about when and how to use them based on the prompt you give it.

### Tool Execution Example

Here's how the agent intelligently uses tools:

```python
# Example of how the agent would use tools
async def example_tool_usage():
    agent = await get_agent()
    
    if agent:
        # The agent can reason about which tools to use
        response = await agent.run(
            "Check the current tags for microsoft/DialoGPT-medium and add the tag 'conversational-ai' if it's not already present"
        )
        print(response)
```

Notice how we give the agent a natural language instruction, and it figures out:
1. First call `get_current_tags` to see what tags exist
2. Check if `conversational-ai` is already there
3. If not, call `add_new_tag` to add it
4. Provide a summary of what it did

This is much more intelligent than calling tools directly!

## Integration with Webhook Processing

Now let's see how the MCP client integrates into our webhook processing pipeline. This is where everything comes together.

### 1. Tag Extraction and Processing

Here's the main function that processes webhook events and uses our MCP agent:

```python
async def process_webhook_comment(webhook_data: Dict[str, Any]):
    """Process webhook to detect and add tags"""
    print("🏷️ Starting process_webhook_comment...")

    try:
        comment_content = webhook_data["comment"]["content"]
        discussion_title = webhook_data["discussion"]["title"]
        repo_name = webhook_data["repo"]["name"]
        
        # Extract potential tags from the comment and discussion title
        comment_tags = extract_tags_from_text(comment_content)
        title_tags = extract_tags_from_text(discussion_title)
        all_tags = list(set(comment_tags + title_tags))

        print(f"🔍 All unique tags: {all_tags}")

        if not all_tags:
            return ["No recognizable tags found in the discussion."]
```

This first part extracts and combines tags from both the comment content and discussion title. We use a set to deduplicate any tags that appear in both places.

> [!TIP]
> Processing both the comment and discussion title increases our chances of catching relevant tags. Users might mention tags in the title like "Missing pytorch tag" or in comments like "This needs #transformers".
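One small wrinkle with `list(set(...))`: the resulting order is arbitrary, so logs and processing order can differ between runs. If deterministic order matters to you, an order-preserving dedup is a one-liner (our variation, not what the course code uses), relying on dicts keeping insertion order:

```python
from typing import List

def merge_tags(comment_tags: List[str], title_tags: List[str]) -> List[str]:
    """Deduplicate while keeping first-seen order (comment tags first)."""
    return list(dict.fromkeys(comment_tags + title_tags))
```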

Next, we get our agent and process each tag:

```python
        # Get agent instance
        agent = await get_agent()
        if not agent:
            return ["Error: Agent not configured (missing HF_TOKEN)"]

        # Process each tag
        result_messages = []
        for tag in all_tags:
            try:
                # Use agent to process the tag
                prompt = f"""
                For the repository '{repo_name}', check if the tag '{tag}' already exists.
                If it doesn't exist, add it via a pull request.
                
                Repository: {repo_name}
                Tag to check/add: {tag}
                """
                
                print(f"🤖 Processing tag '{tag}' for repo '{repo_name}'")
                response = await agent.run(prompt)
                
                # Parse agent response for success/failure
                if "success" in response.lower():
                    result_messages.append(f"✅ Tag '{tag}' processed successfully")
                else:
                    result_messages.append(f"⚠️ Issue with tag '{tag}': {response}")
                    
            except Exception as e:
                error_msg = f"❌ Error processing tag '{tag}': {str(e)}"
                print(error_msg)
                result_messages.append(error_msg)

        return result_messages
```

The key insight here is that we give the agent a clear, structured prompt for each tag. The agent then:
1. Understands it needs to check the current tags first
2. Compares with the new tag we want to add
3. Creates a pull request if needed
4. Returns a summary of its actions

This approach handles the complexity of tool orchestration automatically.

### 2. Tag Extraction Logic

Let's examine the tag extraction logic that feeds into our MCP processing:

```python
import re
from typing import List

# Recognized ML/AI tags for validation
RECOGNIZED_TAGS = {
    "pytorch", "tensorflow", "jax", "transformers", "diffusers",
    "text-generation", "text-classification", "question-answering",
    "text-to-image", "image-classification", "object-detection",
    "fill-mask", "token-classification", "translation", "summarization",
    "feature-extraction", "sentence-similarity", "zero-shot-classification",
    "image-to-text", "automatic-speech-recognition", "audio-classification",
    "voice-activity-detection", "depth-estimation", "image-segmentation",
    "video-classification", "reinforcement-learning", "tabular-classification",
    "tabular-regression", "time-series-forecasting", "graph-ml", "robotics",
    "computer-vision", "nlp", "cv", "multimodal",
}
```

This curated list of recognized tags helps us focus on relevant ML/AI tags and avoid adding inappropriate tags to repositories.

Now the extraction function itself:

```python
def extract_tags_from_text(text: str) -> List[str]:
    """Extract potential tags from discussion text"""
    text_lower = text.lower()
    explicit_tags = []

    # Pattern 1: "tag: something" or "tags: something"
    tag_pattern = r"tags?:\s*([a-zA-Z0-9-_,\s]+)"
    matches = re.findall(tag_pattern, text_lower)
    for match in matches:
        tags = [tag.strip() for tag in match.split(",")]
        explicit_tags.extend(tags)

    # Pattern 2: "#hashtag" style
    hashtag_pattern = r"#([a-zA-Z0-9-_]+)"
    hashtag_matches = re.findall(hashtag_pattern, text_lower)
    explicit_tags.extend(hashtag_matches)

    # Pattern 3: Look for recognized tags mentioned in natural text
    mentioned_tags = []
    for tag in RECOGNIZED_TAGS:
        if tag in text_lower:
            mentioned_tags.append(tag)

    # Combine and deduplicate
    all_tags = list(set(explicit_tags + mentioned_tags))

    # Filter to only include recognized tags or explicitly mentioned ones
    valid_tags = []
    for tag in all_tags:
        if tag in RECOGNIZED_TAGS or tag in explicit_tags:
            valid_tags.append(tag)

    return valid_tags
```

This function uses multiple strategies to extract tags:

1. **Explicit patterns**: "tags: pytorch, transformers" or "tag: nlp"
2. **Hashtags**: "#pytorch #nlp"
3. **Natural mentions**: "This transformers model does text-generation"

The validation step ensures we only suggest appropriate tags, preventing spam or irrelevant tags from being added.
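To see all three strategies at work, here is a condensed, self-contained rerun of the same logic, with the tag set trimmed for brevity and the result sorted for deterministic output (the sample comment is illustrative):

```python
import re

# Trimmed subset of RECOGNIZED_TAGS, just for this demo
RECOGNIZED_TAGS = {"pytorch", "transformers", "nlp", "text-generation", "summarization"}

def extract_tags_from_text(text: str) -> list:
    text_lower = text.lower()
    explicit_tags = []
    # Pattern 1: "tag:" / "tags:" lists
    for match in re.findall(r"tags?:\s*([a-zA-Z0-9-_,\s]+)", text_lower):
        explicit_tags.extend(tag.strip() for tag in match.split(","))
    # Pattern 2: "#hashtag" style
    explicit_tags.extend(re.findall(r"#([a-zA-Z0-9-_]+)", text_lower))
    # Pattern 3: recognized tags mentioned in natural text
    mentioned = [tag for tag in RECOGNIZED_TAGS if tag in text_lower]
    # Deduplicate, then keep only recognized or explicitly listed tags
    all_tags = set(explicit_tags + mentioned)
    return sorted(t for t in all_tags if t in RECOGNIZED_TAGS or t in explicit_tags)

comment = "Missing tag: pytorch. This #transformers model does summarization."
print(extract_tags_from_text(comment))  # → ['pytorch', 'summarization', 'transformers']
```

Note how a single comment can trigger all three patterns at once: the `tag:` prefix, the hashtag, and the natural-language mention of `summarization`.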


## Performance Considerations

When building production MCP clients, performance is critical for keeping webhook processing responsive. Let's walk through the key design decisions behind this implementation.

### 1. Agent Singleton Pattern

The agent is created once and reused to avoid:
- Repeated MCP server startup overhead
- Tool loading delays
- Connection establishment costs

This pattern is essential for webhook handlers that need to respond quickly.
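A minimal sketch of the singleton pattern looks like the following. The `create_agent` function here is a hypothetical stand-in for the expensive setup (in the course code, this is where the `Agent` is constructed and `load_tools()` starts the MCP server subprocess):

```python
import asyncio

_agent_instance = None  # module-level cache shared by all webhook requests

async def create_agent():
    # Stand-in for the expensive setup step: starting the MCP server
    # subprocess and loading its tools, simulated here with a short sleep.
    await asyncio.sleep(0.01)
    return object()

async def get_agent():
    """Create the agent on first call, then reuse the cached instance."""
    global _agent_instance
    if _agent_instance is None:
        _agent_instance = await create_agent()
    return _agent_instance

async def main():
    first = await get_agent()
    second = await get_agent()
    print(first is second)  # → True: the second call reuses the cached agent

asyncio.run(main())
```

Every webhook after the first one skips the setup cost entirely, which is exactly what keeps response times low.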

### 2. Async Processing

All MCP operations are async to:
- Handle multiple webhook requests concurrently
- Avoid blocking the main FastAPI thread
- Provide responsive webhook responses

The async nature allows your webhook handler to accept new requests while processing tags in the background.
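To illustrate why this matters, here is a small sketch (with a hypothetical `process_tag` standing in for an `agent.run(...)` call) showing how independent pieces of work can run concurrently instead of back to back. Note that the course's handler processes tags sequentially in a loop; this is only a demonstration of the concurrency async enables:

```python
import asyncio

async def process_tag(tag: str) -> str:
    # Hypothetical stand-in for an agent.run() call; the await yields
    # control so other coroutines can make progress in the meantime.
    await asyncio.sleep(0.05)
    return f"processed {tag}"

async def main():
    # Three tags processed concurrently: total wall time is roughly one
    # call's latency, not the sum of all three.
    results = await asyncio.gather(*(process_tag(t) for t in ["pytorch", "nlp", "cv"]))
    print(results)  # → ['processed pytorch', 'processed nlp', 'processed cv']

asyncio.run(main())
```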

### 3. Background Task Processing

FastAPI has a built-in `BackgroundTasks` class for running tasks in the background. This is useful for long-running work that shouldn't block the request handler.

```python
from fastapi import BackgroundTasks

@app.post("/webhook")
async def webhook_handler(request: Request, background_tasks: BackgroundTasks):
    """Handle webhook and process in background"""
    
    # Validate webhook quickly
    if request.headers.get("X-Webhook-Secret") != WEBHOOK_SECRET:
        return {"error": "Invalid secret"}
    
    webhook_data = await request.json()
    
    # Process in background to return quickly
    background_tasks.add_task(process_webhook_comment, webhook_data)
    
    return {"status": "accepted"}
```

This pattern ensures webhook responses are fast (under 1 second) while allowing complex tag processing to happen in the background.

> [!TIP]
> Webhook endpoints should respond within 10 seconds or the platform may consider them timed out. Using background tasks ensures you can always respond quickly while handling complex processing asynchronously.

## Next Steps

With our MCP client implemented, we can now:

1. **Implement the Webhook Listener** - Create the FastAPI endpoint that receives Hub events
2. **Integrate Everything** - Connect webhooks, client, and server into a complete system
3. **Add Testing Interface** - Create a Gradio interface for development and monitoring
4. **Deploy and Test** - Validate the complete system in production

In the next section, we'll implement the webhook listener that will trigger our MCP-powered tagging agent.

> [!TIP]
> The Agent class from `huggingface_hub` provides both MCP tool integration and language model reasoning, making it perfect for building intelligent automation workflows like our PR agent. 

<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit3_1/mcp-client.mdx" />

### Quiz 2: Pull Request Agent Integration
https://huggingface.co/learn/mcp-course/unit3_1/quiz2.md

# Quiz 2: Pull Request Agent Integration

Test your knowledge of the complete Pull Request Agent system including MCP client integration and webhook handling.

### Q1: What is the primary purpose of the webhook listener in the Pull Request Agent architecture?

<Question
  choices={[
    {
      text: "To provide a user interface for managing pull requests",
      explain: "The webhook listener handles Hub discussion events, not user interfaces."
    },
    {
      text: "To receive and process Hugging Face Hub discussion comment events in real-time",
      explain: "Correct! The webhook listener responds to Hub discussion events to trigger agent actions.",
      correct: true
    },
    {
      text: "To store pull request data permanently in a database",
      explain: "While it may process PR data, its primary role is event handling, not storage."
    },
    {
      text: "To authenticate users with the Hugging Face Hub",
      explain: "Webhook listeners handle events, not user authentication."
    }
  ]}
/>

### Q2: In the Agent-based MCP client implementation, how does the client connect to the MCP server?

<Question
  choices={[
    {
      text: "Through direct function calls in the same process",
      explain: "The Agent uses subprocess communication, not direct function calls."
    },
    {
      text: "Using stdio connection type to communicate with the MCP server as a subprocess",
      explain: "Correct! The Agent starts the MCP server with 'python mcp_server.py' and communicates via stdin/stdout.",
      correct: true
    },
    {
      text: "By writing files to a shared directory",
      explain: "MCP uses real-time communication, not file-based communication."
    },
    {
      text: "Through HTTP REST API calls",
      explain: "The stdio connection type doesn't use HTTP - it uses standard input/output streams."
    }
  ]}
/>

### Q3: Why does the webhook handler use FastAPI's `background_tasks.add_task()` instead of processing requests synchronously?

<Question
  choices={[
    {
      text: "To reduce server memory usage",
      explain: "Background tasks don't necessarily reduce memory usage."
    },
    {
      text: "To comply with Hugging Face Hub requirements",
      explain: "While Hub expects timely responses, this isn't a specific Hub requirement."
    },
    {
      text: "To return responses quickly (within 10 seconds) while allowing complex tag processing in the background",
      explain: "Correct! Webhook endpoints must respond quickly or be considered failed by the sending platform.",
      correct: true
    },
    {
      text: "To enable multiple webhook requests to be processed in parallel",
      explain: "While this enables parallelism, the primary reason is response time requirements."
    }
  ]}
/>

### Q4: What is the purpose of validating the `X-Webhook-Secret` header in the webhook handler?

<Question
  choices={[
    {
      text: "To identify which repository sent the webhook",
      explain: "Repository information comes from the webhook payload, not the secret header."
    },
    {
      text: "To prevent unauthorized requests and ensure the webhook is legitimate from Hugging Face",
      explain: "Correct! The shared secret acts as authentication between Hugging Face and your application.",
      correct: true
    },
    {
      text: "To decode the webhook payload data",
      explain: "The secret is for authentication, not for decoding payload data."
    },
    {
      text: "To determine which MCP tools to use",
      explain: "Tool selection is based on the webhook content, not the secret header."
    }
  ]}
/>

### Q5: In the Agent implementation, what happens when `await agent_instance.load_tools()` is called?

<Question
  choices={[
    {
      text: "It downloads tools from the Hugging Face Hub",
      explain: "The tools are local MCP server tools, not downloaded from the Hub."
    },
    {
      text: "It discovers and makes available the MCP tools from the connected server (get_current_tags and add_new_tag)",
      explain: "Correct! This discovers what tools the MCP server provides and makes them available to the agent's reasoning engine.",
      correct: true
    },
    {
      text: "It starts the FastAPI webhook server",
      explain: "load_tools() is specific to MCP tool discovery, not starting web servers."
    },
    {
      text: "It authenticates with the Hugging Face API",
      explain: "Authentication happens during agent creation, not during tool loading."
    }
  ]}
/>

### Q6: How does the Agent intelligently use MCP tools when processing a natural language instruction?

<Question
  choices={[
    {
      text: "It randomly calls available tools until one works",
      explain: "The Agent uses reasoning to determine which tools to call and in what order."
    },
    {
      text: "It always calls get_current_tags first, then add_new_tag second",
      explain: "While this might be a common pattern, the Agent reasons about which tools to use based on the instruction."
    },
    {
      text: "It reasons about the instruction and determines which tools to call and in what sequence",
      explain: "Correct! The Agent can understand complex instructions and create tool execution plans automatically.",
      correct: true
    },
    {
      text: "It requires explicit function calls to be specified in the instruction",
      explain: "The Agent can work with natural language instructions without explicit function specifications."
    }
  ]}
/>

### Q7: What filtering logic determines whether a webhook event should trigger tag processing?

<Question
  choices={[
    {
      text: "All webhook events are processed regardless of type",
      explain: "The handler filters events to only process relevant ones."
    },
    {
      text: "Only events where action='create' and scope='discussion.comment'",
      explain: "Correct! This ensures we only process new discussion comments, ignoring other Hub events.",
      correct: true
    },
    {
      text: "Only events from verified repository owners",
      explain: "The filtering is based on event type, not user verification status."
    },
    {
      text: "Only events that contain the word 'tag' in the comment",
      explain: "Event filtering happens before content analysis - we filter by event type first."
    }
  ]}
/>

Congrats on finishing this Quiz 🥳! If you need to review any elements, take the time to revisit the chapter to reinforce your knowledge. 

<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit3_1/quiz2.mdx" />

### Quiz 1: MCP Server Implementation
https://huggingface.co/learn/mcp-course/unit3_1/quiz1.md

# Quiz 1: MCP Server Implementation

Test your knowledge of MCP server concepts and implementation for the Pull Request Agent.

### Q1: What is the primary role of an MCP Server in the Pull Request Agent architecture?

<Question
  choices={[
    {
      text: "To host the user interface for the application",
      explain: "The MCP Server provides backend capabilities, not the user interface."
    },
    {
      text: "To expose tools and resources that the AI agent can use to interact with GitHub",
      explain: "Close, but this project focuses on the Hugging Face Hub, not GitHub."
    },
    {
      text: "To expose tools for reading and updating model repository tags on the Hugging Face Hub",
      explain: "Correct! The MCP Server provides get_current_tags and add_new_tag tools for Hub interactions.",
      correct: true
    },
    {
      text: "To train the AI model on pull request data",
      explain: "MCP Servers provide runtime capabilities, not model training functionality."
    }
  ]}
/>

### Q2: In the FastMCP implementation, why must all MCP tool functions return strings instead of Python objects?

<Question
  choices={[
    {
      text: "To improve performance by reducing memory usage",
      explain: "While strings might be more memory efficient, this is not the primary reason."
    },
    {
      text: "To ensure reliable data exchange between the MCP server and client",
      explain: "Correct! MCP protocol requires string responses, so we use json.dumps() to serialize data.",
      correct: true
    },
    {
      text: "To make the code easier to debug",
      explain: "While JSON strings are readable, this is not the primary technical requirement."
    },
    {
      text: "To comply with Hugging Face Hub API requirements",
      explain: "This is an MCP protocol requirement, not specific to the Hub API."
    }
  ]}
/>

### Q3: When implementing the `add_new_tag` tool, what is the purpose of checking if a tag already exists before creating a pull request?

<Question
  choices={[
    {
      text: "To reduce API calls and improve performance",
      explain: "While this helps performance, it's not the primary reason for the check."
    },
    {
      text: "To prevent creating duplicate pull requests and provide better user feedback",
      explain: "Correct! This validation prevents unnecessary PRs and returns meaningful status messages.",
      correct: true
    },
    {
      text: "To comply with Hugging Face Hub rate limits",
      explain: "While avoiding unnecessary calls helps with rate limits, this is not the primary purpose."
    },
    {
      text: "To ensure the tag format is valid",
      explain: "Tag validation is separate from checking if it already exists."
    }
  ]}
/>

### Q4: In the MCP server implementation, what happens when a model repository doesn't have an existing README.md file?

<Question
  choices={[
    {
      text: "The add_new_tag tool will fail with an error",
      explain: "The implementation handles this case gracefully."
    },
    {
      text: "The tool creates a new ModelCard with ModelCardData and proceeds with the tag addition",
      explain: "Correct! The code handles HfHubHTTPError and creates a new model card when none exists.",
      correct: true
    },
    {
      text: "The tool skips adding the tag and returns a warning",
      explain: "The tool doesn't skip the operation - it creates what's needed."
    },
    {
      text: "The tool automatically creates a default README with placeholder content",
      explain: "It creates a minimal model card structure, not placeholder content."
    }
  ]}
/>

### Q5: What is the significance of using `create_pr=True` in the `hf_api.create_commit()` function call?

<Question
  choices={[
    {
      text: "It makes the commit directly to the main branch",
      explain: "Setting create_pr=True creates a pull request, not a direct commit to main."
    },
    {
      text: "It automatically creates a pull request instead of committing directly to the main branch",
      explain: "Correct! This enables the review workflow and follows repository governance practices.",
      correct: true
    },
    {
      text: "It creates a private branch that only the repository owner can see",
      explain: "Pull requests are visible to repository collaborators and can be public."
    },
    {
      text: "It validates the commit before creating it",
      explain: "Validation happens regardless of the create_pr parameter."
    }
  ]}
/>

### Q6: Why does the MCP server implementation use extensive logging with emojis throughout the code?

<Question
  choices={[
    {
      text: "To make the code more fun and engaging for developers",
      explain: "While emojis are visually appealing, there's a more practical reason."
    },
    {
      text: "To help with debugging and monitoring when the server runs autonomously in response to Hub events",
      explain: "Correct! Since the agent responds to webhooks automatically, detailed logs are crucial for troubleshooting.",
      correct: true
    },
    {
      text: "To comply with FastMCP logging requirements",
      explain: "FastMCP doesn't require specific logging formats or emojis."
    },
    {
      text: "To reduce the amount of text in log files",
      explain: "Emojis don't significantly reduce log file size and this isn't the primary goal."
    }
  ]}
/>

Congrats on finishing this Quiz 🥳! If you need to review any elements, take the time to revisit the chapter to reinforce your knowledge. 

<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit3_1/quiz1.mdx" />

### Conclusion
https://huggingface.co/learn/mcp-course/unit3_1/conclusion.md

# Conclusion

Congratulations! 🎉 You've successfully built a Pull Request Agent that automatically enhances Hugging Face model repositories through intelligent tagging using MCP (Model Context Protocol).

The patterns you've learned - webhook processing, MCP tool integration, agent orchestration, and production deployment - are foundational skills for building agents with MCP. These techniques apply far beyond model tagging and represent a powerful approach to building intelligent systems that augment human capabilities.

## What we've built

Throughout this unit, you created a complete automation system with four key components:

- **MCP Server** (`mcp_server.py`) - FastMCP-based server with Hub API integration
- **MCP Client** (Agent) - Intelligent orchestration with language model reasoning  
- **Webhook Listener** (FastAPI) - Real-time event processing from Hugging Face Hub
- **Testing Interface** (Gradio) - Development and monitoring dashboard

## Next Steps

### Continue Learning
- Explore advanced MCP patterns and tools
- Study other automation frameworks and AI system architecture
- Learn about multi-agent systems and tool composition

### Build More Agents
- Develop domain-specific automation tools for your own projects
- Try out other types of webhooks (e.g. model uploads, model downloads, etc.)
- Experiment with different workflows

### Share Your Work
- Open source your agent for the community
- Write about your learnings and automation patterns
- Contribute to the MCP ecosystem

### Scale Your Impact
- Deploy agents for multiple repositories or organizations
- Build more sophisticated automation workflows
- Explore commercial applications of AI automation

> [!TIP]
> Consider documenting your experience and sharing it with the community! Your journey from learning MCP to building a production agent will help others explore AI automation.


<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit3_1/conclusion.mdx" />

### Build a Pull Request Agent on the Hugging Face Hub
https://huggingface.co/learn/mcp-course/unit3_1/introduction.md

# Build a Pull Request Agent on the Hugging Face Hub

Welcome to Unit 3 of the MCP Course! 

In this unit, we'll build a pull request agent that automatically tags Hugging Face model repositories based on discussions and comments. This real-world application demonstrates how to integrate MCP with webhook listeners and automated workflows.

> [!TIP]
> This unit showcases a real world use case where MCP servers can respond to real-time events from the Hugging Face Hub, automatically creating pull requests to improve repository metadata.

## What You'll Learn

In this unit, you will:

- Create an MCP Server that interacts with the Hugging Face Hub API
- Implement webhook listeners to respond to discussion events
- Set up automated tagging workflows for model repositories
- Deploy a complete webhook-driven application to Hugging Face Spaces

By the end of this unit, you'll have a working PR agent that can monitor discussions and automatically improve repository metadata through pull requests.

## Prerequisites

Before proceeding with this unit, make sure you:

- Have completed Units 1 and 2, or have experience with MCP concepts
- Are comfortable with Python, FastAPI, and webhook concepts
- Have a basic understanding of Hugging Face Hub workflows and pull requests
- Have a development environment with:
  - Python 3.11+
  - A Hugging Face account with API access

## Our Pull Request Agent Project

We'll build a tagging agent that consists of four main components: the MCP server, webhook listener, agent logic, and deployment infrastructure. The agent will tag model repositories based on discussions and comments. This saves model authors time: instead of manually tagging their repositories, they receive ready-to-use PRs.

![PR Agent Architecture](https://huggingface.co/datasets/mcp-course/images/resolve/main/unit3/architecture.png)

In the diagram above: an MCP server reads and updates model tags, a webhook listener receives events from the Hugging Face Hub, an agent analyzes discussions and comments and creates PRs to update model tags, and the deployment infrastructure hosts everything on Hugging Face Spaces.

### Project Overview

To build this application, we will need the following files:

| File | Purpose | Description |
|------|---------|-------------|
| `mcp_server.py` | **Core MCP Server** | FastMCP-based server with tools for reading and updating model tags |
| `app.py` | **Webhook Listener & Agent** | FastAPI app that receives webhooks, processes discussions, and creates PRs |
| `requirements.txt` | **Dependencies** | Python packages including FastMCP, FastAPI, and huggingface-hub |
| `pyproject.toml` | **Project Configuration** | Modern Python packaging with uv dependency management |
| `Dockerfile` | **Deployment** | Container configuration for Hugging Face Spaces |
| `env.example` | **Configuration Template** | Required environment variables and secrets |
| `cleanup.py` | **Utility** | Helper script for development and testing cleanup |

Let's go through each of these files and understand their purpose.

### MCP Server (`mcp_server.py`)

The heart of our application - a FastMCP server that provides tools for:
- Reading current tags from model repositories
- Adding new tags via pull requests to the Hub
- Error handling and validation

This is where you will implement the MCP server and do most of the work for this project. The Gradio app and FastAPI app, which are provided ready to use, will be used to test the MCP server and the webhook listener.

### Webhook Integration

Following the [Hugging Face Webhooks Guide](https://huggingface.co/docs/hub/webhooks-guide-discussion-bot), our agent:
- Listens for discussion comment events
- Validates webhook signatures for security
- Processes mentions and tag suggestions
- Creates pull requests automatically

### Agent Functionality

The agent analyzes discussion content to:
- Extract explicit tag mentions (`tag: pytorch`, `#transformers`)
- Recognize implicit tags from natural language
- Validate tags against known ML/AI categories
- Generate appropriate pull request descriptions

### Deployment & Production

- Containerized deployment to Hugging Face Spaces
- Environment variable management for secrets
- Background task processing for webhook responses
- Gradio interface for testing and monitoring

## Webhook Integration Overview

Our PR agent leverages the same webhook infrastructure used by Hugging Face's discussion bots. Here's how webhooks enable real-time responses:

![Webhook Flow](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/webhooks-guides/001-discussion-bot/webhook-creation.png)

The webhook flow works as follows:
1. **Event Trigger**: A user creates a comment in a model repository discussion
2. **Webhook Delivery**: Hugging Face sends a POST request to our endpoint
3. **Authentication**: We validate the webhook secret for security
4. **Processing**: Our agent analyzes the comment for tag suggestions
5. **Action**: If relevant tags are found, we create a pull request
6. **Response**: The webhook returns immediately while PR creation happens in the background

## Let's Get Started!

Ready to build a production-ready PR agent that can automatically improve Hugging Face repositories? Let's begin by setting up the project structure and understanding the MCP server implementation.



<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit3_1/introduction.mdx" />

### Creating the MCP Server
https://huggingface.co/learn/mcp-course/unit3_1/creating-the-mcp-server.md

# Creating the MCP Server

The MCP server is the heart of our Pull Request Agent. It provides the tools that our agent will use to interact with the Hugging Face Hub, specifically for reading and updating model repository tags. In this section, we'll build the server using FastMCP and the Hugging Face Hub Python SDK.

## Understanding the MCP Server Architecture

Our MCP server provides two essential tools:

| Tool | Description |
| --- | --- |
| `get_current_tags` | Retrieves existing tags from a model repository |
| `add_new_tag` | Adds a new tag to a repository via pull request |

These tools abstract the complexity of Hub API interactions and provide a clean interface for our agent to work with.

![MCP Server Tools](https://huggingface.co/datasets/mcp-course/images/resolve/main/unit3/architecture.png)

## Complete MCP Server Implementation

Let's create our `mcp_server.py` file step by step. We'll build this incrementally so you understand each component and how they work together.

### 1. Imports and Configuration

First, let's set up all the necessary imports and configuration. 

```python
#!/usr/bin/env python3
"""
Simplified MCP Server for HuggingFace Hub Tagging Operations using FastMCP
"""

import os
import json
from fastmcp import FastMCP
from huggingface_hub import HfApi, model_info, ModelCard, ModelCardData
from huggingface_hub.utils import HfHubHTTPError
from dotenv import load_dotenv

load_dotenv()
```

The imports above give us everything we need to build our MCP server. `FastMCP` provides the server framework, while the `huggingface_hub` imports give us the tools to interact with model repositories.

The `load_dotenv()` call automatically loads environment variables from a `.env` file, making it easy to manage secrets like API tokens during development.

> [!TIP]
> If you're using uv, you can create a `.env` file in the root of the project and you won't need to use `load_dotenv()` if you use `uv run` to run the server.
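For local development, the `.env` file might contain something like the following (placeholder value shown; never commit a real token):

```
HF_TOKEN=hf_your_token_here
```

This matches the `os.getenv("HF_TOKEN")` lookup in the configuration block below.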

Next, we'll configure our server with the necessary credentials and create the FastMCP instance:

```python
# Configuration
HF_TOKEN = os.getenv("HF_TOKEN")

# Initialize HF API client
hf_api = HfApi(token=HF_TOKEN) if HF_TOKEN else None

# Create the FastMCP server
mcp = FastMCP("hf-tagging-bot")
```

This configuration block does three important things:
1. Retrieves the Hugging Face token from environment variables
2. Creates an authenticated API client (only if a token is available)
3. Initializes our FastMCP server with a descriptive name

The conditional creation of `hf_api` ensures our server can start even without a token, which is useful for testing the basic structure.

### 2. Get Current Tags Tool

Now let's implement our first tool - `get_current_tags`. This tool retrieves the existing tags from a model repository:

```python
@mcp.tool()
def get_current_tags(repo_id: str) -> str:
    """Get current tags from a HuggingFace model repository"""
    print(f"🔧 get_current_tags called with repo_id: {repo_id}")

    if not hf_api:
        error_result = {"error": "HF token not configured"}
        json_str = json.dumps(error_result)
        print(f"❌ No HF API token - returning: {json_str}")
        return json_str
```

The function starts with validation - checking if we have an authenticated API client. Notice how we return JSON strings instead of Python objects. This is crucial for MCP communication.

> [!TIP]
> All MCP tools must return strings, not Python objects. That's why we use `json.dumps()` to convert our results to JSON strings. This ensures reliable data exchange between the MCP server and client.

Let's continue with the main logic of the `get_current_tags` function:

```python
    try:
        print(f"📡 Fetching model info for: {repo_id}")
        info = model_info(repo_id=repo_id, token=HF_TOKEN)
        current_tags = info.tags if info.tags else []
        print(f"🏷️ Found {len(current_tags)} tags: {current_tags}")

        result = {
            "status": "success",
            "repo_id": repo_id,
            "current_tags": current_tags,
            "count": len(current_tags),
        }
        json_str = json.dumps(result)
        print(f"✅ get_current_tags returning: {json_str}")
        return json_str

    except Exception as e:
        print(f"❌ Error in get_current_tags: {str(e)}")
        error_result = {"status": "error", "repo_id": repo_id, "error": str(e)}
        json_str = json.dumps(error_result)
        print(f"❌ get_current_tags error returning: {json_str}")
        return json_str
```

This implementation follows a clear pattern:
1. **Fetch data** using the Hugging Face Hub API
2. **Process the response** to extract tag information
3. **Structure the result** in a consistent JSON format
4. **Handle errors gracefully** with detailed error messages
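Because the tool returns a JSON string rather than a dict, the client must parse it on receipt. A quick round trip with illustrative values shows what travels over the wire:

```python
import json

# Shape of a successful get_current_tags response (illustrative values)
result = {
    "status": "success",
    "repo_id": "username/my-model",
    "current_tags": ["pytorch", "text-generation"],
    "count": 2,
}
json_str = json.dumps(result)
print(json_str)

# The MCP client receives the string and parses it back into a dict
parsed = json.loads(json_str)
print(parsed["current_tags"])  # → ['pytorch', 'text-generation']
```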

> [!TIP]
> The extensive logging might seem like overkill, but it helps with debugging and monitoring while the server runs. Remember, your application will be reacting autonomously to events from the Hub, so you won't be watching the logs in real time.

### 3. Add New Tag Tool

Now for the more complex tool - `add_new_tag`. This tool adds a new tag to a repository by creating a pull request. Let's start with the initial setup and validation:

```python
@mcp.tool()
def add_new_tag(repo_id: str, new_tag: str) -> str:
    """Add a new tag to a HuggingFace model repository via PR"""
    print(f"🔧 add_new_tag called with repo_id: {repo_id}, new_tag: {new_tag}")

    if not hf_api:
        error_result = {"error": "HF token not configured"}
        json_str = json.dumps(error_result)
        print(f"❌ No HF API token - returning: {json_str}")
        return json_str
```

Similar to our first tool, we start with validation. Now let's fetch the current repository state to check if the tag already exists:

```python
    try:
        # Get current model info and tags
        print(f"📡 Fetching current model info for: {repo_id}")
        info = model_info(repo_id=repo_id, token=HF_TOKEN)
        current_tags = info.tags if info.tags else []
        print(f"🏷️ Current tags: {current_tags}")

        # Check if tag already exists
        if new_tag in current_tags:
            print(f"⚠️ Tag '{new_tag}' already exists in {current_tags}")
            result = {
                "status": "already_exists",
                "repo_id": repo_id,
                "tag": new_tag,
                "message": f"Tag '{new_tag}' already exists",
            }
            json_str = json.dumps(result)
            print(f"🏷️ add_new_tag (already exists) returning: {json_str}")
            return json_str
```

This section demonstrates an important principle: **validate before acting**. We check if the tag already exists to avoid creating unnecessary pull requests.

> [!TIP]
> Always check the current state before making changes. This prevents duplicate work and provides better user feedback. It's especially important when creating pull requests, as duplicate PRs can clutter the repository.

Next, we'll prepare the updated tag list and handle the model card:

```python
        # Add the new tag to existing tags
        updated_tags = current_tags + [new_tag]
        print(f"🆕 Will update tags from {current_tags} to {updated_tags}")

        # Create model card content with updated tags
        try:
            # Load existing model card
            print(f"📄 Loading existing model card...")
            card = ModelCard.load(repo_id, token=HF_TOKEN)
            if not hasattr(card, "data") or card.data is None:
                card.data = ModelCardData()
        except HfHubHTTPError:
            # Create new model card if none exists
            print(f"📄 Creating new model card (none exists)")
            card = ModelCard("")
            card.data = ModelCardData()

        # Update tags - create new ModelCardData with updated tags
        card_dict = card.data.to_dict()
        card_dict["tags"] = updated_tags
        card.data = ModelCardData(**card_dict)
```

This section handles model card management. We try to load an existing model card first, but create a new one if none exists. This ensures our tool works with any repository, even if it's empty.

The model card (`README.md`) contains the repository metadata, including tags. By updating the model card data and creating a pull request, we're following the standard Hugging Face workflow for metadata changes.
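Concretely, the tags live in the YAML front matter at the top of `README.md`. After a PR like ours is merged, a repository's front matter might look something like this (the values here are illustrative):

```yaml
---
license: apache-2.0
tags:
  - text-generation
  - my-new-tag   # the tag added by our pull request
---
```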

Now for the pull request creation - the main part of our tool:

```python
        # Create a pull request with the updated model card
        pr_title = f"Add '{new_tag}' tag"
        pr_description = f"""
## Add tag: {new_tag}

This PR adds the `{new_tag}` tag to the model repository.

**Changes:**
- Added `{new_tag}` to model tags
- Updated from {len(current_tags)} to {len(updated_tags)} tags

**Current tags:** {", ".join(current_tags) if current_tags else "None"}
**New tags:** {", ".join(updated_tags)}

🤖 This is a pull request created by the Hugging Face Hub Tagging Bot.
"""

        print(f"🚀 Creating PR with title: {pr_title}")
```

We create a detailed pull request description that explains what's changing and why. This transparency is crucial for repository maintainers who will review the PR.

> [!TIP]
> Clear, detailed PR descriptions are essential for automated pull requests. They help repository maintainers understand what's happening and make informed decisions about whether to merge the changes.
>
> Also, it's good practice to clearly state that the PR is created by an automated tool. This helps repository maintainers understand how to deal with the PR.

Finally, we create the commit and pull request:

```python
        # Create commit with updated model card using CommitOperationAdd
        from huggingface_hub import CommitOperationAdd

        commit_info = hf_api.create_commit(
            repo_id=repo_id,
            operations=[
                CommitOperationAdd(
                    path_in_repo="README.md", path_or_fileobj=str(card).encode("utf-8")
                )
            ],
            commit_message=pr_title,
            commit_description=pr_description,
            token=HF_TOKEN,
            create_pr=True,
        )

        # Extract PR URL from commit info, falling back to the string form
        pr_url = commit_info.pr_url if hasattr(commit_info, "pr_url") else str(commit_info)

        print(f"✅ PR created successfully! URL: {pr_url}")

        result = {
            "status": "success",
            "repo_id": repo_id,
            "tag": new_tag,
            "pr_url": pr_url,
            "previous_tags": current_tags,
            "new_tags": updated_tags,
            "message": f"Created PR to add tag '{new_tag}'",
        }
        json_str = json.dumps(result)
        print(f"✅ add_new_tag success returning: {json_str}")
        return json_str
```

The `create_commit` function with `create_pr=True` is the key to our automation. It creates a commit with the updated `README.md` file and automatically opens a pull request for review.

Don't forget the error handling for this complex operation:

```python
    except Exception as e:
        print(f"❌ Error in add_new_tag: {str(e)}")
        print(f"❌ Error type: {type(e)}")
        import traceback
        print(f"❌ Traceback: {traceback.format_exc()}")

        error_result = {
            "status": "error",
            "repo_id": repo_id,
            "tag": new_tag,
            "error": str(e),
        }
        json_str = json.dumps(error_result)
        print(f"❌ add_new_tag error returning: {json_str}")
        return json_str
```

The comprehensive error handling includes the full traceback, which is invaluable for debugging when things go wrong.

Emojis in log messages might seem silly, but they make scanning logs much faster. 🔧 for function calls, 📡 for API requests, ✅ for success, and ❌ for errors create visual patterns that help you quickly find what you're looking for.

> [!TIP]
> Whilst building this application, it's easy to accidentally create an infinite loop of PRs. This is because the `create_commit` function with `create_pr=True` will create a PR for every commit. If the PR is not merged, the `create_commit` function will be called again, and again, and again...
>
> We've added checks to prevent this, but it's something to be aware of.
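One way to guard against duplicate PRs is to scan the repository's open discussions for an existing pull request with the same title before calling `create_commit`. The sketch below is not part of the server code above; the `DiscussionInfo` dataclass is a stand-in for the objects yielded by `huggingface_hub`'s `get_repo_discussions`, which expose the same `title`, `is_pull_request`, and `status` attributes:

```python
from dataclasses import dataclass

@dataclass
class DiscussionInfo:
    # Minimal stand-in for huggingface_hub Discussion objects,
    # which carry the same three attributes used below.
    title: str
    is_pull_request: bool
    status: str  # "open", "closed", or "merged"

def has_open_duplicate_pr(discussions, pr_title: str) -> bool:
    """Return True if an open PR with the same title already exists."""
    return any(
        d.is_pull_request and d.status == "open" and d.title == pr_title
        for d in discussions
    )

# In the real server you would pass the result of
# hf_api.get_repo_discussions(repo_id) and skip create_commit on a match.
existing = [
    DiscussionInfo("Add 'vision' tag", True, "open"),
    DiscussionInfo("Fix typo", True, "merged"),
]
print(has_open_duplicate_pr(existing, "Add 'vision' tag"))  # True
print(has_open_duplicate_pr(existing, "Add 'audio' tag"))   # False
```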

## Next Steps

Now that we have our MCP server implemented with robust tagging tools, we need to:

1. **Create the MCP Client** - Build the interface between our agent and MCP server
2. **Implement Webhook Handling** - Listen for Hub discussion events
3. **Integrate Agent Logic** - Connect webhooks with MCP tool calls
4. **Test the Complete System** - Validate end-to-end functionality

In the next section, we'll create the MCP client that will allow our webhook handler to interact with these tools intelligently.

> [!TIP]
> The MCP server runs as a separate process from your main application. This isolation provides better error handling and allows the server to be reused by multiple clients or applications. 

<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit3_1/creating-the-mcp-server.mdx" />

### Setting up the Project
https://huggingface.co/learn/mcp-course/unit3_1/setting-up-the-project.md

# Setting up the Project

In this section, we'll set up the development environment for our Pull Request Agent. 

> [!TIP]
> We'll use modern Python tooling with `uv` for dependency management and create the necessary configuration files. If you're not familiar with `uv`, you can learn more about it [here](https://docs.astral.sh/uv/).


## Project Structure

Let's start by creating the project directory and understanding the file structure:

```bash
git clone https://huggingface.co/spaces/mcp-course/tag-this-repo
```

Our final project structure will look like this:

```
hf-pr-agent/
├── mcp_server.py              # Core MCP server with tagging tools
├── app.py                     # FastAPI webhook listener and agent
├── requirements.txt           # Python dependencies
├── pyproject.toml             # Project configuration
├── env.example                # Environment variables template
├── cleanup.py                 # Development utility
```

## Dependencies and Configuration

Let's walk through the dependencies and configuration for our project. 

### 1. Python Project Configuration

We will use `uv` to create the `pyproject.toml` file to define our project:

> [!TIP]
> If you don't have `uv` installed, you can follow the instructions [here](https://docs.astral.sh/uv/getting-started/installation/).

```toml
[project]
name = "mcp-course-unit3-example"
version = "0.1.0"
description = "FastAPI and Gradio app for Hugging Face Hub discussion webhooks"
readme = "README.md"
requires-python = ">=3.11"
dependencies = [
    "fastapi>=0.104.0",
    "uvicorn[standard]>=0.24.0",
    "gradio>=4.0.0",
    "huggingface-hub[mcp]>=0.32.0",
    "pydantic>=2.0.0",
    "python-multipart>=0.0.6",
    "requests>=2.31.0",
    "python-dotenv>=1.0.0",
    "fastmcp>=2.0.0",
]

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[tool.hatch.build.targets.wheel]
packages = ["src"]
```

For compatibility with various deployment platforms, the same dependencies are also listed in `requirements.txt`.
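For reference, such a `requirements.txt` simply mirrors the dependency list from `pyproject.toml`:

```txt
fastapi>=0.104.0
uvicorn[standard]>=0.24.0
gradio>=4.0.0
huggingface-hub[mcp]>=0.32.0
pydantic>=2.0.0
python-multipart>=0.0.6
requests>=2.31.0
python-dotenv>=1.0.0
fastmcp>=2.0.0
```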

To create a virtual environment, run:

```bash
uv venv
source .venv/bin/activate # or .venv/Scripts/activate on Windows
```

To install the dependencies, run:

```bash
uv sync
```

### 2. Environment Configuration

Create `env.example` to document required environment variables:

```bash
# Hugging Face API Token (required)
# Get from: https://huggingface.co/settings/tokens
HF_TOKEN=hf_your_token_here

# Webhook Secret (required for production)
# Use a strong, random string
WEBHOOK_SECRET=your-webhook-secret-here

# Model for the agent (optional)
HF_MODEL=owner/model

# Provider for MCP agent (optional)
HF_PROVIDER=huggingface
```

You will need to get your Hugging Face API token from [here](https://huggingface.co/settings/tokens).

You will also need to generate a webhook secret. You can do this by running the following command:

```bash
python -c "import secrets; print(secrets.token_hex(32))"
```

You will then need to add the webhook secret to your `.env` file based on the `env.example` file.
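As a sketch of how the application might read these variables at startup (the variable names follow `env.example`; the fail-fast check and helper function are our own convention, not prescribed by the course code):

```python
import os

def load_settings(env: dict) -> dict:
    """Validate and collect settings; env is typically os.environ.
    (python-dotenv's load_dotenv() can populate os.environ from .env first.)"""
    token = env.get("HF_TOKEN")
    if not token:
        # Fail fast: the agent cannot call the Hub API without a token
        raise RuntimeError("HF_TOKEN is required - see env.example")
    return {
        "hf_token": token,
        "webhook_secret": env.get("WEBHOOK_SECRET", ""),
        "hf_model": env.get("HF_MODEL", "owner/model"),
        "hf_provider": env.get("HF_PROVIDER", "huggingface"),
    }

settings = load_settings({"HF_TOKEN": "hf_example", "WEBHOOK_SECRET": "s3cret"})
print(settings["hf_provider"])  # huggingface
```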

## Next Steps

With our project structure and environment set up, we're ready to:

1. **Create the MCP Server** - Implement the core tagging functionality
2. **Build the Webhook Listener** - Handle incoming discussion events
3. **Integrate the Agent** - Connect MCP tools with webhook processing
4. **Test and Deploy** - Validate functionality and deploy to Spaces

In the next section, we'll dive into creating our MCP server that will handle all the Hugging Face Hub interactions.

> [!TIP]
> Keep your `.env` file secure and never commit it to version control. The `.env` file should be added to your `.gitignore` file to prevent accidental exposure of secrets. 

<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit3_1/setting-up-the-project.mdx" />

### Welcome to the 🤗 Model Context Protocol (MCP) Course
https://huggingface.co/learn/mcp-course/unit0/introduction.md

# Welcome to the 🤗 Model Context Protocol (MCP) Course

![MCP Course thumbnail](https://huggingface.co/datasets/mcp-course/images/resolve/main/unit0/1.png)

Welcome to the most exciting topic in AI today: **Model Context Protocol (MCP)**!

This free course, built in partnership with [Anthropic](https://www.anthropic.com), will take you on a journey, **from beginner to informed**, in understanding, using, and building applications with MCP.

This first unit will help you onboard:

* Discover the **course's syllabus**.
* **Get more information about the certification process and the schedule**.
* Get to know the team behind the course.
* Create your **account**.
* **Sign-up to our Discord server**, and meet your classmates and us.

Let's get started!

## What to expect from this course?

In this course, you will:

* 📖 Study Model Context Protocol in **theory, design, and practice.**
* 🧑‍💻 Learn to **use established MCP SDKs and frameworks**.
* 💾 **Share your projects** and explore applications created by the community.
* 🏆 Participate in challenges where you will **evaluate your MCP implementations against other students'.**
* 🎓 **Earn a certificate of completion** by completing assignments.

And more!

At the end of this course, you'll understand **how MCP works and how to build your own AI applications that leverage external data and tools using the latest MCP standards**.

Don't forget to [**sign up to the course!**](https://huggingface.co/mcp-course)

## What does the course look like?

The course is composed of:

* _Foundational Units_: where you learn MCP **concepts in theory**.
* _Hands-on_: where you'll learn **to use established MCP SDKs** to build your applications. These hands-on sections will have pre-configured environments.
* _Use case assignments_: where you'll apply the concepts you've learned to solve a real-world problem that you'll choose.
* _Collaborations_: We're collaborating with Hugging Face's partners to give you the latest MCP implementations and tools.
 
This **course is a living project, evolving with your feedback and contributions!** Feel free to open issues and PRs in GitHub, and engage in discussions in our Discord server.

## What's the syllabus?

Here is the **general syllabus for the course**. A more detailed list of topics will be released with each unit.

| Chapter | Topic                                       | Description                                                                                                            |
| ------- | ------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------- |
| 0       | Onboarding                                  | Set you up with the tools and platforms that you will use.                                                             |
| 1       | MCP Fundamentals, Architecture and Core Concepts | Explain core concepts, architecture, and components of Model Context Protocol. Show a simple use case using MCP.       |
| 2       | End-to-end Use case: MCP in Action              | Build a simple end-to-end MCP application that you can share with the community. |
| 3       | Deployed Use case: MCP in Action               | Build a deployed MCP application using the Hugging Face ecosystem and partners' services.                                        |
| 4       | Bonus Units                                  | Bonus units to help you get more out of the course, working with partners' libraries and services.                                        |

## What are the prerequisites?

To be able to follow this course, you should have:

* Basic understanding of AI and LLM concepts
* Familiarity with software development principles and API concepts
* Experience with at least one programming language (Python or TypeScript examples will be shown)

If you don't have any of these, don't worry! Here are some resources that can help you:

* [LLM Course](https://huggingface.co/learn/llm-course/) will guide you through the basics of using and building with LLMs.
* [Agents Course](https://huggingface.co/learn/agents-course/) will guide you through building AI agents with LLMs.

> [!TIP]
> The above courses are not prerequisites in themselves, so if you understand the concepts of LLMs and agents, you can start the course now!

## What tools do I need?

You only need 2 things:

* _A computer_ with an internet connection.
* An _account_: to access the course resources and create projects. If you don't have an account yet, you can create one [here](https://huggingface.co/join) (it's free).

## The Certification Process

You can choose to follow this course _in audit mode_, or do the activities and _get one of the two certificates we'll issue_. If you audit the course, you can participate in all the challenges and do assignments if you want, and **you don't need to notify us**.

The certification process is **completely free**:

* _To get a certification for fundamentals_: you need to complete Unit 1 of the course. This is intended for students who want to get up to date with the latest trends in MCP, without the need to build a full application.
* _To get a certificate of completion_: you need to complete the use case units (2 and 3). This is intended for students who want to build a full application and share it with the community.

## What is the recommended pace?

Each chapter in this course is designed **to be completed in 1 week, with approximately 3-4 hours of work per week**.

Since there's a deadline, we provide you with a recommended pace:

![Recommended Pace](https://huggingface.co/datasets/mcp-course/images/resolve/main/unit0/2.png)

## How to get the most out of the course?

To get the most out of the course, we have some advice:

1. [Join study groups in Discord](https://discord.gg/UrrTSsSyjb): Studying in groups is always easier. To do that, you need to join our discord server and verify your account.
2. **Do the quizzes and assignments**: The best way to learn is through hands-on practice and self-assessment.
3. **Define a schedule to stay in sync**: You can use our recommended pace schedule below or create your own.

![Course advice](https://huggingface.co/datasets/mcp-course/images/resolve/main/unit0/3.png)

## Who are we

About the authors:

### Ben Burtenshaw

Ben is a Machine Learning Engineer at Hugging Face who focuses on building LLM applications using post-training and agentic approaches. [Follow Ben on the Hub](https://huggingface.co/burtenshaw) to see his latest projects.

### Alex Notov

Alex is Technical Partner Enablement Lead at [Anthropic](https://www.anthropic.com) and worked on unit 3 of this course. Alex trains Anthropic's partners on Claude best practices for their use cases. Follow Alex on [LinkedIn](https://linkedin.com/in/zealoushacker) and [GitHub](https://github.com/zealoushacker).

## Acknowledgments

We would like to extend our gratitude to the following individuals and partners for their invaluable contributions and support:

- [Gradio](https://www.gradio.app/)
- [Continue](https://continue.dev)
- [Llama.cpp](https://github.com/ggerganov/llama.cpp)
- [Anthropic](https://www.anthropic.com)

## I found a bug, or I want to improve the course

Contributions are **welcome** 🤗

* If you _found a bug 🐛 in a notebook_, please [open an issue](https://github.com/huggingface/mcp-course/issues/new) and **describe the problem**.
* If you _want to improve the course_, you can [open a Pull Request](https://github.com/huggingface/mcp-course/pulls).
* If you _want to add a full section or a new unit_, the best is to [open an issue](https://github.com/huggingface/mcp-course/issues/new) and **describe what content you want to add before starting to write it so that we can guide you**.

## I still have questions

Please ask your question in our discord server #mcp-course-questions.

Now that you have all the information, let's get on board ⛵ 


<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit0/introduction.mdx" />

### Building the Gradio MCP Server
https://huggingface.co/learn/mcp-course/unit2/gradio-server.md

# Building the Gradio MCP Server

In this section, we'll create our sentiment analysis MCP server using Gradio. This server will expose a sentiment analysis tool that can be used by both human users through a web interface and AI models through the MCP protocol.

## Introduction to Gradio MCP Integration

Gradio provides a straightforward way to create MCP servers by automatically converting your Python functions into MCP tools. When you set `mcp_server=True` in `launch()`, Gradio:

1. Automatically converts your functions into MCP Tools
2. Maps input components to tool argument schemas
3. Determines response formats from output components
4. Sets up JSON-RPC over HTTP+SSE for client-server communication
5. Creates both a web interface and an MCP server endpoint

## Setting Up the Project

First, let's create a new directory for our project and set up the required dependencies:

```bash
mkdir mcp-sentiment
cd mcp-sentiment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install "gradio[mcp]" textblob
```

## Creating the Server

> [!TIP]
> Hugging Face Spaces requires an `app.py` file to build the Space, so the Python file must be named `app.py`.

Create a new file called `app.py` with the following code:

```python
import json
import gradio as gr
from textblob import TextBlob

def sentiment_analysis(text: str) -> str:
    """
    Analyze the sentiment of the given text.

    Args:
        text (str): The text to analyze

    Returns:
        str: A JSON string containing polarity, subjectivity, and assessment
    """
    blob = TextBlob(text)
    sentiment = blob.sentiment
    
    result = {
        "polarity": round(sentiment.polarity, 2),  # -1 (negative) to 1 (positive)
        "subjectivity": round(sentiment.subjectivity, 2),  # 0 (objective) to 1 (subjective)
        "assessment": "positive" if sentiment.polarity > 0 else "negative" if sentiment.polarity < 0 else "neutral"
    }

    return json.dumps(result)

# Create the Gradio interface
demo = gr.Interface(
    fn=sentiment_analysis,
    inputs=gr.Textbox(placeholder="Enter text to analyze..."),
    outputs=gr.Textbox(),  # A Textbox (rather than gr.JSON) returns the JSON string unmodified
    title="Text Sentiment Analysis",
    description="Analyze the sentiment of text using TextBlob"
)

# Launch the interface and MCP server
if __name__ == "__main__":
    demo.launch(mcp_server=True)
```

## Understanding the Code

Let's break down the key components:

1. **Function Definition**:
   - The `sentiment_analysis` function takes a text input and returns a JSON string
   - It uses TextBlob to analyze the sentiment
   - The docstring is crucial as it helps Gradio generate the MCP tool schema
   - Type hints (`text: str` and `-> str`) help define the input/output schema

2. **Gradio Interface**:
   - `gr.Interface` creates both the web UI and MCP server
   - The function is exposed as an MCP tool automatically
   - Input and output components define the tool's schema
   - Returning a JSON string from the function ensures proper serialization

3. **MCP Server**:
   - Setting `mcp_server=True` enables the MCP server
   - The server will be available at `http://localhost:7860/gradio_api/mcp/sse`
   - You can also enable it using the environment variable:
     ```bash
     export GRADIO_MCP_SERVER=True
     ```

## Running the Server

Start the server by running:

```bash
python app.py
```

You should see output indicating that both the web interface and MCP server are running. The web interface will be available at `http://localhost:7860`, and the MCP server at `http://localhost:7860/gradio_api/mcp/sse`.

## Testing the Server

You can test the server in two ways:

1. **Web Interface**:
   - Open `http://localhost:7860` in your browser
   - Enter some text and click "Submit"
   - You should see the sentiment analysis results

2. **MCP Schema**:
   - Visit `http://localhost:7860/gradio_api/mcp/schema`
   - This shows the MCP tool schema that clients will use
   - You can also find this in the "View API" link in the footer of your Gradio app

## Troubleshooting Tips

1. **Type Hints and Docstrings**:
   - Always provide type hints for your function parameters and return values
   - Include a docstring with an "Args:" block for each parameter
   - This helps Gradio generate accurate MCP tool schemas

2. **String Inputs**:
   - When in doubt, accept input arguments as `str`
   - Convert them to the desired type inside the function
   - This provides better compatibility with MCP clients

3. **SSE Support**:
   - Some MCP clients don't support SSE-based MCP Servers
   - In those cases, use `mcp-remote`:
     ```json
     {
       "mcpServers": {
         "gradio": {
           "command": "npx",
           "args": [
             "mcp-remote",
             "http://localhost:7860/gradio_api/mcp/sse"
           ]
         }
       }
     }
     ```

4. **Connection Issues**:
   - If you encounter connection problems, try restarting both the client and server
   - Check that the server is running and accessible
   - Verify that the MCP schema is available at the expected URL
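To illustrate the string-input tip above, here is a hypothetical tool (not part of the sentiment server) that accepts a numeric parameter as `str` and converts it inside the function, returning a JSON error instead of crashing on bad input:

```python
import json

def word_stats(text: str, top_n: str = "3") -> str:
    """Return the most frequent words; top_n may arrive as a string from
    some MCP clients, so convert it inside the function."""
    try:
        n = int(top_n)
    except ValueError:
        return json.dumps({"error": f"top_n must be an integer, got {top_n!r}"})
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    # Sort by descending frequency, then alphabetically for stable output
    top = sorted(counts.items(), key=lambda kv: (-kv[1], kv[0]))[:n]
    return json.dumps({"top_words": top})

print(word_stats("the cat and the hat", "2"))
# → {"top_words": [["the", 2], ["and", 1]]}
```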

## Deploying to Hugging Face Spaces

To make your server available to others, you can deploy it to Hugging Face Spaces:

1. Create a new Space on Hugging Face:
   - Go to [huggingface.co/spaces](https://huggingface.co/spaces)
   - Click "New Space"
   - Name your space (e.g., "mcp-sentiment")
   - Choose "Gradio" as the SDK
   - Click "Create Space"

2. Create a `requirements.txt` file:
```txt
gradio[mcp]
textblob
```

3. Push your code to the Space:
```bash
git init
git add app.py requirements.txt
git commit -m "Initial commit"
git remote add origin https://huggingface.co/spaces/YOUR_USERNAME/mcp-sentiment
git push -u origin main
```

Your MCP server will now be available at:
```
https://YOUR_USERNAME-mcp-sentiment.hf.space/gradio_api/mcp/sse
```

## Next Steps

Now that we have our MCP server running, we'll create clients to interact with it. In the next sections, we'll:

1. Create a HuggingFace.js-based client inspired by Tiny Agents
2. Implement a SmolAgents-based Python client
3. Test both clients with our deployed server

Let's move on to building our first client! 


<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit2/gradio-server.mdx" />

### Using MCP with Local and Open Source Models
https://huggingface.co/learn/mcp-course/unit2/continue-client.md

# Using MCP with Local and Open Source Models

In this section, we'll connect MCP with local and open-source models using
Continue, a tool for building AI coding assistants that works with local tools
like Ollama.

## Setup Continue

You can install Continue from the VS Code marketplace.

> [!TIP]
> *Continue also has an extension for [JetBrains](https://plugins.jetbrains.com/plugin/22707-continue).*

### VS Code extension

1. Click `Install` on the [Continue extension page in the Visual Studio Marketplace](https://marketplace.visualstudio.com/items?itemName=Continue.continue)
2. This will open the Continue extension page in VS Code, where you will need to click `Install` again
3. The Continue logo will appear on the left sidebar. For a better experience, move Continue to the right sidebar

![move-to-right-sidebar](https://mintlify.s3.us-west-1.amazonaws.com/continue-docs/images/move-to-right-sidebar-b2d315296198e41046fc174d8178f30a.gif)

With Continue configured, we'll move on to setting up a local model provider.

### Local Models

There are many ways to run local models that are compatible with Continue. Three popular options are Ollama, Llama.cpp, and LM Studio. Ollama is an open-source tool that allows users to easily run large language models (LLMs) locally. Llama.cpp is a high-performance C++ library for running LLMs that also includes an OpenAI-compatible server. LM Studio provides a graphical interface for running local models.


You can access local models from the Hugging Face Hub and get commands and quick links for all major local inference apps.

![hugging face hub](https://cdn-uploads.huggingface.co/production/uploads/64445e5f1bc692d87b27e183/d6XMR5q9DwVpdEKFeLW9t.png)

<hfoptions id="local-models">
<hfoption id="llamacpp">

Llama.cpp provides `llama-server`, a lightweight, OpenAI API compatible, HTTP server for serving LLMs. You can either build it from source by following the instructions in the [Llama.cpp repository](https://github.com/ggml-org/llama.cpp), or use a pre-built binary if available for your system. Check out the [Llama.cpp documentation](https://github.com/ggerganov/llama.cpp) for more information.

Once you have `llama-server`, you can run a model from Hugging Face with a command like this:

```bash
llama-server -hf unsloth/Devstral-Small-2505-GGUF:Q4_K_M
```

</hfoption>
<hfoption id="lmstudio">
LM Studio is an application for Mac, Windows, and Linux that makes it easy to run open-source models locally with a graphical interface. To get started:

1.  [Click here to open the model in LM Studio](lmstudio://open_from_hf?model=unsloth/Devstral-Small-2505-GGUF).
2.  Once the model is downloaded, go to the "Local Server" tab and click "Start Server".
</hfoption>
<hfoption id="ollama">
To use Ollama, you can [install](https://ollama.com/download) it and download the model you want to run with the `ollama run` command.

For example, you can download and run the [Devstral-Small](https://huggingface.co/unsloth/Devstral-Small-2505-GGUF?local-app=ollama) model with:

```bash
ollama run hf.co/unsloth/Devstral-Small-2505-GGUF:Q4_K_M
```
This model is around 14GB in size, so make sure the machine you run it on has enough free RAM. Otherwise you might see an error like `model requires more system memory than is available`.
</hfoption>
</hfoptions>

> [!TIP]
> Continue supports various local model providers. Besides Ollama, Llama.cpp, and LM Studio you can also use other providers. For a complete list of supported providers and detailed configuration options, please refer to the [Continue documentation](https://docs.continue.dev/customize/model-providers).

It is important to use models that have tool calling as a built-in feature, e.g. Codestral, Qwen, and Llama 3.1.

1. Create a folder called `.continue/models` at the top level of your workspace
2. Add a file to this folder to configure your model provider. For example, `local-models.yaml`.
3. Add the following configuration, depending on whether you are using Ollama, Llama.cpp, or LM Studio.

<hfoptions id="local-models">
<hfoption id="llamacpp">
This configuration is for a `llama.cpp` model served with `llama-server`. Note that the `model` field should match the model you are serving.

```yaml
name: Llama.cpp model
version: 0.0.1
schema: v1
models:
  - provider: llama.cpp
    model: unsloth/Devstral-Small-2505-GGUF
    apiBase: http://localhost:8080
    defaultCompletionOptions:
      contextLength: 8192 # Adjust based on the model
    name: Llama.cpp Devstral-Small
    roles:
      - chat
      - edit
```
</hfoption>
<hfoption id="lmstudio">
This configuration is for a model served via LM Studio. The model identifier should match what is loaded in LM Studio.

```yaml
name: LM Studio Model
version: 0.0.1
schema: v1
models:
  - provider: lmstudio
    model: unsloth/Devstral-Small-2505-GGUF
    name: LM Studio Devstral-Small
    apiBase: http://localhost:1234/v1
    roles:
      - chat
      - edit
```
</hfoption>
<hfoption id="ollama">
This configuration is for an Ollama model.

```yaml
name: Ollama Devstral model
version: 0.0.1
schema: v1
models:
  - provider: ollama
    model: unsloth/devstral-small-2505-gguf:Q4_K_M
    defaultCompletionOptions:
      contextLength: 8192
    name: Ollama Devstral-Small
    roles:
      - chat
      - edit
```
</hfoption>
</hfoptions>

By default, each model has a maximum context length; for Devstral-Small it is `128000` tokens. A workflow that chains multiple MCP requests needs to handle more tokens than a typical chat, so raise the `contextLength` value in the configuration above (set to `8192` in these examples) as far as your hardware allows.

## How it works

### The tool handshake

Tools provide a powerful way for models to interface with the external world.
They are provided to the model as a JSON object with a name and an arguments
schema. For example, a `read_file` tool with a `filepath` argument will give the
model the ability to request the contents of a specific file.

![autonomous agents diagram](https://gist.github.com/user-attachments/assets/c7301fc0-fa5c-4dc4-9955-7ba8a6587b7a)

The following handshake describes how the Agent uses tools:

1. In Agent mode, available tools are sent along with `user` chat requests
2. The model can choose to include a tool call in its response
3. The user gives permission. This step is skipped if the policy for that tool is set to `Automatic`
4. Continue calls the tool using built-in functionality or the MCP server that offers that particular tool
5. Continue sends the result back to the model
6. The model responds, potentially with another tool call, and step 2 begins again
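The handshake above can be sketched as a simple loop (hypothetical helper names; real clients such as Continue implement this internally, including the permission step):

```python
# Minimal sketch of the agent tool loop described above.
# `model` and `call_tool` are stand-ins for the real chat model and tool runner.

def run_agent(model, call_tool, user_message, tools, max_turns=5):
    messages = [{"role": "user", "content": user_message}]
    for _ in range(max_turns):
        reply = model(messages, tools)          # steps 1-2: model may request a tool
        if "tool_call" not in reply:
            return reply["content"]             # plain answer: the loop ends
        name = reply["tool_call"]["name"]
        args = reply["tool_call"]["arguments"]
        result = call_tool(name, args)          # step 4: execute the tool
        messages.append({"role": "tool", "name": name, "content": result})  # step 5
    return "max turns reached"

# Toy model: requests read_file once, then answers using the tool result.
def toy_model(messages, tools):
    if messages[-1]["role"] == "user":
        return {"tool_call": {"name": "read_file", "arguments": {"filepath": "hn.txt"}}}
    return {"content": f"The file says: {messages[-1]['content']}"}

print(run_agent(toy_model, lambda name, args: "hello", "What's in hn.txt?", []))
```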

Continue supports multiple local model providers. You can use different models
for different tasks or switch models as needed. This section focuses on
local-first solutions, but Continue does work with popular providers
like OpenAI, Anthropic, Microsoft/Azure, Mistral, and more. You can also run
your own model provider.

### Local Model Integration with MCP

Now that we have everything set up, let's add an existing MCP server. Below is a quick example of setting up a new MCP server for use in your assistant:

1. Create a folder called `.continue/mcpServers` at the top level of your workspace
2. Add a file called `playwright-mcp.yaml` to this folder
3. Write the following contents to `playwright-mcp.yaml` and save

```yaml
name: Playwright mcpServer
version: 0.0.1
schema: v1
mcpServers:
  - name: Browser search
    command: npx
    args:
      - "@playwright/mcp@latest"
```

Now test your MCP server with the following prompt:

```
1. Using playwright, navigate to https://news.ycombinator.com.

2. Extract the titles and URLs of the top 4 posts on the homepage.

3. Create a file named hn.txt in the root directory of the project.

4. Save this list as plain text in the hn.txt file, with each line containing the title and URL separated by a hyphen.

Do not output code or instructions—just complete the task and confirm when it is done.
```

The result will be a generated file called `hn.txt` in the current working directory.

![mcp output example](https://deploy-preview-6060--continuedev.netlify.app/assets/images/mcp-playwright-50b192a2ff395f7a6cc11618c5e2d5b1.png)

## Conclusion

By combining Continue with local models like Llama 3.1 and MCP servers, you've
unlocked a powerful development workflow that keeps your code and data private
while leveraging cutting-edge AI capabilities. 

This setup gives you the flexibility to customize your AI assistant with
specialized tools, from web automation to file management, all running entirely
on your local machine. Ready to take your development workflow to the next
level? Start by experimenting with different MCP servers from the [Continue Hub
MCP explore page](https://hub.continue.dev/explore/mcp) and discover how
local AI can transform your coding experience.


<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit2/continue-client.mdx" />

### Building MCP Clients
https://huggingface.co/learn/mcp-course/unit2/clients.md

# Building MCP Clients

In this section, we'll create clients that can interact with our MCP server using different programming languages. We'll implement both a JavaScript client using HuggingFace.js and a Python client using smolagents.

## Configuring MCP Clients

Effective deployment of MCP servers and clients requires proper configuration. Because the MCP specification is still evolving, configuration methods are subject to change. We'll focus on the current best practices for configuration.

### MCP Configuration Files

MCP hosts use configuration files to manage server connections. These files define which servers are available and how to connect to them.

The configuration files are very simple, easy to understand, and consistent across major MCP hosts.

#### `mcp.json` Structure

The standard configuration file for MCP is named `mcp.json`. Here's the basic structure:

```json
{
  "servers": [
    {
      "name": "MCP Server",
      "transport": {
        "type": "sse",
        "url": "http://localhost:7860/gradio_api/mcp/sse"
      }
    }
  ]
}
```

In this example, we have a single server configured to use SSE transport, connecting to a local Gradio server running on port 7860.

> [!TIP]
> We've connected to the Gradio app via SSE transport because we assume the Gradio app is running on a remote server. However, if you want to connect to a local script, use `stdio` transport instead of `sse`.
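As an illustration, a local server script could be configured with `stdio` transport, where the client launches the command itself and communicates over standard input/output (the command and file name below are hypothetical):

```json
{
  "servers": [
    {
      "name": "Local MCP Server",
      "transport": {
        "type": "stdio",
        "command": "python",
        "args": ["my_mcp_server.py"]
      }
    }
  ]
}
```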

#### Configuration for HTTP+SSE Transport

For remote servers using HTTP+SSE transport, the configuration includes the server URL:

```json
{
  "servers": [
    {
      "name": "Remote MCP Server",
      "transport": {
        "type": "sse",
        "url": "https://example.com/gradio_api/mcp/sse"
      }
    }
  ]
}
```

This configuration lets your client reach an MCP server hosted elsewhere, such as a Gradio app deployed on Hugging Face Spaces.

## Configuring a UI MCP Client

When working with Gradio MCP servers, you can configure your UI client to connect to the server using the MCP protocol. Here's how to set it up:

### Basic Configuration

Create a new file called `config.json` with the following configuration:

```json
{
  "mcpServers": {
    "mcp": {
      "url": "http://localhost:7860/gradio_api/mcp/sse"
    }
  }
}
```

This configuration allows your UI client to communicate with the Gradio MCP server using the MCP protocol, enabling seamless integration between your frontend and the MCP service.
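If you want to sanity-check `config.json` before wiring it into a client, a few lines of Python are enough (this simply parses the structure shown above; it does not contact the server):

```python
import json

# The configuration from above, inlined here for a self-contained check;
# in practice you would read it with open("config.json").
config_text = """
{
  "mcpServers": {
    "mcp": {
      "url": "http://localhost:7860/gradio_api/mcp/sse"
    }
  }
}
"""
config = json.loads(config_text)

# Verify each configured server exposes the expected Gradio SSE endpoint.
for name, server in config["mcpServers"].items():
    assert server["url"].endswith("/gradio_api/mcp/sse"), f"{name}: unexpected URL"
    print(f"{name} -> {server['url']}")
```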

## Configuring an MCP Client within Cursor IDE

Cursor provides built-in MCP support, allowing you to connect your deployed MCP servers directly to your development environment.

### Configuration

Open Cursor settings (`Ctrl + Shift + J` / `Cmd + Shift + J`) → **Tools & Integrations** tab → **Add Custom MCP**:

**macOS:**
```json
{
  "mcpServers": {
    "sentiment-analysis": {
      "command": "npx",
      "args": [
        "-y", 
        "mcp-remote", 
        "https://YOURUSERNAME-mcp-sentiment.hf.space/gradio_api/mcp/sse", 
        "--transport", 
        "sse-only"
      ]
    }
  }
}
```

**Windows:**
```json
{
  "mcpServers": {
    "sentiment-analysis": {
      "command": "cmd",
      "args": [
        "/c", 
        "npx", 
        "-y", 
        "mcp-remote", 
        "https://YOURUSERNAME-mcp-sentiment.hf.space/gradio_api/mcp/sse", 
        "--transport", 
        "sse-only"
      ]
    }
  }
}
```

### Why We Use `mcp-remote`

> **Note**: As of mid-2025, Cursor supports direct remote MCP connections via HTTP+SSE and OAuth. You may not need `mcp-remote` unless working with legacy setups or encountering specific compatibility issues.

Earlier versions of MCP clients, including Cursor, only supported local servers via `stdio` transport and lacked support for remote servers with authentication. The `mcp-remote` tool was introduced as a workaround that:

- Runs locally on your machine  
- Bridges Cursor with remote MCP servers  
- Handles transport and authentication implicitly  
- Uses the familiar configuration file format

While this is still useful in some edge cases, Cursor now supports native remote MCP integration. You can directly configure a remote server like this:

```json
{
  "mcpServers": {
    "my-server": {
      "url": "https://your-mcp-server.hf.space/gradio_api/mcp/sse"
    }
  }
}
```
> See [Cursor’s official documentation](https://docs.cursor.com/context/mcp) for up-to-date setup instructions.

Once configured, you can ask Cursor to use your sentiment analysis tool for tasks like analyzing code comments, user feedback, or pull request descriptions.


<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit2/clients.mdx" />

### Building an End-to-End MCP Application
https://huggingface.co/learn/mcp-course/unit2/introduction.md

# Building an End-to-End MCP Application

Welcome to Unit 2 of the MCP Course! 

In this unit, we'll build a complete MCP application from scratch, focusing on creating a server with Gradio and connecting it with multiple clients. This hands-on approach will give you practical experience with the entire MCP ecosystem.

> [!TIP]
> In this unit, we're going to build a simple MCP server and client using Gradio and the HuggingFace hub. In the next unit, we'll build a more complex server that tackles a real-world use case.

## What You'll Learn

In this unit, you will:

- Create an MCP Server using Gradio's built-in MCP support
- Build a sentiment analysis tool that can be used by AI models
- Connect to the server using different client implementations:
  - A HuggingFace.js-based client
  - A SmolAgents-based client for Python
- Deploy your MCP Server to Hugging Face Spaces
- Test and debug the complete system

By the end of this unit, you'll have a working MCP application that demonstrates the power and flexibility of the protocol.

## Prerequisites

Before proceeding with this unit, make sure you:

- Have completed Unit 1 or have a basic understanding of MCP concepts
- Are comfortable with both Python and JavaScript/TypeScript
- Have a basic understanding of APIs and client-server architecture
- Have a development environment with:
  - Python 3.10+
  - Node.js 18+
  - A Hugging Face account (for deployment)

## Our End-to-End Project

We'll build a sentiment analysis application that consists of three main parts: the server, the client, and the deployment.

![sentiment analysis application](https://huggingface.co/datasets/mcp-course/images/resolve/main/unit2/1.png)

### Server Side

- Uses Gradio to create a web interface and MCP server via `gr.Interface`
- Implements a sentiment analysis tool using TextBlob
- Exposes the tool through both HTTP and MCP protocols

### Client Side

- Implements a HuggingFace.js client
- Or, creates a smolagents Python client
- Demonstrates how to use the same server with different client implementations

### Deployment

- Deploys the server to Hugging Face Spaces
- Configures the clients to work with the deployed server

## Let's Get Started!

Are you ready to build your first end-to-end MCP application? Let's begin by setting up the development environment and creating our Gradio MCP server.

<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit2/introduction.mdx" />

### Local Tiny Agents with AMD NPU and iGPU Acceleration
https://huggingface.co/learn/mcp-course/unit2/lemonade-server.md

# Local Tiny Agents with AMD NPU and iGPU Acceleration

In this section, we'll show you how to accelerate our end-to-end Tiny Agents application using AMD Neural Processing Unit (NPU) and integrated GPU (iGPU). We then enhance our end-to-end application by providing it with access to local files and creating an assistant to handle sensitive information locally, ensuring maximum privacy.

To enable this, we'll use Lemonade Server, a tool for running models locally with NPU and iGPU acceleration.

## Setup
### Setup Lemonade Server

You can install Lemonade Server on both Windows and Linux. Additional documentation can be found at [lemonade-server.ai](https://lemonade-server.ai/). 

<hfoptions id="local-models">
<hfoption id="windows">

To install Lemonade Server on Windows, simply download and run the latest installer [here](https://github.com/lemonade-sdk/lemonade/releases/latest/download/Lemonade_Server_Installer.exe).

Lemonade Server supports CPU inference across all platforms and engines on Windows x86/x64. GPU acceleration is enabled via the llamacpp engine using Vulkan, with a focus on AMD Ryzen™ AI 7000/8000/300 series and AMD Radeon™ 7000/9000 series. For NPU acceleration, the ONNX Runtime GenAI (OGA) engine enables support for AMD Ryzen™ AI 300 series devices.

Once you have installed Lemonade Server, you can launch it by clicking the `Lemonade` icon added to the Desktop.

</hfoption>
<hfoption id="linux">

To install Lemonade on Linux, first create and activate a venv:

> [!TIP]
> If you don't have `uv` installed, you can install it following the instructions [here](https://docs.astral.sh/uv/getting-started/installation/).

```bash
uv venv --python 3.11
source .venv/bin/activate
```

Then, install the `lemonade-sdk` package:
```bash
uv pip install lemonade-sdk==8.0.3
```

Alternatively, you can install from source by cloning the repository and building the package:
```bash
git clone https://github.com/lemonade-sdk/lemonade-sdk.git
cd lemonade-sdk
pip install -e .
```

Once installed, you can launch Lemonade by running the following command:

```bash
lemonade-server-dev serve
```

Lemonade Server supports CPU inference on all platforms. GPU acceleration is enabled through the llamacpp engine (Vulkan), with a focus on AMD Ryzen™ AI 7000/8000/300 series and Radeon™ 7000/9000 series.

> [!TIP]
> *NPU acceleration is only available for AMD Ryzen™ AI 300 series on Windows.*

</hfoption>
</hfoptions>

### Tiny Agents and NPX Setup

This section of the course assumes you have already installed `npx` and `Tiny Agents`. If you haven't, please refer to the [Tiny Agents](https://huggingface.co/learn/mcp-course/en/unit2/tiny-agents) section of the course. Please make sure to use `huggingface_hub[mcp]==0.33.2`.

## Running your Tiny Agents application with AMD NPU and iGPU

To run your Tiny Agents application with AMD NPU and iGPU, simply point the agent configuration we created in the [previous section](https://huggingface.co/learn/mcp-course/en/unit2/tiny-agents) at the Lemonade Server, as shown below:

<hfoptions id="agent-config">
<hfoption id="windows">

```json
{
  "model": "Qwen3-8B-GGUF",
  "endpointUrl": "http://localhost:8000/api/",
  "servers": [
    {
      "type": "stdio",
      "command": "C:\\Program Files\\nodejs\\npx.cmd",
      "args": [
        "mcp-remote",
        "http://localhost:7860/gradio_api/mcp/sse"
      ]
    }
  ]
}
```

</hfoption>
<hfoption id="linux">

```json
{
  "model": "Qwen3-8B-GGUF",
  "endpointUrl": "http://localhost:8000/api/",
  "servers": [
    {
      "type": "stdio",
      "command": "npx",
      "args": [
        "mcp-remote",
        "http://localhost:7860/gradio_api/mcp/sse"
      ]
    }
  ]
}
```

</hfoption>
</hfoptions>

You can then choose from a variety of models to run on your local machine. For this example, we used the [`Qwen3-8B-GGUF`](https://huggingface.co/Qwen/Qwen3-8B-GGUF) model, which runs efficiently on AMD GPUs through Vulkan acceleration. You can find the list of supported models, and even import your own, by navigating to http://localhost:8000/#model-management.

## Creating an assistant to handle sensitive information locally

![Lemonade Server Interface](https://raw.githubusercontent.com/lemonade-sdk/assets/refs/heads/main/huggingface_course/hf_lemonade.png)

Now let's enhance our end-to-end application by enabling access to local files and introducing an assistant that processes sensitive information entirely on-device. Specifically, this assistant will help us evaluate candidate resumes and support decision-making in the hiring process—all while keeping the data private and secure.

To do this, we'll use the [Desktop Commander](https://github.com/wonderwhy-er/DesktopCommanderMCP) MCP server, which allows you to run commands on your local machine and provides comprehensive file system access, terminal control, and code editing capabilities.

Let's setup a project with a basic Tiny Agent.

```bash
mkdir file-assistant
cd file-assistant
```

Let's then create a new `agent.json` file in the `file-assistant` folder.

<hfoptions id="agent-file-config">
<hfoption id="windows">

```json
{
  "model": "user.jan-nano",
  "endpointUrl": "http://localhost:8000/api/",
  "servers": [
    {
      "type": "stdio",
      "command": "C:\\Program Files\\nodejs\\npx.cmd",
      "args": [
        "-y",
        "@wonderwhy-er/desktop-commander"
      ]
    }
  ]
}
```

</hfoption>
<hfoption id="linux">

```json
{
  "model": "user.jan-nano",
  "endpointUrl": "http://localhost:8000/api/",
  "servers": [
    {
      "type": "stdio",
      "command": "npx",
      "args": [
        "-y",
        "@wonderwhy-er/desktop-commander"
      ]
    }
  ]
}
```

</hfoption>
</hfoptions>

Finally, we have to download the `Jan Nano` model. You can do this by navigating to http://localhost:8000/#model-management, clicking on `Add a Model` and providing the following information:

```
Model Name: user.jan-nano
Checkpoint: Menlo/Jan-nano-gguf:jan-nano-4b-Q4_0.gguf
Recipe: llamacpp
```

![Custom Model](https://raw.githubusercontent.com/lemonade-sdk/assets/refs/heads/main/huggingface_course/custom_model.png)

All done! Now let's give it a try.

### Taking it for a spin

![recording](https://raw.githubusercontent.com/lemonade-sdk/assets/refs/heads/main/huggingface_course/recording.gif)

Our goal is to create an assistant that can help us handle sensitive information locally. To do this, we'll first create a job description file for our assistant to work with.

Create a file called `job_description.md` in the `file-assistant` folder.

```md
# Senior Food Technology Engineer

## About the Role
We're seeking a culinary innovator to transform cooking processes into precise algorithms and AI systems.

## What You'll Do
- Convert cooking instructions into measurable algorithms
- Develop AI-powered kitchen tools
- Create food quality assessment systems
- Build recipe-following AI models

## Requirements
- MS in Computer Science (food-related thesis preferred)
- Python and PyTorch expertise
- Proven experience combining food science with ML
- Strong communication skills using culinary metaphors

## Perks
- Access to experimental kitchen
- Continuous taste-testing opportunities
- Collaborative tech-foodie team environment

*Note: Must attend conferences and publish on algorithmic cooking optimization.*

```

Now, let's create a `candidates` folder inside the `file-assistant` folder and add a sample resume file for our assistant to work with.

```bash
mkdir candidates
touch candidates/john_resume.md
```

Add the following sample resume or include your own.

```md
# John Doe

**Contact Information**
- Email: email@example.com
- Phone: (+1) 123-456-7890
- Location: 1234 Abc Street, Example, EX 01234
- GitHub: github.com/example
- LinkedIn: linkedin.com/in/example
- Website: example.com

## Experience

**Machine Learning Engineer Intern** | Slow Feet Technology | Jul 2021 - Present
- Developed food-agnostic formulation for cross-ingredient meal cooking
- Created competitive cream of mushroom soup recipe, published in NeurIPS 2099
- Built specialized pan for meal cooking research

**Research Intern** | Paddling University | Aug 2020 - Present
- Designed efficient mapo tofu quality estimation method using thermometer
- Proposed fast stir frying algorithm for tofu cooking, published in CVPR 2077
- Outperformed SOTA methods with improved efficiency

**Research Assistant** | Huangdu Institute of Technology | Mar 2020 - Jun 2020
- Developed novel framework using spoon and chopsticks for eating mapo tofu
- Designed tofu filtering strategy inspired by beans grinding method
- Created evaluation criteria for eating plan novelty and diversity

**Research Intern** | Paddling University | Jul 2018 - Aug 2018
- Designed dual sandwiches using traditional burger ingredients
- Utilized structure duality to boost cooking speed for shared ingredients
- Outperformed baselines on QWE'15 and ASDF'14 datasets

## Education

**M.S. in Computer Science** | University of Charles River | Sep 2021 - Jan 2023
- Location: Boston, MA

**B.Eng. in Software Engineering** | Huangdu Institute of Technology | Sep 2016 - Jul 2020
- Location: Shanghai, China

## Skills

**Programming Languages:** Python, JavaScript/TypeScript, HTML/CSS, Java
**Tools and Frameworks:** Git, PyTorch, Keras, scikit-learn, Linux, Vue, React, Django, LaTeX
**Languages:** English (proficient), Indonesia (native)

## Awards and Honors

- **Gold**, International Collegiate Catching Fish Contest (ICCFC) | 2018
- **First Prize**, China National Scholarship for Outstanding Culinary Skills | 2017, 2018

## Publications

**Eating is All You Need** | NeurIPS 2099
- Authors: Haha Ha, San Zhang

**You Only Cook Once: Unified, Real-Time Mapo Tofu Recipe** | CVPR 2077 (Best Paper Honorable Mention)
- Authors: Haha Ha, San Zhang, Si Li, Wu Wang
```

We can then run the agent with the following command:

```bash
tiny-agents run agent.json
```

You should see the following output:

```
Agent loaded with 18 tools:
 • get_config
 • set_config_value
 • read_file
 • read_multiple_files
 • write_file
 • create_directory
 • list_directory
 • move_file
 • search_files
 • search_code
 • get_file_info
 • edit_block
 • execute_command
 • read_output
 • force_terminate
 • list_sessions
 • list_processes
 • kill_process
 »
 ```

Now let's provide the assistant with some information to get started.

```
» Read the contents of C:\Users\your_username\file-assistant\job_description.md
```

You should see an output similar to the following:

```
<Tool iNtxGmOuXHqZVBWmKnfxsc61xsJbsoAM>read_file {"path":"C:\\Users\\your_username\\file-assistant\\job_description.md","length":23}

Tool iNtxGmOuXHqZVBWmKnfxsc61xsJbsoAM
[Reading 23 lines from start]

(...)

The job description for the Senior Food Technology Engineer position emphasizes the need for a candidate who can bridge the gap between food science and artificial intelligence (...). Candidates are also expected to attend conferences and publish research on algorithmic cooking optimization.
```

> [!TIP]
> We are using the default system prompt, which may cause the assistant to call some tools multiple times. To create a more assertive assistant, you can provide a custom `PROMPT.md` file in the same directory as your `agent.json`.

Great! Now let's read the candidate's resume.

```
» Inside the same folder you can find a candidates folder. Check for john_resume.md and let me know if he is a good fit for the job.
```

You should see an output similar to the following:

```
<Tool ll2oWo73YeGIft5VbOIpF9GNf0kevjEy>read_file {"path":"C:\\Users\\your_username\\file-assistant\\candidates\\john_resume.md"}

Tool ll2oWo73YeGIft5VbOIpF9GNf0kevjEy
[Reading 58 lines from start]

(...)
John Doe is a **strong fit** for the Senior Food Technology Engineer role. His technical expertise in AI and machine learning, combined with his experience in food-related research and publications, makes him an excellent candidate. He also has the soft skills and cultural fit needed to thrive in a collaborative, innovative environment.
```

Amazing! Now we can move forward with inviting the candidate to the interview.

```
» Create a file called "invitation.md" in the "file-assistant" folder and write a short invitation to John to come in for an interview.
```

You should see content similar to the following being written to the `invitation.md` file:

```markdown
# Interview Invitation

Dear John,

We would like to invite you for an interview for the Senior Food Technology Engineer position. The interview will be held on [insert date and time] at [insert location or virtual meeting details].

Please confirm your availability and let us know if you need any additional information.

Best regards,
[Your Name]
[Your Contact Information]
```

Woohoo! We successfully created an assistant that can help us handle sensitive information locally.

### Exploring other models and acceleration options

In the example above, the Jan-Nano model leveraged Vulkan acceleration for efficient local LLM inference on AMD GPUs. You can also try out other models and acceleration options by navigating to http://localhost:8000/#model-management or checking the [models documentation](https://lemonade-server.ai/docs/server/server_models/).

For Windows applications that require a concise context and would benefit from NPU + iGPU acceleration, you can try the Hybrid models available with Lemonade Server — optimized for AMD Ryzen AI 300 series PCs. Models such as `Llama-xLAM-2-8b-fc-r-Hybrid` are specifically fine-tuned for tool-calling and deliver fast, responsive performance!

## Conclusion

In this unit, we've shown how to accelerate our end-to-end Tiny Agents application with AMD NPU and iGPU. We've also shown how to create an assistant to handle sensitive information locally.

Now that you've learned how to leverage Lemonade Server for local model acceleration and privacy-focused applications, you can explore more examples and features in the [Lemonade GitHub repository](https://github.com/lemonade-sdk/lemonade). The repository contains additional documentation, example implementations, and is actively maintained by the community. 


<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit2/lemonade-server.mdx" />

### Gradio as an MCP Client
https://huggingface.co/learn/mcp-course/unit2/gradio-client.md

# Gradio as an MCP Client

In the previous section, we explored how to create an MCP Server using Gradio and connect to it using an MCP Client. In this section, we're going to explore how to use Gradio as an MCP Client to connect to an MCP Server.

> [!TIP]
> Gradio is best suited to the creation of UI clients and MCP servers, but it is also possible to use it as an MCP Client and expose that as a UI.

We'll connect to an MCP server similar to the one we created in the previous section but with additional features, and use it to answer questions.

## MCP Client in Gradio

### Connect to an example MCP Server

Let's connect to an example MCP Server that is already running on Hugging Face. We'll use [this one](https://huggingface.co/spaces/abidlabs/mcp-tool-http) for this example. It's a space that contains a collection of MCP tools.

```python
from smolagents.mcp_client import MCPClient

with MCPClient(
    {"url": "https://abidlabs-mcp-tool-http.hf.space/gradio_api/mcp/sse", "transport": "sse",}
) as tools:
    # Tools from the remote server are available
    print("\n".join(f"{t.name}: {t.description}" for t in tools))

```

<details>
<summary>Output</summary>
<pre>
<code>
prime_factors: Compute the prime factorization of a positive integer.
generate_cheetah_image: Generate a cheetah image.
image_orientation: Returns whether image is portrait or landscape.
sepia: Apply a sepia filter to the input image.
</code>
</pre>
</details>

### Connect to the MCP Server from Gradio

Great, you've connected to an example MCP Server. Now, you need to use it in an example application.

First, we need to install the `smolagents`, Gradio, and MCP client libraries, if we haven't already:

```bash
pip install "smolagents[mcp]" "gradio[mcp]" mcp fastmcp
```

Now, we can import the necessary libraries and create a simple Gradio interface that uses the MCP Client to connect to the MCP Server.


```python
import gradio as gr
import os

from mcp import StdioServerParameters
from smolagents import InferenceClientModel, CodeAgent, ToolCollection, MCPClient
```

Next, we'll connect to the MCP Server and get the tools that we can use to answer questions.

```python
mcp_client = MCPClient(
    {"url": "https://abidlabs-mcp-tool-http.hf.space/gradio_api/mcp/sse", "transport": "sse",} # This is the URL of the MCP Server we're connecting to
)
tools = mcp_client.get_tools()
```

Now that we have the tools, we can create a simple agent that uses them to answer questions. We'll just use a simple `InferenceClientModel` and the default model from `smolagents` for now.

It is important to pass your API key to the `InferenceClientModel`. You can create an access token from your Hugging Face account ([see here](https://huggingface.co/docs/hub/en/security-tokens)) and expose it via the `HF_TOKEN` environment variable.

```python
model = InferenceClientModel(token=os.getenv("HF_TOKEN"))
agent = CodeAgent(tools=[*tools], model=model)
```

Now, we can create a simple Gradio interface that uses the agent to answer questions.

```python
demo = gr.ChatInterface(
    fn=lambda message, history: str(agent.run(message)),
    type="messages",
    examples=["Prime factorization of 68"],
    title="Agent with MCP Tools",
    description="This is a simple agent that uses MCP tools to answer questions."
)

demo.launch()
```

And that's it! We've created a simple Gradio interface that uses the MCP Client to connect to the MCP Server and answer questions.

<iframe
	src="https://mcp-course-unit2-gradio-client.hf.space"
	frameborder="0"
	width="850"
	height="450"
></iframe>


## Complete Example

Here's the complete example of the usage of an MCP Client in Gradio:

```python
import gradio as gr
import os

from smolagents import InferenceClientModel, CodeAgent, MCPClient


try:
    mcp_client = MCPClient(
        {"url": "https://abidlabs-mcp-tool-http.hf.space/gradio_api/mcp/sse", "transport": "sse",}
    )
    tools = mcp_client.get_tools()

    model = InferenceClientModel(token=os.getenv("HF_TOKEN"))
    agent = CodeAgent(tools=[*tools], model=model, additional_authorized_imports=["json", "ast", "urllib", "base64"])

    demo = gr.ChatInterface(
        fn=lambda message, history: str(agent.run(message)),
        type="messages",
        examples=["Analyze the sentiment of the following text 'This is awesome'"],
        title="Agent with MCP Tools",
        description="This is a simple agent that uses MCP tools to answer questions.",
    )

    demo.launch()
finally:
    mcp_client.disconnect()
```

You'll notice that we're closing the MCP Client in the `finally` block. This is important because the MCP Client is a long-lived object that needs to be closed when the program exits.
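To make the cleanup guarantee concrete, here is a sketch with a dummy client standing in for `MCPClient` (a hypothetical class; it only models the connect/disconnect lifecycle):

```python
# Sketch of why the try/finally pattern matters: the client holds an open
# connection that must be released even if the app raises mid-run.
class DummyClient:
    def __init__(self):
        self.connected = True

    def disconnect(self):
        self.connected = False

    # Context-manager support, mirroring `with MCPClient(...) as ...:`
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.disconnect()

client = DummyClient()
try:
    raise RuntimeError("app crashed")   # simulate a failure while serving
except RuntimeError:
    pass
finally:
    client.disconnect()                 # cleanup still runs

assert client.connected is False

# The context-manager form gives the same guarantee with less ceremony.
with DummyClient() as c:
    pass
assert c.connected is False
```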

## Deploying to Hugging Face Spaces

To make your server available to others, you can deploy it to Hugging Face Spaces, just like we did in the previous section.
To deploy your Gradio MCP client to Hugging Face Spaces:

1. Create a new Space on Hugging Face:
   - Go to huggingface.co/spaces
   - Click "Create new Space"
   - Choose "Gradio" as the SDK
   - Name your space (e.g., "mcp-client")

2. Update MCP Server URL in the code:

```python
mcp_client = MCPClient(
    {"url": "https://abidlabs-mcp-tool-http.hf.space/gradio_api/mcp/sse", "transport": "sse"}
)
```

3. Create a `requirements.txt` file:
```txt
gradio[mcp]
smolagents[mcp]
```

4. Push your code to the Space:
```bash
git init
git add app.py requirements.txt
git commit -m "Initial commit"
git remote add origin https://huggingface.co/spaces/YOUR_USERNAME/mcp-client
git push -u origin main
```

Note: when adding the remote origin, refer to [password-git-deprecation](https://huggingface.co/blog/password-git-deprecation) for authenticating with an access token.

## Conclusion

In this section, we've explored how to use Gradio as an MCP Client to connect to an MCP Server. We've also seen how to deploy the MCP Client in Hugging Face Spaces.




<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit2/gradio-client.mdx" />

### Building Tiny Agents with MCP and the Hugging Face Hub
https://huggingface.co/learn/mcp-course/unit2/tiny-agents.md

# Building Tiny Agents with MCP and the Hugging Face Hub

Now that we've built MCP servers in Gradio and learned about creating MCP clients, let's complete our end-to-end application by building an agent that can seamlessly interact with our sentiment analysis tool. This section builds on the project [Tiny Agents](https://huggingface.co/blog/tiny-agents), which demonstrates a super simple way of deploying MCP clients that can connect to services like our Gradio sentiment analysis server.

In this final exercise of Unit 2, we will walk you through how to implement both TypeScript (JS) and Python MCP clients that can communicate with any MCP server, including the Gradio-based sentiment analysis server we built in the previous sections. This completes our end-to-end MCP application flow: from building a Gradio MCP server exposing a sentiment analysis tool, to creating a flexible agent that can use this tool alongside other capabilities.

![meme](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/tiny-agents/thumbnail.jpg)
<figcaption>Image credit https://x.com/adamdotdev</figcaption>

## Installation

Let's install the necessary packages to build our Tiny Agents.

> [!TIP]
> Some MCP Clients, notably Claude Desktop, do not yet support SSE-based MCP Servers. In those cases, you can use a tool such as [mcp-remote](https://github.com/geelen/mcp-remote). First install Node.js. Then, add the following to your own MCP Client config:

Tiny Agents can run MCP servers from a command-line environment. To do this, we need `npm` installed so that servers can be launched with `npx`. **We'll need these for both the Python and JavaScript clients.**

`npx` ships with `npm` (version 5.2 and later), so no separate install is usually needed. If you don't have `npm` installed, check out the [npm documentation](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm).

```bash
# verify that npx is available (it ships with npm >= 5.2)
npx --version
```

Then, we need to install the `mcp-remote` package.

```bash
npm i mcp-remote
```

<hfoptions id="tiny-agents">
<hfoption id="typescript">

For TypeScript/JavaScript, we need to install the `tiny-agents` package.

```bash
npm install @huggingface/tiny-agents
```

</hfoption>
<hfoption id="python">

For Python, you need to install the latest version of `huggingface_hub` with the `mcp` extra to get all the necessary components.

```bash
pip install "huggingface_hub[mcp]>=0.32.0"
```

</hfoption>
</hfoptions>

## Tiny Agents MCP Client in the Command Line

Let's repeat the example from [Unit 1](https://huggingface.co/learn/mcp-course/unit1/mcp-clients#tiny-agents-clients) to create a basic Tiny Agent. Tiny Agents can create MCP clients from the command line based on JSON configuration files.

<hfoptions id="tiny-agents">
<hfoption id="typescript">

Let's set up a project with a basic Tiny Agent.

```bash
mkdir my-agent
touch my-agent/agent.json
cd my-agent
```

The JSON file will look like this:

```json
{
	"model": "Qwen/Qwen2.5-72B-Instruct",
	"provider": "nebius",
	"servers": [
		{
			"type": "stdio",
			"command": "npx",
			"args": [
				"mcp-remote",
				"http://localhost:7860/gradio_api/mcp/sse"
			]
		}
	]
}
```

We can then run the agent with the following command:

```bash
npx @huggingface/tiny-agents run agent.json
```

</hfoption>
<hfoption id="python">

Let's set up a project with a basic Tiny Agent.

```bash
mkdir my-agent
touch my-agent/agent.json
cd my-agent
```

The JSON file will look like this:

```json
{
	"model": "Qwen/Qwen2.5-72B-Instruct",
	"provider": "nebius",
	"servers": [
		{
			"type": "stdio",
			"command": "npx",
			"args": [
				"mcp-remote", 
				"http://localhost:7860/gradio_api/mcp/sse"
			]
		}
	]
}
```

We can then run the agent with the following command:

```bash
tiny-agents run agent.json
```

</hfoption>
</hfoptions>

Here we have a basic Tiny Agent that can connect to our Gradio MCP server. It includes a model, provider, and a server configuration.

| Field | Description |
|-------|-------------|
| `model` | The open source model to use for the agent |
| `provider` | The inference provider to use for the agent |
| `servers` | The servers to use for the agent. We'll use the `mcp-remote` server for our Gradio MCP server. |
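To make these fields concrete, here's a small stdlib-only sketch (not part of the Tiny Agents packages) that validates an `agent.json` of the provider-based shape shown above before handing it to the agent:

```python
import json

# Fields from the table above. The local-model variant swaps "provider"
# for "endpointUrl", so this check only covers the provider-based shape.
REQUIRED_FIELDS = {"model", "provider", "servers"}

def load_agent_config(text: str) -> dict:
    """Parse an agent.json string and check the fields described above."""
    config = json.loads(text)
    missing = REQUIRED_FIELDS - config.keys()
    if missing:
        raise ValueError(f"agent.json is missing: {sorted(missing)}")
    for server in config["servers"]:
        # stdio servers are launched as subprocesses, so they need a command.
        if server.get("type") == "stdio" and "command" not in server:
            raise ValueError("stdio servers need a 'command' to launch")
    return config

config = load_agent_config("""
{
    "model": "Qwen/Qwen2.5-72B-Instruct",
    "provider": "nebius",
    "servers": [
        {"type": "stdio", "command": "npx",
         "args": ["mcp-remote", "http://localhost:7860/gradio_api/mcp/sse"]}
    ]
}
""")
print(config["model"])  # Qwen/Qwen2.5-72B-Instruct
```

Failing fast on a malformed config is easier to debug than an agent that silently starts with no servers.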

> [!TIP]
> We could also use an open source model running locally with Tiny Agents. If we start a local, OpenAI-compatible inference server, we can point the agent at it with a configuration like this:
>
> ```json
> {
> 	"model": "Qwen/Qwen3-32B",
> 	"endpointUrl": "http://localhost:1234/v1",
> 	"servers": [
> 		{
> 			"type": "stdio",
> 			"command": "npx",
> 			"args": [
> 				"mcp-remote",
> 				"http://localhost:1234/v1/mcp/sse"
> 			]
> 		}
> 	]
> }
> ```
>
>
> Here we have a Tiny Agent that can connect to a local model. It includes a model, endpoint URL (`http://localhost:1234/v1`), and a server configuration. The endpoint should be an OpenAI-compatible endpoint.

## Custom Tiny Agents MCP Client

Now that we understand both Tiny Agents and Gradio MCP servers, let's see how they work together! The beauty of MCP is that it provides a standardized way for agents to interact with any MCP-compatible server, including our Gradio-based sentiment analysis server from earlier sections.

### Using the Gradio Server with Tiny Agents

To connect our Tiny Agent to the Gradio sentiment analysis server we built earlier in this unit, we just need to add it to our list of servers. Here's how we can modify our agent configuration:

<hfoptions id="tiny-agents">
<hfoption id="typescript">

```ts
const agent = new Agent({
    provider: process.env.PROVIDER ?? "nebius",
    model: process.env.MODEL_ID ?? "Qwen/Qwen2.5-72B-Instruct",
    apiKey: process.env.HF_TOKEN,
    servers: [
        // ... existing servers ...
        {
            command: "npx",
            args: [
                "mcp-remote",
                "http://localhost:7860/gradio_api/mcp/sse"  // Your Gradio MCP server
            ]
        }
    ],
});
```

</hfoption>
<hfoption id="python">

```python
import os

from huggingface_hub import Agent

agent = Agent(
    model="Qwen/Qwen2.5-72B-Instruct",
    provider="nebius",
    servers=[
        {
            "command": "npx",
            "args": [
                "mcp-remote",
                "http://localhost:7860/gradio_api/mcp/sse"  # Your Gradio MCP server
            ]
        }
    ],
)
```

</hfoption>
</hfoptions>

Now our agent can use the sentiment analysis tool alongside other tools! For example, it could:
1. Read text from a file using the filesystem server
2. Analyze its sentiment using our Gradio server
3. Write the results back to a file

### Deployment Considerations

When deploying your Gradio MCP server to Hugging Face Spaces, you'll need to update the server URL in your agent configuration to point to your deployed space:


```json
{
    "command": "npx",
    "args": [
        "mcp-remote",
        "https://YOUR_USERNAME-mcp-sentiment.hf.space/gradio_api/mcp/sse"
    ]
}
```


This allows your agent to use the sentiment analysis tool from anywhere, not just locally!
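The URL pattern above can be captured in a small helper. This is just a sketch; `YOUR_USERNAME` and the Space name are placeholders from the snippet above, not a real deployment:

```python
def space_mcp_url(username: str, space_name: str = "mcp-sentiment") -> str:
    """Build the SSE endpoint for a deployed Space, following the pattern above."""
    return f"https://{username}-{space_name}.hf.space/gradio_api/mcp/sse"

# Using the placeholder from the config snippet above:
print(space_mcp_url("YOUR_USERNAME"))
# https://YOUR_USERNAME-mcp-sentiment.hf.space/gradio_api/mcp/sse
```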

## Conclusion: Our Complete End-to-End MCP Application

In this unit, we've gone from understanding MCP basics to building a complete end-to-end application:

1. We created a Gradio MCP server that exposes a sentiment analysis tool
2. We learned how to connect to this server using MCP clients
3. We built a tiny agent in TypeScript and Python that can interact with our tool

This demonstrates the power of the Model Context Protocol - we can create specialized tools using frameworks we're familiar with (like Gradio), expose them through a standardized interface (MCP), and then have agents seamlessly use these tools alongside other capabilities.

The complete flow we've built allows an agent to:
- Connect to multiple tool providers
- Dynamically discover available tools
- Use our custom sentiment analysis tool
- Combine it with other capabilities like file system access and web browsing

This modular approach is what makes MCP so powerful for building flexible AI applications.

## Next Steps

- Check out the Tiny Agents blog posts in [Python](https://huggingface.co/blog/python-tiny-agents) and [TypeScript](https://huggingface.co/blog/tiny-agents)
- Review the [Tiny Agents documentation](https://huggingface.co/docs/huggingface.js/main/en/tiny-agents/README)
- Build something with Tiny Agents!


<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit2/tiny-agents.mdx" />

### The Communication Protocol
https://huggingface.co/learn/mcp-course/unit1/communication-protocol.md

# The Communication Protocol

MCP defines a standardized communication protocol that enables Clients and Servers to exchange messages in a consistent, predictable way. This standardization is critical for interoperability across the community. In this section, we'll explore the protocol structure and transport mechanisms used in MCP.

> [!WARNING]
> We're getting down to the nitty-gritty details of the MCP protocol. You won't need to know all of this to build with MCP, but it's good to know that it exists and how it works.

## JSON-RPC: The Foundation

At its core, MCP uses **JSON-RPC 2.0** as the message format for all communication between Clients and Servers. JSON-RPC is a lightweight remote procedure call protocol encoded in JSON, which makes it:

- Human-readable and easy to debug
- Language-agnostic, supporting implementation in any programming environment
- Well-established, with clear specifications and widespread adoption

![message types](https://huggingface.co/datasets/mcp-course/images/resolve/main/unit1/5.png)

The protocol defines three types of messages:

### 1. Requests

Sent from Client to Server to initiate an operation. A Request message includes:
- A unique identifier (`id`)
- The method name to invoke (e.g., `tools/call`)
- Parameters for the method (if any)

Example Request:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "weather",
    "arguments": {
      "location": "San Francisco"
    }
  }
}
```

### 2. Responses

Sent from Server to Client in reply to a Request. A Response message includes:
- The same `id` as the corresponding Request
- Either a `result` (for success) or an `error` (for failure)

Example Success Response:
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "temperature": 62,
    "conditions": "Partly cloudy"
  }
}
```

Example Error Response:
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "error": {
    "code": -32602,
    "message": "Invalid location parameter"
  }
}
```

### 3. Notifications

One-way messages that don't require a response. Typically sent from Server to Client to provide updates or notifications about events.

Example Notification:
```json
{
  "jsonrpc": "2.0",
  "method": "progress",
  "params": {
    "message": "Processing data...",
    "percent": 50
  }
}
```
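These three message types can be told apart purely by their fields, which is how a transport layer routes incoming messages. A minimal sketch (for intuition; not taken from any SDK):

```python
import json

def classify(message: dict) -> str:
    """Classify a JSON-RPC 2.0 message as request, response, or notification."""
    if "method" in message:
        # A method with an id expects a reply; without one it's fire-and-forget.
        return "request" if "id" in message else "notification"
    if "result" in message or "error" in message:
        return "response"
    raise ValueError("not a recognizable JSON-RPC message")

request = json.loads('{"jsonrpc": "2.0", "id": 1, "method": "tools/call", "params": {}}')
notification = json.loads('{"jsonrpc": "2.0", "method": "progress", "params": {"percent": 50}}')
response = json.loads('{"jsonrpc": "2.0", "id": 1, "result": {"temperature": 62}}')

print(classify(request), classify(notification), classify(response))
# request notification response
```

Note that a response carries no `method` of its own; it is matched back to its request solely through the shared `id`.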

## Transport Mechanisms

JSON-RPC defines the message format, but MCP also specifies how these messages are transported between Clients and Servers. Two primary transport mechanisms are supported:

### stdio (Standard Input/Output)

The stdio transport is used for local communication, where the Client and Server run on the same machine:

The Host application launches the Server as a subprocess and communicates with it by writing to its standard input (stdin) and reading from its standard output (stdout).

> [!TIP]
> **Use cases** for this transport are local tools like file system access or running local scripts.

The main **Advantages** of this transport are that it's simple, requires no network configuration, and is securely sandboxed by the operating system.
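To see the mechanics, here's a toy illustration of the stdio transport. The "server" is a hypothetical one-line echo process, not a real MCP server: the client launches it as a subprocess, writes a JSON-RPC request to its stdin, and reads the response from its stdout.

```python
import json
import subprocess
import sys

# A stand-in "server": reads one JSON-RPC request from stdin and answers
# with a result that reuses the request id (an echo, for illustration only).
CHILD = r"""
import json, sys
req = json.loads(sys.stdin.readline())
resp = {"jsonrpc": "2.0", "id": req["id"], "result": {"echo": req["params"]}}
sys.stdout.write(json.dumps(resp) + "\n")
"""

def call_over_stdio(method: str, params: dict) -> dict:
    """Launch the server subprocess and exchange one request/response pair."""
    proc = subprocess.Popen(
        [sys.executable, "-c", CHILD],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        text=True,
    )
    request = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    out, _ = proc.communicate(json.dumps(request) + "\n")
    return json.loads(out)

response = call_over_stdio("tools/call", {"name": "weather"})
print(response["id"])  # 1, matching the request id
```

A real Host keeps the subprocess alive for the whole session rather than spawning one per call, but the plumbing is the same: JSON messages over stdin/stdout.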

### HTTP + SSE (Server-Sent Events) / Streamable HTTP

The HTTP+SSE transport is used for remote communication, where the Client and Server might be on different machines:

Communication happens over HTTP, with the Server using Server-Sent Events (SSE) to push updates to the Client over a persistent connection.

> [!TIP]
> **Use cases** for this transport are connecting to remote APIs, cloud services, or shared resources.

The main **Advantages** of this transport are that it works across networks, enables integration with web services, and is compatible with serverless environments.

Recent updates to the MCP standard have introduced or refined "Streamable HTTP," which offers more flexibility by allowing servers to dynamically upgrade to SSE for streaming when needed, while maintaining compatibility with serverless environments.

## The Interaction Lifecycle

In the previous section, we discussed the lifecycle of a single interaction between a Client (💻) and a Server (🌐). Let's now look at the lifecycle of a complete interaction between a Client and a Server in the context of the MCP protocol.

The MCP protocol defines a structured interaction lifecycle between Clients and Servers:

### Initialization

The Client connects to the Server and sends its supported protocol version and capabilities, and the Server responds with its own supported protocol version and capabilities.

<table style="width: 100%;">
  <tr>
    <td style="background-color: lightgreen; text-align: center; padding: 10px; border: 1px solid #ccc; border-radius: 4px;">💻</td>
    <td style="text-align: center;">→<br>initialize</td>
    <td style="background-color: lightblue; text-align: center; padding: 10px; border: 1px solid #ccc; border-radius: 4px;">🌐</td>
  </tr>
  <tr>
    <td style="background-color: lightgreen; text-align: center; padding: 10px; border: 1px solid #ccc; border-radius: 4px;">💻</td>
    <td style="text-align: center;">←<br>response</td>
    <td style="background-color: lightblue; text-align: center; padding: 10px; border: 1px solid #ccc; border-radius: 4px;">🌐</td>
  </tr>
  <tr>
    <td style="background-color: lightgreen; text-align: center; padding: 10px; border: 1px solid #ccc; border-radius: 4px;">💻</td>
    <td style="text-align: center;">→<br>initialized</td>
    <td style="background-color: lightblue; text-align: center; padding: 10px; border: 1px solid #ccc; border-radius: 4px;">🌐</td>
  </tr>
</table>

The Client confirms the initialization is complete via a notification message.

### Discovery

The Client requests information about available capabilities and the Server responds with a list of available tools.

<table style="width: 100%;">
  <tr>
    <td style="background-color: lightgreen; text-align: center; padding: 10px; border: 1px solid #ccc; border-radius: 4px;">💻</td>
    <td style="text-align: center;">→<br>tools/list</td>
    <td style="background-color: lightblue; text-align: center; padding: 10px; border: 1px solid #ccc; border-radius: 4px;">🌐</td>
  </tr>
  <tr>
    <td style="background-color: lightgreen; text-align: center; padding: 10px; border: 1px solid #ccc; border-radius: 4px;">💻</td>
    <td style="text-align: center;">←<br>response</td>
    <td style="background-color: lightblue; text-align: center; padding: 10px; border: 1px solid #ccc; border-radius: 4px;">🌐</td>
  </tr>
</table>

This process could be repeated for each tool, resource, or prompt type.

### Execution

The Client invokes capabilities based on the Host's needs.

<table style="width: 100%;">
  <tr>
    <td style="background-color: lightgreen; text-align: center; padding: 10px; border: 1px solid #ccc; border-radius: 4px;">💻</td>
    <td style="text-align: center;">→<br>tools/call</td>
    <td style="background-color: lightblue; text-align: center; padding: 10px; border: 1px solid #ccc; border-radius: 4px;">🌐</td>
  </tr>
  <tr>
    <td style="background-color: lightgreen; text-align: center; padding: 10px; border: 1px solid #ccc; border-radius: 4px;">💻</td>
    <td style="text-align: center;">←<br>notification (optional progress)</td>
    <td style="background-color: lightblue; text-align: center; padding: 10px; border: 1px solid #ccc; border-radius: 4px;">🌐</td>
  </tr>
  <tr>
    <td style="background-color: lightgreen; text-align: center; padding: 10px; border: 1px solid #ccc; border-radius: 4px;">💻</td>
    <td style="text-align: center;">←<br>response</td>
    <td style="background-color: lightblue; text-align: center; padding: 10px; border: 1px solid #ccc; border-radius: 4px;">🌐</td>
  </tr>
</table>

### Termination

The connection is gracefully closed when no longer needed and the Server acknowledges the shutdown request.

<table style="width: 100%;">
  <tr>
    <td style="background-color: lightgreen; text-align: center; padding: 10px; border: 1px solid #ccc; border-radius: 4px;">💻</td>
    <td style="text-align: center;">→<br>shutdown</td>
    <td style="background-color: lightblue; text-align: center; padding: 10px; border: 1px solid #ccc; border-radius: 4px;">🌐</td>
  </tr>
  <tr>
    <td style="background-color: lightgreen; text-align: center; padding: 10px; border: 1px solid #ccc; border-radius: 4px;">💻</td>
    <td style="text-align: center;">←<br>response</td>
    <td style="background-color: lightblue; text-align: center; padding: 10px; border: 1px solid #ccc; border-radius: 4px;">🌐</td>
  </tr>
  <tr>
    <td style="background-color: lightgreen; text-align: center; padding: 10px; border: 1px solid #ccc; border-radius: 4px;">💻</td>
    <td style="text-align: center;">→<br>exit</td>
    <td style="background-color: lightblue; text-align: center; padding: 10px; border: 1px solid #ccc; border-radius: 4px;">🌐</td>
  </tr>
</table>

The Client sends the final exit message to complete the termination.
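The ordering constraints above can be expressed as a subsequence check over the method names of a session transcript. This is a sketch for intuition (the phase names follow the diagrams above), not a feature of any SDK:

```python
def follows_lifecycle(methods: list[str]) -> bool:
    """Check that the required lifecycle messages appear in order.
    Other messages (e.g. repeated tool calls) may appear in between."""
    required = ["initialize", "initialized", "tools/list",
                "tools/call", "shutdown", "exit"]
    remaining = iter(methods)  # shared iterator enforces ordering
    return all(any(m == want for m in remaining) for want in required)

session = ["initialize", "initialized", "tools/list",
           "tools/call", "tools/call", "shutdown", "exit"]
print(follows_lifecycle(session))                       # True
print(follows_lifecycle(["tools/call", "initialize"]))  # False
```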

## Protocol Evolution

The MCP protocol is designed to be extensible and adaptable. The initialization phase includes version negotiation, allowing for backward compatibility as the protocol evolves. Additionally, capability discovery enables Clients to adapt to the specific features each Server offers, enabling a mix of basic and advanced Servers in the same ecosystem.


<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit1/communication-protocol.mdx" />

### MCP SDK
https://huggingface.co/learn/mcp-course/unit1/sdk.md

# MCP SDK

The Model Context Protocol provides official SDKs for JavaScript, Python, and several other languages. This makes it easy to implement MCP clients and servers in your applications. These SDKs handle the low-level protocol details, allowing you to focus on building your application's capabilities.

## SDK Overview

The JavaScript and Python SDKs provide similar core functionality, following the MCP protocol specification we discussed earlier. They handle:

- Protocol-level communication
- Capability registration and discovery
- Message serialization/deserialization
- Connection management
- Error handling

## Core Primitives Implementation

Let's explore how to implement each of the core primitives (Tools, Resources, and Prompts) using both SDKs.

<hfoptions id="server-implementation">
<hfoption id="python">

<Youtube id="exzrb5QNUis" />

```python
from mcp.server.fastmcp import FastMCP

# Create an MCP server
mcp = FastMCP("Weather Service")

# Tool implementation
@mcp.tool()
def get_weather(location: str) -> str:
    """Get the current weather for a specified location."""
    return f"Weather in {location}: Sunny, 72°F"

# Resource implementation
@mcp.resource("weather://{location}")
def weather_resource(location: str) -> str:
    """Provide weather data as a resource."""
    return f"Weather data for {location}: Sunny, 72°F"

# Prompt implementation
@mcp.prompt()
def weather_report(location: str) -> str:
    """Create a weather report prompt."""
    return f"""You are a weather reporter. Weather report for {location}?"""


# Run the server
if __name__ == "__main__":
    mcp.run()
```

Once you have your server implemented, you can start it by running the server script.

```bash
mcp dev server.py
```

</hfoption>
<hfoption id="javascript">

```javascript
// index.mjs
import {
  McpServer,
  ResourceTemplate,
} from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Create an MCP server
const server = new McpServer({
  name: "Weather Service",
  version: "1.0.0",
});

// Tool implementation
server.tool("get_weather", { location: z.string() }, async ({ location }) => ({
  content: [
    {
      type: "text",
      text: `Weather in ${location}: Sunny, 72°F`,
    },
  ],
}));

// Resource implementation
server.resource(
  "weather",
  new ResourceTemplate("weather://{location}", { list: undefined }),
  async (uri, { location }) => ({
    contents: [
      {
        uri: uri.href,
        text: `Weather data for ${location}: Sunny, 72°F`,
      },
    ],
  })
);

// Prompt implementation
server.prompt(
  "weather_report",
  { location: z.string() },
  async ({ location }) => ({
    messages: [
      {
        role: "assistant",
        content: {
          type: "text",
          text: "You are a weather reporter.",
        },
      },
      {
        role: "user",
        content: {
          type: "text",
          text: `Weather report for ${location}?`,
        },
      },
    ],
  })
);

// Run the server
const transport = new StdioServerTransport();
await server.connect(transport);
```

Once you have your server implemented, you can start it by running the server script.

```bash
npx @modelcontextprotocol/inspector node ./index.mjs
```

</hfoption>
</hfoptions>

This will start the MCP Inspector connected to your server and log the following output:

```bash
Starting MCP inspector...
⚙️ Proxy server listening on port 6277
Spawned stdio transport
Connected MCP client to backing server transport
Created web app transport
Set up MCP proxy
🔍 MCP Inspector is up and running at http://127.0.0.1:6274 🚀
```

You can then open the MCP Inspector at [http://127.0.0.1:6274](http://127.0.0.1:6274) to see the server's capabilities and interact with them.

You'll see the server's capabilities and the ability to call them via the UI.

![MCP Inspector](https://huggingface.co/datasets/mcp-course/images/resolve/main/unit1/6.png)

## MCP SDKs

MCP is designed to be language-agnostic, and there are official SDKs available for several popular programming languages:

| Language   | Repository                                                                                               | Maintainer(s)       | Status           |
| ---------- | -------------------------------------------------------------------------------------------------------- | ------------------- | ---------------- |
| TypeScript | [github.com/modelcontextprotocol/typescript-sdk](https://github.com/modelcontextprotocol/typescript-sdk) | Anthropic           | Active           |
| Python     | [github.com/modelcontextprotocol/python-sdk](https://github.com/modelcontextprotocol/python-sdk)         | Anthropic           | Active           |
| Java       | [github.com/modelcontextprotocol/java-sdk](https://github.com/modelcontextprotocol/java-sdk)             | Spring AI (VMware)  | Active           |
| Kotlin     | [github.com/modelcontextprotocol/kotlin-sdk](https://github.com/modelcontextprotocol/kotlin-sdk)         | JetBrains           | Active           |
| C#         | [github.com/modelcontextprotocol/csharp-sdk](https://github.com/modelcontextprotocol/csharp-sdk)         | Microsoft           | Active (Preview) |
| Swift      | [github.com/modelcontextprotocol/swift-sdk](https://github.com/modelcontextprotocol/swift-sdk)           | loopwork-ai         | Active           |
| Rust       | [github.com/modelcontextprotocol/rust-sdk](https://github.com/modelcontextprotocol/rust-sdk)             | Anthropic/Community | Active           |
| Dart       | [https://github.com/leehack/mcp_dart](https://github.com/leehack/mcp_dart)                               | Flutter Community   | Active           |

These SDKs provide language-specific abstractions that simplify working with the MCP protocol, allowing you to focus on implementing the core logic of your servers or clients rather than dealing with low-level protocol details.

## Next Steps

We've only scratched the surface of what you can do with the MCP but you've already got a basic server running. In fact, you've also connected to it using the MCP Client in the browser.

In the next section, we'll look at how to connect to your server from an LLM.


<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit1/sdk.mdx" />

### Hugging Face MCP Server
https://huggingface.co/learn/mcp-course/unit1/hf-mcp-server.md

# Hugging Face MCP Server

The Hugging Face MCP (Model Context Protocol) Server connects your MCP‑compatible AI assistant (for example VS Code, Cursor, Zed, or Claude Desktop) directly to the Hugging Face Hub. Once connected, your assistant can search and explore Hub resources and use community tools, all from within your editor, chat or CLI.

> [!TIP]
> The main advantage of the Hugging Face MCP Server is that it provides built-in tools for the Hub as well as community tools based on Gradio Spaces. As we start to build our own MCP servers, we can use the Hugging Face MCP Server as a reference.

## What you can do

- Search and explore Hub resources: models, datasets, Spaces, and papers.
- Run community tools via MCP‑compatible Gradio apps hosted on [Spaces](https://hf.co/spaces).
- Bring results back into your assistant with metadata, links, and context.

## Built-in tools

The server provides curated tools that work across supported clients:

- Models search and exploration (filter by task, library, downloads, likes)
- Datasets search and exploration (filter by tags, size, modality)
- Spaces semantic search (find apps by capability, e.g., TTS, ASR, OCR)
- Papers semantic search (discover relevant research on the Hub)

## Get started

1. Open your MCP settings: visit https://huggingface.co/settings/mcp while logged in.

2. Pick your client: select your MCP‑compatible client (for example VS Code, Cursor, Zed, Claude Desktop). The page shows client‑specific instructions and a ready‑to‑copy configuration snippet.

3. Paste and restart: copy the snippet into your client’s MCP configuration, save, and restart/reload the client. You should see “Hugging Face” (or similar) listed as a connected MCP server in your client.

> [!TIP]
> The settings page generates the exact configuration your client expects. Use it rather than writing config by hand.

![MCP Settings Example](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hf-mcp-settings.png)

## Using the server

After connecting, ask your assistant to use the Hugging Face tools. Example prompts:

- “Search Hugging Face models for Qwen 3 quantizations.”
- “Find a Space that can transcribe audio files.”
- “Show datasets about weather time‑series.”
- “Create a 1024 × 1024 image of a cat in Ghibli style.”

Your assistant will call MCP tools exposed by the Hugging Face MCP Server (including Spaces) and return results (titles, owners, downloads, links, and so on). You can then open the resource on the Hub or continue iterating in the same chat.

![HF MCP with Spaces in VS Code](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hf-mcp-vscode.png)

## Add community tools (Spaces)

You can extend your setup with MCP‑compatible Gradio Spaces built by the community:

- Explore Spaces with MCP support [here](https://huggingface.co/spaces?search=mcp).
- Add the relevant space in your MCP settings on Hugging Face [here](https://huggingface.co/settings/mcp).

Gradio MCP apps expose their functions as tools (with arguments and descriptions) so your assistant can call them directly. Restart or refresh your client so it picks up any new tools you add.

## Learn more

- Settings and client setup: https://huggingface.co/settings/mcp
- Changelog announcement: https://huggingface.co/changelog/hf-mcp-server
- HF MCP Server documentation: https://huggingface.co/docs/hub/en/hf-mcp-server
- Building your own MCP server with Gradio and Hub tools: https://huggingface.co/docs/hub/main/agents



<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit1/hf-mcp-server.mdx" />

### MCP Clients
https://huggingface.co/learn/mcp-course/unit1/mcp-clients.md

# MCP Clients

Now that we have a basic understanding of the Model Context Protocol, we can explore the essential role of MCP Clients in the MCP ecosystem.

In this section, you will:

* Understand what MCP Clients are and their role in the MCP architecture
* Learn about the key responsibilities of MCP Clients
* Explore the major MCP Client implementations
* Discover how to connect to the Hugging Face MCP Server and built-in tools
* See practical examples of MCP Client usage

> [!TIP]
> In this page we're going to show examples of how to set up MCP Clients in a few different ways using the JSON notation. For now, we will use *examples* like `path/to/server.py` to represent the path to the MCP Server. In the next unit, we'll implement this with real MCP Servers.  
>
> For now, focus on understanding the MCP Client notation. We'll implement the MCP Servers in the next unit.

## Understanding MCP Clients

MCP Clients are crucial components that act as the bridge between AI applications (Hosts) and external capabilities provided by MCP Servers. Think of the Host as your main application (like an AI assistant or IDE) and the Client as a specialized module within that Host responsible for handling MCP communications.

## User Interface Client

Let's start by exploring the user interface clients that are available for the MCP.

### Chat Interface Clients

- Claude Desktop (Anthropic)

### Interactive Development Clients

- VS Code extensions with MCP (e.g., Continue)
- Cursor IDE (built-in MCP client)
- Zed editor

These clients support connecting to multiple MCP servers and real-time tool invocation.

## Quick connect to Hugging Face MCP Server

Hugging Face provides a hosted MCP server with built-in tools for exploring models, datasets, Spaces, and papers.

1. Visit https://huggingface.co/settings/mcp while logged in.
2. Select your MCP-compatible client (e.g., VS Code, Cursor, Zed, Claude Desktop).
3. Copy the generated configuration snippet into your client's MCP config.
4. Restart or reload your client. You should see “Hugging Face” connected.

> [!TIP]
> Prefer the generated snippet over hand-written config; it's tailored per client.

## Configuring MCP Clients

Now that we've covered the core of the MCP protocol, let's look at how to configure your MCP servers and clients.

Effective deployment of MCP servers and clients requires proper configuration. 

> [!TIP]
> The MCP specification is still evolving, so the configuration methods are subject to change. We'll focus on the current best practices for configuration.

### MCP Configuration Files

MCP hosts use configuration files to manage server connections. These files define which servers are available and how to connect to them.

Fortunately, the configuration files are very simple, easy to understand, and consistent across major MCP hosts.

#### `mcp.json` Structure

The standard configuration file for MCP is named `mcp.json`. This file can be passed to applications like Claude Desktop, Cursor, or VS Code. Here's the basic structure:

```json
{
  "servers": [
    {
      "name": "Server Name",
      "transport": {
        "type": "stdio|sse",
        // Transport-specific configuration
      }
    }
  ]
}
```

In this example, we have a single server with a name and a transport type. The transport type is either `stdio` or `sse`.
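A host reading this file branches on the transport type: a `stdio` entry gives it a command to launch, while an `sse` entry gives it a URL to connect to. A minimal sketch of that dispatch (a hypothetical helper, not a real host implementation):

```python
def connection_info(server: dict):
    """Return what a host needs to reach this server, by transport type."""
    transport = server["transport"]
    if transport["type"] == "stdio":
        # Local server: the host spawns this command as a subprocess.
        return [transport["command"], *transport.get("args", [])]
    if transport["type"] == "sse":
        # Remote server: the host connects to this URL over HTTP+SSE.
        return transport["url"]
    raise ValueError(f"unknown transport type: {transport['type']}")

local = {"name": "File Explorer",
         "transport": {"type": "stdio", "command": "python",
                       "args": ["/path/to/file_explorer_server.py"]}}
remote = {"name": "Remote API Server",
          "transport": {"type": "sse", "url": "https://example.com/mcp-server"}}

print(connection_info(local))   # ['python', '/path/to/file_explorer_server.py']
print(connection_info(remote))  # https://example.com/mcp-server
```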

#### Configuration for stdio Transport

For local servers using stdio transport, the configuration includes the command and arguments to launch the server process:

```json
{
  "servers": [
    {
      "name": "File Explorer",
      "transport": {
        "type": "stdio",
        "command": "python",
        "args": ["/path/to/file_explorer_server.py"] // This is an example, we'll use a real server in the next unit
      }
    }
  ]
}
```

Here, we have a server called "File Explorer" that is a local script.

#### Configuration for HTTP+SSE Transport

For remote servers using HTTP+SSE transport, the configuration includes the server URL:

```json
{
  "servers": [
    {
      "name": "Remote API Server",
      "transport": {
        "type": "sse",
        "url": "https://example.com/mcp-server"
      }
    }
  ]
}
```

#### Environment Variables in Configuration

Environment variables can be passed to server processes using the `env` field. Here's how to access them in your server code:

<hfoptions id="env-variables">
<hfoption id="python">

In Python, we use the `os` module to access environment variables:

```python
import os

# Access environment variables
github_token = os.environ.get("GITHUB_TOKEN")
if not github_token:
    raise ValueError("GITHUB_TOKEN environment variable is required")

# Use the token in your server code
def make_github_request():
    headers = {"Authorization": f"Bearer {github_token}"}
    # ... rest of your code
```

</hfoption>
<hfoption id="javascript">

In JavaScript, we use the `process.env` object to access environment variables:

```javascript
// Access environment variables
const githubToken = process.env.GITHUB_TOKEN;
if (!githubToken) {
    throw new Error("GITHUB_TOKEN environment variable is required");
}

// Use the token in your server code
function makeGithubRequest() {
    const headers = { "Authorization": `Bearer ${githubToken}` };
    // ... rest of your code
}
```

</hfoption>
</hfoptions>

The corresponding configuration in `mcp.json` would look like this:

```json
{
  "servers": [
    {
      "name": "GitHub API",
      "transport": {
        "type": "stdio",
        "command": "python",
        "args": ["/path/to/github_server.py"], // This is an example, we'll use a real server in the next unit
        "env": {
          "GITHUB_TOKEN": "your_github_token"
        }
      }
    }
  ]
}
```
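
To make the mechanics concrete, here is a hedged sketch of what a host does with this configuration: it launches the configured `command` with its `args`, merging the `env` entries into the child process's environment. The inline `-c` script below is a stand-in for a real server.

```python
import os
import subprocess
import sys

# A config entry shaped like the one above; the inline -c script
# stands in for a real MCP server
server = {
    "name": "GitHub API",
    "transport": {
        "type": "stdio",
        "command": sys.executable,
        "args": ["-c", "import os; print(os.environ['GITHUB_TOKEN'])"],
        "env": {"GITHUB_TOKEN": "your_github_token"},
    },
}

transport = server["transport"]
# The host starts the server process, merging the configured env
# into its own environment so the child can read GITHUB_TOKEN
result = subprocess.run(
    [transport["command"], *transport["args"]],
    env={**os.environ, **transport["env"]},
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # → your_github_token
```

Real clients also wire up the child's stdin/stdout as the JSON-RPC transport, but the environment handling is essentially this merge.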

### Configuration Examples

Let's look at some real-world configuration scenarios:

#### Scenario 1: Local Server Configuration

In this scenario, we have a local server: a Python script that could implement, say, a file explorer or a code editor.

```json
{
  "servers": [
    {
      "name": "File Explorer",
      "transport": {
        "type": "stdio",
        "command": "python",
        "args": ["/path/to/file_explorer_server.py"] // This is an example, we'll use a real server in the next unit
      }
    }
  ]
}
```

#### Scenario 2: Remote Server Configuration

In this scenario, we have a remote server that is a weather API.

```json
{
  "servers": [
    {
      "name": "Weather API",
      "transport": {
        "type": "sse",
        "url": "https://example.com/mcp-server" // This is an example, we'll use a real server in the next unit
      }
    }
  ]
}
```

Proper configuration is essential for successfully deploying MCP integrations. By understanding these aspects, you can create robust and reliable connections between AI applications and external capabilities.

In the next section, we'll explore the ecosystem of MCP servers available on Hugging Face Hub and how to publish your own servers there. 

## Tiny Agents Clients

Now, let's explore how to use MCP Clients within code.

You can use Tiny Agents as MCP Clients to connect directly to MCP servers from your code. Tiny Agents provide a simple way to create AI agents that can use tools from MCP servers.

Tiny Agents can run MCP servers from a command line environment. To do this, we need `npm` installed so we can launch servers with `npx`. **We'll need these for both Python and JavaScript.** If you don't have `npm` installed, check out the [npm documentation](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm).

### Setup

First, install `npx` if it isn't already available (recent versions of npm bundle it). You can do this with the following command:

```bash
# install npx
npm install -g npx
```

Then, we will need to install the huggingface_hub package with the MCP support. This will allow us to run MCP servers and clients.

```bash
pip install "huggingface_hub[mcp]>=0.32.0"
```

Next, log in to the Hugging Face Hub to access the MCP servers. You can do this with the `huggingface-cli` command line tool. You will need a [login token](https://huggingface.co/docs/huggingface_hub/v0.32.3/en/quick-start#authentication) to do this.

```bash
huggingface-cli login
```

### Configure Access Token Permissions

After creating your Hugging Face access token and logging in, you need to ensure your token has the proper permissions to work with inference providers.

> [!WARNING]
> **Important:** If you skip this step, you may encounter authentication errors when running tiny agents with hosted models.

1. Go to your [Hugging Face Access Tokens page](https://huggingface.co/settings/tokens)
2. Find your MCP token and click the three dots (⋮) next to it
3. Select **"Edit permissions"**
4. Under the **Inference** section, check the box for:
   - **"Make calls to Inference Providers"**
5. Save your changes

This permission is required because tiny agents need to make API calls to hosted models like `Qwen/Qwen2.5-72B-Instruct` through providers like Nebius.


<hfoptions id="language">
<hfoption id="python">

### Connecting to MCP Servers

Now, let's create an agent configuration file `agent.json`.

```json
{
    "name": "playwright-agent",
    "description": "Agent with Playwright MCP server",
    "model": "Qwen/Qwen2.5-72B-Instruct",
    "provider": "nebius",
    "servers": [
        {
            "type": "stdio",
            "command": "npx",
            "args": ["@playwright/mcp@latest"]
        }
    ]
}
```

In this configuration, we are using the `@playwright/mcp` MCP server. This is an MCP server that can control a browser with Playwright.

Now you can run the agent:

```bash
tiny-agents run agent.json
```
</hfoption>
<hfoption id="javascript">

First, install the tiny agents package with [npm](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm).

```bash
npm install @huggingface/tiny-agents
```

### Connecting to MCP Servers

Make an agent project directory and create an `agent.json` file.

```bash
mkdir my-agent
touch my-agent/agent.json
```

Create an agent configuration file at `my-agent/agent.json`:

```json
{
    "model": "Qwen/Qwen2.5-72B-Instruct",
    "provider": "nebius",
    "servers": [
        {
            "type": "stdio",
            "command": "npx",
            "args": ["@playwright/mcp@latest"]
        }
    ]
}
```

Now you can run the agent:

```bash
npx @huggingface/tiny-agents run ./my-agent
```

</hfoption>
</hfoptions>

In the video below, we run the agent and ask it to open a new tab in the browser.

The following example shows a web-browsing agent configured to use the [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) model via the Nebius inference provider, equipped with a Playwright MCP server that lets it use a web browser. The agent config is loaded from [its path in the `tiny-agents/tiny-agents`](https://huggingface.co/datasets/tiny-agents/tiny-agents/tree/main/celinah/web-browser) Hugging Face dataset.

<video controls autoplay loop>
  <source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/python-tiny-agents/web_browser_agent.mp4" type="video/mp4">
</video>

When you run the agent, you'll see it load, listing the tools it has discovered from its connected MCP servers. Then, it's ready for your prompts!

Prompt used in this demo:

> do a Web Search for HF inference providers on Brave Search and open the first result and then give me the list of the inference providers supported on Hugging Face 

## Next Steps

Now that you understand MCP Clients, you're ready to:
* Explore specific MCP Server implementations
* Learn about creating custom MCP Clients
* Dive into advanced MCP integration patterns

Let's continue our journey into the world of Model Context Protocol!


<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit1/mcp-clients.mdx" />

### Quiz 2: MCP SDK
https://huggingface.co/learn/mcp-course/unit1/quiz2.md

# Quiz 2: MCP SDK

Test your knowledge of the MCP SDKs and their functionalities.

### Q1: What is the main purpose of the MCP SDKs?

<Question
  choices={[
    {
      text: "To define the MCP protocol specification",
      explain: "The SDKs implement the protocol, they don't define it. The specification is separate."
    },
    {
      text: "To make it easier to implement MCP clients and servers",
      explain: "Correct! SDKs abstract away low-level protocol details.",
      correct: true
    },
    {
      text: "To provide a visual interface for MCP interactions",
      explain: "While some tools might offer this (like MCP Inspector), it's not the primary purpose of the SDKs themselves."
    },
    {
      text: "To replace the need for programming languages",
      explain: "SDKs are libraries used within programming languages."
    }
  ]}
/>

### Q2: Which of the following functionalities do the MCP SDKs typically handle?

<Question
  choices={[
    {
      text: "Optimizing MCP Servers",
      explain: "This is outside the scope of MCP SDKs, which focus on protocol implementation."
    },
    {
      text: "Defining new AI algorithms",
      explain: "This is outside the scope of MCP SDKs, which focus on protocol implementation."
    },
    {
      text: "Message serialization/deserialization",
      explain: "Correct! This is a core function for handling JSON-RPC messages.",
      correct: true
    },
    {
      text: "Hosting Large Language Models",
      explain: "MCP enables connection to LLMs, but the SDKs themselves don't host them."
    }
  ]}
/>

### Q3: According to the provided text, which company maintains the official Python SDK for MCP?

<Question
  choices={[
    {
      text: "Google",
      explain: "The text lists Anthropic as the maintainer."
    },
    {
      text: "Anthropic",
      explain: "Correct! The course material indicates Anthropic maintains the Python SDK.",
      correct: true
    },
    {
      text: "Microsoft",
      explain: "Microsoft maintains the C# SDK according to the text."
    },
    {
      text: "JetBrains",
      explain: "JetBrains maintains the Kotlin SDK according to the text."
    }
  ]}
/>

### Q4: What command is used to start a development MCP server using a Python file named `server.py`?

<Question
  choices={[
    {
      text: "python server.py run",
      explain: "While you run Python scripts with `python`, MCP has a specific CLI command."
    },
    {
      text: "mcp start server.py",
      explain: "The command is `mcp dev`, not `mcp start`."
    },
    {
      text: "mcp dev server.py",
      explain: "Correct! This command initializes the development server.",
      correct: true
    },
    {
      text: "serve mcp server.py",
      explain: "This is not the standard MCP CLI command shown in the course material."
    }
  ]}
/>

### Q5: What is the role of JSON-RPC 2.0 in MCP?

<Question
  choices={[
    {
      text: "As a primary transport mechanism for remote communication",
      explain: "HTTP+SSE or Streamable HTTP are transport mechanisms; JSON-RPC is the message format."
    },
    {
      text: "As the message format for all communication between Clients and Servers",
      explain: "Correct! MCP uses JSON-RPC 2.0 for structuring messages.",
      correct: true
    },
    {
      text: "As a tool for debugging AI models",
      explain: "While its human-readable nature helps in debugging communications, it's not a debugging tool for AI models themselves."
    },
    {
      text: "As a method for defining AI capabilities like Tools and Resources",
      explain: "Capabilities are defined by their own schemas; JSON-RPC is used to invoke them and exchange data."
    }
  ]}
/>

Congrats on finishing this Quiz 🥳! If you need to review any elements, take the time to revisit the chapter to reinforce your knowledge. 

<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit1/quiz2.mdx" />

### Gradio MCP Integration
https://huggingface.co/learn/mcp-course/unit1/gradio-mcp.md

# Gradio MCP Integration

We've now explored the core concepts of the MCP protocol and how to implement MCP Servers and Clients. In this section, we're going to make things slightly easier by using Gradio to create an MCP Server!

> [!TIP]
> Gradio is a popular Python library for quickly creating customizable web interfaces for machine learning models.

## Introduction to Gradio

Gradio allows developers to create UIs for their models with just a few lines of Python code. It's particularly useful for:

- Creating demos and prototypes
- Sharing models with non-technical users
- Testing and debugging model behavior

With the addition of MCP support, Gradio now offers a straightforward way to expose AI model capabilities through the standardized MCP protocol.

Combining Gradio with MCP allows you to create both human-friendly interfaces and AI-accessible tools with minimal code. Best of all, Gradio is already widely used in the AI community, so you can use it to share your MCP Servers with others.

## Prerequisites

To use Gradio with MCP support, you'll need to install Gradio with the MCP extra:

```bash
uv pip install "gradio[mcp]"
```

You'll also need an LLM application that supports tool calling via the MCP protocol, such as Cursor (such applications are known as "MCP Hosts").

## Creating an MCP Server with Gradio

Let's walk through a basic example of creating an MCP Server using Gradio:

```python
import gradio as gr

def letter_counter(word: str, letter: str) -> int:
    """
    Count the number of occurrences of a letter in a word or text.

    Args:
        word (str): The input text to search through
        letter (str): The letter to search for

    Returns:
        int: The number of times the letter appears in the text
    """
    word = word.lower()
    letter = letter.lower()
    count = word.count(letter)
    return count

# Create a standard Gradio interface
demo = gr.Interface(
    fn=letter_counter,
    inputs=["textbox", "textbox"],
    outputs="number",
    title="Letter Counter",
    description="Enter text and a letter to count how many times the letter appears in the text."
)

# Launch both the Gradio web interface and the MCP server
if __name__ == "__main__":
    demo.launch(mcp_server=True)
```

With this setup, your letter counter function is now accessible through:

1. A traditional Gradio web interface for direct human interaction
2. An MCP Server that can be connected to compatible clients

The MCP server will be accessible at:
```
http://your-server:port/gradio_api/mcp/sse
```

The web application itself will still be accessible, and it looks like this:

![Gradio MCP Server](https://huggingface.co/datasets/mcp-course/images/resolve/main/unit1/7.png)

## How It Works Behind the Scenes

When you set `mcp_server=True` in `launch()`, several things happen:

1. Gradio functions are automatically converted to MCP Tools
2. Input components map to tool argument schemas
3. Output components determine the response format
4. The Gradio server now also listens for MCP protocol messages
5. JSON-RPC over HTTP+SSE is set up for client-server communication
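
Conceptually, steps 1 and 2 boil down to deriving a tool schema from a function's signature and docstring. The following is a simplified illustration using Python's `inspect` module, not Gradio's actual implementation:

```python
import inspect

def letter_counter(word: str, letter: str) -> int:
    """Count the number of occurrences of a letter in a word or text."""
    return word.lower().count(letter.lower())

def function_to_tool_schema(fn) -> dict:
    """Derive a minimal MCP-style tool description from a Python function."""
    type_names = {str: "string", int: "integer", float: "number", bool: "boolean"}
    sig = inspect.signature(fn)
    # Map each annotated parameter to a JSON Schema property
    properties = {
        name: {"type": type_names.get(param.annotation, "string")}
        for name, param in sig.parameters.items()
    }
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "inputSchema": {
            "type": "object",
            "properties": properties,
            "required": list(properties),
        },
    }

schema = function_to_tool_schema(letter_counter)
print(schema["name"])  # → letter_counter
print(schema["inputSchema"]["properties"])
```

Gradio does considerably more (component mapping, response formatting), but this captures how a plain function becomes a named, described, schema-typed tool.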

## Key Features of the Gradio <> MCP Integration

1. **Tool Conversion**: Each API endpoint in your Gradio app is automatically converted into an MCP tool with a corresponding name, description, and input schema. To view the tools and schemas, visit `http://your-server:port/gradio_api/mcp/schema` or go to the "View API" link in the footer of your Gradio app, and then click on "MCP".

2. **Environment Variable Support**: There are two ways to enable the MCP server functionality:
- Using the `mcp_server` parameter in `launch()`:
  ```python
  demo.launch(mcp_server=True)
  ```
- Using environment variables:
  ```bash
  export GRADIO_MCP_SERVER=True
  ```

3. **File Handling**: The server automatically handles file data conversions, including:
   - Converting base64-encoded strings to file data
   - Processing image files and returning them in the correct format
   - Managing temporary file storage

   It is **strongly** recommended that input images and files be passed as full URLs ("http://..." or "https://...") as MCP Clients do not always handle local files correctly.

4. **Hosted MCP Servers on 🤗 Spaces**: You can publish your Gradio application for free on Hugging Face Spaces, which will allow you to have a free hosted MCP server. Here's an example of such a Space: https://huggingface.co/spaces/abidlabs/mcp-tools
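
As an illustration of the file handling in point 3, converting a base64-encoded string to file data amounts to decoding the payload and writing it to a temporary file. A minimal, stand-alone sketch (not Gradio's actual code):

```python
import base64
import pathlib
import tempfile

def save_base64_payload(data_b64: str, suffix: str) -> pathlib.Path:
    """Decode a base64 string and write it to a temporary file."""
    raw = base64.b64decode(data_b64)
    path = pathlib.Path(tempfile.mkdtemp()) / f"upload{suffix}"
    path.write_bytes(raw)
    return path

# Simulate a client sending base64-encoded file content
encoded = base64.b64encode(b"hello, mcp").decode()
saved = save_base64_payload(encoded, ".txt")
print(saved.read_bytes())  # → b'hello, mcp'
```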

## Use MCP-compatible Spaces from your client

You can connect any MCP-compatible Space to your assistant via Hugging Face MCP settings:

1. Explore MCP-compatible Spaces: https://huggingface.co/spaces?search=mcp
2. Open https://huggingface.co/settings/mcp (logged in) and add the Space.
3. Restart or refresh your MCP client so it discovers the new tools.

These Spaces expose their functions as tools with arguments and descriptions, so your assistant can call them directly.

## Troubleshooting Tips

1. **Type Hints and Docstrings**: Ensure you provide type hints and valid docstrings for your functions. The docstring should include an "Args:" block with indented parameter names.

2. **String Inputs**: When in doubt, accept input arguments as `str` and convert them to the desired type inside the function.

3. **SSE Support**: Some MCP Hosts don't support SSE-based MCP Servers. In those cases, you can use `mcp-remote`:
   ```json
   {
     "mcpServers": {
       "gradio": {
         "command": "npx",
         "args": [
           "mcp-remote",
           "http://your-server:port/gradio_api/mcp/sse"
         ]
       }
     }
   }
   ```

4. **Restart**: If you encounter connection issues, try restarting both your MCP Client and MCP Server.
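
Tips 1 and 2 can be combined in a single function: type hints, a docstring with an "Args:" block, and `str` inputs converted inside the body. A small illustrative sketch (the function itself is hypothetical):

```python
def repeat_text(text: str, times: str) -> str:
    """
    Repeat a piece of text a given number of times.

    Args:
        text (str): The text to repeat
        times (str): How many times to repeat it (converted to int internally)

    Returns:
        str: The repeated text
    """
    # Accept the count as a string and convert it where needed
    return text * int(times)

print(repeat_text("ha", "3"))  # → hahaha
```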

## Share your MCP Server

You can share your MCP Server by publishing your Gradio app to Hugging Face Spaces. The video below shows how to create a Hugging Face Space.

<Youtube id="3bSVKNKb_PY" />

Now, you can share your MCP Server with others by sharing your Hugging Face Space.

## Conclusion

Gradio's integration with MCP provides an accessible entry point to the MCP ecosystem. By leveraging Gradio's simplicity and adding MCP's standardization, developers can quickly create both human-friendly interfaces and AI-accessible tools with minimal code.

As we progress through this course, we'll explore more sophisticated MCP implementations, but Gradio offers an excellent starting point for understanding and experimenting with the protocol.

In the next unit, we'll dive deeper into building MCP applications, focusing on setting up development environments, exploring SDKs, and implementing more advanced MCP Servers and Clients. 


<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit1/gradio-mcp.mdx" />

### Unit1 recap
https://huggingface.co/learn/mcp-course/unit1/unit1-recap.md

# Unit1 recap

## Model Context Protocol (MCP)

The MCP is a standardized protocol designed to connect AI models with external tools, data sources, and environments. It addresses the limitations of existing AI systems by enabling interoperability and access to real-time information.

## Key Concepts

### Client-Server Architecture
MCP follows a client-server model in which clients (embedded in host applications) manage communication with servers. This architecture promotes modularity, allowing new servers to be added without requiring changes to existing hosts.

### Components
#### Host
The user-facing AI application that end-users interact with directly.

#### Client
A component within the host application responsible for managing communication with a specific MCP server. Clients maintain 1:1 connections with servers and handle protocol-level details.

#### Server
An external program or service that provides access to tools, data sources, or services via the MCP protocol. Servers act as lightweight wrappers around existing functionalities.

### Capabilities
#### Tools
Executable functions that can perform actions (e.g., sending messages, querying APIs). Tools are typically model-controlled and require user approval due to their ability to perform actions with side effects.

#### Resources
Read-only data sources for context retrieval without significant computation. Resources are application-controlled and designed for data retrieval similar to GET endpoints in REST APIs.

#### Prompts
Pre-defined templates or workflows that guide interactions between users, AI models, and available capabilities. Prompts are user-controlled and set the context for interactions.

#### Sampling
Server-initiated requests for LLM processing, enabling server-driven agentic behaviors and potentially recursive or multi-step interactions. Sampling operations typically require user approval.

### Communication Protocol
The MCP protocol uses JSON-RPC 2.0 as the message format for communication between clients and servers. Two primary transport mechanisms are supported: stdio (for local communication) and HTTP+SSE (for remote communication). Messages include requests, responses, and notifications.

### Discovery Process
MCP allows clients to dynamically discover available tools, resources, and prompts through list methods (e.g., `tools/list`). This dynamic discovery mechanism enables clients to adapt to the specific capabilities each server offers without requiring hardcoded knowledge of server functionality.
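
A discovery exchange is just a pair of JSON-RPC 2.0 messages. An abridged sketch (the exact tool entries depend on the server):

```json
// Client request
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }

// Server response (abridged)
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "get_weather",
        "description": "Get the current weather for a location",
        "inputSchema": { "type": "object" }
      }
    ]
  }
}
```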

### MCP SDKs
Official SDKs are available in various programming languages for implementing MCP clients and servers. These SDKs handle protocol-level communication, capability registration, and error handling, simplifying the development process.

### Gradio Integration
Gradio allows easy creation of web interfaces that expose capabilities to the MCP protocol, making it accessible for both humans and AI models. This integration provides a human-friendly interface alongside AI-accessible tools with minimal code.

### Hugging Face MCP enhancements
- Hosted MCP Server now integrates with more clients (VS Code, Cursor, Zed, Claude Desktop).
- Curated built-in tools for searching models, datasets, Spaces, and papers.
- One-click configuration snippets from `https://huggingface.co/settings/mcp`.
- Add MCP-compatible Gradio Spaces via settings and use them directly from your assistant.


<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit1/unit1-recap.mdx" />

### Quiz 1: MCP Fundamentals
https://huggingface.co/learn/mcp-course/unit1/quiz1.md

# Quiz 1: MCP Fundamentals

Test your knowledge of the core concepts of Model Context Protocol.

### Q1: What is the primary purpose of Model Context Protocol (MCP)?

<Question
  choices={[
    {
      text: "To limit the training data of AI models",
      explain: "MCP aims to expand, not limit, the contexts AI models can access."
    },
    {
      text: "To enable AI models to connect with external data sources, tools, and environments",
      explain: "Correct! MCP's main goal is to facilitate interoperability.",
      correct: true
    },
    {
      text: "To replace prompting when using Large Language Models",
      explain: "MCP is a protocol that enhances prompting, not a replacement for it."
    },
    {
      text: "To create a new programming language for AI",
      explain: "MCP is a protocol, not a programming language."
    }
  ]}
/>

### Q2: What problem does MCP primarily aim to solve?

<Question
  choices={[
    {
      text: "The lack of AI models",
      explain: "MCP addresses integration challenges, not the availability of AI models themselves."
    },
    {
      text: "The high cost of training LLMs",
      explain: "While MCP can improve efficiency, its primary focus is not on reducing training costs directly."
    },
    {
      text: "The M×N Integration Problem",
      explain: "Correct! MCP standardizes connections to avoid M×N custom integrations.",
      correct: true
    },
    {
      text: "The difficulty in creating new AI algorithms",
      explain: "MCP facilitates using existing algorithms and tools, not creating new ones from scratch."
    }
  ]}
/>

### Q3: Which of the following is a key benefit of MCP?

<Question
  choices={[
    {
      text: "Reduced AI model accuracy",
      explain: "MCP aims to enhance AI capabilities, which should ideally lead to improved or maintained accuracy, not reduced."
    },
    {
      text: "Increased complexity in AI development",
      explain: "MCP aims to simplify integration, thereby reducing complexity."
    },
    {
      text: "Standardization and interoperability in the AI ecosystem",
      explain: "Correct! This is a primary goal and benefit of MCP.",
      correct: true
    },
    {
      text: "Isolation of AI models from external systems",
      explain: "MCP promotes connection and interaction, not isolation."
    }
  ]}
/>

### Q4: In MCP terminology, what is a "Host"?

<Question
  choices={[
    {
      text: "The external program exposing capabilities",
      explain: "This describes an MCP Server."
    },
    {
      text: "The user-facing AI application",
      explain: "Correct! The Host is the application users interact with.",
      correct: true
    },
    {
      text: "A read-only data source",
      explain: "This describes a type of MCP Capability (Resource)."
    },
    {
      text: "A pre-defined template for interactions",
      explain: "This describes a type of MCP Capability (Prompt)."
    }
  ]}
/>

### Q5: What does "M×N Integration Problem" refer to in the context of AI applications?

<Question
  choices={[
    {
      text: "The difficulty of training M models with N datasets",
      explain: "This relates to model training, not the integration problem MCP addresses."
    },
    {
      text: "The challenge of connecting M AI applications to N external tools without a standard",
      explain: "Correct! MCP provides the standard to solve this M*N complexity.",
      correct: true
    },
    {
      text: "The problem of managing M users across N applications",
      explain: "This is a user management or identity problem, not the focus of MCP."
    },
    {
      text: "The complexity of developing M features for N different user segments",
      explain: "This relates to product development strategy, not system integration in the way MCP defines."
    }
  ]}
/>

Congrats on finishing this Quiz 🥳! If you need to review any elements, take the time to revisit the chapter to reinforce your knowledge. 

<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit1/quiz1.mdx" />

### Understanding MCP Capabilities
https://huggingface.co/learn/mcp-course/unit1/capabilities.md

# Understanding MCP Capabilities

MCP Servers expose a variety of capabilities to Clients through the communication protocol. These capabilities fall into four main categories, each with distinct characteristics and use cases. Let's explore these core primitives that form the foundation of MCP's functionality.

> [!TIP]
> In this section, we'll show examples as framework-agnostic functions in each language. This is to focus on the concepts and how they work together, rather than the complexities of any framework.
>
> In the coming units, we'll show how these concepts are implemented in MCP-specific code.

## Tools

Tools are executable functions or actions that the AI model can invoke through the MCP protocol.

- **Control**: Tools are typically **model-controlled**, meaning that the AI model (LLM) decides when to call them based on the user's request and context.
- **Safety**: Due to their ability to perform actions with side effects, tool execution can be dangerous. Therefore, they typically require explicit user approval.
- **Use Cases**: Sending messages, creating tickets, querying APIs, performing calculations.

**Example**: A weather tool that fetches current weather data for a given location:

<hfoptions id="tool-example">
<hfoption id="python">

```python
def get_weather(location: str) -> dict:
    """Get the current weather for a specified location."""
    # Connect to weather API and fetch data
    return {
        "temperature": 72,
        "conditions": "Sunny",
        "humidity": 45
    }
```

</hfoption>
<hfoption id="javascript">

```javascript
// Get the current weather for a specified location
function getWeather(location) {
    // Connect to weather API and fetch data
    return {
        temperature: 72,
        conditions: 'Sunny',
        humidity: 45
    };
}
```

</hfoption>
</hfoptions>

## Resources

Resources provide read-only access to data sources, allowing the AI model to retrieve context without executing complex logic.

- **Control**: Resources are **application-controlled**, meaning the Host application typically decides when to access them.
- **Nature**: They are designed for data retrieval with minimal computation, similar to GET endpoints in REST APIs.
- **Safety**: Since they are read-only, they typically present lower security risks than Tools.
- **Use Cases**: Accessing file contents, retrieving database records, reading configuration information.

**Example**: A resource that provides access to file contents:

<hfoptions id="resource-example">
<hfoption id="python">

```python
def read_file(file_path: str) -> str:
    """Read the contents of a file at the specified path."""
    with open(file_path, 'r') as f:
        return f.read()
```

</hfoption>
<hfoption id="javascript">

```javascript
function readFile(filePath) {
    // Using fs.readFile to read file contents
    const fs = require('fs');
    return new Promise((resolve, reject) => {
        fs.readFile(filePath, 'utf8', (err, data) => {
            if (err) {
                reject(err);
                return;
            }
            resolve(data);
        });
    });
}
```

</hfoption>
</hfoptions>

## Prompts

Prompts are predefined templates or workflows that guide the interaction between the user, the AI model, and the Server's capabilities.

- **Control**: Prompts are **user-controlled**, often presented as options in the Host application's UI.
- **Purpose**: They structure interactions for optimal use of available Tools and Resources.
- **Selection**: Users typically select a prompt before the AI model begins processing, setting context for the interaction.
- **Use Cases**: Common workflows, specialized task templates, guided interactions.

**Example**: A prompt template for generating a code review:

<hfoptions id="prompt-example">
<hfoption id="python">

```python
def code_review(code: str, language: str) -> list:
    """Generate a code review for the provided code snippet."""
    return [
        {
            "role": "system",
            "content": f"You are a code reviewer examining {language} code. Provide a detailed review highlighting best practices, potential issues, and suggestions for improvement."
        },
        {
            "role": "user",
            "content": f"Please review this {language} code:\n\n```{language}\n{code}\n```"
        }
    ]
```

</hfoption>
<hfoption id="javascript">

```javascript
function codeReview(code, language) {
    return [
        {
            role: 'system',
            content: `You are a code reviewer examining ${language} code. Provide a detailed review highlighting best practices, potential issues, and suggestions for improvement.`
        },
        {
            role: 'user',
            content: `Please review this ${language} code:\n\n\`\`\`${language}\n${code}\n\`\`\``
        }
    ];
}
```

</hfoption>
</hfoptions>

## Sampling

Sampling allows Servers to request the Client (specifically, the Host application) to perform LLM interactions.

- **Control**: Sampling is **server-initiated** but requires Client/Host facilitation.
- **Purpose**: It enables server-driven agentic behaviors and potentially recursive or multi-step interactions.
- **Safety**: Like Tools, sampling operations typically require user approval.
- **Use Cases**: Complex multi-step tasks, autonomous agent workflows, interactive processes.

**Example**: A Server might request the Client to analyze data it has processed:

<hfoptions id="sampling-example">
<hfoption id="python">

```python
def request_sampling(messages, system_prompt=None, include_context="none"):
    """Request LLM sampling from the client."""
    # In a real implementation, this would send a request to the client
    return {
        "role": "assistant",
        "content": "Analysis of the provided data..."
    }
```

</hfoption>
<hfoption id="javascript">

```javascript
function requestSampling(messages, systemPrompt = null, includeContext = 'none') {
    // In a real implementation, this would send a request to the client
    return {
        role: 'assistant',
        content: 'Analysis of the provided data...'
    };
}

function handleSamplingRequest(request) {
    const { messages, systemPrompt, includeContext } = request;
    // In a real implementation, this would process the request and return a response
    return {
        role: 'assistant',
        content: 'Response to the sampling request...'
    };
}
```

</hfoption>
</hfoptions>

The sampling flow follows these steps:
1. Server sends a `sampling/createMessage` request to the client
2. Client reviews the request and can modify it
3. Client samples from an LLM
4. Client reviews the completion
5. Client returns the result to the server

> [!TIP]
> This human-in-the-loop design ensures users maintain control over what the LLM sees and generates. When implementing sampling, it's important to provide clear, well-structured prompts and include relevant context.
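The flow above can be illustrated as a JSON-RPC payload. This is a hedged sketch based on the MCP specification's sampling flow: the `sampling/createMessage` method name comes from the text above, while the specific parameter fields shown (`systemPrompt`, `includeContext`, `maxTokens`) and the message content are illustrative and may vary by specification version.

```python
import json

# Illustrative sketch of a sampling/createMessage request a server might
# send to the client (field names may vary by MCP spec version).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {
                "role": "user",
                "content": {"type": "text", "text": "Summarize the processed data."},
            }
        ],
        "systemPrompt": "You are a helpful data analyst.",
        "includeContext": "none",  # the client decides what context to attach
        "maxTokens": 200,
    },
}

print(json.dumps(request, indent=2))
```

The client is free to review and modify this request before sampling from an LLM, which is what keeps the human in the loop.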

## How Capabilities Work Together

Let's look at how these capabilities work together to enable complex interactions. In the table below, we've outlined the capabilities, who controls them, the direction of control, and some other details.

| Capability | Controlled By | Direction | Side Effects | Approval Needed | Typical Use Cases |
|------------|---------------|-----------|--------------|-----------------|-------------------|
| Tools      | Model (LLM)   | Client → Server | Yes (potentially) | Yes | Actions, API calls, data manipulation |
| Resources  | Application   | Client → Server | No (read-only) | Typically no | Data retrieval, context gathering |
| Prompts    | User          | Server → Client | No | No (selected by user) | Guided workflows, specialized templates |
| Sampling   | Server        | Server → Client → Server | Indirectly | Yes | Multi-step tasks, agentic behaviors |

These capabilities are designed to work together in complementary ways:

1. A user might select a **Prompt** to start a specialized workflow
2. The Prompt might include context from **Resources**
3. During processing, the AI model might call **Tools** to perform specific actions
4. For complex operations, the Server might use **Sampling** to request additional LLM processing

The distinction between these primitives provides a clear structure for MCP interactions, enabling AI models to access information, perform actions, and engage in complex workflows while maintaining appropriate control boundaries.

## Discovery Process

One of MCP's key features is dynamic capability discovery. When a Client connects to a Server, it can query the available Tools, Resources, and Prompts through specific list methods:

- `tools/list`: Discover available Tools
- `resources/list`: Discover available Resources
- `prompts/list`: Discover available Prompts

This dynamic discovery mechanism allows Clients to adapt to the specific capabilities each Server offers without requiring hardcoded knowledge of the Server's functionality.
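As a sketch of what such an exchange might look like, here is a hypothetical `tools/list` request and response. The method name comes from the list above; the `get_weather` tool and its schema are made up for illustration.

```python
# Hypothetical discovery exchange: the client asks a server what it offers.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# A server might respond with something like:
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_weather",  # hypothetical example tool
                "description": "Get the current weather for a location",
                "inputSchema": {
                    "type": "object",
                    "properties": {"location": {"type": "string"}},
                    "required": ["location"],
                },
            }
        ]
    },
}

# The client can now adapt to whatever the server advertises at runtime.
tool_names = [t["name"] for t in list_response["result"]["tools"]]
print(tool_names)
```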

## Conclusion

Understanding these core primitives is essential for working with MCP effectively. By providing distinct types of capabilities with clear control boundaries, MCP enables powerful interactions between AI models and external systems while maintaining appropriate safety and control mechanisms.

In the next section, we'll explore how Gradio integrates with MCP to provide easy-to-use interfaces for these capabilities. 

<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit1/capabilities.mdx" />

### Key Concepts and Terminology
https://huggingface.co/learn/mcp-course/unit1/key-concepts.md

# Key Concepts and Terminology

Before diving deeper into the Model Context Protocol, it's important to understand the key concepts and terminology that form the foundation of MCP. This section will introduce the fundamental ideas that underpin the protocol and provide a common vocabulary for discussing MCP implementations throughout the course.

MCP is often described as the "USB-C for AI applications." Just as USB-C provides a standardized physical and logical interface for connecting various peripherals to computing devices, MCP offers a consistent protocol for linking AI models to external capabilities. This standardization benefits the entire ecosystem:

- **users** enjoy simpler and more consistent experiences across AI applications
- **AI application developers** gain easy integration with a growing ecosystem of tools and data sources
- **tool and data providers** need only create a single implementation that works with multiple AI applications
- the broader ecosystem benefits from increased interoperability, innovation, and reduced fragmentation

## The Integration Problem

The **M×N Integration Problem** refers to the challenge of connecting M different AI applications to N different external tools or data sources without a standardized approach. 

### Without MCP (M×N Problem)

Without a protocol like MCP, developers would need to create M×N custom integrations—one for each possible pairing of an AI application with an external capability. 

![Without MCP](https://huggingface.co/datasets/mcp-course/images/resolve/main/unit1/1.png)

Each AI application would need to integrate with each tool or data source individually. This process is complex and expensive, introducing significant friction for developers and high maintenance costs.

Once we have multiple models and multiple tools, the number of integrations quickly becomes unmanageable, with each integration requiring its own unique interface.

![Multiple Models and Tools](https://huggingface.co/datasets/mcp-course/images/resolve/main/unit1/1a.png)

### With MCP (M+N Solution)

MCP transforms this into an M+N problem by providing a standard interface: each AI application implements the client side of MCP once, and each tool/data source implements the server side once. This dramatically reduces integration complexity and maintenance burden.
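The arithmetic makes the saving concrete. With hypothetical counts of 5 applications and 10 tools:

```python
# Integration count with and without a shared protocol (illustrative numbers).
apps, tools = 5, 10

without_mcp = apps * tools  # one custom integration per (app, tool) pair
with_mcp = apps + tools     # each side implements the protocol once

print(without_mcp)  # 50
print(with_mcp)     # 15
```

The gap only widens as the ecosystem grows: doubling both sides quadruples the M×N count but merely doubles M+N.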

![With MCP](https://huggingface.co/datasets/mcp-course/images/resolve/main/unit1/2.png)

## Core MCP Terminology

Now that we understand the problem that MCP solves, let's dive into the core terminology and concepts that make up the MCP protocol.

> [!TIP]
> MCP is a standard like HTTP or USB-C: a protocol for connecting AI applications to external tools and data sources. Using standard terminology is therefore crucial to working with MCP effectively.
>
> When documenting our applications and communicating with the community, we should use the following terminology.

### Components

Just like the client-server relationship in HTTP, MCP has a client and a server.

![MCP Components](https://huggingface.co/datasets/mcp-course/images/resolve/main/unit1/3.png)

- **Host**: The user-facing AI application that end-users interact with directly. Examples include Anthropic's Claude Desktop, AI-enhanced IDEs like Cursor, inference libraries like the Hugging Face Python SDK, and custom applications built with libraries like LangChain or smolagents. Hosts initiate connections to MCP Servers and orchestrate the overall flow between user requests, LLM processing, and external tools.

- **Client**: A component within the host application that manages communication with a specific MCP Server. Each Client maintains a 1:1 connection with a single Server, handling the protocol-level details of MCP communication and acting as an intermediary between the Host's logic and the external Server.

- **Server**: An external program or service that exposes capabilities (Tools, Resources, Prompts) via the MCP protocol.

> [!WARNING]
> A lot of content uses 'Client' and 'Host' interchangeably. Technically speaking, the host is the user-facing application, and the client is the component within the host application that manages communication with a specific MCP Server.

### Capabilities

Your application's value is the sum of the capabilities it offers, so capabilities are the most important part of your application. MCP can connect to any software service, but a few common capability types are used across many AI applications.

| Capability | Description | Example |
| ---------- | ----------- | ------- |
| **Tools** | Executable functions that the AI model can invoke to perform actions or retrieve computed data. Typically relating to the use case of the application. | A tool for a weather application might be a function that returns the weather in a specific location. |
| **Resources** | Read-only data sources that provide context without significant computation. | A research assistant might have a resource for scientific papers. |
| **Prompts** | Pre-defined templates or workflows that guide interactions between users, AI models, and the available capabilities. | A summarization prompt. |
| **Sampling** | Server-initiated requests for the Client/Host to perform LLM interactions, enabling recursive actions where the LLM can review generated content and make further decisions. | A writing application that reviews its own output and decides to refine it further. |

In the following diagram, we can see the collective capabilities applied to a use case for a code agent.

![collective diagram](https://huggingface.co/datasets/mcp-course/images/resolve/main/unit1/8.png)

This application might use its MCP entities in the following way:

| Entity | Name | Description |
| --- | --- | --- |
| Tool | Code Interpreter | A tool that can execute code that the LLM writes. |
| Resource | Documentation | A resource that contains the documentation of the application. |
| Prompt | Code Style | A prompt that guides the LLM to generate code. |
| Sampling | Code Review | A sampling request that lets the LLM review the generated code and decide on further refinements. |

### Conclusion

Understanding these key concepts and terminology provides the foundation for working with MCP effectively. In the following sections, we'll build on this foundation to explore the architectural components, communication protocol, and capabilities that make up the Model Context Protocol. 


<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit1/key-concepts.mdx" />

### Introduction to Model Context Protocol (MCP)
https://huggingface.co/learn/mcp-course/unit1/introduction.md

# Introduction to Model Context Protocol (MCP)

Welcome to Unit 1 of the MCP Course! In this unit, we'll explore the fundamentals of Model Context Protocol.

## What You Will Learn

In this unit, you will:

* Understand what Model Context Protocol is and why it's important
* Learn the key concepts and terminology associated with MCP
* Explore the integration challenges that MCP solves
* Walk through the key benefits and goals of MCP
* See a simple example of MCP integration in action

By the end of this unit, you'll have a solid understanding of the foundational concepts of MCP and be ready to dive deeper into its architecture and implementation in the next unit.

## Importance of MCP

The AI ecosystem is evolving rapidly, with Large Language Models (LLMs) and other AI systems becoming increasingly capable. However, these models are often limited by their training data and lack access to real-time information or specialized tools. This limitation hinders the potential of AI systems to provide truly relevant, accurate, and helpful responses in many scenarios.

This is where Model Context Protocol (MCP) comes in. MCP enables AI models to connect with external data sources, tools, and environments, allowing for the seamless transfer of information and capabilities between AI systems and the broader digital world. This interoperability is crucial for the growth and adoption of truly useful AI applications.

## Overview of Unit 1

Here's a brief overview of what we'll cover in this unit:

1. **What is Model Context Protocol?** - We'll start by defining what MCP is and discussing its role in the AI ecosystem.
2. **Key Concepts** - We'll explore the fundamental concepts and terminology associated with MCP.
3. **Integration Challenges** - We'll examine the problems that MCP aims to solve, particularly the "M×N Integration Problem."
4. **Benefits and Goals** - We'll discuss the key benefits and goals of MCP, including standardization, enhanced AI capabilities, and interoperability.
5. **Simple Example** - Finally, we'll walk through a simple example of MCP integration to see how it works in practice.

Let's dive in and explore the exciting world of Model Context Protocol! 

<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit1/introduction.mdx" />

### Get your certificate!
https://huggingface.co/learn/mcp-course/unit1/certificate.md

# Get your certificate!

Well done! You've completed the first unit of the MCP course. Now it's time to take the exam to get your certificate.

Below is a quiz to check your understanding of the unit. 

<iframe
	src="https://mcp-course-unit-1-quiz.hf.space"
	frameborder="0"
	width="850"
	height="450"
></iframe>

> [!TIP]
> If you're struggling to use the quiz above, go to the space directly [on the Hugging Face Hub](https://huggingface.co/spaces/mcp-course/unit_1_quiz). If you find errors, you can report them in the space's [Community tab](https://huggingface.co/spaces/mcp-course/unit_1_quiz/discussions).



<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit1/certificate.mdx" />

### Architectural Components of MCP
https://huggingface.co/learn/mcp-course/unit1/architectural-components.md

# Architectural Components of MCP

In the previous section, we discussed the key concepts and terminology of MCP. Now, let's dive deeper into the architectural components that make up the MCP ecosystem.

## Host, Client, and Server

The Model Context Protocol (MCP) is built on a client-server architecture that enables structured communication between AI models and external systems. 

![MCP Architecture](https://huggingface.co/datasets/mcp-course/images/resolve/main/unit1/4.png)

The MCP architecture consists of three primary components, each with well-defined roles and responsibilities: Host, Client, and Server. We touched on these in the previous section, but let's dive deeper into each component and their responsibilities.

### Host

The **Host** is the user-facing AI application that end-users interact with directly. 

Examples include:
- AI Chat apps like OpenAI ChatGPT or Anthropic's Claude Desktop
- AI-enhanced IDEs like Cursor, or integrations to tools like Continue.dev
- Custom AI agents and applications built in libraries like LangChain or smolagents

The Host's responsibilities include:
- Managing user interactions and permissions
- Initiating connections to MCP Servers via MCP Clients
- Orchestrating the overall flow between user requests, LLM processing, and external tools
- Rendering results back to users in a coherent format

In most cases, users will select their host application based on their needs and preferences. For example, a developer may choose Cursor for its powerful code editing capabilities, while domain experts may use custom applications built with smolagents.

### Client

The **Client** is a component within the Host application that manages communication with a specific MCP Server. Key characteristics include:

- Each Client maintains a 1:1 connection with a single Server
- Handles the protocol-level details of MCP communication
- Acts as the intermediary between the Host's logic and the external Server

### Server

The **Server** is an external program or service that exposes capabilities to AI models via the MCP protocol. Servers:

- Provide access to specific external tools, data sources, or services
- Act as lightweight wrappers around existing functionality
- Can run locally (on the same machine as the Host) or remotely (over a network)
- Expose their capabilities in a standardized format that Clients can discover and use

## Communication Flow

Let's examine how these components interact in a typical MCP workflow:

> [!TIP]
> In the next section, we'll dive deeper into the communication protocol that enables these components with practical examples.

1. **User Interaction**: The user interacts with the **Host** application, expressing an intent or query.

2. **Host Processing**: The **Host** processes the user's input, potentially using an LLM to understand the request and determine which external capabilities might be needed.

3. **Client Connection**: The **Host** directs its **Client** component to connect to the appropriate Server(s).

4. **Capability Discovery**: The **Client** queries the **Server** to discover what capabilities (Tools, Resources, Prompts) it offers.

5. **Capability Invocation**: Based on the user's needs or the LLM's determination, the Host instructs the **Client** to invoke specific capabilities from the **Server**.

6. **Server Execution**: The **Server** executes the requested functionality and returns results to the **Client**.

7. **Result Integration**: The **Client** relays these results back to the **Host**, which incorporates them into the context for the LLM or presents them directly to the user.

A key advantage of this architecture is its modularity. A single **Host** can connect to multiple **Servers** simultaneously via different **Clients**. New **Servers** can be added to the ecosystem without requiring changes to existing **Hosts**. Capabilities can be easily composed across different **Servers**.
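The seven steps above can be mirrored in a minimal in-memory sketch. Real MCP communication uses JSON-RPC over a transport such as stdio or HTTP; these classes and the `get_weather` capability are purely illustrative stand-ins for the three roles and the order of operations.

```python
class Server:
    """Exposes capabilities and executes them on request."""
    def __init__(self):
        self._tools = {"get_weather": lambda location: f"Sunny in {location}"}

    def list_tools(self):
        return list(self._tools)                      # step 4: discovery

    def call_tool(self, name, **kwargs):
        return self._tools[name](**kwargs)            # step 6: execution


class Client:
    """Maintains a 1:1 connection with a single server."""
    def __init__(self, server):
        self.server = server                          # step 3: connection

    def discover(self):
        return self.server.list_tools()

    def invoke(self, name, **kwargs):
        return self.server.call_tool(name, **kwargs)  # step 5: invocation


class Host:
    """Orchestrates user requests across one or more clients."""
    def __init__(self, servers):
        self.clients = [Client(s) for s in servers]

    def handle(self, query, location):
        for client in self.clients:                   # step 2: decide what's needed
            if "get_weather" in client.discover():
                result = client.invoke("get_weather", location=location)
                return f"Answer to {query!r}: {result}"  # step 7: integration
        return "No capable server found"


host = Host([Server()])
print(host.handle("What's the weather?", location="Paris"))  # step 1: user input
```

Note how the Host never talks to a Server directly: every interaction passes through a Client, which is exactly the separation of responsibilities MCP prescribes.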

> [!TIP]
> As we discussed in the previous section, this modularity transforms the traditional M×N integration problem (M AI applications connecting to N tools/services) into a more manageable M+N problem, where each Host and Server needs to implement the MCP standard only once.

The architecture might appear simple, but its power lies in the standardization of the communication protocol and the clear separation of responsibilities between components. This design allows for a cohesive ecosystem where AI models can seamlessly connect with an ever-growing array of external tools and data sources.

## Conclusion

These interaction patterns are guided by several key principles that shape the design and evolution of MCP. The protocol emphasizes **standardization** by providing a universal protocol for AI connectivity, while maintaining **simplicity** by keeping the core protocol straightforward yet enabling advanced features. **Safety** is prioritized by requiring explicit user approval for sensitive operations, and **discoverability** allows clients to learn a server's capabilities at runtime. The protocol is built with **extensibility** in mind, supporting evolution through versioning and capability negotiation, and ensures **interoperability** across different implementations and environments.

In the next section, we'll explore the communication protocol that enables these components to work together effectively.

<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit1/architectural-components.mdx" />

### Unit 3 Solution Walkthrough: Building a Pull Request Agent with MCP
https://huggingface.co/learn/mcp-course/unit3/build-mcp-server-solution-walkthrough.md

# Unit 3 Solution Walkthrough: Building a Pull Request Agent with MCP

## Overview

This walkthrough guides you through the complete solution for Unit 3's Pull Request Agent - an MCP server that helps developers create better pull requests by analyzing code changes, monitoring CI/CD pipelines, and automating team communications. The solution demonstrates all three MCP primitives (Tools, Resources, and Prompts) working together in a real-world workflow.

## Architecture Overview

The PR Agent consists of interconnected modules that progressively build a complete automation system:

1. **Build MCP Server** - Basic server with Tools for PR template suggestions
2. **Smart File Analysis** - Enhanced analysis using Resources for project context
3. **GitHub Actions Integration** - CI/CD monitoring with standardized Prompts
4. **Hugging Face Hub Integration** - Model deployment and dataset PR workflows
5. **Slack Notification** - Team communication integrating all MCP primitives

## Module 1: Build MCP Server

### What We're Building
A minimal MCP server that analyzes file changes and suggests appropriate PR templates using MCP Tools.

### Key Components

#### 1. Server Initialization (`server.py`)
```python
# The server registers three essential tools:
# - analyze_file_changes: Returns structured data about changed files
# - get_pr_templates: Lists available templates with metadata
# - suggest_template: Provides intelligent template recommendations
```

The server uses the MCP SDK to expose these tools to Claude Code, allowing it to gather information and make intelligent decisions about which PR template to use.

#### 2. File Analysis Tool
The `analyze_file_changes` tool examines the git diff to identify:
- File types and extensions
- Number of files changed
- Lines added/removed
- Common patterns (tests, configs, docs)

This structured data enables Claude to understand the nature of the changes without hard-coding decision logic.
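A rough sketch of the kind of structured data such a tool might return is shown below. The function name matches the tool described above, but the implementation is hypothetical: a real version would parse actual `git diff` output, whereas this one starts from a plain list of changed paths.

```python
from collections import Counter
from pathlib import PurePosixPath

def analyze_file_changes(changed_files):
    """Return structured facts about a set of changed files (illustrative)."""
    extensions = Counter(PurePosixPath(f).suffix or "(none)" for f in changed_files)
    return {
        "total_files": len(changed_files),
        "extensions": dict(extensions),
        "has_tests": any("test" in f for f in changed_files),
        "has_docs": any(f.endswith(".md") for f in changed_files),
    }

analysis = analyze_file_changes(
    ["src/api.py", "tests/test_api.py", "docs/usage.md"]
)
print(analysis)
```

Returning facts like these, rather than a template verdict, is what lets Claude do the reasoning itself.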

#### 3. Template Management
Templates are stored as markdown files in the `templates/` directory:
- `bug.md` - For bug fixes
- `feature.md` - For new features
- `docs.md` - For documentation updates
- `refactor.md` - For code refactoring

Each template includes placeholders that Claude can fill based on the analysis.
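Placeholder filling can be sketched with Python's standard-library `string.Template`. The placeholder names (`$summary`, `$changes`) and content here are made up; the course's actual templates define their own fields.

```python
from string import Template

# Hypothetical bug-fix template with placeholders to be filled from the analysis.
bug_template = Template(
    "## Bug Fix\n"
    "\n"
    "### Summary\n"
    "$summary\n"
    "\n"
    "### Changes\n"
    "$changes\n"
)

pr_body = bug_template.substitute(
    summary="Fix off-by-one error in pagination",
    changes="- Corrected page index calculation in `paginate()`",
)
print(pr_body)
```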

### How Claude Uses These Tools

1. Claude calls `analyze_file_changes` to understand what changed
2. Uses `get_pr_templates` to see available options
3. Calls `suggest_template` with the analysis data
4. Receives a recommendation with reasoning
5. Can customize the template based on specific changes

### Learning Outcomes
- Understanding tool registration and schemas
- Letting Claude make decisions with structured data
- Separation of data gathering from decision logic

## Module 2: Smart File Analysis

### What We're Building
Enhanced file analysis using MCP Resources to provide project context and team guidelines.

### Key Components

#### 1. Resource Registration
The server exposes four types of resources:
```python
# Resources provide read-only access to:
# - file://templates/ - PR template files
# - file://project-context/ - Coding standards, conventions
# - git://recent-changes/ - Commit history and patterns
# - team://guidelines/ - Review processes and standards
```

#### 2. Project Context Resources
The `project-context/` directory contains:
- `coding-standards.md` - Language-specific conventions
- `review-guidelines.md` - What reviewers look for
- `architecture.md` - System design patterns
- `dependencies.md` - Third-party library policies

Claude can read these to understand project-specific requirements.

#### 3. Git History Analysis
The `git://recent-changes/` resource provides:
- Recent commit messages and patterns
- Common PR titles and descriptions
- Team member contribution patterns
- Historical template usage

This helps Claude suggest templates consistent with team practices.

### How Claude Uses Resources

1. Reads `team://guidelines/review-process.md` to understand PR requirements
2. Accesses `file://project-context/coding-standards.md` for style guides
3. Analyzes `git://recent-changes/` to match team patterns
4. Combines this context with file analysis for better suggestions

### Enhanced Decision Making
With resources, Claude can now:
- Suggest templates matching team conventions
- Include project-specific requirements in PRs
- Reference coding standards in descriptions
- Align with historical team practices

### Learning Outcomes
- Resource URI design and schemas
- Making project knowledge accessible to AI
- Context-aware decision making
- Balancing automation with team standards

## Module 3: GitHub Actions Integration

### What We're Building
Real-time CI/CD monitoring using webhooks and standardized prompts for consistent team communication.

### Key Components

#### 1. Webhook Server
Uses Cloudflare Tunnel to receive GitHub Actions events:
```python
# Webhook endpoint handles:
# - workflow_run events
# - check_run events  
# - pull_request status updates
# - deployment notifications
```

#### 2. Prompt Templates
Four standardized prompts ensure consistency:
- **"Analyze CI Results"** - Process test failures and build errors
- **"Generate Status Summary"** - Create human-readable status updates
- **"Create Follow-up Tasks"** - Suggest next steps based on results
- **"Draft Team Notification"** - Format updates for different audiences

#### 3. Event Processing Pipeline
1. Receive webhook from GitHub
2. Parse event data and extract relevant information
3. Use appropriate prompt based on event type
4. Generate standardized response
5. Store for team notification

### How Claude Uses Prompts

Example prompt usage:
```python
# When tests fail, Claude uses the "Analyze CI Results" prompt:
prompt_data = {
    "event_type": "workflow_run",
    "status": "failure",
    "failed_jobs": ["unit-tests", "lint"],
    "error_logs": "...",
    "pr_context": {...}
}

# Claude generates:
# - Root cause analysis
# - Suggested fixes
# - Impact assessment
# - Next steps
```

### Standardized Workflows
Prompts ensure that regardless of who's working:
- CI failures are analyzed consistently
- Status updates follow team formats
- Follow-up actions align with processes
- Notifications contain required information

### Learning Outcomes
- Webhook integration patterns
- Prompt engineering for consistency
- Event-driven architectures
- Standardizing team workflows

## Module 4: Hugging Face Hub Integration

### What We're Building
Integration with Hugging Face Hub for LLM and dataset PRs, adding specialized workflows for teams working with language models.

### Key Components

#### 1. Hub-Specific Tools
```python
# Tools for Hugging Face workflows:
# - analyze_model_changes: Detect LLM file modifications
# - validate_dataset_format: Check training data compliance
# - generate_model_card: Create/update model documentation
# - suggest_hub_template: PR templates for LLMs/datasets
```

#### 2. Hub Resources
```python
# Resources for Hub context:
# - hub://model-cards/ - LLM card templates and examples
# - hub://dataset-formats/ - Training data specifications
# - hub://community-standards/ - Hub community guidelines
# - hub://license-info/ - License compatibility checks
```

#### 3. LLM-Specific Prompts
```python
# Prompts for LLM workflows:
# - "Analyze Model Changes" - Understand LLM updates
# - "Generate Benchmark Summary" - Create evaluation metrics
# - "Check Dataset Quality" - Validate training data
# - "Draft Model Card Update" - Update documentation
```

### Hub-Specific Workflows

When a PR modifies LLM files:
1. **Tool**: `analyze_model_changes` detects model architecture changes
2. **Resource**: Reads `hub://model-cards/llm-template.md`
3. **Prompt**: "Generate Benchmark Summary" creates evaluation section
4. **Tool**: `generate_model_card` updates documentation
5. **Resource**: Checks `hub://license-info/` for compatibility

### Dataset PR Handling
For training data updates:
- Validates format consistency
- Checks data quality metrics
- Updates dataset cards
- Suggests appropriate reviewers

### Learning Outcomes
- Hugging Face Hub API integration
- LLM-specific PR workflows
- Model and dataset documentation
- Community standards compliance

## Module 5: Slack Notification

### What We're Building
Automated team notifications combining Tools, Resources, and Prompts for complete workflow automation.

### Key Components

#### 1. Communication Tools
```python
# Three tools for team updates:
# - send_slack_message: Post to team channels
# - get_team_members: Identify who to notify
# - track_notification_status: Monitor delivery
```

#### 2. Team Resources
```python
# Resources for team data:
# - team://members/ - Developer profiles and preferences
# - slack://channels/ - Channel configurations
# - notification://templates/ - Message formats
```

#### 3. Notification Prompts
```python
# Prompts for communication:
# - "Format Team Update" - Style messages appropriately
# - "Choose Communication Channel" - Select right audience
# - "Escalate if Critical" - Handle urgent issues
```

### Integration Example

When CI fails on a critical PR:
1. **Tool**: `get_team_members` identifies the PR author and reviewers
2. **Resource**: `team://members/{user}/preferences` checks notification settings
3. **Prompt**: "Format Team Update" creates appropriate message
4. **Tool**: `send_slack_message` delivers to right channel
5. **Resource**: `notification://templates/ci-failure` ensures consistent format
6. **Prompt**: "Escalate if Critical" determines if additional alerts needed

### Intelligent Routing
The system considers:
- Team member availability (from calendar resources)
- Notification preferences (email vs Slack)
- Message urgency (based on PR labels)
- Time zones and working hours

### Learning Outcomes
- Primitive integration patterns
- Complex workflow orchestration
- Balancing automation with human needs
- Production-ready error handling

## Complete Workflow Example

Here's how all components work together for a typical PR:

1. **Developer creates PR**
   - GitHub webhook triggers the server
   - Tool: `analyze_file_changes` examines the diff
   - Resource: Reads team guidelines and project context
   - Prompt: Suggests optimal PR template

2. **CI/CD Pipeline Runs**
   - Webhook receives workflow events
   - Prompt: "Analyze CI Results" processes outcomes
   - Resource: Checks team escalation policies
   - Tool: Updates PR status in GitHub

3. **Hugging Face Hub Integration**
   - Tool: Detects LLM/dataset changes
   - Resource: Reads Hub guidelines
   - Prompt: Generates model card updates
   - Tool: Validates against Hub standards

4. **Team Notification**
   - Tool: Identifies relevant team members
   - Resource: Reads notification preferences
   - Prompt: Formats appropriate message
   - Tool: Sends via Slack channels

5. **Follow-up Actions**
   - Prompt: "Create Follow-up Tasks" generates next steps
   - Tool: Creates GitHub issues if needed
   - Resource: Links to documentation
   - All primitives work together seamlessly

## Testing Strategy

### Unit Tests
Each module includes comprehensive unit tests:
- Tool schema validation
- Resource URI parsing
- Prompt template rendering
- Integration scenarios

### Integration Tests
End-to-end tests cover:
- Complete PR workflow
- Error recovery scenarios
- Performance under load
- Security validation

### Test Structure
```
tests/
├── unit/
│   ├── test_tools.py
│   ├── test_resources.py
│   ├── test_prompts.py
│   └── test_integration.py
├── integration/
│   ├── test_workflow.py
│   ├── test_webhooks.py
│   └── test_notifications.py
└── fixtures/
    ├── sample_events.json
    └── mock_responses.json
```

## Running the Solution

### Local Development Setup
1. **Start the MCP server**: `python server.py`
2. **Configure Claude Code**: Add server to MCP settings
3. **Set up Cloudflare Tunnel**: `cloudflared tunnel --url http://localhost:3000`
4. **Configure webhooks**: Add tunnel URL to GitHub repository
5. **Test the workflow**: Create a PR and watch the automation

### Configuration
Simple file-based configuration for easy setup:
- GitHub tokens in `.env` file
- Slack webhooks in config
- Template customization in `templates/`
- All settings in one place

## Common Patterns and Best Practices

### Tool Design
- Keep tools focused and single-purpose
- Return structured data for AI interpretation
- Include comprehensive error messages
- Version your tool schemas

### Resource Organization
- Use clear URI hierarchies
- Implement resource discovery
- Cache frequently accessed resources
- Version control all resources

### Prompt Engineering
- Make prompts specific but flexible
- Include context and examples
- Test with various inputs
- Maintain prompt libraries

### Integration Patterns
- Use events for loose coupling
- Implement circuit breakers
- Add retries with backoff
- Monitor all external calls

## Troubleshooting Guide

### Common Issues

1. **Webhook not receiving events**
   - Check Cloudflare Tunnel is running
   - Verify GitHub webhook configuration
   - Confirm secret matches

2. **Tools not appearing in Claude**
   - Validate tool schemas
   - Check server registration
   - Review MCP connection

3. **Resources not accessible**
   - Verify file permissions
   - Check URI formatting
   - Confirm resource registration

4. **Prompts producing inconsistent results**
   - Review prompt templates
   - Check context provided
   - Validate input formatting
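For the first issue, "confirm secret matches" can be checked programmatically: GitHub signs every webhook delivery with HMAC-SHA256 and sends the result in the `X-Hub-Signature-256` header. A minimal verification sketch:

```python
import hashlib
import hmac

def verify_github_signature(secret: str, body: bytes, signature_header: str) -> bool:
    """Check GitHub's X-Hub-Signature-256 header against the raw request body."""
    expected = "sha256=" + hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    # compare_digest does a constant-time comparison to avoid timing leaks
    return hmac.compare_digest(expected, signature_header)
```

In your webhook handler, pass the raw (unparsed) request body and the header value; if this returns `False`, the configured secret does not match GitHub's.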

## Next Steps and Extensions

### Potential Enhancements
1. Add more code analysis tools (complexity, security)
2. Integrate with more communication platforms
3. Add custom workflow definitions
4. Implement PR auto-merge capabilities

### Learning Path
- **Next**: Unit 4 - Deploy this server remotely
- **Advanced**: Custom MCP protocol extensions
- **Expert**: Multi-server orchestration

## Conclusion

This PR Agent demonstrates the power of MCP's three primitives working together. Tools provide capabilities, Resources offer context, and Prompts ensure consistency. Combined, they create an intelligent automation system that enhances developer productivity while maintaining team standards.

The modular architecture ensures each component can be understood, tested, and extended independently, while the integration showcases real-world patterns you'll use in production MCP servers.

<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit3/build-mcp-server-solution-walkthrough.mdx" />

### Module 1: Build MCP Server
https://huggingface.co/learn/mcp-course/unit3/build-mcp-server.md

# Module 1: Build MCP Server

## The PR Chaos at CodeCraft Studios

It's your first week at CodeCraft Studios, and you're witnessing something that makes every developer cringe. The team's pull requests look like this:

- "stuff" 
- "more changes"
- "fix"
- "update things"

Meanwhile, the code review backlog is growing because reviewers can't understand what changed or why. Sarah from the backend team spent 30 minutes trying to figure out what "various improvements" actually meant, while Mike from frontend had to dig through 47 files to understand a "small fix."

The team knows they need better PR descriptions, but everyone's too busy shipping features to write detailed explanations. They need a solution that helps without slowing them down.

**Your mission**: Build an intelligent PR Agent that analyzes code changes and suggests helpful descriptions automatically.

### Screencast: The PR Problem in Action 😬

<Youtube id="tskAUPWFPP0" />

**What You'll See**: A real PR at CodeCraft Studios titled "various improvements" and the description simply says "Fixed some stuff and made updates". Classic, right?

**The Confusion**: Watch as teammates struggle:
- **Sarah** (3 hours ago): "What was fixed? I see changes to the User model but can't tell if this is addressing a bug or adding features"
- **Jamie** (3 hours ago): "There are 8 files across 4 services... are these changes related? What should I focus on during review?"

**The Pain Point**: The screencast shows the actual diff—8 files scattered across multiple services with zero context. Reviewers have to piece together the story themselves, wasting precious time and possibly missing critical issues.

**Why This Matters**: This is exactly the PR chaos your MCP server will solve! By the end of this module, you'll turn these cryptic PRs into clear, actionable descriptions that make everyone's life easier.

## What You'll Build

In this first module, you'll create the foundation of CodeCraft Studios' automation system: an MCP server that transforms how the team writes pull requests. This module focuses on core MCP concepts that you'll build upon in Modules 2 and 3.

### Screencast: Your PR Agent Saves the Day! 🚀

<Youtube id="OaAWJLvnlqc" />

**The Solution in Action**: Watch how your MCP server will transform PR chaos into clarity:
1. **`analyze_file_changes`** - Grabs all the changes (453 lines across 8 files!)
2. **`get_pr_templates`** - Shows Claude the 7 templates to choose from
3. **`suggest_template`** - Claude picks "Feature" (smart choice!)

**What You'll See**: Claude doesn't just pick a template—it:
- Writes a clear summary of what actually changed
- Spots security issues (yikes, unhashed passwords!)
- Creates a nice to-do list for follow-up work
- Even prioritizes what needs fixing first

**The "Wow" Moment** ✨: In just seconds, your MCP server helps Claude transform the same branch into a PR that actually explains what's going on. No more confused reviewers, no more "what does this do?" comments.

**This is what you'll build**: A tool that turns PR dread into PR delight—let's get started!

## What You Will Learn

In this foundational module, you'll master:
- **How to create a basic MCP server using FastMCP** - The building blocks for Modules 2 and 3
- **Implementing MCP Tools for data retrieval and analysis** - The core primitive you'll use throughout Unit 3  
- **Letting Claude make intelligent decisions based on raw data** - A key principle for all MCP development
- **Testing and validating your MCP server** - Essential skills for building reliable tools

## Overview

Your PR Agent will solve CodeCraft Studios' problem using a key principle of MCP development: instead of hard-coding rigid rules about what makes a good PR, you'll provide Claude with raw git data and let it intelligently suggest appropriate descriptions.

This approach works because:
- **Flexible analysis**: Claude can understand context that simple rules miss
- **Natural language**: Suggestions feel human, not robotic
- **Adaptable**: Works for any codebase or coding style

You'll implement three essential tools that establish patterns for the entire automation system:

1. **analyze_file_changes** - Retrieves git diff information and changed files (data collection)
2. **get_pr_templates** - Lists available PR templates (resource management)  
3. **suggest_template** - Allows Claude to recommend the most appropriate template (intelligent decision-making)

## Getting Started

### Prerequisites

- Python 3.10 or higher
- Git installed and a git repository to test with
- uv package manager ([installation guide](https://docs.astral.sh/uv/getting-started/installation/))

### Starter Code

Clone the starter code repository:

```bash
git clone https://github.com/huggingface/mcp-course.git
```

Navigate to the starter code directory:

```bash
cd mcp-course/projects/unit3/build-mcp-server/starter
```

Install dependencies:

> [!TIP]
> You might want to create a virtual environment for this project:
>
> ```bash
> uv venv .venv
> source .venv/bin/activate # On Windows use: .venv\Scripts\activate
> ```

```bash
uv sync --all-extras
```

### Your Task

This is your first hands-on MCP development experience! Open `server.py` and implement the three tools following the TODO comments. The starter code provides the basic structure - you need to:

1. **Implement `analyze_file_changes`** to run git commands and return diff data
   - ⚠️ **Important**: You'll likely hit a token limit error (25,000 tokens max per response)
   - This is a real-world constraint that teaches proper output management
   - See the "Handling Large Outputs" section below for the solution
   - ⚠️ **Note**: Git commands will run in the MCP server's directory by default. See "Working Directory Considerations" below for details
2. **Implement `get_pr_templates`** to manage and return PR templates  
3. **Implement `suggest_template`** to map change types to templates

Don't worry about making everything perfect - you'll refine these skills as you progress through the unit.

### Design Philosophy

Unlike traditional systems that categorize changes based on file extensions or rigid patterns, your implementation should:

- Provide Claude with raw git data (diffs, file lists, statistics)
- Let Claude analyze the actual code changes
- Allow Claude to make intelligent template suggestions
- Keep the logic simple - Claude handles the complexity

> [!TIP]
> **MCP Philosophy**: Instead of building complex logic into your tools, provide Claude with rich data and let its intelligence make the decisions. This makes your code simpler and more flexible than traditional rule-based systems.

## Testing Your Implementation

### 1. Validate Your Code

Run the validation script to check your implementation:

```bash
uv run python validate_starter.py
```

### 2. Run Unit Tests

Test your implementation with the provided test suite:

```bash
uv run pytest test_server.py -v
```

### 3. Test with Claude Code

Configure your server directly in Claude Code:

```bash
# Add the MCP server to Claude Code
claude mcp add pr-agent -- uv --directory /absolute/path/to/starter run server.py

# Verify the server is configured
claude mcp list
```

Then:
1. Make some changes in a git repository
2. Ask Claude: "Can you analyze my changes and suggest a PR template?"
3. Watch Claude use your tools to provide intelligent suggestions

> [!WARNING]
> **Common first error**: If you get "MCP tool response exceeds maximum allowed tokens (25000)", this is expected! Large repositories can generate massive diffs. This is a valuable learning moment - see the "Handling Large Outputs" section for the solution.

## Common Patterns

### Tool Implementation Pattern

```python
@mcp.tool()
async def tool_name(param1: str, param2: bool = True) -> str:
    """Tool description for Claude.
    
    Args:
        param1: Description of parameter
        param2: Optional parameter with default
    """
    # Your implementation
    result = {"key": "value"}
    return json.dumps(result)
```

### Error Handling

Always handle potential errors gracefully:

```python
try:
    result = subprocess.run(["git", "diff"], capture_output=True, text=True)
    # subprocess.run does not raise on a non-zero exit code unless check=True,
    # so surface git failures explicitly
    if result.returncode != 0:
        return json.dumps({"error": result.stderr.strip()})
    return json.dumps({"output": result.stdout})
except Exception as e:
    return json.dumps({"error": str(e)})

> [!WARNING]
> **Error Handling**: Always return valid JSON from your tools, even for errors. Claude needs structured data to understand what went wrong and provide helpful responses to users.

### Handling Large Outputs (Critical Learning Moment!)

> [!WARNING]
> **Real-world constraint**: MCP tools have a token limit of 25,000 tokens per response. Large git diffs can easily exceed this limit 10x or more! This is a critical lesson for production MCP development.

When implementing `analyze_file_changes`, you'll likely encounter this error:
```
Error: MCP tool response (262521 tokens) exceeds maximum allowed tokens (25000)
```

**Why this happens:**
- A single file change can be thousands of lines
- Enterprise repositories often have massive refactorings
- Git diffs include full context by default
- JSON encoding adds overhead

This teaches us an important principle: **Always design tools with output limits in mind**. Here's the solution:

```python
@mcp.tool()
async def analyze_file_changes(base_branch: str = "main", 
                              include_diff: bool = True,
                              max_diff_lines: int = 500) -> str:
    """Analyze file changes with smart output limiting.
    
    Args:
        base_branch: Branch to compare against
        include_diff: Whether to include the actual diff
        max_diff_lines: Maximum diff lines to include (default 500)
    """
    try:
        # Get the diff
        result = subprocess.run(
            ["git", "diff", f"{base_branch}...HEAD"],
            capture_output=True, 
            text=True
        )
        
        diff_output = result.stdout
        diff_lines = diff_output.split('\n')
        
        # Smart truncation if needed
        if len(diff_lines) > max_diff_lines:
            truncated_diff = '\n'.join(diff_lines[:max_diff_lines])
            truncated_diff += f"\n\n... Output truncated. Showing {max_diff_lines} of {len(diff_lines)} lines ..."
            diff_output = truncated_diff
        
        # Get summary statistics
        stats_result = subprocess.run(
            ["git", "diff", "--stat", f"{base_branch}...HEAD"],
            capture_output=True,
            text=True
        )
        
        # List just the changed file names (the original snippet called a
        # self._get_changed_files helper, but this is a module-level function
        # with no self, so gather the names directly)
        files_result = subprocess.run(
            ["git", "diff", "--name-only", f"{base_branch}...HEAD"],
            capture_output=True,
            text=True
        )
        
        return json.dumps({
            "stats": stats_result.stdout,
            "total_lines": len(diff_lines),
            "diff": diff_output if include_diff else "Use include_diff=true to see diff",
            "files_changed": files_result.stdout.strip().splitlines()
        })
        
    except Exception as e:
        return json.dumps({"error": str(e)})
```

**Best practices for large outputs:**
1. **Implement pagination**: Break large results into pages
2. **Add filtering options**: Let users request specific files or directories
3. **Provide summaries first**: Return statistics before full content
4. **Use progressive disclosure**: Start with high-level info, allow drilling down
5. **Set sensible defaults**: Default to reasonable limits that work for most cases
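The pagination practice can be reduced to a small helper that any tool can wrap its results in. Names here are illustrative, not part of the starter code:

```python
import json

def paginate(items: list, page: int = 1, page_size: int = 50) -> str:
    """Return one page of a large result set, plus paging metadata as JSON."""
    start = (page - 1) * page_size
    chunk = items[start:start + page_size]
    return json.dumps({
        "page": page,
        "page_size": page_size,
        "total_items": len(items),
        "has_more": start + page_size < len(items),  # tells Claude to ask for the next page
        "items": chunk,
    })
```

A tool exposing `page` and `page_size` parameters lets Claude drill into large diffs incrementally instead of blowing past the token limit.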

## Working Directory Considerations

By default, MCP servers run commands in their installation directory, not in Claude's current working directory. This means your git commands might analyze the wrong repository! 

To solve this, MCP provides [roots](https://modelcontextprotocol.io/docs/concepts/roots) - a way for clients to inform servers about relevant directories. Claude Code automatically provides its working directory as a root.

Here's how to access it in your tool:

```python
@mcp.tool()
async def analyze_file_changes(...):
    # Get Claude's working directory from roots
    context = mcp.get_context()
    roots_result = await context.session.list_roots()
    
    # Extract the path from the FileUrl object
    working_dir = roots_result.roots[0].uri.path
    
    # Use it for all git commands
    result = subprocess.run(
        ["git", "diff", "--name-status"],
        capture_output=True,
        text=True,
        cwd=working_dir  # Run in Claude's directory!
    )
```

This ensures your tools operate on the repository Claude is actually working with, not the MCP server's installation location.

## Troubleshooting

- **Import errors**: Ensure you've run `uv sync`
- **Git errors**: Make sure you're in a git repository
- **No output**: MCP servers communicate via stdio, so running `server.py` directly shows nothing - test through Claude Code instead
- **JSON errors**: All tools must return valid JSON strings
- **Token limit exceeded**: This is expected with large diffs! Implement output limiting as shown above
- **"Response too large" errors**: Add `max_diff_lines` parameter or set `include_diff=false`
- **Git commands run in wrong directory**: MCP servers run in their installation directory by default, not Claude's working directory. To fix this, use [MCP roots](https://modelcontextprotocol.io/docs/concepts/roots) to access Claude's current directory:
  ```python
  # Get Claude's working directory from roots
  context = mcp.get_context()
  roots_result = await context.session.list_roots()
  working_dir = roots_result.roots[0].uri.path  # FileUrl object has .path property
  
  # Use it in subprocess calls
  subprocess.run(["git", "diff"], cwd=working_dir)
  ```
  Claude Code automatically provides its working directory as a root, allowing your MCP server to operate in the correct location.

## Next Steps

Congratulations! You've built your first MCP server with Tools - the foundation for everything that follows in Unit 3.

### What you've accomplished in Module 1:
- **Created MCP Tools** that provide Claude with structured data
- **Implemented the core MCP philosophy** - let Claude make intelligent decisions from raw data
- **Built a practical PR Agent** that can analyze code changes and suggest templates
- **Learned about real-world constraints** - the 25,000 token limit and how to handle it
- **Established testing patterns** with validation scripts and unit tests

### Key patterns you can reuse:
- **Data collection tools** that gather information from external sources
- **Intelligent analysis** where Claude processes raw data to make decisions  
- **Output management** - truncating large responses while preserving usefulness
- **Error handling** that returns structured JSON responses
- **Testing strategies** for MCP server development

### What to do next:
1. **Review the solution** in `/projects/unit3/build-mcp-server/solution/` to see different implementation approaches
2. **Compare your implementation** with the provided solution - there's no single "right" way to solve the problem
3. **Test your tools thoroughly** - try them with different types of code changes to see how Claude adapts
4. **Move on to Module 2** where you'll add real-time webhook capabilities and learn about MCP Prompts for workflow standardization

Module 2 will build directly on the server you created here, adding dynamic event handling to complement your static file analysis tools!

### The story continues...
With your PR Agent working, CodeCraft Studios developers are already writing better pull requests. But next week, you'll face a new challenge: critical CI/CD failures are slipping through unnoticed. Module 2 will add real-time monitoring to catch these issues before they reach production.

## Additional Resources

- [MCP Documentation](https://modelcontextprotocol.io/)
- [FastMCP Guide](https://modelcontextprotocol.io/quickstart/server)
- Solution walkthrough: `unit3/build-mcp-server-solution-walkthrough.md`

<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit3/build-mcp-server.mdx" />

### Module 3: Slack Notification
https://huggingface.co/learn/mcp-course/unit3/slack-notification.md

# Module 3: Slack Notification

## The Communication Gap Crisis

Week 3 at CodeCraft Studios. Your automation system is already transforming how the team works:
- **PR Agent** (Module 1): Developers are writing clear, helpful pull request descriptions
- **CI/CD Monitor** (Module 2): The team catches test failures immediately, preventing bugs from reaching production

The team is feeling much more confident... until Monday morning brings a new crisis.

The frontend team (Emma and Jake) spent the entire weekend debugging a nasty API integration issue. They tried everything: checked their network calls, validated request formats, even rewrote the error handling. Finally, at 2 AM Sunday, they discovered the backend team had fixed this exact issue on Friday and deployed the fix to staging - but forgot to announce it.

"We wasted 12 hours solving a problem that was already fixed!" Emma says, frustrated.

Meanwhile, the design team finished the new user onboarding flow illustrations last week, but the frontend team didn't know they were ready. Those beautiful assets are still sitting unused while the team ships a temporary design.

The team realizes they have an information silo problem. Everyone's working hard, but they're not communicating effectively about what's happening when.

**Your mission**: Complete the automation system with intelligent Slack notifications that keep the whole team informed about important developments automatically.

## What You'll Build

This final module completes the CodeCraft Studios transformation. You'll integrate Tools and Prompts to create a smart notification system that sends formatted Slack messages about CI/CD events, demonstrating how all MCP primitives work together in a real-world scenario.

Building on the foundation from Modules 1 and 2, you'll add the final piece of the puzzle:
- **Slack webhook tool** for sending messages to your team channel
- **Two notification prompts** that intelligently format CI events
- **Complete integration** showing all MCP primitives working together

### Screencast: The Complete Automation System! 🎉

<Youtube id="sX5qrbDG-oY" />

**The Final Piece**: Watch how your complete automation system prevents those Monday morning surprises that plagued Emma and Jake!

**What You'll See**: 
- **Claude's intelligent workflow** - Notice how Claude breaks down the task: ☐ Check events → ☐ Send notification
- **Real-time MCP tools in action** - `get_recent_actions_events` pulls fresh CI data, then `send_slack_notification` delivers the alert
- **Side-by-side demonstration** - The Slack channel is open in parallel to show the formatted message appearing as Claude sends it

**The Smart Notification**: Claude doesn't just spam the team—it crafts a professional alert with:
- 🚨 Clear urgency indicators and emoji
- **Detailed failure breakdown** (test-auth-service ❌, test-api ❌, test-frontend ⏳)
- **Actionable links** to the pipeline run and pull request
- **Context everyone needs** - repository, PR #1 "various improvements", commit hash

**Why This Matters**: Remember the communication gap crisis? No more! This system ensures that when CI fails on `demo-bad-pr` branch, the whole team knows immediately. No more weekend debugging sessions for issues that were already fixed!

**The Complete Journey**: From Module 1's PR chaos to Module 3's intelligent team notifications—you've built a system that transforms how CodeCraft Studios collaborates. The weekend warriors become informed teammates! 🚀

## Learning Objectives

By the end of this module, you'll understand:
1. How to integrate external APIs with MCP Tools
2. How to combine Tools and Prompts for complete workflows  
3. How to format rich messages using Slack markdown
4. How all MCP primitives work together in practice

## Prerequisites

You'll need everything from the previous modules plus:
- **Completed Modules 1 and 2** - This module directly extends your existing MCP server
- **A Slack workspace** where you can create incoming webhooks (personal workspaces work fine)
- **Basic understanding of REST APIs** - You'll be making HTTP requests to Slack's webhook endpoints

## Key Concepts

### MCP Integration Pattern

This module demonstrates the complete workflow:
1. **Events** → GitHub Actions webhook (from Module 2)
2. **Prompts** → Format events into readable messages
3. **Tools** → Send formatted messages to Slack
4. **Result** → Professional team notifications

### Slack Markdown Formatting

You'll use [Slack's markdown](https://api.slack.com/reference/surfaces/formatting) for rich messages:
- [`*bold text*`](https://api.slack.com/reference/surfaces/formatting#visual-styles) for emphasis
- [`_italic text_`](https://api.slack.com/reference/surfaces/formatting#visual-styles) for details
- [`` `code blocks` ``](https://api.slack.com/reference/surfaces/formatting#inline-code) for technical info
- [`> quoted text`](https://api.slack.com/reference/surfaces/formatting#quotes) for summaries
- [Emoji](https://api.slack.com/reference/surfaces/formatting#emoji): ✅ ❌ 🚀 ⚠️
- [Links](https://api.slack.com/reference/surfaces/formatting#linking-urls): `<https://github.com/user/repo|Repository>`
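These rules combine naturally into small helper functions when composing messages in code. A sketch with illustrative names (not part of the starter):

```python
def slack_link(url: str, text: str) -> str:
    """Format a hyperlink using Slack's <url|text> syntax."""
    return f"<{url}|{text}>"

def failure_alert(workflow: str, branch: str, logs_url: str) -> str:
    """Compose a short CI-failure message in Slack mrkdwn (note *single* asterisks for bold)."""
    return (
        ":rotating_light: *CI Failure Alert* :rotating_light:\n"
        f"*Workflow*: {workflow}\n"
        f"*Branch*: {branch}\n"
        f"*View Details*: {slack_link(logs_url, 'View Logs')}"
    )
```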

## Project Structure

```
slack-notification/
├── starter/          # Your starting point
│   ├── server.py     # Modules 1+2 code + TODOs
│   ├── webhook_server.py  # From Module 2
│   ├── pyproject.toml
│   └── README.md
└── solution/         # Complete implementation
    ├── server.py     # Full Slack integration
    ├── webhook_server.py
    └── README.md
```

## Implementation Steps

### Step 1: Set Up Slack Integration (10 min)

1. Create a Slack webhook:
   - Go to [Slack API Apps](https://api.slack.com/apps)
   - Create new app → "From scratch" ([Creating an app guide](https://api.slack.com/authentication/basics#creating))
   - App Name: "MCP Course Notifications"
   - Choose your workspace
   - Go to "Features" → "[Incoming Webhooks](https://api.slack.com/messaging/webhooks)"
   - [Activate incoming webhooks](https://api.slack.com/messaging/webhooks#enable_webhooks)
   - Click "Add New Webhook to Workspace"
   - Choose channel and authorize ([Webhook setup guide](https://api.slack.com/messaging/webhooks#getting_started))
   - Copy the webhook URL

2. Test webhook works (following [webhook posting examples](https://api.slack.com/messaging/webhooks#posting_with_webhooks)):
   ```bash
   curl -X POST -H 'Content-type: application/json' \
     --data '{"text":"Hello from MCP Course!"}' \
     YOUR_WEBHOOK_URL
   ```

3. Set environment variable:
   ```bash
   export SLACK_WEBHOOK_URL="https://hooks.slack.com/services/YOUR/WEBHOOK/URL"
   ```

   **⚠️ Security Note**: The webhook URL is a sensitive secret that grants permission to post messages to your Slack channel. Always:
   - Store it as an environment variable, never hardcode it in your code
   - Never commit webhook URLs to version control (add to .gitignore)
   - Treat it like a password - anyone with this URL can send messages to your channel

> [!WARNING]
> **Security Alert**: Webhook URLs are sensitive credentials! Anyone with your webhook URL can send messages to your Slack channel. Always store them as environment variables and never commit them to version control.

### Step 2: Add Slack Tool (15 min)

Now that you have a working webhook, you'll add a new MCP tool to your existing server.py from Module 2. This tool will handle sending notifications to Slack by making HTTP requests to the webhook URL.

> [!TIP]
> **Note**: The starter code includes all improvements from Modules 1 & 2 (output limiting, webhook handling). Focus on the new Slack integration!

Add this tool to your server.py:

**`send_slack_notification`**:
- Takes a message string parameter
- Reads webhook URL from environment variable
- Sends POST request to Slack webhook
- Returns success/failure message
- Handles basic error cases

```python
import os
import requests

@mcp.tool()
def send_slack_notification(message: str) -> str:
    """Send a formatted notification to the team Slack channel."""
    webhook_url = os.getenv("SLACK_WEBHOOK_URL")
    if not webhook_url:
        return "Error: SLACK_WEBHOOK_URL environment variable not set"
    
    try:
        # TODO: Send POST request to webhook_url
        # TODO: Include message in JSON payload with "mrkdwn": true
        # TODO: Handle response and return status
        pass
    except Exception as e:
        return f"Error sending message: {str(e)}"
```
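If you get stuck on the TODOs, here is one possible completion for reference. It uses only the standard library's `urllib` instead of `requests`, so treat it as a sketch rather than the official solution (which lives in the `solution/` directory); in your server, keep the `@mcp.tool()` decorator from the starter:

```python
import json
import os
import urllib.request

def send_slack_notification(message: str) -> str:
    """Send a formatted notification to the team Slack channel."""
    webhook_url = os.getenv("SLACK_WEBHOOK_URL")
    if not webhook_url:
        return "Error: SLACK_WEBHOOK_URL environment variable not set"
    try:
        # Include "mrkdwn": true so Slack renders *bold*, _italic_, and <url|text>
        payload = json.dumps({"text": message, "mrkdwn": True}).encode()
        req = urllib.request.Request(
            webhook_url,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=10) as resp:
            if resp.status == 200:
                return "Message sent successfully"
            return f"Error: Slack returned status {resp.status}"
    except Exception as e:
        return f"Error sending message: {str(e)}"
```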

### Step 3: Create Formatting Prompts (15 min)

Next, you'll add MCP Prompts to your server - this is where the magic happens! These prompts will work with Claude to automatically format your GitHub webhook data into well-structured Slack messages. Remember from Module 1 that Prompts provide reusable instructions that Claude can use consistently.

Implement two prompts that generate Slack-formatted messages:

1. **`format_ci_failure_alert`**:
   ```python
   @mcp.prompt()
   def format_ci_failure_alert() -> str:
       """Create a Slack alert for CI/CD failures."""
       return """Format this GitHub Actions failure as a Slack message:

   Use this template:
   :rotating_light: *CI Failure Alert* :rotating_light:
   
   A CI workflow has failed:
   *Workflow*: workflow_name
   *Branch*: branch_name
   *Status*: Failed
   *View Details*: <LOGS_LINK|View Logs>
   
   Please check the logs and address any issues.
   
   Use Slack markdown formatting and keep it concise for quick team scanning."""
   ```

2. **`format_ci_success_summary`**:
   ```python
   @mcp.prompt()
   def format_ci_success_summary() -> str:
       """Create a Slack message celebrating successful deployments."""
       return """Format this successful GitHub Actions run as a Slack message:

   Use this template:
   :white_check_mark: *Deployment Successful* :white_check_mark:
   
   Deployment completed successfully for [Repository Name]
   
   *Changes:*
   - Key feature or fix 1
   - Key feature or fix 2
   
   *Links:*
   <PR_LINK|View Changes>
   
   Keep it celebratory but informative. Use Slack markdown formatting."""
   ```

### Step 4: Test Complete Workflow (10 min)

Now comes the exciting part - testing your complete MCP workflow! You'll have all three components working together: webhook capture from Module 2, prompt formatting from this module, and Slack notifications.

1. Start all services (just like in Module 2, but now with Slack integration):
   ```bash
   # Terminal 1: Start webhook server
   python webhook_server.py
   
   # Terminal 2: Start MCP server
   uv run server.py
   
   # Terminal 3: Start Cloudflare Tunnel  
   cloudflared tunnel --url http://localhost:8080
   ```

2. Test the complete integration with Claude Code:
   - **Configure GitHub webhook** with tunnel URL (same as Module 2)
   - **Push changes** to trigger GitHub Actions 
   - **Ask Claude** to check recent events and format them using your prompts
   - **Let Claude send** the formatted message using your Slack tool
   - **Verify** notifications appear in your Slack channel

### Step 5: Verify Integration (5 min)

You can test your implementation without setting up a real GitHub repository! See `manual_test.md` for curl commands that simulate GitHub webhook events.

**Understanding the webhook event flow:**
- Your webhook server (from Module 2) captures GitHub events and stores them in `github_events.json`
- Your MCP tools read from this file to get recent CI/CD activity  
- Claude uses your formatting prompts to create readable messages
- Your Slack tool sends the formatted messages to your team channel
- This creates a complete pipeline: GitHub → Local Storage → Claude Analysis → Slack Notification

**Quick Test Workflow:**
1. Use curl to send fake GitHub events to your webhook server
2. Ask Claude to check recent events and format them
3. Send formatted messages to Slack
4. Verify everything works end-to-end

**Manual Testing Alternative:** For a complete testing experience without GitHub setup, follow the step-by-step curl commands in `manual_test.md`.
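If you prefer Python over curl for simulating events, a sketch like the following builds a payload shaped like GitHub's `workflow_run` webhook event and posts it to your local webhook server. The `/webhook/github` path is an assumption - check your `webhook_server.py` from Module 2 for the actual route:

```python
import json
import urllib.request

def fake_workflow_run_event(conclusion: str = "failure") -> dict:
    """Build a minimal payload mimicking GitHub's workflow_run event shape."""
    return {
        "action": "completed",
        "workflow_run": {
            "name": "CI",
            "head_branch": "demo-bad-pr",
            "conclusion": conclusion,
            "html_url": "https://github.com/user/mcp-course/actions/runs/123",
        },
    }

def post_event(event: dict, url: str = "http://localhost:8080/webhook/github") -> int:
    """POST the event to the local webhook server; returns the HTTP status code."""
    req = urllib.request.Request(
        url,
        data=json.dumps(event).encode(),
        headers={
            "Content-Type": "application/json",
            "X-GitHub-Event": "workflow_run",  # GitHub identifies event types via this header
        },
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status
```

With the webhook server running, `post_event(fake_workflow_run_event())` should make the event appear in `github_events.json`, ready for Claude to pick up.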

## Example Workflow in Claude Code

```
User: "Check recent CI events and notify the team about any failures"

Claude: 
1. Uses get_recent_actions_events (from Module 2)
2. Finds a workflow failure
3. Uses format_ci_failure_alert prompt to create message
4. Uses send_slack_notification tool to deliver it
5. Reports back: "Sent failure alert to #dev-team channel"
```

## Expected Slack Message Output

**Failure Alert:**
```
🚨 *CI Failure Alert* 🚨

A CI workflow has failed:
*Workflow*: CI (Run #42)
*Branch*: feature/slack-integration
*Status*: Failed
*View Details*: <https://github.com/user/mcp-course/actions/runs/123|View Logs>

Please check the logs and address any issues.
```

**Success Summary:**
```
✅ *Deployment Successful* ✅

Deployment completed successfully for mcp-course

*Changes:*
- Added team notification system
- Integrated MCP Tools and Prompts

*Links:*
<https://github.com/user/mcp-course/pull/42|View Changes>
```

## Common Issues

### Webhook URL Issues
- Verify the environment variable is set correctly
- Test webhook directly with curl before integrating
- Ensure Slack app has proper permissions

### Message Formatting
- [Slack markdown](https://api.slack.com/reference/surfaces/formatting) differs from GitHub markdown
- **Important**: Use `*text*` for bold (not `**text**`)
- Include `"mrkdwn": true` in webhook payload for proper formatting
- Test message formatting manually before automating
- Handle special characters in commit messages properly ([formatting reference](https://api.slack.com/reference/surfaces/formatting#escaping))

### Network Errors
- Add basic timeout handling to webhook requests ([webhook error handling](https://api.slack.com/messaging/webhooks#handling_errors))
- Return meaningful error messages from the tool
- Check internet connectivity if requests fail

## Key Takeaways

You've now built a complete MCP workflow that demonstrates:
- **Tools** for external API integration (Slack webhooks)
- **Prompts** for intelligent message formatting
- **Integration** of all MCP primitives working together
- **Real-world application** that teams can actually use

This shows the power of MCP for building practical development automation tools!

> [!TIP]
> **Key Learning**: You've now built a complete MCP workflow that combines Tools (for external API calls) with Prompts (for consistent formatting). This pattern of Tools + Prompts is fundamental to advanced MCP development and can be applied to many other automation scenarios.

## Next Steps

Congratulations! You've completed the final module of Unit 3 and built a complete end-to-end automation system. Your journey through all three modules has given you hands-on experience with:

- **Module 1**: MCP Tools and intelligent data analysis
- **Module 2**: Real-time webhooks and MCP Prompts
- **Module 3**: External API integration and workflow completion

### What to do next:
1. **Test your complete system** - Try triggering real GitHub events and watch the full pipeline work
2. **Experiment with customization** - Modify the Slack message formats or add new notification types
3. **Review the Unit 3 Conclusion** - Reflect on everything you've learned and explore next steps
4. **Share your success** - Show teammates how MCP can automate your development workflows

You now have a solid foundation for building intelligent automation systems with MCP!

### The transformation is complete!
CodeCraft Studios has gone from chaotic development to a well-oiled machine. The automation system you built handles:
- **Smart PR descriptions** that help reviewers understand changes
- **Real-time CI/CD monitoring** that catches failures before they reach production  
- **Intelligent team notifications** that keep everyone informed automatically

The team can now focus on building great products instead of fighting process problems. And you've learned advanced MCP patterns that you can apply to any automation challenge!

## Additional Resources

- [Slack Incoming Webhooks Documentation](https://api.slack.com/messaging/webhooks)
- [Slack Message Formatting Guide](https://api.slack.com/reference/surfaces/formatting)
- [MCP Tools Documentation](https://modelcontextprotocol.io/docs/concepts/tools)
- [MCP Prompts Guide](https://modelcontextprotocol.io/docs/concepts/prompts)

<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit3/slack-notification.mdx" />

### Unit 3 Conclusion: The CodeCraft Studios Transformation
https://huggingface.co/learn/mcp-course/unit3/conclusion.md

# Unit 3 Conclusion: The CodeCraft Studios Transformation

## Mission Accomplished!

Congratulations! You've successfully transformed CodeCraft Studios from a chaotic startup into a well-oiled development machine. Let's see how far you've come:

### Before Your Automation System:
- ❌ PRs with descriptions like "stuff" and "fix"
- ❌ Critical bugs reaching production undetected  
- ❌ Teams working in silos, duplicating effort
- ❌ Weekend debugging sessions for already-fixed issues

### After Your Automation System:
- ✅ Clear, helpful PR descriptions that save reviewers time
- ✅ Real-time CI/CD monitoring that catches failures immediately
- ✅ Smart team notifications that keep everyone informed
- ✅ Developers focused on building features, not fighting process problems

The CodeCraft Studios team now has a complete automation system that demonstrates what's possible when you combine MCP's flexibility with Claude's intelligence.

## How You Solved Each Challenge

Your three-module journey tackled real problems that every development team faces:

### Module 1: Solved the PR Chaos
*"Help developers write better pull requests without slowing them down"*
- **PR Agent** with intelligent file analysis
- **Core MCP concepts**: Tools, data collection, and Claude integration
- **Design philosophy**: Provide raw data, let Claude make intelligent decisions
- **Result**: Clear PR descriptions that help reviewers understand changes

### Module 2: Caught the Silent Failures  
*"Never let another critical bug slip through unnoticed"*
- **Webhook server** for capturing GitHub Actions events
- **MCP Prompts** for standardized workflow guidance
- **Event storage system** using simple JSON files
- **Result**: Real-time CI/CD monitoring that prevents production issues

### Module 3: Bridged the Communication Gap
*"Keep the whole team informed about what's happening"*
- **Slack integration** for team notifications
- **Message formatting** using Claude's intelligence
- **Tools + Prompts combination** for powerful automation
- **Result**: Smart notifications that eliminate information silos

## Key MCP Concepts You've Learned

### MCP Primitives
- **Tools**: For data access and external API calls
- **Prompts**: For consistent workflow guidance and formatting
- **Integration patterns**: How Tools and Prompts work together

### Architecture Patterns
- **Separation of concerns**: MCP server vs webhook server
- **File-based event storage**: Simple, reliable, testable
- **Claude as the intelligence layer**: Making decisions from raw data

### Development Best Practices
- **Error handling**: Returning structured JSON even for failures
- **Security**: Environment variables for sensitive credentials
- **Testing**: Validation scripts and manual testing workflows
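
The error-handling practice deserves a concrete sketch: a tool that returns structured JSON whether or not the underlying command succeeds (the function name and fields here are illustrative):

```python
import json
import subprocess

def get_diff_stats() -> str:
    """Return `git diff --stat` output as JSON, with a structured error on failure."""
    try:
        result = subprocess.run(
            ["git", "diff", "--stat"], capture_output=True, text=True, check=True
        )
        return json.dumps({"status": "success", "diff": result.stdout})
    except (subprocess.CalledProcessError, FileNotFoundError) as e:
        # Claude can still reason about a failure if it arrives as structured data
        return json.dumps({"status": "error", "message": str(e)})
```

Either branch produces valid JSON, so the calling client never has to parse a raw traceback.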

## Real-World Applications

The patterns you've learned can be applied to many automation scenarios:

> [!TIP]
> **Beyond CI/CD**: The Tools + Prompts pattern works for customer support automation, content moderation, data analysis workflows, and any scenario where you need intelligent processing of external data.

### Common Patterns from Unit 3
1. **Data Collection** → Tools that gather information
2. **Intelligent Analysis** → Claude processes the data
3. **Formatted Output** → Prompts guide consistent presentation
4. **External Integration** → Tools interact with APIs and services
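
As a plain-Python sketch of this pipeline (in a real server these functions would be registered with the MCP SDK decorators; the names and data are made up for illustration):

```python
import json

def fetch_build_results() -> str:
    """Step 1, data collection: a tool returns raw, structured data."""
    events = [{"workflow": "CI", "conclusion": "failure"}]  # stand-in for real events
    return json.dumps(events)

def deployment_summary_prompt() -> str:
    """Step 3, formatted output: a prompt guides how Claude presents the data."""
    return (
        "Call fetch_build_results(), then write a short team update: "
        "lead with any failures, use plain language, and end with next steps."
    )
```

Steps 2 and 4 happen in Claude itself: it analyzes the tool's raw data, then uses other tools to deliver the result to an external service.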

## Next Steps

### Immediate Actions
1. **Experiment** with your workflow automation - try different GitHub events
2. **Extend** the system with additional integrations (Discord, email, etc.)
3. **Share** your MCP server with teammates for real project use

### Advanced Exploration
- **Scale up**: Handle multiple repositories or teams
- **Add persistence**: Use databases for larger event volumes  
- **Create dashboards**: Build web interfaces for your automation
- **Explore other MCP clients**: Beyond Claude Code and Claude Desktop

### Community Involvement
- **Contribute** to the MCP ecosystem with your own servers
- **Share patterns** you discover with the community
- **Build on** existing MCP servers and extend their capabilities

## Key Takeaways

> [!TIP]
> **MCP Philosophy**: The most effective MCP servers don't try to be smart - they provide Claude with rich, structured data and let Claude's intelligence do the heavy lifting. This makes your code simpler and more flexible.

### Technical Insights
- **Simple is powerful**: JSON file storage can handle many use cases
- **Claude as orchestrator**: Let Claude coordinate between your tools
- **Prompts for consistency**: Use prompts to ensure reliable output formats

### Development Insights  
- **Start small**: Build one tool at a time, test thoroughly
- **Think in workflows**: Design tools that work well together
- **Plan for humans**: Your automation should help teams, not replace them

## Resources for Continued Learning

### MCP Documentation
- [Official MCP Protocol](https://modelcontextprotocol.io/)
- [Python SDK Reference](https://github.com/modelcontextprotocol/python-sdk)
- [FastMCP Framework](https://gofastmcp.com/)

### Community Resources
- [MCP Server Directory](https://modelcontextprotocol.io/servers)
- [Example Implementations](https://github.com/modelcontextprotocol)
- [Community Discord](https://discord.gg/modelcontextprotocol)

---

## The CodeCraft Studios Success Story

Three weeks ago, CodeCraft Studios was struggling with:
- Unclear pull requests causing review delays
- Critical bugs slipping into production  
- Teams working in isolation and duplicating effort

Today, they have an intelligent automation system that:
- **Helps developers** write clear, helpful PR descriptions automatically
- **Monitors CI/CD pipelines** and alerts the team to issues immediately  
- **Keeps everyone informed** with smart, contextual team notifications

You've built more than just an MCP server - you've created a solution that transforms how development teams work together.

## Your MCP Journey Continues

The patterns you learned at CodeCraft Studios can solve countless other automation challenges. Whether you're building customer service tools, data analysis pipelines, or any system that needs intelligent processing, you now have the foundation to create powerful, adaptive solutions with MCP.

The future of intelligent automation is in your hands. What will you build next? 🚀

<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit3/conclusion.mdx" />

### Advanced MCP Development: Building Custom Workflow Servers for Claude Code
https://huggingface.co/learn/mcp-course/unit3/introduction.md

# Advanced MCP Development: Building Custom Workflow Servers for Claude Code

Welcome to Unit 3! In this unit, we'll build a practical MCP server that enhances Claude Code with custom development workflows while learning all three MCP primitives.

If you'd like to hear from the creators of MCP, here's a video they made:

<Youtube id="CQywdSdi5iA" />

In this video, Theo Chu, David Soria Parra, and Alex Albert dive into the Model Context Protocol (MCP), the standard that's changing how AI applications connect with external data and tools.

## What You'll Build

**PR Agent Workflow Server** - An MCP server that demonstrates how to make Claude Code team-aware and workflow-intelligent:

- **Smart PR Management**: Automatic PR template selection based on code changes using MCP Tools
- **CI/CD Monitoring**: Track GitHub Actions with Cloudflare Tunnel and standardized Prompts
- **Team Communication**: Slack notifications demonstrating all MCP primitives working together

## Real-World Case Study

We'll implement a practical scenario every development team faces:

**Before**: Developer manually creates PRs, waits for Actions to complete, manually checks results, remembers to notify team members

**After**: Claude Code connected to your workflow server can intelligently:
- Suggest the right PR template based on changed files
- Monitor GitHub Actions runs and provide formatted summaries
- Automatically notify team via Slack when deployments succeed/fail
- Guide developers through team-specific review processes based on Actions results

## Key Learning Outcomes

1. **Core MCP Primitives**: Master Tools and Prompts through practical examples
2. **MCP Server Development**: Build a functional server with proper structure and error handling
3. **GitHub Actions Integration**: Use Cloudflare Tunnel to receive webhooks and process CI/CD events
4. **Hugging Face Hub Workflows**: Create specialized workflows for LLM development teams
5. **Multi-System Integration**: Connect GitHub, Slack, and Hugging Face Hub through MCP
6. **Claude Code Enhancement**: Make Claude understand your team's specific workflows

## MCP Primitives in Action

This unit provides hands-on experience with the core MCP primitives:

- **Tools** (Module 1): Functions Claude can call to analyze files and suggest templates
- **Prompts** (Module 2): Standardized workflows for consistent team processes
- **Integration** (Module 3): All primitives working together for complex automation

## Module Structure

1. **Module 1: Build MCP Server** - Create a basic server with Tools for PR template suggestions
2. **Module 2: GitHub Actions Integration** - Monitor CI/CD with Cloudflare Tunnel and Prompts
3. **Module 3: Slack Notification** - Team communication integrating all MCP primitives

## Prerequisites

Before starting this unit, ensure you have:

- Completion of Units 1 and 2 
- Basic familiarity with GitHub Actions and webhook concepts
- Access to a GitHub repository for testing (can be a personal test repo)
- A Slack workspace where you can create webhook integrations

### Claude Code Installation and Setup

This unit requires Claude Code to test your MCP server integration.

> [!TIP]
> **Installation Required:** This unit requires Claude Code for testing MCP server integration with AI workflows.

**Quick Setup:**

Follow the [official installation guide](https://docs.anthropic.com/en/docs/claude-code/getting-started) to install Claude Code and complete authentication. The key steps are installing via npm, navigating to your project directory, and running `claude` to authenticate through console.anthropic.com.

Once installed, you'll use Claude Code throughout this unit to test your MCP server and interact with the workflow automation you build.

> [!WARNING]
> **New to Claude Code?** If you encounter any setup issues, the [troubleshooting guide](https://docs.anthropic.com/en/docs/claude-code/troubleshooting) covers common installation and authentication problems.

By the end of this unit, you'll have built a complete MCP server that demonstrates how to transform Claude Code into a powerful team development assistant, with hands-on experience using all three MCP primitives.



<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit3/introduction.mdx" />

### Get your certificate!
https://huggingface.co/learn/mcp-course/unit3/certificate.md

# Get your certificate!

Well done! You've completed Unit 3 of the MCP course. Now it's time to take the exam to get your certificate.

Below is a quiz to check your understanding of the unit. 

<iframe
	src="https://mcp-course-unit-3-quiz.hf.space"
	frameborder="0"
	width="850"
	height="450"
></iframe>

> [!TIP]
> If you're struggling to use the quiz above, go to the space directly [on the Hugging Face Hub](https://huggingface.co/spaces/mcp-course/unit_3_quiz). If you find errors, you can report them in the space's [Community tab](https://huggingface.co/spaces/mcp-course/unit_3_quiz/discussions).



<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit3/certificate.mdx" />

### Module 2: GitHub Actions Integration
https://huggingface.co/learn/mcp-course/unit3/github-actions-integration.md

# Module 2: GitHub Actions Integration

## The Silent Failures Strike

Week 2 at CodeCraft Studios. Your PR Agent from Module 1 is already helping developers write better pull requests - Sarah's latest PR had a clear description that saved Mike 20 minutes of investigation time. The team is thrilled!

But then disaster strikes.

A critical bug reaches production on Friday afternoon. The payment system is down, customers are complaining, and the team scrambles to investigate. After two stressful hours, they discover the root cause: a test failure in Tuesday's CI run that nobody noticed.

"How did we miss this?" asks the team lead, scrolling through GitHub Actions. "The tests clearly failed, but with 47 repositories and dozens of daily commits, who has time to check every build?"

The team realizes they need real-time visibility into their CI/CD pipeline, but manually checking GitHub Actions across all their projects isn't scalable. They need automation that watches for problems and alerts them immediately.

**Your mission**: Extend your MCP server with webhook capabilities to monitor GitHub Actions and never let another failure slip through unnoticed.

## What You'll Build

This module bridges the gap between static file analysis (Module 1) and dynamic team notifications (Module 3). You'll add real-time capabilities that transform your PR Agent into a comprehensive development monitoring system.

Building on the foundation you created in Module 1, you'll add:
- **Webhook server** to receive GitHub Actions events
- **New tools** for monitoring CI/CD status
- **MCP Prompts** that provide consistent workflow patterns
- **Real-time integration** with your GitHub repository

### Screencast: Real-Time CI/CD Monitoring in Action! 🎯

<Youtube id="XIEnmCicFXk" />

**The Setup**: Watch how CodeCraft Studios' new system catches failures before they reach production:
1. **GitHub Webhooks** - See the actual webhook configuration that sends events to your server
2. **Failed Tests** - Those red X's that used to go unnoticed? Not anymore!
3. **Local Development** - The webhook server and Cloudflare tunnel working together

**MCP Magic in Real-Time**: Claude responds to three key requests:
- **"What GitHub Actions events have we received?"** - Claude uses your new tools to check recent activity
- **"Analyze CI Results"** - Watch Claude dig into test failures and provide actionable insights
- **"Create Deployment Summary"** - See how MCP Prompts guide Claude to create team-friendly updates

**The Silent Failures No More** 🚨: Remember that critical bug from Tuesday's failed test? With this system, Claude would have caught it immediately. The screencast shows exactly how your MCP server turns GitHub's raw webhook data into clear, actionable alerts.

**What Makes This Special**: Your Module 1 PR Agent was static—it analyzed code when asked. This Module 2 enhancement is dynamic—it watches your CI/CD pipeline 24/7 and helps Claude provide real-time insights. No more Friday afternoon surprises!

## Learning Objectives

By the end of this module, you'll understand:
1. How to run a webhook server alongside an MCP server
2. How to receive and process GitHub webhooks
3. How to create MCP Prompts for standardized workflows
4. How to use Cloudflare Tunnel for local webhook testing

## Prerequisites

You'll build directly on your work from Module 1, so make sure you have:
- **Completed Module 1: Build MCP Server** - You'll be extending that same codebase
- **Basic understanding of GitHub Actions** - You should know what CI/CD workflows are
- **A GitHub repository with Actions enabled** - Even a simple workflow file works fine
- **Cloudflare Tunnel (cloudflared) installed** - This will expose your local webhook server to GitHub

## Key Concepts

### MCP Prompts

Prompts are reusable templates that guide Claude through complex workflows. Unlike Tools (which Claude calls automatically), Prompts are user-initiated and provide structured guidance.

Example use cases:
- Analyzing CI/CD results consistently
- Creating standardized deployment summaries
- Troubleshooting failures systematically

### Webhook Integration

Your MCP server will run two services:
1. The MCP server (communicates with Claude)
2. A webhook server on port 8080 (receives GitHub events)

This allows Claude to react to real-time CI/CD events!

> [!TIP]
> **Architecture Insight**: Running separate services for MCP communication and webhook handling is a clean separation of concerns. The webhook server handles HTTP complexity while your MCP server focuses on data analysis and Claude integration.

## Project Structure

```
github-actions-integration/
├── starter/          # Your starting point
│   ├── server.py     # Module 1 code + TODOs
│   ├── pyproject.toml
│   └── README.md
└── solution/         # Complete implementation
    ├── server.py     # Full webhook + prompts
    ├── pyproject.toml
    └── README.md
```

## Implementation Steps

### Step 1: Set Up and Run Webhook Server

Unlike Module 1 where you worked with existing files, this module introduces real-time event handling. The starter code includes:
- **Your Module 1 implementation** - All your existing PR analysis tools
- **A complete webhook server** (`webhook_server.py`) - Ready to receive GitHub events

1. Install dependencies (same as Module 1):
   ```bash
   uv sync
   ```

2. Start the webhook server (in a separate terminal):
   ```bash
   python webhook_server.py
   ```

This server will receive GitHub webhooks and store them in `github_events.json`.

**How webhook event storage works:**
- Each incoming GitHub webhook (push, pull request, workflow completion, etc.) is appended to the JSON file
- Events are stored with timestamps, making it easy to find recent activity
- The file acts as a simple event log that your MCP tools can read and analyze
- No database required - everything is stored in a simple, readable JSON format

### Step 2: Connect to Event Storage

Now you'll connect your MCP server (from Module 1) to the webhook data. This is much simpler than handling HTTP requests directly - the webhook server does all the heavy lifting and stores events in a JSON file.

Add the path to read webhook events:

```python
# File where webhook server stores events
EVENTS_FILE = Path(__file__).parent / "github_events.json"
```

The webhook server handles all the HTTP details - you just need to read the JSON file! This separation of concerns keeps your MCP server focused on what it does best.

> [!TIP]
> **Development Tip**: Working with files instead of HTTP requests makes testing much easier. You can manually add events to `github_events.json` to test your tools without setting up webhooks.
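
For example, a throwaway script like this seeds the file with a fake failure (the event shape is an assumption about what the provided webhook server stores; check `github_events.json` after a real delivery to confirm):

```python
import json
from datetime import datetime, timezone
from pathlib import Path

events_file = Path("github_events.json")

# Append a fake workflow_run event so the tools can be exercised without GitHub
events = json.loads(events_file.read_text()) if events_file.exists() else []
events.append({
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "workflow_run": {"name": "CI", "status": "completed", "conclusion": "failure"},
})
events_file.write_text(json.dumps(events, indent=2))
print(f"{len(events)} event(s) in {events_file}")
```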

### Step 3: Add GitHub Actions Tools

Just like in Module 1 where you created tools for file analysis, you'll now create tools for CI/CD analysis. These tools will work alongside your existing PR analysis tools, giving Claude a complete view of both code changes and build status.

> [!TIP]
> **Note**: The starter code already includes the output limiting fix from Module 1, so you won't encounter token limit errors. Focus on the new concepts in this module!

Implement two new tools:

1. **`get_recent_actions_events`**: 
   - Read from `EVENTS_FILE`
   - Return the most recent events (up to limit)
   - Return empty list if file doesn't exist

2. **`get_workflow_status`**: 
   - Read all events from file
   - Filter for workflow_run events
   - Group by workflow name and show latest status

These tools let Claude analyze your CI/CD pipeline.
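
One possible shape for the two tools (the field names follow GitHub's `workflow_run` webhook payload; the grouping logic is one reasonable choice, not the official solution):

```python
import json
from pathlib import Path

# File where the provided webhook server stores events
EVENTS_FILE = Path(__file__).parent / "github_events.json"

def get_recent_actions_events(limit: int = 10, events_file: Path = EVENTS_FILE) -> list[dict]:
    """Return the most recent webhook events, oldest first."""
    if not events_file.exists():
        return []
    events = json.loads(events_file.read_text())
    return events[-limit:]

def get_workflow_status(events_file: Path = EVENTS_FILE) -> dict[str, str]:
    """Map each workflow name to the state of its most recent run."""
    status: dict[str, str] = {}
    for event in get_recent_actions_events(limit=10_000, events_file=events_file):
        run = event.get("workflow_run")
        if run:
            # Later events overwrite earlier ones, so the latest run wins
            status[run["name"]] = run.get("conclusion") or run.get("status", "unknown")
    return status
```

In your actual server, each function would also carry the `@mcp.tool()` decorator you used in Module 1.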

### Step 4: Create MCP Prompts

Now you'll add your first MCP Prompts! Unlike Tools (which Claude calls automatically), Prompts are templates that help users interact with Claude consistently. Think of them as "conversation starters" that guide Claude through complex workflows.

While Module 1 focused on Tools for data access, this module introduces Prompts for workflow guidance.

Implement four prompts that demonstrate different workflow patterns:

1. **`analyze_ci_results`**: Comprehensive CI/CD analysis
2. **`create_deployment_summary`**: Team-friendly updates
3. **`generate_pr_status_report`**: Combined code + CI report
4. **`troubleshoot_workflow_failure`**: Systematic debugging

Each prompt should return a string with clear instructions for Claude to follow.
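
One way `analyze_ci_results` might look (the exact wording is up to you; only the return-a-string shape matters):

```python
def analyze_ci_results() -> str:
    """In the server this function carries the @mcp.prompt() decorator."""
    return """Please analyze the recent CI/CD results:

1. Call get_recent_actions_events() to fetch the latest webhook events.
2. Call get_workflow_status() to see each workflow's current state.
3. Identify any failures, naming the workflow and its conclusion.
4. Suggest concrete next steps for the team."""
```

Because prompts are user-initiated, this text only runs when someone invokes the prompt in their MCP client.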

### Step 5: Test with Cloudflare Tunnel

Now for the exciting part - testing your expanded MCP server with real GitHub events! You'll run multiple services together, just like in a real development environment.

1. Start your MCP server (same command as Module 1):
   ```bash
   uv run server.py
   ```

2. In another terminal, start Cloudflare Tunnel:
   ```bash
   cloudflared tunnel --url http://localhost:8080
   ```

3. Configure GitHub webhook with the tunnel URL

4. Test with Claude Code using the prompts

## Exercises

### Exercise 1: Custom Workflow Prompt
Create a new prompt that helps with PR reviews by combining:
- Code changes from Module 1 tools
- CI/CD status from Module 2 tools
- A checklist format for reviewers

### Exercise 2: Event Filtering
Enhance `get_workflow_status` to:
- Filter by workflow conclusion (success/failure)
- Group by repository
- Show time since last run

### Exercise 3: Notification System
Add a tool that:
- Tracks which events have been "seen"
- Highlights new failures
- Suggests which team member to notify

## Common Issues

### Webhook Not Receiving Events
- Ensure Cloudflare Tunnel is running
- Check GitHub webhook settings (should show recent deliveries)
- Verify the payload URL includes `/webhook/github`

### Prompt Not Working
- FastMCP prompts simply return strings
- Make sure your function is decorated with `@mcp.prompt()`

### Webhook Server Issues
- Ensure webhook_server.py is running in a separate terminal
- Check that port 8080 is free: `lsof -i :8080`
- The events file is created automatically when the first event arrives

## Next Steps

Excellent work! You've successfully added real-time capabilities to your MCP server. You now have a system that can:

- **Analyze code changes** (from Module 1) 
- **Monitor CI/CD events in real-time** (from this module)
- **Use MCP Prompts** to provide consistent workflow guidance
- **Handle webhook events** through a clean file-based architecture

### Key achievements in Module 2:
- Built your first webhook integration
- Learned MCP Prompts for workflow standardization  
- Created tools that work with real-time data
- Established patterns for event-driven automation

### What to do next:
1. **Review the solution** in `/projects/unit3/github-actions-integration/solution/` to see different implementation approaches
2. **Experiment with your prompts** - try using them for different types of GitHub events
3. **Test the integration** - combine your Module 1 file analysis tools with Module 2 event monitoring in a single conversation with Claude
4. **Move on to Module 3** - where you'll complete the automation pipeline by adding team notifications through Slack integration

Module 3 will bring everything together into a complete workflow that your team can actually use!

### The story continues...
Your monitoring system is working! CodeCraft Studios now catches CI/CD failures in real-time, and the team feels much more confident about their deployments. But next week brings a new challenge: information silos are causing duplicate work and missed opportunities. Module 3 will complete the automation system with intelligent team notifications that keep everyone in the loop.

## Additional Resources

- [MCP Prompts Documentation](https://modelcontextprotocol.io/docs/concepts/prompts)
- [GitHub Webhooks Guide](https://docs.github.com/en/developers/webhooks-and-events)
- [Cloudflare Tunnel Documentation](https://developers.cloudflare.com/cloudflare-one/connections/connect-apps)

<EditOnGithub source="https://github.com/huggingface/mcp-course/blob/main/units/en/unit3/github-actions-integration.mdx" />
