# Huggingface_Hub

## Docs

- [🤗 Hub client library](https://huggingface.co/docs/huggingface_hub/main/index.md)
- [Installation](https://huggingface.co/docs/huggingface_hub/main/installation.md)
- [Quickstart](https://huggingface.co/docs/huggingface_hub/main/quick-start.md)
- [Git vs HTTP paradigm](https://huggingface.co/docs/huggingface_hub/main/concepts/git_vs_http.md)
- [Migrating to huggingface_hub v1.0](https://huggingface.co/docs/huggingface_hub/main/concepts/migration.md)
- [TensorBoard logger](https://huggingface.co/docs/huggingface_hub/main/package_reference/tensorboard.md)
- [Webhooks Server](https://huggingface.co/docs/huggingface_hub/main/package_reference/webhooks_server.md)
- [HfApi Client](https://huggingface.co/docs/huggingface_hub/main/package_reference/hf_api.md)
- [Managing collections](https://huggingface.co/docs/huggingface_hub/main/package_reference/collections.md)
- [Strict Dataclasses](https://huggingface.co/docs/huggingface_hub/main/package_reference/dataclasses.md)
- [Cache-system reference](https://huggingface.co/docs/huggingface_hub/main/package_reference/cache.md)
- [Jobs](https://huggingface.co/docs/huggingface_hub/main/package_reference/jobs.md)
- [Repository Cards](https://huggingface.co/docs/huggingface_hub/main/package_reference/cards.md)
- [Downloading files](https://huggingface.co/docs/huggingface_hub/main/package_reference/file_download.md)
- [MCP Client](https://huggingface.co/docs/huggingface_hub/main/package_reference/mcp.md)
- [Overview](https://huggingface.co/docs/huggingface_hub/main/package_reference/overview.md)
- [Interacting with Discussions and Pull Requests](https://huggingface.co/docs/huggingface_hub/main/package_reference/community.md)
- [OAuth and FastAPI](https://huggingface.co/docs/huggingface_hub/main/package_reference/oauth.md)
- [Filesystem API](https://huggingface.co/docs/huggingface_hub/main/package_reference/hf_file_system.md)
- [Inference Endpoints](https://huggingface.co/docs/huggingface_hub/main/package_reference/inference_endpoints.md)
- [Authentication](https://huggingface.co/docs/huggingface_hub/main/package_reference/authentication.md)
- [CLI reference](https://huggingface.co/docs/huggingface_hub/main/package_reference/cli.md)
- [Managing your Space runtime](https://huggingface.co/docs/huggingface_hub/main/package_reference/space_runtime.md)
- [Inference types](https://huggingface.co/docs/huggingface_hub/main/package_reference/inference_types.md)
- [Utilities](https://huggingface.co/docs/huggingface_hub/main/package_reference/utilities.md)
- [Inference](https://huggingface.co/docs/huggingface_hub/main/package_reference/inference_client.md)
- [Environment variables](https://huggingface.co/docs/huggingface_hub/main/package_reference/environment_variables.md)
- [Mixins & serialization methods](https://huggingface.co/docs/huggingface_hub/main/package_reference/mixins.md)
- [Serialization](https://huggingface.co/docs/huggingface_hub/main/package_reference/serialization.md)
- [Create and manage a repository](https://huggingface.co/docs/huggingface_hub/main/guides/repository.md)
- [Collections](https://huggingface.co/docs/huggingface_hub/main/guides/collections.md)
- [Download files from the Hub](https://huggingface.co/docs/huggingface_hub/main/guides/download.md)
- [Run and manage Jobs](https://huggingface.co/docs/huggingface_hub/main/guides/jobs.md)
- [Webhooks](https://huggingface.co/docs/huggingface_hub/main/guides/webhooks.md)
- [Manage your Space](https://huggingface.co/docs/huggingface_hub/main/guides/manage-spaces.md)
- [Run Inference on servers](https://huggingface.co/docs/huggingface_hub/main/guides/inference.md)
- [How-to guides](https://huggingface.co/docs/huggingface_hub/main/guides/overview.md)
- [Interact with Discussions and Pull Requests](https://huggingface.co/docs/huggingface_hub/main/guides/community.md)
- [Interact with the Hub through the Filesystem API](https://huggingface.co/docs/huggingface_hub/main/guides/hf_file_system.md)
- [Inference Endpoints](https://huggingface.co/docs/huggingface_hub/main/guides/inference_endpoints.md)
- [Search the Hub](https://huggingface.co/docs/huggingface_hub/main/guides/search.md)
- [Understand caching](https://huggingface.co/docs/huggingface_hub/main/guides/manage-cache.md)
- [Command Line Interface (CLI)](https://huggingface.co/docs/huggingface_hub/main/guides/cli.md)
- [Integrate any ML framework with the Hub](https://huggingface.co/docs/huggingface_hub/main/guides/integrations.md)
- [Create and share Model Cards](https://huggingface.co/docs/huggingface_hub/main/guides/model-cards.md)
- [Upload files to the Hub](https://huggingface.co/docs/huggingface_hub/main/guides/upload.md)

### 🤗 Hub client library
https://huggingface.co/docs/huggingface_hub/main/index.md

# 🤗 Hub client library

The `huggingface_hub` library allows you to interact with the [Hugging Face
Hub](https://hf.co), a machine learning platform for creators and collaborators.
Discover pre-trained models and datasets for your projects or play with the hundreds of
machine learning apps hosted on the Hub. You can also create and share your own models
and datasets with the community. The `huggingface_hub` library provides a simple way to
do all these things with Python.

Read the [quick start guide](quick-start) to get up and running with the
`huggingface_hub` library. You will learn how to download files from the Hub, create a
repository, and upload files to the Hub. Keep reading to learn more about how to manage
your repositories on the 🤗 Hub, how to interact in discussions or even how to run inference.

<div class="mt-10">
  <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5">

    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./guides/overview">
      <div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">How-to guides</div>
      <p class="text-gray-700">Practical guides to help you achieve a specific goal. Take a look at these guides to learn how to use huggingface_hub to solve real-world problems.</p>
    </a>

    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./package_reference/overview">
      <div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Reference</div>
      <p class="text-gray-700">Exhaustive and technical description of huggingface_hub classes and methods.</p>
    </a>

    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./concepts/git_vs_http">
      <div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Conceptual guides</div>
      <p class="text-gray-700">High-level explanations for building a better understanding of huggingface_hub philosophy.</p>
    </a>

  </div>
</div>

<!--
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./tutorials/overview"
  ><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Tutorials</div>
  <p class="text-gray-700">Learn the basics and become familiar with using huggingface_hub to programmatically interact with the 🤗 Hub!</p>
</a> -->

## Contribute

All contributions to `huggingface_hub` are welcome and equally valued! 🤗 Besides
adding code or fixing existing issues, you can also help improve the
documentation by making sure it is accurate and up-to-date, help answer questions on
issues, and request new features you think will improve the library. Take a look at the
[contribution
guide](https://github.com/huggingface/huggingface_hub/blob/main/CONTRIBUTING.md) to
learn more about how to submit a new issue or feature request, how to submit a pull
request, and how to test your contributions to make sure everything works as expected.

Contributors should also be respectful of our [code of
conduct](https://github.com/huggingface/huggingface_hub/blob/main/CODE_OF_CONDUCT.md) to
create an inclusive and welcoming collaborative space for everyone.


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/index.md" />

### Installation
https://huggingface.co/docs/huggingface_hub/main/installation.md

# Installation

Before you start, you will need to set up your environment by installing the appropriate packages.

`huggingface_hub` is tested on **Python 3.9+**.

## Install with pip

It is highly recommended to install `huggingface_hub` in a [virtual environment](https://docs.python.org/3/library/venv.html).
If you are unfamiliar with Python virtual environments, take a look at this [guide](https://packaging.python.org/en/latest/guides/installing-using-pip-and-virtual-environments/).
A virtual environment makes it easier to manage different projects and avoids compatibility issues between dependencies.

Start by creating a virtual environment in your project directory:

```bash
python -m venv .env
```

Activate the virtual environment. On Linux and macOS:

```bash
source .env/bin/activate
```

Activate the virtual environment on Windows:

```bash
.env\Scripts\activate
```

Now you're ready to install `huggingface_hub` [from the PyPI registry](https://pypi.org/project/huggingface-hub/):

```bash
pip install --upgrade huggingface_hub
```

Once done, [check that your installation](#check-installation) is working correctly.

### Install optional dependencies

Some dependencies of `huggingface_hub` are [optional](https://setuptools.pypa.io/en/latest/userguide/dependency_management.html#optional-dependencies) because they are not required to run the core features of `huggingface_hub`. However, some features of `huggingface_hub` may not be available if the optional dependencies aren't installed.

You can install optional dependencies via `pip`:
```bash
# Install dependencies for both torch-specific and CLI-specific features.
pip install 'huggingface_hub[cli,torch]'
```

Here is the list of optional dependencies in `huggingface_hub`:
- `cli`: provides a more convenient CLI for `huggingface_hub`.
- `fastai`, `torch`: dependencies to run framework-specific features.
- `dev`: dependencies to contribute to the lib. Includes `testing` (to run tests), `typing` (to run type checker) and `quality` (to run linters).



### Install from source

In some cases, it is useful to install `huggingface_hub` directly from source.
This lets you use the bleeding-edge `main` version rather than the latest stable release.
The `main` version is useful for staying up to date with the latest developments, for instance
if a bug has been fixed since the last official release but a new release hasn't been rolled out yet.

However, this means the `main` version may not always be stable. We strive to keep the
`main` version operational, and most issues are usually resolved
within a few hours or a day. If you run into a problem, please open an Issue so we can
fix it even sooner!

```bash
pip install git+https://github.com/huggingface/huggingface_hub
```

When installing from source, you can also specify a specific branch. This is useful if you
want to test a new feature or a new bug-fix that has not been merged yet:

```bash
pip install git+https://github.com/huggingface/huggingface_hub@my-feature-branch
```

Once done, [check that your installation](#check-installation) is working correctly.

### Editable install

Installing from source allows you to set up an [editable install](https://pip.pypa.io/en/stable/topics/local-project-installs/#editable-installs).
This is a more advanced setup, useful if you plan to contribute to `huggingface_hub`
and need to test changes in the code. You will need to clone a local copy of `huggingface_hub`
onto your machine.

```bash
# First, clone repo locally
git clone https://github.com/huggingface/huggingface_hub.git

# Then, install with -e flag
cd huggingface_hub
pip install -e .
```

These commands link the folder you cloned the repository into with your Python library paths.
Python will now look inside that folder in addition to the normal library paths.
For example, if your Python packages are typically installed in `./.venv/lib/python3.13/site-packages/`,
Python will also search the `./huggingface_hub/` folder you cloned.
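To confirm that the editable install resolves to your clone rather than to a copy in `site-packages`, you can check where Python imports the package from (this assumes the commands above completed successfully):

```bash
# "Editable project location" should point into the folder you cloned
pip show huggingface_hub

# The printed path should be inside your clone, not site-packages
python -c "import huggingface_hub; print(huggingface_hub.__file__)"
```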

## Install the Hugging Face CLI 

Use our one-liner installers to set up the `hf` CLI without touching your Python environment:

On macOS and Linux:

```bash
curl -LsSf https://hf.co/cli/install.sh | bash
```

On Windows:

```powershell
powershell -ExecutionPolicy ByPass -c "irm https://hf.co/cli/install.ps1 | iex"
```

## Install with conda

If you are more familiar with it, you can install `huggingface_hub` using the [conda-forge channel](https://anaconda.org/conda-forge/huggingface_hub):


```bash
conda install -c conda-forge huggingface_hub
```

Once done, [check that your installation](#check-installation) is working correctly.

## Check installation

Once installed, check that `huggingface_hub` works properly by running the following command:

```bash
python -c "from huggingface_hub import model_info; print(model_info('gpt2'))"
```

This command will fetch information from the Hub about the [gpt2](https://huggingface.co/gpt2) model.
Output should look like this:

```text
Model Name: gpt2
Tags: ['pytorch', 'tf', 'jax', 'tflite', 'rust', 'safetensors', 'gpt2', 'text-generation', 'en', 'doi:10.57967/hf/0039', 'transformers', 'exbert', 'license:mit', 'has_space']
Task: text-generation
```

## Windows limitations

With our goal of democratizing good ML everywhere, we built `huggingface_hub` to be a
cross-platform library and in particular to work correctly on both Unix-based and Windows
systems. However, there are a few cases where `huggingface_hub` has some limitations when
run on Windows. Here is an exhaustive list of known issues. Please let us know if you
encounter any undocumented problem by opening [an issue on GitHub](https://github.com/huggingface/huggingface_hub/issues/new/choose).

- `huggingface_hub`'s cache system relies on symlinks to efficiently cache files downloaded
from the Hub. On Windows, you must activate developer mode or run your script as an admin to
enable symlinks. If they are not enabled, the cache system still works, but in a non-optimized
manner. Please read [the cache limitations](./guides/manage-cache#limitations) section for more details.
- Filepaths on the Hub can contain special characters (e.g. `"path/to?/my/file"`). Windows is
more restrictive about [special characters](https://learn.microsoft.com/en-us/windows/win32/intl/character-sets-used-in-file-names),
which makes it impossible to download those files on Windows. This should be a rare case.
Please reach out to the repo owner if you think this is a mistake, or to us so we can figure out
a solution.


## Next steps

Once `huggingface_hub` is properly installed on your machine, you might want to
[configure environment variables](package_reference/environment_variables) or [check one of our guides](guides/overview) to get started.


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/installation.md" />

### Quickstart
https://huggingface.co/docs/huggingface_hub/main/quick-start.md

# Quickstart

The [Hugging Face Hub](https://huggingface.co/) is the go-to place for sharing machine learning
models, demos, datasets, and metrics. The `huggingface_hub` library helps you interact with
the Hub without leaving your development environment. You can create and manage
repositories easily, download and upload files, and get useful model and dataset
metadata from the Hub.

## Installation

To get started, install the `huggingface_hub` library:

```bash
pip install --upgrade huggingface_hub
```

For more details, check out the [installation](installation) guide.

## Download files

Repositories on the Hub are git version controlled, and users can download a single file
or the whole repository. You can use the [hf_hub_download()](/docs/huggingface_hub/main/en/package_reference/file_download#huggingface_hub.hf_hub_download) function to download files.
This function will download and cache a file on your local disk. The next time you need
that file, it will load from your cache, so you don't need to re-download it.

You will need the repository id and the filename of the file you want to download. For
example, to download the [Pegasus](https://huggingface.co/google/pegasus-xsum) model
configuration file:

```py
>>> from huggingface_hub import hf_hub_download
>>> hf_hub_download(repo_id="google/pegasus-xsum", filename="config.json")
```

To download a specific version of the file, use the `revision` parameter to specify the
branch name, tag, or commit hash. If you choose to use the commit hash, it must be the
full-length hash instead of the shorter 7-character commit hash:

```py
>>> from huggingface_hub import hf_hub_download
>>> hf_hub_download(
...     repo_id="google/pegasus-xsum",
...     filename="config.json",
...     revision="4d33b01d79672f27f001f6abade33f22d993b151"
... )
```

For more details and options, see the API reference for [hf_hub_download()](/docs/huggingface_hub/main/en/package_reference/file_download#huggingface_hub.hf_hub_download).
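When you need more than one file, the whole repository can be fetched in a single call with [snapshot_download()](/docs/huggingface_hub/main/en/package_reference/file_download#huggingface_hub.snapshot_download), which uses the same cache. A minimal sketch (requires network access):

```py
from huggingface_hub import snapshot_download

# Download (or reuse from the cache) every file in the repository
# and return the path of the local snapshot folder
local_dir = snapshot_download(repo_id="google/pegasus-xsum")
print(local_dir)
```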

<a id="login"></a> <!-- backward compatible anchor -->

## Authentication

In many cases, you must be authenticated with a Hugging Face account to interact with
the Hub: downloading private repos, uploading files, creating PRs, and so on.
[Create an account](https://huggingface.co/join) if you don't already have one, and then sign in
to get your [User Access Token](https://huggingface.co/docs/hub/security-tokens) from
your [Settings page](https://huggingface.co/settings/tokens). The User Access Token is
used to authenticate your identity to the Hub.

> [!TIP]
> Tokens can have `read` or `write` permissions. Make sure to have a `write` access token if you want to create or edit a repository. Otherwise, it's best to generate a `read` token to reduce risk in case your token is inadvertently leaked.

### Login command

The easiest way to authenticate is to save the token on your machine. You can do that from the terminal using the `hf auth login` command:

```bash
hf auth login
```

The command will tell you if you are already logged in and prompt you for your token. The token is then validated and saved in your `HF_HOME` directory (defaults to `~/.cache/huggingface/token`). Any script or library interacting with the Hub will use this token when sending requests.

Alternatively, you can programmatically log in using [login()](/docs/huggingface_hub/main/en/package_reference/authentication#huggingface_hub.login) in a notebook or a script:

```py
>>> from huggingface_hub import login
>>> login()
```

You can only be logged in to one account at a time. Logging in to a new account will automatically log you out of the previous one. To determine your currently active account, simply run the `hf auth whoami` command.

> [!WARNING]
> Once logged in, all requests to the Hub - even methods that don't necessarily require authentication - will use your access token by default. If you want to disable the implicit use of your token, you should set `HF_HUB_DISABLE_IMPLICIT_TOKEN=1` as an environment variable (see [reference](../package_reference/environment_variables#hfhubdisableimplicittoken)).

### Manage multiple tokens locally

You can save multiple tokens on your machine by logging in with each of them using the [login()](/docs/huggingface_hub/main/en/package_reference/authentication#huggingface_hub.login) command. If you need to switch between these tokens locally, you can use the `hf auth switch` command:

```bash
hf auth switch
```

This command will prompt you to select a token by its name from a list of saved tokens. Once selected, the chosen token becomes the _active_ token, and it will be used for all interactions with the Hub.


You can list all available access tokens on your machine with `hf auth list`.

### Environment variable

The environment variable `HF_TOKEN` can also be used to authenticate yourself. This is especially useful in a Space where you can set `HF_TOKEN` as a [Space secret](https://huggingface.co/docs/hub/spaces-overview#managing-secrets).

> [!TIP]
> **NEW:** Google Colaboratory lets you define [private keys](https://twitter.com/GoogleColab/status/1719798406195867814) for your notebooks. Define a `HF_TOKEN` secret to be automatically authenticated!

Authentication via an environment variable or a secret has priority over the token stored on your machine.
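For example, in a shell session (the token value below is a placeholder):

```bash
# Placeholder value; use one of your own User Access Tokens
export HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxxxxxx

# Any huggingface_hub call made from this shell now authenticates with $HF_TOKEN,
# taking priority over a token saved with `hf auth login`
```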

### Method parameters

Finally, it is also possible to authenticate by passing your token to any method that accepts `token` as a parameter.

```py
from huggingface_hub import whoami

user = whoami(token=...)
```

This is usually discouraged, except in environments where you don't want to store your token permanently or where you need to handle several tokens at once.

> [!WARNING]
> Please be careful when passing tokens as a parameter. It is always best practice to load the token from a secure vault instead of hardcoding it in your codebase or notebook. Hardcoded tokens present a major leak risk if you share your code inadvertently.

## Create a repository

Once you've registered and logged in, create a repository with the [create_repo()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_repo)
function:

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.create_repo(repo_id="super-cool-model")
```

If you want your repository to be private, then:

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.create_repo(repo_id="super-cool-model", private=True)
```

Private repositories will not be visible to anyone except yourself.

> [!TIP]
> To create a repository or to push content to the Hub, you must provide a User Access
> Token that has the `write` permission. You can choose the permission when creating the
> token in your [Settings page](https://huggingface.co/settings/tokens).

## Upload files

Use the [upload_file()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_file) function to add a file to your newly created repository. You
need to specify:

1. The path of the file to upload.
2. The path of the file in the repository.
3. The repository id of where you want to add the file.

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.upload_file(
...     path_or_fileobj="/home/lysandre/dummy-test/README.md",
...     path_in_repo="README.md",
...     repo_id="lysandre/test-model",
... )
```

To upload more than one file at a time, take a look at the [Upload](./guides/upload) guide
which will introduce you to several methods for uploading files (with or without git).
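As a preview of that guide, a whole local folder can be pushed in a single call with [upload_folder()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_folder). A minimal sketch, where the local path and repo id are placeholders to adapt:

```py
from huggingface_hub import HfApi

api = HfApi()
# Upload every file from the local folder to the root of the repo
api.upload_folder(
    folder_path="/path/to/local/folder",  # placeholder: your local folder
    repo_id="username/test-model",        # placeholder: your repo id
)
```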

## Next steps

The `huggingface_hub` library provides an easy way for users to interact with the Hub
with Python. To learn more about how you can manage your files and repositories on the
Hub, we recommend reading our [how-to guides](./guides/overview) to:

- [Manage your repository](./guides/repository).
- [Download](./guides/download) files from the Hub.
- [Upload](./guides/upload) files to the Hub.
- [Search the Hub](./guides/search) for your desired model or dataset.
- [Run Inference](./guides/inference) across multiple services for models hosted on the Hugging Face Hub.


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/quick-start.md" />

### Git vs HTTP paradigm
https://huggingface.co/docs/huggingface_hub/main/concepts/git_vs_http.md

# Git vs HTTP paradigm

The `huggingface_hub` library is a library for interacting with the Hugging Face Hub, which is a collection of git-based repositories (models, datasets or Spaces). There are two main ways to access the Hub using `huggingface_hub`.

The first approach, the so-called "git-based" approach, relies on using standard `git` commands directly in a terminal. This method allows you to clone repositories, create commits, and push changes manually. The second option, called the "HTTP-based" approach, involves making HTTP requests using the [HfApi](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi) client. Let's examine the pros and cons of each approach.

## Git: the historical CLI-based approach

At first, most users interacted with the Hugging Face Hub using plain `git` commands such as `git clone`, `git add`, `git commit`, `git push`, `git tag`, or `git checkout`.

This approach lets you work with a full local copy of the repository on your machine, just like in traditional software development. This can be an advantage when you need offline access or want to work with the full history of a repository. However, it also comes with downsides: you are responsible for keeping the repository up-to-date locally, handling credentials, and managing large files (via `git-lfs`), which can become cumbersome when working with large machine learning models or datasets.

In many machine learning workflows, you may only need to download a few files for inference or convert weights without needing to clone the entire repository. In such cases, using `git` can be overkill and introduce unnecessary complexity.

## HfApi: a flexible and convenient HTTP client

The [HfApi](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi) class was developed to provide an alternative to using local git repositories, which can be cumbersome to maintain, especially when dealing with large models or datasets. The [HfApi](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi) class offers the same functionality as git-based workflows, such as downloading and pushing files or creating branches and tags, but without the need for a local folder that must be kept in sync.

In addition to the functionalities already provided by `git`, the [HfApi](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi) class offers additional features, such as the ability to manage repos, download files using caching for efficient reuse, search the Hub for repos and metadata, access community features such as discussions, PRs, and comments, and configure Spaces hardware and secrets.
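A minimal sketch of a few of these HTTP-only features (results depend on the Hub's current state, and the commented call assumes write access to a repo you own):

```py
from huggingface_hub import HfApi

api = HfApi()

# Search the Hub: the five most downloaded text-classification models
for model in api.list_models(filter="text-classification", sort="downloads", limit=5):
    print(model.id)

# Manage a repo without any local clone (placeholder repo id):
# api.create_branch(repo_id="username/my-model", branch="experiment")
```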

## What should I use? And when?

Overall, the **HTTP-based approach is the recommended way to use** `huggingface_hub` in all cases. [HfApi](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi) allows you to pull and push changes, work with PRs, tags and branches, interact with discussions and much more.

However, not all git commands are available through [HfApi](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi). Some may never be implemented, but we are always trying to improve and close the gap. If you don't see your use case covered, please open [an issue on GitHub](https://github.com/huggingface/huggingface_hub)! We welcome feedback to help build the HF ecosystem with and for our users.

This preference for the HTTP-based [HfApi](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi) over direct `git` commands does not mean that git versioning will disappear from the Hugging Face Hub anytime soon. It will always be possible to use `git` locally in workflows where it makes sense.

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/concepts/git_vs_http.md" />

### Migrating to huggingface_hub v1.0
https://huggingface.co/docs/huggingface_hub/main/concepts/migration.md

# Migrating to huggingface_hub v1.0

The v1.0 release is a major milestone for the `huggingface_hub` library. It marks our commitment to API stability and the maturity of the library. We have made several improvements and breaking changes to make the library more robust and easier to use.

This guide is intended to help you migrate your existing code to the new version. If you have any questions or feedback, please let us know by [opening an issue on GitHub](https://github.com/huggingface/huggingface_hub/issues).

## Python 3.9+

`huggingface_hub` now requires Python 3.9 or higher. Python 3.8 is no longer supported.

## HTTPX migration

The `huggingface_hub` library now uses [`httpx`](https://www.python-httpx.org/) instead of `requests` for HTTP requests. This change was made to improve performance and to support both synchronous and asynchronous requests the same way. We therefore dropped both `requests` and `aiohttp` dependencies.

### Breaking changes

This is a major change that affects the entire library. While we have tried to make this change as transparent as possible, you may need to update your code in some cases. Here is a list of breaking changes introduced in the process:

- **Proxy configuration**: "per method" proxies are no longer supported. Proxies must be configured globally using the `HTTP_PROXY` and `HTTPS_PROXY` environment variables.
- **Custom HTTP backend**: The `configure_http_backend` function has been removed. You should now use [set_client_factory()](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.set_client_factory) and [set_async_client_factory()](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.set_async_client_factory) to configure the HTTP clients.
- **Error handling**: HTTP errors no longer inherit from `requests.HTTPError` but from `httpx.HTTPError`. We recommend catching `huggingface_hub.HfHubHTTPError`, which is a subclass of `requests.HTTPError` in v0.x and of `httpx.HTTPError` in v1.x. Catching the `huggingface_hub` error ensures your code is compatible with both the old and new versions of the library.
- **SSLError**: `httpx` does not have the concept of `SSLError`. It is now a generic `httpx.ConnectError`.
- **`LocalEntryNotFoundError`**: This error no longer inherits from `HTTPError`. We now define an `EntryNotFoundError` (new) that is inherited by both `LocalEntryNotFoundError` (if a file is not found in the local cache) and `RemoteEntryNotFoundError` (if a file is not found in the repo on the Hub). Only the remote error inherits from `HTTPError`.
- **`InferenceClient`**: The `InferenceClient` can now be used as a context manager. This is especially useful when streaming tokens from a language model to ensure that the connection is closed properly.
- **`AsyncInferenceClient`**: The `trust_env` parameter has been removed from the `AsyncInferenceClient`'s constructor. Environment variables are trusted by default by `httpx`. If you explicitly don't want to trust the environment, you must configure it with [set_client_factory()](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.set_client_factory).
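For instance, a minimal sketch of version-agnostic error handling, assuming the requested file is missing so the call raises an HTTP error (the filename is a deliberately non-existent placeholder):

```py
from huggingface_hub import hf_hub_download
from huggingface_hub.errors import HfHubHTTPError

try:
    hf_hub_download(repo_id="google/pegasus-xsum", filename="does-not-exist.json")
except HfHubHTTPError as err:
    # Subclass of requests.HTTPError on v0.x and of httpx.HTTPError on v1.x
    print(f"Hub request failed: {err}")
```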

For more details, you can check [PR #3328](https://github.com/huggingface/huggingface_hub/pull/3328) that introduced `httpx`.

### Why `httpx`?


The migration from `requests` to `httpx` brings several key improvements that enhance the library's performance, reliability, and maintainability:

**Thread Safety and Connection Reuse**: `httpx` is thread-safe by design, allowing us to safely reuse the same client across multiple threads. This connection reuse reduces the overhead of establishing new connections for each HTTP request, improving performance especially when making frequent requests to the Hub.

**HTTP/2 Support**: `httpx` provides native HTTP/2 support, which offers better efficiency when making multiple requests to the same server (exactly our use case). This translates to lower latency and reduced resource consumption compared to HTTP/1.1.

**Unified Sync/Async API**: Unlike our previous setup with separate `requests` (sync) and `aiohttp` (async) dependencies, `httpx` provides both synchronous and asynchronous clients with identical behavior. This ensures that `InferenceClient` and `AsyncInferenceClient` have consistent functionality and eliminates subtle behavioral differences that previously existed between the two implementations.

**Improved SSL Error Handling**: `httpx` handles SSL errors more gracefully, making debugging connection issues easier and more reliable.

**Future-Proof Architecture**: `httpx` is actively maintained and designed for modern Python applications. In contrast, `requests` is in maintenance mode and won't receive major updates like thread-safety improvements or HTTP/2 support.

**Better Environment Variable Handling**: `httpx` provides more consistent handling of environment variables across both sync and async contexts, eliminating previous inconsistencies where `requests` would read local environment variables by default while `aiohttp` would not.

The transition to `httpx` positions `huggingface_hub` with a modern, efficient, and maintainable HTTP backend. While most users should experience seamless operation, the underlying improvements provide better performance and reliability for all Hub interactions.

## `hf_transfer`

Now that all repositories on the Hub are Xet-enabled and that `hf_xet` is the default way to download/upload files, we've removed support for the `hf_transfer` optional package. The `HF_HUB_ENABLE_HF_TRANSFER` environment variable is therefore ignored. Use [`HF_XET_HIGH_PERFORMANCE`](../package_reference/environment_variables.md) instead.

## `Repository` class

The `Repository` class has been removed in v1.0. It was a thin wrapper around the `git` CLI for managing repositories. You can still use `git` directly in the terminal, but the recommended approach is to use the HTTP-based API in the `huggingface_hub` library for a smoother experience, especially when dealing with large files.

Here is a mapping from the legacy `Repository` class to the new `HfApi` one:

| `Repository` method                        | `HfApi` method                                        |
| ------------------------------------------ | ----------------------------------------------------- |
| `repo.clone_from`                          | `snapshot_download`                                   |
| `repo.git_add` + `git_commit` + `git_push` | [upload_file()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_file), [upload_folder()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_folder), [create_commit()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_commit) |
| `repo.git_tag`                             | `create_tag`                                          |
| `repo.git_branch`                          | `create_branch`                                       |
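As an illustration, the legacy clone/add/commit/push loop collapses into a single HTTP call. The helper below is a sketch (the function name, repo id, and paths are hypothetical):

```python
from huggingface_hub import HfApi

def publish(folder: str, repo_id: str, message: str) -> None:
    """HTTP-based replacement for the legacy Repository workflow.

    Legacy equivalent:
        repo = Repository(local_dir=folder, clone_from=repo_id)
        repo.git_add(".")
        repo.git_commit(message)
        repo.git_push()
    """
    api = HfApi()
    # No local clone needed; large files are handled transparently.
    api.upload_folder(repo_id=repo_id, folder_path=folder, commit_message=message)

# Usage (hypothetical repo):
# publish("my-model", "user/my-model", "Add model weights")
```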

## `HfFolder` class

`HfFolder` was used to manage the user access token. Use [login()](/docs/huggingface_hub/main/en/package_reference/authentication#huggingface_hub.login) to save a new token, [logout()](/docs/huggingface_hub/main/en/package_reference/authentication#huggingface_hub.logout) to delete it, and [whoami()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.whoami) to check the user associated with the current token. Finally, use `get_token()` to retrieve the user's token in a script.
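Reading the stored token in a script is a purely local operation; a minimal sketch:

```python
from huggingface_hub import get_token

# `get_token()` reads the HF_TOKEN environment variable or the token
# saved by `login()` / `hf auth login`; it makes no network call.
token = get_token()
print("token found" if token else "not logged in")
```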


## `InferenceApi` class

`InferenceApi` was a class to interact with the Inference API. It is now recommended to use the [InferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient) class instead.

## Other deprecated features

Some methods and parameters have been removed in v1.0. The ones listed below have already been deprecated with a warning message in v0.x.

- `constants.hf_cache_home` has been removed. Please use `HF_HOME` instead.
- `use_auth_token` parameters have been removed from all methods. Please use `token` instead.
- `get_token_permission` method has been removed.
- `update_repo_visibility` method has been removed. Please use `update_repo_settings` instead.
- `is_write_action` parameter has been removed from `build_hf_headers` as well as `write_permission` from `login`. The concept of "write permission" has been removed and is no longer relevant now that fine-grained tokens are the recommended approach.
- `new_session` parameter in `login` has been renamed to `skip_if_logged_in` for better clarity.
- `resume_download`, `force_filename`, and `local_dir_use_symlinks` parameters have been removed from `hf_hub_download` and `snapshot_download`.
- `library`, `language`, `tags`, and `task` parameters have been removed from `list_models`.

## CLI cache commands

Cache management from the CLI has been redesigned to follow a Docker-inspired workflow. The deprecated `huggingface-cli` has been removed; `hf` (introduced in v0.34) replaces it with a clearer resource-action CLI.
The legacy `hf cache scan` and `hf cache delete` commands have also been removed in v1.0 and are replaced with the new trio below:

- `hf cache ls` lists cache entries as a concise table, or as JSON or CSV output. Use `--revisions` to inspect individual revisions, add `--filter` expressions such as `size>1GB` or `accessed>30d`, and combine them with `--quiet` when you only need the identifiers.
- `hf cache rm` deletes selected cache entries. Pass one or more repo IDs (for example `model/bert-base-uncased`) or revision hashes, and optionally add `--dry-run` to preview or `--yes` to skip the confirmation prompt. This replaces both the interactive TUI and `--disable-tui` workflows from the previous command.
- `hf cache prune` performs the common cleanup task of deleting unreferenced revisions in one shot. Add `--dry-run` or `--yes` in the same way as with `hf cache rm`.
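A typical cleanup session using only the flags described above might look like this (the repo id is illustrative):

```shell
# List big, stale entries; --quiet prints only the identifiers
hf cache ls --filter "size>1GB" --quiet

# Preview, then delete, a specific cached repo
hf cache rm model/bert-base-uncased --dry-run
hf cache rm model/bert-base-uncased --yes

# Remove every revision that is no longer referenced
hf cache prune --dry-run
hf cache prune --yes
```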

Finally, the `[cli]` extra has been removed: the CLI now ships with the core `huggingface_hub` package.

## TensorFlow and Keras 2.x support

All TensorFlow-related code and dependencies have been removed in v1.0. This includes the following breaking changes:

- `huggingface_hub[tensorflow]` is no longer a supported extra dependency.
- The `split_tf_state_dict_into_shards` and `get_tf_storage_size` utility functions have been removed.
- The `tensorflow`, `fastai`, and `fastcore` versions are no longer included in the built-in headers.

The Keras 2.x integration has also been removed. This includes the `KerasModelHubMixin` class and the `save_pretrained_keras`, `from_pretrained_keras`, and `push_to_hub_keras` utilities. Keras 2.x is a legacy, unmaintained library. The recommended approach is to use Keras 3.x, which is tightly integrated with the Hub (i.e. it ships with built-in methods to load from and push to the Hub). If you still want to work with Keras 2.x, you should downgrade `huggingface_hub` to a v0.x version.

## `upload_file` and `upload_folder` return values

The [upload_file()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_file) and [upload_folder()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_folder) functions now return the URL of the commit created on the Hub. Previously, they returned the URL of the file or folder. This is to align with the return value of [create_commit()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_commit), [delete_file()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.delete_file) and [delete_folder()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.delete_folder).
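If your code parsed the old return value, update it to expect a commit URL. A sketch (the helper name and repo id are hypothetical):

```python
from huggingface_hub import upload_file

def push_readme(repo_id: str) -> str:
    # v0.x returned the file URL, e.g. .../blob/main/README.md
    # v1.0 returns the commit URL,  e.g. .../commit/<sha>
    return upload_file(
        path_or_fileobj=b"# My model",
        path_in_repo="README.md",
        repo_id=repo_id,
    )

# Usage (hypothetical repo): push_readme("user/my-model")
```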


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/concepts/migration.md" />

### TensorBoard logger
https://huggingface.co/docs/huggingface_hub/main/package_reference/tensorboard.md

# TensorBoard logger

TensorBoard is a visualization toolkit for machine learning experimentation. TensorBoard allows tracking and visualizing
metrics such as loss and accuracy, visualizing the model graph, viewing histograms, displaying images and much more.
TensorBoard is well integrated with the Hugging Face Hub. The Hub automatically detects TensorBoard traces (such as
`tfevents` files) when they are pushed to the Hub and starts an instance to visualize them. To get more information about the
TensorBoard integration on the Hub, check out [this guide](https://huggingface.co/docs/hub/tensorboard).

To benefit from this integration, `huggingface_hub` provides a custom logger to push logs to the Hub. It works as a
drop-in replacement for [SummaryWriter](https://tensorboardx.readthedocs.io/en/latest/tensorboard.html) with no extra
code needed. Traces are still saved locally, and a background job pushes them to the Hub at regular intervals.

## HFSummaryWriter[[huggingface_hub.HFSummaryWriter]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.HFSummaryWriter</name><anchor>huggingface_hub.HFSummaryWriter</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_tensorboard_logger.py#L46</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The id of the repo to which the logs will be pushed.
- **logdir** (`str`, *optional*) --
  The directory where the logs will be written. If not specified, a local directory will be created by the
  underlying `SummaryWriter` object.
- **commit_every** (`int` or `float`, *optional*) --
  The frequency (in minutes) at which the logs will be pushed to the Hub. Defaults to 5 minutes.
- **squash_history** (`bool`, *optional*) --
  Whether to squash the history of the repo after each commit. Defaults to `False`. Squashing commits is
  useful to avoid degraded performance on the repo when it grows too large.
- **repo_type** (`str`, *optional*) --
  The type of the repo to which the logs will be pushed. Defaults to "model".
- **repo_revision** (`str`, *optional*) --
  The revision of the repo to which the logs will be pushed. Defaults to "main".
- **repo_private** (`bool`, *optional*) --
  Whether to make the repo private. If `None` (default), the repo will be public unless the organization's default is private. This value is ignored if the repo already exists.
- **path_in_repo** (`str`, *optional*) --
  The path to the folder in the repo where the logs will be pushed. Defaults to "tensorboard/".
- **repo_allow_patterns** (`list[str]` or `str`, *optional*) --
  A list of patterns to include in the upload. Defaults to `"*.tfevents.*"`. Check out the
  [upload guide](https://huggingface.co/docs/huggingface_hub/guides/upload#upload-a-folder) for more details.
- **repo_ignore_patterns** (`list[str]` or `str`, *optional*) --
  A list of patterns to exclude in the upload. Check out the
  [upload guide](https://huggingface.co/docs/huggingface_hub/guides/upload#upload-a-folder) for more details.
- **token** (`str`, *optional*) --
  Authentication token. Will default to the stored token. See https://huggingface.co/settings/token for more
  details.
- **kwargs** --
  Additional keyword arguments passed to `SummaryWriter`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Wrapper around tensorboard's `SummaryWriter` to push training logs to the Hub.

Data is logged locally and then pushed to the Hub asynchronously. Pushing data to the Hub is done in a separate
thread to avoid blocking the training script. In particular, if the upload fails for any reason (e.g. a connection
issue), the main script will not be interrupted. Data is automatically pushed to the Hub every `commit_every`
minutes (every 5 minutes by default).

> [!WARNING]
> `HFSummaryWriter` is experimental. Its API is subject to change in the future without prior notice.



<ExampleCodeBlock anchor="huggingface_hub.HFSummaryWriter.example">

Examples:
```diff
# Taken from https://pytorch.org/docs/stable/tensorboard.html
- from torch.utils.tensorboard import SummaryWriter
+ from huggingface_hub import HFSummaryWriter

import numpy as np

- writer = SummaryWriter()
+ writer = HFSummaryWriter(repo_id="username/my-trained-model")

for n_iter in range(100):
    writer.add_scalar('Loss/train', np.random.random(), n_iter)
    writer.add_scalar('Loss/test', np.random.random(), n_iter)
    writer.add_scalar('Accuracy/train', np.random.random(), n_iter)
    writer.add_scalar('Accuracy/test', np.random.random(), n_iter)
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.HFSummaryWriter.example-2">

```py
>>> from huggingface_hub import HFSummaryWriter

# Logs are automatically pushed every 15 minutes (5 by default) + when exiting the context manager
>>> with HFSummaryWriter(repo_id="test_hf_logger", commit_every=15) as logger:
...     logger.add_scalar("a", 1)
...     logger.add_scalar("b", 2)
```

</ExampleCodeBlock>


</div>

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/package_reference/tensorboard.md" />

### Webhooks Server
https://huggingface.co/docs/huggingface_hub/main/package_reference/webhooks_server.md

# Webhooks Server

Webhooks are a foundation for MLOps-related features. They allow you to listen for new changes on specific repos or to
all repos belonging to particular users/organizations you're interested in following. To learn
more about webhooks on the Hugging Face Hub, you can read the Webhooks [guide](https://huggingface.co/docs/hub/webhooks).

> [!TIP]
> Check out this [guide](../guides/webhooks_server) for a step-by-step tutorial on how to set up your webhooks server and
> deploy it as a Space.

> [!WARNING]
> This is an experimental feature. This means that we are still working on improving the API. Breaking changes might be
> introduced in the future without prior notice. Make sure to pin the version of `huggingface_hub` in your requirements.
> A warning is triggered when you use an experimental feature. You can disable it by setting `HF_HUB_DISABLE_EXPERIMENTAL_WARNING=1` as an environment variable.

## Server

The server is a [Gradio](https://gradio.app/) app. It has a UI to display instructions for you or your users and an API
to listen to webhooks. Implementing a webhook endpoint is as simple as decorating a function. You can then debug it
by redirecting the Webhooks to your machine (using a Gradio tunnel) before deploying it to a Space.

### WebhooksServer[[huggingface_hub.WebhooksServer]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.WebhooksServer</name><anchor>huggingface_hub.WebhooksServer</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_webhooks_server.py#L43</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **ui** (`gradio.Blocks`, optional) --
  A Gradio UI instance to be used as the Space landing page. If `None`, a UI displaying instructions
  about the configured webhooks is created.
- **webhook_secret** (`str`, optional) --
  A secret key to verify incoming webhook requests. You can set this value to any secret you want as long as
  you also configure it in your [webhooks settings panel](https://huggingface.co/settings/webhooks). You
  can also set this value as the `WEBHOOK_SECRET` environment variable. If no secret is provided, the
  webhook endpoints are opened without any security.</paramsdesc><paramgroups>0</paramgroups></docstring>

The [WebhooksServer()](/docs/huggingface_hub/main/en/package_reference/webhooks_server#huggingface_hub.WebhooksServer) class lets you create an instance of a Gradio app that can receive Hugging Face webhooks.
These webhooks can be registered using the `add_webhook()` decorator. Webhook endpoints are added to
the app as POST endpoints on the FastAPI router. Once all the webhooks are registered, the `launch` method has to be
called to start the app.

It is recommended to accept [WebhookPayload](/docs/huggingface_hub/main/en/package_reference/webhooks_server#huggingface_hub.WebhookPayload) as the first argument of the webhook function. It is a Pydantic
model that contains all the information about the webhook event. The data will be parsed automatically for you.

Check out the [webhooks guide](../guides/webhooks_server) for a step-by-step tutorial on how to set up your
WebhooksServer and deploy it on a Space.

> [!WARNING]
> `WebhooksServer` is experimental. Its API is subject to change in the future.

> [!WARNING]
> You must have `gradio` installed to use `WebhooksServer` (`pip install --upgrade gradio`).



<ExampleCodeBlock anchor="huggingface_hub.WebhooksServer.example">

Example:

```python
import gradio as gr
from huggingface_hub import WebhooksServer, WebhookPayload

with gr.Blocks() as ui:
    ...

app = WebhooksServer(ui=ui, webhook_secret="my_secret_key")

@app.add_webhook("/say_hello")
async def hello(payload: WebhookPayload):
    return {"message": "hello"}

app.launch()
```

</ExampleCodeBlock>


</div>

### @webhook_endpoint[[huggingface_hub.webhook_endpoint]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.webhook_endpoint</name><anchor>huggingface_hub.webhook_endpoint</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_webhooks_server.py#L226</source><parameters>[{"name": "path", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **path** (`str`, optional) --
  The URL path to register the webhook function. If not provided, the function name will be used as the path.
  In any case, all webhooks are registered under `/webhooks`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Decorator to start a [WebhooksServer()](/docs/huggingface_hub/main/en/package_reference/webhooks_server#huggingface_hub.WebhooksServer) and register the decorated function as a webhook endpoint.

This is a helper to get started quickly. If you need more flexibility (custom landing page or webhook secret),
you can use [WebhooksServer()](/docs/huggingface_hub/main/en/package_reference/webhooks_server#huggingface_hub.WebhooksServer) directly. You can register multiple webhook endpoints (to the same server) by using
this decorator multiple times.

Check out the [webhooks guide](../guides/webhooks_server) for a step-by-step tutorial on how to set up your
server and deploy it on a Space.

> [!WARNING]
> `webhook_endpoint` is experimental. Its API is subject to change in the future.

> [!WARNING]
> You must have `gradio` installed to use `webhook_endpoint` (`pip install --upgrade gradio`).



Examples:
The default usage is to register a function as a webhook endpoint. The function name will be used as the path.
The server will be started automatically at exit (i.e. at the end of the script).

<ExampleCodeBlock anchor="huggingface_hub.webhook_endpoint.example">

```python
from huggingface_hub import webhook_endpoint, WebhookPayload

@webhook_endpoint
async def trigger_training(payload: WebhookPayload):
    if payload.repo.type == "dataset" and payload.event.action == "update":
        # Trigger a training job if a dataset is updated
        ...

# Server is automatically started at the end of the script.
```

</ExampleCodeBlock>

Advanced usage: register a function as a webhook endpoint and start the server manually. This is useful if you
are running it in a notebook.

<ExampleCodeBlock anchor="huggingface_hub.webhook_endpoint.example-2">

```python
from huggingface_hub import webhook_endpoint, WebhookPayload

@webhook_endpoint
async def trigger_training(payload: WebhookPayload):
    if payload.repo.type == "dataset" and payload.event.action == "update":
        # Trigger a training job if a dataset is updated
        ...

# Start the server manually
trigger_training.launch()
```

</ExampleCodeBlock>


</div>

## Payload[[huggingface_hub.WebhookPayload]]

[WebhookPayload](/docs/huggingface_hub/main/en/package_reference/webhooks_server#huggingface_hub.WebhookPayload) is the main data structure that contains the payload from Webhooks. This is
a `pydantic` class which makes it very easy to use with FastAPI. If you pass it as a parameter to a webhook endpoint, it
will be automatically validated and parsed as a Python object.

For more information about webhooks payload, you can refer to the Webhooks Payload [guide](https://huggingface.co/docs/hub/webhooks#webhook-payloads).

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.WebhookPayload</name><anchor>huggingface_hub.WebhookPayload</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_webhooks_payload.py#L130</source><parameters>[{"name": "event", "val": ": WebhookPayloadEvent"}, {"name": "repo", "val": ": WebhookPayloadRepo"}, {"name": "discussion", "val": ": typing.Optional[huggingface_hub._webhooks_payload.WebhookPayloadDiscussion] = None"}, {"name": "comment", "val": ": typing.Optional[huggingface_hub._webhooks_payload.WebhookPayloadComment] = None"}, {"name": "webhook", "val": ": WebhookPayloadWebhook"}, {"name": "movedTo", "val": ": typing.Optional[huggingface_hub._webhooks_payload.WebhookPayloadMovedTo] = None"}, {"name": "updatedRefs", "val": ": typing.Optional[list[huggingface_hub._webhooks_payload.WebhookPayloadUpdatedRef]] = None"}]</parameters></docstring>


</div>

### WebhookPayload[[huggingface_hub.WebhookPayload]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.WebhookPayload</name><anchor>huggingface_hub.WebhookPayload</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_webhooks_payload.py#L130</source><parameters>[{"name": "event", "val": ": WebhookPayloadEvent"}, {"name": "repo", "val": ": WebhookPayloadRepo"}, {"name": "discussion", "val": ": typing.Optional[huggingface_hub._webhooks_payload.WebhookPayloadDiscussion] = None"}, {"name": "comment", "val": ": typing.Optional[huggingface_hub._webhooks_payload.WebhookPayloadComment] = None"}, {"name": "webhook", "val": ": WebhookPayloadWebhook"}, {"name": "movedTo", "val": ": typing.Optional[huggingface_hub._webhooks_payload.WebhookPayloadMovedTo] = None"}, {"name": "updatedRefs", "val": ": typing.Optional[list[huggingface_hub._webhooks_payload.WebhookPayloadUpdatedRef]] = None"}]</parameters></docstring>


</div>

### WebhookPayloadComment[[huggingface_hub.WebhookPayloadComment]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.WebhookPayloadComment</name><anchor>huggingface_hub.WebhookPayloadComment</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_webhooks_payload.py#L95</source><parameters>[{"name": "id", "val": ": str"}, {"name": "author", "val": ": ObjectId"}, {"name": "hidden", "val": ": bool"}, {"name": "content", "val": ": typing.Optional[str] = None"}, {"name": "url", "val": ": WebhookPayloadUrl"}]</parameters></docstring>


</div>

### WebhookPayloadDiscussion[[huggingface_hub.WebhookPayloadDiscussion]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.WebhookPayloadDiscussion</name><anchor>huggingface_hub.WebhookPayloadDiscussion</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_webhooks_payload.py#L102</source><parameters>[{"name": "id", "val": ": str"}, {"name": "num", "val": ": int"}, {"name": "author", "val": ": ObjectId"}, {"name": "url", "val": ": WebhookPayloadUrl"}, {"name": "title", "val": ": str"}, {"name": "isPullRequest", "val": ": bool"}, {"name": "status", "val": ": typing.Literal['closed', 'draft', 'open', 'merged']"}, {"name": "changes", "val": ": typing.Optional[huggingface_hub._webhooks_payload.WebhookPayloadDiscussionChanges] = None"}, {"name": "pinned", "val": ": typing.Optional[bool] = None"}]</parameters></docstring>


</div>

### WebhookPayloadDiscussionChanges[[huggingface_hub.WebhookPayloadDiscussionChanges]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.WebhookPayloadDiscussionChanges</name><anchor>huggingface_hub.WebhookPayloadDiscussionChanges</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_webhooks_payload.py#L90</source><parameters>[{"name": "base", "val": ": str"}, {"name": "mergeCommitId", "val": ": typing.Optional[str] = None"}]</parameters></docstring>


</div>

### WebhookPayloadEvent[[huggingface_hub.WebhookPayloadEvent]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.WebhookPayloadEvent</name><anchor>huggingface_hub.WebhookPayloadEvent</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_webhooks_payload.py#L85</source><parameters>[{"name": "action", "val": ": typing.Literal['create', 'delete', 'move', 'update']"}, {"name": "scope", "val": ": str"}]</parameters></docstring>


</div>

### WebhookPayloadMovedTo[[huggingface_hub.WebhookPayloadMovedTo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.WebhookPayloadMovedTo</name><anchor>huggingface_hub.WebhookPayloadMovedTo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_webhooks_payload.py#L76</source><parameters>[{"name": "name", "val": ": str"}, {"name": "owner", "val": ": ObjectId"}]</parameters></docstring>


</div>

### WebhookPayloadRepo[[huggingface_hub.WebhookPayloadRepo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.WebhookPayloadRepo</name><anchor>huggingface_hub.WebhookPayloadRepo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_webhooks_payload.py#L113</source><parameters>[{"name": "id", "val": ": str"}, {"name": "owner", "val": ": ObjectId"}, {"name": "head_sha", "val": ": typing.Optional[str] = None"}, {"name": "name", "val": ": str"}, {"name": "private", "val": ": bool"}, {"name": "subdomain", "val": ": typing.Optional[str] = None"}, {"name": "tags", "val": ": typing.Optional[list[str]] = None"}, {"name": "type", "val": ": typing.Literal['dataset', 'model', 'space']"}, {"name": "url", "val": ": WebhookPayloadUrl"}]</parameters></docstring>


</div>

### WebhookPayloadUrl[[huggingface_hub.WebhookPayloadUrl]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.WebhookPayloadUrl</name><anchor>huggingface_hub.WebhookPayloadUrl</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_webhooks_payload.py#L71</source><parameters>[{"name": "web", "val": ": str"}, {"name": "api", "val": ": typing.Optional[str] = None"}]</parameters></docstring>


</div>

### WebhookPayloadWebhook[[huggingface_hub.WebhookPayloadWebhook]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.WebhookPayloadWebhook</name><anchor>huggingface_hub.WebhookPayloadWebhook</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_webhooks_payload.py#L81</source><parameters>[{"name": "id", "val": ": str"}, {"name": "version", "val": ": typing.Literal[3]"}]</parameters></docstring>


</div>

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/package_reference/webhooks_server.md" />

### HfApi Client
https://huggingface.co/docs/huggingface_hub/main/package_reference/hf_api.md

# HfApi Client

Below is the documentation for the `HfApi` class, which serves as a Python wrapper for the Hugging Face Hub's API.

All methods from `HfApi` are also accessible directly from the package's root. Both approaches are detailed below.

Using the root method is more straightforward but the [HfApi](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi) class gives you more flexibility.
In particular, you can pass a token that will be reused in all HTTP calls. This is different
from `hf auth login` or [login()](/docs/huggingface_hub/main/en/package_reference/authentication#huggingface_hub.login) as the token is not persisted on the machine.
It is also possible to provide a different endpoint or configure a custom user-agent.

```python
from huggingface_hub import HfApi, list_models

# Use root method
models = list_models()

# Or configure a HfApi client
hf_api = HfApi(
    endpoint="https://huggingface.co", # Can be a Private Hub endpoint.
    token="hf_xxx", # Token is not persisted on the machine.
)
models = hf_api.list_models()
```

## HfApi[[huggingface_hub.HfApi]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.HfApi</name><anchor>huggingface_hub.HfApi</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L1649</source><parameters>[{"name": "endpoint", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[str, bool, None] = None"}, {"name": "library_name", "val": ": Optional[str] = None"}, {"name": "library_version", "val": ": Optional[str] = None"}, {"name": "user_agent", "val": ": Union[dict, str, None] = None"}, {"name": "headers", "val": ": Optional[dict[str, str]] = None"}]</parameters><paramsdesc>- **endpoint** (`str`, *optional*) --
  Endpoint of the Hub. Defaults to <https://huggingface.co>.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **library_name** (`str`, *optional*) --
  The name of the library that is making the HTTP request. Will be added to
  the user-agent header. Example: `"transformers"`.
- **library_version** (`str`, *optional*) --
  The version of the library that is making the HTTP request. Will be added
  to the user-agent header. Example: `"4.24.0"`.
- **user_agent** (`str`, `dict`, *optional*) --
  The user agent info in the form of a dictionary or a single string. It will
  be completed with information about the installed packages.
- **headers** (`dict`, *optional*) --
  Additional headers to be sent with each request. Example: `{"X-My-Header": "value"}`.
  Headers passed here take precedence over the default headers.</paramsdesc><paramgroups>0</paramgroups></docstring>

Client to interact with the Hugging Face Hub via HTTP.

The client is initialized with some high-level settings used in all requests
made to the Hub (HF endpoint, authentication, user agents...). Using the `HfApi`
client is preferred but not mandatory as all of its public methods are exposed
directly at the root of `huggingface_hub`.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>accept_access_request</name><anchor>huggingface_hub.HfApi.accept_access_request</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L8750</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "user", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The id of the repo to accept the access request for.
- **user** (`str`) --
  The username of the user whose access request should be accepted.
- **repo_type** (`str`, *optional*) --
  The type of the repo to accept access request for. Must be one of `model`, `dataset` or `space`.
  Defaults to `model`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><raises>- `HfHubHTTPError` -- 
  HTTP 400 if the repo is not gated.
- `HfHubHTTPError` -- 
  HTTP 403 if you only have read-only access to the repo. This can be the case if you don't have `write`
  or `admin` role in the organization the repo belongs to or if you passed a `read` token.
- `HfHubHTTPError` -- 
  HTTP 404 if the user does not exist on the Hub.
- `HfHubHTTPError` -- 
  HTTP 404 if the user access request cannot be found.
- `HfHubHTTPError` -- 
  HTTP 404 if the user access request is already in the accepted list.</raises><raisederrors>`HfHubHTTPError`</raisederrors></docstring>

Accept an access request from a user for a given gated repo.

Once the request is accepted, the user will be able to download any file of the repo and access the community
tab. If the approval mode is automatic, you don't have to accept requests manually. An accepted request can be
cancelled or rejected at any time using [cancel_access_request()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.cancel_access_request) and [reject_access_request()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.reject_access_request).

For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>add_collection_item</name><anchor>huggingface_hub.HfApi.add_collection_item</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L8302</source><parameters>[{"name": "collection_slug", "val": ": str"}, {"name": "item_id", "val": ": str"}, {"name": "item_type", "val": ": CollectionItemType_T"}, {"name": "note", "val": ": Optional[str] = None"}, {"name": "exists_ok", "val": ": bool = False"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **collection_slug** (`str`) --
  Slug of the collection to update. Example: `"TheBloke/recent-models-64f9a55bb3115b4f513ec026"`.
- **item_id** (`str`) --
  ID of the item to add to the collection. It can be the ID of a repo on the Hub (e.g. `"facebook/bart-large-mnli"`)
  or a paper id (e.g. `"2307.09288"`).
- **item_type** (`str`) --
  Type of the item to add. Can be one of `"model"`, `"dataset"`, `"space"` or `"paper"`.
- **note** (`str`, *optional*) --
  A note to attach to the item in the collection. The maximum size for a note is 500 characters.
- **exists_ok** (`bool`, *optional*) --
  If `True`, do not raise an error if item already exists.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><raises>- `HfHubHTTPError` -- 
  HTTP 403 if you only have read-only access to the repo. This can be the case if you don't have `write`
  or `admin` role in the organization the repo belongs to or if you passed a `read` token.
- `HfHubHTTPError` -- 
  HTTP 404 if the item you try to add to the collection does not exist on the Hub.
- `HfHubHTTPError` -- 
  HTTP 409 if the item you try to add to the collection is already in the collection (and `exists_ok=False`).</raises><raisederrors>`HfHubHTTPError`</raisederrors></docstring>
Add an item to a collection on the Hub.



Returns: [Collection](/docs/huggingface_hub/main/en/package_reference/collections#huggingface_hub.Collection)





<ExampleCodeBlock anchor="huggingface_hub.HfApi.add_collection_item.example">

Example:

```py
>>> from huggingface_hub import add_collection_item
>>> collection = add_collection_item(
...     collection_slug="davanstrien/climate-64f99dc2a5067f6b65531bab",
...     item_id="pierre-loic/climate-news-articles",
...     item_type="dataset"
... )
>>> collection.items[-1].item_id
"pierre-loic/climate-news-articles"
# ^the item was added at the last position of the collection

# Add item with a note
>>> add_collection_item(
...     collection_slug="davanstrien/climate-64f99dc2a5067f6b65531bab",
...     item_id="datasets/climate_fever",
...     item_type="dataset",
...     note="This dataset adopts the FEVER methodology that consists of 1,535 real-world claims regarding climate-change collected on the internet."
... )
(...)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>add_space_secret</name><anchor>huggingface_hub.HfApi.add_space_secret</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6770</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "key", "val": ": str"}, {"name": "value", "val": ": str"}, {"name": "description", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  ID of the repo to update. Example: `"bigcode/in-the-stack"`.
- **key** (`str`) --
  Secret key. Example: `"GITHUB_API_KEY"`
- **value** (`str`) --
  Secret value. Example: `"your_github_api_key"`.
- **description** (`str`, *optional*) --
  Secret description. Example: `"Github API key to access the Github API"`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Adds or updates a secret in a Space.

Secrets let you set secret keys or tokens in a Space without hardcoding them.
For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>add_space_variable</name><anchor>huggingface_hub.HfApi.add_space_variable</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6859</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "key", "val": ": str"}, {"name": "value", "val": ": str"}, {"name": "description", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  ID of the repo to update. Example: `"bigcode/in-the-stack"`.
- **key** (`str`) --
  Variable key. Example: `"MODEL_REPO_ID"`
- **value** (`str`) --
  Variable value. Example: `"the_model_repo_id"`.
- **description** (`str`, *optional*) --
  Description of the variable. Example: `"Model Repo ID of the implemented model"`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Adds or updates a variable in a Space.

Variables let you set environment variables in a Space without hardcoding them.
For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets-and-environment-variables




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>auth_check</name><anchor>huggingface_hub.HfApi.auth_check</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L9780</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The repository to check for access. Format should be `"user/repo_name"`.
  Example: `"user/my-cool-model"`.

- **repo_type** (`str`, *optional*) --
  The type of the repository. Should be one of `"model"`, `"dataset"`, or `"space"`.
  If not specified, the default is `"model"`.

- **token** `(Union[bool, str, None]`, *optional*) --
  A valid user access token. If not provided, the locally saved token will be used, which is the
  recommended authentication method. Set to `False` to disable authentication.
  Refer to: https://huggingface.co/docs/huggingface_hub/quick-start#authentication.</paramsdesc><paramgroups>0</paramgroups><raises>- [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  Raised if the repository does not exist, is private, or the user does not have access. This can
  occur if the `repo_id` or `repo_type` is incorrect or if the repository is private but the user
  is not authenticated.

- [GatedRepoError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.GatedRepoError) -- 
  Raised if the repository exists but is gated and the user is not authorized to access it.</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) or [GatedRepoError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.GatedRepoError)</raisederrors></docstring>

Check if the provided user token has access to a specific repository on the Hugging Face Hub.

This method verifies whether the user, authenticated via the provided token, has access to the specified
repository. If the repository is not found or if the user lacks the required permissions to access it,
the method raises an appropriate exception.







Example:
<ExampleCodeBlock anchor="huggingface_hub.HfApi.auth_check.example">

Check if the user has access to a repository:

```python
>>> from huggingface_hub import auth_check
>>> from huggingface_hub.utils import GatedRepoError, RepositoryNotFoundError

try:
    auth_check("user/my-cool-model")
except GatedRepoError:
    # Handle gated repository error
    print("You do not have permission to access this gated repository.")
except RepositoryNotFoundError:
    # Handle repository not found error
    print("The repository was not found or you do not have access.")
```

</ExampleCodeBlock>

In this example:
- If the user has access, the method completes successfully.
- If the repository is gated or does not exist, appropriate exceptions are raised, allowing the user
to handle them accordingly.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>cancel_access_request</name><anchor>huggingface_hub.HfApi.cancel_access_request</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L8710</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "user", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The id of the repo to cancel the access request for.
- **user** (`str`) --
  The username of the user whose access request should be cancelled.
- **repo_type** (`str`, *optional*) --
  The type of the repo to cancel the access request for. Must be one of `model`, `dataset` or `space`.
  Defaults to `model`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><raises>- `HfHubHTTPError` -- 
  HTTP 400 if the repo is not gated.
- `HfHubHTTPError` -- 
  HTTP 403 if you only have read-only access to the repo. This can be the case if you don't have `write`
  or `admin` role in the organization the repo belongs to or if you passed a `read` token.
- `HfHubHTTPError` -- 
  HTTP 404 if the user does not exist on the Hub.
- `HfHubHTTPError` -- 
  HTTP 404 if the user access request cannot be found.
- `HfHubHTTPError` -- 
  HTTP 404 if the user access request is already in the pending list.</raises><raisederrors>`HfHubHTTPError`</raisederrors></docstring>

Cancel an access request from a user for a given gated repo.

A cancelled request will go back to the pending list and the user will lose access to the repo.

For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>cancel_job</name><anchor>huggingface_hub.HfApi.cancel_job</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L10101</source><parameters>[{"name": "job_id", "val": ": str"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **job_id** (`str`) --
  ID of the Job.

- **namespace** (`str`, *optional*) --
  The namespace where the Job is running. Defaults to the current user's namespace.

- **token** `(Union[bool, str, None]`, *optional*) --
  A valid user access token. If not provided, the locally saved token will be used, which is the
  recommended authentication method. Set to `False` to disable authentication.
  Refer to: https://huggingface.co/docs/huggingface_hub/quick-start#authentication.</paramsdesc><paramgroups>0</paramgroups></docstring>

Cancel a compute Job on Hugging Face infrastructure.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>change_discussion_status</name><anchor>huggingface_hub.HfApi.change_discussion_status</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6525</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "discussion_num", "val": ": int"}, {"name": "new_status", "val": ": Literal['open', 'closed']"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "comment", "val": ": Optional[str] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **discussion_num** (`int`) --
  The number of the Discussion or Pull Request. Must be a strictly positive integer.
- **new_status** (`str`) --
  The new status for the discussion, either `"open"` or `"closed"`.
- **comment** (`str`, *optional*) --
  An optional comment to post with the status change.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if uploading to a dataset or
  space, `None` or `"model"` if uploading to a model. Default is
  `None`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[DiscussionStatusChange](/docs/huggingface_hub/main/en/package_reference/community#huggingface_hub.DiscussionStatusChange)</rettype><retdesc>the status change event</retdesc></docstring>
Closes or re-opens a Discussion or Pull Request.







<ExampleCodeBlock anchor="huggingface_hub.HfApi.change_discussion_status.example">

Examples:
```python
>>> HfApi().change_discussion_status(
...     repo_id="username/repo_name",
...     discussion_num=34,
...     new_status="closed",
...     comment="Closing this discussion, the issue is resolved."
... )
# DiscussionStatusChange(id='deadbeef0000000', type='status-change', ...)

```

</ExampleCodeBlock>

> [!TIP]
> Raises the following errors:
>
>     - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError)
>       if the HuggingFace API returned an error
>     - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
>       if some parameter value is invalid
>     - [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)
>       If the repository to download from cannot be found. This may be because it doesn't exist,
>       or because it is set to `private` and you do not have access.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>comment_discussion</name><anchor>huggingface_hub.HfApi.comment_discussion</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6382</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "discussion_num", "val": ": int"}, {"name": "comment", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **discussion_num** (`int`) --
  The number of the Discussion or Pull Request. Must be a strictly positive integer.
- **comment** (`str`) --
  The content of the comment to create. Comments support markdown formatting.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if uploading to a dataset or
  space, `None` or `"model"` if uploading to a model. Default is
  `None`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[DiscussionComment](/docs/huggingface_hub/main/en/package_reference/community#huggingface_hub.DiscussionComment)</rettype><retdesc>the newly created comment</retdesc></docstring>
Creates a new comment on the given Discussion.







<ExampleCodeBlock anchor="huggingface_hub.HfApi.comment_discussion.example">

Examples:
```python

>>> comment = """
... Hello @otheruser!
...
... # This is a title
...
... **This is bold**, *this is italic* and ~this is strikethrough~
... And [this](http://url) is a link
... """

>>> HfApi().comment_discussion(
...     repo_id="username/repo_name",
...     discussion_num=34,
...     comment=comment
... )
# DiscussionComment(id='deadbeef0000000', type='comment', ...)

```

</ExampleCodeBlock>

> [!TIP]
> Raises the following errors:
>
>     - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError)
>       if the HuggingFace API returned an error
>     - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
>       if some parameter value is invalid
>     - [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)
>       If the repository to download from cannot be found. This may be because it doesn't exist,
>       or because it is set to `private` and you do not have access.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_branch</name><anchor>huggingface_hub.HfApi.create_branch</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L5732</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "branch", "val": ": str"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "exist_ok", "val": ": bool = False"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The repository in which the branch will be created.
  Example: `"user/my-cool-model"`.

- **branch** (`str`) --
  The name of the branch to create.

- **revision** (`str`, *optional*) --
  The git revision to create the branch from. It can be a branch name or
  the OID/SHA of a commit, as a hexadecimal string. Defaults to the head
  of the `"main"` branch.

- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.

- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if creating a branch on a dataset or
  space, `None` or `"model"` if creating a branch on a model. Default is `None`.

- **exist_ok** (`bool`, *optional*, defaults to `False`) --
  If `True`, do not raise an error if branch already exists.</paramsdesc><paramgroups>0</paramgroups><raises>- [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  If repository is not found (error 404): wrong repo_id/repo_type, private
  but not authenticated or repo does not exist.
- [BadRequestError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.BadRequestError) -- 
  If invalid reference for a branch. Example: `refs/pr/5` or `refs/foo/bar`.
- [HfHubHTTPError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.HfHubHTTPError) -- 
  If the branch already exists on the repo (error 409) and `exist_ok` is
  set to `False`.</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) or [BadRequestError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.BadRequestError) or [HfHubHTTPError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.HfHubHTTPError)</raisederrors></docstring>

Create a new branch for a repo on the Hub, starting from the specified revision (defaults to `main`).
To find a revision suiting your needs, you can use [list_repo_refs()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.list_repo_refs) or [list_repo_commits()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.list_repo_commits).








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_collection</name><anchor>huggingface_hub.HfApi.create_collection</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L8128</source><parameters>[{"name": "title", "val": ": str"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "description", "val": ": Optional[str] = None"}, {"name": "private", "val": ": bool = False"}, {"name": "exists_ok", "val": ": bool = False"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **title** (`str`) --
  Title of the collection to create. Example: `"Recent models"`.
- **namespace** (`str`, *optional*) --
  Namespace of the collection to create (username or org). Will default to the owner name.
- **description** (`str`, *optional*) --
  Description of the collection to create.
- **private** (`bool`, *optional*) --
  Whether the collection should be private or not. Defaults to `False` (i.e. public collection).
- **exists_ok** (`bool`, *optional*) --
  If `True`, do not raise an error if collection already exists.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Create a new Collection on the Hub.



Returns: [Collection](/docs/huggingface_hub/main/en/package_reference/collections#huggingface_hub.Collection)

<ExampleCodeBlock anchor="huggingface_hub.HfApi.create_collection.example">

Example:

```py
>>> from huggingface_hub import create_collection
>>> collection = create_collection(
...     title="ICCV 2023",
...     description="Portfolio of models, papers and demos I presented at ICCV 2023",
... )
>>> collection.slug
"username/iccv-2023-64f9a55bb3115b4f513ec026"
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_commit</name><anchor>huggingface_hub.HfApi.create_commit</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L3960</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "operations", "val": ": Iterable[CommitOperation]"}, {"name": "commit_message", "val": ": str"}, {"name": "commit_description", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[str, bool, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "create_pr", "val": ": Optional[bool] = None"}, {"name": "num_threads", "val": ": int = 5"}, {"name": "parent_commit", "val": ": Optional[str] = None"}, {"name": "run_as_future", "val": ": bool = False"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The repository in which the commit will be created, for example:
  `"username/custom_transformers"`

- **operations** (`Iterable` of `CommitOperation()`) --
  An iterable of operations to include in the commit, either:

  - [CommitOperationAdd](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.CommitOperationAdd) to upload a file
  - [CommitOperationDelete](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.CommitOperationDelete) to delete a file
  - [CommitOperationCopy](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.CommitOperationCopy) to copy a file

  Operation objects will be mutated to include information relative to the upload. Do not reuse the
  same objects for multiple commits.

- **commit_message** (`str`) --
  The summary (first line) of the commit that will be created.

- **commit_description** (`str`, *optional*) --
  The description of the commit that will be created.

- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.

- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if uploading to a dataset or
  space, `None` or `"model"` if uploading to a model. Default is
  `None`.

- **revision** (`str`, *optional*) --
  The git revision to commit from. Defaults to the head of the `"main"` branch.

- **create_pr** (`bool`, *optional*) --
  Whether or not to create a Pull Request with that commit. Defaults to `False`.
  If `revision` is not set, the PR is opened against the `"main"` branch. If
  `revision` is set and is a branch, the PR is opened against this branch. If
  `revision` is set and is not a branch name (example: a commit oid), a
  `RevisionNotFoundError` is returned by the server.

- **num_threads** (`int`, *optional*) --
  Number of concurrent threads for uploading files. Defaults to 5.
  Setting it to 2 means at most 2 files will be uploaded concurrently.

- **parent_commit** (`str`, *optional*) --
  The OID / SHA of the parent commit, as a hexadecimal string.
  Shorthands (first 7 characters) are also supported. If specified and `create_pr` is `False`,
  the commit will fail if `revision` does not point to `parent_commit`. If specified and `create_pr`
  is `True`, the pull request will be created from `parent_commit`. Specifying `parent_commit`
  ensures the repo has not changed before committing the changes, and can be especially useful
  if the repo is updated / committed to concurrently.
- **run_as_future** (`bool`, *optional*) --
  Whether or not to run this method in the background. Background jobs are run sequentially without
  blocking the main thread. Passing `run_as_future=True` will return a [Future](https://docs.python.org/3/library/concurrent.futures.html#future-objects)
  object. Defaults to `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[CommitInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.CommitInfo) or `Future`</rettype><retdesc>Instance of [CommitInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.CommitInfo) containing information about the newly created commit (commit hash, commit
url, pr url, commit message,...). If `run_as_future=True` is passed, returns a Future object which will
contain the result when executed.</retdesc><raises>- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If commit message is empty.
- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If parent commit is not a valid commit OID.
- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If a README.md file with an invalid metadata section is committed. In this case, the commit will fail
  early, before trying to upload any file.
- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If `create_pr` is `True` and revision is neither `None` nor `"main"`.
- [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  If repository is not found (error 404): wrong repo_id/repo_type, private
  but not authenticated or repo does not exist.</raises><raisederrors>``ValueError`` or [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)</raisederrors></docstring>

Creates a commit in the given repo, deleting & uploading files as needed.

> [!WARNING]
> The input list of `CommitOperation` will be mutated during the commit process. Do not reuse the same objects
> for multiple commits.

> [!WARNING]
> `create_commit` assumes that the repo already exists on the Hub. If you get a
> Client error 404, please make sure you are authenticated and that `repo_id` and
> `repo_type` are set correctly. If repo does not exist, create it first using
> [create_repo()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_repo).

> [!WARNING]
> `create_commit` is limited to 25k LFS files and a 1GB payload for regular files.












</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_discussion</name><anchor>huggingface_hub.HfApi.create_discussion</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6209</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "title", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "description", "val": ": Optional[str] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "pull_request", "val": ": bool = False"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **title** (`str`) --
  The title of the discussion. It can be up to 200 characters long,
  and must be at least 3 characters long. Leading and trailing whitespaces
  will be stripped.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **description** (`str`, *optional*) --
  An optional description for the Discussion or Pull Request.
  Defaults to `"Discussion opened with the huggingface_hub Python library"`
- **pull_request** (`bool`, *optional*) --
  Whether to create a Pull Request or discussion. If `True`, creates a Pull Request.
  If `False`, creates a discussion. Defaults to `False`.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if uploading to a dataset or
  space, `None` or `"model"` if uploading to a model. Default is
  `None`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Creates a Discussion or Pull Request.

Pull Requests created programmatically will be in `"draft"` status.

Creating a Pull Request with changes can also be done at once with [HfApi.create_commit()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_commit).



Returns: [DiscussionWithDetails](/docs/huggingface_hub/main/en/package_reference/community#huggingface_hub.DiscussionWithDetails)

> [!TIP]
> Raises the following errors:
>
> - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError)
>   if the HuggingFace API returned an error
> - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
>   if some parameter value is invalid
> - [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)
>   if the repository cannot be found. This may be because it doesn't exist,
>   or because it is set to `private` and you do not have access.
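
For instance, opening a simple discussion might look like this (the repo id and texts below are placeholders):

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> discussion = api.create_discussion(
...     repo_id="username/my-model",
...     title="Question about the training data",
...     description="Which dataset was this model fine-tuned on?",
... )
```

Set `pull_request=True` to open a (draft) Pull Request instead.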

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_inference_endpoint</name><anchor>huggingface_hub.HfApi.create_inference_endpoint</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L7380</source><parameters>[{"name": "name", "val": ": str"}, {"name": "repository", "val": ": str"}, {"name": "framework", "val": ": str"}, {"name": "accelerator", "val": ": str"}, {"name": "instance_size", "val": ": str"}, {"name": "instance_type", "val": ": str"}, {"name": "region", "val": ": str"}, {"name": "vendor", "val": ": str"}, {"name": "account_id", "val": ": Optional[str] = None"}, {"name": "min_replica", "val": ": int = 1"}, {"name": "max_replica", "val": ": int = 1"}, {"name": "scale_to_zero_timeout", "val": ": Optional[int] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "task", "val": ": Optional[str] = None"}, {"name": "custom_image", "val": ": Optional[dict] = None"}, {"name": "env", "val": ": Optional[dict[str, str]] = None"}, {"name": "secrets", "val": ": Optional[dict[str, str]] = None"}, {"name": "type", "val": ": InferenceEndpointType = <InferenceEndpointType.PROTECTED: 'protected'>"}, {"name": "domain", "val": ": Optional[str] = None"}, {"name": "path", "val": ": Optional[str] = None"}, {"name": "cache_http_responses", "val": ": Optional[bool] = None"}, {"name": "tags", "val": ": Optional[list[str]] = None"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **name** (`str`) --
  The unique name for the new Inference Endpoint.
- **repository** (`str`) --
  The name of the model repository associated with the Inference Endpoint (e.g. `"gpt2"`).
- **framework** (`str`) --
  The machine learning framework used for the model (e.g. `"custom"`).
- **accelerator** (`str`) --
  The hardware accelerator to be used for inference (e.g. `"cpu"`).
- **instance_size** (`str`) --
  The size or type of the instance to be used for hosting the model (e.g. `"x4"`).
- **instance_type** (`str`) --
  The cloud instance type where the Inference Endpoint will be deployed (e.g. `"intel-icl"`).
- **region** (`str`) --
  The cloud region in which the Inference Endpoint will be created (e.g. `"us-east-1"`).
- **vendor** (`str`) --
  The cloud provider or vendor where the Inference Endpoint will be hosted (e.g. `"aws"`).
- **account_id** (`str`, *optional*) --
  The account ID used to link a VPC to a private Inference Endpoint (if applicable).
- **min_replica** (`int`, *optional*) --
  The minimum number of replicas (instances) to keep running for the Inference Endpoint. To enable
  scaling to zero, set this value to 0 and adjust `scale_to_zero_timeout` accordingly. Defaults to 1.
- **max_replica** (`int`, *optional*) --
  The maximum number of replicas (instances) to scale to for the Inference Endpoint. Defaults to 1.
- **scale_to_zero_timeout** (`int`, *optional*) --
  The duration in minutes before an inactive endpoint is scaled to zero. If `None` and `min_replica` is
  not 0, the endpoint is never scaled to zero. Defaults to `None`.
- **revision** (`str`, *optional*) --
  The specific model revision to deploy on the Inference Endpoint (e.g. `"6c0e6080953db56375760c0471a8c5f2929baf11"`).
- **task** (`str`, *optional*) --
  The task on which to deploy the model (e.g. `"text-classification"`).
- **custom_image** (`dict`, *optional*) --
  A custom Docker image to use for the Inference Endpoint. This is useful if you want to deploy an
  Inference Endpoint running on the `text-generation-inference` (TGI) framework (see examples).
- **env** (`dict[str, str]`, *optional*) --
  Non-secret environment variables to inject in the container environment.
- **secrets** (`dict[str, str]`, *optional*) --
  Secret values to inject in the container environment.
- **type** (`InferenceEndpointType`, *optional*) --
  The type of the Inference Endpoint, which can be `"protected"` (default), `"public"` or `"private"`.
- **domain** (`str`, *optional*) --
  The custom domain for the Inference Endpoint deployment. If set up, the Inference Endpoint will be available at this domain (e.g. `"my-new-domain.cool-website.woof"`).
- **path** (`str`, *optional*) --
  The custom path to the deployed model, should start with a `/` (e.g. `"/models/google-bert/bert-base-uncased"`).
- **cache_http_responses** (`bool`, *optional*) --
  Whether to cache HTTP responses from the Inference Endpoint. Defaults to `False`.
- **tags** (`list[str]`, *optional*) --
  A list of tags to associate with the Inference Endpoint.
- **namespace** (`str`, *optional*) --
  The namespace where the Inference Endpoint will be created. Defaults to the current user's namespace.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[InferenceEndpoint](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint)</rettype><retdesc>information about the updated Inference Endpoint.</retdesc></docstring>
Create a new Inference Endpoint.







<ExampleCodeBlock anchor="huggingface_hub.HfApi.create_inference_endpoint.example">

Example:
```python
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> endpoint = api.create_inference_endpoint(
...     "my-endpoint-name",
...     repository="gpt2",
...     framework="pytorch",
...     task="text-generation",
...     accelerator="cpu",
...     vendor="aws",
...     region="us-east-1",
...     type="protected",
...     instance_size="x2",
...     instance_type="intel-icl",
... )
>>> endpoint
InferenceEndpoint(name='my-endpoint-name', status="pending",...)

# Run inference on the endpoint
>>> endpoint.client.text_generation(...)
"..."
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.HfApi.create_inference_endpoint.example-2">

```python
# Start an Inference Endpoint running Zephyr-7b-beta on TGI
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> endpoint = api.create_inference_endpoint(
...     "aws-zephyr-7b-beta-0486",
...     repository="HuggingFaceH4/zephyr-7b-beta",
...     framework="pytorch",
...     task="text-generation",
...     accelerator="gpu",
...     vendor="aws",
...     region="us-east-1",
...     type="protected",
...     instance_size="x1",
...     instance_type="nvidia-a10g",
...     env={
...           "MAX_BATCH_PREFILL_TOKENS": "2048",
...           "MAX_INPUT_LENGTH": "1024",
...           "MAX_TOTAL_TOKENS": "1512",
...           "MODEL_ID": "/repository"
...         },
...     custom_image={
...         "health_route": "/health",
...         "url": "ghcr.io/huggingface/text-generation-inference:1.1.0",
...     },
...    secrets={"MY_SECRET_KEY": "secret_value"},
...    tags=["dev", "text-generation"],
... )
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.HfApi.create_inference_endpoint.example-3">

```python
# Start an Inference Endpoint running ProsusAI/finbert while scaling to zero in 15 minutes
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> endpoint = api.create_inference_endpoint(
...     "finbert-classifier",
...     repository="ProsusAI/finbert",
...     framework="pytorch",
...     task="text-classification",
...     min_replica=0,
...     scale_to_zero_timeout=15,
...     accelerator="cpu",
...     vendor="aws",
...     region="us-east-1",
...     type="protected",
...     instance_size="x2",
...     instance_type="intel-icl",
... )
>>> endpoint.wait(timeout=300)
# Run inference on the endpoint
>>> endpoint.client.text_classification(...)
[TextClassificationOutputElement(label='positive', score=0.8983615040779114)]
```

</ExampleCodeBlock>



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_inference_endpoint_from_catalog</name><anchor>huggingface_hub.HfApi.create_inference_endpoint_from_catalog</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L7609</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "name", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "namespace", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The ID of the model in the catalog to deploy as an Inference Endpoint.
- **name** (`str`, *optional*) --
  The unique name for the new Inference Endpoint. If not provided, a random name will be generated.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
- **namespace** (`str`, *optional*) --
  The namespace where the Inference Endpoint will be created. Defaults to the current user's namespace.</paramsdesc><paramgroups>0</paramgroups><rettype>[InferenceEndpoint](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint)</rettype><retdesc>information about the new Inference Endpoint.</retdesc></docstring>
Create a new Inference Endpoint from a model in the Hugging Face Inference Catalog.

The goal of the Inference Catalog is to provide a curated list of models that are optimized for inference
and for which default configurations have been tested. See https://endpoints.huggingface.co/catalog for a list
of available models in the catalog.







> [!WARNING]
> `create_inference_endpoint_from_catalog` is experimental. Its API is subject to change in the future. Please provide feedback
> if you have any suggestions or requests.
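
A minimal sketch (the model id is assumed to be available in the catalog, and the endpoint name is a placeholder):

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> endpoint = api.create_inference_endpoint_from_catalog(
...     "openai/gpt-oss-20b",
...     name="my-catalog-endpoint",
... )
>>> endpoint.wait()
>>> endpoint.client.chat_completion(messages=[{"role": "user", "content": "Hello!"}])
```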


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_pull_request</name><anchor>huggingface_hub.HfApi.create_pull_request</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6298</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "title", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "description", "val": ": Optional[str] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **title** (`str`) --
  The title of the discussion. It can be up to 200 characters long,
  and must be at least 3 characters long. Leading and trailing whitespaces
  will be stripped.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **description** (`str`, *optional*) --
  An optional description for the Pull Request.
  Defaults to `"Discussion opened with the huggingface_hub Python library"`
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if uploading to a dataset or
  space, `None` or `"model"` if uploading to a model. Default is
  `None`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Creates a Pull Request. Pull Requests created programmatically will be in `"draft"` status.

Creating a Pull Request with changes can also be done at once with [HfApi.create_commit()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_commit).

This is a wrapper around [HfApi.create_discussion()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_discussion).



Returns: [DiscussionWithDetails](/docs/huggingface_hub/main/en/package_reference/community#huggingface_hub.DiscussionWithDetails)

> [!TIP]
> Raises the following errors:
>
> - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError)
>   if the HuggingFace API returned an error
> - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
>   if some parameter value is invalid
> - [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)
>   if the repository cannot be found. This may be because it doesn't exist,
>   or because it is set to `private` and you do not have access.
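
For instance (the repo id and texts below are placeholders):

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> pr = api.create_pull_request(
...     repo_id="username/my-model",
...     title="Add evaluation results to the model card",
...     description="Adds accuracy numbers on the test split.",
... )
```

The returned `DiscussionWithDetails` exposes, among other things, `pr.num` and `pr.git_reference` (the branch to push changes to).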

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_repo</name><anchor>huggingface_hub.HfApi.create_repo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L3595</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "token", "val": ": Union[str, bool, None] = None"}, {"name": "private", "val": ": Optional[bool] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "exist_ok", "val": ": bool = False"}, {"name": "resource_group_id", "val": ": Optional[str] = None"}, {"name": "space_sdk", "val": ": Optional[str] = None"}, {"name": "space_hardware", "val": ": Optional[SpaceHardware] = None"}, {"name": "space_storage", "val": ": Optional[SpaceStorage] = None"}, {"name": "space_sleep_time", "val": ": Optional[int] = None"}, {"name": "space_secrets", "val": ": Optional[list[dict[str, str]]] = None"}, {"name": "space_variables", "val": ": Optional[list[dict[str, str]]] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **private** (`bool`, *optional*) --
  Whether to make the repo private. If `None` (default), the repo will be public unless the organization's default is private. This value is ignored if the repo already exists.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if uploading to a dataset or
  space, `None` or `"model"` if uploading to a model. Default is
  `None`.
- **exist_ok** (`bool`, *optional*, defaults to `False`) --
  If `True`, do not raise an error if repo already exists.
- **resource_group_id** (`str`, *optional*) --
  Resource group in which to create the repo. Resource groups are only available to Enterprise Hub organizations and
  allow defining which members of the organization can access the resource. The ID of a resource group
  can be found in the URL of the resource's page on the Hub (e.g. `"66670e5163145ca562cb1988"`).
  To learn more about resource groups, see https://huggingface.co/docs/hub/en/security-resource-groups.
- **space_sdk** (`str`, *optional*) --
  Choice of SDK to use if repo_type is "space". Can be "streamlit", "gradio", "docker", or "static".
- **space_hardware** (`SpaceHardware` or `str`, *optional*) --
  Choice of Hardware if repo_type is "space". See [SpaceHardware](/docs/huggingface_hub/main/en/package_reference/space_runtime#huggingface_hub.SpaceHardware) for a complete list.
- **space_storage** (`SpaceStorage` or `str`, *optional*) --
  Choice of persistent storage tier. Example: `"small"`. See [SpaceStorage](/docs/huggingface_hub/main/en/package_reference/space_runtime#huggingface_hub.SpaceStorage) for a complete list.
- **space_sleep_time** (`int`, *optional*) --
  Number of seconds of inactivity to wait before a Space is put to sleep. Set to `-1` if you don't want
  your Space to sleep (default behavior for upgraded hardware). For free hardware, you can't configure
  the sleep time (value is fixed to 48 hours of inactivity).
  See https://huggingface.co/docs/hub/spaces-gpus#sleep-time for more details.
- **space_secrets** (`list[dict[str, str]]`, *optional*) --
  A list of secret keys to set in your Space. Each item is in the form `{"key": ..., "value": ..., "description": ...}` where description is optional.
  For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets.
- **space_variables** (`list[dict[str, str]]`, *optional*) --
  A list of public environment variables to set in your Space. Each item is in the form `{"key": ..., "value": ..., "description": ...}` where description is optional.
  For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets-and-environment-variables.</paramsdesc><paramgroups>0</paramgroups><rettype>[RepoUrl](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.RepoUrl)</rettype><retdesc>URL to the newly created repo. Value is a subclass of `str` containing
attributes like `endpoint`, `repo_type` and `repo_id`.</retdesc></docstring>
Create an empty repo on the HuggingFace Hub.
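
For example, creating a private dataset repo (the repo id is a placeholder):

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> url = api.create_repo("username/my-new-dataset", repo_type="dataset", private=True, exist_ok=True)
>>> url.repo_id
'username/my-new-dataset'
```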








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_scheduled_job</name><anchor>huggingface_hub.HfApi.create_scheduled_job</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L10245</source><parameters>[{"name": "image", "val": ": str"}, {"name": "command", "val": ": list[str]"}, {"name": "schedule", "val": ": str"}, {"name": "suspend", "val": ": Optional[bool] = None"}, {"name": "concurrency", "val": ": Optional[bool] = None"}, {"name": "env", "val": ": Optional[dict[str, Any]] = None"}, {"name": "secrets", "val": ": Optional[dict[str, Any]] = None"}, {"name": "flavor", "val": ": Optional[SpaceHardware] = None"}, {"name": "timeout", "val": ": Optional[Union[int, float, str]] = None"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **image** (`str`) --
  The Docker image to use.
  Examples: `"ubuntu"`, `"python:3.12"`, `"pytorch/pytorch:2.6.0-cuda12.4-cudnn9-devel"`.
  Example with an image from a Space: `"hf.co/spaces/lhoestq/duckdb"`.

- **command** (`list[str]`) --
  The command to run. Example: `["echo", "hello"]`.

- **schedule** (`str`) --
  One of "@annually", "@yearly", "@monthly", "@weekly", "@daily", "@hourly", or a
  CRON schedule expression (e.g., '0 9 * * 1' for 9 AM every Monday).

- **suspend** (`bool`, *optional*) --
  If `True`, the scheduled Job is suspended (paused). Defaults to `False`.

- **concurrency** (`bool`, *optional*) --
  If `True`, multiple instances of this Job can run concurrently. Defaults to `False`.

- **env** (`dict[str, Any]`, *optional*) --
  Defines the environment variables for the Job.

- **secrets** (`dict[str, Any]`, *optional*) --
  Defines the secret environment variables for the Job.

- **flavor** (`str`, *optional*) --
  Flavor for the hardware, as in Hugging Face Spaces. See [SpaceHardware](/docs/huggingface_hub/main/en/package_reference/space_runtime#huggingface_hub.SpaceHardware) for possible values.
  Defaults to `"cpu-basic"`.

- **timeout** (`Union[int, float, str]`, *optional*) --
  Max duration for the Job: int/float with s (seconds, default), m (minutes), h (hours) or d (days).
  Example: `300` or `"5m"` for 5 minutes.

- **namespace** (`str`, *optional*) --
  The namespace where the Job will be created. Defaults to the current user's namespace.

- **token** (`Union[bool, str, None]`, *optional*) --
  A valid user access token. If not provided, the locally saved token will be used, which is the
  recommended authentication method. Set to `False` to disable authentication.
  Refer to: https://huggingface.co/docs/huggingface_hub/quick-start#authentication.</paramsdesc><paramgroups>0</paramgroups></docstring>

Create a scheduled compute Job on Hugging Face infrastructure.



Example:
<ExampleCodeBlock anchor="huggingface_hub.HfApi.create_scheduled_job.example">

Create your first scheduled Job:

```python
>>> from huggingface_hub import create_scheduled_job
>>> create_scheduled_job(image="python:3.12", command=["python", "-c" ,"print('Hello from HF compute!')"], schedule="@hourly")
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.HfApi.create_scheduled_job.example-2">

Use a CRON schedule expression:

```python
>>> from huggingface_hub import create_scheduled_job
>>> create_scheduled_job(image="python:3.12", command=["python", "-c" ,"print('this runs every 5min')"], schedule="*/5 * * * *")
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.HfApi.create_scheduled_job.example-3">

Create a scheduled GPU Job:

```python
>>> from huggingface_hub import create_scheduled_job
>>> image = "pytorch/pytorch:2.6.0-cuda12.4-cudnn9-devel"
>>> command = ["python", "-c", "import torch; print(f'This code ran with the following GPU: {torch.cuda.get_device_name()}')"]
>>> create_scheduled_job(image, command, flavor="a10g-small", schedule="@hourly")
```

</ExampleCodeBlock>



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_scheduled_uv_job</name><anchor>huggingface_hub.HfApi.create_scheduled_uv_job</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L10517</source><parameters>[{"name": "script", "val": ": str"}, {"name": "script_args", "val": ": Optional[list[str]] = None"}, {"name": "schedule", "val": ": str"}, {"name": "suspend", "val": ": Optional[bool] = None"}, {"name": "concurrency", "val": ": Optional[bool] = None"}, {"name": "dependencies", "val": ": Optional[list[str]] = None"}, {"name": "python", "val": ": Optional[str] = None"}, {"name": "image", "val": ": Optional[str] = None"}, {"name": "env", "val": ": Optional[dict[str, Any]] = None"}, {"name": "secrets", "val": ": Optional[dict[str, Any]] = None"}, {"name": "flavor", "val": ": Optional[SpaceHardware] = None"}, {"name": "timeout", "val": ": Optional[Union[int, float, str]] = None"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "_repo", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **script** (`str`) --
  Path or URL of the UV script, or a command.

- **script_args** (`list[str]`, *optional*) --
  Arguments to pass to the script, or a command.

- **schedule** (`str`) --
  One of "@annually", "@yearly", "@monthly", "@weekly", "@daily", "@hourly", or a
  CRON schedule expression (e.g., '0 9 * * 1' for 9 AM every Monday).

- **suspend** (`bool`, *optional*) --
  If `True`, the scheduled Job is suspended (paused). Defaults to `False`.

- **concurrency** (`bool`, *optional*) --
  If `True`, multiple instances of this Job can run concurrently. Defaults to `False`.

- **dependencies** (`list[str]`, *optional*) --
  Dependencies to use to run the UV script.

- **python** (`str`, *optional*) --
  Use a specific Python version. Default is 3.12.

- **image** (`str`, *optional*, defaults to `"ghcr.io/astral-sh/uv:python3.12-bookworm"`) --
  Use a custom Docker image with `uv` installed.

- **env** (`dict[str, Any]`, *optional*) --
  Defines the environment variables for the Job.

- **secrets** (`dict[str, Any]`, *optional*) --
  Defines the secret environment variables for the Job.

- **flavor** (`str`, *optional*) --
  Flavor for the hardware, as in Hugging Face Spaces. See [SpaceHardware](/docs/huggingface_hub/main/en/package_reference/space_runtime#huggingface_hub.SpaceHardware) for possible values.
  Defaults to `"cpu-basic"`.

- **timeout** (`Union[int, float, str]`, *optional*) --
  Max duration for the Job: int/float with s (seconds, default), m (minutes), h (hours) or d (days).
  Example: `300` or `"5m"` for 5 minutes.

- **namespace** (`str`, *optional*) --
  The namespace where the Job will be created. Defaults to the current user's namespace.

- **token** (`Union[bool, str, None]`, *optional*) --
  A valid user access token. If not provided, the locally saved token will be used, which is the
  recommended authentication method. Set to `False` to disable authentication.
  Refer to: https://huggingface.co/docs/huggingface_hub/quick-start#authentication.</paramsdesc><paramgroups>0</paramgroups></docstring>

Schedule a UV script Job on Hugging Face infrastructure.



Example:

<ExampleCodeBlock anchor="huggingface_hub.HfApi.create_scheduled_uv_job.example">

Schedule a script from a URL:

```python
>>> from huggingface_hub import create_scheduled_uv_job
>>> script = "https://raw.githubusercontent.com/huggingface/trl/refs/heads/main/trl/scripts/sft.py"
>>> script_args = ["--model_name_or_path", "Qwen/Qwen2-0.5B", "--dataset_name", "trl-lib/Capybara", "--push_to_hub"]
>>> create_scheduled_uv_job(script, script_args=script_args, dependencies=["trl"], flavor="a10g-small", schedule="@weekly")
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.HfApi.create_scheduled_uv_job.example-2">

Schedule a local script:

```python
>>> from huggingface_hub import create_scheduled_uv_job
>>> script = "my_sft.py"
>>> script_args = ["--model_name_or_path", "Qwen/Qwen2-0.5B", "--dataset_name", "trl-lib/Capybara", "--push_to_hub"]
>>> create_scheduled_uv_job(script, script_args=script_args, dependencies=["trl"], flavor="a10g-small", schedule="@weekly")
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.HfApi.create_scheduled_uv_job.example-3">

Schedule a command:

```python
>>> from huggingface_hub import create_scheduled_uv_job
>>> script = "lighteval"
>>> script_args = ["endpoint", "inference-providers", "model_name=openai/gpt-oss-20b,provider=auto", "lighteval|gsm8k|0|0"]
>>> create_scheduled_uv_job(script, script_args=script_args, dependencies=["lighteval"], flavor="a10g-small", schedule="@weekly")
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_tag</name><anchor>huggingface_hub.HfApi.create_tag</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L5864</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "tag", "val": ": str"}, {"name": "tag_message", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "exist_ok", "val": ": bool = False"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The repository in which a commit will be tagged.
  Example: `"user/my-cool-model"`.

- **tag** (`str`) --
  The name of the tag to create.

- **tag_message** (`str`, *optional*) --
  The description of the tag to create.

- **revision** (`str`, *optional*) --
  The git revision to tag. It can be a branch name or the OID/SHA of a
  commit, as a hexadecimal string. Shorthands (7 first characters) are
  also supported. Defaults to the head of the `"main"` branch.

- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.

- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if tagging a dataset or
  space, `None` or `"model"` if tagging a model. Default is
  `None`.

- **exist_ok** (`bool`, *optional*, defaults to `False`) --
  If `True`, do not raise an error if tag already exists.</paramsdesc><paramgroups>0</paramgroups><raises>- [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  If repository is not found (error 404): wrong repo_id/repo_type, private
  but not authenticated or repo does not exist.
- [RevisionNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError) -- 
  If revision is not found (error 404) on the repo.
- [HfHubHTTPError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.HfHubHTTPError) -- 
  If the branch already exists on the repo (error 409) and `exist_ok` is
  set to `False`.</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) or [RevisionNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError) or [HfHubHTTPError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.HfHubHTTPError)</raisederrors></docstring>

Tag a given commit of a repo on the Hub.
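
For example (the repo id and short commit SHA are placeholders):

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi()
# Tag the current head of the main branch
>>> api.create_tag("username/my-model", tag="v1.0", tag_message="First stable release")
# Tag a specific commit, ignoring the error if the tag already exists
>>> api.create_tag("username/my-model", tag="v1.0", revision="1234567", exist_ok=True)
```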








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_webhook</name><anchor>huggingface_hub.HfApi.create_webhook</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L9032</source><parameters>[{"name": "url", "val": ": Optional[str] = None"}, {"name": "job_id", "val": ": Optional[str] = None"}, {"name": "watched", "val": ": list[Union[dict, WebhookWatchedItem]]"}, {"name": "domains", "val": ": Optional[list[constants.WEBHOOK_DOMAIN_T]] = None"}, {"name": "secret", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **url** (`str`) --
  URL to send the payload to.
- **job_id** (`str`) --
  ID of the source Job to trigger with the webhook payload in the environment variable WEBHOOK_PAYLOAD.
  Additional environment variables are available for convenience: WEBHOOK_REPO_ID, WEBHOOK_REPO_TYPE and WEBHOOK_SECRET.
- **watched** (`list[WebhookWatchedItem]`) --
  List of [WebhookWatchedItem](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.WebhookWatchedItem) to be watched by the webhook. It can be users, orgs, models, datasets or spaces.
  Watched items can also be provided as plain dictionaries.
- **domains** (`list[Literal["repo", "discussion"]]`, optional) --
  List of domains to watch. It can be "repo", "discussion" or both.
- **secret** (`str`, optional) --
  A secret to sign the payload with.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved token, which is the recommended
  method for authentication (see https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[WebhookInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.WebhookInfo)</rettype><retdesc>Info about the newly created webhook.</retdesc></docstring>
Create a new webhook.

The webhook can either send a payload to a URL, or trigger a Job to run on Hugging Face infrastructure.
This function should be called with one of `url` or `job_id`, but not both.







Example:

Create a webhook that sends a payload to a URL
<ExampleCodeBlock anchor="huggingface_hub.HfApi.create_webhook.example">

```python
>>> from huggingface_hub import create_webhook
>>> payload = create_webhook(
...     watched=[{"type": "user", "name": "julien-c"}, {"type": "org", "name": "HuggingFaceH4"}],
...     url="https://webhook.site/a2176e82-5720-43ee-9e06-f91cb4c91548",
...     domains=["repo", "discussion"],
...     secret="my-secret",
... )
>>> print(payload)
WebhookInfo(
    id="654bbbc16f2ec14d77f109cc",
    url="https://webhook.site/a2176e82-5720-43ee-9e06-f91cb4c91548",
    job=None,
    watched=[WebhookWatchedItem(type="user", name="julien-c"), WebhookWatchedItem(type="org", name="HuggingFaceH4")],
    domains=["repo", "discussion"],
    secret="my-secret",
    disabled=False,
)
```

</ExampleCodeBlock>

Run a Job and then create a webhook that triggers this Job
<ExampleCodeBlock anchor="huggingface_hub.HfApi.create_webhook.example-2">

```python
>>> from huggingface_hub import create_webhook, run_job
>>> job = run_job(
...     image="ubuntu",
...     command=["bash", "-c", r"echo An event occurred in $WEBHOOK_REPO_ID: $WEBHOOK_PAYLOAD"],
... )
>>> payload = create_webhook(
...     watched=[{"type": "user", "name": "julien-c"}, {"type": "org", "name": "HuggingFaceH4"}],
...     job_id=job.id,
...     domains=["repo", "discussion"],
...     secret="my-secret",
... )
>>> print(payload)
WebhookInfo(
    id="654bbbc16f2ec14d77f109cc",
    url=None,
    job=JobSpec(
        docker_image='ubuntu',
        space_id=None,
        command=['bash', '-c', 'echo An event occurred in $WEBHOOK_REPO_ID: $WEBHOOK_PAYLOAD'],
        arguments=[],
        environment={},
        secrets=[],
        flavor='cpu-basic',
        timeout=None,
        tags=None,
        arch=None
    ),
    watched=[WebhookWatchedItem(type="user", name="julien-c"), WebhookWatchedItem(type="org", name="HuggingFaceH4")],
    domains=["repo", "discussion"],
    secret="my-secret",
    disabled=False,
)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>dataset_info</name><anchor>huggingface_hub.HfApi.dataset_info</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L2556</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "timeout", "val": ": Optional[float] = None"}, {"name": "files_metadata", "val": ": bool = False"}, {"name": "expand", "val": ": Optional[list[ExpandDatasetProperty_T]] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **revision** (`str`, *optional*) --
  The revision of the dataset repository from which to get the
  information.
- **timeout** (`float`, *optional*) --
  Timeout in seconds for the request to the Hub.
- **files_metadata** (`bool`, *optional*) --
  Whether or not to retrieve metadata for files in the repository
  (size, LFS metadata, etc). Defaults to `False`.
- **expand** (`list[ExpandDatasetProperty_T]`, *optional*) --
  List properties to return in the response. When used, only the properties in the list will be returned.
  This parameter cannot be used if `files_metadata` is passed.
  Possible values are `"author"`, `"cardData"`, `"citation"`, `"createdAt"`, `"disabled"`, `"description"`, `"downloads"`, `"downloadsAllTime"`, `"gated"`, `"lastModified"`, `"likes"`, `"paperswithcode_id"`, `"private"`, `"siblings"`, `"sha"`, `"tags"`, `"trendingScore"`, `"usedStorage"`, and `"resourceGroup"`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[hf_api.DatasetInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.DatasetInfo)</rettype><retdesc>The dataset repository information.</retdesc></docstring>

Get info on one specific dataset on huggingface.co.

The dataset can be private if you pass a valid token.







> [!TIP]
> Raises the following errors:
>
>     - [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)
>       If the repository to download from cannot be found. This may be because it doesn't exist,
>       or because it is set to `private` and you do not have access.
>     - [RevisionNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError)
>       If the revision to download from cannot be found.
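For instance (a hedged sketch; the dataset id is only illustrative), `expand` can limit the response to a few properties:

```python
>>> from huggingface_hub import dataset_info

# Fetch full info
>>> info = dataset_info("HuggingFaceH4/ultrachat_200k")

# Fetch only selected properties (cannot be combined with `files_metadata=True`)
>>> info = dataset_info("HuggingFaceH4/ultrachat_200k", expand=["downloads", "lastModified"])
>>> info.downloads, info.last_modified
```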


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_branch</name><anchor>huggingface_hub.HfApi.delete_branch</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L5812</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "branch", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The repository in which a branch will be deleted.
  Example: `"user/my-cool-model"`.

- **branch** (`str`) --
  The name of the branch to delete.

- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.

- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if deleting a branch from a dataset or
  space, `None` or `"model"` if from a model. Default is `None`.
  If the repository is not found (error 404): wrong repo_id/repo_type, the repo
  is private but you are not authenticated, or the repo does not exist.
- [HfHubHTTPError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.HfHubHTTPError) -- 
  If trying to delete a protected branch. Ex: `main` cannot be deleted.
- [HfHubHTTPError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.HfHubHTTPError) -- 
  If trying to delete a branch that does not exist.</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) or [HfHubHTTPError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.HfHubHTTPError)</raisederrors></docstring>

Delete a branch from a repo on the Hub.
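Example (a minimal sketch; the repo ids and branch name are placeholders):

```python
>>> from huggingface_hub import create_branch, delete_branch

# Create, then delete, a temporary branch on a model repo
>>> create_branch("username/my-cool-model", branch="experiment")
>>> delete_branch("username/my-cool-model", branch="experiment")

# Delete a branch on a dataset repo
>>> delete_branch("username/my-dataset", branch="experiment", repo_type="dataset")
```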








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_collection</name><anchor>huggingface_hub.HfApi.delete_collection</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L8264</source><parameters>[{"name": "collection_slug", "val": ": str"}, {"name": "missing_ok", "val": ": bool = False"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **collection_slug** (`str`) --
  Slug of the collection to delete. Example: `"TheBloke/recent-models-64f9a55bb3115b4f513ec026"`.
- **missing_ok** (`bool`, *optional*) --
  If `True`, do not raise an error if the collection doesn't exist.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Delete a collection on the Hub.



<ExampleCodeBlock anchor="huggingface_hub.HfApi.delete_collection.example">

Example:

```py
>>> from huggingface_hub import delete_collection
>>> collection = delete_collection("username/useless-collection-64f9a55bb3115b4f513ec026", missing_ok=True)
```

</ExampleCodeBlock>

> [!WARNING]
> This is a non-revertible action. A deleted collection cannot be restored.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_collection_item</name><anchor>huggingface_hub.HfApi.delete_collection_item</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L8437</source><parameters>[{"name": "collection_slug", "val": ": str"}, {"name": "item_object_id", "val": ": str"}, {"name": "missing_ok", "val": ": bool = False"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **collection_slug** (`str`) --
  Slug of the collection to update. Example: `"TheBloke/recent-models-64f9a55bb3115b4f513ec026"`.
- **item_object_id** (`str`) --
  ID of the item in the collection. This is not the id of the item on the Hub (repo_id or paper id).
  It must be retrieved from a [CollectionItem](/docs/huggingface_hub/main/en/package_reference/collections#huggingface_hub.CollectionItem) object. Example: `collection.items[0].item_object_id`.
- **missing_ok** (`bool`, *optional*) --
  If `True`, do not raise an error if the item doesn't exist.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Delete an item from a collection.



<ExampleCodeBlock anchor="huggingface_hub.HfApi.delete_collection_item.example">

Example:

```py
>>> from huggingface_hub import get_collection, delete_collection_item

# Get collection first
>>> collection = get_collection("TheBloke/recent-models-64f9a55bb3115b4f513ec026")

# Delete item based on its ID
>>> delete_collection_item(
...     collection_slug="TheBloke/recent-models-64f9a55bb3115b4f513ec026",
...     item_object_id=collection.items[-1].item_object_id,
... )
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_file</name><anchor>huggingface_hub.HfApi.delete_file</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L4819</source><parameters>[{"name": "path_in_repo", "val": ": str"}, {"name": "repo_id", "val": ": str"}, {"name": "token", "val": ": Union[str, bool, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "commit_message", "val": ": Optional[str] = None"}, {"name": "commit_description", "val": ": Optional[str] = None"}, {"name": "create_pr", "val": ": Optional[bool] = None"}, {"name": "parent_commit", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **path_in_repo** (`str`) --
  Relative filepath in the repo, for example:
  `"checkpoints/1fec34a/weights.bin"`
- **repo_id** (`str`) --
  The repository from which the file will be deleted, for example:
  `"username/custom_transformers"`
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if the file is in a dataset or
  space, `None` or `"model"` if in a model. Default is `None`.
- **revision** (`str`, *optional*) --
  The git revision to commit from. Defaults to the head of the `"main"` branch.
- **commit_message** (`str`, *optional*) --
  The summary / title / first line of the generated commit. Defaults to
  `f"Delete {path_in_repo} with huggingface_hub"`.
- **commit_description** (`str`, *optional*) --
  The description of the generated commit.
- **create_pr** (`boolean`, *optional*) --
  Whether or not to create a Pull Request with that commit. Defaults to `False`.
  If `revision` is not set, PR is opened against the `"main"` branch. If
  `revision` is set and is a branch, PR is opened against this branch. If
  `revision` is set and is not a branch name (example: a commit oid), a
  `RevisionNotFoundError` is returned by the server.
- **parent_commit** (`str`, *optional*) --
  The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported.
  If specified and `create_pr` is `False`, the commit will fail if `revision` does not point to `parent_commit`.
  If specified and `create_pr` is `True`, the pull request will be created from `parent_commit`.
  Specifying `parent_commit` ensures the repo has not changed before committing the changes, and can be
  especially useful if the repo is updated / committed to concurrently.</paramsdesc><paramgroups>0</paramgroups></docstring>

Deletes a file in the given repo.



> [!TIP]
> Raises the following errors:
>
>     - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError)
>       if the HuggingFace API returned an error
>     - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
>       if some parameter value is invalid
>     - [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)
>       If the repository to download from cannot be found. This may be because it doesn't exist,
>       or because it is set to `private` and you do not have access.
>     - [RevisionNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError)
>       If the revision to download from cannot be found.
>     - [EntryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.EntryNotFoundError)
>       If the file to download cannot be found.
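Example (a minimal sketch; the repo id and file path are placeholders):

```python
>>> from huggingface_hub import delete_file

# Delete a single file from the `main` branch
>>> delete_file(path_in_repo="checkpoints/old-weights.bin", repo_id="username/custom_transformers")

# Propose the deletion in a Pull Request instead of committing directly
>>> delete_file(
...     path_in_repo="checkpoints/old-weights.bin",
...     repo_id="username/custom_transformers",
...     create_pr=True,
...     commit_message="Remove obsolete checkpoint",
... )
```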



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_files</name><anchor>huggingface_hub.HfApi.delete_files</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L4906</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "delete_patterns", "val": ": list[str]"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "commit_message", "val": ": Optional[str] = None"}, {"name": "commit_description", "val": ": Optional[str] = None"}, {"name": "create_pr", "val": ": Optional[bool] = None"}, {"name": "parent_commit", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The repository from which the folder will be deleted, for example:
  `"username/custom_transformers"`
- **delete_patterns** (`list[str]`) --
  List of files or folders to delete. Each string can either be
  a file path, a folder path or a Unix shell-style wildcard.
  E.g. `["file.txt", "folder/", "data/*.parquet"]`
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **repo_type** (`str`, *optional*) --
  Type of the repo to delete files from. Can be `"model"`,
  `"dataset"` or `"space"`. Defaults to `"model"`.
- **revision** (`str`, *optional*) --
  The git revision to commit from. Defaults to the head of the `"main"` branch.
- **commit_message** (`str`, *optional*) --
  The summary (first line) of the generated commit. Defaults to
  `f"Delete files using huggingface_hub"`.
- **commit_description** (`str`, *optional*) --
  The description of the generated commit.
- **create_pr** (`boolean`, *optional*) --
  Whether or not to create a Pull Request with that commit. Defaults to `False`.
  If `revision` is not set, PR is opened against the `"main"` branch. If
  `revision` is set and is a branch, PR is opened against this branch. If
  `revision` is set and is not a branch name (example: a commit oid), a
  `RevisionNotFoundError` is returned by the server.
- **parent_commit** (`str`, *optional*) --
  The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported.
  If specified and `create_pr` is `False`, the commit will fail if `revision` does not point to `parent_commit`.
  If specified and `create_pr` is `True`, the pull request will be created from `parent_commit`.
  Specifying `parent_commit` ensures the repo has not changed before committing the changes, and can be
  especially useful if the repo is updated / committed to concurrently.</paramsdesc><paramgroups>0</paramgroups></docstring>

Delete files from a repository on the Hub.

If a folder path is provided, the entire folder and all the files it
contains are deleted.
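The wildcards accepted by `delete_patterns` are standard Unix shell-style globs. As a standalone sketch (assuming `fnmatch`-style matching; the paths are only illustrative), you can preview locally which paths a pattern would select before running a deletion:

```python
from fnmatch import fnmatch

# Hypothetical repo contents
paths = [
    "file.txt",
    "data/train.parquet",
    "data/test.parquet",
    "folder/notes.md",
]

# `data/*.parquet` matches parquet files under `data/`
matched = [p for p in paths if fnmatch(p, "data/*.parquet")]
print(matched)  # ['data/train.parquet', 'data/test.parquet']
```

Note that `*` in `fnmatch` also matches `/`, so a pattern like `data/*` would match nested paths as well.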




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_folder</name><anchor>huggingface_hub.HfApi.delete_folder</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L4982</source><parameters>[{"name": "path_in_repo", "val": ": str"}, {"name": "repo_id", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "commit_message", "val": ": Optional[str] = None"}, {"name": "commit_description", "val": ": Optional[str] = None"}, {"name": "create_pr", "val": ": Optional[bool] = None"}, {"name": "parent_commit", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **path_in_repo** (`str`) --
  Relative folder path in the repo, for example: `"checkpoints/1fec34a"`.
- **repo_id** (`str`) --
  The repository from which the folder will be deleted, for example:
  `"username/custom_transformers"`
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if the folder is in a dataset or
  space, `None` or `"model"` if in a model. Default is `None`.
- **revision** (`str`, *optional*) --
  The git revision to commit from. Defaults to the head of the `"main"` branch.
- **commit_message** (`str`, *optional*) --
  The summary / title / first line of the generated commit. Defaults to
  `f"Delete folder {path_in_repo} with huggingface_hub"`.
- **commit_description** (`str`, *optional*) --
  The description of the generated commit.
- **create_pr** (`boolean`, *optional*) --
  Whether or not to create a Pull Request with that commit. Defaults to `False`.
  If `revision` is not set, PR is opened against the `"main"` branch. If
  `revision` is set and is a branch, PR is opened against this branch. If
  `revision` is set and is not a branch name (example: a commit oid), a
  `RevisionNotFoundError` is returned by the server.
- **parent_commit** (`str`, *optional*) --
  The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported.
  If specified and `create_pr` is `False`, the commit will fail if `revision` does not point to `parent_commit`.
  If specified and `create_pr` is `True`, the pull request will be created from `parent_commit`.
  Specifying `parent_commit` ensures the repo has not changed before committing the changes, and can be
  especially useful if the repo is updated / committed to concurrently.</paramsdesc><paramgroups>0</paramgroups></docstring>

Deletes a folder in the given repo.

A simple wrapper around the [create_commit()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_commit) method.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_inference_endpoint</name><anchor>huggingface_hub.HfApi.delete_inference_endpoint</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L7875</source><parameters>[{"name": "name", "val": ": str"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **name** (`str`) --
  The name of the Inference Endpoint to delete.
- **namespace** (`str`, *optional*) --
  The namespace in which the Inference Endpoint is located. Defaults to the current user.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Delete an Inference Endpoint.

This operation is not reversible. If you don't want to be charged for an Inference Endpoint, it is preferable
to pause it with [pause_inference_endpoint()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.pause_inference_endpoint) or scale it to zero with [scale_to_zero_inference_endpoint()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.scale_to_zero_inference_endpoint).

For convenience, you can also delete an Inference Endpoint using [InferenceEndpoint.delete()](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.delete).
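Example (a minimal sketch; the endpoint and namespace names are placeholders):

```python
>>> from huggingface_hub import pause_inference_endpoint, delete_inference_endpoint

# Pause first if you may need the endpoint again...
>>> pause_inference_endpoint("my-endpoint-name")

# ...or delete it permanently
>>> delete_inference_endpoint("my-endpoint-name")

# Delete an endpoint owned by an organization
>>> delete_inference_endpoint("my-endpoint-name", namespace="my-org")
```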




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_repo</name><anchor>huggingface_hub.HfApi.delete_repo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L3739</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "token", "val": ": Union[str, bool, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "missing_ok", "val": ": bool = False"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if deleting a dataset or
  space, `None` or `"model"` if deleting a model.
- **missing_ok** (`bool`, *optional*, defaults to `False`) --
  If `True`, do not raise an error if repo does not exist.</paramsdesc><paramgroups>0</paramgroups><raises>- [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  If the repository to delete from cannot be found and `missing_ok` is set to False (default).</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)</raisederrors></docstring>

Delete a repo from the Hugging Face Hub. CAUTION: this is irreversible.
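Example (a minimal sketch; the repo ids are placeholders):

```python
>>> from huggingface_hub import delete_repo

# Delete a model repo
>>> delete_repo(repo_id="username/my-model")

# Delete a dataset; `missing_ok=True` silences RepositoryNotFoundError if it is already gone
>>> delete_repo(repo_id="username/my-dataset", repo_type="dataset", missing_ok=True)
```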








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_scheduled_job</name><anchor>huggingface_hub.HfApi.delete_scheduled_job</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L10429</source><parameters>[{"name": "scheduled_job_id", "val": ": str"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **scheduled_job_id** (`str`) --
  ID of the scheduled Job.

- **namespace** (`str`, *optional*) --
  The namespace where the scheduled Job is. Defaults to the current user's namespace.

- **token** (`Union[bool, str, None]`, *optional*) --
  A valid user access token. If not provided, the locally saved token will be used, which is the
  recommended authentication method. Set to `False` to disable authentication.
  Refer to: https://huggingface.co/docs/huggingface_hub/quick-start#authentication.</paramsdesc><paramgroups>0</paramgroups></docstring>

Delete a scheduled compute Job on Hugging Face infrastructure.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_space_secret</name><anchor>huggingface_hub.HfApi.delete_space_secret</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6810</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "key", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  ID of the repo to update. Example: `"bigcode/in-the-stack"`.
- **key** (`str`) --
  Secret key. Example: `"GITHUB_API_KEY"`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Deletes a secret from a Space.

Secrets let you add secret keys or tokens to a Space without hardcoding them.
For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets.
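Example (a minimal sketch; the Space id, key, and value are placeholders):

```python
>>> from huggingface_hub import add_space_secret, delete_space_secret

# Rotate a secret: add the new value, then delete the old key if no longer needed
>>> add_space_secret("username/my-space", key="GITHUB_API_KEY", value="my-api-key")
>>> delete_space_secret("username/my-space", key="GITHUB_API_KEY")
```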




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_space_storage</name><anchor>huggingface_hub.HfApi.delete_space_storage</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L7287</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  ID of the Space to update. Example: `"open-llm-leaderboard/open_llm_leaderboard"`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[SpaceRuntime](/docs/huggingface_hub/main/en/package_reference/space_runtime#huggingface_hub.SpaceRuntime)</rettype><retdesc>Runtime information about a Space including Space stage and hardware.</retdesc><raises>- `BadRequestError` -- 
  If the Space has no persistent storage.</raises><raisederrors>`BadRequestError`</raisederrors></docstring>
Delete persistent storage for a Space.












</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_space_variable</name><anchor>huggingface_hub.HfApi.delete_space_variable</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6900</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "key", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  ID of the repo to update. Example: `"bigcode/in-the-stack"`.
- **key** (`str`) --
  Variable key. Example: `"MODEL_REPO_ID"`
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Deletes a variable from a Space.

Variables let you set environment variables on a Space without hardcoding them.
For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets-and-environment-variables




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_tag</name><anchor>huggingface_hub.HfApi.delete_tag</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L5938</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "tag", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The repository in which a tag will be deleted.
  Example: `"user/my-cool-model"`.

- **tag** (`str`) --
  The name of the tag to delete.

- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.

- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if tagging a dataset or space, `None` or
  `"model"` if tagging a model. Default is `None`.</paramsdesc><paramgroups>0</paramgroups><raises>- [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  If the repository is not found (error 404): wrong repo_id/repo_type, the repo
  is private but you are not authenticated, or the repo does not exist.
- [RevisionNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError) -- 
  If tag is not found.</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) or [RevisionNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError)</raisederrors></docstring>

Delete a tag from a repo on the Hub.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_webhook</name><anchor>huggingface_hub.HfApi.delete_webhook</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L9349</source><parameters>[{"name": "webhook_id", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **webhook_id** (`str`) --
  The unique identifier of the webhook to delete.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved token, which is the recommended
  method for authentication (see https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`None`</rettype></docstring>
Delete a webhook.







<ExampleCodeBlock anchor="huggingface_hub.HfApi.delete_webhook.example">

Example:
```python
>>> from huggingface_hub import delete_webhook
>>> delete_webhook("654bbbc16f2ec14d77f109cc")
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_webhook</name><anchor>huggingface_hub.HfApi.disable_webhook</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L9296</source><parameters>[{"name": "webhook_id", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **webhook_id** (`str`) --
  The unique identifier of the webhook to disable.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved token, which is the recommended
  method for authentication (see https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[WebhookInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.WebhookInfo)</rettype><retdesc>Info about the disabled webhook.</retdesc></docstring>
Disable a webhook (sets its status to "disabled").







<ExampleCodeBlock anchor="huggingface_hub.HfApi.disable_webhook.example">

Example:
```python
>>> from huggingface_hub import disable_webhook
>>> disabled_webhook = disable_webhook("654bbbc16f2ec14d77f109cc")
>>> disabled_webhook
WebhookInfo(
    id="654bbbc16f2ec14d77f109cc",
    url="https://webhook.site/a2176e82-5720-43ee-9e06-f91cb4c91548",
    job=None,
    watched=[WebhookWatchedItem(type="user", name="julien-c"), WebhookWatchedItem(type="org", name="HuggingFaceH4")],
    domains=["repo", "discussion"],
    secret="my-secret",
    disabled=True,
)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>duplicate_space</name><anchor>huggingface_hub.HfApi.duplicate_space</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L7133</source><parameters>[{"name": "from_id", "val": ": str"}, {"name": "to_id", "val": ": Optional[str] = None"}, {"name": "private", "val": ": Optional[bool] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "exist_ok", "val": ": bool = False"}, {"name": "hardware", "val": ": Optional[SpaceHardware] = None"}, {"name": "storage", "val": ": Optional[SpaceStorage] = None"}, {"name": "sleep_time", "val": ": Optional[int] = None"}, {"name": "secrets", "val": ": Optional[list[dict[str, str]]] = None"}, {"name": "variables", "val": ": Optional[list[dict[str, str]]] = None"}]</parameters><paramsdesc>- **from_id** (`str`) --
  ID of the Space to duplicate. Example: `"pharma/CLIP-Interrogator"`.
- **to_id** (`str`, *optional*) --
  ID of the new Space. Example: `"dog/CLIP-Interrogator"`. If not provided, the new Space will have the same
  name as the original Space, but in your account.
- **private** (`bool`, *optional*) --
  Whether the new Space should be private or not. Defaults to the same privacy as the original Space.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **exist_ok** (`bool`, *optional*, defaults to `False`) --
  If `True`, do not raise an error if repo already exists.
- **hardware** (`SpaceHardware` or `str`, *optional*) --
  Choice of Hardware. Example: `"t4-medium"`. See [SpaceHardware](/docs/huggingface_hub/main/en/package_reference/space_runtime#huggingface_hub.SpaceHardware) for a complete list.
- **storage** (`SpaceStorage` or `str`, *optional*) --
  Choice of persistent storage tier. Example: `"small"`. See [SpaceStorage](/docs/huggingface_hub/main/en/package_reference/space_runtime#huggingface_hub.SpaceStorage) for a complete list.
- **sleep_time** (`int`, *optional*) --
  Number of seconds of inactivity to wait before a Space is put to sleep. Set to `-1` if you don't want
  your Space to sleep (default behavior for upgraded hardware). For free hardware, you can't configure
  the sleep time (value is fixed to 48 hours of inactivity).
  See https://huggingface.co/docs/hub/spaces-gpus#sleep-time for more details.
- **secrets** (`list[dict[str, str]]`, *optional*) --
  A list of secret keys to set in your Space. Each item is in the form `{"key": ..., "value": ..., "description": ...}` where description is optional.
  For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets.
- **variables** (`list[dict[str, str]]`, *optional*) --
  A list of public environment variables to set in your Space. Each item is in the form `{"key": ..., "value": ..., "description": ...}` where description is optional.
  For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets-and-environment-variables.</paramsdesc><paramgroups>0</paramgroups><rettype>[RepoUrl](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.RepoUrl)</rettype><retdesc>URL to the newly created repo. Value is a subclass of `str` containing
attributes like `endpoint`, `repo_type` and `repo_id`.</retdesc><raises>- [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  If one of `from_id` or `to_id` cannot be found. This may be because it doesn't exist,
  or because it is set to `private` and you do not have access.
- `HfHubHTTPError` -- 
  If the HuggingFace API returned an error</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) or `HfHubHTTPError`</raisederrors></docstring>
Duplicate a Space.

Programmatically duplicate a Space. The new Space will be created in your account and will be in the same state
as the original Space (running or paused). You can duplicate a Space regardless of its current state.











<ExampleCodeBlock anchor="huggingface_hub.HfApi.duplicate_space.example">

Example:
```python
>>> from huggingface_hub import duplicate_space

# Duplicate a Space to your account
>>> duplicate_space("multimodalart/dreambooth-training")
RepoUrl('https://huggingface.co/spaces/nateraw/dreambooth-training',...)

# Can set custom destination id and visibility flag.
>>> duplicate_space("multimodalart/dreambooth-training", to_id="my-dreambooth", private=True)
RepoUrl('https://huggingface.co/spaces/nateraw/my-dreambooth',...)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>edit_discussion_comment</name><anchor>huggingface_hub.HfApi.edit_discussion_comment</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6653</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "discussion_num", "val": ": int"}, {"name": "comment_id", "val": ": str"}, {"name": "new_content", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **discussion_num** (`int`) --
  The number of the Discussion or Pull Request. Must be a strictly positive integer.
- **comment_id** (`str`) --
  The ID of the comment to edit.
- **new_content** (`str`) --
  The new content of the comment. Comments support markdown formatting.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if the comment is on a dataset or
  space, `None` or `"model"` if it is on a model. Default is
  `None`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[DiscussionComment](/docs/huggingface_hub/main/en/package_reference/community#huggingface_hub.DiscussionComment)</rettype><retdesc>the edited comment</retdesc></docstring>
Edits a comment on a Discussion / Pull Request.







> [!TIP]
> Raises the following errors:
>
>     - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError)
>       if the HuggingFace API returned an error
>     - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
>       if some parameter value is invalid
>     - [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)
>       If the repository to download from cannot be found. This may be because it doesn't exist,
>       or because it is set to `private` and you do not have access.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_webhook</name><anchor>huggingface_hub.HfApi.enable_webhook</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L9243</source><parameters>[{"name": "webhook_id", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **webhook_id** (`str`) --
  The unique identifier of the webhook to enable.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved token, which is the recommended
  method for authentication (see https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[WebhookInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.WebhookInfo)</rettype><retdesc>Info about the enabled webhook.</retdesc></docstring>
Enable a webhook (sets its status to "active").







<ExampleCodeBlock anchor="huggingface_hub.HfApi.enable_webhook.example">

Example:
```python
>>> from huggingface_hub import enable_webhook
>>> enabled_webhook = enable_webhook("654bbbc16f2ec14d77f109cc")
>>> enabled_webhook
WebhookInfo(
    id="654bbbc16f2ec14d77f109cc",
    job=None,
    url="https://webhook.site/a2176e82-5720-43ee-9e06-f91cb4c91548",
    watched=[WebhookWatchedItem(type="user", name="julien-c"), WebhookWatchedItem(type="org", name="HuggingFaceH4")],
    domains=["repo", "discussion"],
    secret="my-secret",
    disabled=False,
)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fetch_job_logs</name><anchor>huggingface_hub.HfApi.fetch_job_logs</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L9927</source><parameters>[{"name": "job_id", "val": ": str"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **job_id** (`str`) --
  ID of the Job.

- **namespace** (`str`, *optional*) --
  The namespace where the Job is running. Defaults to the current user's namespace.

- **token** (`Union[bool, str, None]`, *optional*) --
  A valid user access token. If not provided, the locally saved token will be used, which is the
  recommended authentication method. Set to `False` to disable authentication.
  Refer to: https://huggingface.co/docs/huggingface_hub/quick-start#authentication.</paramsdesc><paramgroups>0</paramgroups></docstring>

Fetch all the logs from a compute Job on Hugging Face infrastructure.



<ExampleCodeBlock anchor="huggingface_hub.HfApi.fetch_job_logs.example">

Example:

```python
>>> from huggingface_hub import fetch_job_logs, run_job
>>> job = run_job(image="python:3.12", command=["python", "-c" ,"print('Hello from HF compute!')"])
>>> for log in fetch_job_logs(job.id):
...     print(log)
Hello from HF compute!
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>file_exists</name><anchor>huggingface_hub.HfApi.file_exists</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L2858</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "filename", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[str, bool, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **filename** (`str`) --
  The name of the file to check, for example:
  `"config.json"`
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if getting repository info from a dataset or a space,
  `None` or `"model"` if getting repository info from a model. Default is `None`.
- **revision** (`str`, *optional*) --
  The revision of the repository from which to get the information. Defaults to `"main"` branch.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><retdesc>True if the file exists, False otherwise.</retdesc></docstring>

Checks if a file exists in a repository on the Hugging Face Hub.





<ExampleCodeBlock anchor="huggingface_hub.HfApi.file_exists.example">

Examples:
```py
>>> from huggingface_hub import file_exists
>>> file_exists("bigcode/starcoder", "config.json")
True
>>> file_exists("bigcode/starcoder", "not-a-file")
False
>>> file_exists("bigcode/not-a-repo", "config.json")
False
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_collection</name><anchor>huggingface_hub.HfApi.get_collection</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L8089</source><parameters>[{"name": "collection_slug", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **collection_slug** (`str`) --
  Slug of the collection of the Hub. Example: `"TheBloke/recent-models-64f9a55bb3115b4f513ec026"`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Gets information about a Collection on the Hub.



Returns: [Collection](/docs/huggingface_hub/main/en/package_reference/collections#huggingface_hub.Collection)

<ExampleCodeBlock anchor="huggingface_hub.HfApi.get_collection.example">

Example:

```py
>>> from huggingface_hub import get_collection
>>> collection = get_collection("TheBloke/recent-models-64f9a55bb3115b4f513ec026")
>>> collection.title
'Recent models'
>>> len(collection.items)
37
>>> collection.items[0]
CollectionItem(
    item_object_id='651446103cd773a050bf64c2',
    item_id='TheBloke/U-Amethyst-20B-AWQ',
    item_type='model',
    position=88,
    note=None
)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_dataset_tags</name><anchor>huggingface_hub.HfApi.get_dataset_tags</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L1783</source><parameters>[]</parameters></docstring>

List all valid dataset tags as a nested namespace object.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_discussion_details</name><anchor>huggingface_hub.HfApi.get_discussion_details</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6133</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "discussion_num", "val": ": int"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **discussion_num** (`int`) --
  The number of the Discussion or Pull Request. Must be a strictly positive integer.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if fetching from a dataset or
  space, `None` or `"model"` if fetching from a model. Default is
  `None`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Fetches the details of a Discussion or Pull Request from the Hub.



Returns: [DiscussionWithDetails](/docs/huggingface_hub/main/en/package_reference/community#huggingface_hub.DiscussionWithDetails)

> [!TIP]
> Raises the following errors:
>
>     - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError)
>       if the HuggingFace API returned an error
>     - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
>       if some parameter value is invalid
>     - [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)
>       If the repository to download from cannot be found. This may be because it doesn't exist,
>       or because it is set to `private` and you do not have access.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_full_repo_name</name><anchor>huggingface_hub.HfApi.get_full_repo_name</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L5987</source><parameters>[{"name": "model_id", "val": ": str"}, {"name": "organization", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **model_id** (`str`) --
  The name of the model.
- **organization** (`str`, *optional*) --
  If passed, the repository name will be in the organization
  namespace instead of the user namespace.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`str`</rettype><retdesc>The repository name in the user's namespace
({username}/{model_id}) if no organization is passed, and under the
organization namespace ({organization}/{model_id}) otherwise.</retdesc></docstring>

Returns the repository name for a given model ID and optional
organization.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_hf_file_metadata</name><anchor>huggingface_hub.HfApi.get_hf_file_metadata</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L5166</source><parameters>[{"name": "url", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "timeout", "val": ": Optional[float] = 10"}]</parameters><paramsdesc>- **url** (`str`) --
  File url, for example returned by [hf_hub_url()](/docs/huggingface_hub/main/en/package_reference/file_download#huggingface_hub.hf_hub_url).
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **timeout** (`float`, *optional*, defaults to 10) --
  How many seconds to wait for the server to send metadata before giving up.</paramsdesc><paramgroups>0</paramgroups><retdesc>A [HfFileMetadata](/docs/huggingface_hub/main/en/package_reference/file_download#huggingface_hub.HfFileMetadata) object containing metadata such as location, etag, size and commit_hash.</retdesc></docstring>
Fetch metadata of a file versioned on the Hub for a given url.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_inference_endpoint</name><anchor>huggingface_hub.HfApi.get_inference_endpoint</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L7691</source><parameters>[{"name": "name", "val": ": str"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **name** (`str`) --
  The name of the Inference Endpoint to retrieve information about.
- **namespace** (`str`, *optional*) --
  The namespace in which the Inference Endpoint is located. Defaults to the current user.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[InferenceEndpoint](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint)</rettype><retdesc>information about the requested Inference Endpoint.</retdesc></docstring>
Get information about an Inference Endpoint.







<ExampleCodeBlock anchor="huggingface_hub.HfApi.get_inference_endpoint.example">

Example:
```python
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> endpoint = api.get_inference_endpoint("my-text-to-image")
>>> endpoint
InferenceEndpoint(name='my-text-to-image', ...)

# Get status
>>> endpoint.status
'running'
>>> endpoint.url
'https://my-text-to-image.region.vendor.endpoints.huggingface.cloud'

# Run inference
>>> endpoint.client.text_to_image(...)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_model_tags</name><anchor>huggingface_hub.HfApi.get_model_tags</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L1774</source><parameters>[]</parameters></docstring>

List all valid model tags as a nested namespace object.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_organization_overview</name><anchor>huggingface_hub.HfApi.get_organization_overview</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L9574</source><parameters>[{"name": "organization", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **organization** (`str`) --
  Name of the organization to get an overview of.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved token, which is the recommended method
  for authentication (see https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`Organization`</rettype><retdesc>An `Organization` object with the organization's overview.</retdesc><raises>- [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError) -- 
  HTTP 404 If the organization does not exist on the Hub.</raises><raisederrors>``HTTPError``</raisederrors></docstring>

Get an overview of an organization on the Hub.












</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_paths_info</name><anchor>huggingface_hub.HfApi.get_paths_info</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L3316</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "paths", "val": ": Union[list[str], str]"}, {"name": "expand", "val": ": bool = False"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[str, bool, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated by a `/`.
- **paths** (`Union[list[str], str]`, *optional*) --
  The paths to get information about. If a path does not exist, it is ignored without raising
  an exception.
- **expand** (`bool`, *optional*, defaults to `False`) --
  Whether to fetch more information about the paths (e.g. last commit and files' security scan results). This
  operation is more expensive for the server so only 50 results are returned per page (instead of 1000).
  As pagination is implemented in `huggingface_hub`, this is transparent to you except for the time it
  takes to get the results.
- **revision** (`str`, *optional*) --
  The revision of the repository from which to get the information. Defaults to `"main"` branch.
- **repo_type** (`str`, *optional*) --
  The type of the repository from which to get the information (`"model"`, `"dataset"` or `"space"`).
  Defaults to `"model"`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[Union[RepoFile, RepoFolder]]`</rettype><retdesc>The information about the paths, as a list of `RepoFile` and `RepoFolder` objects.</retdesc><raises>- [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  If the repository is not found (error 404): wrong repo_id/repo_type, the repo is private and you are
  not authenticated, or it does not exist.
- [RevisionNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError) -- 
  If revision is not found (error 404) on the repo.</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) or [RevisionNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError)</raisederrors></docstring>

Get information about a repo's paths.











<ExampleCodeBlock anchor="huggingface_hub.HfApi.get_paths_info.example">

Example:
```py
>>> from huggingface_hub import get_paths_info
>>> paths_info = get_paths_info("allenai/c4", ["README.md", "en"], repo_type="dataset")
>>> paths_info
[
    RepoFile(path='README.md', size=2379, blob_id='f84cb4c97182890fc1dbdeaf1a6a468fd27b4fff', lfs=None, last_commit=None, security=None),
    RepoFolder(path='en', tree_id='dc943c4c40f53d02b31ced1defa7e5f438d5862e', last_commit=None)
]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_repo_discussions</name><anchor>huggingface_hub.HfApi.get_repo_discussions</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6025</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "author", "val": ": Optional[str] = None"}, {"name": "discussion_type", "val": ": Optional[constants.DiscussionTypeFilter] = None"}, {"name": "discussion_status", "val": ": Optional[constants.DiscussionStatusFilter] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **author** (`str`, *optional*) --
  Pass a value to filter by discussion author. `None` means no filter.
  Default is `None`.
- **discussion_type** (`str`, *optional*) --
  Set to `"pull_request"` to fetch only pull requests, `"discussion"`
  to fetch only discussions. Set to `"all"` or `None` to fetch both.
  Default is `None`.
- **discussion_status** (`str`, *optional*) --
  Set to `"open"` (respectively `"closed"`) to fetch only open
  (respectively closed) discussions. Set to `"all"` or `None`
  to fetch both.
  Default is `None`.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if fetching from a dataset or
  space, `None` or `"model"` if fetching from a model. Default is
  `None`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`Iterator[Discussion]`</rettype><retdesc>An iterator of [Discussion](/docs/huggingface_hub/main/en/package_reference/community#huggingface_hub.Discussion) objects.</retdesc></docstring>

Fetches Discussions and Pull Requests for the given repo.







Example:
<ExampleCodeBlock anchor="huggingface_hub.HfApi.get_repo_discussions.example">

Collecting all discussions of a repo in a list:

```python
>>> from huggingface_hub import get_repo_discussions
>>> discussions_list = list(get_repo_discussions(repo_id="bert-base-uncased"))
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.HfApi.get_repo_discussions.example-2">

Iterating over discussions of a repo:

```python
>>> from huggingface_hub import get_repo_discussions
>>> for discussion in get_repo_discussions(repo_id="bert-base-uncased"):
...     print(discussion.num, discussion.title)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_safetensors_metadata</name><anchor>huggingface_hub.HfApi.get_safetensors_metadata</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L5489</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A user or an organization name and a repo name separated by a `/`.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if the file is in a dataset or space, `None` or `"model"` if in a
  model. Default is `None`.
- **revision** (`str`, *optional*) --
  The git revision to fetch the file from. Can be a branch name, a tag, or a commit hash. Defaults to the
  head of the `"main"` branch.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`SafetensorsRepoMetadata`</rettype><retdesc>information related to safetensors repo.</retdesc><raises>- `NotASafetensorsRepoError` -- 
  If the repo is not a safetensors repo i.e. doesn't have either a
  `model.safetensors` or a `model.safetensors.index.json` file.
- `SafetensorsParsingError` -- 
  If a safetensors file header couldn't be parsed correctly.</raises><raisederrors>`NotASafetensorsRepoError` or `SafetensorsParsingError`</raisederrors></docstring>

Parse metadata for a safetensors repo on the Hub.

We first check if the repo has a single safetensors file or a sharded safetensors repo. If it's a single
safetensors file, we parse the metadata from this file. If it's a sharded safetensors repo, we parse the
metadata from the index file and then parse the metadata from each shard.

To parse metadata from a single safetensors file, use [parse_safetensors_file_metadata()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.parse_safetensors_file_metadata).

For more details regarding the safetensors format, check out https://huggingface.co/docs/safetensors/index#format.











<ExampleCodeBlock anchor="huggingface_hub.HfApi.get_safetensors_metadata.example">

Example:
```py
# Parse repo with single weights file
>>> metadata = get_safetensors_metadata("bigscience/bloomz-560m")
>>> metadata
SafetensorsRepoMetadata(
    metadata=None,
    sharded=False,
    weight_map={'h.0.input_layernorm.bias': 'model.safetensors', ...},
    files_metadata={'model.safetensors': SafetensorsFileMetadata(...)}
)
>>> metadata.files_metadata["model.safetensors"].metadata
{'format': 'pt'}

# Parse repo with sharded model
>>> metadata = get_safetensors_metadata("bigscience/bloom")
Parse safetensors files: 100%|██████████████████████████████████████████| 72/72 [00:12<00:00,  5.78it/s]
>>> metadata
SafetensorsRepoMetadata(metadata={'total_size': 352494542848}, sharded=True, weight_map={...}, files_metadata={...})
>>> len(metadata.files_metadata)
72  # All safetensors files have been fetched

# Parse repo with sharded model
>>> get_safetensors_metadata("runwayml/stable-diffusion-v1-5")
NotASafetensorsRepoError: 'runwayml/stable-diffusion-v1-5' is not a safetensors repo. Couldn't find 'model.safetensors.index.json' or 'model.safetensors' files.
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_space_runtime</name><anchor>huggingface_hub.HfApi.get_space_runtime</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6929</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  ID of the Space to query. Example: `"bigcode/in-the-stack"`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[SpaceRuntime](/docs/huggingface_hub/main/en/package_reference/space_runtime#huggingface_hub.SpaceRuntime)</rettype><retdesc>Runtime information about a Space including Space stage and hardware.</retdesc></docstring>
Gets runtime information about a Space.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_space_variables</name><anchor>huggingface_hub.HfApi.get_space_variables</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6836</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  ID of the repo to query. Example: `"bigcode/in-the-stack"`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Gets all variables from a Space.

Variables allow you to set environment variables for a Space without hardcoding them.
For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets-and-environment-variables




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_user_overview</name><anchor>huggingface_hub.HfApi.get_user_overview</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L9548</source><parameters>[{"name": "username", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **username** (`str`) --
  Username of the user to get an overview of.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`User`</rettype><retdesc>A [User](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.User) object with the user's overview.</retdesc><raises>- `HfHubHTTPError` -- 
  HTTP 404 If the user does not exist on the Hub.</raises><raisederrors>`HfHubHTTPError`</raisederrors></docstring>

Get an overview of a user on the Hub.












</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_webhook</name><anchor>huggingface_hub.HfApi.get_webhook</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L8928</source><parameters>[{"name": "webhook_id", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **webhook_id** (`str`) --
  The unique identifier of the webhook to get.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved token, which is the recommended
  method for authentication (see https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[WebhookInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.WebhookInfo)</rettype><retdesc>Info about the webhook.</retdesc></docstring>
Get a webhook by its id.







<ExampleCodeBlock anchor="huggingface_hub.HfApi.get_webhook.example">

Example:
```python
>>> from huggingface_hub import get_webhook
>>> webhook = get_webhook("654bbbc16f2ec14d77f109cc")
>>> print(webhook)
WebhookInfo(
    id="654bbbc16f2ec14d77f109cc",
    job=None,
    watched=[WebhookWatchedItem(type="user", name="julien-c"), WebhookWatchedItem(type="org", name="HuggingFaceH4")],
    url="https://webhook.site/a2176e82-5720-43ee-9e06-f91cb4c91548",
    secret="my-secret",
    domains=["repo", "discussion"],
    disabled=False,
)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>grant_access</name><anchor>huggingface_hub.HfApi.grant_access</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L8873</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "user", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The id of the repo to grant access to.
- **user** (`str`) --
  The username of the user to grant access.
- **repo_type** (`str`, *optional*) --
  The type of the repo to grant access to. Must be one of `model`, `dataset` or `space`.
  Defaults to `model`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><raises>- `HfHubHTTPError` -- 
  HTTP 400 if the repo is not gated.
- `HfHubHTTPError` -- 
  HTTP 400 if the user already has access to the repo.
- `HfHubHTTPError` -- 
  HTTP 403 if you only have read-only access to the repo. This can be the case if you don't have `write`
  or `admin` role in the organization the repo belongs to or if you passed a `read` token.
- `HfHubHTTPError` -- 
  HTTP 404 if the user does not exist on the Hub.</raises><raisederrors>`HfHubHTTPError`</raisederrors></docstring>

Grant access to a user for a given gated repo.

Granting access doesn't require the user to send an access request themselves. The user is automatically
added to the accepted list, meaning they can download the repo's files. You can revoke the granted access at any time
using [cancel_access_request()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.cancel_access_request) or [reject_access_request()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.reject_access_request).

For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>hf_hub_download</name><anchor>huggingface_hub.HfApi.hf_hub_download</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L5240</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "filename", "val": ": str"}, {"name": "subfolder", "val": ": Optional[str] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "cache_dir", "val": ": Union[str, Path, None] = None"}, {"name": "local_dir", "val": ": Union[str, Path, None] = None"}, {"name": "force_download", "val": ": bool = False"}, {"name": "etag_timeout", "val": ": float = 10"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "local_files_only", "val": ": bool = False"}, {"name": "dry_run", "val": ": bool = False"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A user or an organization name and a repo name separated by a `/`.
- **filename** (`str`) --
  The name of the file in the repo.
- **subfolder** (`str`, *optional*) --
  An optional value corresponding to a folder inside the repository.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if downloading from a dataset or space,
  `None` or `"model"` if downloading from a model. Default is `None`.
- **revision** (`str`, *optional*) --
  An optional Git revision id which can be a branch name, a tag, or a
  commit hash.
- **cache_dir** (`str`, `Path`, *optional*) --
  Path to the folder where cached files are stored.
- **local_dir** (`str` or `Path`, *optional*) --
  If provided, the downloaded file will be placed under this directory.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether the file should be downloaded even if it already exists in
  the local cache.
- **etag_timeout** (`float`, *optional*, defaults to `10`) --
  When fetching the ETag, how many seconds to wait for the server to send
  data before giving up. This value is passed to `httpx.request`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  If `True`, avoid downloading the file and return the path to the
  local cached file if it exists.
- **dry_run** (`bool`, *optional*, defaults to `False`) --
  If `True`, perform a dry run without actually downloading the file. Returns a
  [DryRunFileInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.DryRunFileInfo) object containing information about what would be downloaded.</paramsdesc><paramgroups>0</paramgroups><rettype>`str` or [DryRunFileInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.DryRunFileInfo)</rettype><retdesc>- If `dry_run=False`: Local path of file or if networking is off, last version of file cached on disk.
- If `dry_run=True`: A [DryRunFileInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.DryRunFileInfo) object containing download information.</retdesc><raises>- [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  If the repository to download from cannot be found. This may be because it doesn't exist,
  or because it is set to `private` and you do not have access.
- [RevisionNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError) -- 
  If the revision to download from cannot be found.
- `~utils.RemoteEntryNotFoundError` -- 
  If the file to download cannot be found.
- [LocalEntryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.LocalEntryNotFoundError) -- 
  If network is disabled or unavailable and file is not found in cache.
- [`EnvironmentError`](https://docs.python.org/3/library/exceptions.html#EnvironmentError) -- 
  If `token=True` but the token cannot be found.
- [`OSError`](https://docs.python.org/3/library/exceptions.html#OSError) -- 
  If ETag cannot be determined.
- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If some parameter value is invalid.</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) or [RevisionNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError) or `~utils.RemoteEntryNotFoundError` or [LocalEntryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.LocalEntryNotFoundError) or ``EnvironmentError`` or ``OSError`` or ``ValueError``</raisederrors></docstring>
Download a given file if it's not already present in the local cache.

The new cache file layout looks like this:
- The cache directory contains one subfolder per repo_id (namespaced by repo type)
- inside each repo folder:
  - `refs` is a list of the latest known revision => commit_hash pairs
  - `blobs` contains the actual file blobs (identified by their git-sha or sha256, depending on
  whether they're LFS files or not)
  - `snapshots` contains one subfolder per commit, each "commit" contains the subset of the files
  that have been resolved at that particular commit. Each filename is a symlink to the blob
  at that particular commit.

<ExampleCodeBlock anchor="huggingface_hub.HfApi.hf_hub_download.example">

```
[  96]  .
└── [ 160]  models--julien-c--EsperBERTo-small
    ├── [ 160]  blobs
    │   ├── [321M]  403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd
    │   ├── [ 398]  7cb18dc9bafbfcf74629a4b760af1b160957a83e
    │   └── [1.4K]  d7edf6bd2a681fb0175f7735299831ee1b22b812
    ├── [  96]  refs
    │   └── [  40]  main
    └── [ 128]  snapshots
        ├── [ 128]  2439f60ef33a0d46d85da5001d52aeda5b00ce9f
        │   ├── [  52]  README.md -> ../../blobs/d7edf6bd2a681fb0175f7735299831ee1b22b812
        │   └── [  76]  pytorch_model.bin -> ../../blobs/403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd
        └── [ 128]  bbc77c8132af1cc5cf678da3f1ddf2de43606d48
            ├── [  52]  README.md -> ../../blobs/7cb18dc9bafbfcf74629a4b760af1b160957a83e
            └── [  76]  pytorch_model.bin -> ../../blobs/403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd
```

</ExampleCodeBlock>

If `local_dir` is provided, the file structure from the repo will be replicated in this location. When using this
option, the `cache_dir` will not be used and a `.cache/huggingface/` folder will be created at the root of `local_dir`
to store some metadata related to the downloaded files. While this mechanism is not as robust as the main
cache-system, it's optimized for regularly pulling the latest version of a repository.












</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>hide_discussion_comment</name><anchor>huggingface_hub.HfApi.hide_discussion_comment</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6710</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "discussion_num", "val": ": int"}, {"name": "comment_id", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **discussion_num** (`int`) --
  The number of the Discussion or Pull Request. Must be a strictly positive integer.
- **comment_id** (`str`) --
  The ID of the comment to hide.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if the Discussion is in a dataset or
  space, `None` or `"model"` if it is in a model. Default is
  `None`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[DiscussionComment](/docs/huggingface_hub/main/en/package_reference/community#huggingface_hub.DiscussionComment)</rettype><retdesc>the hidden comment</retdesc></docstring>
Hides a comment on a Discussion / Pull Request.

> [!WARNING]
> Hidden comments' content cannot be retrieved anymore. Hiding a comment is irreversible.







> [!TIP]
> Raises the following errors:
>
>     - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError)
>       if the HuggingFace API returned an error
>     - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
>       if some parameter value is invalid
>     - [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)
>       If the repository to download from cannot be found. This may be because it doesn't exist,
>       or because it is set to `private` and you do not have access.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>inspect_job</name><anchor>huggingface_hub.HfApi.inspect_job</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L10050</source><parameters>[{"name": "job_id", "val": ": str"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **job_id** (`str`) --
  ID of the Job.

- **namespace** (`str`, *optional*) --
  The namespace where the Job is running. Defaults to the current user's namespace.

- **token** (`Union[bool, str, None]`, *optional*) --
  A valid user access token. If not provided, the locally saved token will be used, which is the
  recommended authentication method. Set to `False` to disable authentication.
  Refer to: https://huggingface.co/docs/huggingface_hub/quick-start#authentication.</paramsdesc><paramgroups>0</paramgroups></docstring>

Inspect a compute Job on Hugging Face infrastructure.



<ExampleCodeBlock anchor="huggingface_hub.HfApi.inspect_job.example">

Example:

```python
>>> from huggingface_hub import inspect_job, run_job
>>> job = run_job(image="python:3.12", command=["python", "-c" ,"print('Hello from HF compute!')"])
>>> inspect_job(job.id)
JobInfo(
    id='68780d00bbe36d38803f645f',
    created_at=datetime.datetime(2025, 7, 16, 20, 35, 12, 808000, tzinfo=datetime.timezone.utc),
    docker_image='python:3.12',
    space_id=None,
    command=['python', '-c', "print('Hello from HF compute!')"],
    arguments=[],
    environment={},
    secrets={},
    flavor='cpu-basic',
    status=JobStatus(stage='RUNNING', message=None)
)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>inspect_scheduled_job</name><anchor>huggingface_hub.HfApi.inspect_scheduled_job</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L10390</source><parameters>[{"name": "scheduled_job_id", "val": ": str"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **scheduled_job_id** (`str`) --
  ID of the scheduled Job.

- **namespace** (`str`, *optional*) --
  The namespace where the scheduled Job is. Defaults to the current user's namespace.

- **token** (`Union[bool, str, None]`, *optional*) --
  A valid user access token. If not provided, the locally saved token will be used, which is the
  recommended authentication method. Set to `False` to disable authentication.
  Refer to: https://huggingface.co/docs/huggingface_hub/quick-start#authentication.</paramsdesc><paramgroups>0</paramgroups></docstring>

Inspect a scheduled compute Job on Hugging Face infrastructure.



<ExampleCodeBlock anchor="huggingface_hub.HfApi.inspect_scheduled_job.example">

Example:

```python
>>> from huggingface_hub import inspect_scheduled_job, create_scheduled_job
>>> scheduled_job = create_scheduled_job(image="python:3.12", command=["python", "-c" ,"print('Hello from HF compute!')"], schedule="@hourly")
>>> inspect_scheduled_job(scheduled_job.id)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_accepted_access_requests</name><anchor>huggingface_hub.HfApi.list_accepted_access_requests</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L8557</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The id of the repo to get access requests for.
- **repo_type** (`str`, *optional*) --
  The type of the repo to get access requests for. Must be one of `model`, `dataset` or `space`.
  Defaults to `model`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[AccessRequest]`</rettype><retdesc>A list of `AccessRequest` objects. Each item contains a `username`, `email`,
`status`, and `timestamp` attribute. If the gated repo has a custom form, the `fields` attribute will
be populated with the user's answers.</retdesc><raises>- `HfHubHTTPError` -- 
  HTTP 400 if the repo is not gated.
- `HfHubHTTPError` -- 
  HTTP 403 if you only have read-only access to the repo. This can be the case if you don't have `write`
  or `admin` role in the organization the repo belongs to or if you passed a `read` token.</raises><raisederrors>`HfHubHTTPError`</raisederrors></docstring>

Get accepted access requests for a given gated repo.

An accepted request means the user has requested access to the repo and the request has been accepted. The user
can download any file of the repo. If the approval mode is automatic, this list should contain all requests by
default. Accepted requests can be cancelled or rejected at any time using [cancel_access_request()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.cancel_access_request) and
[reject_access_request()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.reject_access_request). A cancelled request goes back to the pending list, while a rejected request
goes to the rejected list. In both cases, the user loses access to the repo.

For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.











<ExampleCodeBlock anchor="huggingface_hub.HfApi.list_accepted_access_requests.example">

Example:
```py
>>> from huggingface_hub import list_accepted_access_requests

>>> requests = list_accepted_access_requests("meta-llama/Llama-2-7b")
>>> len(requests)
411
>>> requests[0]
AccessRequest(
    username='clem',
    fullname='Clem 🤗',
    email='***',
    timestamp=datetime.datetime(2023, 11, 23, 18, 4, 53, 828000, tzinfo=datetime.timezone.utc),
    status='accepted',
    fields=None,
)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_collections</name><anchor>huggingface_hub.HfApi.list_collections</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L8033</source><parameters>[{"name": "owner", "val": ": Union[list[str], str, None] = None"}, {"name": "item", "val": ": Union[list[str], str, None] = None"}, {"name": "sort", "val": ": Optional[Literal['lastModified', 'trending', 'upvotes']] = None"}, {"name": "limit", "val": ": Optional[int] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **owner** (`list[str]` or `str`, *optional*) --
  Filter by owner's username.
- **item** (`list[str]` or `str`, *optional*) --
  Filter collections containing a specific item. Example: `"models/teknium/OpenHermes-2.5-Mistral-7B"`, `"datasets/squad"` or `"papers/2311.12983"`.
- **sort** (`Literal["lastModified", "trending", "upvotes"]`, *optional*) --
  Sort collections by last modified, trending or upvotes.
- **limit** (`int`, *optional*) --
  Maximum number of collections to be returned.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`Iterable[Collection]`</rettype><retdesc>an iterable of [Collection](/docs/huggingface_hub/main/en/package_reference/collections#huggingface_hub.Collection) objects.</retdesc></docstring>
List collections on the Hugging Face Hub, given some filters.

> [!WARNING]
> When listing collections, the item list per collection is truncated to 4 items maximum. To retrieve all items
> from a collection, you must use [get_collection()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.get_collection).








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_datasets</name><anchor>huggingface_hub.HfApi.list_datasets</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L1992</source><parameters>[{"name": "filter", "val": ": Union[str, Iterable[str], None] = None"}, {"name": "author", "val": ": Optional[str] = None"}, {"name": "benchmark", "val": ": Optional[Union[str, list[str]]] = None"}, {"name": "dataset_name", "val": ": Optional[str] = None"}, {"name": "gated", "val": ": Optional[bool] = None"}, {"name": "language_creators", "val": ": Optional[Union[str, list[str]]] = None"}, {"name": "language", "val": ": Optional[Union[str, list[str]]] = None"}, {"name": "multilinguality", "val": ": Optional[Union[str, list[str]]] = None"}, {"name": "size_categories", "val": ": Optional[Union[str, list[str]]] = None"}, {"name": "task_categories", "val": ": Optional[Union[str, list[str]]] = None"}, {"name": "task_ids", "val": ": Optional[Union[str, list[str]]] = None"}, {"name": "search", "val": ": Optional[str] = None"}, {"name": "sort", "val": ": Optional[Union[Literal['last_modified'], str]] = None"}, {"name": "direction", "val": ": Optional[Literal[-1]] = None"}, {"name": "limit", "val": ": Optional[int] = None"}, {"name": "expand", "val": ": Optional[list[ExpandDatasetProperty_T]] = None"}, {"name": "full", "val": ": Optional[bool] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "tags", "val": ": Optional[Union[str, list[str]]] = None"}]</parameters><paramsdesc>- **filter** (`str` or `Iterable[str]`, *optional*) --
  A string or list of strings to filter datasets on the Hub.
- **author** (`str`, *optional*) --
  A string that identifies the author of the returned datasets.
- **benchmark** (`str` or `List`, *optional*) --
  A string or list of strings that can be used to identify datasets on
  the Hub by their official benchmark.
- **dataset_name** (`str`, *optional*) --
  A string that can be used to identify datasets on the Hub by name,
  such as `SQAC` or `wikineural`.
- **gated** (`bool`, *optional*) --
  A boolean to filter datasets on the Hub that are gated or not. By default, all datasets are returned.
  If `gated=True` is passed, only gated datasets are returned.
  If `gated=False` is passed, only non-gated datasets are returned.
- **language_creators** (`str` or `List`, *optional*) --
  A string or list of strings that can be used to identify datasets on
  the Hub with how the data was curated, such as `crowdsourced` or
  `machine_generated`.
- **language** (`str` or `List`, *optional*) --
  A string or list of strings representing a two-character language code
  used to filter datasets on the Hub.
- **multilinguality** (`str` or `List`, *optional*) --
  A string or list of strings representing a filter for datasets that
  contain multiple languages.
- **size_categories** (`str` or `List`, *optional*) --
  A string or list of strings that can be used to identify datasets on
  the Hub by the size of the dataset such as `100K<n<1M` or
  `1M<n<10M`.
- **tags** (`str` or `List`, *optional*) --
  Deprecated. Pass tags in `filter` to filter datasets by tags.
- **task_categories** (`str` or `List`, *optional*) --
  A string or list of strings that can be used to identify datasets on
  the Hub by the designed task, such as `audio_classification` or
  `named_entity_recognition`.
- **task_ids** (`str` or `List`, *optional*) --
  A string or list of strings that can be used to identify datasets on
  the Hub by the specific task such as `speech_emotion_recognition` or
  `paraphrase`.
- **search** (`str`, *optional*) --
  A string that will be contained in the returned datasets.
- **sort** (`Literal["last_modified"]` or `str`, *optional*) --
  The key with which to sort the resulting datasets. Possible values are "last_modified", "trending_score",
  "created_at", "downloads" and "likes".
- **direction** (`Literal[-1]` or `int`, *optional*) --
  Direction in which to sort. The value `-1` sorts by descending
  order while all other values sort by ascending order.
- **limit** (`int`, *optional*) --
  The limit on the number of datasets fetched. Leaving this option
  to `None` fetches all datasets.
- **expand** (`list[ExpandDatasetProperty_T]`, *optional*) --
  List properties to return in the response. When used, only the properties in the list will be returned.
  This parameter cannot be used if `full` is passed.
  Possible values are `"author"`, `"cardData"`, `"citation"`, `"createdAt"`, `"disabled"`, `"description"`, `"downloads"`, `"downloadsAllTime"`, `"gated"`, `"lastModified"`, `"likes"`, `"paperswithcode_id"`, `"private"`, `"siblings"`, `"sha"`, `"tags"`, `"trendingScore"`, `"usedStorage"`, and `"resourceGroup"`.
- **full** (`bool`, *optional*) --
  Whether to fetch all dataset data, including the `last_modified`,
  the `card_data` and  the files. Can contain useful information such as the
  PapersWithCode ID.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`Iterable[DatasetInfo]`</rettype><retdesc>an iterable of [huggingface_hub.hf_api.DatasetInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.DatasetInfo) objects.</retdesc></docstring>

List datasets hosted on the Hugging Face Hub, given some filters.







<ExampleCodeBlock anchor="huggingface_hub.HfApi.list_datasets.example">

Example usage with the `filter` argument:

```python
>>> from huggingface_hub import HfApi

>>> api = HfApi()

# List all datasets
>>> api.list_datasets()


# List only the text classification datasets
>>> api.list_datasets(filter="task_categories:text-classification")


# List only the datasets in russian for language modeling
>>> api.list_datasets(
...     filter=("language:ru", "task_ids:language-modeling")
... )

# List FiftyOne datasets (identified by the tag "fiftyone" in dataset card)
>>> api.list_datasets(tags="fiftyone")
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.HfApi.list_datasets.example-2">

Example usage with the `search` argument:

```python
>>> from huggingface_hub import HfApi

>>> api = HfApi()

# List all datasets with "text" in their name
>>> api.list_datasets(search="text")

# List all datasets with "text" in their name made by google
>>> api.list_datasets(search="text", author="google")
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_inference_catalog</name><anchor>huggingface_hub.HfApi.list_inference_catalog</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L7661</source><parameters>[{"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[str]`</rettype><retdesc>A list of model IDs available in the catalog.</retdesc></docstring>
List models available in the Hugging Face Inference Catalog.

The goal of the Inference Catalog is to provide a curated list of models that are optimized for inference
and for which default configurations have been tested. See https://endpoints.huggingface.co/catalog for a list
of available models in the catalog.

Use [create_inference_endpoint_from_catalog()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_inference_endpoint_from_catalog) to deploy a model from the catalog.







> [!WARNING]
> `list_inference_catalog` is experimental. Its API is subject to change in the future. Please provide feedback
> if you have any suggestions or requests.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_inference_endpoints</name><anchor>huggingface_hub.HfApi.list_inference_endpoints</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L7322</source><parameters>[{"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **namespace** (`str`, *optional*) --
  The namespace to list endpoints for. Defaults to the current user. Set to `"*"` to list all endpoints
  from all namespaces (i.e. personal namespace and all orgs the user belongs to).
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>list[InferenceEndpoint](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint)</rettype><retdesc>A list of all inference endpoints for the given namespace.</retdesc></docstring>
Lists all inference endpoints for the given namespace.







<ExampleCodeBlock anchor="huggingface_hub.HfApi.list_inference_endpoints.example">

Example:
```python
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.list_inference_endpoints()
[InferenceEndpoint(name='my-endpoint', ...), ...]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_jobs</name><anchor>huggingface_hub.HfApi.list_jobs</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L10018</source><parameters>[{"name": "timeout", "val": ": Optional[int] = None"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **timeout** (`float`, *optional*) --
  Timeout (in seconds) for requests to the Hub.

- **namespace** (`str`, *optional*) --
  The namespace whose Jobs are listed. Defaults to the current user's namespace.

- **token** (`Union[bool, str, None]`, *optional*) --
  A valid user access token. If not provided, the locally saved token will be used, which is the
  recommended authentication method. Set to `False` to disable authentication.
  Refer to: https://huggingface.co/docs/huggingface_hub/quick-start#authentication.</paramsdesc><paramgroups>0</paramgroups></docstring>

List compute Jobs on Hugging Face infrastructure.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_lfs_files</name><anchor>huggingface_hub.HfApi.list_lfs_files</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L3473</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The repository for which you are listing LFS files.
- **repo_type** (`str`, *optional*) --
  Type of repository. Set to `"dataset"` or `"space"` if listing from a dataset or space, `None` or
  `"model"` if listing from a model. Default is `None`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`Iterable[LFSFileInfo]`</rettype><retdesc>An iterator of `LFSFileInfo` objects.</retdesc></docstring>

List all LFS files in a repo on the Hub.

This is primarily useful for counting how much storage a repo uses and for cleaning up large files
with [permanently_delete_lfs_files()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.permanently_delete_lfs_files). Note that this is a permanent action that affects all commits
referencing the deleted files and cannot be undone.







<ExampleCodeBlock anchor="huggingface_hub.HfApi.list_lfs_files.example">

Example:
```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> lfs_files = api.list_lfs_files("username/my-cool-repo")

# Filter files to delete based on a combination of `filename`, `pushed_at`, `ref` or `size`.
# e.g. select only LFS files in the "checkpoints" folder
>>> lfs_files_to_delete = (lfs_file for lfs_file in lfs_files if lfs_file.filename.startswith("checkpoints/"))

# Permanently delete LFS files
>>> api.permanently_delete_lfs_files("username/my-cool-repo", lfs_files_to_delete)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_liked_repos</name><anchor>huggingface_hub.HfApi.list_liked_repos</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L2366</source><parameters>[{"name": "user", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **user** (`str`, *optional*) --
  Name of the user for which you want to fetch the likes.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[UserLikes](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.UserLikes)</rettype><retdesc>object containing the username and three lists of repo ids (one for
models, one for datasets and one for Spaces).</retdesc><raises>- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If `user` is not passed and no token found (either from argument or from machine).</raises><raisederrors>``ValueError``</raisederrors></docstring>

List all public repos liked by a user on huggingface.co.

This list is public, so a token is optional. If `user` is not passed, it defaults to
the logged-in user.

See also [unlike()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.unlike).











<ExampleCodeBlock anchor="huggingface_hub.HfApi.list_liked_repos.example">

Example:
```python
>>> from huggingface_hub import list_liked_repos

>>> likes = list_liked_repos("julien-c")

>>> likes.user
"julien-c"

>>> likes.models
["osanseviero/streamlit_1.15", "Xhaheen/ChatGPT_HF", ...]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_models</name><anchor>huggingface_hub.HfApi.list_models</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L1792</source><parameters>[{"name": "filter", "val": ": Union[str, Iterable[str], None] = None"}, {"name": "author", "val": ": Optional[str] = None"}, {"name": "apps", "val": ": Optional[Union[str, list[str]]] = None"}, {"name": "gated", "val": ": Optional[bool] = None"}, {"name": "inference", "val": ": Optional[Literal['warm']] = None"}, {"name": "inference_provider", "val": ": Optional[Union[Literal['all'], 'PROVIDER_T', list['PROVIDER_T']]] = None"}, {"name": "model_name", "val": ": Optional[str] = None"}, {"name": "trained_dataset", "val": ": Optional[Union[str, list[str]]] = None"}, {"name": "search", "val": ": Optional[str] = None"}, {"name": "pipeline_tag", "val": ": Optional[str] = None"}, {"name": "emissions_thresholds", "val": ": Optional[tuple[float, float]] = None"}, {"name": "sort", "val": ": Union[Literal['last_modified'], str, None] = None"}, {"name": "direction", "val": ": Optional[Literal[-1]] = None"}, {"name": "limit", "val": ": Optional[int] = None"}, {"name": "expand", "val": ": Optional[list[ExpandModelProperty_T]] = None"}, {"name": "full", "val": ": Optional[bool] = None"}, {"name": "cardData", "val": ": bool = False"}, {"name": "fetch_config", "val": ": bool = False"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **filter** (`str` or `Iterable[str]`, *optional*) --
  A string or list of strings to filter models on the Hub.
  Models can be filtered by library, language, task, tags, and more.
- **author** (`str`, *optional*) --
  A string that identifies the author (user or organization) of the
  returned models.
- **apps** (`str` or `List`, *optional*) --
  A string or list of strings to filter models on the Hub that
  support the specified apps. Example values include `"ollama"` or `["ollama", "vllm"]`.
- **gated** (`bool`, *optional*) --
  A boolean to filter models on the Hub that are gated or not. By default, all models are returned.
  If `gated=True` is passed, only gated models are returned.
  If `gated=False` is passed, only non-gated models are returned.
- **inference** (`Literal["warm"]`, *optional*) --
  If "warm", filter models on the Hub currently served by at least one provider.
- **inference_provider** (`Literal["all"]` or `str`, *optional*) --
  A string to filter models on the Hub that are served by a specific provider.
  Pass `"all"` to get all models served by at least one provider.
- **model_name** (`str`, *optional*) --
  A string containing a complete or partial model name on the
  Hub, such as "bert" or "bert-base-cased".
- **trained_dataset** (`str` or `List`, *optional*) --
  A string tag or a list of string tags of the trained dataset for a
  model on the Hub.
- **search** (`str`, *optional*) --
  A string that will be contained in the returned model ids.
- **pipeline_tag** (`str`, *optional*) --
  A string pipeline tag to filter models on the Hub by, such as `summarization`.
- **emissions_thresholds** (`Tuple`, *optional*) --
  A tuple of two ints or floats representing the minimum and maximum
  carbon footprint (in grams) used to filter the resulting models.
- **sort** (`Literal["last_modified"]` or `str`, *optional*) --
  The key with which to sort the resulting models. Possible values are "last_modified", "trending_score",
  "created_at", "downloads" and "likes".
- **direction** (`Literal[-1]` or `int`, *optional*) --
  Direction in which to sort. The value `-1` sorts by descending
  order while all other values sort by ascending order.
- **limit** (`int`, *optional*) --
  The limit on the number of models fetched. Leaving this option
  to `None` fetches all models.
- **expand** (`list[ExpandModelProperty_T]`, *optional*) --
  List properties to return in the response. When used, only the properties in the list will be returned.
  This parameter cannot be used if `full`, `cardData` or `fetch_config` are passed.
  Possible values are `"author"`, `"cardData"`, `"config"`, `"createdAt"`, `"disabled"`, `"downloads"`, `"downloadsAllTime"`, `"gated"`, `"gguf"`, `"inference"`, `"inferenceProviderMapping"`, `"lastModified"`, `"library_name"`, `"likes"`, `"mask_token"`, `"model-index"`, `"pipeline_tag"`, `"private"`, `"safetensors"`, `"sha"`, `"siblings"`, `"spaces"`, `"tags"`, `"transformersInfo"`, `"trendingScore"`, `"widgetData"`, and `"resourceGroup"`.
- **full** (`bool`, *optional*) --
  Whether to fetch all model data, including the `last_modified`,
  the `sha`, the files and the `tags`. This is set to `True` by
  default when using a filter.
- **cardData** (`bool`, *optional*) --
  Whether to grab the metadata for the model as well. Can contain
  useful information such as carbon emissions, metrics, and
  datasets trained on.
- **fetch_config** (`bool`, *optional*) --
  Whether to fetch the model configs as well. This is not included
  in `full` due to its size.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`Iterable[ModelInfo]`</rettype><retdesc>an iterable of [huggingface_hub.hf_api.ModelInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.ModelInfo) objects.</retdesc></docstring>

List models hosted on the Hugging Face Hub, given some filters.







<ExampleCodeBlock anchor="huggingface_hub.HfApi.list_models.example">

Example:

```python
>>> from huggingface_hub import HfApi

>>> api = HfApi()

# List all models
>>> api.list_models()

# List text classification models
>>> api.list_models(filter="text-classification")

# List models from the KerasHub library
>>> api.list_models(filter="keras-hub")

# List models served by Cohere
>>> api.list_models(inference_provider="cohere")

# List models with "bert" in their name
>>> api.list_models(search="bert")

# List models with "bert" in their name and pushed by google
>>> api.list_models(search="bert", author="google")
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_organization_followers</name><anchor>huggingface_hub.HfApi.list_organization_followers</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L9601</source><parameters>[{"name": "organization", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **organization** (`str`) --
  Name of the organization to get the followers of.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`Iterable[User]`</rettype><retdesc>A list of [User](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.User) objects with the followers of the organization.</retdesc><raises>- `HfHubHTTPError` -- 
  HTTP 404 If the organization does not exist on the Hub.</raises><raisederrors>`HfHubHTTPError`</raisederrors></docstring>

List followers of an organization on the Hub.












</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_organization_members</name><anchor>huggingface_hub.HfApi.list_organization_members</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L9630</source><parameters>[{"name": "organization", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **organization** (`str`) --
  Name of the organization to get the members of.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`Iterable[User]`</rettype><retdesc>A list of [User](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.User) objects with the members of the organization.</retdesc><raises>- `HfHubHTTPError` -- 
  HTTP 404 If the organization does not exist on the Hub.</raises><raisederrors>`HfHubHTTPError`</raisederrors></docstring>

List members of an organization on the Hub.












</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_papers</name><anchor>huggingface_hub.HfApi.list_papers</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L9714</source><parameters>[{"name": "query", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **query** (`str`, *optional*) --
  A search query string to find papers.
  If provided, returns papers that match the query.
- **token** (Union[bool, str, None], *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`Iterable[PaperInfo]`</rettype><retdesc>an iterable of `huggingface_hub.hf_api.PaperInfo` objects.</retdesc></docstring>

List daily papers on the Hugging Face Hub given a search query.







<ExampleCodeBlock anchor="huggingface_hub.HfApi.list_papers.example">

Example:

```python
>>> from huggingface_hub import HfApi

>>> api = HfApi()

# List all papers with "attention" in their title
>>> api.list_papers(query="attention")
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_pending_access_requests</name><anchor>huggingface_hub.HfApi.list_pending_access_requests</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L8493</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The id of the repo to get access requests for.
- **repo_type** (`str`, *optional*) --
  The type of the repo to get access requests for. Must be one of `model`, `dataset` or `space`.
  Defaults to `model`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[AccessRequest]`</rettype><retdesc>A list of `AccessRequest` objects. Each one contains a `username`, `email`,
`status` and `timestamp` attribute. If the gated repo has a custom form, the `fields` attribute will
be populated with the user's answers.</retdesc><raises>- `HfHubHTTPError` -- 
  HTTP 400 if the repo is not gated.
- `HfHubHTTPError` -- 
  HTTP 403 if you only have read-only access to the repo. This can be the case if you don't have `write`
  or `admin` role in the organization the repo belongs to or if you passed a `read` token.</raises><raisederrors>`HfHubHTTPError`</raisederrors></docstring>

Get pending access requests for a given gated repo.

A pending request means the user has requested access to the repo but the request has not been processed yet.
If the approval mode is automatic, this list should be empty. Pending requests can be accepted or rejected
using [accept_access_request()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.accept_access_request) and [reject_access_request()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.reject_access_request).

For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.











<ExampleCodeBlock anchor="huggingface_hub.HfApi.list_pending_access_requests.example">

Example:
```py
>>> from huggingface_hub import list_pending_access_requests, accept_access_request

# List pending requests
>>> requests = list_pending_access_requests("meta-llama/Llama-2-7b")
>>> len(requests)
411
>>> requests
[
    AccessRequest(
        username='clem',
        fullname='Clem 🤗',
        email='***',
        timestamp=datetime.datetime(2023, 11, 23, 18, 4, 53, 828000, tzinfo=datetime.timezone.utc),
        status='pending',
        fields=None,
    ),
    ...
]

# Accept Clem's request
>>> accept_access_request("meta-llama/Llama-2-7b", "clem")
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_rejected_access_requests</name><anchor>huggingface_hub.HfApi.list_rejected_access_requests</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L8619</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The id of the repo to get access requests for.
- **repo_type** (`str`, *optional*) --
  The type of the repo to get access requests for. Must be one of `model`, `dataset` or `space`.
  Defaults to `model`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[AccessRequest]`</rettype><retdesc>A list of `AccessRequest` objects. Each one contains a `username`, `email`,
`status` and `timestamp` attribute. If the gated repo has a custom form, the `fields` attribute will
be populated with the user's answers.</retdesc><raises>- `HfHubHTTPError` -- 
  HTTP 400 if the repo is not gated.
- `HfHubHTTPError` -- 
  HTTP 403 if you only have read-only access to the repo. This can be the case if you don't have `write`
  or `admin` role in the organization the repo belongs to or if you passed a `read` token.</raises><raisederrors>`HfHubHTTPError`</raisederrors></docstring>

Get rejected access requests for a given gated repo.

A rejected request means the user has requested access to the repo and the request has been explicitly rejected
by a repo owner (either you or another user from your organization). The user cannot download any file of the
repo. Rejected requests can be accepted or cancelled at any time using [accept_access_request()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.accept_access_request) and
[cancel_access_request()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.cancel_access_request). A cancelled request will go back to the pending list while an accepted request will
go to the accepted list.

For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.











<ExampleCodeBlock anchor="huggingface_hub.HfApi.list_rejected_access_requests.example">

Example:
```py
>>> from huggingface_hub import list_rejected_access_requests

>>> requests = list_rejected_access_requests("meta-llama/Llama-2-7b")
>>> len(requests)
411
>>> requests
[
    AccessRequest(
        username='clem',
        fullname='Clem 🤗',
        email='***',
        timestamp=datetime.datetime(2023, 11, 23, 18, 4, 53, 828000, tzinfo=datetime.timezone.utc),
        status='rejected',
        fields=None,
    ),
    ...
]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_repo_commits</name><anchor>huggingface_hub.HfApi.list_repo_commits</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L3230</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "formatted", "val": ": bool = False"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated by a `/`.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if listing commits from a dataset or a Space, `None` or `"model"` if
  listing from a model. Default is `None`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **revision** (`str`, *optional*) --
  The git revision to commit from. Defaults to the head of the `"main"` branch.
- **formatted** (`bool`) --
  Whether to return the HTML-formatted title and description of the commits. Defaults to `False`.
  If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo
  does not exist.
- [RevisionNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError) -- 
  If revision is not found (error 404) on the repo.</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) or [RevisionNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError)</raisederrors></docstring>

Get the list of commits of a given revision for a repo on the Hub.

Commits are sorted by date (last commit first).



<ExampleCodeBlock anchor="huggingface_hub.HfApi.list_repo_commits.example">

Example:
```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()

# Commits are sorted by date (last commit first)
>>> initial_commit = api.list_repo_commits("gpt2")[-1]

# Initial commit is always a system commit containing the `.gitattributes` file.
>>> initial_commit
GitCommitInfo(
    commit_id='9b865efde13a30c13e0a33e536cf3e4a5a9d71d8',
    authors=['system'],
    created_at=datetime.datetime(2019, 2, 18, 10, 36, 15, tzinfo=datetime.timezone.utc),
    title='initial commit',
    message='',
    formatted_title=None,
    formatted_message=None
)

# Create an empty branch by deriving from initial commit
>>> api.create_branch("gpt2", "new_empty_branch", revision=initial_commit.commit_id)
```

</ExampleCodeBlock>










</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_repo_files</name><anchor>huggingface_hub.HfApi.list_repo_files</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L2916</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[str, bool, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated by a `/`.
- **revision** (`str`, *optional*) --
  The revision of the repository from which to get the information.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if uploading to a dataset or space, `None` or `"model"` if uploading to
  a model. Default is `None`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[str]`</rettype><retdesc>the list of files in a given repository.</retdesc></docstring>

Get the list of files in a given repo.
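As a minimal sketch (using the public `gpt2` model repo as an example, so no token is required):

```python
from huggingface_hub import HfApi

api = HfApi()
# List every file path in the public gpt2 model repo
files = api.list_repo_files("gpt2")
print(files[:3])
```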








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_repo_likers</name><anchor>huggingface_hub.HfApi.list_repo_likers</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L2442</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The repository to retrieve. Example: `"user/my-cool-model"`.

- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.

- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if uploading to a dataset or
  space, `None` or `"model"` if uploading to a model. Default is
  `None`.</paramsdesc><paramgroups>0</paramgroups><rettype>`Iterable[User]`</rettype><retdesc>an iterable of [huggingface_hub.hf_api.User](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.User) objects.</retdesc></docstring>

List all users who liked a given repo on the Hugging Face Hub.

See also [list_liked_repos()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.list_liked_repos).
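A short sketch, assuming the public `gpt2` repo; since the return value is an iterable, it can be consumed lazily with `itertools.islice`:

```python
from itertools import islice

from huggingface_hub import HfApi

api = HfApi()
# Fetch only the first few likers; pagination is handled lazily
likers = api.list_repo_likers("gpt2")
for user in islice(likers, 3):
    print(user.username)
```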








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_repo_refs</name><anchor>huggingface_hub.HfApi.list_repo_refs</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L3158</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "include_pull_requests", "val": ": bool = False"}, {"name": "token", "val": ": Union[str, bool, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if listing refs from a dataset or a Space,
  `None` or `"model"` if listing from a model. Default is `None`.
- **include_pull_requests** (`bool`, *optional*) --
  Whether to include refs from pull requests in the list. Defaults to `False`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[GitRefs](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.GitRefs)</rettype><retdesc>object containing all information about branches and tags for a
repo on the Hub.</retdesc></docstring>

Get the list of refs of a given repo (both tags and branches).



<ExampleCodeBlock anchor="huggingface_hub.HfApi.list_repo_refs.example">

Example:
```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.list_repo_refs("gpt2")
GitRefs(branches=[GitRefInfo(name='main', ref='refs/heads/main', target_commit='e7da7f221d5bf496a48136c0cd264e630fe9fcc8')], converts=[], tags=[])

>>> api.list_repo_refs("bigcode/the-stack", repo_type='dataset')
GitRefs(
    branches=[
        GitRefInfo(name='main', ref='refs/heads/main', target_commit='18edc1591d9ce72aa82f56c4431b3c969b210ae3'),
        GitRefInfo(name='v1.1.a1', ref='refs/heads/v1.1.a1', target_commit='f9826b862d1567f3822d3d25649b0d6d22ace714')
    ],
    converts=[],
    tags=[
        GitRefInfo(name='v1.0', ref='refs/tags/v1.0', target_commit='c37a8cd1e382064d8aced5e05543c5f7753834da')
    ]
)
```

</ExampleCodeBlock>






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_repo_tree</name><anchor>huggingface_hub.HfApi.list_repo_tree</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L2953</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "path_in_repo", "val": ": Optional[str] = None"}, {"name": "recursive", "val": ": bool = False"}, {"name": "expand", "val": ": bool = False"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[str, bool, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated by a `/`.
- **path_in_repo** (`str`, *optional*) --
  Relative path of the tree (folder) in the repo, for example:
  `"checkpoints/1fec34a/results"`. Will default to the root tree (folder) of the repository.
- **recursive** (`bool`, *optional*, defaults to `False`) --
  Whether to list tree's files and folders recursively.
- **expand** (`bool`, *optional*, defaults to `False`) --
  Whether to fetch more information about the tree's files and folders (e.g. last commit and files' security scan results). This
  operation is more expensive for the server so only 50 results are returned per page (instead of 1000).
  As pagination is implemented internally by `huggingface_hub`, this is transparent to you except for the time it
  takes to get the results.
- **revision** (`str`, *optional*) --
  The revision of the repository from which to get the tree. Defaults to `"main"` branch.
- **repo_type** (`str`, *optional*) --
  The type of the repository from which to get the tree (`"model"`, `"dataset"` or `"space"`).
  Defaults to `"model"`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`Iterable[Union[RepoFile, RepoFolder]]`</rettype><retdesc>The information about the tree's files and folders, as an iterable of `RepoFile` and `RepoFolder` objects. The order of the files and folders is
not guaranteed.</retdesc><raises>- [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo
  does not exist.
- [RevisionNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError) -- 
  If revision is not found (error 404) on the repo.
- `~utils.RemoteEntryNotFoundError` -- 
  If the tree (folder) does not exist (error 404) on the repo.</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) or [RevisionNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError) or `~utils.RemoteEntryNotFoundError`</raisederrors></docstring>

List a repo tree's files and folders and get information about them.











Examples:

Get information about a repo's tree.
<ExampleCodeBlock anchor="huggingface_hub.HfApi.list_repo_tree.example">

```py
>>> from huggingface_hub import list_repo_tree
>>> repo_tree = list_repo_tree("lysandre/arxiv-nlp")
>>> repo_tree
<generator object HfApi.list_repo_tree at 0x7fa4088e1ac0>
>>> list(repo_tree)
[
    RepoFile(path='.gitattributes', size=391, blob_id='ae8c63daedbd4206d7d40126955d4e6ab1c80f8f', lfs=None, last_commit=None, security=None),
    RepoFile(path='README.md', size=391, blob_id='43bd404b159de6fba7c2f4d3264347668d43af25', lfs=None, last_commit=None, security=None),
    RepoFile(path='config.json', size=554, blob_id='2f9618c3a19b9a61add74f70bfb121335aeef666', lfs=None, last_commit=None, security=None),
    RepoFile(
        path='flax_model.msgpack', size=497764107, blob_id='8095a62ccb4d806da7666fcda07467e2d150218e',
        lfs={'size': 497764107, 'sha256': 'd88b0d6a6ff9c3f8151f9d3228f57092aaea997f09af009eefd7373a77b5abb9', 'pointer_size': 134}, last_commit=None, security=None
    ),
    RepoFile(path='merges.txt', size=456318, blob_id='226b0752cac7789c48f0cb3ec53eda48b7be36cc', lfs=None, last_commit=None, security=None),
    RepoFile(
        path='pytorch_model.bin', size=548123560, blob_id='64eaa9c526867e404b68f2c5d66fd78e27026523',
        lfs={'size': 548123560, 'sha256': '9be78edb5b928eba33aa88f431551348f7466ba9f5ef3daf1d552398722a5436', 'pointer_size': 134}, last_commit=None, security=None
    ),
    RepoFile(path='vocab.json', size=898669, blob_id='b00361fece0387ca34b4b8b8539ed830d644dbeb', lfs=None, last_commit=None, security=None)
]
```

</ExampleCodeBlock>

Get even more information about a repo's tree (last commit and files' security scan results)
<ExampleCodeBlock anchor="huggingface_hub.HfApi.list_repo_tree.example-2">

```py
>>> from huggingface_hub import list_repo_tree
>>> repo_tree = list_repo_tree("prompthero/openjourney-v4", expand=True)
>>> list(repo_tree)
[
    RepoFolder(
        path='feature_extractor',
        tree_id='aa536c4ea18073388b5b0bc791057a7296a00398',
        last_commit={
            'oid': '47b62b20b20e06b9de610e840282b7e6c3d51190',
            'title': 'Upload diffusers weights (#48)',
            'date': datetime.datetime(2023, 3, 21, 9, 5, 27, tzinfo=datetime.timezone.utc)
        }
    ),
    RepoFolder(
        path='safety_checker',
        tree_id='65aef9d787e5557373fdf714d6c34d4fcdd70440',
        last_commit={
            'oid': '47b62b20b20e06b9de610e840282b7e6c3d51190',
            'title': 'Upload diffusers weights (#48)',
            'date': datetime.datetime(2023, 3, 21, 9, 5, 27, tzinfo=datetime.timezone.utc)
        }
    ),
    RepoFile(
        path='model_index.json',
        size=582,
        blob_id='d3d7c1e8c3e78eeb1640b8e2041ee256e24c9ee1',
        lfs=None,
        last_commit={
            'oid': 'b195ed2d503f3eb29637050a886d77bd81d35f0e',
            'title': 'Fix deprecation warning by changing `CLIPFeatureExtractor` to `CLIPImageProcessor`. (#54)',
            'date': datetime.datetime(2023, 5, 15, 21, 41, 59, tzinfo=datetime.timezone.utc)
        },
        security={
            'safe': True,
            'av_scan': {'virusFound': False, 'virusNames': None},
            'pickle_import_scan': None
        }
    )
    ...
]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_scheduled_jobs</name><anchor>huggingface_hub.HfApi.list_scheduled_jobs</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L10358</source><parameters>[{"name": "timeout", "val": ": Optional[int] = None"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **timeout** (`float`, *optional*) --
  The timeout (in seconds) for the request to the Hub.

- **namespace** (`str`, *optional*) --
  The namespace from which to list the jobs. Defaults to the current user's namespace.

- **token** `(Union[bool, str, None]`, *optional*) --
  A valid user access token. If not provided, the locally saved token will be used, which is the
  recommended authentication method. Set to `False` to disable authentication.
  Refer to: https://huggingface.co/docs/huggingface_hub/quick-start#authentication.</paramsdesc><paramgroups>0</paramgroups></docstring>

List scheduled compute Jobs on Hugging Face infrastructure.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_spaces</name><anchor>huggingface_hub.HfApi.list_spaces</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L2205</source><parameters>[{"name": "filter", "val": ": Union[str, Iterable[str], None] = None"}, {"name": "author", "val": ": Optional[str] = None"}, {"name": "search", "val": ": Optional[str] = None"}, {"name": "datasets", "val": ": Union[str, Iterable[str], None] = None"}, {"name": "models", "val": ": Union[str, Iterable[str], None] = None"}, {"name": "linked", "val": ": bool = False"}, {"name": "sort", "val": ": Union[Literal['last_modified'], str, None] = None"}, {"name": "direction", "val": ": Optional[Literal[-1]] = None"}, {"name": "limit", "val": ": Optional[int] = None"}, {"name": "expand", "val": ": Optional[list[ExpandSpaceProperty_T]] = None"}, {"name": "full", "val": ": Optional[bool] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **filter** (`str` or `Iterable`, *optional*) --
  A string tag or list of tags that can be used to identify Spaces on the Hub.
- **author** (`str`, *optional*) --
  A string which identifies the author of the returned Spaces.
- **search** (`str`, *optional*) --
  A string that will be contained in the returned Spaces.
- **datasets** (`str` or `Iterable`, *optional*) --
  Whether to return Spaces that make use of a dataset.
  The name of a specific dataset can be passed as a string.
- **models** (`str` or `Iterable`, *optional*) --
  Whether to return Spaces that make use of a model.
  The name of a specific model can be passed as a string.
- **linked** (`bool`, *optional*) --
  Whether to return Spaces that make use of either a model or a dataset.
- **sort** (`Literal["last_modified"]` or `str`, *optional*) --
  The key with which to sort the resulting Spaces. Possible values are "last_modified", "trending_score",
  "created_at" and "likes".
- **direction** (`Literal[-1]` or `int`, *optional*) --
  Direction in which to sort. The value `-1` sorts by descending
  order while all other values sort by ascending order.
- **limit** (`int`, *optional*) --
  The limit on the number of Spaces fetched. Leaving this option
  to `None` fetches all Spaces.
- **expand** (`list[ExpandSpaceProperty_T]`, *optional*) --
  List properties to return in the response. When used, only the properties in the list will be returned.
  This parameter cannot be used if `full` is passed.
  Possible values are `"author"`, `"cardData"`, `"datasets"`, `"disabled"`, `"lastModified"`, `"createdAt"`, `"likes"`, `"models"`, `"private"`, `"runtime"`, `"sdk"`, `"siblings"`, `"sha"`, `"subdomain"`, `"tags"`, `"trendingScore"`, `"usedStorage"`, and `"resourceGroup"`.
- **full** (`bool`, *optional*) --
  Whether to fetch all Spaces data, including the `last_modified`, `siblings`
  and `card_data` fields.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`Iterable[SpaceInfo]`</rettype><retdesc>an iterable of [huggingface_hub.hf_api.SpaceInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.SpaceInfo) objects.</retdesc></docstring>

List Spaces hosted on the Hugging Face Hub, given some filters.
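A brief sketch of combining the filters above (the `"gradio"` tag and the `"likes"` sort key are taken as examples from the parameter descriptions):

```python
from huggingface_hub import HfApi

api = HfApi()
# The 5 most-liked Spaces tagged "gradio", in descending order of likes
for space in api.list_spaces(filter="gradio", sort="likes", direction=-1, limit=5):
    print(space.id)
```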








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_user_followers</name><anchor>huggingface_hub.HfApi.list_user_followers</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L9658</source><parameters>[{"name": "username", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **username** (`str`) --
  Username of the user to get the followers of.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`Iterable[User]`</rettype><retdesc>A list of [User](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.User) objects with the followers of the user.</retdesc><raises>- `HfHubHTTPError` -- 
  HTTP 404 If the user does not exist on the Hub.</raises><raisederrors>`HfHubHTTPError`</raisederrors></docstring>

Get the list of followers of a user on the Hub.
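For instance (the username `"julien-c"` is just an illustrative public account):

```python
from itertools import islice

from huggingface_hub import HfApi

api = HfApi()
# Print a handful of followers; the iterable is paginated lazily
for follower in islice(api.list_user_followers("julien-c"), 5):
    print(follower.username)
```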












</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_user_following</name><anchor>huggingface_hub.HfApi.list_user_following</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L9686</source><parameters>[{"name": "username", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **username** (`str`) --
  Username of the user whose followed accounts to list.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`Iterable[User]`</rettype><retdesc>A list of [User](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.User) objects with the users followed by the user.</retdesc><raises>- `HfHubHTTPError` -- 
  HTTP 404 If the user does not exist on the Hub.</raises><raisederrors>`HfHubHTTPError`</raisederrors></docstring>

Get the list of users followed by a user on the Hub.
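Symmetrically to listing followers, a minimal sketch (again with `"julien-c"` as an example account):

```python
from itertools import islice

from huggingface_hub import HfApi

api = HfApi()
# Accounts that "julien-c" follows, fetched lazily
for user in islice(api.list_user_following("julien-c"), 5):
    print(user.username)
```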












</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_webhooks</name><anchor>huggingface_hub.HfApi.list_webhooks</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L8981</source><parameters>[{"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved token, which is the recommended
  method for authentication (see https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[WebhookInfo]`</rettype><retdesc>List of webhook info objects.</retdesc></docstring>
List all configured webhooks.







<ExampleCodeBlock anchor="huggingface_hub.HfApi.list_webhooks.example">

Example:
```python
>>> from huggingface_hub import list_webhooks
>>> webhooks = list_webhooks()
>>> len(webhooks)
2
>>> webhooks[0]
WebhookInfo(
    id="654bbbc16f2ec14d77f109cc",
    watched=[WebhookWatchedItem(type="user", name="julien-c"), WebhookWatchedItem(type="org", name="HuggingFaceH4")],
    url="https://webhook.site/a2176e82-5720-43ee-9e06-f91cb4c91548",
    secret="my-secret",
    domains=["repo", "discussion"],
    disabled=False,
)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>merge_pull_request</name><anchor>huggingface_hub.HfApi.merge_pull_request</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6600</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "discussion_num", "val": ": int"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "comment", "val": ": Optional[str] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **discussion_num** (`int`) --
  The number of the Discussion or Pull Request. Must be a strictly positive integer.
- **comment** (`str`, *optional*) --
  An optional comment to post with the status change.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if uploading to a dataset or
  space, `None` or `"model"` if uploading to a model. Default is
  `None`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[DiscussionStatusChange](/docs/huggingface_hub/main/en/package_reference/community#huggingface_hub.DiscussionStatusChange)</rettype><retdesc>the status change event</retdesc></docstring>
Merge a Pull Request.







> [!TIP]
> Raises the following errors:
>
>     - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError)
>       if the HuggingFace API returned an error
>     - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
>       if some parameter value is invalid
>     - [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)
>       If the repository to download from cannot be found. This may be because it doesn't exist,
>       or because it is set to `private` and you do not have access.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>model_info</name><anchor>huggingface_hub.HfApi.model_info</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L2481</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "timeout", "val": ": Optional[float] = None"}, {"name": "securityStatus", "val": ": Optional[bool] = None"}, {"name": "files_metadata", "val": ": bool = False"}, {"name": "expand", "val": ": Optional[list[ExpandModelProperty_T]] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **revision** (`str`, *optional*) --
  The revision of the model repository from which to get the
  information.
- **timeout** (`float`, *optional*) --
  The timeout (in seconds) for the request to the Hub.
- **securityStatus** (`bool`, *optional*) --
  Whether to retrieve the security status from the model
  repository as well. The security status will be returned in the `security_repo_status` field.
- **files_metadata** (`bool`, *optional*) --
  Whether or not to retrieve metadata for files in the repository
  (size, LFS metadata, etc). Defaults to `False`.
- **expand** (`list[ExpandModelProperty_T]`, *optional*) --
  List properties to return in the response. When used, only the properties in the list will be returned.
  This parameter cannot be used if `securityStatus` or `files_metadata` are passed.
  Possible values are `"author"`, `"baseModels"`, `"cardData"`, `"childrenModelCount"`, `"config"`, `"createdAt"`, `"disabled"`, `"downloads"`, `"downloadsAllTime"`, `"gated"`, `"gguf"`, `"inference"`, `"inferenceProviderMapping"`, `"lastModified"`, `"library_name"`, `"likes"`, `"mask_token"`, `"model-index"`, `"pipeline_tag"`, `"private"`, `"safetensors"`, `"sha"`, `"siblings"`, `"spaces"`, `"tags"`, `"transformersInfo"`, `"trendingScore"`, `"widgetData"`, `"usedStorage"`, and `"resourceGroup"`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[huggingface_hub.hf_api.ModelInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.ModelInfo)</rettype><retdesc>The model repository information.</retdesc></docstring>

Get info on one specific model on huggingface.co.

The model can be private if you pass an acceptable token or are logged in.
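A minimal sketch against the public `gpt2` repo:

```python
from huggingface_hub import HfApi

api = HfApi()
# Fetch repository metadata for a public model (no token needed)
info = api.model_info("gpt2")
print(info.id, info.pipeline_tag)
```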







> [!TIP]
> Raises the following errors:
>
>     - [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)
>       If the repository to download from cannot be found. This may be because it doesn't exist,
>       or because it is set to `private` and you do not have access.
>     - [RevisionNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError)
>       If the revision to download from cannot be found.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>move_repo</name><anchor>huggingface_hub.HfApi.move_repo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L3864</source><parameters>[{"name": "from_id", "val": ": str"}, {"name": "to_id", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[str, bool, None] = None"}]</parameters><paramsdesc>- **from_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`. Original repository identifier.
- **to_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`. Final repository identifier.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if uploading to a dataset or
  space, `None` or `"model"` if uploading to a model. Default is
  `None`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Move a repository from namespace1/repo_name1 to namespace2/repo_name2.

Note there are certain limitations. For more information about moving
repositories, please see
https://hf.co/docs/hub/repositories-settings#renaming-or-transferring-a-repo.



> [!TIP]
> Raises the following errors:
>
>     - [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)
>       If the repository to download from cannot be found. This may be because it doesn't exist,
>       or because it is set to `private` and you do not have access.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>paper_info</name><anchor>huggingface_hub.HfApi.paper_info</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L9760</source><parameters>[{"name": "id", "val": ": str"}]</parameters><paramsdesc>- **id** (`str`) --
  arXiv id of the paper.</paramsdesc><paramgroups>0</paramgroups><rettype>`PaperInfo`</rettype><retdesc>A `PaperInfo` object.</retdesc><raises>- `HfHubHTTPError` -- 
  HTTP 404 If the paper does not exist on the Hub.</raises><raisederrors>`HfHubHTTPError`</raisederrors></docstring>

Get information for a paper on the Hub.












</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>parse_safetensors_file_metadata</name><anchor>huggingface_hub.HfApi.parse_safetensors_file_metadata</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L5629</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "filename", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A user or an organization name and a repo name separated by a `/`.
- **filename** (`str`) --
  The name of the file in the repo.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if the file is in a dataset or space, `None` or `"model"` if in a
  model. Default is `None`.
- **revision** (`str`, *optional*) --
  The git revision to fetch the file from. Can be a branch name, a tag, or a commit hash. Defaults to the
  head of the `"main"` branch.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`SafetensorsFileMetadata`</rettype><retdesc>information related to a safetensors file.</retdesc><raises>- `NotASafetensorsRepoError` -- 
  If the repo is not a safetensors repo i.e. doesn't have either a
  `model.safetensors` or a `model.safetensors.index.json` file.
- `SafetensorsParsingError` -- 
  If a safetensors file header couldn't be parsed correctly.</raises><raisederrors>`NotASafetensorsRepoError` or `SafetensorsParsingError`</raisederrors></docstring>

Parse metadata from a safetensors file on the Hub.

To parse metadata from all safetensors files in a repo at once, use [get_safetensors_metadata()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.get_safetensors_metadata).

For more details regarding the safetensors format, check out https://huggingface.co/docs/safetensors/index#format.
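A short sketch, assuming the public `gpt2` repo, which hosts a single `model.safetensors` file:

```python
from huggingface_hub import HfApi

api = HfApi()
# Parse the safetensors header without downloading the weights
metadata = api.parse_safetensors_file_metadata("gpt2", "model.safetensors")
# Inspect a few tensors: name, dtype and shape come from the file header
for name, tensor in list(metadata.tensors.items())[:3]:
    print(name, tensor.dtype, tensor.shape)
```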












</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>pause_inference_endpoint</name><anchor>huggingface_hub.HfApi.pause_inference_endpoint</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L7903</source><parameters>[{"name": "name", "val": ": str"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **name** (`str`) --
  The name of the Inference Endpoint to pause.
- **namespace** (`str`, *optional*) --
  The namespace in which the Inference Endpoint is located. Defaults to the current user.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[InferenceEndpoint](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint)</rettype><retdesc>information about the paused Inference Endpoint.</retdesc></docstring>
Pause an Inference Endpoint.

A paused Inference Endpoint will not be charged. It can be resumed at any time using [resume_inference_endpoint()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.resume_inference_endpoint).
This is different from scaling the Inference Endpoint to zero with [scale_to_zero_inference_endpoint()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.scale_to_zero_inference_endpoint): a
scaled-to-zero Endpoint is automatically restarted when a request is made to it, whereas a paused Endpoint must be resumed explicitly.

For convenience, you can also pause an Inference Endpoint using [InferenceEndpoint.pause()](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.pause).
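A short sketch of typical usage (the Endpoint name is a placeholder for one of your own Endpoints, and the call requires a token with sufficient permissions):

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()

# Pause a running Endpoint; no charges accrue while it is paused
>>> endpoint = api.pause_inference_endpoint("my-endpoint-name")
>>> endpoint.status  # expected to report a paused state
```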








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>pause_space</name><anchor>huggingface_hub.HfApi.pause_space</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L7048</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  ID of the Space to pause. Example: `"Salesforce/BLIP2"`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[SpaceRuntime](/docs/huggingface_hub/main/en/package_reference/space_runtime#huggingface_hub.SpaceRuntime)</rettype><retdesc>Runtime information about your Space including `stage=PAUSED` and requested hardware.</retdesc><raises>- [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  If your Space is not found (error 404), most probably because of a wrong `repo_id` or because your Space
  is private and you are not authenticated.
- [HfHubHTTPError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.HfHubHTTPError) -- 
  403 Forbidden: only the owner of a Space can pause it. If you want to manage a Space that you don't
  own, either ask the owner by opening a Discussion or duplicate the Space.
- [BadRequestError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.BadRequestError) -- 
  If your Space is a static Space. Static Spaces are always running and never billed. If you want to hide
  a static Space, you can set it to private.</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) or [HfHubHTTPError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.HfHubHTTPError) or [BadRequestError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.BadRequestError)</raisederrors></docstring>
Pause your Space.

A paused Space stops executing until manually restarted by its owner. This is different from the sleeping
state that free Spaces enter after 48h of inactivity. Paused time is not billed to your account, no matter the
hardware you've selected. To restart your Space, use [restart_space()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.restart_space) or go to your Space settings page.

For more details, please visit [the docs](https://huggingface.co/docs/hub/spaces-gpus#pause).
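A minimal sketch of typical usage (the `repo_id` is illustrative; you must own the Space):

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()

# Pause a Space you own; billing stops until it is restarted
>>> runtime = api.pause_space(repo_id="username/my-space")
>>> runtime.stage  # expected to be "PAUSED"
```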












</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>permanently_delete_lfs_files</name><anchor>huggingface_hub.HfApi.permanently_delete_lfs_files</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L3527</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "lfs_files", "val": ": Iterable[LFSFileInfo]"}, {"name": "rewrite_history", "val": ": bool = True"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The repository for which you are listing LFS files.
- **lfs_files** (`Iterable[LFSFileInfo]`) --
  An iterable of `LFSFileInfo` items to permanently delete from the repo. Use [list_lfs_files()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.list_lfs_files) to list
  all LFS files from a repo.
- **rewrite_history** (`bool`, *optional*, default to `True`) --
  Whether to rewrite repository history to remove file pointers referencing the deleted LFS files (recommended).
- **repo_type** (`str`, *optional*) --
  Type of repository. Set to `"dataset"` or `"space"` if listing from a dataset or space, `None` or
  `"model"` if listing from a model. Default is `None`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Permanently delete LFS files from a repo on the Hub.

> [!WARNING]
> This is a permanent action that will affect all commits referencing the deleted files and might corrupt your
> repository. This is a non-revertible operation. Use it only if you know what you are doing.



<ExampleCodeBlock anchor="huggingface_hub.HfApi.permanently_delete_lfs_files.example">

Example:
```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> lfs_files = api.list_lfs_files("username/my-cool-repo")

# Filter the files to delete based on a combination of `filename`, `pushed_at`, `ref` or `size`.
# e.g. select only LFS files in the "checkpoints" folder
>>> lfs_files_to_delete = (lfs_file for lfs_file in lfs_files if lfs_file.filename.startswith("checkpoints/"))

# Permanently delete LFS files
>>> api.permanently_delete_lfs_files("username/my-cool-repo", lfs_files_to_delete)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>preupload_lfs_files</name><anchor>huggingface_hub.HfApi.preupload_lfs_files</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L4247</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "additions", "val": ": Iterable[CommitOperationAdd]"}, {"name": "token", "val": ": Union[str, bool, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "create_pr", "val": ": Optional[bool] = None"}, {"name": "num_threads", "val": ": int = 5"}, {"name": "free_memory", "val": ": bool = True"}, {"name": "gitignore_content", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The repository in which you will commit the files, for example: `"username/custom_transformers"`.

- **additions** (`Iterable` of [CommitOperationAdd](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.CommitOperationAdd)) --
  The list of files to upload. Warning: the objects in this list will be mutated to include information
  relative to the upload. Do not reuse the same objects for multiple commits.

- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.

- **repo_type** (`str`, *optional*) --
  The type of repository to upload to (e.g. `"model"`, the default, `"dataset"` or `"space"`).

- **revision** (`str`, *optional*) --
  The git revision to commit from. Defaults to the head of the `"main"` branch.

- **create_pr** (`bool`, *optional*) --
  Whether or not you plan to create a Pull Request with that commit. Defaults to `False`.

- **num_threads** (`int`, *optional*) --
  Number of concurrent threads for uploading files. Defaults to 5.
  Setting it to 2 means at most 2 files will be uploaded concurrently.

- **gitignore_content** (`str`, *optional*) --
  The content of the `.gitignore` file to know which files should be ignored. The order of priority
  is to first check if `gitignore_content` is passed, then check if the `.gitignore` file is present
  in the list of files to commit and finally default to the `.gitignore` file already hosted on the Hub
  (if any).</paramsdesc><paramgroups>0</paramgroups></docstring>
Pre-upload LFS files to S3 in preparation for a future commit.

This method is useful if you are generating the files to upload on-the-fly and you don't want to store them
in memory before uploading them all at once.

> [!WARNING]
> This is a power-user method. You shouldn't need to call it directly to make a normal commit.
> Use [create_commit()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_commit) directly instead.

> [!WARNING]
> Commit operations will be mutated during the process. In particular, the attached `path_or_fileobj` will be
> removed after the upload to save memory (and replaced by an empty `bytes` object). Do not reuse the same
> objects except to pass them to [create_commit()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_commit). If you don't want to remove the attached content from the
> commit operation object, pass `free_memory=False`.



<ExampleCodeBlock anchor="huggingface_hub.HfApi.preupload_lfs_files.example">

Example:
```py
>>> from huggingface_hub import CommitOperationAdd, preupload_lfs_files, create_commit, create_repo

>>> repo_id = create_repo("test_preupload").repo_id

# Generate and preupload LFS files one by one
>>> operations = [] # List of all `CommitOperationAdd` objects that will be generated
>>> for i in range(5):
...     content = ... # generate binary content
...     addition = CommitOperationAdd(path_in_repo=f"shard_{i}_of_5.bin", path_or_fileobj=content)
...     preupload_lfs_files(repo_id, additions=[addition]) # upload + free memory
...     operations.append(addition)

# Create commit
>>> create_commit(repo_id, operations=operations, commit_message="Commit all shards")
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>reject_access_request</name><anchor>huggingface_hub.HfApi.reject_access_request</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L8792</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "user", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "rejection_reason", "val": ": Optional[str]"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The id of the repo to reject the access request for.
- **user** (`str`) --
  The username of the user whose access request should be rejected.
- **repo_type** (`str`, *optional*) --
  The type of the repo to reject access request for. Must be one of `model`, `dataset` or `space`.
  Defaults to `model`.
- **rejection_reason** (`str`, *optional*) --
  Optional rejection reason that will be visible to the user (max 200 characters).
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><raises>- `HfHubHTTPError` -- 
  HTTP 400 if the repo is not gated.
- `HfHubHTTPError` -- 
  HTTP 403 if you only have read-only access to the repo. This can be the case if you don't have a `write`
  or `admin` role in the organization the repo belongs to or if you passed a `read` token.
- `HfHubHTTPError` -- 
  HTTP 404 if the user does not exist on the Hub.
- `HfHubHTTPError` -- 
  HTTP 404 if the user access request cannot be found.
- `HfHubHTTPError` -- 
  HTTP 404 if the user access request is already in the rejected list.</raises><raisederrors>`HfHubHTTPError`</raisederrors></docstring>

Reject an access request from a user for a given gated repo.

A rejected request will go to the rejected list. The user cannot download any file of the repo. Rejected
requests can be accepted or cancelled at any time using [accept_access_request()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.accept_access_request) and [cancel_access_request()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.cancel_access_request).
A cancelled request will go back to the pending list while an accepted request will go to the accepted list.

For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.
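A short usage sketch (the repo id and username below are placeholders; you must administer the gated repo):

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()

# Reject a pending access request on a gated model, with an optional reason shown to the user
>>> api.reject_access_request(
...     repo_id="username/gated-model",
...     user="some-user",
...     rejection_reason="Access is restricted to research partners.",
... )
```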








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>rename_discussion</name><anchor>huggingface_hub.HfApi.rename_discussion</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6458</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "discussion_num", "val": ": int"}, {"name": "new_title", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **discussion_num** (`int`) --
  The number of the Discussion or Pull Request. Must be a strictly positive integer.
- **new_title** (`str`) --
  The new title for the discussion.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if uploading to a dataset or
  space, `None` or `"model"` if uploading to a model. Default is
  `None`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[DiscussionTitleChange](/docs/huggingface_hub/main/en/package_reference/community#huggingface_hub.DiscussionTitleChange)</rettype><retdesc>the title change event</retdesc></docstring>
Renames a Discussion.







<ExampleCodeBlock anchor="huggingface_hub.HfApi.rename_discussion.example">

Examples:
```python
>>> new_title = "New title, fixing a typo"
>>> HfApi().rename_discussion(
...     repo_id="username/repo_name",
...     discussion_num=34,
...     new_title=new_title
... )
# DiscussionTitleChange(id='deadbeef0000000', type='title-change', ...)

```

</ExampleCodeBlock>

> [!TIP]
> Raises the following errors:
>
>     - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError)
>       if the HuggingFace API returned an error
>     - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
>       if some parameter value is invalid
>     - [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)
>       If the repository to download from cannot be found. This may be because it doesn't exist,
>       or because it is set to `private` and you do not have access.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>repo_exists</name><anchor>huggingface_hub.HfApi.repo_exists</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L2767</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[str, bool, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if getting repository info from a dataset or a space,
  `None` or `"model"` if getting repository info from a model. Default is `None`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><retdesc>True if the repository exists, False otherwise.</retdesc></docstring>

Checks if a repository exists on the Hugging Face Hub.





<ExampleCodeBlock anchor="huggingface_hub.HfApi.repo_exists.example">

Examples:
```py
>>> from huggingface_hub import repo_exists
>>> repo_exists("google/gemma-7b")
True
>>> repo_exists("google/not-a-repo")
False
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>repo_info</name><anchor>huggingface_hub.HfApi.repo_info</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L2696</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "timeout", "val": ": Optional[float] = None"}, {"name": "files_metadata", "val": ": bool = False"}, {"name": "expand", "val": ": Optional[Union[ExpandModelProperty_T, ExpandDatasetProperty_T, ExpandSpaceProperty_T]] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **revision** (`str`, *optional*) --
  The revision of the repository from which to get the
  information.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if getting repository info from a dataset or a space,
  `None` or `"model"` if getting repository info from a model. Default is `None`.
- **timeout** (`float`, *optional*) --
  Timeout in seconds for the request to the Hub.
- **expand** (`ExpandModelProperty_T` or `ExpandDatasetProperty_T` or `ExpandSpaceProperty_T`, *optional*) --
  List properties to return in the response. When used, only the properties in the list will be returned.
  This parameter cannot be used if `files_metadata` is passed.
  For an exhaustive list of available properties, check out [model_info()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.model_info), [dataset_info()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.dataset_info) or [space_info()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.space_info).
- **files_metadata** (`bool`, *optional*) --
  Whether or not to retrieve metadata for files in the repository
  (size, LFS metadata, etc). Defaults to `False`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`Union[SpaceInfo, DatasetInfo, ModelInfo]`</rettype><retdesc>The repository information, as a
[huggingface_hub.hf_api.DatasetInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.DatasetInfo), [huggingface_hub.hf_api.ModelInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.ModelInfo)
or [huggingface_hub.hf_api.SpaceInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.SpaceInfo) object.</retdesc></docstring>

Get the info object for a given repo of a given type.
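A brief sketch of typical usage (repo ids are illustrative public repos):

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()

# Returns a ModelInfo, DatasetInfo or SpaceInfo object depending on `repo_type`
>>> model = api.repo_info("google/gemma-7b")
>>> dataset = api.repo_info("HuggingFaceH4/ultrachat_200k", repo_type="dataset")

# Fetch only specific properties with `expand` (cannot be combined with `files_metadata`)
>>> info = api.repo_info("google/gemma-7b", expand=["downloads", "likes"])
```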







> [!TIP]
> Raises the following errors:
>
>     - [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)
>       If the repository to download from cannot be found. This may be because it doesn't exist,
>       or because it is set to `private` and you do not have access.
>     - [RevisionNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError)
>       If the revision to download from cannot be found.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>request_space_hardware</name><anchor>huggingface_hub.HfApi.request_space_hardware</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6950</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "hardware", "val": ": SpaceHardware"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "sleep_time", "val": ": Optional[int] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  ID of the repo to update. Example: `"bigcode/in-the-stack"`.
- **hardware** (`str` or [SpaceHardware](/docs/huggingface_hub/main/en/package_reference/space_runtime#huggingface_hub.SpaceHardware)) --
  Hardware on which to run the Space. Example: `"t4-medium"`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **sleep_time** (`int`, *optional*) --
  Number of seconds of inactivity to wait before a Space is put to sleep. Set to `-1` if you don't want
  your Space to sleep (default behavior for upgraded hardware). For free hardware, you can't configure
  the sleep time (value is fixed to 48 hours of inactivity).
  See https://huggingface.co/docs/hub/spaces-gpus#sleep-time for more details.</paramsdesc><paramgroups>0</paramgroups><rettype>[SpaceRuntime](/docs/huggingface_hub/main/en/package_reference/space_runtime#huggingface_hub.SpaceRuntime)</rettype><retdesc>Runtime information about a Space including Space stage and hardware.</retdesc></docstring>
Request new hardware for a Space.
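A minimal sketch (the `repo_id` is illustrative; you must own the Space and have billing enabled for paid hardware):

```py
>>> from huggingface_hub import HfApi, SpaceHardware
>>> api = HfApi()

# Upgrade a Space to a T4, sleeping after 1h of inactivity
>>> runtime = api.request_space_hardware(
...     repo_id="username/my-space",
...     hardware=SpaceHardware.T4_MEDIUM,
...     sleep_time=3600,
... )
>>> runtime.hardware
```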







> [!TIP]
> It is also possible to request hardware directly when creating the Space repo! See [create_repo()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_repo) for details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>request_space_storage</name><anchor>huggingface_hub.HfApi.request_space_storage</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L7251</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "storage", "val": ": SpaceStorage"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  ID of the Space to update. Example: `"open-llm-leaderboard/open_llm_leaderboard"`.
- **storage** (`str` or [SpaceStorage](/docs/huggingface_hub/main/en/package_reference/space_runtime#huggingface_hub.SpaceStorage)) --
  Storage tier. Either 'small', 'medium', or 'large'.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[SpaceRuntime](/docs/huggingface_hub/main/en/package_reference/space_runtime#huggingface_hub.SpaceRuntime)</rettype><retdesc>Runtime information about a Space including Space stage and hardware.</retdesc></docstring>
Request persistent storage for a Space.
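A short sketch of typical usage (the `repo_id` is illustrative; you must own the Space):

```py
>>> from huggingface_hub import HfApi, SpaceStorage
>>> api = HfApi()

# Attach a small persistent disk to a Space
>>> runtime = api.request_space_storage(repo_id="username/my-space", storage=SpaceStorage.SMALL)
```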







> [!TIP]
> It is not possible to decrease persistent storage after it's granted. To do so, you must delete it
> via [delete_space_storage()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.delete_space_storage).


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>restart_space</name><anchor>huggingface_hub.HfApi.restart_space</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L7087</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "factory_reboot", "val": ": bool = False"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  ID of the Space to restart. Example: `"Salesforce/BLIP2"`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **factory_reboot** (`bool`, *optional*) --
  If `True`, the Space will be rebuilt from scratch without caching any requirements.</paramsdesc><paramgroups>0</paramgroups><rettype>[SpaceRuntime](/docs/huggingface_hub/main/en/package_reference/space_runtime#huggingface_hub.SpaceRuntime)</rettype><retdesc>Runtime information about your Space.</retdesc><raises>- [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  If your Space is not found (error 404), most probably because of a wrong `repo_id` or because your Space
  is private and you are not authenticated.
- [HfHubHTTPError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.HfHubHTTPError) -- 
  403 Forbidden: only the owner of a Space can restart it. If you want to restart a Space that you don't
  own, either ask the owner by opening a Discussion or duplicate the Space.
- [BadRequestError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.BadRequestError) -- 
  If your Space is a static Space. Static Spaces are always running and never billed. If you want to hide
  a static Space, you can set it to private.</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) or [HfHubHTTPError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.HfHubHTTPError) or [BadRequestError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.BadRequestError)</raisederrors></docstring>
Restart your Space.

This is the only way to programmatically restart a Space if you've put it on Pause (see [pause_space()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.pause_space)). You
must be the owner of the Space to restart it. If you are using upgraded hardware, your account will be
billed as soon as the Space is restarted. You can trigger a restart no matter the current state of a Space.

For more details, please visit [the docs](https://huggingface.co/docs/hub/spaces-gpus#pause).
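A minimal usage sketch (the `repo_id` is illustrative; you must own the Space):

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()

# Restart a paused Space
>>> runtime = api.restart_space(repo_id="username/my-space")

# Rebuild from scratch, ignoring any cached requirements
>>> runtime = api.restart_space(repo_id="username/my-space", factory_reboot=True)
```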












</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>resume_inference_endpoint</name><anchor>huggingface_hub.HfApi.resume_inference_endpoint</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L7938</source><parameters>[{"name": "name", "val": ": str"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "running_ok", "val": ": bool = True"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **name** (`str`) --
  The name of the Inference Endpoint to resume.
- **namespace** (`str`, *optional*) --
  The namespace in which the Inference Endpoint is located. Defaults to the current user.
- **running_ok** (`bool`, *optional*) --
  If `True`, the method will not raise an error if the Inference Endpoint is already running. Defaults to
  `True`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[InferenceEndpoint](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint)</rettype><retdesc>information about the resumed Inference Endpoint.</retdesc></docstring>
Resume an Inference Endpoint.

For convenience, you can also resume an Inference Endpoint using [InferenceEndpoint.resume()](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.resume).
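A short sketch of typical usage (the Endpoint name is a placeholder for one of your own Endpoints):

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()

# Resume a paused Endpoint; billing resumes once it is running
>>> endpoint = api.resume_inference_endpoint("my-endpoint-name")
>>> endpoint.wait()  # optionally block until the Endpoint is up
```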








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>resume_scheduled_job</name><anchor>huggingface_hub.HfApi.resume_scheduled_job</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L10488</source><parameters>[{"name": "scheduled_job_id", "val": ": str"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **scheduled_job_id** (`str`) --
  ID of the scheduled Job.

- **namespace** (`str`, *optional*) --
  The namespace where the scheduled Job is. Defaults to the current user's namespace.

- **token** (`Union[bool, str, None]`, *optional*) --
  A valid user access token. If not provided, the locally saved token will be used, which is the
  recommended authentication method. Set to `False` to disable authentication.
  Refer to: https://huggingface.co/docs/huggingface_hub/quick-start#authentication.</paramsdesc><paramgroups>0</paramgroups></docstring>

Resume (unpause) a scheduled compute Job on Hugging Face infrastructure.
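A minimal sketch (the scheduled Job id below is a placeholder):

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()

# Resume a previously paused scheduled Job in your namespace
>>> api.resume_scheduled_job("scheduled-job-id")
```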




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>revision_exists</name><anchor>huggingface_hub.HfApi.revision_exists</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L2811</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "revision", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[str, bool, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **revision** (`str`) --
  The revision of the repository to check.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if getting repository info from a dataset or a space,
  `None` or `"model"` if getting repository info from a model. Default is `None`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><retdesc>True if the repository and the revision exist, False otherwise.</retdesc></docstring>

Checks if a specific revision exists on a repo on the Hugging Face Hub.





<ExampleCodeBlock anchor="huggingface_hub.HfApi.revision_exists.example">

Examples:
```py
>>> from huggingface_hub import revision_exists
>>> revision_exists("google/gemma-7b", "float16")
True
>>> revision_exists("google/gemma-7b", "not-a-revision")
False
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>run_as_future</name><anchor>huggingface_hub.HfApi.run_as_future</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L1697</source><parameters>[{"name": "fn", "val": ": Callable[..., R]"}, {"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **fn** (`Callable`) --
  The method to run in the background.
- **\*args, \*\*kwargs** --
  Arguments with which the method will be called.</paramsdesc><paramgroups>0</paramgroups><rettype>`Future`</rettype><retdesc>a [Future](https://docs.python.org/3/library/concurrent.futures.html#future-objects) instance to
get the result of the task.</retdesc></docstring>

Run a method in the background and return a Future instance.

The main goal is to run methods without blocking the main thread (e.g. to push data during training).
Background jobs are queued to preserve order but are not run in parallel. If you need to speed up your scripts
by parallelizing many calls to the API, you must set up and use your own [ThreadPoolExecutor](https://docs.python.org/3/library/concurrent.futures.html#threadpoolexecutor).
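The queuing behavior described above can be reproduced with the standard library alone. The sketch below is illustrative only (it does not use huggingface_hub): a single-worker executor runs submitted tasks strictly in submission order, which is what the background queue guarantees.

```python
from concurrent.futures import ThreadPoolExecutor

# A single background worker: tasks are queued and run one after another,
# in submission order (illustrative sketch, not huggingface_hub code).
executor = ThreadPoolExecutor(max_workers=1)

results = []
futures = [executor.submit(results.append, i) for i in range(5)]
for future in futures:
    future.result()  # block until that task has completed

print(results)  # [0, 1, 2, 3, 4] -- sequential, in submission order
executor.shutdown()
```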

Note: Most-used methods like [upload_file()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_file), [upload_folder()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_folder) and [create_commit()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_commit) have a `run_as_future: bool`
argument to directly call them in the background. This is equivalent to calling `api.run_as_future(...)` on them
but less verbose.







<ExampleCodeBlock anchor="huggingface_hub.HfApi.run_as_future.example">

Example:
```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> future = api.run_as_future(api.whoami) # instant
>>> future.done()
False
>>> future.result() # wait until complete and return result
(...)
>>> future.done()
True
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>run_job</name><anchor>huggingface_hub.HfApi.run_job</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L9844</source><parameters>[{"name": "image", "val": ": str"}, {"name": "command", "val": ": list[str]"}, {"name": "env", "val": ": Optional[dict[str, Any]] = None"}, {"name": "secrets", "val": ": Optional[dict[str, Any]] = None"}, {"name": "flavor", "val": ": Optional[SpaceHardware] = None"}, {"name": "timeout", "val": ": Optional[Union[int, float, str]] = None"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **image** (`str`) --
  The Docker image to use.
  Examples: `"ubuntu"`, `"python:3.12"`, `"pytorch/pytorch:2.6.0-cuda12.4-cudnn9-devel"`.
  Example with an image from a Space: `"hf.co/spaces/lhoestq/duckdb"`.

- **command** (`list[str]`) --
  The command to run. Example: `["echo", "hello"]`.

- **env** (`dict[str, Any]`, *optional*) --
  Defines the environment variables for the Job.

- **secrets** (`dict[str, Any]`, *optional*) --
  Defines the secret environment variables for the Job.

- **flavor** (`str`, *optional*) --
  Flavor for the hardware, as in Hugging Face Spaces. See [SpaceHardware](/docs/huggingface_hub/main/en/package_reference/space_runtime#huggingface_hub.SpaceHardware) for possible values.
  Defaults to `"cpu-basic"`.

- **timeout** (`Union[int, float, str]`, *optional*) --
  Max duration for the Job: int/float with s (seconds, default), m (minutes), h (hours) or d (days).
  Example: `300` or `"5m"` for 5 minutes.

- **namespace** (`str`, *optional*) --
  The namespace where the Job will be created. Defaults to the current user's namespace.

- **token** (`Union[bool, str, None]`, *optional*) --
  A valid user access token. If not provided, the locally saved token will be used, which is the
  recommended authentication method. Set to `False` to disable authentication.
  Refer to: https://huggingface.co/docs/huggingface_hub/quick-start#authentication.</paramsdesc><paramgroups>0</paramgroups></docstring>

Run compute Jobs on Hugging Face infrastructure.



Example:
<ExampleCodeBlock anchor="huggingface_hub.HfApi.run_job.example">

Run your first Job:

```python
>>> from huggingface_hub import run_job
>>> run_job(image="python:3.12", command=["python", "-c" ,"print('Hello from HF compute!')"])
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.HfApi.run_job.example-2">

Run a GPU Job:

```python
>>> from huggingface_hub import run_job
>>> image = "pytorch/pytorch:2.6.0-cuda12.4-cudnn9-devel"
>>> command = ["python", "-c", "import torch; print(f'This code ran with the following GPU: {torch.cuda.get_device_name()}')"]
>>> run_job(image=image, command=command, flavor="a10g-small")
```

</ExampleCodeBlock>
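The `timeout` parameter accepts a plain number (seconds) or a string with a unit suffix such as `"5m"`, `"2h"`, or `"1d"`. The helper below is a hypothetical sketch of that conversion, written only to illustrate the accepted format; it is not part of huggingface_hub:

```python
def to_seconds(timeout):
    """Convert a Job timeout (int/float seconds, or a string like '5m') to seconds.

    Hypothetical helper illustrating the format accepted by `timeout`;
    huggingface_hub performs this conversion internally.
    """
    if isinstance(timeout, (int, float)):
        return float(timeout)
    units = {"s": 1, "m": 60, "h": 3600, "d": 86400}
    if timeout[-1] in units:
        return float(timeout[:-1]) * units[timeout[-1]]
    return float(timeout)  # bare numeric string means seconds

print(to_seconds(300))    # 300.0
print(to_seconds("5m"))   # 300.0
print(to_seconds("2h"))   # 7200.0
```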



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>run_uv_job</name><anchor>huggingface_hub.HfApi.run_uv_job</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L10130</source><parameters>[{"name": "script", "val": ": str"}, {"name": "script_args", "val": ": Optional[list[str]] = None"}, {"name": "dependencies", "val": ": Optional[list[str]] = None"}, {"name": "python", "val": ": Optional[str] = None"}, {"name": "image", "val": ": Optional[str] = None"}, {"name": "env", "val": ": Optional[dict[str, Any]] = None"}, {"name": "secrets", "val": ": Optional[dict[str, Any]] = None"}, {"name": "flavor", "val": ": Optional[SpaceHardware] = None"}, {"name": "timeout", "val": ": Optional[Union[int, float, str]] = None"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "_repo", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **script** (`str`) --
  Path or URL of the UV script, or a command.

- **script_args** (`list[str]`, *optional*) --
  Arguments to pass to the script or command.

- **dependencies** (`list[str]`, *optional*) --
  Dependencies to use to run the UV script.

- **python** (`str`, *optional*) --
  Use a specific Python version. Default is 3.12.

- **image** (`str`, *optional*, defaults to `"ghcr.io/astral-sh/uv:python3.12-bookworm"`) --
  Use a custom Docker image with `uv` installed.

- **env** (`dict[str, Any]`, *optional*) --
  Defines the environment variables for the Job.

- **secrets** (`dict[str, Any]`, *optional*) --
  Defines the secret environment variables for the Job.

- **flavor** (`str`, *optional*) --
  Flavor for the hardware, as in Hugging Face Spaces. See [SpaceHardware](/docs/huggingface_hub/main/en/package_reference/space_runtime#huggingface_hub.SpaceHardware) for possible values.
  Defaults to `"cpu-basic"`.

- **timeout** (`Union[int, float, str]`, *optional*) --
  Max duration for the Job: int/float with s (seconds, default), m (minutes), h (hours) or d (days).
  Example: `300` or `"5m"` for 5 minutes.

- **namespace** (`str`, *optional*) --
  The namespace where the Job will be created. Defaults to the current user's namespace.

- **token** (`Union[bool, str, None]`, *optional*) --
  A valid user access token. If not provided, the locally saved token will be used, which is the
  recommended authentication method. Set to `False` to disable authentication.
  Refer to: https://huggingface.co/docs/huggingface_hub/quick-start#authentication.</paramsdesc><paramgroups>0</paramgroups></docstring>

Run a UV script Job on Hugging Face infrastructure.



Example:

<ExampleCodeBlock anchor="huggingface_hub.HfApi.run_uv_job.example">

Run a script from a URL:

```python
>>> from huggingface_hub import run_uv_job
>>> script = "https://raw.githubusercontent.com/huggingface/trl/refs/heads/main/trl/scripts/sft.py"
>>> script_args = ["--model_name_or_path", "Qwen/Qwen2-0.5B", "--dataset_name", "trl-lib/Capybara", "--push_to_hub"]
>>> run_uv_job(script, script_args=script_args, dependencies=["trl"], flavor="a10g-small")
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.HfApi.run_uv_job.example-2">

Run a local script:

```python
>>> from huggingface_hub import run_uv_job
>>> script = "my_sft.py"
>>> script_args = ["--model_name_or_path", "Qwen/Qwen2-0.5B", "--dataset_name", "trl-lib/Capybara", "--push_to_hub"]
>>> run_uv_job(script, script_args=script_args, dependencies=["trl"], flavor="a10g-small")
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.HfApi.run_uv_job.example-3">

Run a command:

```python
>>> from huggingface_hub import run_uv_job
>>> script = "lighteval"
>>> script_args = ["endpoint", "inference-providers", "model_name=openai/gpt-oss-20b,provider=auto", "lighteval|gsm8k|0|0"]
>>> run_uv_job(script, script_args=script_args, dependencies=["lighteval"], flavor="a10g-small")
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_to_zero_inference_endpoint</name><anchor>huggingface_hub.HfApi.scale_to_zero_inference_endpoint</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L7984</source><parameters>[{"name": "name", "val": ": str"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **name** (`str`) --
  The name of the Inference Endpoint to scale to zero.
- **namespace** (`str`, *optional*) --
  The namespace in which the Inference Endpoint is located. Defaults to the current user.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[InferenceEndpoint](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint)</rettype><retdesc>information about the scaled-to-zero Inference Endpoint.</retdesc></docstring>
Scale an Inference Endpoint to zero.

An Inference Endpoint scaled to zero will not be charged. It will be resumed on the next request, with a
cold start delay. This is different from pausing the Inference Endpoint with [pause_inference_endpoint()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.pause_inference_endpoint), which
requires a manual resume with [resume_inference_endpoint()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.resume_inference_endpoint).

For convenience, you can also scale an Inference Endpoint to zero using [InferenceEndpoint.scale_to_zero()](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.scale_to_zero).








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_space_sleep_time</name><anchor>huggingface_hub.HfApi.set_space_sleep_time</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L7000</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "sleep_time", "val": ": int"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  ID of the repo to update. Example: `"bigcode/in-the-stack"`.
- **sleep_time** (`int`, *optional*) --
  Number of seconds of inactivity to wait before a Space is put to sleep. Set to `-1` if you don't want
  your Space to pause (default behavior for upgraded hardware). For free hardware, you can't configure
  the sleep time (value is fixed to 48 hours of inactivity).
  See https://huggingface.co/docs/hub/spaces-gpus#sleep-time for more details.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[SpaceRuntime](/docs/huggingface_hub/main/en/package_reference/space_runtime#huggingface_hub.SpaceRuntime)</rettype><retdesc>Runtime information about a Space including Space stage and hardware.</retdesc></docstring>
Set a custom sleep time for a Space running on upgraded hardware.

Your Space will go to sleep after X seconds of inactivity. You are not billed while your Space is in "sleep"
mode. When a new visitor lands on your Space, it wakes up. Only upgraded hardware can have a
configurable sleep time. To learn more about the sleep stage, please refer to
https://huggingface.co/docs/hub/spaces-gpus#sleep-time.







> [!TIP]
> It is also possible to set a custom sleep time when requesting hardware with [request_space_hardware()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.request_space_hardware).


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>snapshot_download</name><anchor>huggingface_hub.HfApi.snapshot_download</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L5374</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "cache_dir", "val": ": Union[str, Path, None] = None"}, {"name": "local_dir", "val": ": Union[str, Path, None] = None"}, {"name": "etag_timeout", "val": ": float = 10"}, {"name": "force_download", "val": ": bool = False"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "local_files_only", "val": ": bool = False"}, {"name": "allow_patterns", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "ignore_patterns", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "max_workers", "val": ": int = 8"}, {"name": "tqdm_class", "val": ": Optional[type[base_tqdm]] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A user or an organization name and a repo name separated by a `/`.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if downloading from a dataset or space,
  `None` or `"model"` if downloading from a model. Default is `None`.
- **revision** (`str`, *optional*) --
  An optional Git revision id which can be a branch name, a tag, or a
  commit hash.
- **cache_dir** (`str`, `Path`, *optional*) --
  Path to the folder where cached files are stored.
- **local_dir** (`str` or `Path`, *optional*) --
  If provided, the downloaded files will be placed under this directory.
- **etag_timeout** (`float`, *optional*, defaults to `10`) --
  When fetching the ETag, how many seconds to wait for the server to send
  data before giving up; this value is passed to `httpx.request`.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether the file should be downloaded even if it already exists in the local cache.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  If `True`, avoid downloading the file and return the path to the
  local cached file if it exists.
- **allow_patterns** (`list[str]` or `str`, *optional*) --
  If provided, only files matching at least one pattern are downloaded.
- **ignore_patterns** (`list[str]` or `str`, *optional*) --
  If provided, files matching any of the patterns are not downloaded.
- **max_workers** (`int`, *optional*) --
  Number of concurrent threads to download files (1 thread = 1 file download).
  Defaults to 8.
- **tqdm_class** (`tqdm`, *optional*) --
  If provided, overwrites the default behavior for the progress bar. Passed
  argument must inherit from `tqdm.auto.tqdm` or at least mimic its behavior.
  Note that the `tqdm_class` is not passed to each individual download.
  Defaults to the custom HF progress bar that can be disabled by setting
  `HF_HUB_DISABLE_PROGRESS_BARS` environment variable.</paramsdesc><paramgroups>0</paramgroups><rettype>`str`</rettype><retdesc>folder path of the repo snapshot.</retdesc><raises>- [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  If the repository to download from cannot be found. This may be because it doesn't exist,
  or because it is set to `private` and you do not have access.
- [RevisionNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError) -- 
  If the revision to download from cannot be found.
- [`EnvironmentError`](https://docs.python.org/3/library/exceptions.html#EnvironmentError) -- 
  If `token=True` and the token cannot be found.
- [`OSError`](https://docs.python.org/3/library/exceptions.html#OSError) -- if
  ETag cannot be determined.
- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  if some parameter value is invalid.</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) or [RevisionNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError) or ``EnvironmentError`` or ``OSError`` or ``ValueError``</raisederrors></docstring>
Download repo files.

Download a whole snapshot of a repo's files at the specified revision. This is useful when you want all files from
a repo, because you don't know which ones you will need a priori. All files are nested inside a folder in order
to keep their actual filename relative to that folder. You can also filter which files to download using
`allow_patterns` and `ignore_patterns`.

If `local_dir` is provided, the file structure from the repo will be replicated in this location. When using this
option, the `cache_dir` will not be used and a `.cache/huggingface/` folder will be created at the root of `local_dir`
to store some metadata related to the downloaded files. While this mechanism is not as robust as the main
cache-system, it's optimized for regularly pulling the latest version of a repository.

An alternative would be to clone the repo but this requires git and git-lfs to be installed and properly
configured. It is also not possible to filter which files to download when cloning a repository using git.
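The `allow_patterns` / `ignore_patterns` filters use shell-style wildcard matching. A rough sketch of the selection logic (illustrative only; huggingface_hub implements this internally):

```python
from fnmatch import fnmatch

def filter_files(files, allow_patterns=None, ignore_patterns=None):
    """Keep files matching any allow pattern and no ignore pattern.

    Illustrative sketch of snapshot_download's filtering behavior;
    not the library's actual implementation.
    """
    selected = []
    for path in files:
        if allow_patterns is not None and not any(fnmatch(path, p) for p in allow_patterns):
            continue  # not explicitly allowed
        if ignore_patterns is not None and any(fnmatch(path, p) for p in ignore_patterns):
            continue  # explicitly ignored
        selected.append(path)
    return selected

files = ["config.json", "model.safetensors", "model.bin", "README.md"]
print(filter_files(files, allow_patterns=["*.json", "*.safetensors"]))
# ['config.json', 'model.safetensors']
print(filter_files(files, ignore_patterns=["*.bin"]))
# ['config.json', 'model.safetensors', 'README.md']
```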












</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>space_info</name><anchor>huggingface_hub.HfApi.space_info</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L2626</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "timeout", "val": ": Optional[float] = None"}, {"name": "files_metadata", "val": ": bool = False"}, {"name": "expand", "val": ": Optional[list[ExpandSpaceProperty_T]] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **revision** (`str`, *optional*) --
  The revision of the space repository from which to get the
  information.
- **timeout** (`float`, *optional*) --
  Timeout in seconds for the request to the Hub.
- **files_metadata** (`bool`, *optional*) --
  Whether or not to retrieve metadata for files in the repository
  (size, LFS metadata, etc). Defaults to `False`.
- **expand** (`list[ExpandSpaceProperty_T]`, *optional*) --
  List properties to return in the response. When used, only the properties in the list will be returned.
  This parameter cannot be used if `full` is passed.
  Possible values are `"author"`, `"cardData"`, `"createdAt"`, `"datasets"`, `"disabled"`, `"lastModified"`, `"likes"`, `"models"`, `"private"`, `"runtime"`, `"sdk"`, `"siblings"`, `"sha"`, `"subdomain"`, `"tags"`, `"trendingScore"`, `"usedStorage"`, and `"resourceGroup"`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[SpaceInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.SpaceInfo)</rettype><retdesc>The space repository information.</retdesc></docstring>

Get info on one specific Space on huggingface.co.

The Space can be private if you pass an acceptable token.







> [!TIP]
> Raises the following errors:
>
>     - [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)
>       If the repository to download from cannot be found. This may be because it doesn't exist,
>       or because it is set to `private` and you do not have access.
>     - [RevisionNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError)
>       If the revision to download from cannot be found.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>super_squash_history</name><anchor>huggingface_hub.HfApi.super_squash_history</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L3393</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "branch", "val": ": Optional[str] = None"}, {"name": "commit_message", "val": ": Optional[str] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[str, bool, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated by a `/`.
- **branch** (`str`, *optional*) --
  The branch to squash. Defaults to the head of the `"main"` branch.
- **commit_message** (`str`, *optional*) --
  The commit message to use for the squashed commit.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if listing commits from a dataset or a Space, `None` or `"model"` if
  listing from a model. Default is `None`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><raises>- [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo
  does not exist.
- [RevisionNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError) -- 
  If the branch to squash cannot be found.
- [BadRequestError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.BadRequestError) -- 
  If the branch reference is invalid. You cannot squash history on tags.</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) or [RevisionNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError) or [BadRequestError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.BadRequestError)</raisederrors></docstring>
Squash commit history on a branch for a repo on the Hub.

Squashing the repo history is useful when you know you'll make hundreds of commits and you don't want to
clutter the history. Squashing commits can only be performed from the head of a branch.

> [!WARNING]
> Once squashed, the commit history cannot be retrieved. This is a non-revertible operation.

> [!WARNING]
> Once the history of a branch has been squashed, it is not possible to merge it back into another branch since
> their history will have diverged.







<ExampleCodeBlock anchor="huggingface_hub.HfApi.super_squash_history.example">

Example:
```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()

# Create repo
>>> repo_id = api.create_repo("test-squash").repo_id

# Make a lot of commits.
>>> api.upload_file(repo_id=repo_id, path_in_repo="file.txt", path_or_fileobj=b"content")
>>> api.upload_file(repo_id=repo_id, path_in_repo="lfs.bin", path_or_fileobj=b"content")
>>> api.upload_file(repo_id=repo_id, path_in_repo="file.txt", path_or_fileobj=b"another_content")

# Squash history
>>> api.super_squash_history(repo_id=repo_id)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>suspend_scheduled_job</name><anchor>huggingface_hub.HfApi.suspend_scheduled_job</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L10459</source><parameters>[{"name": "scheduled_job_id", "val": ": str"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **scheduled_job_id** (`str`) --
  ID of the scheduled Job.

- **namespace** (`str`, *optional*) --
  The namespace where the scheduled Job is. Defaults to the current user's namespace.

- **token** (`Union[bool, str, None]`, *optional*) --
  A valid user access token. If not provided, the locally saved token will be used, which is the
  recommended authentication method. Set to `False` to disable authentication.
  Refer to: https://huggingface.co/docs/huggingface_hub/quick-start#authentication.</paramsdesc><paramgroups>0</paramgroups></docstring>

Suspend (pause) a scheduled compute Job on Hugging Face infrastructure.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unlike</name><anchor>huggingface_hub.HfApi.unlike</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L2315</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The repository to unlike. Example: `"user/my-cool-model"`.

- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.

- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if unliking a dataset or space, `None` or
  `"model"` if unliking a model. Default is `None`.</paramsdesc><paramgroups>0</paramgroups><raises>- [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  If repository is not found (error 404): wrong repo_id/repo_type, private
  but not authenticated or repo does not exist.</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)</raisederrors></docstring>

Unlike a given repo on the Hub (e.g. remove it from your favorites list).

To prevent spam usage, it is not possible to `like` a repository from a script.

See also [list_liked_repos()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.list_liked_repos).







<ExampleCodeBlock anchor="huggingface_hub.HfApi.unlike.example">

Example:
```python
>>> from huggingface_hub import list_liked_repos, unlike
>>> "gpt2" in list_liked_repos().models # we assume you have already liked gpt2
True
>>> unlike("gpt2")
>>> "gpt2" in list_liked_repos().models
False
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>update_collection_item</name><anchor>huggingface_hub.HfApi.update_collection_item</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L8384</source><parameters>[{"name": "collection_slug", "val": ": str"}, {"name": "item_object_id", "val": ": str"}, {"name": "note", "val": ": Optional[str] = None"}, {"name": "position", "val": ": Optional[int] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **collection_slug** (`str`) --
  Slug of the collection to update. Example: `"TheBloke/recent-models-64f9a55bb3115b4f513ec026"`.
- **item_object_id** (`str`) --
  ID of the item in the collection. This is not the id of the item on the Hub (repo_id or paper id).
  It must be retrieved from a [CollectionItem](/docs/huggingface_hub/main/en/package_reference/collections#huggingface_hub.CollectionItem) object. Example: `collection.items[0].item_object_id`.
- **note** (`str`, *optional*) --
  A note to attach to the item in the collection. The maximum size for a note is 500 characters.
- **position** (`int`, *optional*) --
  New position of the item in the collection.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Update an item in a collection.



<ExampleCodeBlock anchor="huggingface_hub.HfApi.update_collection_item.example">

Example:

```py
>>> from huggingface_hub import get_collection, update_collection_item

# Get collection first
>>> collection = get_collection("TheBloke/recent-models-64f9a55bb3115b4f513ec026")

# Update item based on its ID (add note + update position)
>>> update_collection_item(
...     collection_slug="TheBloke/recent-models-64f9a55bb3115b4f513ec026",
...     item_object_id=collection.items[-1].item_object_id,
...     note="Newly updated model!"
...     position=0,
... )
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>update_collection_metadata</name><anchor>huggingface_hub.HfApi.update_collection_metadata</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L8196</source><parameters>[{"name": "collection_slug", "val": ": str"}, {"name": "title", "val": ": Optional[str] = None"}, {"name": "description", "val": ": Optional[str] = None"}, {"name": "position", "val": ": Optional[int] = None"}, {"name": "private", "val": ": Optional[bool] = None"}, {"name": "theme", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **collection_slug** (`str`) --
  Slug of the collection to update. Example: `"TheBloke/recent-models-64f9a55bb3115b4f513ec026"`.
- **title** (`str`, *optional*) --
  Title of the collection to update.
- **description** (`str`, *optional*) --
  Description of the collection to update.
- **position** (`int`, *optional*) --
  New position of the collection in the list of collections of the user.
- **private** (`bool`, *optional*) --
  Whether the collection should be private or not.
- **theme** (`str`, *optional*) --
  Theme of the collection on the Hub.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Update metadata of a collection on the Hub.

All arguments are optional. Only provided metadata will be updated.



Returns: [Collection](/docs/huggingface_hub/main/en/package_reference/collections#huggingface_hub.Collection)

<ExampleCodeBlock anchor="huggingface_hub.HfApi.update_collection_metadata.example">

Example:

```py
>>> from huggingface_hub import update_collection_metadata
>>> collection = update_collection_metadata(
...     collection_slug="username/iccv-2023-64f9a55bb3115b4f513ec026",
...     title="ICCV Oct. 2023",
...     description="Portfolio of models, datasets, papers and demos I presented at ICCV Oct. 2023",
...     private=False,
...     theme="pink",
... )
>>> collection.slug
"username/iccv-oct-2023-64f9a55bb3115b4f513ec026"
# ^collection slug got updated but not the trailing ID
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>update_inference_endpoint</name><anchor>huggingface_hub.HfApi.update_inference_endpoint</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L7738</source><parameters>[{"name": "name", "val": ": str"}, {"name": "accelerator", "val": ": Optional[str] = None"}, {"name": "instance_size", "val": ": Optional[str] = None"}, {"name": "instance_type", "val": ": Optional[str] = None"}, {"name": "min_replica", "val": ": Optional[int] = None"}, {"name": "max_replica", "val": ": Optional[int] = None"}, {"name": "scale_to_zero_timeout", "val": ": Optional[int] = None"}, {"name": "repository", "val": ": Optional[str] = None"}, {"name": "framework", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "task", "val": ": Optional[str] = None"}, {"name": "custom_image", "val": ": Optional[dict] = None"}, {"name": "env", "val": ": Optional[dict[str, str]] = None"}, {"name": "secrets", "val": ": Optional[dict[str, str]] = None"}, {"name": "domain", "val": ": Optional[str] = None"}, {"name": "path", "val": ": Optional[str] = None"}, {"name": "cache_http_responses", "val": ": Optional[bool] = None"}, {"name": "tags", "val": ": Optional[list[str]] = None"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **name** (`str`) --
  The name of the Inference Endpoint to update.

- **accelerator** (`str`, *optional*) --
  The hardware accelerator to be used for inference (e.g. `"cpu"`).
- **instance_size** (`str`, *optional*) --
  The size or type of the instance to be used for hosting the model (e.g. `"x4"`).
- **instance_type** (`str`, *optional*) --
  The cloud instance type where the Inference Endpoint will be deployed (e.g. `"intel-icl"`).
- **min_replica** (`int`, *optional*) --
  The minimum number of replicas (instances) to keep running for the Inference Endpoint.
- **max_replica** (`int`, *optional*) --
  The maximum number of replicas (instances) to scale to for the Inference Endpoint.
- **scale_to_zero_timeout** (`int`, *optional*) --
  The duration in minutes before an inactive endpoint is scaled to zero.

- **repository** (`str`, *optional*) --
  The name of the model repository associated with the Inference Endpoint (e.g. `"gpt2"`).
- **framework** (`str`, *optional*) --
  The machine learning framework used for the model (e.g. `"custom"`).
- **revision** (`str`, *optional*) --
  The specific model revision to deploy on the Inference Endpoint (e.g. `"6c0e6080953db56375760c0471a8c5f2929baf11"`).
- **task** (`str`, *optional*) --
  The task on which to deploy the model (e.g. `"text-classification"`).
- **custom_image** (`dict`, *optional*) --
  A custom Docker image to use for the Inference Endpoint. This is useful if you want to deploy an
  Inference Endpoint running on the `text-generation-inference` (TGI) framework (see examples).
- **env** (`dict[str, str]`, *optional*) --
  Non-secret environment variables to inject in the container environment.
- **secrets** (`dict[str, str]`, *optional*) --
  Secret values to inject in the container environment.

- **domain** (`str`, *optional*) --
  The custom domain for the Inference Endpoint deployment. If set up, the Inference Endpoint will be available at this domain (e.g. `"my-new-domain.cool-website.woof"`).
- **path** (`str`, *optional*) --
  The custom path to the deployed model, should start with a `/` (e.g. `"/models/google-bert/bert-base-uncased"`).

- **cache_http_responses** (`bool`, *optional*) --
  Whether to cache HTTP responses from the Inference Endpoint.
- **tags** (`list[str]`, *optional*) --
  A list of tags to associate with the Inference Endpoint.

- **namespace** (`str`, *optional*) --
  The namespace where the Inference Endpoint will be updated. Defaults to the current user's namespace.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[InferenceEndpoint](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint)</rettype><retdesc>information about the updated Inference Endpoint.</retdesc></docstring>
Update an Inference Endpoint.

This method lets you update the compute configuration, the deployed model, the route, or any combination of these.
All arguments are optional, but at least one must be provided.

For convenience, you can also update an Inference Endpoint using [InferenceEndpoint.update()](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.update).
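For example, to resize an endpoint's autoscaling configuration (a minimal sketch; the endpoint name and values are illustrative):

<ExampleCodeBlock anchor="huggingface_hub.HfApi.update_inference_endpoint.example">

Example:

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()

# Scale up to 2 replicas and allow scaling to zero after 15 minutes of inactivity
# (endpoint name and values are illustrative)
>>> endpoint = api.update_inference_endpoint(
...     "my-endpoint-name",
...     min_replica=0,
...     max_replica=2,
...     scale_to_zero_timeout=15,
... )
```

</ExampleCodeBlock>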








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>update_repo_settings</name><anchor>huggingface_hub.HfApi.update_repo_settings</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L3789</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "gated", "val": ": Optional[Literal['auto', 'manual', False]] = None"}, {"name": "private", "val": ": Optional[bool] = None"}, {"name": "token", "val": ": Union[str, bool, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated by a /.
- **gated** (`Literal["auto", "manual", False]`, *optional*) --
  The gated status for the repository. If set to `None` (default), the `gated` setting of the repository won't be updated.
  * "auto": The repository is gated, and access requests are automatically approved or denied based on predefined criteria.
  * "manual": The repository is gated, and access requests require manual approval.
  * False: The repository is not gated, and anyone can access it.
- **private** (`bool`, *optional*) --
  Whether the repository should be private.
- **token** (`Union[str, bool, None]`, *optional*) --
  A valid user access token (string). Defaults to the locally saved token,
  which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass False.
- **repo_type** (`str`, *optional*) --
  The type of the repository to update settings from (`"model"`, `"dataset"` or `"space"`).
  Defaults to `"model"`.</paramsdesc><paramgroups>0</paramgroups><raises>- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If gated is not one of "auto", "manual", or False.
- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If repo_type is not one of the values in constants.REPO_TYPES.
- [HfHubHTTPError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.HfHubHTTPError) -- 
  If the request to the Hugging Face Hub API fails.
- [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  If the repository to download from cannot be found. This may be because it doesn't exist,
  or because it is set to `private` and you do not have access.</raises><raisederrors>``ValueError`` or [HfHubHTTPError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.HfHubHTTPError) or [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)</raisederrors></docstring>

Update the settings of a repository, including gated access and visibility.

To give more control over how repos are used, the Hub allows repo authors to enable
access requests for their repos, and also to set the visibility of the repo to private.
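For example, to gate a repo behind manual approval and make it private (a minimal sketch; the `repo_id` is illustrative):

<ExampleCodeBlock anchor="huggingface_hub.HfApi.update_repo_settings.example">

Example:

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()

# Enable manual approval of access requests and make the repo private
# (repo_id is illustrative)
>>> api.update_repo_settings(repo_id="username/my-model", gated="manual", private=True)
```

</ExampleCodeBlock>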








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>update_webhook</name><anchor>huggingface_hub.HfApi.update_webhook</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L9162</source><parameters>[{"name": "webhook_id", "val": ": str"}, {"name": "url", "val": ": Optional[str] = None"}, {"name": "watched", "val": ": Optional[list[Union[dict, WebhookWatchedItem]]] = None"}, {"name": "domains", "val": ": Optional[list[constants.WEBHOOK_DOMAIN_T]] = None"}, {"name": "secret", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **webhook_id** (`str`) --
  The unique identifier of the webhook to be updated.
- **url** (`str`, *optional*) --
  The URL to which the payload will be sent.
- **watched** (`list[WebhookWatchedItem]`, *optional*) --
  List of items to watch. It can be users, orgs, models, datasets, or spaces.
  Refer to [WebhookWatchedItem](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.WebhookWatchedItem) for more details. Watched items can also be provided as plain dictionaries.
- **domains** (`list[Literal["repo", "discussion"]]`, *optional*) --
  The domains to watch. This can include "repo", "discussion", or both.
- **secret** (`str`, *optional*) --
  A secret to sign the payload with, providing an additional layer of security.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved token, which is the recommended
  method for authentication (see https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[WebhookInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.WebhookInfo)</rettype><retdesc>Info about the updated webhook.</retdesc></docstring>
Update an existing webhook.







<ExampleCodeBlock anchor="huggingface_hub.HfApi.update_webhook.example">

Example:
```python
>>> from huggingface_hub import update_webhook
>>> updated_payload = update_webhook(
...     webhook_id="654bbbc16f2ec14d77f109cc",
...     url="https://new.webhook.site/a2176e82-5720-43ee-9e06-f91cb4c91548",
...     watched=[{"type": "user", "name": "julien-c"}, {"type": "org", "name": "HuggingFaceH4"}],
...     domains=["repo"],
...     secret="my-secret",
... )
>>> print(updated_payload)
WebhookInfo(
    id="654bbbc16f2ec14d77f109cc",
    job=None,
    url="https://new.webhook.site/a2176e82-5720-43ee-9e06-f91cb4c91548",
    watched=[WebhookWatchedItem(type="user", name="julien-c"), WebhookWatchedItem(type="org", name="HuggingFaceH4")],
    domains=["repo"],
    secret="my-secret",
    disabled=False,
)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>upload_file</name><anchor>huggingface_hub.HfApi.upload_file</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L4435</source><parameters>[{"name": "path_or_fileobj", "val": ": Union[str, Path, bytes, BinaryIO]"}, {"name": "path_in_repo", "val": ": str"}, {"name": "repo_id", "val": ": str"}, {"name": "token", "val": ": Union[str, bool, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "commit_message", "val": ": Optional[str] = None"}, {"name": "commit_description", "val": ": Optional[str] = None"}, {"name": "create_pr", "val": ": Optional[bool] = None"}, {"name": "parent_commit", "val": ": Optional[str] = None"}, {"name": "run_as_future", "val": ": bool = False"}]</parameters><paramsdesc>- **path_or_fileobj** (`str`, `Path`, `bytes`, or `IO`) --
  Path to a file on the local machine or binary data stream /
  fileobj / buffer.
- **path_in_repo** (`str`) --
  Relative filepath in the repo, for example:
  `"checkpoints/1fec34a/weights.bin"`
- **repo_id** (`str`) --
  The repository to which the file will be uploaded, for example:
  `"username/custom_transformers"`
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if uploading to a dataset or
  space, `None` or `"model"` if uploading to a model. Default is
  `None`.
- **revision** (`str`, *optional*) --
  The git revision to commit from. Defaults to the head of the `"main"` branch.
- **commit_message** (`str`, *optional*) --
  The summary / title / first line of the generated commit
- **commit_description** (`str` *optional*) --
  The description of the generated commit
- **create_pr** (`boolean`, *optional*) --
  Whether or not to create a Pull Request with that commit. Defaults to `False`.
  If `revision` is not set, PR is opened against the `"main"` branch. If
  `revision` is set and is a branch, PR is opened against this branch. If
  `revision` is set and is not a branch name (example: a commit oid), a
  `RevisionNotFoundError` is returned by the server.
- **parent_commit** (`str`, *optional*) --
  The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported.
  If specified and `create_pr` is `False`, the commit will fail if `revision` does not point to `parent_commit`.
  If specified and `create_pr` is `True`, the pull request will be created from `parent_commit`.
  Specifying `parent_commit` ensures the repo has not changed before committing the changes, and can be
  especially useful if the repo is updated / committed to concurrently.
- **run_as_future** (`bool`, *optional*) --
  Whether or not to run this method in the background. Background jobs are run sequentially without
  blocking the main thread. Passing `run_as_future=True` will return a [Future](https://docs.python.org/3/library/concurrent.futures.html#future-objects)
  object. Defaults to `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[CommitInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.CommitInfo) or `Future`</rettype><retdesc>Instance of [CommitInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.CommitInfo) containing information about the newly created commit (commit hash, commit
url, pr url, commit message,...). If `run_as_future=True` is passed, returns a Future object which will
contain the result when executed.</retdesc></docstring>

Upload a local file (up to 50 GB) to the given repo. The upload is done
through an HTTP POST request and doesn't require git or git-lfs to be
installed.







> [!TIP]
> Raises the following errors:
>
>     - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError)
>       if the HuggingFace API returned an error
>     - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
>       if some parameter value is invalid
>     - [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)
>       If the repository to download from cannot be found. This may be because it doesn't exist,
>       or because it is set to `private` and you do not have access.
>     - [RevisionNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError)
>       If the revision to download from cannot be found.

> [!WARNING]
> `upload_file` assumes that the repo already exists on the Hub. If you get a
> Client error 404, please make sure you are authenticated and that `repo_id` and
> `repo_type` are set correctly. If the repo does not exist, create it first using
> [create_repo()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_repo).

<ExampleCodeBlock anchor="huggingface_hub.HfApi.upload_file.example">

Example:

```python
>>> from huggingface_hub import upload_file

>>> with open("./local/filepath", "rb") as fobj:
...     upload_file(
...         path_or_fileobj=fobj,
...         path_in_repo="remote/file/path.h5",
...         repo_id="username/my-dataset",
...         repo_type="dataset",
...         token="my_token",
...     )

>>> upload_file(
...     path_or_fileobj=".\\local\\file\\path",
...     path_in_repo="remote/file/path.h5",
...     repo_id="username/my-model",
...     token="my_token",
... )

>>> upload_file(
...     path_or_fileobj=".\\local\\file\\path",
...     path_in_repo="remote/file/path.h5",
...     repo_id="username/my-model",
...     token="my_token",
...     create_pr=True,
... )
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>upload_folder</name><anchor>huggingface_hub.HfApi.upload_folder</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L4617</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "folder_path", "val": ": Union[str, Path]"}, {"name": "path_in_repo", "val": ": Optional[str] = None"}, {"name": "commit_message", "val": ": Optional[str] = None"}, {"name": "commit_description", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[str, bool, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "create_pr", "val": ": Optional[bool] = None"}, {"name": "parent_commit", "val": ": Optional[str] = None"}, {"name": "allow_patterns", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "ignore_patterns", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "delete_patterns", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "run_as_future", "val": ": bool = False"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The repository to which the file will be uploaded, for example:
  `"username/custom_transformers"`
- **folder_path** (`str` or `Path`) --
  Path to the folder to upload on the local file system
- **path_in_repo** (`str`, *optional*) --
  Relative path of the directory in the repo, for example:
  `"checkpoints/1fec34a/results"`. Will default to the root folder of the repository.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if uploading to a dataset or
  space, `None` or `"model"` if uploading to a model. Default is
  `None`.
- **revision** (`str`, *optional*) --
  The git revision to commit from. Defaults to the head of the `"main"` branch.
- **commit_message** (`str`, *optional*) --
  The summary / title / first line of the generated commit. Defaults to:
  `f"Upload {path_in_repo} with huggingface_hub"`
- **commit_description** (`str` *optional*) --
  The description of the generated commit
- **create_pr** (`boolean`, *optional*) --
  Whether or not to create a Pull Request with that commit. Defaults to `False`. If `revision` is not
  set, PR is opened against the `"main"` branch. If `revision` is set and is a branch, PR is opened
  against this branch. If `revision` is set and is not a branch name (example: a commit oid), a
  `RevisionNotFoundError` is returned by the server.
- **parent_commit** (`str`, *optional*) --
  The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported.
  If specified and `create_pr` is `False`, the commit will fail if `revision` does not point to `parent_commit`.
  If specified and `create_pr` is `True`, the pull request will be created from `parent_commit`.
  Specifying `parent_commit` ensures the repo has not changed before committing the changes, and can be
  especially useful if the repo is updated / committed to concurrently.
- **allow_patterns** (`list[str]` or `str`, *optional*) --
  If provided, only files matching at least one pattern are uploaded.
- **ignore_patterns** (`list[str]` or `str`, *optional*) --
  If provided, files matching any of the patterns are not uploaded.
- **delete_patterns** (`list[str]` or `str`, *optional*) --
  If provided, remote files matching any of the patterns will be deleted from the repo while committing
  new files. This is useful if you don't know which files have already been uploaded.
  Note: to avoid discrepancies the `.gitattributes` file is not deleted even if it matches the pattern.
- **run_as_future** (`bool`, *optional*) --
  Whether or not to run this method in the background. Background jobs are run sequentially without
  blocking the main thread. Passing `run_as_future=True` will return a [Future](https://docs.python.org/3/library/concurrent.futures.html#future-objects)
  object. Defaults to `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[CommitInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.CommitInfo) or `Future`</rettype><retdesc>Instance of [CommitInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.CommitInfo) containing information about the newly created commit (commit hash, commit
url, pr url, commit message,...). If `run_as_future=True` is passed, returns a Future object which will
contain the result when executed.</retdesc></docstring>

Upload a local folder to the given repo. The upload is done through HTTP requests and doesn't require git or
git-lfs to be installed.

The structure of the folder will be preserved. Files with the same name already present in the repository will
be overwritten. Others will be left untouched.

Use the `allow_patterns` and `ignore_patterns` arguments to specify which files to upload. These parameters
accept either a single pattern or a list of patterns. Patterns are Standard Wildcards (globbing patterns) as
documented [here](https://tldp.org/LDP/GNU-Linux-Tools-Summary/html/x11655.htm). If both `allow_patterns` and
`ignore_patterns` are provided, both constraints apply. By default, all files from the folder are uploaded.

Use the `delete_patterns` argument to specify remote files you want to delete. Input type is the same as for
`allow_patterns` (see above). If `path_in_repo` is also provided, the patterns are matched against paths
relative to this folder. For example, `upload_folder(..., path_in_repo="experiment", delete_patterns="logs/*")`
will delete any remote file under `./experiment/logs/`. Note that the `.gitattributes` file will not be deleted
even if it matches the patterns.
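As a rough sketch of how this pattern filtering behaves (illustrative only, using Python's `fnmatch` rather than the library's actual implementation, so edge-case glob semantics may differ):

```python
from fnmatch import fnmatch

def select_files(paths, allow_patterns=None, ignore_patterns=None):
    """Keep paths that match at least one allow pattern (if given) and
    no ignore pattern -- both constraints apply, as in `upload_folder`."""
    selected = []
    for path in paths:
        if allow_patterns is not None and not any(fnmatch(path, p) for p in allow_patterns):
            continue  # an allow list is set and this path matches none of it
        if ignore_patterns is not None and any(fnmatch(path, p) for p in ignore_patterns):
            continue  # this path matches an ignore pattern
        selected.append(path)
    return selected

files = ["model.safetensors", "logs/run1.txt", "checkpoints/logs/run2.txt", "README.md"]
print(select_files(files, ignore_patterns=["logs/*.txt", "**/logs/*.txt"]))
# → ['model.safetensors', 'README.md']
```

Passing both `allow_patterns` and `ignore_patterns` applies both filters, which matches the "both constraints apply" rule described above.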

Any `.git/` folder present in any subdirectory will be ignored. However, please be aware that the `.gitignore`
file is not taken into account.

Uses `HfApi.create_commit` under the hood.







> [!TIP]
> Raises the following errors:
>
>     - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError)
>     if the HuggingFace API returned an error
>     - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
>     if some parameter value is invalid

> [!WARNING]
> `upload_folder` assumes that the repo already exists on the Hub. If you get a Client error 404, please make
> sure you are authenticated and that `repo_id` and `repo_type` are set correctly. If the repo does not exist, create
> it first using [create_repo()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_repo).

> [!TIP]
> When dealing with a large folder (thousands of files or hundreds of GB), we recommend using [upload_large_folder()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_large_folder) instead.

<ExampleCodeBlock anchor="huggingface_hub.HfApi.upload_folder.example">

Example:

```python
# Upload checkpoints folder except the log files
>>> upload_folder(
...     folder_path="local/checkpoints",
...     path_in_repo="remote/experiment/checkpoints",
...     repo_id="username/my-dataset",
...     repo_type="dataset",
...     token="my_token",
...     ignore_patterns="**/logs/*.txt",
... )

# Upload checkpoints folder including logs while deleting existing logs from the repo
# Useful if you don't know exactly which log files have already been pushed
>>> upload_folder(
...     folder_path="local/checkpoints",
...     path_in_repo="remote/experiment/checkpoints",
...     repo_id="username/my-dataset",
...     repo_type="dataset",
...     token="my_token",
...     delete_patterns="**/logs/*.txt",
... )

# Upload checkpoints folder while creating a PR
>>> upload_folder(
...     folder_path="local/checkpoints",
...     path_in_repo="remote/experiment/checkpoints",
...     repo_id="username/my-dataset",
...     repo_type="dataset",
...     token="my_token",
...     create_pr=True,
... )
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>upload_large_folder</name><anchor>huggingface_hub.HfApi.upload_large_folder</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L5050</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "folder_path", "val": ": Union[str, Path]"}, {"name": "repo_type", "val": ": str"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "private", "val": ": Optional[bool] = None"}, {"name": "allow_patterns", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "ignore_patterns", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "num_workers", "val": ": Optional[int] = None"}, {"name": "print_report", "val": ": bool = True"}, {"name": "print_report_every", "val": ": int = 60"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The repository to which the file will be uploaded.
  E.g. `"HuggingFaceTB/smollm-corpus"`.
- **folder_path** (`str` or `Path`) --
  Path to the folder to upload on the local file system.
- **repo_type** (`str`) --
  Type of the repository. Must be one of `"model"`, `"dataset"` or `"space"`.
  Unlike all other `HfApi` methods, `repo_type` is explicitly required here. This is to avoid
  any mistake when uploading a large folder to the Hub, and therefore prevents having to re-upload
  everything.
- **revision** (`str`, `optional`) --
  The branch to commit to. If not provided, the `main` branch will be used.
- **private** (`bool`, `optional`) --
  Whether the repository should be private.
  If `None` (default), the repo will be public unless the organization's default is private.
- **allow_patterns** (`list[str]` or `str`, *optional*) --
  If provided, only files matching at least one pattern are uploaded.
- **ignore_patterns** (`list[str]` or `str`, *optional*) --
  If provided, files matching any of the patterns are not uploaded.
- **num_workers** (`int`, *optional*) --
  Number of workers to start. Defaults to `os.cpu_count() - 2` (minimum 2).
  A higher number of workers may speed up the process if your machine allows it. However, on machines with a
  slower connection, it is recommended to keep the number of workers low to ensure better resumability.
  Indeed, partially uploaded files will have to be completely re-uploaded if the process is interrupted.
- **print_report** (`bool`, *optional*) --
  Whether to print a report of the upload progress. Defaults to True.
  Report is printed to `sys.stdout` every `print_report_every` seconds (60 by default) and overwrites the previous report.
- **print_report_every** (`int`, *optional*) --
  Frequency at which the report is printed. Defaults to 60 seconds.</paramsdesc><paramgroups>0</paramgroups></docstring>
Upload a large folder to the Hub in the most resilient way possible.

Several workers are started to upload files in an optimized way. Before being committed to a repo, files must be
hashed and, if they are LFS files, pre-uploaded. Workers perform these tasks for each file in the folder.
At each step, some metadata information about the upload process is saved in the folder under `.cache/.huggingface/`
to be able to resume the process if interrupted. The whole process might result in several commits.
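For example (the repo name and local path are illustrative):

<ExampleCodeBlock anchor="huggingface_hub.HfApi.upload_large_folder.example">

Example:

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()

# repo_id and folder_path are illustrative
>>> api.upload_large_folder(
...     repo_id="HuggingFaceTB/smollm-corpus",
...     repo_type="dataset",
...     folder_path="/path/to/local/folder",
... )
```

</ExampleCodeBlock>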



> [!TIP]
> A few things to keep in mind:
>     - Repository limits still apply: https://huggingface.co/docs/hub/repositories-recommendations
>     - Do not start several processes in parallel.
>     - You can interrupt and resume the process at any time.
>     - Do not upload the same folder to several repositories. If you need to do so, you must delete the local `.cache/.huggingface/` folder first.

> [!WARNING]
> While being much more robust to upload large folders, `upload_large_folder` is more limited than [upload_folder()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_folder) feature-wise. In practice:
>     - you cannot set a custom `path_in_repo`. If you want to upload to a subfolder, you need to set the proper structure locally.
>     - you cannot set a custom `commit_message` and `commit_description` since multiple commits are created.
>     - you cannot delete from the repo while uploading. Please make a separate commit first.
>     - you cannot create a PR directly. Please create a PR first (from the UI or using [create_pull_request()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_pull_request)) and then commit to it by passing `revision`.

**Technical details:**

The `upload_large_folder` process is as follows:
1. (Check parameters and setup.)
2. Create repo if missing.
3. List local files to upload.
4. Run validation checks and display warnings if repository limits might be exceeded:
   - Warns if the total number of files exceeds 100k (recommended limit).
   - Warns if any folder contains more than 10k files (recommended limit).
   - Warns about files larger than 20GB (recommended) or 50GB (hard limit).
5. Start workers. Workers can perform the following tasks:
   - Hash a file.
   - Get upload mode (regular or LFS) for a list of files.
   - Pre-upload an LFS file.
   - Commit a bunch of files.
   Once a worker finishes a task, it will move on to the next task based on the priority list (see below) until
   all files are uploaded and committed.
6. While workers are up, regularly print a report to `sys.stdout`.

Order of priority:
1. Commit if more than 5 minutes since last commit attempt (and at least 1 file).
2. Commit if at least 150 files are ready to commit.
3. Get upload mode if at least 10 files have been hashed.
4. Pre-upload LFS file if at least 1 file and no worker is pre-uploading.
5. Hash file if at least 1 file and no worker is hashing.
6. Get upload mode if at least 1 file and no worker is getting upload mode.
7. Pre-upload LFS file if at least 1 file.
8. Hash file if at least 1 file to hash.
9. Get upload mode if at least 1 file to get upload mode.
10. Commit if at least 1 file to commit and at least 1 min since last commit attempt.
11. Commit if at least 1 file to commit and all other queues are empty.

Special rules:
- Only one worker can commit at a time.
- If no tasks are available, the worker waits for 10 seconds before checking again.
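The priority rules above can be sketched as a pure selection function. This is an illustrative simplification, not the library's actual scheduler: names like `QueueState` and `pick_task` are made up for this example, and the real implementation tracks richer per-file state.

```py
from dataclasses import dataclass

@dataclass
class QueueState:
    # Number of files waiting in each queue.
    to_commit: int = 0
    to_hash: int = 0
    to_get_upload_mode: int = 0
    to_preupload: int = 0
    # Whether another worker is already busy with that task.
    committing: bool = False
    hashing: bool = False
    getting_upload_mode: bool = False
    preuploading: bool = False
    # Seconds elapsed since the last commit attempt.
    since_last_commit: float = 0.0

def pick_task(s: QueueState) -> str:
    """Return the next task a worker should pick, following the priority list."""
    queues_empty = s.to_hash == 0 and s.to_get_upload_mode == 0 and s.to_preupload == 0
    if s.to_commit >= 1 and not s.committing:      # only one worker commits at a time
        if s.since_last_commit > 5 * 60:           # rule 1: >5 min since last attempt
            return "commit"
        if s.to_commit >= 150:                     # rule 2: enough files ready
            return "commit"
    if s.to_get_upload_mode >= 10:                 # rule 3: enough hashed files waiting
        return "get_upload_mode"
    if s.to_preupload >= 1 and not s.preuploading: # rule 4
        return "preupload_lfs"
    if s.to_hash >= 1 and not s.hashing:           # rule 5
        return "hash"
    if s.to_get_upload_mode >= 1 and not s.getting_upload_mode:  # rule 6
        return "get_upload_mode"
    if s.to_preupload >= 1:                        # rule 7
        return "preupload_lfs"
    if s.to_hash >= 1:                             # rule 8
        return "hash"
    if s.to_get_upload_mode >= 1:                  # rule 9
        return "get_upload_mode"
    if s.to_commit >= 1 and not s.committing:
        if s.since_last_commit > 60:               # rule 10
            return "commit"
        if queues_empty:                           # rule 11: nothing else to do
            return "commit"
    return "wait"                                  # sleep ~10 seconds, then retry
```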


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>verify_repo_checksums</name><anchor>huggingface_hub.HfApi.verify_repo_checksums</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L3085</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "local_dir", "val": ": Optional[Union[str, Path]] = None"}, {"name": "cache_dir", "val": ": Optional[Union[str, Path]] = None"}, {"name": "token", "val": ": Union[str, bool, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated by a `/`.
- **repo_type** (`str`, *optional*) --
  The type of the repository to verify (`"model"`, `"dataset"` or `"space"`).
  Defaults to `"model"`.
- **revision** (`str`, *optional*) --
  The revision of the repository to verify. Defaults to the `"main"` branch.
- **local_dir** (`str` or `Path`, *optional*) --
  The local directory to verify.
- **cache_dir** (`str` or `Path`, *optional*) --
  The cache directory to verify.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`FolderVerification`</rettype><retdesc>a structured result containing the verification details.</retdesc><raises>- [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo
  does not exist.
- [RevisionNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError) -- 
  If revision is not found (error 404) on the repo.</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) or [RevisionNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError)</raisederrors></docstring>

Verify local files for a repo against Hub checksums.












</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>whoami</name><anchor>huggingface_hub.HfApi.whoami</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L1737</source><parameters>[{"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Call the Hub API to retrieve information about the currently authenticated user ("whoami").




</div></div>

## API Dataclasses

### AccessRequest[[huggingface_hub.hf_api.AccessRequest]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.hf_api.AccessRequest</name><anchor>huggingface_hub.hf_api.AccessRequest</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L420</source><parameters>[{"name": "username", "val": ": str"}, {"name": "fullname", "val": ": str"}, {"name": "email", "val": ": Optional[str]"}, {"name": "timestamp", "val": ": datetime"}, {"name": "status", "val": ": Literal['pending', 'accepted', 'rejected']"}, {"name": "fields", "val": ": Optional[dict[str, Any]] = None"}]</parameters><paramsdesc>- **username** (`str`) --
  Username of the user who requested access.
- **fullname** (`str`) --
  Fullname of the user who requested access.
- **email** (`Optional[str]`) --
  Email of the user who requested access.
  Can only be `None` in the /accepted list if the user was granted access manually.
- **timestamp** (`datetime`) --
  Timestamp of the request.
- **status** (`Literal["pending", "accepted", "rejected"]`) --
  Status of the request. Can be one of `["pending", "accepted", "rejected"]`.
- **fields** (`dict[str, Any]`, *optional*) --
  Additional fields filled by the user in the gate form.</paramsdesc><paramgroups>0</paramgroups></docstring>
Data structure containing information about a user access request.




</div>

### CommitInfo[[huggingface_hub.CommitInfo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.CommitInfo</name><anchor>huggingface_hub.CommitInfo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L349</source><parameters>[{"name": "*args", "val": ""}, {"name": "commit_url", "val": ": str"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **commit_url** (`str`) --
  Url where to find the commit.

- **commit_message** (`str`) --
  The summary (first line) of the commit that has been created.

- **commit_description** (`str`) --
  Description of the commit that has been created. Can be empty.

- **oid** (`str`) --
  Commit hash id. Example: `"91c54ad1727ee830252e457677f467be0bfd8a57"`.

- **pr_url** (`str`, *optional*) --
  Url to the PR that has been created, if any. Populated when `create_pr=True`
  is passed.

- **pr_revision** (`str`, *optional*) --
  Revision of the PR that has been created, if any. Populated when
  `create_pr=True` is passed. Example: `"refs/pr/1"`.

- **pr_num** (`int`, *optional*) --
  Number of the PR discussion that has been created, if any. Populated when
  `create_pr=True` is passed. Can be passed as `discussion_num` in
  [get_discussion_details()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.get_discussion_details). Example: `1`.

- **repo_url** (`RepoUrl`) --
  Repo URL of the commit containing info like repo_id, repo_type, etc.</paramsdesc><paramgroups>0</paramgroups></docstring>
Data structure containing information about a newly created commit.

Returned by any method that creates a commit on the Hub: [create_commit()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_commit), [upload_file()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_file), [upload_folder()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_folder),
[delete_file()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.delete_file), [delete_folder()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.delete_folder). It inherits from `str` for backward compatibility but using methods specific
to `str` is deprecated.




</div>

### DatasetInfo[[huggingface_hub.DatasetInfo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.DatasetInfo</name><anchor>huggingface_hub.DatasetInfo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L898</source><parameters>[{"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **id** (`str`) --
  ID of dataset.
- **author** (`str`) --
  Author of the dataset.
- **sha** (`str`) --
  Repo SHA at this particular revision.
- **created_at** (`datetime`, *optional*) --
  Date of creation of the repo on the Hub. Note that the lowest value is `2022-03-02T23:29:04.000Z`,
  corresponding to the date when we began to store creation dates.
- **last_modified** (`datetime`, *optional*) --
  Date of last commit to the repo.
- **private** (`bool`) --
  Is the repo private.
- **disabled** (`bool`, *optional*) --
  Is the repo disabled.
- **gated** (`Literal["auto", "manual", False]`, *optional*) --
  Is the repo gated.
  If so, whether there is manual or automatic approval.
- **downloads** (`int`) --
  Number of downloads of the dataset over the last 30 days.
- **downloads_all_time** (`int`) --
  Cumulative number of downloads of the dataset since its creation.
- **likes** (`int`) --
  Number of likes of the dataset.
- **tags** (`list[str]`) --
  List of tags of the dataset.
- **card_data** (`DatasetCardData`, *optional*) --
  Dataset Card Metadata as a [huggingface_hub.repocard_data.DatasetCardData](/docs/huggingface_hub/main/en/package_reference/cards#huggingface_hub.DatasetCardData) object.
- **siblings** (`list[RepoSibling]`) --
  List of [huggingface_hub.hf_api.RepoSibling](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.hf_api.RepoSibling) objects that constitute the dataset.
- **paperswithcode_id** (`str`, *optional*) --
  Papers with code ID of the dataset.
- **trending_score** (`int`, *optional*) --
  Trending score of the dataset.</paramsdesc><paramgroups>0</paramgroups></docstring>

Contains information about a dataset on the Hub. This object is returned by [dataset_info()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.dataset_info) and [list_datasets()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.list_datasets).

> [!TIP]
> Most attributes of this class are optional. This is because the data returned by the Hub depends on the query made.
> In general, the more specific the query, the more information is returned. On the contrary, when listing datasets
> using [list_datasets()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.list_datasets) only a subset of the attributes are returned.




</div>

### DryRunFileInfo[[huggingface_hub.DryRunFileInfo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.DryRunFileInfo</name><anchor>huggingface_hub.DryRunFileInfo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/file_download.py#L154</source><parameters>[{"name": "commit_hash", "val": ": str"}, {"name": "file_size", "val": ": int"}, {"name": "filename", "val": ": str"}, {"name": "local_path", "val": ": str"}, {"name": "is_cached", "val": ": bool"}, {"name": "will_download", "val": ": bool"}]</parameters><paramsdesc>- **commit_hash** (`str`) --
  The commit_hash related to the file.
- **file_size** (`int`) --
  Size of the file. In case of an LFS file, contains the size of the actual LFS file, not the pointer.
- **filename** (`str`) --
  Name of the file in the repo.
- **is_cached** (`bool`) --
  Whether the file is already cached locally.
- **will_download** (`bool`) --
  Whether the file will be downloaded if `hf_hub_download` is called with `dry_run=False`.
  In practice, will_download is `True` if the file is not cached or if `force_download=True`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Information returned when performing a dry run of a file download.

Returned by [hf_hub_download()](/docs/huggingface_hub/main/en/package_reference/file_download#huggingface_hub.hf_hub_download) when `dry_run=True`.
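The `will_download` rule described above, and a typical way to aggregate dry-run results, can be sketched as pure logic. `DryRunResult` and `bytes_to_download` below are hypothetical stand-ins for this example, not the actual `DryRunFileInfo` class.

```py
from dataclasses import dataclass

@dataclass
class DryRunResult:
    # Simplified stand-in for huggingface_hub.DryRunFileInfo (illustrative only).
    filename: str
    file_size: int
    is_cached: bool
    force_download: bool = False

    @property
    def will_download(self) -> bool:
        # A file is fetched when it is missing from the cache, or when the
        # caller explicitly forces a fresh download.
        return self.force_download or not self.is_cached

def bytes_to_download(files) -> int:
    """Total number of bytes an actual (non-dry-run) call would transfer."""
    return sum(f.file_size for f in files if f.will_download)
```

This makes it easy to preview the cost of a download before committing to it.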




</div>

### GitRefInfo[[huggingface_hub.GitRefInfo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.GitRefInfo</name><anchor>huggingface_hub.GitRefInfo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L1246</source><parameters>[{"name": "name", "val": ": str"}, {"name": "ref", "val": ": str"}, {"name": "target_commit", "val": ": str"}]</parameters><paramsdesc>- **name** (`str`) --
  Name of the reference (e.g. tag name or branch name).
- **ref** (`str`) --
  Full git ref on the Hub (e.g. `"refs/heads/main"` or `"refs/tags/v1.0"`).
- **target_commit** (`str`) --
  OID of the target commit for the ref (e.g. `"e7da7f221d5bf496a48136c0cd264e630fe9fcc8"`)</paramsdesc><paramgroups>0</paramgroups></docstring>

Contains information about a git reference for a repo on the Hub.




</div>

### GitCommitInfo[[huggingface_hub.GitCommitInfo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.GitCommitInfo</name><anchor>huggingface_hub.GitCommitInfo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L1291</source><parameters>[{"name": "commit_id", "val": ": str"}, {"name": "authors", "val": ": list[str]"}, {"name": "created_at", "val": ": datetime"}, {"name": "title", "val": ": str"}, {"name": "message", "val": ": str"}, {"name": "formatted_title", "val": ": Optional[str]"}, {"name": "formatted_message", "val": ": Optional[str]"}]</parameters><paramsdesc>- **commit_id** (`str`) --
  OID of the commit (e.g. `"e7da7f221d5bf496a48136c0cd264e630fe9fcc8"`)
- **authors** (`list[str]`) --
  List of authors of the commit.
- **created_at** (`datetime`) --
  Datetime when the commit was created.
- **title** (`str`) --
  Title of the commit. This is a free-text value entered by the authors.
- **message** (`str`) --
  Description of the commit. This is a free-text value entered by the authors.
- **formatted_title** (`str`) --
  Title of the commit formatted as HTML. Only returned if `formatted=True` is set.
- **formatted_message** (`str`) --
  Description of the commit formatted as HTML. Only returned if `formatted=True` is set.</paramsdesc><paramgroups>0</paramgroups></docstring>

Contains information about a git commit for a repo on the Hub. Check out [list_repo_commits()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.list_repo_commits) for more details.




</div>

### GitRefs[[huggingface_hub.GitRefs]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.GitRefs</name><anchor>huggingface_hub.GitRefs</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L1265</source><parameters>[{"name": "branches", "val": ": list[GitRefInfo]"}, {"name": "converts", "val": ": list[GitRefInfo]"}, {"name": "tags", "val": ": list[GitRefInfo]"}, {"name": "pull_requests", "val": ": Optional[list[GitRefInfo]] = None"}]</parameters><paramsdesc>- **branches** (`list[GitRefInfo]`) --
  A list of [GitRefInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.GitRefInfo) containing information about branches on the repo.
- **converts** (`list[GitRefInfo]`) --
  A list of [GitRefInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.GitRefInfo) containing information about "convert" refs on the repo.
  Converts are refs used (internally) to push preprocessed data in Dataset repos.
- **tags** (`list[GitRefInfo]`) --
  A list of [GitRefInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.GitRefInfo) containing information about tags on the repo.
- **pull_requests** (`list[GitRefInfo]`, *optional*) --
  A list of [GitRefInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.GitRefInfo) containing information about pull requests on the repo.
  Only returned if `include_prs=True` is set.</paramsdesc><paramgroups>0</paramgroups></docstring>

Contains information about all git references for a repo on the Hub.

Object is returned by [list_repo_refs()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.list_repo_refs).




</div>

### InferenceProviderMapping[[huggingface_hub.hf_api.InferenceProviderMapping]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.hf_api.InferenceProviderMapping</name><anchor>huggingface_hub.hf_api.InferenceProviderMapping</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L679</source><parameters>[{"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>

### LFSFileInfo[[huggingface_hub.hf_api.LFSFileInfo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.hf_api.LFSFileInfo</name><anchor>huggingface_hub.hf_api.LFSFileInfo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L1556</source><parameters>[{"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **file_oid** (`str`) --
  SHA-256 object ID of the file. This is the identifier to pass when permanently deleting the file.
- **filename** (`str`) --
  Possible filename for the LFS object. See the note above for more information.
- **oid** (`str`) --
  OID of the LFS object.
- **pushed_at** (`datetime`) --
  Date the LFS object was pushed to the repo.
- **ref** (`str`, *optional*) --
  Ref where the LFS object has been pushed (if any).
- **size** (`int`) --
  Size of the LFS object.</paramsdesc><paramgroups>0</paramgroups></docstring>

Contains information about a file stored as LFS on a repo on the Hub.

Used in the context of listing and permanently deleting LFS files from a repo to free-up space.
See [list_lfs_files()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.list_lfs_files) and [permanently_delete_lfs_files()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.permanently_delete_lfs_files) for more details.

Git LFS files are tracked using SHA-256 object IDs, rather than file paths, to optimize performance.
This approach is necessary because a single object can be referenced by multiple paths across different commits,
making it impractical to search and resolve these connections. Check out [our documentation](https://huggingface.co/docs/hub/storage-limits#advanced-track-lfs-file-references)
to learn how to find out which filename(s) are associated with each SHA.



<ExampleCodeBlock anchor="huggingface_hub.hf_api.LFSFileInfo.example">

Example:
```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> lfs_files = api.list_lfs_files("username/my-cool-repo")

# Filter the files to delete based on a combination of `filename`, `pushed_at`, `ref` or `size`.
# e.g. select only LFS files in the "checkpoints" folder
>>> lfs_files_to_delete = (lfs_file for lfs_file in lfs_files if lfs_file.filename.startswith("checkpoints/"))

# Permanently delete LFS files
>>> api.permanently_delete_lfs_files("username/my-cool-repo", lfs_files_to_delete)
```

</ExampleCodeBlock>


</div>

### ModelInfo[[huggingface_hub.ModelInfo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ModelInfo</name><anchor>huggingface_hub.ModelInfo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L704</source><parameters>[{"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **id** (`str`) --
  ID of model.
- **author** (`str`, *optional*) --
  Author of the model.
- **sha** (`str`, *optional*) --
  Repo SHA at this particular revision.
- **created_at** (`datetime`, *optional*) --
  Date of creation of the repo on the Hub. Note that the lowest value is `2022-03-02T23:29:04.000Z`,
  corresponding to the date when we began to store creation dates.
- **last_modified** (`datetime`, *optional*) --
  Date of last commit to the repo.
- **private** (`bool`) --
  Is the repo private.
- **disabled** (`bool`, *optional*) --
  Is the repo disabled.
- **downloads** (`int`) --
  Number of downloads of the model over the last 30 days.
- **downloads_all_time** (`int`) --
  Cumulative number of downloads of the model since its creation.
- **gated** (`Literal["auto", "manual", False]`, *optional*) --
  Is the repo gated.
  If so, whether there is manual or automatic approval.
- **gguf** (`dict`, *optional*) --
  GGUF information of the model.
- **inference** (`Literal["warm"]`, *optional*) --
  Status of the model on Inference Providers. Warm if the model is served by at least one provider.
- **inference_provider_mapping** (`list[InferenceProviderMapping]`, *optional*) --
  A list of `InferenceProviderMapping` ordered after the user's provider order.
- **likes** (`int`) --
  Number of likes of the model.
- **library_name** (`str`, *optional*) --
  Library associated with the model.
- **tags** (`list[str]`) --
  List of tags of the model. Compared to `card_data.tags`, contains extra tags computed by the Hub
  (e.g. supported libraries, model's arXiv).
- **pipeline_tag** (`str`, *optional*) --
  Pipeline tag associated with the model.
- **mask_token** (`str`, *optional*) --
  Mask token used by the model.
- **widget_data** (`Any`, *optional*) --
  Widget data associated with the model.
- **model_index** (`dict`, *optional*) --
  Model index for evaluation.
- **config** (`dict`, *optional*) --
  Model configuration.
- **transformers_info** (`TransformersInfo`, *optional*) --
  Transformers-specific info (auto class, processor, etc.) associated with the model.
- **trending_score** (`int`, *optional*) --
  Trending score of the model.
- **card_data** (`ModelCardData`, *optional*) --
  Model Card Metadata as a [huggingface_hub.repocard_data.ModelCardData](/docs/huggingface_hub/main/en/package_reference/cards#huggingface_hub.ModelCardData) object.
- **siblings** (`list[RepoSibling]`) --
  List of [huggingface_hub.hf_api.RepoSibling](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.hf_api.RepoSibling) objects that constitute the model.
- **spaces** (`list[str]`, *optional*) --
  List of spaces using the model.
- **safetensors** (`SafeTensorsInfo`, *optional*) --
  Model's safetensors information.
- **security_repo_status** (`dict`, *optional*) --
  Model's security scan status.</paramsdesc><paramgroups>0</paramgroups></docstring>

Contains information about a model on the Hub. This object is returned by [model_info()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.model_info) and [list_models()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.list_models).

> [!TIP]
> Most attributes of this class are optional. This is because the data returned by the Hub depends on the query made.
> In general, the more specific the query, the more information is returned. On the contrary, when listing models
> using [list_models()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.list_models) only a subset of the attributes are returned.




</div>

### RepoSibling[[huggingface_hub.hf_api.RepoSibling]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.hf_api.RepoSibling</name><anchor>huggingface_hub.hf_api.RepoSibling</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L558</source><parameters>[{"name": "rfilename", "val": ": str"}, {"name": "size", "val": ": Optional[int] = None"}, {"name": "blob_id", "val": ": Optional[str] = None"}, {"name": "lfs", "val": ": Optional[BlobLfsInfo] = None"}]</parameters><paramsdesc>- **rfilename** (str) --
  file name, relative to the repo root.
- **size** (`int`, *optional*) --
  The file's size, in bytes. This attribute is defined when `files_metadata` argument of [repo_info()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.repo_info) is set
  to `True`. It's `None` otherwise.
- **blob_id** (`str`, *optional*) --
  The file's git OID. This attribute is defined when `files_metadata` argument of [repo_info()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.repo_info) is set to
  `True`. It's `None` otherwise.
- **lfs** (`BlobLfsInfo`, *optional*) --
  The file's LFS metadata. This attribute is defined when `files_metadata` argument of [repo_info()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.repo_info) is set to
  `True` and the file is stored with Git LFS. It's `None` otherwise.</paramsdesc><paramgroups>0</paramgroups></docstring>

Contains basic information about a repo file inside a repo on the Hub.

> [!TIP]
> All attributes of this class are optional except `rfilename`. This is because only the file names are returned when
> listing repositories on the Hub (with [list_models()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.list_models), [list_datasets()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.list_datasets) or [list_spaces()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.list_spaces)). If you need more
> information like file size, blob id or lfs details, you must request them specifically from one repo at a time
> (using [model_info()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.model_info), [dataset_info()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.dataset_info) or [space_info()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.space_info)) as it adds more constraints on the backend server to
> retrieve these.




</div>

### RepoFile[[huggingface_hub.hf_api.RepoFile]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.hf_api.RepoFile</name><anchor>huggingface_hub.hf_api.RepoFile</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L590</source><parameters>[{"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **path** (str) --
  file path relative to the repo root.
- **size** (`int`) --
  The file's size, in bytes.
- **blob_id** (`str`) --
  The file's git OID.
- **lfs** (`BlobLfsInfo`, *optional*) --
  The file's LFS metadata.
- **last_commit** (`LastCommitInfo`, *optional*) --
  The file's last commit metadata. Only defined if [list_repo_tree()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.list_repo_tree) and [get_paths_info()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.get_paths_info)
  are called with `expand=True`.
- **security** (`BlobSecurityInfo`, *optional*) --
  The file's security scan metadata. Only defined if [list_repo_tree()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.list_repo_tree) and [get_paths_info()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.get_paths_info)
  are called with `expand=True`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Contains information about a file on the Hub.




</div>

### RepoUrl[[huggingface_hub.RepoUrl]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.RepoUrl</name><anchor>huggingface_hub.RepoUrl</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L496</source><parameters>[{"name": "url", "val": ": Any"}, {"name": "endpoint", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **url** (`Any`) --
  String value of the repo url.
- **endpoint** (`str`, *optional*) --
  Endpoint of the Hub. Defaults to <https://huggingface.co>.</paramsdesc><paramgroups>0</paramgroups><raises>- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If URL cannot be parsed.
- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If `repo_type` is unknown.</raises><raisederrors>``ValueError``</raisederrors></docstring>
Subclass of `str` describing a repo URL on the Hub.

`RepoUrl` is returned by `HfApi.create_repo`. It inherits from `str` for backward
compatibility. At initialization, the URL is parsed to populate properties:
- endpoint (`str`)
- namespace (`Optional[str]`)
- repo_name (`str`)
- repo_id (`str`)
- repo_type (`Literal["model", "dataset", "space"]`)
- url (`str`)



<ExampleCodeBlock anchor="huggingface_hub.RepoUrl.example">

Example:
```py
>>> RepoUrl('https://huggingface.co/gpt2')
RepoUrl('https://huggingface.co/gpt2', endpoint='https://huggingface.co', repo_type='model', repo_id='gpt2')

>>> RepoUrl('https://hub-ci.huggingface.co/datasets/dummy_user/dummy_dataset', endpoint='https://hub-ci.huggingface.co')
RepoUrl('https://hub-ci.huggingface.co/datasets/dummy_user/dummy_dataset', endpoint='https://hub-ci.huggingface.co', repo_type='dataset', repo_id='dummy_user/dummy_dataset')

>>> RepoUrl('hf://datasets/my-user/my-dataset')
RepoUrl('hf://datasets/my-user/my-dataset', endpoint='https://huggingface.co', repo_type='dataset', repo_id='my-user/my-dataset')

>>> HfApi.create_repo("dummy_model")
RepoUrl('https://huggingface.co/Wauplin/dummy_model', endpoint='https://huggingface.co', repo_type='model', repo_id='Wauplin/dummy_model')
```

</ExampleCodeBlock>
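To illustrate how such a URL can be decomposed into its properties, here is a self-contained parsing sketch using only the standard library. It mirrors the behavior shown in the example above but is not the actual `RepoUrl` implementation; validation and edge cases handled by the library are omitted.

```py
from urllib.parse import urlparse

def parse_repo_url(url: str, endpoint: str = "https://huggingface.co") -> dict:
    """Illustrative sketch: split a repo URL into endpoint / type / id parts."""
    if url.startswith("hf://"):
        # "hf://" URLs always point at the default endpoint.
        path = url[len("hf://"):]
    else:
        parsed = urlparse(url)
        endpoint = f"{parsed.scheme}://{parsed.netloc}"
        path = parsed.path.lstrip("/")
    parts = path.split("/")
    repo_type = "model"  # models live at the URL root, without a type prefix
    if parts[0] in ("datasets", "spaces"):
        repo_type = {"datasets": "dataset", "spaces": "space"}[parts[0]]
        parts = parts[1:]
    namespace = parts[0] if len(parts) > 1 else None
    repo_name = parts[-1]
    repo_id = f"{namespace}/{repo_name}" if namespace else repo_name
    return {"endpoint": endpoint, "repo_type": repo_type, "namespace": namespace,
            "repo_name": repo_name, "repo_id": repo_id}
```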






</div>

### SafetensorsRepoMetadata[[huggingface_hub.utils.SafetensorsRepoMetadata]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.utils.SafetensorsRepoMetadata</name><anchor>huggingface_hub.utils.SafetensorsRepoMetadata</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_safetensors.py#L74</source><parameters>[{"name": "metadata", "val": ": typing.Optional[dict]"}, {"name": "sharded", "val": ": bool"}, {"name": "weight_map", "val": ": dict"}, {"name": "files_metadata", "val": ": dict"}]</parameters><paramsdesc>- **metadata** (`dict`, *optional*) --
  The metadata contained in the 'model.safetensors.index.json' file, if it exists. Only populated for sharded
  models.
- **sharded** (`bool`) --
  Whether the repo contains a sharded model or not.
- **weight_map** (`dict[str, str]`) --
  A map of all weights. Keys are tensor names and values are filenames of the files containing the tensors.
- **files_metadata** (`dict[str, SafetensorsFileMetadata]`) --
  A map of all files metadata. Keys are filenames and values are the metadata of the corresponding file, as
  a `SafetensorsFileMetadata` object.
- **parameter_count** (`dict[str, int]`) --
  A map of the number of parameters per data type. Keys are data types and values are the number of parameters
  of that data type.</paramsdesc><paramgroups>0</paramgroups></docstring>
Metadata for a Safetensors repo.

A repo is considered to be a Safetensors repo if it contains either a 'model.safetensors' weight file (non-sharded
model) or a 'model.safetensors.index.json' index file (sharded model) at its root.

This class is returned by [get_safetensors_metadata()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.get_safetensors_metadata).

For more details regarding the safetensors format, check out https://huggingface.co/docs/safetensors/index#format.
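To make the format concrete, here is a self-contained sketch (no Hub access needed; the tensor name and sizes are invented for illustration) that builds a minimal safetensors payload and recomputes a `parameter_count`-style map from its JSON header:

```python
import json
import struct

# A safetensors file starts with an 8-byte little-endian header length,
# followed by a JSON header mapping tensor names to dtype/shape/data_offsets.
header = {
    "__metadata__": {"format": "pt"},
    "weight": {"dtype": "F32", "shape": [2, 2], "data_offsets": [0, 16]},
}
header_bytes = json.dumps(header).encode("utf-8")
payload = struct.pack("<Q", len(header_bytes)) + header_bytes + b"\x00" * 16

# Parse it back and aggregate parameters per dtype, mirroring the
# `parameter_count` attribute exposed by the metadata classes.
(header_len,) = struct.unpack("<Q", payload[:8])
parsed = json.loads(payload[8 : 8 + header_len])
metadata = parsed.pop("__metadata__", None)
parameter_count: dict[str, int] = {}
for name, info in parsed.items():
    count = 1
    for dim in info["shape"]:
        count *= dim
    parameter_count[info["dtype"]] = parameter_count.get(info["dtype"], 0) + count

print(parameter_count)  # {'F32': 4}
```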




</div>

### SafetensorsFileMetadata[[huggingface_hub.utils.SafetensorsFileMetadata]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.utils.SafetensorsFileMetadata</name><anchor>huggingface_hub.utils.SafetensorsFileMetadata</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_safetensors.py#L44</source><parameters>[{"name": "metadata", "val": ": dict"}, {"name": "tensors", "val": ": dict"}]</parameters><paramsdesc>- **metadata** (`dict`) --
  The metadata contained in the file.
- **tensors** (`dict[str, TensorInfo]`) --
  A map of all tensors. Keys are tensor names and values are information about the corresponding tensor, as a
  `TensorInfo` object.
- **parameter_count** (`dict[str, int]`) --
  A map of the number of parameters per data type. Keys are data types and values are the number of parameters
  of that data type.</paramsdesc><paramgroups>0</paramgroups></docstring>
Metadata for a Safetensors file hosted on the Hub.

This class is returned by [parse_safetensors_file_metadata()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.parse_safetensors_file_metadata).

For more details regarding the safetensors format, check out https://huggingface.co/docs/safetensors/index#format.




</div>

### SpaceInfo[[huggingface_hub.SpaceInfo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.SpaceInfo</name><anchor>huggingface_hub.SpaceInfo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L1012</source><parameters>[{"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **id** (`str`) --
  ID of the Space.
- **author** (`str`, *optional*) --
  Author of the Space.
- **sha** (`str`, *optional*) --
  Repo SHA at this particular revision.
- **created_at** (`datetime`, *optional*) --
  Date of creation of the repo on the Hub. Note that the lowest value is `2022-03-02T23:29:04.000Z`,
  corresponding to the date when we began to store creation dates.
- **last_modified** (`datetime`, *optional*) --
  Date of last commit to the repo.
- **private** (`bool`) --
  Is the repo private.
- **gated** (`Literal["auto", "manual", False]`, *optional*) --
  Is the repo gated.
  If so, whether there is manual or automatic approval.
- **disabled** (`bool`, *optional*) --
  Is the Space disabled.
- **host** (`str`, *optional*) --
  Host URL of the Space.
- **subdomain** (`str`, *optional*) --
  Subdomain of the Space.
- **likes** (`int`) --
  Number of likes of the Space.
- **tags** (`list[str]`) --
  List of tags of the Space.
- **siblings** (`list[RepoSibling]`) --
  List of [huggingface_hub.hf_api.RepoSibling](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.hf_api.RepoSibling) objects that constitute the Space.
- **card_data** (`SpaceCardData`, *optional*) --
  Space Card Metadata as a [huggingface_hub.repocard_data.SpaceCardData](/docs/huggingface_hub/main/en/package_reference/cards#huggingface_hub.SpaceCardData) object.
- **runtime** (`SpaceRuntime`, *optional*) --
  Space runtime information as a [huggingface_hub.hf_api.SpaceRuntime](/docs/huggingface_hub/main/en/package_reference/space_runtime#huggingface_hub.SpaceRuntime) object.
- **sdk** (`str`, *optional*) --
  SDK used by the Space.
- **models** (`list[str]`, *optional*) --
  List of models used by the Space.
- **datasets** (`list[str]`, *optional*) --
  List of datasets used by the Space.
- **trending_score** (`int`, *optional*) --
  Trending score of the Space.</paramsdesc><paramgroups>0</paramgroups></docstring>

Contains information about a Space on the Hub. This object is returned by [space_info()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.space_info) and [list_spaces()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.list_spaces).

> [!TIP]
> Most attributes of this class are optional. This is because the data returned by the Hub depends on the query made.
> In general, the more specific the query, the more information is returned. Conversely, when listing spaces
> using [list_spaces()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.list_spaces) only a subset of the attributes is returned.




</div>

### TensorInfo[[huggingface_hub.utils.TensorInfo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.utils.TensorInfo</name><anchor>huggingface_hub.utils.TensorInfo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_safetensors.py#L14</source><parameters>[{"name": "dtype", "val": ": typing.Literal['F64', 'F32', 'F16', 'BF16', 'I64', 'I32', 'I16', 'I8', 'U8', 'BOOL']"}, {"name": "shape", "val": ": list"}, {"name": "data_offsets", "val": ": tuple"}]</parameters><paramsdesc>- **dtype** (`str`) --
  The data type of the tensor ("F64", "F32", "F16", "BF16", "I64", "I32", "I16", "I8", "U8", "BOOL").
- **shape** (`list[int]`) --
  The shape of the tensor.
- **data_offsets** (`tuple[int, int]`) --
  The offsets of the data in the file as a tuple `[BEGIN, END]`.
- **parameter_count** (`int`) --
  The number of parameters in the tensor.</paramsdesc><paramgroups>0</paramgroups></docstring>
Information about a tensor.

For more details regarding the safetensors format, check out https://huggingface.co/docs/safetensors/index#format.
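As a quick illustration of how these fields relate (the values below are invented): the element count implied by `shape` must equal the byte span of `data_offsets` divided by the dtype's width in bytes.

```python
import math

# Hypothetical TensorInfo-like record for a 2x3 float32 tensor.
dtype, shape, data_offsets = "F32", [2, 3], (0, 24)

# Byte width of each safetensors dtype.
BYTES_PER_ELEMENT = {
    "F64": 8, "F32": 4, "F16": 2, "BF16": 2,
    "I64": 8, "I32": 4, "I16": 2, "I8": 1, "U8": 1, "BOOL": 1,
}

parameter_count = math.prod(shape)  # number of elements
byte_span = data_offsets[1] - data_offsets[0]
assert byte_span == parameter_count * BYTES_PER_ELEMENT[dtype]
```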




</div>

### User[[huggingface_hub.User]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.User</name><anchor>huggingface_hub.User</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L1411</source><parameters>[{"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **username** (`str`) --
  Name of the user on the Hub (unique).
- **fullname** (`str`) --
  User's full name.
- **avatar_url** (`str`) --
  URL of the user's avatar.
- **details** (`str`, *optional*) --
  User's details.
- **is_following** (`bool`, *optional*) --
  Whether the authenticated user is following this user.
- **is_pro** (`bool`, *optional*) --
  Whether the user is a pro user.
- **num_models** (`int`, *optional*) --
  Number of models created by the user.
- **num_datasets** (`int`, *optional*) --
  Number of datasets created by the user.
- **num_spaces** (`int`, *optional*) --
  Number of spaces created by the user.
- **num_discussions** (`int`, *optional*) --
  Number of discussions initiated by the user.
- **num_papers** (`int`, *optional*) --
  Number of papers authored by the user.
- **num_upvotes** (`int`, *optional*) --
  Number of upvotes received by the user.
- **num_likes** (`int`, *optional*) --
  Number of likes given by the user.
- **num_following** (`int`, *optional*) --
  Number of users this user is following.
- **num_followers** (`int`, *optional*) --
  Number of users following this user.
- **orgs** (list of `Organization`) --
  List of organizations the user is part of.</paramsdesc><paramgroups>0</paramgroups></docstring>

Contains information about a user on the Hub.




</div>

### UserLikes[[huggingface_hub.UserLikes]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.UserLikes</name><anchor>huggingface_hub.UserLikes</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L1324</source><parameters>[{"name": "user", "val": ": str"}, {"name": "total", "val": ": int"}, {"name": "datasets", "val": ": list[str]"}, {"name": "models", "val": ": list[str]"}, {"name": "spaces", "val": ": list[str]"}]</parameters><paramsdesc>- **user** (`str`) --
  Name of the user for which we fetched the likes.
- **total** (`int`) --
  Total number of likes.
- **datasets** (`list[str]`) --
  List of datasets liked by the user (as repo_ids).
- **models** (`list[str]`) --
  List of models liked by the user (as repo_ids).
- **spaces** (`list[str]`) --
  List of spaces liked by the user (as repo_ids).</paramsdesc><paramgroups>0</paramgroups></docstring>

Contains information about a user's likes on the Hub.




</div>

### WebhookInfo[[huggingface_hub.WebhookInfo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.WebhookInfo</name><anchor>huggingface_hub.WebhookInfo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L465</source><parameters>[{"name": "id", "val": ": str"}, {"name": "url", "val": ": Optional[str]"}, {"name": "job", "val": ": Optional[JobSpec]"}, {"name": "watched", "val": ": list[WebhookWatchedItem]"}, {"name": "domains", "val": ": list[constants.WEBHOOK_DOMAIN_T]"}, {"name": "secret", "val": ": Optional[str]"}, {"name": "disabled", "val": ": bool"}]</parameters><paramsdesc>- **id** (`str`) --
  ID of the webhook.
- **url** (`str`, *optional*) --
  URL of the webhook.
- **job** (`JobSpec`, *optional*) --
  Specifications of the Job to trigger.
- **watched** (`list[WebhookWatchedItem]`) --
  List of items watched by the webhook, see [WebhookWatchedItem](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.WebhookWatchedItem).
- **domains** (`list[WEBHOOK_DOMAIN_T]`) --
  List of domains the webhook is watching. Can be one of `["repo", "discussions"]`.
- **secret** (`str`, *optional*) --
  Secret of the webhook.
- **disabled** (`bool`) --
  Whether the webhook is disabled or not.</paramsdesc><paramgroups>0</paramgroups></docstring>
Data structure containing information about a webhook.

Exactly one of `url` or `job` is specified, never both.




</div>

### WebhookWatchedItem[[huggingface_hub.WebhookWatchedItem]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.WebhookWatchedItem</name><anchor>huggingface_hub.WebhookWatchedItem</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L450</source><parameters>[{"name": "type", "val": ": Literal['dataset', 'model', 'org', 'space', 'user']"}, {"name": "name", "val": ": str"}]</parameters><paramsdesc>- **type** (`Literal["dataset", "model", "org", "space", "user"]`) --
  Type of the item to be watched. Can be one of `["dataset", "model", "org", "space", "user"]`.
- **name** (`str`) --
  Name of the item to be watched. Can be the username, organization name, model name, dataset name or space name.</paramsdesc><paramgroups>0</paramgroups></docstring>
Data structure containing information about the items watched by a webhook.




</div>

## CommitOperation[[huggingface_hub.CommitOperationAdd]]

Below are the supported `CommitOperation` values:

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.CommitOperationAdd</name><anchor>huggingface_hub.CommitOperationAdd</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_commit_api.py#L125</source><parameters>[{"name": "path_in_repo", "val": ": str"}, {"name": "path_or_fileobj", "val": ": typing.Union[str, pathlib.Path, bytes, typing.BinaryIO]"}]</parameters><paramsdesc>- **path_in_repo** (`str`) --
  Relative filepath in the repo, for example: `"checkpoints/1fec34a/weights.bin"`
- **path_or_fileobj** (`str`, `Path`, `bytes`, or `BinaryIO`) --
  Either:
  - a path to a local file (as `str` or `pathlib.Path`) to upload
  - a buffer of bytes (`bytes`) holding the content of the file to upload
  - a "file object" (subclass of `io.BufferedIOBase`), typically obtained
    with `open(path, "rb")`. It must support `seek()` and `tell()` methods.</paramsdesc><paramgroups>0</paramgroups><raises>- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If `path_or_fileobj` is not one of `str`, `Path`, `bytes` or `io.BufferedIOBase`.
- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If `path_or_fileobj` is a `str` or `Path` but not a path to an existing file.
- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If `path_or_fileobj` is a `io.BufferedIOBase` but it doesn't support both
  `seek()` and `tell()`.</raises><raisederrors>``ValueError``</raisederrors></docstring>

Data structure holding necessary info to upload a file to a repository on the Hub.









<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>as_file</name><anchor>huggingface_hub.CommitOperationAdd.as_file</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_commit_api.py#L207</source><parameters>[{"name": "with_tqdm", "val": ": bool = False"}]</parameters><paramsdesc>- **with_tqdm** (`bool`, *optional*, defaults to `False`) --
  If True, iterating over the file object will display a progress bar. Only
  works if `path_or_fileobj` is a path to an actual file. Pure bytes and
  buffers are not supported.

A context manager that yields a file-like object for reading the underlying
data behind `path_or_fileobj`.



<ExampleCodeBlock anchor="huggingface_hub.CommitOperationAdd.as_file.example">

Example:

```python
>>> operation = CommitOperationAdd(
...     path_in_repo="remote/dir/weights.h5",
...     path_or_fileobj="./local/weights.h5",
... )

>>> with operation.as_file() as file:
...     content = file.read()

>>> with operation.as_file(with_tqdm=True) as file:
...     while True:
...         data = file.read(1024)
...         if not data:
...              break
config.json: 100%|█████████████████████████| 8.19k/8.19k [00:02<00:00, 3.72kB/s]

>>> with operation.as_file(with_tqdm=True) as file:
...     httpx.put(..., data=file)
config.json: 100%|█████████████████████████| 8.19k/8.19k [00:02<00:00, 3.72kB/s]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>b64content</name><anchor>huggingface_hub.CommitOperationAdd.b64content</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_commit_api.py#L257</source><parameters>[]</parameters></docstring>

The base64-encoded content of `path_or_fileobj`.

Returns: `bytes`


</div></div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.CommitOperationDelete</name><anchor>huggingface_hub.CommitOperationDelete</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_commit_api.py#L58</source><parameters>[{"name": "path_in_repo", "val": ": str"}, {"name": "is_folder", "val": ": typing.Union[bool, typing.Literal['auto']] = 'auto'"}]</parameters><paramsdesc>- **path_in_repo** (`str`) --
  Relative filepath in the repo, for example: `"checkpoints/1fec34a/weights.bin"`
  for a file or `"checkpoints/1fec34a/"` for a folder.
- **is_folder** (`bool` or `Literal["auto"]`, *optional*) --
  Whether the Delete Operation applies to a folder or not. If "auto", the path
  type (file or folder) is guessed automatically by checking whether the path
  ends with a "/" (folder) or not (file). To explicitly set the path type, pass
  `is_folder=True` or `is_folder=False`.

Data structure holding necessary info to delete a file or a folder from a repository
on the Hub.




</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.CommitOperationCopy</name><anchor>huggingface_hub.CommitOperationCopy</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_commit_api.py#L89</source><parameters>[{"name": "src_path_in_repo", "val": ": str"}, {"name": "path_in_repo", "val": ": str"}, {"name": "src_revision", "val": ": typing.Optional[str] = None"}, {"name": "_src_oid", "val": ": typing.Optional[str] = None"}, {"name": "_dest_oid", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **src_path_in_repo** (`str`) --
  Relative filepath in the repo of the file to be copied, e.g. `"checkpoints/1fec34a/weights.bin"`.
- **path_in_repo** (`str`) --
  Relative filepath in the repo where to copy the file, e.g. `"checkpoints/1fec34a/weights_copy.bin"`.
- **src_revision** (`str`, *optional*) --
  The git revision of the file to be copied. Can be any valid git revision.
  Default to the target commit revision.</paramsdesc><paramgroups>0</paramgroups></docstring>

Data structure holding necessary info to copy a file in a repository on the Hub.

Limitations:
- Only LFS files can be copied. To copy a regular file, you need to download it locally and re-upload it.
- Cross-repository copies are not supported.

Note: you can combine a [CommitOperationCopy](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.CommitOperationCopy) and a [CommitOperationDelete](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.CommitOperationDelete) to rename an LFS file on the Hub.
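As a sketch of that rename pattern (the repo id and file paths below are hypothetical; the `create_commit()` call, left commented out, is the step that actually contacts the Hub):

```python
from huggingface_hub import CommitOperationCopy, CommitOperationDelete

# Rename an LFS file by copying it to the new path, then deleting the old one.
operations = [
    CommitOperationCopy(
        src_path_in_repo="weights/model-v1.bin",
        path_in_repo="weights/model-final.bin",
    ),
    CommitOperationDelete(path_in_repo="weights/model-v1.bin"),
]

# from huggingface_hub import HfApi
# api = HfApi()
# api.create_commit(
#     repo_id="my-user/my-model",
#     operations=operations,
#     commit_message="Rename weights file",
# )
```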




</div>

## CommitScheduler[[huggingface_hub.CommitScheduler]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.CommitScheduler</name><anchor>huggingface_hub.CommitScheduler</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_commit_scheduler.py#L29</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "folder_path", "val": ": typing.Union[str, pathlib.Path]"}, {"name": "every", "val": ": typing.Union[int, float] = 5"}, {"name": "path_in_repo", "val": ": typing.Optional[str] = None"}, {"name": "repo_type", "val": ": typing.Optional[str] = None"}, {"name": "revision", "val": ": typing.Optional[str] = None"}, {"name": "private", "val": ": typing.Optional[bool] = None"}, {"name": "token", "val": ": typing.Optional[str] = None"}, {"name": "allow_patterns", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "ignore_patterns", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "squash_history", "val": ": bool = False"}, {"name": "hf_api", "val": ": typing.Optional[ForwardRef('HfApi')] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The id of the repo to commit to.
- **folder_path** (`str` or `Path`) --
  Path to the local folder to upload regularly.
- **every** (`int` or `float`, *optional*) --
  The number of minutes between each commit. Defaults to 5 minutes.
- **path_in_repo** (`str`, *optional*) --
  Relative path of the directory in the repo, for example: `"checkpoints/"`. Defaults to the root folder
  of the repository.
- **repo_type** (`str`, *optional*) --
  The type of the repo to commit to. Defaults to `model`.
- **revision** (`str`, *optional*) --
  The revision of the repo to commit to. Defaults to `main`.
- **private** (`bool`, *optional*) --
  Whether to make the repo private. If `None` (default), the repo will be public unless the organization's default is private. This value is ignored if the repo already exists.
- **token** (`str`, *optional*) --
  The token to use to commit to the repo. Defaults to the token saved on the machine.
- **allow_patterns** (`list[str]` or `str`, *optional*) --
  If provided, only files matching at least one pattern are uploaded.
- **ignore_patterns** (`list[str]` or `str`, *optional*) --
  If provided, files matching any of the patterns are not uploaded.
- **squash_history** (`bool`, *optional*) --
  Whether to squash the history of the repo after each commit. Defaults to `False`. Squashing commits is
  useful to avoid degraded performance on the repo when it grows too large.
- **hf_api** (`HfApi`, *optional*) --
  The [HfApi](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi) client to use to commit to the Hub. Can be set with custom settings (user agent, token,...).</paramsdesc><paramgroups>0</paramgroups></docstring>

Scheduler to upload a local folder to the Hub at regular intervals (e.g. push to hub every 5 minutes).

The recommended way to use the scheduler is as a context manager. This ensures that the scheduler is
properly stopped and a last commit is triggered when the script ends. The scheduler can also be stopped manually
with the `stop` method. Check out the [upload guide](https://huggingface.co/docs/huggingface_hub/guides/upload#scheduled-uploads)
to learn more about how to use it.



<ExampleCodeBlock anchor="huggingface_hub.CommitScheduler.example">

Example:
```py
>>> from pathlib import Path
>>> from huggingface_hub import CommitScheduler

# Scheduler uploads every 10 minutes
>>> csv_path = Path("watched_folder/data.csv")
>>> CommitScheduler(repo_id="test_scheduler", repo_type="dataset", folder_path=csv_path.parent, every=10)

>>> with csv_path.open("a") as f:
...     f.write("first line")

# Some time later (...)
>>> with csv_path.open("a") as f:
...     f.write("second line")
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.CommitScheduler.example-2">

Example using a context manager:
```py
>>> from pathlib import Path
>>> from huggingface_hub import CommitScheduler

>>> with CommitScheduler(repo_id="test_scheduler", repo_type="dataset", folder_path="watched_folder", every=10) as scheduler:
...     csv_path = Path("watched_folder/data.csv")
...     with csv_path.open("a") as f:
...         f.write("first line")
...     (...)
...     with csv_path.open("a") as f:
...         f.write("second line")

# Scheduler is now stopped and the last commit has been triggered
```

</ExampleCodeBlock>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>push_to_hub</name><anchor>huggingface_hub.CommitScheduler.push_to_hub</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_commit_scheduler.py#L204</source><parameters>[]</parameters></docstring>

Push folder to the Hub and return the commit info.

> [!WARNING]
> This method is not meant to be called directly. It is run in the background by the scheduler, respecting a
> queue mechanism to avoid concurrent commits. Making a direct call to the method might lead to concurrency
> issues.

The default behavior of `push_to_hub` is to assume an append-only folder. It lists all files in the folder and
uploads only changed files. If no changes are found, the method returns without committing anything. If you want
to change this behavior, you can inherit from [CommitScheduler](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.CommitScheduler) and override this method. This can be useful,
for example, to compress data into a single file before committing. For more details and examples, check
out our [integration guide](https://huggingface.co/docs/huggingface_hub/main/en/guides/upload#scheduled-uploads).
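A minimal sketch of such an override (it assumes the scheduler's `folder_path`, `api`, `repo_id`, `repo_type` and `revision` attributes; treat it as a starting point, not a drop-in implementation):

```python
import tempfile
import zipfile
from pathlib import Path

from huggingface_hub import CommitScheduler


class ZipScheduler(CommitScheduler):
    """Sketch: compress the watched folder into one archive per commit."""

    def push_to_hub(self):
        # Archive everything currently in the watched folder...
        with tempfile.TemporaryDirectory() as tmp:
            archive = Path(tmp) / "data.zip"
            with zipfile.ZipFile(archive, "w") as zf:
                for path in Path(self.folder_path).glob("**/*"):
                    if path.is_file():
                        zf.write(path, path.relative_to(self.folder_path))
            # ...and upload the single archive instead of individual files.
            return self.api.upload_file(
                repo_id=self.repo_id,
                repo_type=self.repo_type,
                revision=self.revision,
                path_in_repo="data.zip",
                path_or_fileobj=archive,
            )
```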


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>stop</name><anchor>huggingface_hub.CommitScheduler.stop</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_commit_scheduler.py#L157</source><parameters>[]</parameters></docstring>
Stop the scheduler.

A stopped scheduler cannot be restarted. Mostly intended for testing purposes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>trigger</name><anchor>huggingface_hub.CommitScheduler.trigger</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_commit_scheduler.py#L181</source><parameters>[]</parameters></docstring>
Trigger a `push_to_hub` and return a future.

This method is automatically called every `every` minutes. You can also call it manually to trigger a commit
immediately, without waiting for the next scheduled commit.


</div></div>

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/package_reference/hf_api.md" />

### Managing collections
https://huggingface.co/docs/huggingface_hub/main/package_reference/collections.md

# Managing collections

Check out the [HfApi](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi) documentation page for the reference of methods to manage collections on the Hub.

- Get collection content: [get_collection()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.get_collection)
- Create new collection: [create_collection()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_collection)
- Update a collection: [update_collection_metadata()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.update_collection_metadata)
- Delete a collection: [delete_collection()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.delete_collection)
- Add an item to a collection: [add_collection_item()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.add_collection_item)
- Update an item in a collection: [update_collection_item()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.update_collection_item)
- Remove an item from a collection: [delete_collection_item()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.delete_collection_item)


### Collection[[huggingface_hub.Collection]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.Collection</name><anchor>huggingface_hub.Collection</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L1183</source><parameters>[{"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **slug** (`str`) --
  Slug of the collection. E.g. `"TheBloke/recent-models-64f9a55bb3115b4f513ec026"`.
- **title** (`str`) --
  Title of the collection. E.g. `"Recent models"`.
- **owner** (`str`) --
  Owner of the collection. E.g. `"TheBloke"`.
- **items** (`list[CollectionItem]`) --
  List of items in the collection.
- **last_updated** (`datetime`) --
  Date of the last update of the collection.
- **position** (`int`) --
  Position of the collection in the list of collections of the owner.
- **private** (`bool`) --
  Whether the collection is private or not.
- **theme** (`str`) --
  Theme of the collection. E.g. `"green"`.
- **upvotes** (`int`) --
  Number of upvotes of the collection.
- **description** (`str`, *optional*) --
  Description of the collection, as plain text.
- **url** (`str`) --
  (property) URL of the collection on the Hub.</paramsdesc><paramgroups>0</paramgroups></docstring>

Contains information about a Collection on the Hub.




</div>

### CollectionItem[[huggingface_hub.CollectionItem]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.CollectionItem</name><anchor>huggingface_hub.CollectionItem</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L1138</source><parameters>[{"name": "_id", "val": ": str"}, {"name": "id", "val": ": str"}, {"name": "type", "val": ": CollectionItemType_T"}, {"name": "position", "val": ": int"}, {"name": "note", "val": ": Optional[dict] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **item_object_id** (`str`) --
  Unique ID of the item in the collection.
- **item_id** (`str`) --
  ID of the underlying object on the Hub. Can be either a repo_id, a paper id or a collection slug.
  e.g. `"jbilcke-hf/ai-comic-factory"`, `"2307.09288"`, `"celinah/cerebras-function-calling-682607169c35fbfa98b30b9a"`.
- **item_type** (`str`) --
  Type of the underlying object. Can be one of `"model"`, `"dataset"`, `"space"`, `"paper"` or `"collection"`.
- **position** (`int`) --
  Position of the item in the collection.
- **note** (`str`, *optional*) --
  Note associated with the item, as plain text.</paramsdesc><paramgroups>0</paramgroups></docstring>

Contains information about an item of a Collection (model, dataset, Space, paper or collection).




</div>

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/package_reference/collections.md" />

### Strict Dataclasses
https://huggingface.co/docs/huggingface_hub/main/package_reference/dataclasses.md

# Strict Dataclasses

The `huggingface_hub` package provides a utility to create **strict dataclasses**. These are enhanced versions of Python's standard `dataclass` with additional validation features. Strict dataclasses ensure that fields are validated both during initialization and assignment, making them ideal for scenarios where data integrity is critical.

## Overview

Strict dataclasses are created using the `@strict` decorator. They extend the functionality of regular dataclasses by:

- Validating field types based on type hints
- Supporting custom validators for additional checks
- Optionally allowing arbitrary keyword arguments in the constructor
- Validating fields both at initialization and during assignment

## Benefits

- **Data Integrity**: Ensures fields always contain valid data
- **Ease of Use**: Integrates seamlessly with Python's `dataclass` module
- **Flexibility**: Supports custom validators for complex validation logic
- **Lightweight**: Requires no additional dependencies such as Pydantic, attrs, or similar libraries

## Usage

### Basic Example

```python
from dataclasses import dataclass
from huggingface_hub.dataclasses import strict, as_validated_field

# Custom validator to ensure a value is positive
@as_validated_field
def positive_int(value: int):
    if not value > 0:
        raise ValueError(f"Value must be positive, got {value}")

@strict
@dataclass
class Config:
    model_type: str
    hidden_size: int = positive_int(default=16)
    vocab_size: int = 32  # Default value

    # Methods named `validate_xxx` are treated as class-wise validators
    def validate_big_enough_vocab(self):
        if self.vocab_size < self.hidden_size:
            raise ValueError(f"vocab_size ({self.vocab_size}) must be greater than hidden_size ({self.hidden_size})")
```

Fields are validated during initialization:

```python
config = Config(model_type="bert", hidden_size=24)   # Valid
config = Config(model_type="bert", hidden_size=-1)   # Raises StrictDataclassFieldValidationError
```

Consistency between fields is also validated during initialization (class-wise validation):

```python
# `vocab_size` too small compared to `hidden_size`
config = Config(model_type="bert", hidden_size=32, vocab_size=16)   # Raises StrictDataclassClassValidationError
```

Fields are also validated during assignment:

```python
config.hidden_size = 512  # Valid
config.hidden_size = -1   # Raises StrictDataclassFieldValidationError
```

To re-run class-wise validation after assignment, you must call `.validate()` explicitly:

```python
config.validate()  # Runs all class validators
```

### Custom Validators

You can attach multiple custom validators to fields using `validated_field`. A validator is a callable that takes a single argument and raises an exception if the value is invalid.

```python
from dataclasses import dataclass
from huggingface_hub.dataclasses import strict, validated_field

def positive_int(value: int):
    if not value > 0:
        raise ValueError(f"Value must be positive, got {value}")

def multiple_of_64(value: int):
    if value % 64 != 0:
        raise ValueError(f"Value must be a multiple of 64, got {value}")

@strict
@dataclass
class Config:
    hidden_size: int = validated_field(validator=[positive_int, multiple_of_64])
```

In this example, both validators are applied to the `hidden_size` field.

### Additional Keyword Arguments

By default, strict dataclasses only accept fields defined in the class. You can allow additional keyword arguments by setting `accept_kwargs=True` in the `@strict` decorator.

```python
from dataclasses import dataclass
from huggingface_hub.dataclasses import strict

@strict(accept_kwargs=True)
@dataclass
class ConfigWithKwargs:
    model_type: str
    vocab_size: int = 16

config = ConfigWithKwargs(model_type="bert", vocab_size=30000, extra_field="extra_value")
print(config)  # ConfigWithKwargs(model_type='bert', vocab_size=30000, *extra_field='extra_value')
```

Additional keyword arguments appear in the string representation of the dataclass but are prefixed with `*` to highlight that they are not validated.

### Integration with Type Hints

Strict dataclasses respect type hints and validate them automatically. For example:

```python
from typing import List
from dataclasses import dataclass
from huggingface_hub.dataclasses import strict

@strict
@dataclass
class Config:
    layers: List[int]

config = Config(layers=[64, 128])  # Valid
config = Config(layers="not_a_list")  # Raises StrictDataclassFieldValidationError
```

Supported types include:
- Any
- Union
- Optional
- Literal
- List
- Dict
- Tuple
- Set

And any combination of these types. If you need more complex type validation, you can implement it through a custom validator.

### Class validators

Methods named `validate_xxx` are treated as class validators. These methods must only take `self` as an argument. Class validators are run once during initialization, right after `__post_init__`. You can define as many of them as needed—they'll be executed sequentially in the order they appear.

Note that class validators are not automatically re-run when a field is updated after initialization. To manually re-validate the object, you need to call `obj.validate()`.

```py
from dataclasses import dataclass
from huggingface_hub.dataclasses import strict

@strict
@dataclass
class Config:
    foo: str
    foo_length: int
    upper_case: bool = False

    def validate_foo_length(self):
        if len(self.foo) != self.foo_length:
            raise ValueError(f"foo must be {self.foo_length} characters long, got {len(self.foo)}")

    def validate_foo_casing(self):
        if self.upper_case and self.foo.upper() != self.foo:
            raise ValueError(f"foo must be uppercase, got {self.foo}")

config = Config(foo="bar", foo_length=3) # ok

config.upper_case = True
config.validate() # Raises StrictDataclassClassValidationError

Config(foo="abcd", foo_length=3) # Raises StrictDataclassClassValidationError
Config(foo="Bar", foo_length=3, upper_case=True) # Raises StrictDataclassClassValidationError
```

> [!WARNING]
> The method name `validate` is reserved on strict dataclasses.
> To prevent unexpected behavior, a `StrictDataclassDefinitionError` is raised if your class already defines one.

## API Reference

### `@strict`[[huggingface_hub.dataclasses.strict]]

The `@strict` decorator enhances a dataclass with strict validation.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.dataclasses.strict</name><anchor>huggingface_hub.dataclasses.strict</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/dataclasses.py#L55</source><parameters>[{"name": "accept_kwargs", "val": ": bool = False"}]</parameters><paramsdesc>- **cls** --
  The class to convert to a strict dataclass.
- **accept_kwargs** (`bool`, *optional*) --
  If True, allows arbitrary keyword arguments in `__init__`. Defaults to False.</paramsdesc><paramgroups>0</paramgroups><retdesc>The enhanced dataclass with strict validation on field assignment.</retdesc></docstring>

Decorator to add strict validation to a dataclass.

This decorator must be used on top of `@dataclass` to ensure IDEs and static typing tools
recognize the class as a dataclass.

Can be used with or without arguments:
- `@strict`
- `@strict(accept_kwargs=True)`





<ExampleCodeBlock anchor="huggingface_hub.dataclasses.strict.example">

Example:
```py
>>> from dataclasses import dataclass
>>> from huggingface_hub.dataclasses import as_validated_field, strict, validated_field

>>> @as_validated_field
... def positive_int(value: int):
...     if not value >= 0:
...         raise ValueError(f"Value must be positive, got {value}")

>>> @strict(accept_kwargs=True)
... @dataclass
... class User:
...     name: str
...     age: int = positive_int(default=10)

# Initialize
>>> User(name="John")
User(name='John', age=10)

# Extra kwargs are accepted
>>> User(name="John", age=30, lastname="Doe")
User(name='John', age=30, *lastname='Doe')

# Invalid type => raises
>>> User(name="John", age="30")
huggingface_hub.errors.StrictDataclassFieldValidationError: Validation error for field 'age':
    TypeError: Field 'age' expected int, got str (value: '30')

# Invalid value => raises
>>> User(name="John", age=-1)
huggingface_hub.errors.StrictDataclassFieldValidationError: Validation error for field 'age':
    ValueError: Value must be positive, got -1
```

</ExampleCodeBlock>


</div>

### `validate_typed_dict`[[huggingface_hub.dataclasses.validate_typed_dict]]

Method to validate that a dictionary conforms to the types defined in a `TypedDict` class.

This is the equivalent of dataclass validation, but for `TypedDict`s. Since typed dicts are never instantiated (they are only used by static type checkers), the validation step must be called manually.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.dataclasses.validate_typed_dict</name><anchor>huggingface_hub.dataclasses.validate_typed_dict</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/dataclasses.py#L255</source><parameters>[{"name": "schema", "val": ": type"}, {"name": "data", "val": ": dict"}]</parameters><paramsdesc>- **schema** (`type[TypedDictType]`) --
  The TypedDict class defining the expected structure and types.
- **data** (`dict`) --
  The dictionary to validate.</paramsdesc><paramgroups>0</paramgroups><raises>- ``StrictDataclassFieldValidationError`` -- 
  If any field in the dictionary does not conform to the expected type.</raises><raisederrors>``StrictDataclassFieldValidationError``</raisederrors></docstring>

Validate that a dictionary conforms to the types defined in a TypedDict class.

Under the hood, the typed dict is converted to a strict dataclass and validated using the `@strict` decorator.







<ExampleCodeBlock anchor="huggingface_hub.dataclasses.validate_typed_dict.example">

Example:
```py
>>> from typing import Annotated, TypedDict
>>> from huggingface_hub.dataclasses import validate_typed_dict

>>> def positive_int(value: int):
...     if not value >= 0:
...         raise ValueError(f"Value must be positive, got {value}")

>>> class User(TypedDict):
...     name: str
...     age: Annotated[int, positive_int]

>>> # Valid data
>>> validate_typed_dict(User, {"name": "John", "age": 30})

>>> # Invalid type for age
>>> validate_typed_dict(User, {"name": "John", "age": "30"})
huggingface_hub.errors.StrictDataclassFieldValidationError: Validation error for field 'age':
    TypeError: Field 'age' expected int, got str (value: '30')

>>> # Invalid value for age
>>> validate_typed_dict(User, {"name": "John", "age": -1})
huggingface_hub.errors.StrictDataclassFieldValidationError: Validation error for field 'age':
    ValueError: Value must be positive, got -1
```

</ExampleCodeBlock>


</div>

### `as_validated_field`[[huggingface_hub.dataclasses.as_validated_field]]

Decorator to create a `validated_field`. Recommended for fields with a single validator to avoid boilerplate code.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.dataclasses.as_validated_field</name><anchor>huggingface_hub.dataclasses.as_validated_field</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/dataclasses.py#L395</source><parameters>[{"name": "validator", "val": ": typing.Callable[[typing.Any], NoneType]"}]</parameters><paramsdesc>- **validator** (`Callable`) --
  A method that takes a value as input and raises ValueError/TypeError if the value is invalid.</paramsdesc><paramgroups>0</paramgroups></docstring>

Decorates a validator function as a `validated_field` (i.e. a dataclass field with a custom validator).




</div>

### `validated_field`[[huggingface_hub.dataclasses.validated_field]]

Creates a dataclass field with custom validation.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.dataclasses.validated_field</name><anchor>huggingface_hub.dataclasses.validated_field</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/dataclasses.py#L352</source><parameters>[{"name": "validator", "val": ": typing.Union[list[typing.Callable[[typing.Any], NoneType]], typing.Callable[[typing.Any], NoneType]]"}, {"name": "default", "val": ": typing.Union[typing.Any, dataclasses._MISSING_TYPE] = <dataclasses._MISSING_TYPE object at 0x7f1789b07610>"}, {"name": "default_factory", "val": ": typing.Union[typing.Callable[[], typing.Any], dataclasses._MISSING_TYPE] = <dataclasses._MISSING_TYPE object at 0x7f1789b07610>"}, {"name": "init", "val": ": bool = True"}, {"name": "repr", "val": ": bool = True"}, {"name": "hash", "val": ": typing.Optional[bool] = None"}, {"name": "compare", "val": ": bool = True"}, {"name": "metadata", "val": ": typing.Optional[dict] = None"}, {"name": "**kwargs", "val": ": typing.Any"}]</parameters><paramsdesc>- **validator** (`Callable` or `list[Callable]`) --
  A method that takes a value as input and raises ValueError/TypeError if the value is invalid.
  Can be a list of validators to apply multiple checks.
- ****kwargs** --
  Additional arguments to pass to `dataclasses.field()`.</paramsdesc><paramgroups>0</paramgroups><retdesc>A field with the validator attached in metadata</retdesc></docstring>

Create a dataclass field with a custom validator.

Useful to apply several checks to a field. If only applying one rule, check out the `as_validated_field` decorator.






</div>

### Errors[[huggingface_hub.errors.StrictDataclassError]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.errors.StrictDataclassError</name><anchor>huggingface_hub.errors.StrictDataclassError</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L356</source><parameters>""</parameters></docstring>
Base exception for strict dataclasses.

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.errors.StrictDataclassDefinitionError</name><anchor>huggingface_hub.errors.StrictDataclassDefinitionError</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L360</source><parameters>""</parameters></docstring>
Exception thrown when a strict dataclass is defined incorrectly.

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.errors.StrictDataclassFieldValidationError</name><anchor>huggingface_hub.errors.StrictDataclassFieldValidationError</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L364</source><parameters>[{"name": "field", "val": ": str"}, {"name": "cause", "val": ": Exception"}]</parameters></docstring>
Exception thrown when a strict dataclass fails validation for a given field.

</div>

## Why Not Use `pydantic`? (or `attrs`? or `marshmallow_dataclass`?)

- See discussion in https://github.com/huggingface/transformers/issues/36329 regarding adding Pydantic as a dependency. It would be a heavy addition and require careful logic to support both v1 and v2.
- We don't need most of Pydantic's features, especially those related to automatic casting, jsonschema, serialization, aliases, etc.
- We don't need the ability to instantiate a class from a dictionary.
- We don't want to mutate data. In `@strict`, "validation" means "checking if a value is valid." In Pydantic, "validation" means "casting a value, possibly mutating it, and then checking if it's valid."
- We don't need blazing-fast validation. `@strict` isn't designed for heavy loads where performance is critical. Common use cases involve validating a model configuration (performed once and negligible compared to running a model). This allows us to keep the code minimal.

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/package_reference/dataclasses.md" />

### Cache-system reference
https://huggingface.co/docs/huggingface_hub/main/package_reference/cache.md

# Cache-system reference

The caching system was updated in v0.8.0 to become the central cache-system shared
across libraries that depend on the Hub. Read the [cache-system guide](../guides/manage-cache)
for a detailed presentation of caching at HF.

## Helpers

### try_to_load_from_cache[[huggingface_hub.try_to_load_from_cache]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.try_to_load_from_cache</name><anchor>huggingface_hub.try_to_load_from_cache</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/file_download.py#L1407</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "filename", "val": ": str"}, {"name": "cache_dir", "val": ": typing.Union[str, pathlib.Path, NoneType] = None"}, {"name": "revision", "val": ": typing.Optional[str] = None"}, {"name": "repo_type", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **cache_dir** (`str` or `os.PathLike`) --
  The folder where the cached files lie.
- **repo_id** (`str`) --
  The ID of the repo on huggingface.co.
- **filename** (`str`) --
  The filename to look for inside `repo_id`.
- **revision** (`str`, *optional*) --
  The specific model version to use. Will default to `"main"` if it's not provided and no `commit_hash` is
  provided either.
- **repo_type** (`str`, *optional*) --
  The type of the repository. Will default to `"model"`.</paramsdesc><paramgroups>0</paramgroups><rettype>`Optional[str]` or `_CACHED_NO_EXIST`</rettype><retdesc>Will return `None` if the file was not cached. Otherwise:
- The exact path to the cached file if it's found in the cache
- A special value `_CACHED_NO_EXIST` if the file does not exist at the given commit hash and this fact was
  cached.</retdesc></docstring>

Explores the cache to return the latest cached file for a given revision if found.

This function will not raise any exception if the file is not cached.







<ExampleCodeBlock anchor="huggingface_hub.try_to_load_from_cache.example">

Example:

```python
from huggingface_hub import try_to_load_from_cache, _CACHED_NO_EXIST

filepath = try_to_load_from_cache(repo_id="bert-base-uncased", filename="config.json")
if isinstance(filepath, str):
    # file exists and is cached
    ...
elif filepath is _CACHED_NO_EXIST:
    # non-existence of file is cached
    ...
else:
    # file is not cached
    ...
```

</ExampleCodeBlock>


</div>

### cached_assets_path[[huggingface_hub.cached_assets_path]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.cached_assets_path</name><anchor>huggingface_hub.cached_assets_path</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_cache_assets.py#L21</source><parameters>[{"name": "library_name", "val": ": str"}, {"name": "namespace", "val": ": str = 'default'"}, {"name": "subfolder", "val": ": str = 'default'"}, {"name": "assets_dir", "val": ": typing.Union[str, pathlib.Path, NoneType] = None"}]</parameters><paramsdesc>- **library_name** (`str`) --
  Name of the library that will manage the cache folder. Example: `"dataset"`.
- **namespace** (`str`, *optional*, defaults to "default") --
  Namespace to which the data belongs. Example: `"SQuAD"`.
- **subfolder** (`str`, *optional*, defaults to "default") --
  Subfolder in which the data will be stored. Example: `extracted`.
- **assets_dir** (`str`, `Path`, *optional*) --
  Path to the folder where assets are cached. This must not be the same folder
  where Hub files are cached. Defaults to `HF_HOME / "assets"` if not provided.
  Can also be set with `HF_ASSETS_CACHE` environment variable.</paramsdesc><paramgroups>0</paramgroups><retdesc>Path to the cache folder (`Path`).</retdesc></docstring>
Return a folder path to cache arbitrary files.

`huggingface_hub` provides a canonical folder path to store assets. This is the
recommended way to integrate caching in a downstream library, as it benefits from
the built-in tools to scan and delete the cache properly.

A distinction is made between files cached from the Hub and assets. Files from the
Hub are cached in a git-aware manner and entirely managed by `huggingface_hub`. See
the [related documentation](https://huggingface.co/docs/huggingface_hub/how-to-cache).
All other files that a downstream library caches are considered "assets": files
downloaded from external sources, extracted from a .tar archive, preprocessed for
training, etc.

Once the folder path is generated, it is guaranteed to exist and to be a directory.
The path is built from three levels of depth: the library name, a namespace, and a
subfolder. These three levels grant flexibility while allowing `huggingface_hub` to
expect folders when scanning or deleting parts of the assets cache. Within a library,
all namespaces are expected to share the same subset of subfolder names, but this is
not a mandatory rule. The downstream library then has full control over the file
structure adopted within its cache. The namespace and subfolder are optional (they
default to `"default/"`), but the library name is mandatory, as every downstream
library should manage its own cache.

<ExampleCodeBlock anchor="huggingface_hub.cached_assets_path.example">

Expected tree:
```text
    assets/
    └── datasets/
    │   ├── SQuAD/
    │   │   ├── downloaded/
    │   │   ├── extracted/
    │   │   └── processed/
    │   ├── Helsinki-NLP--tatoeba_mt/
    │       ├── downloaded/
    │       ├── extracted/
    │       └── processed/
    └── transformers/
        ├── default/
        │   ├── something/
        ├── bert-base-cased/
        │   ├── default/
        │   └── training/
    hub/
    └── models--julien-c--EsperBERTo-small/
        ├── blobs/
        │   ├── (...)
        │   ├── (...)
        ├── refs/
        │   └── (...)
        └── [ 128]  snapshots/
            ├── 2439f60ef33a0d46d85da5001d52aeda5b00ce9f/
            │   ├── (...)
            └── bbc77c8132af1cc5cf678da3f1ddf2de43606d48/
                └── (...)
```

</ExampleCodeBlock>






<ExampleCodeBlock anchor="huggingface_hub.cached_assets_path.example-2">

Example:
```py
>>> from huggingface_hub import cached_assets_path

>>> cached_assets_path(library_name="datasets", namespace="SQuAD", subfolder="download")
PosixPath('/home/wauplin/.cache/huggingface/extra/datasets/SQuAD/download')

>>> cached_assets_path(library_name="datasets", namespace="SQuAD", subfolder="extracted")
PosixPath('/home/wauplin/.cache/huggingface/extra/datasets/SQuAD/extracted')

>>> cached_assets_path(library_name="datasets", namespace="Helsinki-NLP/tatoeba_mt")
PosixPath('/home/wauplin/.cache/huggingface/extra/datasets/Helsinki-NLP--tatoeba_mt/default')

>>> cached_assets_path(library_name="datasets", assets_dir="/tmp/tmp123456")
PosixPath('/tmp/tmp123456/datasets/default/default')
```

</ExampleCodeBlock>


</div>

### scan_cache_dir[[huggingface_hub.scan_cache_dir]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.scan_cache_dir</name><anchor>huggingface_hub.scan_cache_dir</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_cache_manager.py#L561</source><parameters>[{"name": "cache_dir", "val": ": typing.Union[str, pathlib.Path, NoneType] = None"}]</parameters><paramsdesc>- **cache_dir** (`str` or `Path`, `optional`) --
  Cache directory to scan. Defaults to the default HF cache directory.</paramsdesc><paramgroups>0</paramgroups></docstring>
Scan the entire HF cache-system and return a [~HFCacheInfo](/docs/huggingface_hub/main/en/package_reference/cache#huggingface_hub.HFCacheInfo) structure.

Use `scan_cache_dir` in order to programmatically scan your cache-system. The cache
will be scanned repo by repo. If a repo is corrupted, a [~CorruptedCacheException](/docs/huggingface_hub/main/en/package_reference/cache#huggingface_hub.CorruptedCacheException)
will be thrown internally but captured and returned in the [~HFCacheInfo](/docs/huggingface_hub/main/en/package_reference/cache#huggingface_hub.HFCacheInfo)
structure. Only valid repos get a proper report.

<ExampleCodeBlock anchor="huggingface_hub.scan_cache_dir.example">

```py
>>> from huggingface_hub import scan_cache_dir

>>> scan_cache_dir()
HFCacheInfo(
    size_on_disk=3398085269,
    repos=frozenset({
        CachedRepoInfo(
            repo_id='t5-small',
            repo_type='model',
            repo_path=PosixPath(...),
            size_on_disk=970726914,
            nb_files=11,
            revisions=frozenset({
                CachedRevisionInfo(
                    commit_hash='d78aea13fa7ecd06c29e3e46195d6341255065d5',
                    size_on_disk=970726339,
                    snapshot_path=PosixPath(...),
                    files=frozenset({
                        CachedFileInfo(
                            file_name='config.json',
                            size_on_disk=1197,
                            file_path=PosixPath(...),
                            blob_path=PosixPath(...),
                        ),
                        CachedFileInfo(...),
                        ...
                    }),
                ),
                CachedRevisionInfo(...),
                ...
            }),
        ),
        CachedRepoInfo(...),
        ...
    }),
    warnings=[
        CorruptedCacheException("Snapshots dir doesn't exist in cached repo: ..."),
        CorruptedCacheException(...),
        ...
    ],
)
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.scan_cache_dir.example-2">

You can also print a detailed report directly from the `hf` command line using:
```text
> hf cache ls
ID                          SIZE     LAST_ACCESSED LAST_MODIFIED REFS
--------------------------- -------- ------------- ------------- -----------
dataset/nyu-mll/glue          157.4M 2 days ago    2 days ago    main script
model/LiquidAI/LFM2-VL-1.6B     3.2G 4 days ago    4 days ago    main
model/microsoft/UserLM-8b      32.1G 4 days ago    4 days ago    main

Done in 0.0s. Scanned 6 repo(s) for a total of 3.4G.
Got 1 warning(s) while scanning. Use -vvv to print details.
```

</ExampleCodeBlock>



> [!WARNING]
> Raises:
>
> - `CacheNotFound` -- If the cache directory does not exist.
> - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- If the cache directory is a file, instead of a directory.
Returns: a [~HFCacheInfo](/docs/huggingface_hub/main/en/package_reference/cache#huggingface_hub.HFCacheInfo) object.


</div>
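If nothing has ever been downloaded, the default cache directory may not exist yet and `scan_cache_dir` raises `CacheNotFound`. A sketch of handling that case (the path below is hypothetical):

```python
from huggingface_hub import scan_cache_dir
from huggingface_hub.errors import CacheNotFound

try:
    hf_cache_info = scan_cache_dir(cache_dir="/path/that/does/not/exist")  # hypothetical path
except CacheNotFound:
    hf_cache_info = None  # no cache yet: nothing was ever downloaded

print(hf_cache_info)
```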

## Data structures

All structures are built and returned by [scan_cache_dir()](/docs/huggingface_hub/main/en/package_reference/cache#huggingface_hub.scan_cache_dir) and are immutable.
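They can be traversed like regular (frozen) dataclasses. A minimal sketch, scanning a fresh temporary directory so the result is deliberately empty:

```python
import tempfile

from huggingface_hub import scan_cache_dir

# Scanning an existing but empty directory is valid: no repos are found
with tempfile.TemporaryDirectory() as tmp:
    hf_cache_info = scan_cache_dir(cache_dir=tmp)

# `repos` is a frozenset; sort it to get a stable report
report = [
    (repo.repo_id, repo.repo_type, repo.size_on_disk)
    for repo in sorted(hf_cache_info.repos, key=lambda r: r.size_on_disk, reverse=True)
]
print(hf_cache_info.size_on_disk, report)
```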

### HFCacheInfo[[huggingface_hub.HFCacheInfo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.HFCacheInfo</name><anchor>huggingface_hub.HFCacheInfo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_cache_manager.py#L331</source><parameters>[{"name": "size_on_disk", "val": ": int"}, {"name": "repos", "val": ": frozenset"}, {"name": "warnings", "val": ": list"}]</parameters><paramsdesc>- **size_on_disk** (`int`) --
  Sum of all valid repo sizes in the cache-system.
- **repos** (`frozenset[CachedRepoInfo]`) --
  Set of [~CachedRepoInfo](/docs/huggingface_hub/main/en/package_reference/cache#huggingface_hub.CachedRepoInfo) describing all valid cached repos found on the
  cache-system while scanning.
- **warnings** (`list[CorruptedCacheException]`) --
  List of [~CorruptedCacheException](/docs/huggingface_hub/main/en/package_reference/cache#huggingface_hub.CorruptedCacheException) that occurred while scanning the cache.
  Those exceptions are captured so that the scan can continue. Corrupted repos
  are skipped from the scan.</paramsdesc><paramgroups>0</paramgroups></docstring>
Frozen data structure holding information about the entire cache-system.

This data structure is returned by [scan_cache_dir()](/docs/huggingface_hub/main/en/package_reference/cache#huggingface_hub.scan_cache_dir) and is immutable.



> [!WARNING]
> Here `size_on_disk` is equal to the sum of all repo sizes (only blobs). However if
> some cached repos are corrupted, their sizes are not taken into account.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_revisions</name><anchor>huggingface_hub.HFCacheInfo.delete_revisions</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_cache_manager.py#L366</source><parameters>[{"name": "*revisions", "val": ": str"}]</parameters></docstring>
Prepare the strategy to delete one or more revisions cached locally.

Input revisions can be any revision hash. If a revision hash is not found in the
local cache, a warning is issued but no error is raised. Revisions can come from
different cached repos, since hashes are unique across repos.

<ExampleCodeBlock anchor="huggingface_hub.HFCacheInfo.delete_revisions.example">

Examples:
```py
>>> from huggingface_hub import scan_cache_dir
>>> cache_info = scan_cache_dir()
>>> delete_strategy = cache_info.delete_revisions(
...     "81fd1d6e7847c99f5862c9fb81387956d99ec7aa"
... )
>>> print(f"Will free {delete_strategy.expected_freed_size_str}.")
Will free 7.9K.
>>> delete_strategy.execute()
Cache deletion done. Saved 7.9K.
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.HFCacheInfo.delete_revisions.example-2">

```py
>>> from huggingface_hub import scan_cache_dir
>>> scan_cache_dir().delete_revisions(
...     "81fd1d6e7847c99f5862c9fb81387956d99ec7aa",
...     "e2983b237dccf3ab4937c97fa717319a9ca1a96d",
...     "6c0e6080953db56375760c0471a8c5f2929baf11",
... ).execute()
Cache deletion done. Saved 8.6G.
```

</ExampleCodeBlock>

> [!WARNING]
> `delete_revisions` returns a [DeleteCacheStrategy](/docs/huggingface_hub/main/en/package_reference/cache#huggingface_hub.DeleteCacheStrategy) object that needs to
> be executed. The [DeleteCacheStrategy](/docs/huggingface_hub/main/en/package_reference/cache#huggingface_hub.DeleteCacheStrategy) is not meant to be modified but
> allows having a dry run before actually executing the deletion.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>export_as_table</name><anchor>huggingface_hub.HFCacheInfo.export_as_table</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_cache_manager.py#L466</source><parameters>[{"name": "verbosity", "val": ": int = 0"}]</parameters><paramsdesc>- **verbosity** (`int`, *optional*) --
  The verbosity level. Defaults to 0.</paramsdesc><paramgroups>0</paramgroups><rettype>`str`</rettype><retdesc>The table as a string.</retdesc></docstring>
Generate a table from the [HFCacheInfo](/docs/huggingface_hub/main/en/package_reference/cache#huggingface_hub.HFCacheInfo) object.

Pass `verbosity=0` to get a table with a single row per repo, with columns
"repo_id", "repo_type", "size_on_disk", "nb_files", "last_accessed", "last_modified", "refs", "local_path".

Pass `verbosity=1` to get a table with a row per repo and revision (thus multiple rows can appear for a single repo), with columns
"repo_id", "repo_type", "revision", "size_on_disk", "nb_files", "last_modified", "refs", "local_path".

<ExampleCodeBlock anchor="huggingface_hub.HFCacheInfo.export_as_table.example">

Example:
```py
>>> from huggingface_hub.utils import scan_cache_dir

>>> hf_cache_info = scan_cache_dir()

>>> print(hf_cache_info.export_as_table())
REPO ID                                             REPO TYPE SIZE ON DISK NB FILES LAST_ACCESSED LAST_MODIFIED REFS LOCAL PATH
--------------------------------------------------- --------- ------------ -------- ------------- ------------- ---- --------------------------------------------------------------------------------------------------
roberta-base                                        model             2.7M        5 1 day ago     1 week ago    main ~/.cache/huggingface/hub/models--roberta-base
suno/bark                                           model             8.8K        1 1 week ago    1 week ago    main ~/.cache/huggingface/hub/models--suno--bark
t5-base                                             model           893.8M        4 4 days ago    7 months ago  main ~/.cache/huggingface/hub/models--t5-base
t5-large                                            model             3.0G        4 5 weeks ago   5 months ago  main ~/.cache/huggingface/hub/models--t5-large

>>> print(hf_cache_info.export_as_table(verbosity=1))
REPO ID                                             REPO TYPE REVISION                                 SIZE ON DISK NB FILES LAST_MODIFIED REFS LOCAL PATH
--------------------------------------------------- --------- ---------------------------------------- ------------ -------- ------------- ---- -----------------------------------------------------------------------------------------------------------------------------------------------------
roberta-base                                        model     e2da8e2f811d1448a5b465c236feacd80ffbac7b         2.7M        5 1 week ago    main ~/.cache/huggingface/hub/models--roberta-base/snapshots/e2da8e2f811d1448a5b465c236feacd80ffbac7b
suno/bark                                           model     70a8a7d34168586dc5d028fa9666aceade177992         8.8K        1 1 week ago    main ~/.cache/huggingface/hub/models--suno--bark/snapshots/70a8a7d34168586dc5d028fa9666aceade177992
t5-base                                             model     a9723ea7f1b39c1eae772870f3b547bf6ef7e6c1       893.8M        4 7 months ago  main ~/.cache/huggingface/hub/models--t5-base/snapshots/a9723ea7f1b39c1eae772870f3b547bf6ef7e6c1
t5-large                                            model     150ebc2c4b72291e770f58e6057481c8d2ed331a         3.0G        4 5 months ago  main ~/.cache/huggingface/hub/models--t5-large/snapshots/150ebc2c4b72291e770f58e6057481c8d2ed331a
```

</ExampleCodeBlock>








</div></div>

### CachedRepoInfo[[huggingface_hub.CachedRepoInfo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.CachedRepoInfo</name><anchor>huggingface_hub.CachedRepoInfo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_cache_manager.py#L176</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "repo_type", "val": ": typing.Literal['model', 'dataset', 'space']"}, {"name": "repo_path", "val": ": Path"}, {"name": "size_on_disk", "val": ": int"}, {"name": "nb_files", "val": ": int"}, {"name": "revisions", "val": ": frozenset"}, {"name": "last_accessed", "val": ": float"}, {"name": "last_modified", "val": ": float"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  Repo id of the repo on the Hub. Example: `"google/fleurs"`.
- **repo_type** (`Literal["dataset", "model", "space"]`) --
  Type of the cached repo.
- **repo_path** (`Path`) --
  Local path to the cached repo.
- **size_on_disk** (`int`) --
  Sum of the blob file sizes in the cached repo.
- **nb_files** (`int`) --
  Total number of blob files in the cached repo.
- **revisions** (`frozenset[CachedRevisionInfo]`) --
  Set of [CachedRevisionInfo](/docs/huggingface_hub/main/en/package_reference/cache#huggingface_hub.CachedRevisionInfo) describing all revisions cached in the repo.
- **last_accessed** (`float`) --
  Timestamp of the last time a blob file of the repo has been accessed.
- **last_modified** (`float`) --
  Timestamp of the last time a blob file of the repo has been modified/created.</paramsdesc><paramgroups>0</paramgroups></docstring>
Frozen data structure holding information about a cached repository.



> [!WARNING]
> `size_on_disk` is not necessarily the sum of all revision sizes because of
> duplicated files. Besides, only blobs are taken into account, not the (negligible)
> size of folders and symlinks.

> [!WARNING]
> `last_accessed` and `last_modified` reliability can depend on the OS you are using.
> See [python documentation](https://docs.python.org/3/library/os.html#os.stat_result)
> for more details.
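To see why deduplication matters, here is a pure-Python sketch (illustrative numbers, not real API objects) of how summing per-revision sizes double-counts shared blobs, while `size_on_disk` counts each unique blob once:

```python
# Illustrative numbers only: two revisions sharing one blob. Summing the
# per-revision sizes double-counts the shared blob, while deduplicating by
# blob path (what `size_on_disk` does) counts it once.
rev1 = {"blobs/aaa": 100, "blobs/bbb": 50}   # revision 1
rev2 = {"blobs/aaa": 100, "blobs/ccc": 25}   # revision 2 re-uses blob "aaa"

naive_total = sum(rev1.values()) + sum(rev2.values())
unique_blobs = {**rev1, **rev2}              # keyed by blob path
size_on_disk = sum(unique_blobs.values())

print(naive_total, size_on_disk)  # → 275 175
```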



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>size_on_disk_str</name><anchor>huggingface_hub.CachedRepoInfo.size_on_disk_str</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_cache_manager.py#L238</source><parameters>[]</parameters></docstring>

(property) Sum of the blob file sizes as a human-readable string.

Example: "42.2K".


</div>
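For illustration, a hypothetical re-implementation of this human-readable formatting (the real helper lives in `huggingface_hub`'s internals and may differ in edge cases):

```python
def format_size(num: float) -> str:
    """Hypothetical sketch of the formatting behind `size_on_disk_str`:
    divide by 1000 until the value fits, then append the unit suffix."""
    for unit in ["", "K", "M", "G", "T"]:
        if abs(num) < 1000.0:
            return f"{num:.1f}{unit}"
        num /= 1000.0
    return f"{num:.1f}P"

print(format_size(42_200))  # → 42.2K
```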
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>refs</name><anchor>huggingface_hub.CachedRepoInfo.refs</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_cache_manager.py#L252</source><parameters>[]</parameters></docstring>

(property) Mapping between `refs` and revision data structures.
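As a sketch (plain tuples standing in for `CachedRevisionInfo` objects), the mapping can be thought of as inverting each revision's ref set:

```python
# Sketch: the `refs` property inverts each revision's ref set into a
# ref-name -> revision map. Detached revisions appear under no ref.
revisions = [
    ("e2da8e2f", frozenset({"main", "v1.0"})),
    ("70a8a7d3", frozenset()),               # detached: no refs
]
refs = {ref: commit for commit, ref_set in revisions for ref in ref_set}
print(sorted(refs.items()))  # → [('main', 'e2da8e2f'), ('v1.0', 'e2da8e2f')]
```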


</div></div>

### CachedRevisionInfo[[huggingface_hub.CachedRevisionInfo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.CachedRevisionInfo</name><anchor>huggingface_hub.CachedRevisionInfo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_cache_manager.py#L105</source><parameters>[{"name": "commit_hash", "val": ": str"}, {"name": "snapshot_path", "val": ": Path"}, {"name": "size_on_disk", "val": ": int"}, {"name": "files", "val": ": frozenset"}, {"name": "refs", "val": ": frozenset"}, {"name": "last_modified", "val": ": float"}]</parameters><paramsdesc>- **commit_hash** (`str`) --
  Hash of the revision (unique).
  Example: `"9338f7b671827df886678df2bdd7cc7b4f36dffd"`.
- **snapshot_path** (`Path`) --
  Path to the revision directory in the `snapshots` folder. It mirrors the
  exact tree structure of the repo on the Hub.
- **files** (`frozenset[CachedFileInfo]`) --
  Set of [CachedFileInfo](/docs/huggingface_hub/main/en/package_reference/cache#huggingface_hub.CachedFileInfo) describing all files contained in the snapshot.
- **refs** (`frozenset[str]`) --
  Set of `refs` pointing to this revision. If the revision has no `refs`, it
  is considered detached.
  Example: `{"main", "2.4.0"}` or `{"refs/pr/1"}`.
- **size_on_disk** (`int`) --
  Sum of the blob file sizes that are symlink-ed by the revision.
- **last_modified** (`float`) --
  Timestamp of the last time the revision has been created/modified.</paramsdesc><paramgroups>0</paramgroups></docstring>
Frozen data structure holding information about a revision.

A revision corresponds to a folder in the `snapshots` folder and mirrors the
exact tree structure of the repo on the Hub, but contains only symlinks. A
revision can either be referenced by one or more `refs` or be "detached" (no refs).



> [!WARNING]
> `last_accessed` cannot be determined correctly on a single revision as blob files
> are shared across revisions.

> [!WARNING]
> `size_on_disk` is not necessarily the sum of all file sizes because of possible
> duplicated files. Besides, only blobs are taken into account, not the (negligible)
> size of folders and symlinks.
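A minimal sketch of picking out detached revisions, with plain tuples standing in for the objects returned by `scan_cache_dir()`:

```python
# Sketch: a revision with an empty `refs` set is "detached". Plain tuples
# stand in for the CachedRevisionInfo objects returned by scan_cache_dir().
revisions = [
    ("e2da8e2f", frozenset({"main"})),
    ("70a8a7d3", frozenset()),              # no refs -> detached
    ("a9723ea7", frozenset({"refs/pr/1"})),
]
detached = [commit for commit, refs in revisions if not refs]
print(detached)  # → ['70a8a7d3']
```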



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>size_on_disk_str</name><anchor>huggingface_hub.CachedRevisionInfo.size_on_disk_str</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_cache_manager.py#L158</source><parameters>[]</parameters></docstring>

(property) Sum of the blob file sizes as a human-readable string.

Example: "42.2K".


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>nb_files</name><anchor>huggingface_hub.CachedRevisionInfo.nb_files</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_cache_manager.py#L167</source><parameters>[]</parameters></docstring>

(property) Total number of files in the revision.


</div></div>

### CachedFileInfo[[huggingface_hub.CachedFileInfo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.CachedFileInfo</name><anchor>huggingface_hub.CachedFileInfo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_cache_manager.py#L41</source><parameters>[{"name": "file_name", "val": ": str"}, {"name": "file_path", "val": ": Path"}, {"name": "blob_path", "val": ": Path"}, {"name": "size_on_disk", "val": ": int"}, {"name": "blob_last_accessed", "val": ": float"}, {"name": "blob_last_modified", "val": ": float"}]</parameters><paramsdesc>- **file_name** (`str`) --
  Name of the file. Example: `config.json`.
- **file_path** (`Path`) --
  Path of the file in the `snapshots` directory. The file path is a symlink
  referring to a blob in the `blobs` folder.
- **blob_path** (`Path`) --
  Path of the blob file. This is equivalent to `file_path.resolve()`.
- **size_on_disk** (`int`) --
  Size of the blob file in bytes.
- **blob_last_accessed** (`float`) --
  Timestamp of the last time the blob file has been accessed (from any
  revision).
- **blob_last_modified** (`float`) --
  Timestamp of the last time the blob file has been modified/created.</paramsdesc><paramgroups>0</paramgroups></docstring>
Frozen data structure holding information about a single cached file.



> [!WARNING]
> `blob_last_accessed` and `blob_last_modified` reliability can depend on the OS you
> are using. See [python documentation](https://docs.python.org/3/library/os.html#os.stat_result)
> for more details.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>size_on_disk_str</name><anchor>huggingface_hub.CachedFileInfo.size_on_disk_str</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_cache_manager.py#L94</source><parameters>[]</parameters></docstring>

(property) Size of the blob file as a human-readable string.

Example: "42.2K".


</div></div>

### DeleteCacheStrategy[[huggingface_hub.DeleteCacheStrategy]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.DeleteCacheStrategy</name><anchor>huggingface_hub.DeleteCacheStrategy</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_cache_manager.py#L261</source><parameters>[{"name": "expected_freed_size", "val": ": int"}, {"name": "blobs", "val": ": frozenset"}, {"name": "refs", "val": ": frozenset"}, {"name": "repos", "val": ": frozenset"}, {"name": "snapshots", "val": ": frozenset"}]</parameters><paramsdesc>- **expected_freed_size** (`float`) --
  Expected freed size once strategy is executed.
- **blobs** (`frozenset[Path]`) --
  Set of blob file paths to be deleted.
- **refs** (`frozenset[Path]`) --
  Set of reference file paths to be deleted.
- **repos** (`frozenset[Path]`) --
  Set of entire repo paths to be deleted.
- **snapshots** (`frozenset[Path]`) --
  Set of snapshots to be deleted (directory of symlinks).</paramsdesc><paramgroups>0</paramgroups></docstring>
Frozen data structure holding the strategy to delete cached revisions.

This object is not meant to be instantiated programmatically but to be returned by
[delete_revisions()](/docs/huggingface_hub/main/en/package_reference/cache#huggingface_hub.HFCacheInfo.delete_revisions). See documentation for usage example.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>expected_freed_size_str</name><anchor>huggingface_hub.DeleteCacheStrategy.expected_freed_size_str</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_cache_manager.py#L286</source><parameters>[]</parameters></docstring>

(property) Expected size that will be freed as a human-readable string.

Example: "42.2K".


</div></div>

## Exceptions

### CorruptedCacheException[[huggingface_hub.CorruptedCacheException]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.CorruptedCacheException</name><anchor>huggingface_hub.CorruptedCacheException</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L22</source><parameters>""</parameters></docstring>
Exception for any unexpected structure in the Hugging Face cache-system.

</div>

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/package_reference/cache.md" />

### Jobs
https://huggingface.co/docs/huggingface_hub/main/package_reference/jobs.md

# Jobs

Check the [HfApi](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi) documentation page for the reference of methods to manage your Jobs on the Hub.

- Run a Job: [run_job()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.run_job)
- Fetch logs: [fetch_job_logs()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.fetch_job_logs)
- Inspect Job: [inspect_job()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.inspect_job)
- List Jobs: [list_jobs()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.list_jobs)
- Cancel Job: [cancel_job()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.cancel_job)
- Run a UV Job: [run_uv_job()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.run_uv_job)
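A common pattern is to poll a Job until it reaches a terminal stage. The sketch below replaces the real `inspect_job()` calls with a fake stage stream so it runs offline; the terminal stages mirror [JobStage](/docs/huggingface_hub/main/en/package_reference/jobs#huggingface_hub.JobStage):

```python
# Hedged sketch of a poll-until-terminal loop around the methods above.
# `fake_stage_stream` stands in for repeated HfApi.inspect_job() calls.
TERMINAL_STAGES = {"COMPLETED", "CANCELED", "ERROR", "DELETED"}
fake_stage_stream = iter(["RUNNING", "RUNNING", "COMPLETED"])

stage = next(fake_stage_stream)
while stage not in TERMINAL_STAGES:
    # real code: stage = api.inspect_job(job_id=job.id).status.stage
    # (with a sleep between calls to avoid hammering the API)
    stage = next(fake_stage_stream)
print(stage)  # → COMPLETED
```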

## Data structures

### JobInfo[[huggingface_hub.JobInfo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.JobInfo</name><anchor>huggingface_hub.JobInfo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_jobs_api.py#L59</source><parameters>[{"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **id** (`str`) --
  Job ID.
- **created_at** (`datetime` or `None`) --
  When the Job was created.
- **docker_image** (`str` or `None`) --
  The Docker image from Docker Hub used for the Job.
  Can be None if space_id is present instead.
- **space_id** (`str` or `None`) --
  The ID of the Hugging Face Space used for the Job.
  Can be None if docker_image is present instead.
- **command** (`list[str]` or `None`) --
  Command of the Job, e.g. `["python", "-c", "print('hello world')"]`
- **arguments** (`list[str]` or `None`) --
  Arguments passed to the command
- **environment** (`dict[str, str]` or `None`) --
  Environment variables of the Job as a dictionary.
- **secrets** (`dict[str, str]` or `None`) --
  Secret environment variables of the Job (encrypted).
- **flavor** (`str` or `None`) --
  Flavor for the hardware, as in Hugging Face Spaces. See [SpaceHardware](/docs/huggingface_hub/main/en/package_reference/space_runtime#huggingface_hub.SpaceHardware) for possible values.
  E.g. `"cpu-basic"`.
- **status** (`JobStatus` or `None`) --
  Status of the Job, e.g. `JobStatus(stage="RUNNING", message=None)`
  See [JobStage](/docs/huggingface_hub/main/en/package_reference/jobs#huggingface_hub.JobStage) for possible stage values.
- **owner** (`JobOwner` or `None`) --
  Owner of the Job, e.g. `JobOwner(id="5e9ecfc04957053f60648a3e", name="lhoestq", type="user")`</paramsdesc><paramgroups>0</paramgroups></docstring>

Contains information about a Job.



<ExampleCodeBlock anchor="huggingface_hub.JobInfo.example">

Example:

```python
>>> from huggingface_hub import run_job
>>> job = run_job(
...     image="python:3.12",
...     command=["python", "-c", "print('Hello from the cloud!')"]
... )
>>> job
JobInfo(id='687fb701029421ae5549d998', created_at=datetime.datetime(2025, 7, 22, 16, 6, 25, 79000, tzinfo=datetime.timezone.utc), docker_image='python:3.12', space_id=None, command=['python', '-c', "print('Hello from the cloud!')"], arguments=[], environment={}, secrets={}, flavor='cpu-basic', status=JobStatus(stage='RUNNING', message=None), owner=JobOwner(id='5e9ecfc04957053f60648a3e', name='lhoestq', type='user'), endpoint='https://huggingface.co', url='https://huggingface.co/jobs/lhoestq/687fb701029421ae5549d998')
>>> job.id
'687fb701029421ae5549d998'
>>> job.url
'https://huggingface.co/jobs/lhoestq/687fb701029421ae5549d998'
>>> job.status.stage
'RUNNING'
```

</ExampleCodeBlock>


</div>

### JobOwner[[huggingface_hub.JobOwner]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.JobOwner</name><anchor>huggingface_hub.JobOwner</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_jobs_api.py#L52</source><parameters>[{"name": "id", "val": ": str"}, {"name": "name", "val": ": str"}, {"name": "type", "val": ": str"}]</parameters></docstring>


</div>

### JobStage[[huggingface_hub.JobStage]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.JobStage</name><anchor>huggingface_hub.JobStage</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_jobs_api.py#L25</source><parameters>[{"name": "value", "val": ""}, {"name": "names", "val": " = None"}, {"name": "module", "val": " = None"}, {"name": "qualname", "val": " = None"}, {"name": "type", "val": " = None"}, {"name": "start", "val": " = 1"}]</parameters></docstring>

Enumeration of the possible stages of a Job on the Hub.

<ExampleCodeBlock anchor="huggingface_hub.JobStage.example">

Value can be compared to a string:
```py
assert JobStage.COMPLETED == "COMPLETED"
```

</ExampleCodeBlock>
Possible values are: `COMPLETED`, `CANCELED`, `ERROR`, `DELETED`, `RUNNING`.
Taken from https://github.com/huggingface/moon-landing/blob/main/server/job_types/JobInfo.ts#L61 (private URL).


</div>

### JobStatus[[huggingface_hub.JobStatus]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.JobStatus</name><anchor>huggingface_hub.JobStatus</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_jobs_api.py#L46</source><parameters>[{"name": "stage", "val": ": JobStage"}, {"name": "message", "val": ": typing.Optional[str]"}]</parameters></docstring>


</div>

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/package_reference/jobs.md" />

### Repository Cards
https://huggingface.co/docs/huggingface_hub/main/package_reference/cards.md

# Repository Cards

The huggingface_hub library provides a Python interface to create, share, and update Model/Dataset Cards.
Visit the [dedicated documentation page](https://huggingface.co/docs/hub/models-cards) for a deeper view of what
Model Cards on the Hub are, and how they work under the hood. You can also check out our [Model Cards guide](../how-to-model-cards) to
get a feel for how you would use these utilities in your own projects.

## Repo Card[[huggingface_hub.RepoCard]]

The `RepoCard` object is the parent class of [ModelCard](/docs/huggingface_hub/main/en/package_reference/cards#huggingface_hub.ModelCard), [DatasetCard](/docs/huggingface_hub/main/en/package_reference/cards#huggingface_hub.DatasetCard) and `SpaceCard`.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.RepoCard</name><anchor>huggingface_hub.RepoCard</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard.py#L37</source><parameters>[{"name": "content", "val": ": str"}, {"name": "ignore_metadata_errors", "val": ": bool = False"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__init__</name><anchor>huggingface_hub.RepoCard.__init__</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard.py#L42</source><parameters>[{"name": "content", "val": ": str"}, {"name": "ignore_metadata_errors", "val": ": bool = False"}]</parameters><paramsdesc>- **content** (`str`) -- The content of the Markdown file.</paramsdesc><paramgroups>0</paramgroups></docstring>
Initialize a RepoCard from string content. The content should be a Markdown
document with a YAML block at the beginning and a Markdown body.



<ExampleCodeBlock anchor="huggingface_hub.RepoCard.__init__.example">

Example:
```python
>>> from huggingface_hub.repocard import RepoCard
>>> text = '''
... ---
... language: en
... license: mit
... ---
...
... # My repo
... '''
>>> card = RepoCard(text)
>>> card.data.to_dict()
{'language': 'en', 'license': 'mit'}
>>> card.text
'\n# My repo\n'

```

</ExampleCodeBlock>
> [!TIP]
> Raises the following error:
>
>     - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
>       when the content of the repo card metadata is not a dictionary.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_template</name><anchor>huggingface_hub.RepoCard.from_template</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard.py#L289</source><parameters>[{"name": "card_data", "val": ": CardData"}, {"name": "template_path", "val": ": typing.Optional[str] = None"}, {"name": "template_str", "val": ": typing.Optional[str] = None"}, {"name": "**template_kwargs", "val": ""}]</parameters><paramsdesc>- **card_data** (`huggingface_hub.CardData`) --
  A huggingface_hub.CardData instance containing the metadata you want to include in the YAML
  header of the repo card on the Hugging Face Hub.
- **template_path** (`str`, *optional*) --
  A path to a markdown file with optional Jinja template variables that can be filled
  in with `template_kwargs`. Defaults to the default template.</paramsdesc><paramgroups>0</paramgroups><rettype>[huggingface_hub.repocard.RepoCard](/docs/huggingface_hub/main/en/package_reference/cards#huggingface_hub.RepoCard)</rettype><retdesc>A RepoCard instance with the specified card data and content from the
template.</retdesc></docstring>
Initialize a RepoCard from a template. By default, it uses the default template.

Templates are Jinja2 templates that can be customized by passing keyword arguments.
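As a rough stand-in for what `from_template` does (the real method renders a Jinja2 template against the official default; `string.Template` is used here only for illustration):

```python
from string import Template

# Minimal stand-in for the Jinja2 rendering done by from_template(): the card
# data becomes the YAML header and template kwargs fill the Markdown body.
# (The real method uses Jinja2 and the default template, not string.Template.)
template = Template("---\nlicense: $license\n---\n\n# $model_name\n")
content = template.substitute(license="mit", model_name="my-cool-model")
print(content)
```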








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load</name><anchor>huggingface_hub.RepoCard.load</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard.py#L135</source><parameters>[{"name": "repo_id_or_path", "val": ": typing.Union[str, pathlib.Path]"}, {"name": "repo_type", "val": ": typing.Optional[str] = None"}, {"name": "token", "val": ": typing.Optional[str] = None"}, {"name": "ignore_metadata_errors", "val": ": bool = False"}]</parameters><paramsdesc>- **repo_id_or_path** (`Union[str, Path]`) --
  The repo ID associated with a Hugging Face Hub repo or a local filepath.
- **repo_type** (`str`, *optional*) --
  The type of the Hugging Face repo to load from. Defaults to None, which will use "model". Other options
  are "dataset" and "space". Not used when loading from a local filepath. If this is called from a child
  class, the default value will be the child class's `repo_type`.
- **token** (`str`, *optional*) --
  Authentication token, obtained with the `huggingface_hub.HfApi.login` method. Will default to the stored token.
- **ignore_metadata_errors** (`bool`, *optional*, defaults to `False`) --
  If True, errors while parsing the metadata section will be ignored. Some information might be lost during
  the process. Use it at your own risk.</paramsdesc><paramgroups>0</paramgroups><rettype>[huggingface_hub.repocard.RepoCard](/docs/huggingface_hub/main/en/package_reference/cards#huggingface_hub.RepoCard)</rettype><retdesc>The RepoCard (or subclass) initialized from the repo's
README.md file or filepath.</retdesc></docstring>
Initialize a RepoCard from a Hugging Face Hub repo's README.md or a local filepath.







<ExampleCodeBlock anchor="huggingface_hub.RepoCard.load.example">

Example:
```python
>>> from huggingface_hub.repocard import RepoCard
>>> card = RepoCard.load("nateraw/food")
>>> assert card.data.tags == ["generated_from_trainer", "image-classification", "pytorch"]

```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>push_to_hub</name><anchor>huggingface_hub.RepoCard.push_to_hub</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard.py#L226</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "token", "val": ": typing.Optional[str] = None"}, {"name": "repo_type", "val": ": typing.Optional[str] = None"}, {"name": "commit_message", "val": ": typing.Optional[str] = None"}, {"name": "commit_description", "val": ": typing.Optional[str] = None"}, {"name": "revision", "val": ": typing.Optional[str] = None"}, {"name": "create_pr", "val": ": typing.Optional[bool] = None"}, {"name": "parent_commit", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The repo ID of the Hugging Face Hub repo to push to. Example: "nateraw/food".
- **token** (`str`, *optional*) --
  Authentication token, obtained with the `huggingface_hub.HfApi.login` method. Will default to
  the stored token.
- **repo_type** (`str`, *optional*, defaults to "model") --
  The type of Hugging Face repo to push to. Options are "model", "dataset", and "space". If this
  function is called by a child class, it will default to the child class's `repo_type`.
- **commit_message** (`str`, *optional*) --
  The summary / title / first line of the generated commit.
- **commit_description** (`str`, *optional*) --
  The description of the generated commit.
- **revision** (`str`, *optional*) --
  The git revision to commit from. Defaults to the head of the `"main"` branch.
- **create_pr** (`bool`, *optional*) --
  Whether or not to create a Pull Request with this commit. Defaults to `False`.
- **parent_commit** (`str`, *optional*) --
  The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported.
  If specified and `create_pr` is `False`, the commit will fail if `revision` does not point to `parent_commit`.
  If specified and `create_pr` is `True`, the pull request will be created from `parent_commit`.
  Specifying `parent_commit` ensures the repo has not changed before committing the changes, and can be
  especially useful if the repo is updated / committed to concurrently.
Push a RepoCard to a Hugging Face Hub repo.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save</name><anchor>huggingface_hub.RepoCard.save</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard.py#L115</source><parameters>[{"name": "filepath", "val": ": typing.Union[pathlib.Path, str]"}]</parameters><paramsdesc>- **filepath** (`Union[Path, str]`) -- Filepath to the markdown file to save.</paramsdesc><paramgroups>0</paramgroups></docstring>
Save a RepoCard to a file.



<ExampleCodeBlock anchor="huggingface_hub.RepoCard.save.example">

Example:
```python
>>> from huggingface_hub.repocard import RepoCard
>>> card = RepoCard("---\nlanguage: en\n---\n# This is a test repo card")
>>> card.save("/tmp/test.md")

```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>validate</name><anchor>huggingface_hub.RepoCard.validate</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard.py#L189</source><parameters>[{"name": "repo_type", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **repo_type** (`str`, *optional*, defaults to "model") --
  The type of Hugging Face repo to push to. Options are "model", "dataset", and "space".
  If this function is called from a child class, the default will be the child class's `repo_type`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Validates the card against Hugging Face Hub's card validation logic.
Using this function requires access to the internet, so it is only called
internally by [huggingface_hub.repocard.RepoCard.push_to_hub()](/docs/huggingface_hub/main/en/package_reference/cards#huggingface_hub.RepoCard.push_to_hub).



> [!TIP]
> Raises the following errors:
>
>     - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
>       if the card fails validation checks.
>     - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError)
>       if the request to the Hub API fails for any other reason.


</div></div>

## Card Data[[huggingface_hub.CardData]]

The [CardData](/docs/huggingface_hub/main/en/package_reference/cards#huggingface_hub.CardData) object is the parent class of [ModelCardData](/docs/huggingface_hub/main/en/package_reference/cards#huggingface_hub.ModelCardData) and [DatasetCardData](/docs/huggingface_hub/main/en/package_reference/cards#huggingface_hub.DatasetCardData).

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.CardData</name><anchor>huggingface_hub.CardData</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard_data.py#L165</source><parameters>[{"name": "ignore_metadata_errors", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
Structure containing metadata from a RepoCard.

[CardData](/docs/huggingface_hub/main/en/package_reference/cards#huggingface_hub.CardData) is the parent class of [ModelCardData](/docs/huggingface_hub/main/en/package_reference/cards#huggingface_hub.ModelCardData) and [DatasetCardData](/docs/huggingface_hub/main/en/package_reference/cards#huggingface_hub.DatasetCardData).

Metadata can be exported as a dictionary or YAML. Export can be customized to alter the representation of the data
(example: flatten evaluation results). `CardData` behaves as a dictionary (values can be gotten, popped, and set) but
does not inherit from `dict`, to allow for this export step.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get</name><anchor>huggingface_hub.CardData.get</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard_data.py#L222</source><parameters>[{"name": "key", "val": ": str"}, {"name": "default", "val": ": typing.Any = None"}]</parameters></docstring>
Get value for a given metadata key.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>pop</name><anchor>huggingface_hub.CardData.pop</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard_data.py#L227</source><parameters>[{"name": "key", "val": ": str"}, {"name": "default", "val": ": typing.Any = None"}]</parameters></docstring>
Pop value for a given metadata key.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>to_dict</name><anchor>huggingface_hub.CardData.to_dict</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard_data.py#L178</source><parameters>[]</parameters><rettype>`dict`</rettype><retdesc>CardData represented as a dictionary ready to be dumped to a YAML
block for inclusion in a README.md file.</retdesc></docstring>
Converts CardData to a dict.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>to_yaml</name><anchor>huggingface_hub.CardData.to_yaml</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard_data.py#L198</source><parameters>[{"name": "line_break", "val": " = None"}, {"name": "original_order", "val": ": typing.Optional[list[str]] = None"}]</parameters><paramsdesc>- **line_break** (str, *optional*) --
  The line break to use when dumping to yaml.</paramsdesc><paramgroups>0</paramgroups><rettype>`str`</rettype><retdesc>CardData represented as a YAML block.</retdesc></docstring>
Dumps CardData to a YAML block for inclusion in a README.md file.
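The returned string is the bare YAML block; when a card is saved, that block sits between `---` delimiters at the top of the README.md. A small sketch (the README body is illustrative):

```python
from huggingface_hub import ModelCardData

card_data = ModelCardData(language="en", license="mit")
yaml_block = card_data.to_yaml()

# When written to a card, the block is enclosed between `---` markers,
# followed by the markdown body of the README.
readme = f"---\n{yaml_block}\n---\n\n# My model\n"
```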








</div></div>

## Model Cards

### ModelCard[[huggingface_hub.ModelCard]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ModelCard</name><anchor>huggingface_hub.ModelCard</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard.py#L333</source><parameters>[{"name": "content", "val": ": str"}, {"name": "ignore_metadata_errors", "val": ": bool = False"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_template</name><anchor>huggingface_hub.ModelCard.from_template</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard.py#L338</source><parameters>[{"name": "card_data", "val": ": ModelCardData"}, {"name": "template_path", "val": ": typing.Optional[str] = None"}, {"name": "template_str", "val": ": typing.Optional[str] = None"}, {"name": "**template_kwargs", "val": ""}]</parameters><paramsdesc>- **card_data** (`huggingface_hub.ModelCardData`) --
  A huggingface_hub.ModelCardData instance containing the metadata you want to include in the YAML
  header of the model card on the Hugging Face Hub.
- **template_path** (`str`, *optional*) --
  A path to a markdown file with optional Jinja template variables that can be filled
  in with `template_kwargs`. Defaults to the default template.</paramsdesc><paramgroups>0</paramgroups><rettype>[huggingface_hub.ModelCard](/docs/huggingface_hub/main/en/package_reference/cards#huggingface_hub.ModelCard)</rettype><retdesc>A ModelCard instance with the specified card data and content from the
template.</retdesc></docstring>
Initialize a ModelCard from a template. By default, it uses the default template, which can be found here:
https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md

Templates are Jinja2 templates that can be customized by passing keyword arguments.







<ExampleCodeBlock anchor="huggingface_hub.ModelCard.from_template.example">

Example:
```python
>>> from huggingface_hub import ModelCard, ModelCardData, EvalResult

>>> # Using the Default Template
>>> card_data = ModelCardData(
...     language='en',
...     license='mit',
...     library_name='timm',
...     tags=['image-classification', 'resnet'],
...     datasets=['beans'],
...     metrics=['accuracy'],
... )
>>> card = ModelCard.from_template(
...     card_data,
...     model_description='This model does x + y...'
... )

>>> # Including Evaluation Results
>>> card_data = ModelCardData(
...     language='en',
...     tags=['image-classification', 'resnet'],
...     eval_results=[
...         EvalResult(
...             task_type='image-classification',
...             dataset_type='beans',
...             dataset_name='Beans',
...             metric_type='accuracy',
...             metric_value=0.9,
...         ),
...     ],
...     model_name='my-cool-model',
... )
>>> card = ModelCard.from_template(card_data)

>>> # Using a Custom Template
>>> card_data = ModelCardData(
...     language='en',
...     tags=['image-classification', 'resnet']
... )
>>> card = ModelCard.from_template(
...     card_data=card_data,
...     template_path='./src/huggingface_hub/templates/modelcard_template.md',
...     custom_template_var='custom value',  # will be replaced in template if it exists
... )

```

</ExampleCodeBlock>


</div></div>

### ModelCardData[[huggingface_hub.ModelCardData]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ModelCardData</name><anchor>huggingface_hub.ModelCardData</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard_data.py#L265</source><parameters>[{"name": "base_model", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "datasets", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "eval_results", "val": ": typing.Optional[list[huggingface_hub.repocard_data.EvalResult]] = None"}, {"name": "language", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "library_name", "val": ": typing.Optional[str] = None"}, {"name": "license", "val": ": typing.Optional[str] = None"}, {"name": "license_name", "val": ": typing.Optional[str] = None"}, {"name": "license_link", "val": ": typing.Optional[str] = None"}, {"name": "metrics", "val": ": typing.Optional[list[str]] = None"}, {"name": "model_name", "val": ": typing.Optional[str] = None"}, {"name": "pipeline_tag", "val": ": typing.Optional[str] = None"}, {"name": "tags", "val": ": typing.Optional[list[str]] = None"}, {"name": "ignore_metadata_errors", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **base_model** (`str` or `list[str]`, *optional*) --
  The identifier of the base model from which the model derives. This is applicable for example if your model is a
  fine-tune or adapter of an existing model. The value must be the ID of a model on the Hub (or a list of IDs
  if your model derives from multiple models). Defaults to None.
- **datasets** (`Union[str, list[str]]`, *optional*) --
  Dataset or list of datasets that were used to train this model. Should be a dataset ID
  found on https://hf.co/datasets. Defaults to None.
- **eval_results** (`Union[list[EvalResult], EvalResult]`, *optional*) --
  List of `huggingface_hub.EvalResult` that define evaluation results of the model. If provided,
  `model_name` is used as the name on PapersWithCode's leaderboards. Defaults to `None`.
- **language** (`Union[str, list[str]]`, *optional*) --
  Language of model's training data or metadata. It must be an ISO 639-1, 639-2 or
  639-3 code (two/three letters), or a special value like "code", "multilingual". Defaults to `None`.
- **library_name** (`str`, *optional*) --
  Name of library used by this model. Example: keras or any library from
  https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/model-libraries.ts.
  Defaults to None.
- **license** (`str`, *optional*) --
  License of this model. Example: apache-2.0 or any license from
  https://huggingface.co/docs/hub/repositories-licenses. Defaults to None.
- **license_name** (`str`, *optional*) --
  Name of the license of this model. Defaults to None. To be used in conjunction with `license_link`.
  Common licenses (Apache-2.0, MIT, CC-BY-SA-4.0) do not need a name. In that case, use `license` instead.
- **license_link** (`str`, *optional*) --
  Link to the license of this model. Defaults to None. To be used in conjunction with `license_name`.
  Common licenses (Apache-2.0, MIT, CC-BY-SA-4.0) do not need a link. In that case, use `license` instead.
- **metrics** (`list[str]`, *optional*) --
  List of metrics used to evaluate this model. Should be a metric name that can be found
  at https://hf.co/metrics. Example: 'accuracy'. Defaults to None.
- **model_name** (`str`, *optional*) --
  A name for this model. It is used along with
  `eval_results` to construct the `model-index` within the card's metadata. The name
  you supply here is what will be used on PapersWithCode's leaderboards. If None is provided
  then the repo name is used as a default. Defaults to None.
- **pipeline_tag** (`str`, *optional*) --
  The pipeline tag associated with the model. Example: "text-classification".
- **tags** (`list[str]`, *optional*) --
  List of tags to add to your model that can be used when filtering on the Hugging
  Face Hub. Defaults to None.
- **ignore_metadata_errors** (`bool`) --
  If True, errors while parsing the metadata section will be ignored. Some information might be lost during
  the process. Use it at your own risk.
- **kwargs** (`dict`, *optional*) --
  Additional metadata that will be added to the model card. Defaults to None.</paramsdesc><paramgroups>0</paramgroups></docstring>
Model Card Metadata that is used by the Hugging Face Hub when included at the top of your README.md.



<ExampleCodeBlock anchor="huggingface_hub.ModelCardData.example">

Example:
```python
>>> from huggingface_hub import ModelCardData
>>> card_data = ModelCardData(
...     language="en",
...     license="mit",
...     library_name="timm",
...     tags=['image-classification', 'resnet'],
... )
>>> card_data.to_dict()
{'language': 'en', 'license': 'mit', 'library_name': 'timm', 'tags': ['image-classification', 'resnet']}

```

</ExampleCodeBlock>


</div>

## Dataset Cards

Dataset cards are also known as Data Cards in the ML community.

### DatasetCard[[huggingface_hub.DatasetCard]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.DatasetCard</name><anchor>huggingface_hub.DatasetCard</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard.py#L414</source><parameters>[{"name": "content", "val": ": str"}, {"name": "ignore_metadata_errors", "val": ": bool = False"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_template</name><anchor>huggingface_hub.DatasetCard.from_template</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard.py#L419</source><parameters>[{"name": "card_data", "val": ": DatasetCardData"}, {"name": "template_path", "val": ": typing.Optional[str] = None"}, {"name": "template_str", "val": ": typing.Optional[str] = None"}, {"name": "**template_kwargs", "val": ""}]</parameters><paramsdesc>- **card_data** (`huggingface_hub.DatasetCardData`) --
  A huggingface_hub.DatasetCardData instance containing the metadata you want to include in the YAML
  header of the dataset card on the Hugging Face Hub.
- **template_path** (`str`, *optional*) --
  A path to a markdown file with optional Jinja template variables that can be filled
  in with `template_kwargs`. Defaults to the default template.</paramsdesc><paramgroups>0</paramgroups><rettype>[huggingface_hub.DatasetCard](/docs/huggingface_hub/main/en/package_reference/cards#huggingface_hub.DatasetCard)</rettype><retdesc>A DatasetCard instance with the specified card data and content from the
template.</retdesc></docstring>
Initialize a DatasetCard from a template. By default, it uses the default template, which can be found here:
https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md

Templates are Jinja2 templates that can be customized by passing keyword arguments.







<ExampleCodeBlock anchor="huggingface_hub.DatasetCard.from_template.example">

Example:
```python
>>> from huggingface_hub import DatasetCard, DatasetCardData

>>> # Using the Default Template
>>> card_data = DatasetCardData(
...     language='en',
...     license='mit',
...     annotations_creators='crowdsourced',
...     task_categories=['text-classification'],
...     task_ids=['sentiment-classification', 'text-scoring'],
...     multilinguality='monolingual',
...     pretty_name='My Text Classification Dataset',
... )
>>> card = DatasetCard.from_template(
...     card_data,
...     pretty_name=card_data.pretty_name,
... )

>>> # Using a Custom Template
>>> card_data = DatasetCardData(
...     language='en',
...     license='mit',
... )
>>> card = DatasetCard.from_template(
...     card_data=card_data,
...     template_path='./src/huggingface_hub/templates/datasetcard_template.md',
...     custom_template_var='custom value',  # will be replaced in template if it exists
... )

```

</ExampleCodeBlock>


</div></div>

### DatasetCardData[[huggingface_hub.DatasetCardData]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.DatasetCardData</name><anchor>huggingface_hub.DatasetCardData</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard_data.py#L394</source><parameters>[{"name": "language", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "license", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "annotations_creators", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "language_creators", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "multilinguality", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "size_categories", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "source_datasets", "val": ": typing.Optional[list[str]] = None"}, {"name": "task_categories", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "task_ids", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "paperswithcode_id", "val": ": typing.Optional[str] = None"}, {"name": "pretty_name", "val": ": typing.Optional[str] = None"}, {"name": "train_eval_index", "val": ": typing.Optional[dict] = None"}, {"name": "config_names", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "ignore_metadata_errors", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **language** (`list[str]`, *optional*) --
  Language of dataset's data or metadata. It must be an ISO 639-1, 639-2 or
  639-3 code (two/three letters), or a special value like "code", "multilingual".
- **license** (`Union[str, list[str]]`, *optional*) --
  License(s) of this dataset. Example: apache-2.0 or any license from
  https://huggingface.co/docs/hub/repositories-licenses.
- **annotations_creators** (`Union[str, list[str]]`, *optional*) --
  How the annotations for the dataset were created.
  Options are: 'found', 'crowdsourced', 'expert-generated', 'machine-generated', 'no-annotation', 'other'.
- **language_creators** (`Union[str, list[str]]`, *optional*) --
  How the text-based data in the dataset was created.
  Options are: 'found', 'crowdsourced', 'expert-generated', 'machine-generated', 'other'
- **multilinguality** (`Union[str, list[str]]`, *optional*) --
  Whether the dataset is multilingual.
  Options are: 'monolingual', 'multilingual', 'translation', 'other'.
- **size_categories** (`Union[str, list[str]]`, *optional*) --
  The number of examples in the dataset. Options are: 'n<1K', '1K<n<10K', '10K<n<100K',
  '100K<n<1M', '1M<n<10M', '10M<n<100M', '100M<n<1B', '1B<n<10B', '10B<n<100B', '100B<n<1T', 'n>1T', and 'other'.
- **source_datasets** (`list[str]`, *optional*) --
  Indicates whether the dataset is an original dataset or extended from another existing dataset.
  Options are: 'original' and 'extended'.
- **task_categories** (`Union[str, list[str]]`, *optional*) --
  What categories of task does the dataset support?
- **task_ids** (`Union[str, list[str]]`, *optional*) --
  What specific tasks does the dataset support?
- **paperswithcode_id** (`str`, *optional*) --
  ID of the dataset on PapersWithCode.
- **pretty_name** (`str`, *optional*) --
  A more human-readable name for the dataset. (ex. "Cats vs. Dogs")
- **train_eval_index** (`dict`, *optional*) --
  A dictionary that describes the necessary spec for doing evaluation on the Hub.
  If not provided, it will be gathered from the 'train-eval-index' key of the kwargs.
- **config_names** (`Union[str, list[str]]`, *optional*) --
  A list of the available dataset configs for the dataset.</paramsdesc><paramgroups>0</paramgroups></docstring>
Dataset Card Metadata that is used by the Hugging Face Hub when included at the top of your README.md.
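As with `ModelCardData`, an instance can be constructed directly and serialized to the dictionary that becomes the card's YAML header; only the fields that were actually set are included. For example:

```python
from huggingface_hub import DatasetCardData

card_data = DatasetCardData(
    language="en",
    license="mit",
    task_categories=["text-classification"],
    pretty_name="My Text Classification Dataset",
)

# Unset (None) fields are omitted from the resulting dict.
card_data.to_dict()
```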




</div>

## Space Cards

### SpaceCard[[huggingface_hub.SpaceCard]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.SpaceCard</name><anchor>huggingface_hub.SpaceCard</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard.py#L479</source><parameters>[{"name": "content", "val": ": str"}, {"name": "ignore_metadata_errors", "val": ": bool = False"}]</parameters></docstring>


</div>

### SpaceCardData[[huggingface_hub.SpaceCardData]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.SpaceCardData</name><anchor>huggingface_hub.SpaceCardData</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard_data.py#L474</source><parameters>[{"name": "title", "val": ": typing.Optional[str] = None"}, {"name": "sdk", "val": ": typing.Optional[str] = None"}, {"name": "sdk_version", "val": ": typing.Optional[str] = None"}, {"name": "python_version", "val": ": typing.Optional[str] = None"}, {"name": "app_file", "val": ": typing.Optional[str] = None"}, {"name": "app_port", "val": ": typing.Optional[int] = None"}, {"name": "license", "val": ": typing.Optional[str] = None"}, {"name": "duplicated_from", "val": ": typing.Optional[str] = None"}, {"name": "models", "val": ": typing.Optional[list[str]] = None"}, {"name": "datasets", "val": ": typing.Optional[list[str]] = None"}, {"name": "tags", "val": ": typing.Optional[list[str]] = None"}, {"name": "ignore_metadata_errors", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **title** (`str`, *optional*) --
  Title of the Space.
- **sdk** (`str`, *optional*) --
  SDK of the Space (one of `gradio`, `streamlit`, `docker`, or `static`).
- **sdk_version** (`str`, *optional*) --
  Version of the used SDK (if Gradio/Streamlit sdk).
- **python_version** (`str`, *optional*) --
  Python version used in the Space (if Gradio/Streamlit sdk).
- **app_file** (`str`, *optional*) --
  Path to your main application file (which contains either gradio or streamlit Python code, or static html code).
  Path is relative to the root of the repository.
- **app_port** (`int`, *optional*) --
  Port on which your application is running. Used only if sdk is `docker`.
- **license** (`str`, *optional*) --
  License of this Space. Example: apache-2.0 or any license from
  https://huggingface.co/docs/hub/repositories-licenses.
- **duplicated_from** (`str`, *optional*) --
  ID of the original Space if this is a duplicated Space.
- **models** (`list[str]`, *optional*) --
  List of models related to this Space. Each entry should be a model ID found on https://hf.co/models.
- **datasets** (`list[str]`, *optional*) --
  List of datasets related to this Space. Each entry should be a dataset ID found on https://hf.co/datasets.
- **tags** (`list[str]`, *optional*) --
  List of tags to add to your Space that can be used when filtering on the Hub.
- **ignore_metadata_errors** (`bool`) --
  If True, errors while parsing the metadata section will be ignored. Some information might be lost during
  the process. Use it at your own risk.
- **kwargs** (`dict`, *optional*) --
  Additional metadata that will be added to the space card.</paramsdesc><paramgroups>0</paramgroups></docstring>
Space Card Metadata that is used by the Hugging Face Hub when included at the top of your README.md.

To get an exhaustive reference of Spaces configuration, please visit https://huggingface.co/docs/hub/spaces-config-reference#spaces-configuration-reference.



<ExampleCodeBlock anchor="huggingface_hub.SpaceCardData.example">

Example:
```python
>>> from huggingface_hub import SpaceCardData
>>> card_data = SpaceCardData(
...     title="Dreambooth Training",
...     license="mit",
...     sdk="gradio",
...     duplicated_from="multimodalart/dreambooth-training"
... )
>>> card_data.to_dict()
{'title': 'Dreambooth Training', 'sdk': 'gradio', 'license': 'mit', 'duplicated_from': 'multimodalart/dreambooth-training'}
```

</ExampleCodeBlock>


</div>

## Utilities

### EvalResult[[huggingface_hub.EvalResult]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.EvalResult</name><anchor>huggingface_hub.EvalResult</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard_data.py#L13</source><parameters>[{"name": "task_type", "val": ": str"}, {"name": "dataset_type", "val": ": str"}, {"name": "dataset_name", "val": ": str"}, {"name": "metric_type", "val": ": str"}, {"name": "metric_value", "val": ": typing.Any"}, {"name": "task_name", "val": ": typing.Optional[str] = None"}, {"name": "dataset_config", "val": ": typing.Optional[str] = None"}, {"name": "dataset_split", "val": ": typing.Optional[str] = None"}, {"name": "dataset_revision", "val": ": typing.Optional[str] = None"}, {"name": "dataset_args", "val": ": typing.Optional[dict[str, typing.Any]] = None"}, {"name": "metric_name", "val": ": typing.Optional[str] = None"}, {"name": "metric_config", "val": ": typing.Optional[str] = None"}, {"name": "metric_args", "val": ": typing.Optional[dict[str, typing.Any]] = None"}, {"name": "verified", "val": ": typing.Optional[bool] = None"}, {"name": "verify_token", "val": ": typing.Optional[str] = None"}, {"name": "source_name", "val": ": typing.Optional[str] = None"}, {"name": "source_url", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **task_type** (`str`) --
  The task identifier. Example: "image-classification".
- **dataset_type** (`str`) --
  The dataset identifier. Example: "common_voice". Use dataset id from https://hf.co/datasets.
- **dataset_name** (`str`) --
  A pretty name for the dataset. Example: "Common Voice (French)".
- **metric_type** (`str`) --
  The metric identifier. Example: "wer". Use metric id from https://hf.co/metrics.
- **metric_value** (`Any`) --
  The metric value. Example: 0.9 or "20.0 ± 1.2".
- **task_name** (`str`, *optional*) --
  A pretty name for the task. Example: "Speech Recognition".
- **dataset_config** (`str`, *optional*) --
  The name of the dataset configuration used in `load_dataset()`.
  Example: fr in `load_dataset("common_voice", "fr")`. See the `datasets` docs for more info:
  https://hf.co/docs/datasets/package_reference/loading_methods#datasets.load_dataset.name
- **dataset_split** (`str`, *optional*) --
  The split used in `load_dataset()`. Example: "test".
- **dataset_revision** (`str`, *optional*) --
  The revision (AKA Git Sha) of the dataset used in `load_dataset()`.
  Example: 5503434ddd753f426f4b38109466949a1217c2bb
- **dataset_args** (`dict[str, Any]`, *optional*) --
  The arguments passed during `Metric.compute()`. Example for `bleu`: `{"max_order": 4}`
- **metric_name** (`str`, *optional*) --
  A pretty name for the metric. Example: "Test WER".
- **metric_config** (`str`, *optional*) --
  The name of the metric configuration used in `load_metric()`.
  Example: bleurt-large-512 in `load_metric("bleurt", "bleurt-large-512")`.
  See the `datasets` docs for more info: https://huggingface.co/docs/datasets/v2.1.0/en/loading#load-configurations
- **metric_args** (`dict[str, Any]`, *optional*) --
  The arguments passed during `Metric.compute()`. Example for `bleu`: max_order: 4
- **verified** (`bool`, *optional*) --
  Indicates whether the metrics originate from Hugging Face's [evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator) or not. Automatically computed by Hugging Face, do not set.
- **verify_token** (`str`, *optional*) --
  A JSON Web Token that is used to verify whether the metrics originate from Hugging Face's [evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator) or not.
- **source_name** (`str`, *optional*) --
  The name of the source of the evaluation result. Example: "Open LLM Leaderboard".
- **source_url** (`str`, *optional*) --
  The URL of the source of the evaluation result. Example: "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard".</paramsdesc><paramgroups>0</paramgroups></docstring>

Flattened representation of individual evaluation results found in the model-index of Model Cards.

For more information on the model-index spec, see https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>is_equal_except_value</name><anchor>huggingface_hub.EvalResult.is_equal_except_value</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard_data.py#L145</source><parameters>[{"name": "other", "val": ": EvalResult"}]</parameters></docstring>

Return True if `self` and `other` describe exactly the same metric but with a
different value.
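This is useful when merging or updating evaluation results: two results that describe the same task/dataset/metric combination compare equal here even if the measured values differ. For example:

```python
from huggingface_hub import EvalResult

a = EvalResult(
    task_type="image-classification",
    dataset_type="beans",
    dataset_name="Beans",
    metric_type="accuracy",
    metric_value=0.90,
)
# Same metric definition, different measured value.
b = EvalResult(
    task_type="image-classification",
    dataset_type="beans",
    dataset_name="Beans",
    metric_type="accuracy",
    metric_value=0.95,
)

a.is_equal_except_value(b)  # True: only metric_value differs
```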


</div></div>

### model_index_to_eval_results[[huggingface_hub.repocard_data.model_index_to_eval_results]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.repocard_data.model_index_to_eval_results</name><anchor>huggingface_hub.repocard_data.model_index_to_eval_results</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard_data.py#L555</source><parameters>[{"name": "model_index", "val": ": list"}]</parameters><paramsdesc>- **model_index** (`list[dict[str, Any]]`) --
  A model index data structure, likely coming from a README.md file on the
  Hugging Face Hub.</paramsdesc><paramgroups>0</paramgroups><rettype>model_name (`str`)</rettype><retdesc>The name of the model as found in the model index. This is used as the
identifier for the model on leaderboards like PapersWithCode.
eval_results (`list[EvalResult]`):
A list of `huggingface_hub.EvalResult` objects containing the metrics
reported in the provided model_index.</retdesc></docstring>
Takes in a model index and returns the model name and a list of `huggingface_hub.EvalResult` objects.

A detailed spec of the model index can be found here:
https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1







<ExampleCodeBlock anchor="huggingface_hub.repocard_data.model_index_to_eval_results.example">

Example:
```python
>>> from huggingface_hub.repocard_data import model_index_to_eval_results
>>> # Define a minimal model index
>>> model_index = [
...     {
...         "name": "my-cool-model",
...         "results": [
...             {
...                 "task": {
...                     "type": "image-classification"
...                 },
...                 "dataset": {
...                     "type": "beans",
...                     "name": "Beans"
...                 },
...                 "metrics": [
...                     {
...                         "type": "accuracy",
...                         "value": 0.9
...                     }
...                 ]
...             }
...         ]
...     }
... ]
>>> model_name, eval_results = model_index_to_eval_results(model_index)
>>> model_name
'my-cool-model'
>>> eval_results[0].task_type
'image-classification'
>>> eval_results[0].metric_type
'accuracy'

```

</ExampleCodeBlock>


</div>

### eval_results_to_model_index[[huggingface_hub.repocard_data.eval_results_to_model_index]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.repocard_data.eval_results_to_model_index</name><anchor>huggingface_hub.repocard_data.eval_results_to_model_index</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard_data.py#L671</source><parameters>[{"name": "model_name", "val": ": str"}, {"name": "eval_results", "val": ": list"}]</parameters><paramsdesc>- **model_name** (`str`) --
  Name of the model (ex. "my-cool-model"). This is used as the identifier
  for the model on leaderboards like PapersWithCode.
- **eval_results** (`list[EvalResult]`) --
  List of `huggingface_hub.EvalResult` objects containing the metrics to be
  reported in the model-index.</paramsdesc><paramgroups>0</paramgroups><rettype>model_index (`list[dict[str, Any]]`)</rettype><retdesc>The eval_results converted to a model-index.</retdesc></docstring>
Takes a model name and a list of `huggingface_hub.EvalResult` objects and returns a
valid model-index that will be compatible with the format expected by the
Hugging Face Hub.







<ExampleCodeBlock anchor="huggingface_hub.repocard_data.eval_results_to_model_index.example">

Example:
```python
>>> from huggingface_hub.repocard_data import eval_results_to_model_index, EvalResult
>>> # Define minimal eval_results
>>> eval_results = [
...     EvalResult(
...         task_type="image-classification",  # Required
...         dataset_type="beans",  # Required
...         dataset_name="Beans",  # Required
...         metric_type="accuracy",  # Required
...         metric_value=0.9,  # Required
...     )
... ]
>>> eval_results_to_model_index("my-cool-model", eval_results)
[{'name': 'my-cool-model', 'results': [{'task': {'type': 'image-classification'}, 'dataset': {'name': 'Beans', 'type': 'beans'}, 'metrics': [{'type': 'accuracy', 'value': 0.9}]}]}]

```

</ExampleCodeBlock>


</div>

### metadata_eval_result[[huggingface_hub.metadata_eval_result]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.metadata_eval_result</name><anchor>huggingface_hub.metadata_eval_result</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard.py#L551</source><parameters>[{"name": "model_pretty_name", "val": ": str"}, {"name": "task_pretty_name", "val": ": str"}, {"name": "task_id", "val": ": str"}, {"name": "metrics_pretty_name", "val": ": str"}, {"name": "metrics_id", "val": ": str"}, {"name": "metrics_value", "val": ": typing.Any"}, {"name": "dataset_pretty_name", "val": ": str"}, {"name": "dataset_id", "val": ": str"}, {"name": "metrics_config", "val": ": typing.Optional[str] = None"}, {"name": "metrics_verified", "val": ": bool = False"}, {"name": "dataset_config", "val": ": typing.Optional[str] = None"}, {"name": "dataset_split", "val": ": typing.Optional[str] = None"}, {"name": "dataset_revision", "val": ": typing.Optional[str] = None"}, {"name": "metrics_verification_token", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **model_pretty_name** (`str`) --
  The name of the model in natural language.
- **task_pretty_name** (`str`) --
  The name of a task in natural language.
- **task_id** (`str`) --
  Example: automatic-speech-recognition. A task id.
- **metrics_pretty_name** (`str`) --
  A name for the metric in natural language. Example: Test WER.
- **metrics_id** (`str`) --
  Example: wer. A metric id from https://hf.co/metrics.
- **metrics_value** (`Any`) --
  The value from the metric. Example: 20.0 or "20.0 ± 1.2".
- **dataset_pretty_name** (`str`) --
  The name of the dataset in natural language.
- **dataset_id** (`str`) --
  Example: common_voice. A dataset id from https://hf.co/datasets.
- **metrics_config** (`str`, *optional*) --
  The name of the metric configuration used in `load_metric()`.
  Example: bleurt-large-512 in `load_metric("bleurt", "bleurt-large-512")`.
- **metrics_verified** (`bool`, *optional*, defaults to `False`) --
  Indicates whether the metrics originate from Hugging Face's [evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator) or not. Automatically computed by Hugging Face, do not set.
- **dataset_config** (`str`, *optional*) --
  Example: fr. The name of the dataset configuration used in `load_dataset()`.
- **dataset_split** (`str`, *optional*) --
  Example: test. The name of the dataset split used in `load_dataset()`.
- **dataset_revision** (`str`, *optional*) --
  Example: 5503434ddd753f426f4b38109466949a1217c2bb. The dataset revision
  used in `load_dataset()`.
- **metrics_verification_token** (`str`, *optional*) --
  A JSON Web Token that is used to verify whether the metrics originate from Hugging Face's [evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator) or not.</paramsdesc><paramgroups>0</paramgroups><rettype>`dict`</rettype><retdesc>a metadata dict with the result from a model evaluated on a dataset.</retdesc></docstring>

Creates a metadata dict with the result from a model evaluated on a dataset.







<ExampleCodeBlock anchor="huggingface_hub.metadata_eval_result.example">

Example:
```python
>>> from huggingface_hub import metadata_eval_result
>>> results = metadata_eval_result(
...         model_pretty_name="RoBERTa fine-tuned on ReactionGIF",
...         task_pretty_name="Text Classification",
...         task_id="text-classification",
...         metrics_pretty_name="Accuracy",
...         metrics_id="accuracy",
...         metrics_value=0.2662102282047272,
...         dataset_pretty_name="ReactionJPEG",
...         dataset_id="julien-c/reactionjpeg",
...         dataset_config="default",
...         dataset_split="test",
... )
>>> results == {
...     'model-index': [
...         {
...             'name': 'RoBERTa fine-tuned on ReactionGIF',
...             'results': [
...                 {
...                     'task': {
...                         'type': 'text-classification',
...                         'name': 'Text Classification'
...                     },
...                     'dataset': {
...                         'name': 'ReactionJPEG',
...                         'type': 'julien-c/reactionjpeg',
...                         'config': 'default',
...                         'split': 'test'
...                     },
...                     'metrics': [
...                         {
...                             'type': 'accuracy',
...                             'value': 0.2662102282047272,
...                             'name': 'Accuracy',
...                             'verified': False
...                         }
...                     ]
...                 }
...             ]
...         }
...     ]
... }
True

```

</ExampleCodeBlock>


</div>

### metadata_update[[huggingface_hub.metadata_update]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.metadata_update</name><anchor>huggingface_hub.metadata_update</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard.py#L679</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "metadata", "val": ": dict"}, {"name": "repo_type", "val": ": typing.Optional[str] = None"}, {"name": "overwrite", "val": ": bool = False"}, {"name": "token", "val": ": typing.Optional[str] = None"}, {"name": "commit_message", "val": ": typing.Optional[str] = None"}, {"name": "commit_description", "val": ": typing.Optional[str] = None"}, {"name": "revision", "val": ": typing.Optional[str] = None"}, {"name": "create_pr", "val": ": bool = False"}, {"name": "parent_commit", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The name of the repository.
- **metadata** (`dict`) --
  A dictionary containing the metadata to be updated.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if updating a dataset or space,
  `None` or `"model"` if updating a model. Default is `None`.
- **overwrite** (`bool`, *optional*, defaults to `False`) --
  If set to `True` an existing field can be overwritten, otherwise
  attempting to overwrite an existing field will cause an error.
- **token** (`str`, *optional*) --
  The Hugging Face authentication token.
- **commit_message** (`str`, *optional*) --
  The summary / title / first line of the generated commit. Defaults to
  `f"Update metadata with huggingface_hub"`
- **commit_description** (`str`, *optional*) --
  The description of the generated commit.
- **revision** (`str`, *optional*) --
  The git revision to commit from. Defaults to the head of the
  `"main"` branch.
- **create_pr** (`bool`, *optional*, defaults to `False`) --
  Whether or not to create a Pull Request from `revision` with that commit.
- **parent_commit** (`str`, *optional*) --
  The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported.
  If specified and `create_pr` is `False`, the commit will fail if `revision` does not point to `parent_commit`.
  If specified and `create_pr` is `True`, the pull request will be created from `parent_commit`.
  Specifying `parent_commit` ensures the repo has not changed before committing the changes, and can be
  especially useful if the repo is updated / committed to concurrently.</paramsdesc><paramgroups>0</paramgroups><rettype>`str`</rettype><retdesc>URL of the commit which updated the card metadata.</retdesc></docstring>

Updates the metadata in the README.md of a repository on the Hugging Face Hub.
If the README.md file doesn't exist yet, a new one is created with the metadata and
the default ModelCard or DatasetCard template. For a `space` repo, an error is thrown,
as a Space cannot exist without a `README.md` file.







<ExampleCodeBlock anchor="huggingface_hub.metadata_update.example">

Example:
```python
>>> from huggingface_hub import metadata_update
>>> metadata = {'model-index': [{'name': 'RoBERTa fine-tuned on ReactionGIF',
...             'results': [{'dataset': {'name': 'ReactionGIF',
...                                      'type': 'julien-c/reactiongif'},
...                           'metrics': [{'name': 'Recall',
...                                        'type': 'recall',
...                                        'value': 0.7762102282047272}],
...                          'task': {'name': 'Text Classification',
...                                   'type': 'text-classification'}}]}]}
>>> url = metadata_update("hf-internal-testing/reactiongif-roberta-card", metadata)

```

</ExampleCodeBlock>


</div>

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/package_reference/cards.md" />

### Downloading files
https://huggingface.co/docs/huggingface_hub/main/package_reference/file_download.md

# Downloading files

## Download a single file

### hf_hub_download[[huggingface_hub.hf_hub_download]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.hf_hub_download</name><anchor>huggingface_hub.hf_hub_download</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/file_download.py#L825</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "filename", "val": ": str"}, {"name": "subfolder", "val": ": typing.Optional[str] = None"}, {"name": "repo_type", "val": ": typing.Optional[str] = None"}, {"name": "revision", "val": ": typing.Optional[str] = None"}, {"name": "library_name", "val": ": typing.Optional[str] = None"}, {"name": "library_version", "val": ": typing.Optional[str] = None"}, {"name": "cache_dir", "val": ": typing.Union[str, pathlib.Path, NoneType] = None"}, {"name": "local_dir", "val": ": typing.Union[str, pathlib.Path, NoneType] = None"}, {"name": "user_agent", "val": ": typing.Union[dict, str, NoneType] = None"}, {"name": "force_download", "val": ": bool = False"}, {"name": "etag_timeout", "val": ": float = 10"}, {"name": "token", "val": ": typing.Union[bool, str, NoneType] = None"}, {"name": "local_files_only", "val": ": bool = False"}, {"name": "headers", "val": ": typing.Optional[dict[str, str]] = None"}, {"name": "endpoint", "val": ": typing.Optional[str] = None"}, {"name": "tqdm_class", "val": ": typing.Optional[type[tqdm.asyncio.tqdm_asyncio]] = None"}, {"name": "dry_run", "val": ": bool = False"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A user or an organization name and a repo name separated by a `/`.
- **filename** (`str`) --
  The name of the file in the repo.
- **subfolder** (`str`, *optional*) --
  An optional value corresponding to a folder inside the model repo.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if downloading from a dataset or space,
  `None` or `"model"` if downloading from a model. Default is `None`.
- **revision** (`str`, *optional*) --
  An optional Git revision id which can be a branch name, a tag, or a
  commit hash.
- **library_name** (`str`, *optional*) --
  The name of the library to which the object corresponds.
- **library_version** (`str`, *optional*) --
  The version of the library.
- **cache_dir** (`str`, `Path`, *optional*) --
  Path to the folder where cached files are stored.
- **local_dir** (`str` or `Path`, *optional*) --
  If provided, the downloaded file will be placed under this directory.
- **user_agent** (`dict`, `str`, *optional*) --
  The user-agent info in the form of a dictionary or a string.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether the file should be downloaded even if it already exists in
  the local cache.
- **etag_timeout** (`float`, *optional*, defaults to `10`) --
  When fetching the ETag, how many seconds to wait for the server to send
  data before giving up. Passed to `requests.request`.
- **token** (`str`, `bool`, *optional*) --
  A token to be used for the download.
  - If `True`, the token is read from the HuggingFace config
    folder.
  - If a string, it's used as the authentication token.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  If `True`, avoid downloading the file and return the path to the
  local cached file if it exists.
- **headers** (`dict`, *optional*) --
  Additional headers to be sent with the request.
- **tqdm_class** (`tqdm`, *optional*) --
  If provided, overwrites the default behavior for the progress bar. Passed
  argument must inherit from `tqdm.auto.tqdm` or at least mimic its behavior.
  Defaults to the custom HF progress bar that can be disabled by setting
  `HF_HUB_DISABLE_PROGRESS_BARS` environment variable.
- **dry_run** (`bool`, *optional*, defaults to `False`) --
  If `True`, perform a dry run without actually downloading the file. Returns a
  [DryRunFileInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.DryRunFileInfo) object containing information about what would be downloaded.</paramsdesc><paramgroups>0</paramgroups><rettype>`str` or [DryRunFileInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.DryRunFileInfo)</rettype><retdesc>- If `dry_run=False`: Local path of file or if networking is off, last version of file cached on disk.
- If `dry_run=True`: A [DryRunFileInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.DryRunFileInfo) object containing download information.</retdesc><raises>- [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  If the repository to download from cannot be found. This may be because it doesn't exist,
  or because it is set to `private` and you do not have access.
- [RevisionNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError) -- 
  If the revision to download from cannot be found.
- `~utils.RemoteEntryNotFoundError` -- 
  If the file to download cannot be found.
- [LocalEntryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.LocalEntryNotFoundError) -- 
  If network is disabled or unavailable and file is not found in cache.
- [`EnvironmentError`](https://docs.python.org/3/library/exceptions.html#EnvironmentError) -- 
  If `token=True` but the token cannot be found.
- [`OSError`](https://docs.python.org/3/library/exceptions.html#OSError) -- 
  If ETag cannot be determined.
- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If some parameter value is invalid.</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) or [RevisionNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError) or `~utils.RemoteEntryNotFoundError` or [LocalEntryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.LocalEntryNotFoundError) or ``EnvironmentError`` or ``OSError`` or ``ValueError``</raisederrors></docstring>
Download a given file if it's not already present in the local cache.

The new cache file layout looks like this:
- The cache directory contains one subfolder per repo_id (namespaced by repo type).
- Inside each repo folder:
  - `refs` is a list of the latest known revision => commit_hash pairs.
  - `blobs` contains the actual file blobs, identified by their git-sha or sha256 (depending on
    whether they're LFS files or not).
  - `snapshots` contains one subfolder per commit; each "commit" contains the subset of the files
    that have been resolved at that particular commit. Each filename is a symlink to the blob
    at that particular commit.

<ExampleCodeBlock anchor="huggingface_hub.hf_hub_download.example">

```
[  96]  .
└── [ 160]  models--julien-c--EsperBERTo-small
    ├── [ 160]  blobs
    │   ├── [321M]  403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd
    │   ├── [ 398]  7cb18dc9bafbfcf74629a4b760af1b160957a83e
    │   └── [1.4K]  d7edf6bd2a681fb0175f7735299831ee1b22b812
    ├── [  96]  refs
    │   └── [  40]  main
    └── [ 128]  snapshots
        ├── [ 128]  2439f60ef33a0d46d85da5001d52aeda5b00ce9f
        │   ├── [  52]  README.md -> ../../blobs/d7edf6bd2a681fb0175f7735299831ee1b22b812
        │   └── [  76]  pytorch_model.bin -> ../../blobs/403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd
        └── [ 128]  bbc77c8132af1cc5cf678da3f1ddf2de43606d48
            ├── [  52]  README.md -> ../../blobs/7cb18dc9bafbfcf74629a4b760af1b160957a83e
            └── [  76]  pytorch_model.bin -> ../../blobs/403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd
```

</ExampleCodeBlock>

If `local_dir` is provided, the file structure from the repo will be replicated in this location. When using this
option, the `cache_dir` will not be used and a `.cache/huggingface/` folder will be created at the root of `local_dir`
to store some metadata related to the downloaded files. While this mechanism is not as robust as the main
cache-system, it's optimized for regularly pulling the latest version of a repository.
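In practice, a call looks like the following sketch (the repo id mirrors the cache-layout example above; the first call requires network access, so it is guarded to degrade gracefully offline):

```python
from huggingface_hub import hf_hub_download

# Download a single file into the shared cache and return its local path.
try:
    path = hf_hub_download(
        repo_id="julien-c/EsperBERTo-small",
        filename="README.md",
    )
    print(path)
except Exception as err:  # offline, gated repo, or other transient failure
    print(f"download skipped: {err}")
```

Subsequent calls with the same arguments resolve from the cache without re-downloading, unless `force_download=True` is passed.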












</div>

### hf_hub_url[[huggingface_hub.hf_hub_url]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.hf_hub_url</name><anchor>huggingface_hub.hf_hub_url</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/file_download.py#L181</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "filename", "val": ": str"}, {"name": "subfolder", "val": ": typing.Optional[str] = None"}, {"name": "repo_type", "val": ": typing.Optional[str] = None"}, {"name": "revision", "val": ": typing.Optional[str] = None"}, {"name": "endpoint", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) name and a repo name separated
  by a `/`.
- **filename** (`str`) --
  The name of the file in the repo.
- **subfolder** (`str`, *optional*) --
  An optional value corresponding to a folder inside the repo.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if downloading from a dataset or space,
  `None` or `"model"` if downloading from a model. Default is `None`.
- **revision** (`str`, *optional*) --
  An optional Git revision id which can be a branch name, a tag, or a
  commit hash.</paramsdesc><paramgroups>0</paramgroups></docstring>
Construct the URL of a file from the given information.

The resolved address can either be a huggingface.co-hosted URL or a link to
Cloudfront (a Content Delivery Network, or CDN) for large files of more than
a few MBs.



<ExampleCodeBlock anchor="huggingface_hub.hf_hub_url.example">

Example:

```python
>>> from huggingface_hub import hf_hub_url

>>> hf_hub_url(
...     repo_id="julien-c/EsperBERTo-small", filename="pytorch_model.bin"
... )
'https://huggingface.co/julien-c/EsperBERTo-small/resolve/main/pytorch_model.bin'
```

</ExampleCodeBlock>
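The `revision` parameter pins the URL to a specific branch, tag, or commit. A small sketch, reusing the illustrative repo id and one of the commit hashes shown in the cache-layout example above:

```python
from huggingface_hub import hf_hub_url

# Pin the URL to a specific commit hash instead of the default "main" branch
url = hf_hub_url(
    repo_id="julien-c/EsperBERTo-small",
    filename="pytorch_model.bin",
    revision="bbc77c8132af1cc5cf678da3f1ddf2de43606d48",
)
print(url)
# https://huggingface.co/julien-c/EsperBERTo-small/resolve/bbc77c8132af1cc5cf678da3f1ddf2de43606d48/pytorch_model.bin
```

This is a pure string construction: no network request is made until the URL is actually fetched.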

> [!TIP]
> Cloudfront is replicated over the globe so downloads are way faster for
> the end user (and it also lowers our bandwidth costs).
>
> Cloudfront aggressively caches files by default (default TTL is 24
> hours), however this is not an issue here because we implement a
> git-based versioning system on huggingface.co, which means that we store
> the files on S3/Cloudfront in a content-addressable way (i.e., the file
> name is its hash). Using content-addressable filenames means the cache can't
> ever be stale.
>
> In terms of client-side caching from this library, we base our caching
> on the objects' entity tag (`ETag`), which is an identifier of a
> specific version of a resource [1]. An object's ETag is: its git-sha1
> if stored in git, or its sha256 if stored in git-lfs.

References:

-  [1] https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag


</div>

## Download a snapshot of the repo[[huggingface_hub.snapshot_download]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.snapshot_download</name><anchor>huggingface_hub.snapshot_download</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_snapshot_download.py#L104</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "repo_type", "val": ": typing.Optional[str] = None"}, {"name": "revision", "val": ": typing.Optional[str] = None"}, {"name": "cache_dir", "val": ": typing.Union[str, pathlib.Path, NoneType] = None"}, {"name": "local_dir", "val": ": typing.Union[str, pathlib.Path, NoneType] = None"}, {"name": "library_name", "val": ": typing.Optional[str] = None"}, {"name": "library_version", "val": ": typing.Optional[str] = None"}, {"name": "user_agent", "val": ": typing.Union[dict, str, NoneType] = None"}, {"name": "etag_timeout", "val": ": float = 10"}, {"name": "force_download", "val": ": bool = False"}, {"name": "token", "val": ": typing.Union[bool, str, NoneType] = None"}, {"name": "local_files_only", "val": ": bool = False"}, {"name": "allow_patterns", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "ignore_patterns", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "max_workers", "val": ": int = 8"}, {"name": "tqdm_class", "val": ": typing.Optional[type[tqdm.asyncio.tqdm_asyncio]] = None"}, {"name": "headers", "val": ": typing.Optional[dict[str, str]] = None"}, {"name": "endpoint", "val": ": typing.Optional[str] = None"}, {"name": "dry_run", "val": ": bool = False"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A user or an organization name and a repo name separated by a `/`.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if downloading from a dataset or space,
  `None` or `"model"` if downloading from a model. Default is `None`.
- **revision** (`str`, *optional*) --
  An optional Git revision id which can be a branch name, a tag, or a
  commit hash.
- **cache_dir** (`str`, `Path`, *optional*) --
  Path to the folder where cached files are stored.
- **local_dir** (`str` or `Path`, *optional*) --
  If provided, the downloaded files will be placed under this directory.
- **library_name** (`str`, *optional*) --
  The name of the library to which the object corresponds.
- **library_version** (`str`, *optional*) --
  The version of the library.
- **user_agent** (`str`, `dict`, *optional*) --
  The user-agent info in the form of a dictionary or a string.
- **etag_timeout** (`float`, *optional*, defaults to `10`) --
  When fetching the ETag, how many seconds to wait for the server to send
  data before giving up. Passed to `httpx.request`.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether the file should be downloaded even if it already exists in the local cache.
- **token** (`str`, `bool`, *optional*) --
  A token to be used for the download.
  - If `True`, the token is read from the HuggingFace config
    folder.
  - If a string, it's used as the authentication token.
- **headers** (`dict`, *optional*) --
  Additional headers to include in the request. Those headers take precedence over the others.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  If `True`, avoid downloading the file and return the path to the
  local cached file if it exists.
- **allow_patterns** (`list[str]` or `str`, *optional*) --
  If provided, only files matching at least one pattern are downloaded.
- **ignore_patterns** (`list[str]` or `str`, *optional*) --
  If provided, files matching any of the patterns are not downloaded.
- **max_workers** (`int`, *optional*) --
  Number of concurrent threads to download files (1 thread = 1 file download).
  Defaults to 8.
- **tqdm_class** (`tqdm`, *optional*) --
  If provided, overwrites the default behavior for the progress bar. Passed
  argument must inherit from `tqdm.auto.tqdm` or at least mimic its behavior.
  Note that the `tqdm_class` is not passed to each individual download.
  Defaults to the custom HF progress bar that can be disabled by setting
  `HF_HUB_DISABLE_PROGRESS_BARS` environment variable.
- **dry_run** (`bool`, *optional*, defaults to `False`) --
  If `True`, perform a dry run without actually downloading the files. Returns a list of
  [DryRunFileInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.DryRunFileInfo) objects containing information about what would be downloaded.</paramsdesc><paramgroups>0</paramgroups><rettype>`str` or list of [DryRunFileInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.DryRunFileInfo)</rettype><retdesc>- If `dry_run=False`: Local snapshot path.
- If `dry_run=True`: A list of [DryRunFileInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.DryRunFileInfo) objects containing download information.</retdesc><raises>- [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  If the repository to download from cannot be found. This may be because it doesn't exist,
  or because it is set to `private` and you do not have access.
- [RevisionNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError) -- 
  If the revision to download from cannot be found.
- [`EnvironmentError`](https://docs.python.org/3/library/exceptions.html#EnvironmentError) -- 
  If `token=True` and the token cannot be found.
- [`OSError`](https://docs.python.org/3/library/exceptions.html#OSError) --
  If ETag cannot be determined.
- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  if some parameter value is invalid.</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) or [RevisionNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError) or ``EnvironmentError`` or ``OSError`` or ``ValueError``</raisederrors></docstring>
Download repo files.

Download a whole snapshot of a repo's files at the specified revision. This is useful when you want all files from
a repo, because you don't know which ones you will need a priori. All files are nested inside a folder in order
to keep their actual filename relative to that folder. You can also filter which files to download using
`allow_patterns` and `ignore_patterns`.

If `local_dir` is provided, the file structure from the repo will be replicated in this location. When using this
option, the `cache_dir` will not be used and a `.cache/huggingface/` folder will be created at the root of `local_dir`
to store some metadata related to the downloaded files. While this mechanism is not as robust as the main
cache-system, it's optimized for regularly pulling the latest version of a repository.

An alternative would be to clone the repo, but this requires `git` and `git-lfs` to be installed and properly
configured. It is also not possible to filter which files to download when cloning a repository using git.
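For example, to fetch only a subset of a repo with `allow_patterns` and `ignore_patterns` (a sketch; the repo id and patterns are illustrative, and the call is guarded since it requires network access):

```python
from huggingface_hub import snapshot_download

try:
    # Download only JSON and text files, explicitly skipping the binary weights
    local_path = snapshot_download(
        repo_id="julien-c/EsperBERTo-small",
        allow_patterns=["*.json", "*.txt"],
        ignore_patterns=["*.bin"],
    )
    print(local_path)
except Exception as err:  # offline, gated repo, or other transient failure
    print(f"download skipped: {err}")
```

A file is downloaded only if it matches at least one `allow_patterns` entry and none of the `ignore_patterns` entries.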












</div>

## Get metadata about a file

### get_hf_file_metadata[[huggingface_hub.get_hf_file_metadata]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.get_hf_file_metadata</name><anchor>huggingface_hub.get_hf_file_metadata</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/file_download.py#L1500</source><parameters>[{"name": "url", "val": ": str"}, {"name": "token", "val": ": typing.Union[bool, str, NoneType] = None"}, {"name": "timeout", "val": ": typing.Optional[float] = 10"}, {"name": "library_name", "val": ": typing.Optional[str] = None"}, {"name": "library_version", "val": ": typing.Optional[str] = None"}, {"name": "user_agent", "val": ": typing.Union[dict, str, NoneType] = None"}, {"name": "headers", "val": ": typing.Optional[dict[str, str]] = None"}, {"name": "endpoint", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **url** (`str`) --
  File url, for example returned by [hf_hub_url()](/docs/huggingface_hub/main/en/package_reference/file_download#huggingface_hub.hf_hub_url).
- **token** (`str` or `bool`, *optional*) --
  A token to be used for the download.
  - If `True`, the token is read from the HuggingFace config
    folder.
  - If `False` or `None`, no token is provided.
  - If a string, it's used as the authentication token.
- **timeout** (`float`, *optional*, defaults to 10) --
  How many seconds to wait for the server to send metadata before giving up.
- **library_name** (`str`, *optional*) --
  The name of the library to which the object corresponds.
- **library_version** (`str`, *optional*) --
  The version of the library.
- **user_agent** (`dict`, `str`, *optional*) --
  The user-agent info in the form of a dictionary or a string.
- **headers** (`dict`, *optional*) --
  Additional headers to be sent with the request.
- **endpoint** (`str`, *optional*) --
  Endpoint of the Hub. Defaults to <https://huggingface.co>.</paramsdesc><paramgroups>0</paramgroups><retdesc>A [HfFileMetadata](/docs/huggingface_hub/main/en/package_reference/file_download#huggingface_hub.HfFileMetadata) object containing metadata such as location, etag, size and
commit_hash.</retdesc></docstring>
Fetch metadata of a file versioned on the Hub for a given URL.






</div>

### HfFileMetadata[[huggingface_hub.HfFileMetadata]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.HfFileMetadata</name><anchor>huggingface_hub.HfFileMetadata</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/file_download.py#L127</source><parameters>[{"name": "commit_hash", "val": ": typing.Optional[str]"}, {"name": "etag", "val": ": typing.Optional[str]"}, {"name": "location", "val": ": str"}, {"name": "size", "val": ": typing.Optional[int]"}, {"name": "xet_file_data", "val": ": typing.Optional[huggingface_hub.utils._xet.XetFileData]"}]</parameters><paramsdesc>- **commit_hash** (`str`, *optional*) --
  The commit_hash related to the file.
- **etag** (`str`, *optional*) --
  Etag of the file on the server.
- **location** (`str`) --
  URL from which to download the file. Can point to the Hub or to a CDN.
- **size** (`int`, *optional*) --
  Size of the file. In case of an LFS file, contains the size of the actual
  LFS file, not the pointer.
- **xet_file_data** (`XetFileData`, *optional*) --
  Xet information for the file. This is only set if the file is stored using Xet storage.</paramsdesc><paramgroups>0</paramgroups></docstring>
Data structure containing information about a file versioned on the Hub.

Returned by [get_hf_file_metadata()](/docs/huggingface_hub/main/en/package_reference/file_download#huggingface_hub.get_hf_file_metadata) based on a URL.
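A combined sketch with `hf_hub_url()` (the repo id is illustrative; the metadata request needs network access, hence the guard):

```python
from huggingface_hub import get_hf_file_metadata, hf_hub_url

# Build the file URL locally, then issue a HEAD-style request for its metadata
url = hf_hub_url(repo_id="julien-c/EsperBERTo-small", filename="README.md")
try:
    metadata = get_hf_file_metadata(url)
    print(metadata.commit_hash, metadata.etag, metadata.size)
except Exception as err:  # offline or repo unavailable
    print(f"metadata fetch skipped: {err}")
```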




</div>

## Caching

The methods displayed above are designed to work with a caching system that prevents
re-downloading files. The caching system was updated in v0.8.0 to become the central
cache-system shared across libraries that depend on the Hub.

Read the [cache-system guide](../guides/manage-cache) for a detailed presentation of caching at HF.


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/package_reference/file_download.md" />

### MCP Client
https://huggingface.co/docs/huggingface_hub/main/package_reference/mcp.md

# MCP Client

The `huggingface_hub` library now includes an [MCPClient](/docs/huggingface_hub/main/en/package_reference/mcp#huggingface_hub.MCPClient), designed to empower Large Language Models (LLMs) with the ability to interact with external Tools via the [Model Context Protocol](https://modelcontextprotocol.io) (MCP). This client extends an [AsyncInferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.AsyncInferenceClient) to seamlessly integrate Tool usage.

The [MCPClient](/docs/huggingface_hub/main/en/package_reference/mcp#huggingface_hub.MCPClient) connects to MCP servers (local `stdio` scripts or remote `http`/`sse` services) that expose tools. It feeds these tools to an LLM (via [AsyncInferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.AsyncInferenceClient)). If the LLM decides to use a tool, [MCPClient](/docs/huggingface_hub/main/en/package_reference/mcp#huggingface_hub.MCPClient) manages the execution request to the MCP server and relays the Tool's output back to the LLM, often streaming results in real-time.
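As a sketch of that flow (the model id and the Playwright MCP server are illustrative; [MCPClient](/docs/huggingface_hub/main/en/package_reference/mcp#huggingface_hub.MCPClient) lives behind the optional `mcp` extra, so the coroutine is defined but not run here):

```python
import asyncio

async def connect_and_register() -> None:
    # Requires: pip install "huggingface_hub[mcp]"
    from huggingface_hub import MCPClient

    client = MCPClient(model="meta-llama/Meta-Llama-3-8B-Instruct")
    # Register a local stdio MCP server; `command`/`args` follow
    # the documented parameters of `add_mcp_server`
    await client.add_mcp_server(
        type="stdio",
        command="npx",
        args=["@playwright/mcp@latest"],
    )

# asyncio.run(connect_and_register())  # needs the `mcp` extra, npx, and network access
```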

We also provide a higher-level [Agent](/docs/huggingface_hub/main/en/package_reference/mcp#huggingface_hub.Agent) class. This 'Tiny Agent' simplifies creating conversational Agents by managing the chat loop and state, acting as a wrapper around [MCPClient](/docs/huggingface_hub/main/en/package_reference/mcp#huggingface_hub.MCPClient).



## MCP Client[[huggingface_hub.MCPClient]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.MCPClient</name><anchor>huggingface_hub.MCPClient</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_mcp/mcp_client.py#L55</source><parameters>[{"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "provider", "val": ": typing.Union[typing.Literal['black-forest-labs', 'cerebras', 'clarifai', 'cohere', 'fal-ai', 'featherless-ai', 'fireworks-ai', 'groq', 'hf-inference', 'hyperbolic', 'nebius', 'novita', 'nscale', 'openai', 'publicai', 'replicate', 'sambanova', 'scaleway', 'together', 'wavespeed', 'zai-org'], typing.Literal['auto'], NoneType] = None"}, {"name": "base_url", "val": ": typing.Optional[str] = None"}, {"name": "api_key", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **model** (`str`, `optional`) --
  The model to run inference with. Can be a model id hosted on the Hugging Face Hub, e.g. `meta-llama/Meta-Llama-3-8B-Instruct`
  or a URL to a deployed Inference Endpoint or other local or remote endpoint.
- **provider** (`str`, *optional*) --
  Name of the provider to use for inference. Defaults to `"auto"`, i.e. the first of the providers available for the model, sorted by the user's order in https://hf.co/settings/inference-providers.
  If the model is a URL or `base_url` is passed, then `provider` is not used.
- **base_url** (`str`, *optional*) --
  The base URL to run inference. Defaults to None.
- **api_key** (`str`, *optional*) --
  Token to use for authentication. Defaults to the locally saved Hugging Face token if not provided. You can also use your own provider API key to interact directly with the provider's service.</paramsdesc><paramgroups>0</paramgroups></docstring>

Client for connecting to one or more MCP servers and processing chat completions with tools.

> [!WARNING]
> This class is experimental and might be subject to breaking changes in the future without prior notice.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>add_mcp_server</name><anchor>huggingface_hub.MCPClient.add_mcp_server</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_mcp/mcp_client.py#L123</source><parameters>[{"name": "type", "val": ": typing.Literal['stdio', 'sse', 'http']"}, {"name": "**params", "val": ": typing.Any"}]</parameters><paramsdesc>- **type** (`str`) --
  Type of the server to connect to. Can be one of:
  - "stdio": Standard input/output server (local)
  - "sse": Server-sent events (SSE) server
  - "http": StreamableHTTP server
- ****params** (`dict[str, Any]`) --
  Server parameters that can be either:
  - For stdio servers:
    - command (str): The command to run the MCP server
    - args (list[str], optional): Arguments for the command
    - env (dict[str, str], optional): Environment variables for the command
    - cwd (Union[str, Path, None], optional): Working directory for the command
    - allowed_tools (list[str], optional): List of tool names to allow from this server
  - For SSE servers:
    - url (str): The URL of the SSE server
    - headers (dict[str, Any], optional): Headers for the SSE connection
    - timeout (float, optional): Connection timeout
    - sse_read_timeout (float, optional): SSE read timeout
    - allowed_tools (list[str], optional): List of tool names to allow from this server
  - For StreamableHTTP servers:
    - url (str): The URL of the StreamableHTTP server
    - headers (dict[str, Any], optional): Headers for the StreamableHTTP connection
    - timeout (timedelta, optional): Connection timeout
    - sse_read_timeout (timedelta, optional): SSE read timeout
    - terminate_on_close (bool, optional): Whether to terminate on close
    - allowed_tools (list[str], optional): List of tool names to allow from this server</paramsdesc><paramgroups>0</paramgroups></docstring>
Connect to an MCP server.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>cleanup</name><anchor>huggingface_hub.MCPClient.cleanup</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_mcp/mcp_client.py#L109</source><parameters>[]</parameters></docstring>
Clean up resources.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>process_single_turn_with_tools</name><anchor>huggingface_hub.MCPClient.process_single_turn_with_tools</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_mcp/mcp_client.py#L248</source><parameters>[{"name": "messages", "val": ": list"}, {"name": "exit_loop_tools", "val": ": typing.Optional[list[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputTool]] = None"}, {"name": "exit_if_first_chunk_no_tool", "val": ": bool = False"}]</parameters><paramsdesc>- **messages** (`list[dict]`) --
  List of message objects representing the conversation history
- **exit_loop_tools** (`list[ChatCompletionInputTool]`, *optional*) --
  List of tools that should exit the generator when called
- **exit_if_first_chunk_no_tool** (`bool`, *optional*) --
  Exit early if no tool call is present in the first streamed chunks. Defaults to `False`.</paramsdesc><paramgroups>0</paramgroups><yielddesc>[ChatCompletionStreamOutput](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.ChatCompletionStreamOutput) chunks or [ChatCompletionInputMessage](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.ChatCompletionInputMessage) objects</yielddesc></docstring>
Process a query using `self.model` and available tools, yielding chunks and tool outputs.






</div></div>

## Agent[[huggingface_hub.Agent]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.Agent</name><anchor>huggingface_hub.Agent</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_mcp/agent.py#L13</source><parameters>[{"name": "model", "val": ": Optional[str] = None"}, {"name": "servers", "val": ": Iterable[ServerConfig]"}, {"name": "provider", "val": ": Optional[PROVIDER_OR_POLICY_T] = None"}, {"name": "base_url", "val": ": Optional[str] = None"}, {"name": "api_key", "val": ": Optional[str] = None"}, {"name": "prompt", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **model** (`str`, *optional*) --
  The model to run inference with. Can be a model id hosted on the Hugging Face Hub, e.g. `meta-llama/Meta-Llama-3-8B-Instruct`
  or a URL to a deployed Inference Endpoint or other local or remote endpoint.
- **servers** (`Iterable[dict]`) --
  MCP servers to connect to. Each server is a dictionary containing a `type` key and a `config` key. The `type` key can be `"stdio"` or `"sse"`, and the `config` key is a dictionary of arguments for the server.
- **provider** (`str`, *optional*) --
  Name of the provider to use for inference. Defaults to `"auto"`, i.e. the first provider available for the model, sorted by the user's order in https://hf.co/settings/inference-providers.
  If model is a URL or `base_url` is passed, then `provider` is not used.
- **base_url** (`str`, *optional*) --
  The base URL to run inference. Defaults to None.
- **api_key** (`str`, *optional*) --
  Token to use for authentication. Will default to the locally saved Hugging Face token if not provided. You can also use your own provider API key to interact directly with the provider's service.
- **prompt** (`str`, *optional*) --
  The system prompt to use for the agent. Defaults to the default system prompt in `constants.py`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Implementation of a simple Agent: a while loop built on top of an [MCPClient](/docs/huggingface_hub/main/en/package_reference/mcp#huggingface_hub.MCPClient).

> [!WARNING]
> This class is experimental and might be subject to breaking changes in the future without prior notice.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>run</name><anchor>huggingface_hub.Agent.run</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_mcp/agent.py#L57</source><parameters>[{"name": "user_input", "val": ": str"}, {"name": "abort_event", "val": ": Optional[asyncio.Event] = None"}]</parameters><paramsdesc>- **user_input** (`str`) --
  The user input to run the agent with.
- **abort_event** (`asyncio.Event`, *optional*) --
  An event that can be used to abort the agent. If the event is set, the agent will stop running.</paramsdesc><paramgroups>0</paramgroups></docstring>

Run the agent with the given user input.




</div></div>

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/package_reference/mcp.md" />

### Overview
https://huggingface.co/docs/huggingface_hub/main/package_reference/overview.md

# Overview

This section contains an exhaustive and technical description of `huggingface_hub` classes and methods.


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/package_reference/overview.md" />

### Interacting with Discussions and Pull Requests
https://huggingface.co/docs/huggingface_hub/main/package_reference/community.md

# Interacting with Discussions and Pull Requests

Check the [HfApi](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi) documentation page for the reference of methods enabling
interaction with Pull Requests and Discussions on the Hub.

- [get_repo_discussions()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.get_repo_discussions)
- [get_discussion_details()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.get_discussion_details)
- [create_discussion()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_discussion)
- [create_pull_request()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_pull_request)
- [rename_discussion()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.rename_discussion)
- [comment_discussion()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.comment_discussion)
- [edit_discussion_comment()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.edit_discussion_comment)
- [change_discussion_status()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.change_discussion_status)
- [merge_pull_request()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.merge_pull_request)
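
As a sketch, opening a Pull Request and commenting on it could look like this (the repo id is a placeholder and the calls require write access to the repo):

```python
from huggingface_hub import HfApi

api = HfApi()


def open_and_comment(repo_id: str) -> None:
    # Open an empty draft Pull Request against the repo
    pr = api.create_pull_request(repo_id=repo_id, title="Fix model card typo")

    # Leave a comment on the new Pull Request
    api.comment_discussion(repo_id=repo_id, discussion_num=pr.num, comment="Ready for review!")

    # List every Discussion / Pull Request on the repo
    for discussion in api.get_repo_discussions(repo_id=repo_id):
        print(discussion.num, discussion.status, discussion.title)
```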

## Data structures[[huggingface_hub.Discussion]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.Discussion</name><anchor>huggingface_hub.Discussion</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/community.py#L20</source><parameters>[{"name": "title", "val": ": str"}, {"name": "status", "val": ": typing.Literal['open', 'closed', 'merged', 'draft']"}, {"name": "num", "val": ": int"}, {"name": "repo_id", "val": ": str"}, {"name": "repo_type", "val": ": str"}, {"name": "author", "val": ": str"}, {"name": "is_pull_request", "val": ": bool"}, {"name": "created_at", "val": ": datetime"}, {"name": "endpoint", "val": ": str"}]</parameters><paramsdesc>- **title** (`str`) --
  The title of the Discussion / Pull Request
- **status** (`str`) --
  The status of the Discussion / Pull Request.
  It must be one of:
  * `"open"`
  * `"closed"`
  * `"merged"` (only for Pull Requests)
  * `"draft"` (only for Pull Requests)
- **num** (`int`) --
  The number of the Discussion / Pull Request.
- **repo_id** (`str`) --
  The id (`"{namespace}/{repo_name}"`) of the repo on which
  the Discussion / Pull Request was opened.
- **repo_type** (`str`) --
  The type of the repo on which the Discussion / Pull Request was opened.
  Possible values are: `"model"`, `"dataset"`, `"space"`.
- **author** (`str`) --
  The username of the Discussion / Pull Request author.
  Can be `"deleted"` if the user has since been deleted.
- **is_pull_request** (`bool`) --
  Whether or not this is a Pull Request.
- **created_at** (`datetime`) --
  The `datetime` of creation of the Discussion / Pull Request.
- **endpoint** (`str`) --
  Endpoint of the Hub. Default is https://huggingface.co.
- **git_reference** (`str`, *optional*) --
  (property) Git reference to which changes can be pushed if this is a Pull Request, `None` otherwise.
- **url** (`str`) --
  (property) URL of the discussion on the Hub.</paramsdesc><paramgroups>0</paramgroups></docstring>

A Discussion or Pull Request on the Hub.

This dataclass is not intended to be instantiated directly.




</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.DiscussionWithDetails</name><anchor>huggingface_hub.DiscussionWithDetails</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/community.py#L88</source><parameters>[{"name": "title", "val": ": str"}, {"name": "status", "val": ": typing.Literal['open', 'closed', 'merged', 'draft']"}, {"name": "num", "val": ": int"}, {"name": "repo_id", "val": ": str"}, {"name": "repo_type", "val": ": str"}, {"name": "author", "val": ": str"}, {"name": "is_pull_request", "val": ": bool"}, {"name": "created_at", "val": ": datetime"}, {"name": "endpoint", "val": ": str"}, {"name": "events", "val": ": list"}, {"name": "conflicting_files", "val": ": typing.Union[list[str], bool, NoneType]"}, {"name": "target_branch", "val": ": typing.Optional[str]"}, {"name": "merge_commit_oid", "val": ": typing.Optional[str]"}, {"name": "diff", "val": ": typing.Optional[str]"}]</parameters><paramsdesc>- **title** (`str`) --
  The title of the Discussion / Pull Request
- **status** (`str`) --
  The status of the Discussion / Pull Request.
  It can be one of:
  * `"open"`
  * `"closed"`
  * `"merged"` (only for Pull Requests)
  * `"draft"` (only for Pull Requests)
- **num** (`int`) --
  The number of the Discussion / Pull Request.
- **repo_id** (`str`) --
  The id (`"{namespace}/{repo_name}"`) of the repo on which
  the Discussion / Pull Request was opened.
- **repo_type** (`str`) --
  The type of the repo on which the Discussion / Pull Request was opened.
  Possible values are: `"model"`, `"dataset"`, `"space"`.
- **author** (`str`) --
  The username of the Discussion / Pull Request author.
  Can be `"deleted"` if the user has since been deleted.
- **is_pull_request** (`bool`) --
  Whether or not this is a Pull Request.
- **created_at** (`datetime`) --
  The `datetime` of creation of the Discussion / Pull Request.
- **events** (`list` of [DiscussionEvent](/docs/huggingface_hub/main/en/package_reference/community#huggingface_hub.DiscussionEvent)) --
  The list of `DiscussionEvents` in this Discussion or Pull Request.
- **conflicting_files** (`Union[list[str], bool, None]`, *optional*) --
  A list of conflicting files if this is a Pull Request.
  `None` if `self.is_pull_request` is `False`.
  `True` if there are conflicting files but the list can't be retrieved.
- **target_branch** (`str`, *optional*) --
  The branch into which changes are to be merged if this is a
  Pull Request. `None` if `self.is_pull_request` is `False`.
- **merge_commit_oid** (`str`, *optional*) --
  If this is a merged Pull Request, this is set to the OID / SHA of
  the merge commit, `None` otherwise.
- **diff** (`str`, *optional*) --
  The git diff if this is a Pull Request, `None` otherwise.
- **endpoint** (`str`) --
  Endpoint of the Hub. Default is https://huggingface.co.
- **git_reference** (`str`, *optional*) --
  (property) Git reference to which changes can be pushed if this is a Pull Request, `None` otherwise.
- **url** (`str`) --
  (property) URL of the discussion on the Hub.</paramsdesc><paramgroups>0</paramgroups></docstring>

Subclass of [Discussion](/docs/huggingface_hub/main/en/package_reference/community#huggingface_hub.Discussion).




</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.DiscussionEvent</name><anchor>huggingface_hub.DiscussionEvent</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/community.py#L155</source><parameters>[{"name": "id", "val": ": str"}, {"name": "type", "val": ": str"}, {"name": "created_at", "val": ": datetime"}, {"name": "author", "val": ": str"}, {"name": "_event", "val": ": dict"}]</parameters><paramsdesc>- **id** (`str`) --
  The ID of the event. A hexadecimal string.
- **type** (`str`) --
  The type of the event.
- **created_at** (`datetime`) --
  A [`datetime`](https://docs.python.org/3/library/datetime.html?highlight=datetime#datetime.datetime)
  object holding the creation timestamp for the event.
- **author** (`str`) --
  The username of the Discussion / Pull Request author.
  Can be `"deleted"` if the user has since been deleted.</paramsdesc><paramgroups>0</paramgroups></docstring>

An event in a Discussion or Pull Request.

Use one of its concrete subclasses:
* [DiscussionComment](/docs/huggingface_hub/main/en/package_reference/community#huggingface_hub.DiscussionComment)
* [DiscussionStatusChange](/docs/huggingface_hub/main/en/package_reference/community#huggingface_hub.DiscussionStatusChange)
* [DiscussionCommit](/docs/huggingface_hub/main/en/package_reference/community#huggingface_hub.DiscussionCommit)
* [DiscussionTitleChange](/docs/huggingface_hub/main/en/package_reference/community#huggingface_hub.DiscussionTitleChange)
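
As an illustration, a sketch that dispatches on the concrete event type while walking a discussion's history (`details` would be a `DiscussionWithDetails` returned by `get_discussion_details()`):

```python
from huggingface_hub import (
    DiscussionComment,
    DiscussionCommit,
    DiscussionStatusChange,
    DiscussionTitleChange,
)


def summarize_events(details) -> None:
    # Each item in `details.events` is a concrete DiscussionEvent subclass
    for event in details.events:
        if isinstance(event, DiscussionComment):
            print(f"{event.author} commented: {event.content[:60]}")
        elif isinstance(event, DiscussionCommit):
            print(f"commit {event.oid[:7]}: {event.summary}")
        elif isinstance(event, DiscussionStatusChange):
            print(f"status changed to {event.new_status}")
        elif isinstance(event, DiscussionTitleChange):
            print(f"renamed from {event.old_title!r} to {event.new_title!r}")
```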




</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.DiscussionComment</name><anchor>huggingface_hub.DiscussionComment</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/community.py#L188</source><parameters>[{"name": "id", "val": ": str"}, {"name": "type", "val": ": str"}, {"name": "created_at", "val": ": datetime"}, {"name": "author", "val": ": str"}, {"name": "_event", "val": ": dict"}, {"name": "content", "val": ": str"}, {"name": "edited", "val": ": bool"}, {"name": "hidden", "val": ": bool"}]</parameters><paramsdesc>- **id** (`str`) --
  The ID of the event. A hexadecimal string.
- **type** (`str`) --
  The type of the event.
- **created_at** (`datetime`) --
  A [`datetime`](https://docs.python.org/3/library/datetime.html?highlight=datetime#datetime.datetime)
  object holding the creation timestamp for the event.
- **author** (`str`) --
  The username of the Discussion / Pull Request author.
  Can be `"deleted"` if the user has since been deleted.
- **content** (`str`) --
  The raw markdown content of the comment. Mentions, links and images are not rendered.
- **edited** (`bool`) --
  Whether or not this comment has been edited.
- **hidden** (`bool`) --
  Whether or not this comment has been hidden.</paramsdesc><paramgroups>0</paramgroups></docstring>
A comment in a Discussion / Pull Request.

Subclass of [DiscussionEvent](/docs/huggingface_hub/main/en/package_reference/community#huggingface_hub.DiscussionEvent).





</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.DiscussionStatusChange</name><anchor>huggingface_hub.DiscussionStatusChange</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/community.py#L243</source><parameters>[{"name": "id", "val": ": str"}, {"name": "type", "val": ": str"}, {"name": "created_at", "val": ": datetime"}, {"name": "author", "val": ": str"}, {"name": "_event", "val": ": dict"}, {"name": "new_status", "val": ": str"}]</parameters><paramsdesc>- **id** (`str`) --
  The ID of the event. A hexadecimal string.
- **type** (`str`) --
  The type of the event.
- **created_at** (`datetime`) --
  A [`datetime`](https://docs.python.org/3/library/datetime.html?highlight=datetime#datetime.datetime)
  object holding the creation timestamp for the event.
- **author** (`str`) --
  The username of the Discussion / Pull Request author.
  Can be `"deleted"` if the user has since been deleted.
- **new_status** (`str`) --
  The status of the Discussion / Pull Request after the change.
  It can be one of:
  * `"open"`
  * `"closed"`
  * `"merged"` (only for Pull Requests)</paramsdesc><paramgroups>0</paramgroups></docstring>
A change of status in a Discussion / Pull Request.

Subclass of [DiscussionEvent](/docs/huggingface_hub/main/en/package_reference/community#huggingface_hub.DiscussionEvent).




</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.DiscussionCommit</name><anchor>huggingface_hub.DiscussionCommit</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/community.py#L271</source><parameters>[{"name": "id", "val": ": str"}, {"name": "type", "val": ": str"}, {"name": "created_at", "val": ": datetime"}, {"name": "author", "val": ": str"}, {"name": "_event", "val": ": dict"}, {"name": "summary", "val": ": str"}, {"name": "oid", "val": ": str"}]</parameters><paramsdesc>- **id** (`str`) --
  The ID of the event. A hexadecimal string.
- **type** (`str`) --
  The type of the event.
- **created_at** (`datetime`) --
  A [`datetime`](https://docs.python.org/3/library/datetime.html?highlight=datetime#datetime.datetime)
  object holding the creation timestamp for the event.
- **author** (`str`) --
  The username of the Discussion / Pull Request author.
  Can be `"deleted"` if the user has since been deleted.
- **summary** (`str`) --
  The summary of the commit.
- **oid** (`str`) --
  The OID / SHA of the commit, as a hexadecimal string.</paramsdesc><paramgroups>0</paramgroups></docstring>
A commit in a Pull Request.

Subclass of [DiscussionEvent](/docs/huggingface_hub/main/en/package_reference/community#huggingface_hub.DiscussionEvent).




</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.DiscussionTitleChange</name><anchor>huggingface_hub.DiscussionTitleChange</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/community.py#L298</source><parameters>[{"name": "id", "val": ": str"}, {"name": "type", "val": ": str"}, {"name": "created_at", "val": ": datetime"}, {"name": "author", "val": ": str"}, {"name": "_event", "val": ": dict"}, {"name": "old_title", "val": ": str"}, {"name": "new_title", "val": ": str"}]</parameters><paramsdesc>- **id** (`str`) --
  The ID of the event. A hexadecimal string.
- **type** (`str`) --
  The type of the event.
- **created_at** (`datetime`) --
  A [`datetime`](https://docs.python.org/3/library/datetime.html?highlight=datetime#datetime.datetime)
  object holding the creation timestamp for the event.
- **author** (`str`) --
  The username of the Discussion / Pull Request author.
  Can be `"deleted"` if the user has since been deleted.
- **old_title** (`str`) --
  The previous title for the Discussion / Pull Request.
- **new_title** (`str`) --
  The new title.</paramsdesc><paramgroups>0</paramgroups></docstring>
A rename event in a Discussion / Pull Request.

Subclass of [DiscussionEvent](/docs/huggingface_hub/main/en/package_reference/community#huggingface_hub.DiscussionEvent).




</div>

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/package_reference/community.md" />

### OAuth and FastAPI
https://huggingface.co/docs/huggingface_hub/main/package_reference/oauth.md

# OAuth and FastAPI

OAuth is an open standard for access delegation, commonly used to grant applications limited access to a user's information without exposing their credentials. Combined with FastAPI, it lets you build secure APIs where users log in through external identity providers such as Google or GitHub.
In a typical scenario:
- FastAPI defines the API endpoints and handles the HTTP requests.
- OAuth is integrated using libraries like `fastapi.security` or external tools like Authlib.
- When a user wants to log in, FastAPI redirects them to the OAuth provider's login page.
- After a successful login, the provider redirects back with a token.
- FastAPI verifies this token and uses it to authorize the user or fetch user profile data.

This approach helps avoid handling passwords directly and offloads identity management to trusted providers.

# Hugging Face OAuth Integration in FastAPI

This module provides tools to integrate Hugging Face OAuth into a FastAPI application. It enables user authentication with the Hugging Face platform, using mocked behavior for local development and the real OAuth flow when running in a Space.



## OAuth Overview

The `attach_huggingface_oauth` function adds login, logout, and callback endpoints to your FastAPI app. When used in a Space, it connects to the Hugging Face OAuth system. When used locally, it injects a mocked user. Learn more about [adding a Sign-In with HF option to your Space](https://huggingface.co/docs/hub/en/spaces-oauth).


### How to use it?

```python
from huggingface_hub import attach_huggingface_oauth, parse_huggingface_oauth
from fastapi import FastAPI, Request

app = FastAPI()
attach_huggingface_oauth(app)

@app.get("/")
def greet_json(request: Request):
    oauth_info = parse_huggingface_oauth(request)
    if oauth_info is None:
        return {"msg": "Not logged in!"}
    return {"msg": f"Hello, {oauth_info.user_info.preferred_username}!"}
```

> [!TIP]
> You might also be interested in [a practical example that demonstrates OAuth in action](https://huggingface.co/spaces/Wauplin/fastapi-oauth/blob/main/app.py).
> For a more comprehensive implementation, check out [medoidai/GiveBackGPT](https://huggingface.co/spaces/medoidai/GiveBackGPT) Space which implements HF OAuth in a full-scale application.


### attach_huggingface_oauth[[huggingface_hub.attach_huggingface_oauth]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.attach_huggingface_oauth</name><anchor>huggingface_hub.attach_huggingface_oauth</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_oauth.py#L124</source><parameters>[{"name": "app", "val": ": fastapi.FastAPI"}, {"name": "route_prefix", "val": ": str = '/'"}]</parameters></docstring>

Add OAuth endpoints to a FastAPI app to enable OAuth login with Hugging Face.

How to use:
- Call this method on your FastAPI app to add the OAuth endpoints.
- Inside your route handlers, call `parse_huggingface_oauth(request)` to retrieve the OAuth info.
- If the user is logged in, an [OAuthInfo](/docs/huggingface_hub/main/en/package_reference/oauth#huggingface_hub.OAuthInfo) object with the user's info is returned; otherwise `None` is returned.
- In your app, make sure to add links to `/oauth/huggingface/login` and `/oauth/huggingface/logout` for the user to log in and out.

<ExampleCodeBlock anchor="huggingface_hub.attach_huggingface_oauth.example">

Example:
```py
from huggingface_hub import attach_huggingface_oauth, parse_huggingface_oauth

# Create a FastAPI app
app = FastAPI()

# Add OAuth endpoints to the FastAPI app
attach_huggingface_oauth(app)

# Add a route that greets the user if they are logged in
@app.get("/")
def greet_json(request: Request):
    # Retrieve the OAuth info from the request
    oauth_info = parse_huggingface_oauth(request)  # e.g. OAuthInfo dataclass
    if oauth_info is None:
        return {"msg": "Not logged in!"}
    return {"msg": f"Hello, {oauth_info.user_info.preferred_username}!"}
```

</ExampleCodeBlock>


</div>

### parse_huggingface_oauth[[huggingface_hub.parse_huggingface_oauth]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.parse_huggingface_oauth</name><anchor>huggingface_hub.parse_huggingface_oauth</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_oauth.py#L191</source><parameters>[{"name": "request", "val": ": fastapi.Request"}]</parameters></docstring>

Returns the information from a logged-in user as a [OAuthInfo](/docs/huggingface_hub/main/en/package_reference/oauth#huggingface_hub.OAuthInfo) object.

For flexibility and future-proofing, this method is very lax in its parsing and does not raise errors.
Missing fields are set to `None` without a warning.

Returns `None` if the user is not logged in (no info in the session cookie).

See [attach_huggingface_oauth()](/docs/huggingface_hub/main/en/package_reference/oauth#huggingface_hub.attach_huggingface_oauth) for an example on how to use this method.


</div>

### OAuthOrgInfo[[huggingface_hub.OAuthOrgInfo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.OAuthOrgInfo</name><anchor>huggingface_hub.OAuthOrgInfo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_oauth.py#L23</source><parameters>[{"name": "sub", "val": ": str"}, {"name": "name", "val": ": str"}, {"name": "preferred_username", "val": ": str"}, {"name": "picture", "val": ": str"}, {"name": "is_enterprise", "val": ": bool"}, {"name": "can_pay", "val": ": typing.Optional[bool] = None"}, {"name": "role_in_org", "val": ": typing.Optional[str] = None"}, {"name": "security_restrictions", "val": ": typing.Optional[list[typing.Literal['ip', 'token-policy', 'mfa', 'sso']]] = None"}]</parameters><paramsdesc>- **sub** (`str`) --
  Unique identifier for the org. OpenID Connect field.
- **name** (`str`) --
  The org's full name. OpenID Connect field.
- **preferred_username** (`str`) --
  The org's username. OpenID Connect field.
- **picture** (`str`) --
  The org's profile picture URL. OpenID Connect field.
- **is_enterprise** (`bool`) --
  Whether the org is an enterprise org. Hugging Face field.
- **can_pay** (`Optional[bool]`, *optional*) --
  Whether the org has a payment method set up. Hugging Face field.
- **role_in_org** (`Optional[str]`, *optional*) --
  The user's role in the org. Hugging Face field.
- **security_restrictions** (`Optional[list[Literal["ip", "token-policy", "mfa", "sso"]]]`, *optional*) --
  Array of security restrictions that the user hasn't completed for this org. Possible values: "ip", "token-policy", "mfa", "sso". Hugging Face field.</paramsdesc><paramgroups>0</paramgroups></docstring>

Information about an organization linked to a user logged in with OAuth.




</div>

### OAuthUserInfo[[huggingface_hub.OAuthUserInfo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.OAuthUserInfo</name><anchor>huggingface_hub.OAuthUserInfo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_oauth.py#L57</source><parameters>[{"name": "sub", "val": ": str"}, {"name": "name", "val": ": str"}, {"name": "preferred_username", "val": ": str"}, {"name": "email_verified", "val": ": typing.Optional[bool]"}, {"name": "email", "val": ": typing.Optional[str]"}, {"name": "picture", "val": ": str"}, {"name": "profile", "val": ": str"}, {"name": "website", "val": ": typing.Optional[str]"}, {"name": "is_pro", "val": ": bool"}, {"name": "can_pay", "val": ": typing.Optional[bool]"}, {"name": "orgs", "val": ": typing.Optional[list[huggingface_hub._oauth.OAuthOrgInfo]]"}]</parameters><paramsdesc>- **sub** (`str`) --
  Unique identifier for the user, even in case of rename. OpenID Connect field.
- **name** (`str`) --
  The user's full name. OpenID Connect field.
- **preferred_username** (`str`) --
  The user's username. OpenID Connect field.
- **email_verified** (`Optional[bool]`, *optional*) --
  Indicates if the user's email is verified. OpenID Connect field.
- **email** (`Optional[str]`, *optional*) --
  The user's email address. OpenID Connect field.
- **picture** (`str`) --
  The user's profile picture URL. OpenID Connect field.
- **profile** (`str`) --
  The user's profile URL. OpenID Connect field.
- **website** (`Optional[str]`, *optional*) --
  The user's website URL. OpenID Connect field.
- **is_pro** (`bool`) --
  Whether the user is a pro user. Hugging Face field.
- **can_pay** (`Optional[bool]`, *optional*) --
  Whether the user has a payment method set up. Hugging Face field.
- **orgs** (`Optional[list[OAuthOrgInfo]]`, *optional*) --
  List of organizations the user is part of. Hugging Face field.</paramsdesc><paramgroups>0</paramgroups></docstring>

Information about a user logged in with OAuth.




</div>

### OAuthInfo[[huggingface_hub.OAuthInfo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.OAuthInfo</name><anchor>huggingface_hub.OAuthInfo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_oauth.py#L100</source><parameters>[{"name": "access_token", "val": ": str"}, {"name": "access_token_expires_at", "val": ": datetime"}, {"name": "user_info", "val": ": OAuthUserInfo"}, {"name": "state", "val": ": typing.Optional[str]"}, {"name": "scope", "val": ": str"}]</parameters><paramsdesc>- **access_token** (`str`) --
  The access token.
- **access_token_expires_at** (`datetime.datetime`) --
  The expiration date of the access token.
- **user_info** ([OAuthUserInfo](/docs/huggingface_hub/main/en/package_reference/oauth#huggingface_hub.OAuthUserInfo)) --
  The user information.
- **state** (`str`, *optional*) --
  State passed along in the original authorization request to the OAuth provider.
- **scope** (`str`) --
  Granted scope.</paramsdesc><paramgroups>0</paramgroups></docstring>

Information about the OAuth login.
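
Because the access token expires (see `access_token_expires_at`), applications usually check for expiry before reusing a stored token and re-run the OAuth flow when needed. A minimal sketch of such a check, using a stand-in dataclass rather than `OAuthInfo` itself (the field names match, everything else is illustrative):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


# Stand-in carrying the two OAuthInfo fields that matter here;
# this is illustrative, not the library's class.
@dataclass
class TokenInfo:
    access_token: str
    access_token_expires_at: datetime


def needs_refresh(info: TokenInfo, leeway_seconds: int = 30) -> bool:
    """True if the access token is expired, or will be within `leeway_seconds`."""
    cutoff = info.access_token_expires_at - timedelta(seconds=leeway_seconds)
    return datetime.now(timezone.utc) >= cutoff


fresh = TokenInfo("example-token", datetime.now(timezone.utc) + timedelta(hours=1))
print(needs_refresh(fresh))  # False
```

The small leeway avoids sending a token that expires mid-request.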




</div>

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/package_reference/oauth.md" />

### Filesystem API
https://huggingface.co/docs/huggingface_hub/main/package_reference/hf_file_system.md

# Filesystem API

The `HfFileSystem` class provides a pythonic file interface to the Hugging Face Hub based on [`fsspec`](https://filesystem-spec.readthedocs.io/en/latest/).

## HfFileSystem[[huggingface_hub.HfFileSystem]]

`HfFileSystem` is based on [fsspec](https://filesystem-spec.readthedocs.io/en/latest/), so it is compatible with most of the APIs that it offers. For more details, check out [our guide](../guides/hf_file_system) and fsspec's [API Reference](https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.spec.AbstractFileSystem).

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.HfFileSystem</name><anchor>huggingface_hub.HfFileSystem</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_file_system.py#L116</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **token** (`str` or `bool`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **endpoint** (`str`, *optional*) --
  Endpoint of the Hub. Defaults to <https://huggingface.co>.</paramsdesc><paramgroups>0</paramgroups></docstring>

Access a remote Hugging Face Hub repository as if it were a local file system.

> [!WARNING]
> [HfFileSystem](/docs/huggingface_hub/main/en/package_reference/hf_file_system#huggingface_hub.HfFileSystem) provides fsspec compatibility, which is useful for libraries that require it (e.g., reading Hugging Face datasets directly with `pandas`). However, it introduces additional overhead due to this compatibility layer. For better performance and reliability, it's recommended to use `HfApi` methods when possible.



<ExampleCodeBlock anchor="huggingface_hub.HfFileSystem.example">

Usage:

```python
>>> from huggingface_hub import HfFileSystem

>>> fs = HfFileSystem()

>>> # List files
>>> fs.glob("my-username/my-model/*.bin")
['my-username/my-model/pytorch_model.bin']
>>> fs.ls("datasets/my-username/my-dataset", detail=False)
['datasets/my-username/my-dataset/.gitattributes', 'datasets/my-username/my-dataset/README.md', 'datasets/my-username/my-dataset/data.json']

>>> # Read/write files
>>> with fs.open("my-username/my-model/pytorch_model.bin") as f:
...     data = f.read()
>>> with fs.open("my-username/my-model/pytorch_model.bin", "wb") as f:
...     f.write(data)
```

</ExampleCodeBlock>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__init__</name><anchor>huggingface_hub.HfFileSystem.__init__</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_file_system.py#L157</source><parameters>[{"name": "*args", "val": ""}, {"name": "endpoint", "val": ": typing.Optional[str] = None"}, {"name": "token", "val": ": typing.Union[bool, str, NoneType] = None"}, {"name": "block_size", "val": ": typing.Optional[int] = None"}, {"name": "**storage_options", "val": ""}]</parameters></docstring>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>cp_file</name><anchor>huggingface_hub.HfFileSystem.cp_file</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_file_system.py#L663</source><parameters>[{"name": "path1", "val": ": str"}, {"name": "path2", "val": ": str"}, {"name": "revision", "val": ": typing.Optional[str] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **path1** (`str`) --
  Source path to copy from.
- **path2** (`str`) --
  Destination path to copy to.
- **revision** (`str`, *optional*) --
  The git revision to copy from.</paramsdesc><paramgroups>0</paramgroups></docstring>

Copy a file within or between repositories.

> [!WARNING]
> Note: When possible, use `HfApi.upload_file()` for better performance.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>exists</name><anchor>huggingface_hub.HfFileSystem.exists</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_file_system.py#L833</source><parameters>[{"name": "path", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **path** (`str`) --
  Path to check.</paramsdesc><paramgroups>0</paramgroups><rettype>`bool`</rettype><retdesc>True if file exists, False otherwise.</retdesc></docstring>

Check if a file exists.

For more details, refer to [fsspec documentation](https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.spec.AbstractFileSystem.exists).

> [!WARNING]
> Note: When possible, use `HfApi.file_exists()` for better performance.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>find</name><anchor>huggingface_hub.HfFileSystem.find</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_file_system.py#L600</source><parameters>[{"name": "path", "val": ": str"}, {"name": "maxdepth", "val": ": typing.Optional[int] = None"}, {"name": "withdirs", "val": ": bool = False"}, {"name": "detail", "val": ": bool = False"}, {"name": "refresh", "val": ": bool = False"}, {"name": "revision", "val": ": typing.Optional[str] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **path** (`str`) --
  Root path to list files from.
- **maxdepth** (`int`, *optional*) --
  Maximum depth to descend into subdirectories.
- **withdirs** (`bool`, *optional*) --
  Include directory paths in the output. Defaults to False.
- **detail** (`bool`, *optional*) --
  If True, returns a dict mapping paths to file information. Defaults to False.
- **refresh** (`bool`, *optional*) --
  If True, bypass the cache and fetch the latest data. Defaults to False.
- **revision** (`str`, *optional*) --
  The git revision to list from.</paramsdesc><paramgroups>0</paramgroups><rettype>`Union[list[str], dict[str, dict[str, Any]]]`</rettype><retdesc>List of paths or dict of file information.</retdesc></docstring>

List all files below path.

For more details, refer to [fsspec documentation](https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.spec.AbstractFileSystem.find).








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_file</name><anchor>huggingface_hub.HfFileSystem.get_file</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_file_system.py#L917</source><parameters>[{"name": "rpath", "val": ""}, {"name": "lpath", "val": ""}, {"name": "callback", "val": " = <fsspec.callbacks.NoOpCallback object>"}, {"name": "outfile", "val": " = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **rpath** (`str`) --
  Remote path to download from.
- **lpath** (`str`) --
  Local path to download to.
- **callback** (`Callback`, *optional*) --
  Optional callback to track download progress. Defaults to no callback.
- **outfile** (`IO`, *optional*) --
  Optional file-like object to write to. If provided, `lpath` is ignored.</paramsdesc><paramgroups>0</paramgroups></docstring>

Copy single remote file to local.

> [!WARNING]
> Note: When possible, use `HfApi.hf_hub_download()` for better performance.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>glob</name><anchor>huggingface_hub.HfFileSystem.glob</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_file_system.py#L584</source><parameters>[{"name": "path", "val": ": str"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **path** (`str`) --
  Path pattern to match.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[str]`</rettype><retdesc>List of paths matching the pattern.</retdesc></docstring>

Find files by glob-matching.

For more details, refer to [fsspec documentation](https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.spec.AbstractFileSystem.glob).
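
The pattern syntax is standard Unix-style globbing as implemented by fsspec. As a rough, offline illustration of how a pattern selects paths from a repo listing, the stdlib `fnmatch` module behaves similarly (with the caveat that `fnmatch`'s `*` also crosses `/`, whereas `fs.glob` treats path separators specially; the file list below is hypothetical):

```python
from fnmatch import fnmatch

# Hypothetical repo listing, as fs.ls(..., detail=False) might return it.
paths = [
    "my-username/my-model/pytorch_model.bin",
    "my-username/my-model/config.json",
    "my-username/my-model/tokenizer.json",
]

# fs.glob("my-username/my-model/*.bin") selects paths much like
# Unix-style pattern matching does:
matches = [p for p in paths if fnmatch(p, "my-username/my-model/*.bin")]
print(matches)  # ['my-username/my-model/pytorch_model.bin']
```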








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>info</name><anchor>huggingface_hub.HfFileSystem.info</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_file_system.py#L735</source><parameters>[{"name": "path", "val": ": str"}, {"name": "refresh", "val": ": bool = False"}, {"name": "revision", "val": ": typing.Optional[str] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **path** (`str`) --
  Path to get info for.
- **refresh** (`bool`, *optional*) --
  If True, bypass the cache and fetch the latest data. Defaults to False.
- **revision** (`str`, *optional*) --
  The git revision to get info from.</paramsdesc><paramgroups>0</paramgroups><rettype>`dict[str, Any]`</rettype><retdesc>Dictionary containing file information (type, size, commit info, etc.).</retdesc></docstring>

Get information about a file or directory.

For more details, refer to [fsspec documentation](https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.spec.AbstractFileSystem.info).

> [!WARNING]
> Note: When possible, use `HfApi.get_paths_info()` or `HfApi.repo_info()` for better performance.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>invalidate_cache</name><anchor>huggingface_hub.HfFileSystem.invalidate_cache</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_file_system.py#L305</source><parameters>[{"name": "path", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **path** (`str`, *optional*) --
  Path to clear from cache. If not provided, clear the entire cache.</paramsdesc><paramgroups>0</paramgroups></docstring>

Clear the cache for a given path.

For more details, refer to [fsspec documentation](https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.spec.AbstractFileSystem.invalidate_cache).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>isdir</name><anchor>huggingface_hub.HfFileSystem.isdir</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_file_system.py#L858</source><parameters>[{"name": "path", "val": ""}]</parameters><paramsdesc>- **path** (`str`) --
  Path to check.</paramsdesc><paramgroups>0</paramgroups><rettype>`bool`</rettype><retdesc>True if path is a directory, False otherwise.</retdesc></docstring>

Check if a path is a directory.

For more details, refer to [fsspec documentation](https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.spec.AbstractFileSystem.isdir).








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>isfile</name><anchor>huggingface_hub.HfFileSystem.isfile</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_file_system.py#L876</source><parameters>[{"name": "path", "val": ""}]</parameters><paramsdesc>- **path** (`str`) --
  Path to check.</paramsdesc><paramgroups>0</paramgroups><rettype>`bool`</rettype><retdesc>True if path is a file, False otherwise.</retdesc></docstring>

Check if a path is a file.

For more details, refer to [fsspec documentation](https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.spec.AbstractFileSystem.isfile).








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>ls</name><anchor>huggingface_hub.HfFileSystem.ls</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_file_system.py#L410</source><parameters>[{"name": "path", "val": ": str"}, {"name": "detail", "val": ": bool = True"}, {"name": "refresh", "val": ": bool = False"}, {"name": "revision", "val": ": typing.Optional[str] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **path** (`str`) --
  Path to the directory.
- **detail** (`bool`, *optional*) --
  If True, returns a list of dictionaries containing file information. If False,
  returns a list of file paths. Defaults to True.
- **refresh** (`bool`, *optional*) --
  If True, bypass the cache and fetch the latest data. Defaults to False.
- **revision** (`str`, *optional*) --
  The git revision to list from.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[Union[str, dict[str, Any]]]`</rettype><retdesc>List of file paths (if detail=False) or list of file information
dictionaries (if detail=True).</retdesc></docstring>

List the contents of a directory.

For more details, refer to [fsspec documentation](https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.spec.AbstractFileSystem.ls).

> [!WARNING]
> Note: When possible, use `HfApi.list_repo_tree()` for better performance.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>modified</name><anchor>huggingface_hub.HfFileSystem.modified</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_file_system.py#L719</source><parameters>[{"name": "path", "val": ": str"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **path** (`str`) --
  Path to the file.</paramsdesc><paramgroups>0</paramgroups><rettype>`datetime`</rettype><retdesc>Last commit date of the file.</retdesc></docstring>

Get the last modified time of a file.

For more details, refer to [fsspec documentation](https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.spec.AbstractFileSystem.modified).








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>resolve_path</name><anchor>huggingface_hub.HfFileSystem.resolve_path</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_file_system.py#L210</source><parameters>[{"name": "path", "val": ": str"}, {"name": "revision", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **path** (`str`) --
  Path to resolve.
- **revision** (`str`, *optional*) --
  The revision of the repo to resolve. Defaults to the revision specified in the path.</paramsdesc><paramgroups>0</paramgroups><rettype>`HfFileSystemResolvedPath`</rettype><retdesc>Resolved path information containing `repo_type`, `repo_id`, `revision` and `path_in_repo`.</retdesc><raises>- ``ValueError`` -- 
  If path contains conflicting revision information.
- ``NotImplementedError`` -- 
  If trying to list repositories.</raises><raisederrors>``ValueError`` or ``NotImplementedError``</raisederrors></docstring>

Resolve a Hugging Face file system path into its components.
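
The path grammar combines an optional repo-type prefix (`datasets/` or `spaces/`; model repos have none), a `repo_id`, an optional `@revision` suffix, and a path inside the repo. A simplified, offline sketch of these resolution rules (the real method also validates the repo and revision against the Hub and handles additional edge cases; the names and paths here are hypothetical):

```python
from typing import NamedTuple, Optional


class ResolvedPathSketch(NamedTuple):
    repo_type: str
    repo_id: str
    revision: str
    path_in_repo: str


def resolve_path_sketch(path: str, revision: Optional[str] = None) -> ResolvedPathSketch:
    parts = path.strip("/").split("/")
    # Optional repo-type prefix: "datasets/" or "spaces/"; models have none.
    if parts[0] in ("datasets", "spaces"):
        repo_type, parts = parts[0][:-1], parts[1:]
    else:
        repo_type = "model"
    # The repo_id is "<owner>/<name>", optionally suffixed with "@<revision>".
    repo_id, parts = "/".join(parts[:2]), parts[2:]
    if "@" in repo_id:
        repo_id, path_revision = repo_id.split("@", 1)
        if revision is not None and revision != path_revision:
            raise ValueError("Conflicting revision in path and argument")
        revision = path_revision
    return ResolvedPathSketch(repo_type, repo_id, revision or "main", "/".join(parts))


print(resolve_path_sketch("datasets/my-username/my-dataset@v1/data.json"))
```

The example resolves to repo type `dataset`, repo id `my-username/my-dataset`, revision `v1` and in-repo path `data.json`.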












</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>rm</name><anchor>huggingface_hub.HfFileSystem.rm</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_file_system.py#L364</source><parameters>[{"name": "path", "val": ": str"}, {"name": "recursive", "val": ": bool = False"}, {"name": "maxdepth", "val": ": typing.Optional[int] = None"}, {"name": "revision", "val": ": typing.Optional[str] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **path** (`str`) --
  Path to delete.
- **recursive** (`bool`, *optional*) --
  If True, delete directory and all its contents. Defaults to False.
- **maxdepth** (`int`, *optional*) --
  Maximum number of subdirectories to visit when deleting recursively.
- **revision** (`str`, *optional*) --
  The git revision to delete from.</paramsdesc><paramgroups>0</paramgroups></docstring>

Delete files from a repository.

For more details, refer to [fsspec documentation](https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.spec.AbstractFileSystem.rm).

> [!WARNING]
> Note: When possible, use `HfApi.delete_file()` for better performance.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>url</name><anchor>huggingface_hub.HfFileSystem.url</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_file_system.py#L894</source><parameters>[{"name": "path", "val": ": str"}]</parameters><paramsdesc>- **path** (`str`) --
  Path to get URL for.</paramsdesc><paramgroups>0</paramgroups><rettype>`str`</rettype><retdesc>HTTP URL to access the file or directory on the Hub.</retdesc></docstring>

Get the HTTP URL of the given path.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>walk</name><anchor>huggingface_hub.HfFileSystem.walk</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_file_system.py#L568</source><parameters>[{"name": "path", "val": ": str"}, {"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **path** (`str`) --
  Root path to list files from.</paramsdesc><paramgroups>0</paramgroups><rettype>`Iterator[tuple[str, list[str], list[str]]]`</rettype><retdesc>An iterator of (path, list of directory names, list of file names) tuples.</retdesc></docstring>

Return all files below the given path.

For more details, refer to [fsspec documentation](https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.spec.AbstractFileSystem.walk).








</div></div>

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/package_reference/hf_file_system.md" />

### Inference Endpoints
https://huggingface.co/docs/huggingface_hub/main/package_reference/inference_endpoints.md

# Inference Endpoints

Inference Endpoints provides a secure production solution to easily deploy models on a dedicated and autoscaling infrastructure managed by Hugging Face. An Inference Endpoint is built from a model from the [Hub](https://huggingface.co/models). This page is a reference for `huggingface_hub`'s integration with Inference Endpoints. For more information about the Inference Endpoints product, check out its [official documentation](https://huggingface.co/docs/inference-endpoints/index).

> [!TIP]
> Check out the [related guide](../guides/inference_endpoints) to learn how to use `huggingface_hub` to manage your Inference Endpoints programmatically.

Inference Endpoints can be fully managed via API. The endpoints are documented with [Swagger](https://api.endpoints.huggingface.cloud/). The [InferenceEndpoint](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint) class is a simple wrapper built on top of this API.

## Methods

A subset of the Inference Endpoint features are implemented in [HfApi](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi):

- [get_inference_endpoint()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.get_inference_endpoint) and [list_inference_endpoints()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.list_inference_endpoints) to get information about your Inference Endpoints
- [create_inference_endpoint()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_inference_endpoint), [update_inference_endpoint()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.update_inference_endpoint) and [delete_inference_endpoint()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.delete_inference_endpoint) to deploy and manage Inference Endpoints
- [pause_inference_endpoint()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.pause_inference_endpoint) and [resume_inference_endpoint()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.resume_inference_endpoint) to pause and resume an Inference Endpoint
- [scale_to_zero_inference_endpoint()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.scale_to_zero_inference_endpoint) to manually scale an Endpoint to 0 replicas
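
After resuming or creating an Endpoint, code typically polls its status until it is ready, which is what `InferenceEndpoint.wait()` does for you. A minimal sketch of such a polling loop (`wait_until_running` is illustrative, not the library's implementation; the status strings are assumptions based on `InferenceEndpointStatus`):

```python
import time


def wait_until_running(fetch_status, timeout=300, interval=5, sleep=time.sleep):
    """Poll `fetch_status()` until it reports "running" or `timeout` elapses.

    `fetch_status` stands in for re-fetching the endpoint from the server.
    """
    deadline = time.monotonic() + timeout
    while True:
        status = fetch_status()
        if status == "running":
            return status
        if status == "failed":
            raise RuntimeError("endpoint failed to deploy")
        if time.monotonic() >= deadline:
            raise TimeoutError(f"not running after {timeout}s (last status: {status!r})")
        sleep(interval)


# Demo against a stubbed status sequence instead of live API calls:
statuses = iter(["initializing", "pending", "running"])
print(wait_until_running(lambda: next(statuses), timeout=10, interval=0, sleep=lambda s: None))
# running
```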

## InferenceEndpoint[[huggingface_hub.InferenceEndpoint]]

The main dataclass is [InferenceEndpoint](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint). It contains information about a deployed `InferenceEndpoint`, including its configuration and current state. Once deployed, you can run inference on the Endpoint using the [InferenceEndpoint.client](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.client) and [InferenceEndpoint.async_client](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.async_client) properties that respectively return an [InferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient) and an [AsyncInferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.AsyncInferenceClient) object.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.InferenceEndpoint</name><anchor>huggingface_hub.InferenceEndpoint</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_inference_endpoints.py#L38</source><parameters>[{"name": "namespace", "val": ": str"}, {"name": "raw", "val": ": dict"}, {"name": "_token", "val": ": typing.Union[str, bool, NoneType]"}, {"name": "_api", "val": ": HfApi"}]</parameters><paramsdesc>- **name** (`str`) --
  The unique name of the Inference Endpoint.
- **namespace** (`str`) --
  The namespace where the Inference Endpoint is located.
- **repository** (`str`) --
  The name of the model repository deployed on this Inference Endpoint.
- **status** ([InferenceEndpointStatus](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpointStatus)) --
  The current status of the Inference Endpoint.
- **url** (`str`, *optional*) --
  The URL of the Inference Endpoint, if available. Only a deployed Inference Endpoint will have a URL.
- **framework** (`str`) --
  The machine learning framework used for the model.
- **revision** (`str`) --
  The specific model revision deployed on the Inference Endpoint.
- **task** (`str`) --
  The task associated with the deployed model.
- **created_at** (`datetime.datetime`) --
  The timestamp when the Inference Endpoint was created.
- **updated_at** (`datetime.datetime`) --
  The timestamp of the last update of the Inference Endpoint.
- **type** ([InferenceEndpointType](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpointType)) --
  The type of the Inference Endpoint (public, protected, private).
- **raw** (`dict`) --
  The raw dictionary data returned from the API.
- **token** (`str` or `bool`, *optional*) --
  Authentication token for the Inference Endpoint, if set when requesting the API. Will default to the
  locally saved token if not provided. Pass `token=False` if you don't want to send your token to the server.</paramsdesc><paramgroups>0</paramgroups></docstring>

Contains information about a deployed Inference Endpoint.



<ExampleCodeBlock anchor="huggingface_hub.InferenceEndpoint.example">

Example:
```python
>>> from huggingface_hub import get_inference_endpoint
>>> endpoint = get_inference_endpoint("my-text-to-image")
>>> endpoint
InferenceEndpoint(name='my-text-to-image', ...)

# Get status
>>> endpoint.status
'running'
>>> endpoint.url
'https://my-text-to-image.region.vendor.endpoints.huggingface.cloud'

# Run inference
>>> endpoint.client.text_to_image(...)

# Pause endpoint to save $$$
>>> endpoint.pause()

# ...
# Resume and wait for deployment
>>> endpoint.resume()
>>> endpoint.wait()
>>> endpoint.client.text_to_image(...)
```

</ExampleCodeBlock>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_raw</name><anchor>huggingface_hub.InferenceEndpoint.from_raw</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_inference_endpoints.py#L121</source><parameters>[{"name": "raw", "val": ": dict"}, {"name": "namespace", "val": ": str"}, {"name": "token", "val": ": typing.Union[str, bool, NoneType] = None"}, {"name": "api", "val": ": typing.Optional[ForwardRef('HfApi')] = None"}]</parameters></docstring>
Initialize object from raw dictionary.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>client</name><anchor>huggingface_hub.InferenceEndpoint.client</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_inference_endpoints.py#L140</source><parameters>[]</parameters><rettype>[InferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient)</rettype><retdesc>an inference client pointing to the deployed endpoint.</retdesc><raises>- [InferenceEndpointError](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpointError) -- If the Inference Endpoint is not yet deployed.</raises><raisederrors>[InferenceEndpointError](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpointError)</raisederrors></docstring>
Returns a client to make predictions on this Inference Endpoint.










</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>async_client</name><anchor>huggingface_hub.InferenceEndpoint.async_client</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_inference_endpoints.py#L162</source><parameters>[]</parameters><rettype>[AsyncInferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.AsyncInferenceClient)</rettype><retdesc>an asyncio-compatible inference client pointing to the deployed endpoint.</retdesc><raises>- [InferenceEndpointError](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpointError) -- If the Inference Endpoint is not yet deployed.</raises><raisederrors>[InferenceEndpointError](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpointError)</raisederrors></docstring>
Returns an asyncio-compatible client to make predictions on this Inference Endpoint.










</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete</name><anchor>huggingface_hub.InferenceEndpoint.delete</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_inference_endpoints.py#L385</source><parameters>[]</parameters></docstring>
Delete the Inference Endpoint.

This operation is not reversible. If you don't want to be charged for an Inference Endpoint, it is preferable
to pause it with [InferenceEndpoint.pause()](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.pause) or scale it to zero with [InferenceEndpoint.scale_to_zero()](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.scale_to_zero).

This is an alias for [HfApi.delete_inference_endpoint()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.delete_inference_endpoint).


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fetch</name><anchor>huggingface_hub.InferenceEndpoint.fetch</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_inference_endpoints.py#L237</source><parameters>[]</parameters><rettype>[InferenceEndpoint](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint)</rettype><retdesc>the same Inference Endpoint, mutated in place with the latest data.</retdesc></docstring>
Fetch latest information about the Inference Endpoint.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>pause</name><anchor>huggingface_hub.InferenceEndpoint.pause</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_inference_endpoints.py#L328</source><parameters>[]</parameters><rettype>[InferenceEndpoint](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint)</rettype><retdesc>the same Inference Endpoint, mutated in place with the latest data.</retdesc></docstring>
Pause the Inference Endpoint.

A paused Inference Endpoint will not be charged. It can be resumed at any time using [InferenceEndpoint.resume()](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.resume).
This is different from scaling the Inference Endpoint to zero with [InferenceEndpoint.scale_to_zero()](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.scale_to_zero): a scaled-to-zero
Endpoint is automatically restarted when a request is made to it.

This is an alias for [HfApi.pause_inference_endpoint()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.pause_inference_endpoint). The current object is mutated in place with the
latest data from the server.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>resume</name><anchor>huggingface_hub.InferenceEndpoint.resume</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_inference_endpoints.py#L346</source><parameters>[{"name": "running_ok", "val": ": bool = True"}]</parameters><paramsdesc>- **running_ok** (`bool`, *optional*) --
  If `True`, the method will not raise an error if the Inference Endpoint is already running. Defaults to
  `True`.</paramsdesc><paramgroups>0</paramgroups><rettype>[InferenceEndpoint](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint)</rettype><retdesc>the same Inference Endpoint, mutated in place with the latest data.</retdesc></docstring>
Resume the Inference Endpoint.

This is an alias for [HfApi.resume_inference_endpoint()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.resume_inference_endpoint). The current object is mutated in place with the
latest data from the server.
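A minimal sketch (the helper name is illustrative) of resuming an endpoint, relying on the `running_ok` default so the call is idempotent:

```python
from huggingface_hub import get_inference_endpoint

def resume_endpoint(name, token=None):
    # running_ok=True (the default) makes this a no-op rather than an error
    # if the endpoint is already running.
    endpoint = get_inference_endpoint(name, token=token)
    endpoint.resume(running_ok=True)
    return endpoint.status
```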








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_to_zero</name><anchor>huggingface_hub.InferenceEndpoint.scale_to_zero</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_inference_endpoints.py#L367</source><parameters>[]</parameters><rettype>[InferenceEndpoint](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint)</rettype><retdesc>the same Inference Endpoint, mutated in place with the latest data.</retdesc></docstring>
Scale Inference Endpoint to zero.

An Inference Endpoint scaled to zero will not be charged. It will be resumed on the next request to it, with a
cold start delay. This is different from pausing the Inference Endpoint with [InferenceEndpoint.pause()](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.pause), which
requires a manual resume with [InferenceEndpoint.resume()](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.resume).

This is an alias for [HfApi.scale_to_zero_inference_endpoint()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.scale_to_zero_inference_endpoint). The current object is mutated in place with the
latest data from the server.
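As a sketch (helper name illustrative), scaling an endpoint to zero so it idles for free but wakes on the next request:

```python
from huggingface_hub import get_inference_endpoint

def idle_endpoint(name, token=None):
    # Unlike pause(), a scaled-to-zero endpoint restarts automatically on the
    # next request (after a cold-start delay) and is not billed while idle.
    endpoint = get_inference_endpoint(name, token=token)
    endpoint.scale_to_zero()
    return endpoint.status
```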






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>update</name><anchor>huggingface_hub.InferenceEndpoint.update</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_inference_endpoints.py#L248</source><parameters>[{"name": "accelerator", "val": ": typing.Optional[str] = None"}, {"name": "instance_size", "val": ": typing.Optional[str] = None"}, {"name": "instance_type", "val": ": typing.Optional[str] = None"}, {"name": "min_replica", "val": ": typing.Optional[int] = None"}, {"name": "max_replica", "val": ": typing.Optional[int] = None"}, {"name": "scale_to_zero_timeout", "val": ": typing.Optional[int] = None"}, {"name": "repository", "val": ": typing.Optional[str] = None"}, {"name": "framework", "val": ": typing.Optional[str] = None"}, {"name": "revision", "val": ": typing.Optional[str] = None"}, {"name": "task", "val": ": typing.Optional[str] = None"}, {"name": "custom_image", "val": ": typing.Optional[dict] = None"}, {"name": "secrets", "val": ": typing.Optional[dict[str, str]] = None"}]</parameters><paramsdesc>- **accelerator** (`str`, *optional*) --
  The hardware accelerator to be used for inference (e.g. `"cpu"`).
- **instance_size** (`str`, *optional*) --
  The size or type of the instance to be used for hosting the model (e.g. `"x4"`).
- **instance_type** (`str`, *optional*) --
  The cloud instance type where the Inference Endpoint will be deployed (e.g. `"intel-icl"`).
- **min_replica** (`int`, *optional*) --
  The minimum number of replicas (instances) to keep running for the Inference Endpoint.
- **max_replica** (`int`, *optional*) --
  The maximum number of replicas (instances) to scale to for the Inference Endpoint.
- **scale_to_zero_timeout** (`int`, *optional*) --
  The duration in minutes before an inactive endpoint is scaled to zero.

- **repository** (`str`, *optional*) --
  The name of the model repository associated with the Inference Endpoint (e.g. `"gpt2"`).
- **framework** (`str`, *optional*) --
  The machine learning framework used for the model (e.g. `"custom"`).
- **revision** (`str`, *optional*) --
  The specific model revision to deploy on the Inference Endpoint (e.g. `"6c0e6080953db56375760c0471a8c5f2929baf11"`).
- **task** (`str`, *optional*) --
  The task on which to deploy the model (e.g. `"text-classification"`).
- **custom_image** (`dict`, *optional*) --
  A custom Docker image to use for the Inference Endpoint. This is useful if you want to deploy an
  Inference Endpoint running on the `text-generation-inference` (TGI) framework (see examples).
- **secrets** (`dict[str, str]`, *optional*) --
  Secret values to inject in the container environment.</paramsdesc><paramgroups>0</paramgroups><rettype>[InferenceEndpoint](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint)</rettype><retdesc>the same Inference Endpoint, mutated in place with the latest data.</retdesc></docstring>
Update the Inference Endpoint.

This method allows updating the compute configuration, the deployed model, or both. All arguments are
optional, but at least one must be provided.

This is an alias for [HfApi.update_inference_endpoint()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.update_inference_endpoint). The current object is mutated in place with the
latest data from the server.
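A minimal sketch (helper name illustrative) of a partial update that only changes the replica bounds:

```python
from huggingface_hub import get_inference_endpoint

def scale_replicas(name, min_replica, max_replica, token=None):
    # Only the fields passed here are changed; every other setting
    # (model, hardware, etc.) keeps its current value.
    endpoint = get_inference_endpoint(name, token=token)
    endpoint.update(min_replica=min_replica, max_replica=max_replica)
    return endpoint
```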








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>wait</name><anchor>huggingface_hub.InferenceEndpoint.wait</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_inference_endpoints.py#L184</source><parameters>[{"name": "timeout", "val": ": typing.Optional[int] = None"}, {"name": "refresh_every", "val": ": int = 5"}]</parameters><paramsdesc>- **timeout** (`int`, *optional*) --
  The maximum time to wait for the Inference Endpoint to be deployed, in seconds. If `None`, will wait
  indefinitely.
- **refresh_every** (`int`, *optional*) --
  The time to wait between each fetch of the Inference Endpoint status, in seconds. Defaults to 5s.</paramsdesc><paramgroups>0</paramgroups><rettype>[InferenceEndpoint](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint)</rettype><retdesc>the same Inference Endpoint, mutated in place with the latest data.</retdesc><raises>- [InferenceEndpointError](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpointError) -- 
  If the Inference Endpoint ended up in a failed state.
- `InferenceEndpointTimeoutError` -- 
  If the Inference Endpoint is not deployed after `timeout` seconds.</raises><raisederrors>[InferenceEndpointError](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpointError) or `InferenceEndpointTimeoutError`</raisederrors></docstring>
Wait for the Inference Endpoint to be deployed.

Information from the server will be fetched every `refresh_every` seconds (5 by default). If the Inference
Endpoint is not deployed after `timeout` seconds, an `InferenceEndpointTimeoutError` will be raised. The
[InferenceEndpoint](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint) will be mutated in place with the latest data.
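As a sketch (helper name and polling interval are illustrative), waiting for deployment with a bounded timeout:

```python
from huggingface_hub import get_inference_endpoint
from huggingface_hub.errors import InferenceEndpointTimeoutError

def wait_until_ready(name, timeout=600, token=None):
    # Poll the endpoint status every 10s, for at most `timeout` seconds.
    endpoint = get_inference_endpoint(name, token=token)
    try:
        endpoint.wait(timeout=timeout, refresh_every=10)
    except InferenceEndpointTimeoutError:
        return None  # still deploying after `timeout` seconds
    return endpoint.url  # deployed and ready to serve requests
```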












</div></div>

## InferenceEndpointStatus[[huggingface_hub.InferenceEndpointStatus]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.InferenceEndpointStatus</name><anchor>huggingface_hub.InferenceEndpointStatus</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_inference_endpoints.py#L20</source><parameters>[{"name": "value", "val": ""}, {"name": "names", "val": " = None"}, {"name": "module", "val": " = None"}, {"name": "qualname", "val": " = None"}, {"name": "type", "val": " = None"}, {"name": "start", "val": " = 1"}]</parameters></docstring>
An enumeration.

</div>

## InferenceEndpointType[[huggingface_hub.InferenceEndpointType]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.InferenceEndpointType</name><anchor>huggingface_hub.InferenceEndpointType</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_inference_endpoints.py#L31</source><parameters>[{"name": "value", "val": ""}, {"name": "names", "val": " = None"}, {"name": "module", "val": " = None"}, {"name": "qualname", "val": " = None"}, {"name": "type", "val": " = None"}, {"name": "start", "val": " = 1"}]</parameters></docstring>
An enumeration.

</div>

## InferenceEndpointError[[huggingface_hub.InferenceEndpointError]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.InferenceEndpointError</name><anchor>huggingface_hub.InferenceEndpointError</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L102</source><parameters>""</parameters></docstring>
Generic exception when dealing with Inference Endpoints.

</div>

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/package_reference/inference_endpoints.md" />

### Authentication
https://huggingface.co/docs/huggingface_hub/main/package_reference/authentication.md

# Authentication

The `huggingface_hub` library allows users to programmatically manage authentication to the Hub. This includes logging in, logging out, switching between tokens, and listing available tokens.

For more details about authentication, check out [this section](../quick-start#authentication).

## login[[huggingface_hub.login]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.login</name><anchor>huggingface_hub.login</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_login.py#L59</source><parameters>[{"name": "token", "val": ": typing.Optional[str] = None"}, {"name": "add_to_git_credential", "val": ": bool = False"}, {"name": "skip_if_logged_in", "val": ": bool = False"}]</parameters><paramsdesc>- **token** (`str`, *optional*) --
  User access token to generate from https://huggingface.co/settings/token.
- **add_to_git_credential** (`bool`, defaults to `False`) --
  If `True`, token will be set as git credential. If no git credential helper
  is configured, a warning will be displayed to the user. If `token` is `None`,
  the value of `add_to_git_credential` is ignored and will be prompted again
  to the end user.
- **skip_if_logged_in** (`bool`, defaults to `False`) --
  If `True`, do not prompt for token if user is already logged in.</paramsdesc><paramgroups>0</paramgroups><raises>- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If an organization token is passed. Only personal account tokens are valid
  to log in.
- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If token is invalid.
- [`ImportError`](https://docs.python.org/3/library/exceptions.html#ImportError) -- 
  If running in a notebook but `ipywidgets` is not installed.</raises><raisederrors>``ValueError`` or ``ImportError``</raisederrors></docstring>
Log the machine in to access the Hub.

The `token` is persisted in cache and, if requested, set as a git credential. Once done, the machine
is logged in and the access token will be available across all `huggingface_hub`
components. If `token` is not provided, the user is prompted for it, either with
a widget (in a notebook) or via the terminal.

To log in from outside a script, you can also use `hf auth login`, a
CLI command that wraps [login()](/docs/huggingface_hub/main/en/package_reference/authentication#huggingface_hub.login).

> [!TIP]
> [login()](/docs/huggingface_hub/main/en/package_reference/authentication#huggingface_hub.login) is a drop-in replacement method for [notebook_login()](/docs/huggingface_hub/main/en/package_reference/authentication#huggingface_hub.notebook_login) as it wraps and
> extends its capabilities.

> [!TIP]
> When the token is not passed, [login()](/docs/huggingface_hub/main/en/package_reference/authentication#huggingface_hub.login) will automatically detect whether the script runs
> in a notebook or not. However, this detection might not be accurate due to the
> variety of notebooks that exist nowadays. If that is the case, you can always force
> the UI by using [notebook_login()](/docs/huggingface_hub/main/en/package_reference/authentication#huggingface_hub.notebook_login) or [interpreter_login()](/docs/huggingface_hub/main/en/package_reference/authentication#huggingface_hub.interpreter_login).
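A minimal sketch (the helper name is illustrative) of a non-interactive login, using `whoami()` from the same library to confirm the authenticated account:

```python
from huggingface_hub import login, whoami

def login_non_interactive(token):
    # Passing the token explicitly skips the interactive prompt, which is
    # convenient in CI or other non-interactive environments.
    login(token=token)
    return whoami()["name"]  # username of the now-authenticated account
```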








</div>

## interpreter_login[[huggingface_hub.interpreter_login]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.interpreter_login</name><anchor>huggingface_hub.interpreter_login</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_login.py#L231</source><parameters>[{"name": "skip_if_logged_in", "val": ": bool = False"}]</parameters><paramsdesc>- **skip_if_logged_in** (`bool`, defaults to `False`) --
  If `True`, do not prompt for token if user is already logged in.</paramsdesc><paramgroups>0</paramgroups></docstring>

Displays a prompt to log in to the HF website and store the token.

This is equivalent to [login()](/docs/huggingface_hub/main/en/package_reference/authentication#huggingface_hub.login) without passing a token when not run in a notebook.
[interpreter_login()](/docs/huggingface_hub/main/en/package_reference/authentication#huggingface_hub.interpreter_login) is useful if you want to force the use of the terminal prompt
instead of a notebook widget.

For more details, see [login()](/docs/huggingface_hub/main/en/package_reference/authentication#huggingface_hub.login).




</div>

## notebook_login[[huggingface_hub.notebook_login]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.notebook_login</name><anchor>huggingface_hub.notebook_login</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_login.py#L293</source><parameters>[{"name": "skip_if_logged_in", "val": ": bool = False"}]</parameters><paramsdesc>- **skip_if_logged_in** (`bool`, defaults to `False`) --
  If `True`, do not prompt for token if user is already logged in.</paramsdesc><paramgroups>0</paramgroups></docstring>

Displays a widget to log in to the HF website and store the token.

This is equivalent to [login()](/docs/huggingface_hub/main/en/package_reference/authentication#huggingface_hub.login) without passing a token when run in a notebook.
[notebook_login()](/docs/huggingface_hub/main/en/package_reference/authentication#huggingface_hub.notebook_login) is useful if you want to force the use of the notebook widget
instead of a prompt in the terminal.

For more details, see [login()](/docs/huggingface_hub/main/en/package_reference/authentication#huggingface_hub.login).




</div>

## logout[[huggingface_hub.logout]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.logout</name><anchor>huggingface_hub.logout</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_login.py#L119</source><parameters>[{"name": "token_name", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **token_name** (`str`, *optional*) --
  Name of the access token to logout from. If `None`, will log out from all saved access tokens.</paramsdesc><paramgroups>0</paramgroups><raises>- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If the access token name is not found.</raises><raisederrors>``ValueError``</raisederrors></docstring>
Log the machine out of the Hub.

The token is deleted from the machine and removed from the git credentials.
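As a sketch (helper name illustrative), removing every saved token at once:

```python
from huggingface_hub import logout

def logout_everywhere():
    # token_name=None removes every saved access token from the machine.
    logout(token_name=None)
```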








</div>

## auth_switch[[huggingface_hub.auth_switch]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.auth_switch</name><anchor>huggingface_hub.auth_switch</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_login.py#L161</source><parameters>[{"name": "token_name", "val": ": str"}, {"name": "add_to_git_credential", "val": ": bool = False"}]</parameters><paramsdesc>- **token_name** (`str`) --
  Name of the access token to switch to.
- **add_to_git_credential** (`bool`, defaults to `False`) --
  If `True`, token will be set as git credential. If no git credential helper
  is configured, a warning will be displayed to the user. If `token` is `None`,
  the value of `add_to_git_credential` is ignored and will be prompted again
  to the end user.</paramsdesc><paramgroups>0</paramgroups><raises>- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If the access token name is not found.</raises><raisederrors>``ValueError``</raisederrors></docstring>
Switch to a different access token.








</div>

## auth_list[[huggingface_hub.auth_list]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.auth_list</name><anchor>huggingface_hub.auth_list</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_login.py#L190</source><parameters>[]</parameters></docstring>
List all stored access tokens.

</div>

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/package_reference/authentication.md" />

### CLI
https://huggingface.co/docs/huggingface_hub/main/package_reference/cli.md

<!--
# WARNING
# This entire file has been generated by Typer based on the `hf` CLI implementation.
# To re-generate the code, run `make style` or `python ./utils/generate_cli_reference.py --update`.
# WARNING
-->

# `hf`

Hugging Face Hub CLI

**Usage**:

```console
$ hf [OPTIONS] COMMAND [ARGS]...
```

**Options**:

* `--install-completion`: Install completion for the current shell.
* `--show-completion`: Show completion for the current shell, to copy it or customize the installation.
* `--help`: Show this message and exit.

**Commands**:

* `auth`: Manage authentication (login, logout, etc.).
* `cache`: Manage local cache directory.
* `download`: Download files from the Hub.
* `endpoints`: Manage Hugging Face Inference Endpoints.
* `env`: Print information about the environment.
* `jobs`: Run and manage Jobs on the Hub.
* `lfs-enable-largefiles`: Configure your repository to enable upload...
* `lfs-multipart-upload`: Upload large files to the Hub.
* `repo`: Manage repos on the Hub.
* `repo-files`: Manage files in a repo on the Hub.
* `upload`: Upload a file or a folder to the Hub.
* `upload-large-folder`: Upload a large folder to the Hub.
* `version`: Print information about the hf version.

## `hf auth`

Manage authentication (login, logout, etc.).

**Usage**:

```console
$ hf auth [OPTIONS] COMMAND [ARGS]...
```

**Options**:

* `--help`: Show this message and exit.

**Commands**:

* `list`: List all stored access tokens
* `login`: Login using a token from...
* `logout`: Logout from a specific token
* `switch`: Switch between access tokens
* `whoami`: Find out which huggingface.co account you...

### `hf auth list`

List all stored access tokens

**Usage**:

```console
$ hf auth list [OPTIONS]
```

**Options**:

* `--help`: Show this message and exit.

### `hf auth login`

Login using a token from huggingface.co/settings/tokens

**Usage**:

```console
$ hf auth login [OPTIONS]
```

**Options**:

* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--add-to-git-credential / --no-add-to-git-credential`: Save to git credential helper. Useful only if you plan to run git commands directly.  [default: no-add-to-git-credential]
* `--help`: Show this message and exit.

### `hf auth logout`

Logout from a specific token

**Usage**:

```console
$ hf auth logout [OPTIONS]
```

**Options**:

* `--token-name TEXT`: Name of token to logout
* `--help`: Show this message and exit.

### `hf auth switch`

Switch between access tokens

**Usage**:

```console
$ hf auth switch [OPTIONS]
```

**Options**:

* `--token-name TEXT`: Name of the token to switch to
* `--add-to-git-credential / --no-add-to-git-credential`: Save to git credential helper. Useful only if you plan to run git commands directly.  [default: no-add-to-git-credential]
* `--help`: Show this message and exit.

### `hf auth whoami`

Find out which huggingface.co account you are logged in as.

**Usage**:

```console
$ hf auth whoami [OPTIONS]
```

**Options**:

* `--help`: Show this message and exit.

## `hf cache`

Manage local cache directory.

**Usage**:

```console
$ hf cache [OPTIONS] COMMAND [ARGS]...
```

**Options**:

* `--help`: Show this message and exit.

**Commands**:

* `ls`: List cached repositories or revisions.
* `prune`: Remove detached revisions from the cache.
* `rm`: Remove cached repositories or revisions.
* `verify`: Verify checksums for a single repo...

### `hf cache ls`

List cached repositories or revisions.

**Usage**:

```console
$ hf cache ls [OPTIONS]
```

**Options**:

* `--cache-dir TEXT`: Cache directory to scan (defaults to Hugging Face cache).
* `--revisions / --no-revisions`: Include revisions in the output instead of aggregated repositories.  [default: no-revisions]
* `-f, --filter TEXT`: Filter entries (e.g. 'size>1GB', 'type=model', 'accessed>7d'). Can be used multiple times.
* `--format [table|json|csv]`: Output format.  [default: table]
* `-q, --quiet`: Print only IDs (repo IDs or revision hashes).
* `--sort [accessed|accessed:asc|accessed:desc|modified|modified:asc|modified:desc|name|name:asc|name:desc|size|size:asc|size:desc]`: Sort entries by key. Supported keys: 'accessed', 'modified', 'name', 'size'. Append ':asc' or ':desc' to explicitly set the order (e.g., 'modified:asc'). Defaults: 'accessed', 'modified', 'size' default to 'desc' (newest/biggest first); 'name' defaults to 'asc' (alphabetical).
* `--limit INTEGER`: Limit the number of results returned. Returns only the top N entries after sorting.
* `--help`: Show this message and exit.
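The same information is available programmatically through `scan_cache_dir()` from the library. As a sketch (helper name illustrative), a rough Python equivalent of `hf cache ls --sort size --limit 5`:

```python
from huggingface_hub import scan_cache_dir

def largest_cached_repos(n=5):
    # Scan the local Hugging Face cache and return the n biggest repos.
    try:
        cache_info = scan_cache_dir()
    except Exception:
        return []  # no cache directory yet
    repos = sorted(cache_info.repos, key=lambda r: r.size_on_disk, reverse=True)
    return [(r.repo_id, r.size_on_disk_str) for r in repos[:n]]
```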

### `hf cache prune`

Remove detached revisions from the cache.

**Usage**:

```console
$ hf cache prune [OPTIONS]
```

**Options**:

* `--cache-dir TEXT`: Cache directory to scan (defaults to Hugging Face cache).
* `-y, --yes`: Skip confirmation prompt.
* `--dry-run / --no-dry-run`: Preview deletions without removing anything.  [default: no-dry-run]
* `--help`: Show this message and exit.

### `hf cache rm`

Remove cached repositories or revisions.

**Usage**:

```console
$ hf cache rm [OPTIONS] TARGETS...
```

**Arguments**:

* `TARGETS...`: One or more repo IDs (e.g. model/bert-base-uncased) or revision hashes to delete.  [required]

**Options**:

* `--cache-dir TEXT`: Cache directory to scan (defaults to Hugging Face cache).
* `-y, --yes`: Skip confirmation prompt.
* `--dry-run / --no-dry-run`: Preview deletions without removing anything.  [default: no-dry-run]
* `--help`: Show this message and exit.

### `hf cache verify`

Verify checksums for a single repo revision from cache or a local directory.

Examples:
  - Verify main revision in cache: `hf cache verify gpt2`
  - Verify specific revision: `hf cache verify gpt2 --revision refs/pr/1`
  - Verify dataset: `hf cache verify karpathy/fineweb-edu-100b-shuffle --repo-type dataset`
  - Verify local dir: `hf cache verify deepseek-ai/DeepSeek-OCR --local-dir /path/to/repo`

**Usage**:

```console
$ hf cache verify [OPTIONS] REPO_ID
```

**Arguments**:

* `REPO_ID`: The ID of the repo (e.g. `username/repo-name`).  [required]

**Options**:

* `--repo-type [model|dataset|space]`: The type of repository (model, dataset, or space).  [default: model]
* `--revision TEXT`: Git revision id which can be a branch name, a tag, or a commit hash.
* `--cache-dir TEXT`: Cache directory to use when verifying files from cache (defaults to Hugging Face cache).
* `--local-dir TEXT`: If set, verify files under this directory instead of the cache.
* `--fail-on-missing-files`: Fail if some files exist on the remote but are missing locally.
* `--fail-on-extra-files`: Fail if some files exist locally but are not present on the remote revision.
* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--help`: Show this message and exit.

## `hf download`

Download files from the Hub.

**Usage**:

```console
$ hf download [OPTIONS] REPO_ID [FILENAMES]...
```

**Arguments**:

* `REPO_ID`: The ID of the repo (e.g. `username/repo-name`).  [required]
* `[FILENAMES]...`: Files to download (e.g. `config.json`, `data/metadata.jsonl`).

**Options**:

* `--repo-type [model|dataset|space]`: The type of repository (model, dataset, or space).  [default: model]
* `--revision TEXT`: Git revision id which can be a branch name, a tag, or a commit hash.
* `--include TEXT`: Glob patterns to include from files to download, e.g. `*.json`.
* `--exclude TEXT`: Glob patterns to exclude from files to download.
* `--cache-dir TEXT`: Directory where to save files.
* `--local-dir TEXT`: If set, the downloaded file will be placed under this directory. Check out https://huggingface.co/docs/huggingface_hub/guides/download#download-files-to-local-folder for more details.
* `--force-download / --no-force-download`: If True, the files will be downloaded even if they are already cached.  [default: no-force-download]
* `--dry-run / --no-dry-run`: If True, perform a dry run without actually downloading the file.  [default: no-dry-run]
* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--quiet / --no-quiet`: If True, progress bars are disabled and only the path to the download files is printed.  [default: no-quiet]
* `--max-workers INTEGER`: Maximum number of workers to use for downloading files. Default is 8.  [default: 8]
* `--help`: Show this message and exit.
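The CLI flags above map onto the library's Python download helpers. As a sketch (helper name illustrative), a rough equivalent of `hf download <repo_id> <filename>` using `hf_hub_download`:

```python
from huggingface_hub import hf_hub_download

def fetch_file(repo_id, filename, revision=None, token=None):
    # Downloads the file (or reuses the cached copy) and returns
    # the resolved local path.
    return hf_hub_download(repo_id, filename, revision=revision, token=token)
```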

## `hf endpoints`

Manage Hugging Face Inference Endpoints.

**Usage**:

```console
$ hf endpoints [OPTIONS] COMMAND [ARGS]...
```

**Options**:

* `--help`: Show this message and exit.

**Commands**:

* `catalog`: Interact with the Inference Endpoints...
* `delete`: Delete an Inference Endpoint permanently.
* `deploy`: Deploy an Inference Endpoint from a Hub...
* `describe`: Get information about an existing endpoint.
* `list-catalog`: List available Catalog models.
* `ls`: Lists all Inference Endpoints for the...
* `pause`: Pause an Inference Endpoint.
* `resume`: Resume an Inference Endpoint.
* `scale-to-zero`: Scale an Inference Endpoint to zero.
* `update`: Update an existing endpoint.

### `hf endpoints catalog`

Interact with the Inference Endpoints catalog.

**Usage**:

```console
$ hf endpoints catalog [OPTIONS] COMMAND [ARGS]...
```

**Options**:

* `--help`: Show this message and exit.

**Commands**:

* `deploy`: Deploy an Inference Endpoint from the...
* `ls`: List available Catalog models.

#### `hf endpoints catalog deploy`

Deploy an Inference Endpoint from the Model Catalog.

**Usage**:

```console
$ hf endpoints catalog deploy [OPTIONS] NAME
```

**Arguments**:

* `NAME`: Endpoint name.  [required]

**Options**:

* `--repo TEXT`: The name of the model repository associated with the Inference Endpoint (e.g. 'openai/gpt-oss-120b').  [required]
* `--namespace TEXT`: The namespace associated with the Inference Endpoint. Defaults to the current user's namespace.
* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--help`: Show this message and exit.

#### `hf endpoints catalog ls`

List available Catalog models.

**Usage**:

```console
$ hf endpoints catalog ls [OPTIONS]
```

**Options**:

* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--help`: Show this message and exit.

### `hf endpoints delete`

Delete an Inference Endpoint permanently.

**Usage**:

```console
$ hf endpoints delete [OPTIONS] NAME
```

**Arguments**:

* `NAME`: Endpoint name.  [required]

**Options**:

* `--namespace TEXT`: The namespace associated with the Inference Endpoint. Defaults to the current user's namespace.
* `--yes`: Skip confirmation prompts.
* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--help`: Show this message and exit.

### `hf endpoints deploy`

Deploy an Inference Endpoint from a Hub repository.

**Usage**:

```console
$ hf endpoints deploy [OPTIONS] NAME
```

**Arguments**:

* `NAME`: Endpoint name.  [required]

**Options**:

* `--repo TEXT`: The name of the model repository associated with the Inference Endpoint (e.g. 'openai/gpt-oss-120b').  [required]
* `--framework TEXT`: The machine learning framework used for the model (e.g. 'vllm').  [required]
* `--accelerator TEXT`: The hardware accelerator to be used for inference (e.g. 'cpu').  [required]
* `--instance-size TEXT`: The size or type of the instance to be used for hosting the model (e.g. 'x4').  [required]
* `--instance-type TEXT`: The cloud instance type where the Inference Endpoint will be deployed (e.g. 'intel-icl').  [required]
* `--region TEXT`: The cloud region in which the Inference Endpoint will be created (e.g. 'us-east-1').  [required]
* `--vendor TEXT`: The cloud provider or vendor where the Inference Endpoint will be hosted (e.g. 'aws').  [required]
* `--namespace TEXT`: The namespace associated with the Inference Endpoint. Defaults to the current user's namespace.
* `--task TEXT`: The task on which to deploy the model (e.g. 'text-classification').
* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--help`: Show this message and exit.
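The same deployment can be done from Python with `create_inference_endpoint`, which takes the same required fields as the CLI flags above. A minimal sketch; all hardware, region, and vendor values below are illustrative and should be replaced with options available to your account:

```python
from huggingface_hub import create_inference_endpoint

def deploy(name, token=None):
    # Mirrors `hf endpoints deploy` with its required options.
    return create_inference_endpoint(
        name,
        repository="openai/gpt-oss-120b",  # illustrative model repo
        framework="vllm",
        accelerator="gpu",
        instance_size="x1",                # illustrative hardware choice
        instance_type="nvidia-a10g",
        region="us-east-1",
        vendor="aws",
        token=token,
    )
```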

### `hf endpoints describe`

Get information about an existing endpoint.

**Usage**:

```console
$ hf endpoints describe [OPTIONS] NAME
```

**Arguments**:

* `NAME`: Endpoint name.  [required]

**Options**:

* `--namespace TEXT`: The namespace associated with the Inference Endpoint. Defaults to the current user's namespace.
* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--help`: Show this message and exit.

### `hf endpoints list-catalog`

List available Catalog models.

**Usage**:

```console
$ hf endpoints list-catalog [OPTIONS]
```

**Options**:

* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--help`: Show this message and exit.

### `hf endpoints ls`

Lists all Inference Endpoints for the given namespace.

**Usage**:

```console
$ hf endpoints ls [OPTIONS]
```

**Options**:

* `--namespace TEXT`: The namespace associated with the Inference Endpoint. Defaults to the current user's namespace.
* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--help`: Show this message and exit.

### `hf endpoints pause`

Pause an Inference Endpoint.

**Usage**:

```console
$ hf endpoints pause [OPTIONS] NAME
```

**Arguments**:

* `NAME`: Endpoint name.  [required]

**Options**:

* `--namespace TEXT`: The namespace associated with the Inference Endpoint. Defaults to the current user's namespace.
* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--help`: Show this message and exit.

### `hf endpoints resume`

Resume an Inference Endpoint.

**Usage**:

```console
$ hf endpoints resume [OPTIONS] NAME
```

**Arguments**:

* `NAME`: Endpoint name.  [required]

**Options**:

* `--namespace TEXT`: The namespace associated with the Inference Endpoint. Defaults to the current user's namespace.
* `--fail-if-already-running`: If `True`, the method will raise an error if the Inference Endpoint is already running.
* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--help`: Show this message and exit.

### `hf endpoints scale-to-zero`

Scale an Inference Endpoint to zero.

**Usage**:

```console
$ hf endpoints scale-to-zero [OPTIONS] NAME
```

**Arguments**:

* `NAME`: Endpoint name.  [required]

**Options**:

* `--namespace TEXT`: The namespace associated with the Inference Endpoint. Defaults to the current user's namespace.
* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--help`: Show this message and exit.

### `hf endpoints update`

Update an existing endpoint.

**Usage**:

```console
$ hf endpoints update [OPTIONS] NAME
```

**Arguments**:

* `NAME`: Endpoint name.  [required]

**Options**:

* `--namespace TEXT`: The namespace associated with the Inference Endpoint. Defaults to the current user's namespace.
* `--repo TEXT`: The name of the model repository associated with the Inference Endpoint (e.g. 'openai/gpt-oss-120b').
* `--accelerator TEXT`: The hardware accelerator to be used for inference (e.g. 'cpu').
* `--instance-size TEXT`: The size or type of the instance to be used for hosting the model (e.g. 'x4').
* `--instance-type TEXT`: The cloud instance type where the Inference Endpoint will be deployed (e.g. 'intel-icl').
* `--framework TEXT`: The machine learning framework used for the model (e.g. 'custom').
* `--revision TEXT`: The specific model revision to deploy on the Inference Endpoint (e.g. '6c0e6080953db56375760c0471a8c5f2929baf11').
* `--task TEXT`: The task on which to deploy the model (e.g. 'text-classification').
* `--min-replica INTEGER`: The minimum number of replicas (instances) to keep running for the Inference Endpoint.
* `--max-replica INTEGER`: The maximum number of replicas (instances) to scale to for the Inference Endpoint.
* `--scale-to-zero-timeout INTEGER`: The duration in minutes before an inactive endpoint is scaled to zero.
* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--help`: Show this message and exit.
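
For example, to let an existing endpoint scale down to zero and cap it at two replicas (the endpoint name and values are illustrative):

```console
$ hf endpoints update my-endpoint --min-replica 0 --max-replica 2 --scale-to-zero-timeout 15
```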

## `hf env`

Print information about the environment.

**Usage**:

```console
$ hf env [OPTIONS]
```

**Options**:

* `--help`: Show this message and exit.

## `hf jobs`

Run and manage Jobs on the Hub.

**Usage**:

```console
$ hf jobs [OPTIONS] COMMAND [ARGS]...
```

**Options**:

* `--help`: Show this message and exit.

**Commands**:

* `cancel`: Cancel a Job
* `inspect`: Display detailed information on one or more Jobs
* `logs`: Fetch the logs of a Job
* `ps`: List Jobs
* `run`: Run a Job
* `scheduled`: Create and manage scheduled Jobs on the Hub.
* `uv`: Run UV scripts (Python with inline dependencies) on HF infrastructure

### `hf jobs cancel`

Cancel a Job

**Usage**:

```console
$ hf jobs cancel [OPTIONS] JOB_ID
```

**Arguments**:

* `JOB_ID`: Job ID  [required]

**Options**:

* `--namespace TEXT`: The namespace where the job will be running. Defaults to the current user's namespace.
* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--help`: Show this message and exit.

### `hf jobs inspect`

Display detailed information on one or more Jobs

**Usage**:

```console
$ hf jobs inspect [OPTIONS] JOB_IDS...
```

**Arguments**:

* `JOB_IDS...`: The jobs to inspect  [required]

**Options**:

* `--namespace TEXT`: The namespace where the job will be running. Defaults to the current user's namespace.
* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--help`: Show this message and exit.

### `hf jobs logs`

Fetch the logs of a Job

**Usage**:

```console
$ hf jobs logs [OPTIONS] JOB_ID
```

**Arguments**:

* `JOB_ID`: Job ID  [required]

**Options**:

* `--namespace TEXT`: The namespace where the job will be running. Defaults to the current user's namespace.
* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--help`: Show this message and exit.

### `hf jobs ps`

List Jobs

**Usage**:

```console
$ hf jobs ps [OPTIONS]
```

**Options**:

* `-a, --all`: Show all Jobs (default shows just running)
* `--namespace TEXT`: The namespace where the job will be running. Defaults to the current user's namespace.
* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `-f, --filter TEXT`: Filter output based on conditions provided (format: key=value)
* `--format TEXT`: Format output using a custom template
* `--help`: Show this message and exit.

### `hf jobs run`

Run a Job

**Usage**:

```console
$ hf jobs run [OPTIONS] IMAGE COMMAND...
```

**Arguments**:

* `IMAGE`: The Docker image to use.  [required]
* `COMMAND...`: The command to run.  [required]

**Options**:

* `-e, --env TEXT`: Set environment variables. E.g. --env ENV=value
* `-s, --secrets TEXT`: Set secret environment variables. E.g. --secrets SECRET=value or `--secrets HF_TOKEN` to pass your Hugging Face token.
* `--env-file TEXT`: Read in a file of environment variables.
* `--secrets-file TEXT`: Read in a file of secret environment variables.
* `--flavor [cpu-basic|cpu-upgrade|cpu-xl|zero-a10g|t4-small|t4-medium|l4x1|l4x4|l40sx1|l40sx4|l40sx8|a10g-small|a10g-large|a10g-largex2|a10g-largex4|a100-large|h100|h100x8]`: Flavor for the hardware, as in HF Spaces. Defaults to `cpu-basic`. Possible values: cpu-basic, cpu-upgrade, cpu-xl, zero-a10g, t4-small, t4-medium, l4x1, l4x4, l40sx1, l40sx4, l40sx8, a10g-small, a10g-large, a10g-largex2, a10g-largex4, a100-large, h100, h100x8.
* `--timeout TEXT`: Max duration: int/float with s (seconds, default), m (minutes), h (hours) or d (days).
* `-d, --detach`: Run the Job in the background and print the Job ID.
* `--namespace TEXT`: The namespace where the job will be running. Defaults to the current user's namespace.
* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--help`: Show this message and exit.
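
For example, a minimal job on the default `cpu-basic` flavor (the image and command are illustrative):

```console
$ hf jobs run python:3.12 python -c "print('Hello from the Hub')"
```

Add `-d` to detach, then use `hf jobs logs <job-id>` to fetch the output later.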

### `hf jobs scheduled`

Create and manage scheduled Jobs on the Hub.

**Usage**:

```console
$ hf jobs scheduled [OPTIONS] COMMAND [ARGS]...
```

**Options**:

* `--help`: Show this message and exit.

**Commands**:

* `delete`: Delete a scheduled Job
* `inspect`: Display detailed information on one or more scheduled Jobs
* `ps`: List scheduled Jobs
* `resume`: Resume (unpause) a scheduled Job
* `run`: Schedule a Job
* `suspend`: Suspend (pause) a scheduled Job
* `uv`: Schedule UV scripts on HF infrastructure

#### `hf jobs scheduled delete`

Delete a scheduled Job

**Usage**:

```console
$ hf jobs scheduled delete [OPTIONS] SCHEDULED_JOB_ID
```

**Arguments**:

* `SCHEDULED_JOB_ID`: Scheduled Job ID  [required]

**Options**:

* `--namespace TEXT`: The namespace where the job will be running. Defaults to the current user's namespace.
* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--help`: Show this message and exit.

#### `hf jobs scheduled inspect`

Display detailed information on one or more scheduled Jobs

**Usage**:

```console
$ hf jobs scheduled inspect [OPTIONS] SCHEDULED_JOB_IDS...
```

**Arguments**:

* `SCHEDULED_JOB_IDS...`: The scheduled jobs to inspect  [required]

**Options**:

* `--namespace TEXT`: The namespace where the job will be running. Defaults to the current user's namespace.
* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--help`: Show this message and exit.

#### `hf jobs scheduled ps`

List scheduled Jobs

**Usage**:

```console
$ hf jobs scheduled ps [OPTIONS]
```

**Options**:

* `-a, --all`: Show all scheduled Jobs (default hides suspended)
* `--namespace TEXT`: The namespace where the job will be running. Defaults to the current user's namespace.
* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `-f, --filter TEXT`: Filter output based on conditions provided (format: key=value)
* `--format TEXT`: Format output using a custom template
* `--help`: Show this message and exit.

#### `hf jobs scheduled resume`

Resume (unpause) a scheduled Job

**Usage**:

```console
$ hf jobs scheduled resume [OPTIONS] SCHEDULED_JOB_ID
```

**Arguments**:

* `SCHEDULED_JOB_ID`: Scheduled Job ID  [required]

**Options**:

* `--namespace TEXT`: The namespace where the job will be running. Defaults to the current user's namespace.
* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--help`: Show this message and exit.

#### `hf jobs scheduled run`

Schedule a Job

**Usage**:

```console
$ hf jobs scheduled run [OPTIONS] SCHEDULE IMAGE COMMAND...
```

**Arguments**:

* `SCHEDULE`: One of annually, yearly, monthly, weekly, daily, hourly, or a CRON schedule expression.  [required]
* `IMAGE`: The Docker image to use.  [required]
* `COMMAND...`: The command to run.  [required]

**Options**:

* `--suspend / --no-suspend`: Suspend (pause) the scheduled Job
* `--concurrency / --no-concurrency`: Allow multiple instances of this Job to run concurrently
* `-e, --env TEXT`: Set environment variables. E.g. --env ENV=value
* `-s, --secrets TEXT`: Set secret environment variables. E.g. --secrets SECRET=value or `--secrets HF_TOKEN` to pass your Hugging Face token.
* `--env-file TEXT`: Read in a file of environment variables.
* `--secrets-file TEXT`: Read in a file of secret environment variables.
* `--flavor [cpu-basic|cpu-upgrade|cpu-xl|zero-a10g|t4-small|t4-medium|l4x1|l4x4|l40sx1|l40sx4|l40sx8|a10g-small|a10g-large|a10g-largex2|a10g-largex4|a100-large|h100|h100x8]`: Flavor for the hardware, as in HF Spaces. Defaults to `cpu-basic`. Possible values: cpu-basic, cpu-upgrade, cpu-xl, zero-a10g, t4-small, t4-medium, l4x1, l4x4, l40sx1, l40sx4, l40sx8, a10g-small, a10g-large, a10g-largex2, a10g-largex4, a100-large, h100, h100x8.
* `--timeout TEXT`: Max duration: int/float with s (seconds, default), m (minutes), h (hours) or d (days).
* `--namespace TEXT`: The namespace where the job will be running. Defaults to the current user's namespace.
* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--help`: Show this message and exit.
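
For example, to run a command every hour using the `hourly` shorthand (the image and command are illustrative):

```console
$ hf jobs scheduled run hourly python:3.12 python -c "print('tick')"
```

A CRON expression such as `"0 * * * *"` can be used in place of the shorthand.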

#### `hf jobs scheduled suspend`

Suspend (pause) a scheduled Job

**Usage**:

```console
$ hf jobs scheduled suspend [OPTIONS] SCHEDULED_JOB_ID
```

**Arguments**:

* `SCHEDULED_JOB_ID`: Scheduled Job ID  [required]

**Options**:

* `--namespace TEXT`: The namespace where the job will be running. Defaults to the current user's namespace.
* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--help`: Show this message and exit.

#### `hf jobs scheduled uv`

Schedule UV scripts on HF infrastructure

**Usage**:

```console
$ hf jobs scheduled uv [OPTIONS] COMMAND [ARGS]...
```

**Options**:

* `--help`: Show this message and exit.

**Commands**:

* `run`: Run a UV script (local file or URL) on HF infrastructure

##### `hf jobs scheduled uv run`

Run a UV script (local file or URL) on HF infrastructure

**Usage**:

```console
$ hf jobs scheduled uv run [OPTIONS] SCHEDULE SCRIPT [SCRIPT_ARGS]...
```

**Arguments**:

* `SCHEDULE`: One of annually, yearly, monthly, weekly, daily, hourly, or a CRON schedule expression.  [required]
* `SCRIPT`: UV script to run (local file or URL)  [required]
* `[SCRIPT_ARGS]...`: Arguments for the script

**Options**:

* `--suspend / --no-suspend`: Suspend (pause) the scheduled Job
* `--concurrency / --no-concurrency`: Allow multiple instances of this Job to run concurrently
* `--image TEXT`: Use a custom Docker image with `uv` installed.
* `--repo TEXT`: Repository name for the script (creates ephemeral if not specified)
* `--flavor [cpu-basic|cpu-upgrade|cpu-xl|zero-a10g|t4-small|t4-medium|l4x1|l4x4|l40sx1|l40sx4|l40sx8|a10g-small|a10g-large|a10g-largex2|a10g-largex4|a100-large|h100|h100x8]`: Flavor for the hardware, as in HF Spaces. Defaults to `cpu-basic`. Possible values: cpu-basic, cpu-upgrade, cpu-xl, zero-a10g, t4-small, t4-medium, l4x1, l4x4, l40sx1, l40sx4, l40sx8, a10g-small, a10g-large, a10g-largex2, a10g-largex4, a100-large, h100, h100x8.
* `-e, --env TEXT`: Set environment variables. E.g. --env ENV=value
* `-s, --secrets TEXT`: Set secret environment variables. E.g. --secrets SECRET=value or `--secrets HF_TOKEN` to pass your Hugging Face token.
* `--env-file TEXT`: Read in a file of environment variables.
* `--secrets-file TEXT`: Read in a file of secret environment variables.
* `--timeout TEXT`: Max duration: int/float with s (seconds, default), m (minutes), h (hours) or d (days).
* `--namespace TEXT`: The namespace where the job will be running. Defaults to the current user's namespace.
* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--with TEXT`: Run with the given packages installed
* `-p, --python TEXT`: The Python interpreter to use for the run environment
* `--help`: Show this message and exit.

### `hf jobs uv`

Run UV scripts (Python with inline dependencies) on HF infrastructure

**Usage**:

```console
$ hf jobs uv [OPTIONS] COMMAND [ARGS]...
```

**Options**:

* `--help`: Show this message and exit.

**Commands**:

* `run`: Run a UV script (local file or URL) on HF infrastructure

#### `hf jobs uv run`

Run a UV script (local file or URL) on HF infrastructure

**Usage**:

```console
$ hf jobs uv run [OPTIONS] SCRIPT [SCRIPT_ARGS]...
```

**Arguments**:

* `SCRIPT`: UV script to run (local file or URL)  [required]
* `[SCRIPT_ARGS]...`: Arguments for the script

**Options**:

* `--image TEXT`: Use a custom Docker image with `uv` installed.
* `--repo TEXT`: Repository name for the script (creates ephemeral if not specified)
* `--flavor [cpu-basic|cpu-upgrade|cpu-xl|zero-a10g|t4-small|t4-medium|l4x1|l4x4|l40sx1|l40sx4|l40sx8|a10g-small|a10g-large|a10g-largex2|a10g-largex4|a100-large|h100|h100x8]`: Flavor for the hardware, as in HF Spaces. Defaults to `cpu-basic`. Possible values: cpu-basic, cpu-upgrade, cpu-xl, zero-a10g, t4-small, t4-medium, l4x1, l4x4, l40sx1, l40sx4, l40sx8, a10g-small, a10g-large, a10g-largex2, a10g-largex4, a100-large, h100, h100x8.
* `-e, --env TEXT`: Set environment variables. E.g. --env ENV=value
* `-s, --secrets TEXT`: Set secret environment variables. E.g. --secrets SECRET=value or `--secrets HF_TOKEN` to pass your Hugging Face token.
* `--env-file TEXT`: Read in a file of environment variables.
* `--secrets-file TEXT`: Read in a file of secret environment variables.
* `--timeout TEXT`: Max duration: int/float with s (seconds, default), m (minutes), h (hours) or d (days).
* `-d, --detach`: Run the Job in the background and print the Job ID.
* `--namespace TEXT`: The namespace where the job will be running. Defaults to the current user's namespace.
* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--with TEXT`: Run with the given packages installed
* `-p, --python TEXT`: The Python interpreter to use for the run environment
* `--help`: Show this message and exit.
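
For example, to run a local script with an extra package installed on upgraded CPU hardware (the script path is illustrative):

```console
$ hf jobs uv run my_script.py --flavor cpu-upgrade --with pandas
```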

## `hf lfs-enable-largefiles`

Configure your repository to enable upload of files > 5GB.

**Usage**:

```console
$ hf lfs-enable-largefiles [OPTIONS] PATH
```

**Arguments**:

* `PATH`: Local path to repository you want to configure.  [required]

**Options**:

* `--help`: Show this message and exit.

## `hf lfs-multipart-upload`

Upload large files to the Hub.

**Usage**:

```console
$ hf lfs-multipart-upload [OPTIONS]
```

**Options**:

* `--help`: Show this message and exit.

## `hf repo`

Manage repos on the Hub.

**Usage**:

```console
$ hf repo [OPTIONS] COMMAND [ARGS]...
```

**Options**:

* `--help`: Show this message and exit.

**Commands**:

* `branch`: Manage branches for a repo on the Hub.
* `create`: Create a new repo on the Hub.
* `delete`: Delete a repo from the Hub.
* `move`: Move a repository from one namespace to another.
* `settings`: Update the settings of a repository.
* `tag`: Manage tags for a repo on the Hub.

### `hf repo branch`

Manage branches for a repo on the Hub.

**Usage**:

```console
$ hf repo branch [OPTIONS] COMMAND [ARGS]...
```

**Options**:

* `--help`: Show this message and exit.

**Commands**:

* `create`: Create a new branch for a repo on the Hub.
* `delete`: Delete a branch from a repo on the Hub.

#### `hf repo branch create`

Create a new branch for a repo on the Hub.

**Usage**:

```console
$ hf repo branch create [OPTIONS] REPO_ID BRANCH
```

**Arguments**:

* `REPO_ID`: The ID of the repo (e.g. `username/repo-name`).  [required]
* `BRANCH`: The name of the branch to create.  [required]

**Options**:

* `--revision TEXT`: Git revision id which can be a branch name, a tag, or a commit hash.
* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--repo-type [model|dataset|space]`: The type of repository (model, dataset, or space).  [default: model]
* `--exist-ok / --no-exist-ok`: If set to True, do not raise an error if branch already exists.  [default: no-exist-ok]
* `--help`: Show this message and exit.

#### `hf repo branch delete`

Delete a branch from a repo on the Hub.

**Usage**:

```console
$ hf repo branch delete [OPTIONS] REPO_ID BRANCH
```

**Arguments**:

* `REPO_ID`: The ID of the repo (e.g. `username/repo-name`).  [required]
* `BRANCH`: The name of the branch to delete.  [required]

**Options**:

* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--repo-type [model|dataset|space]`: The type of repository (model, dataset, or space).  [default: model]
* `--help`: Show this message and exit.

### `hf repo create`

Create a new repo on the Hub.

**Usage**:

```console
$ hf repo create [OPTIONS] REPO_ID
```

**Arguments**:

* `REPO_ID`: The ID of the repo (e.g. `username/repo-name`).  [required]

**Options**:

* `--repo-type [model|dataset|space]`: The type of repository (model, dataset, or space).  [default: model]
* `--space-sdk TEXT`: Hugging Face Spaces SDK type. Required when --repo-type is set to 'space'.
* `--private / --no-private`: Whether to create a private repo if repo doesn't exist on the Hub. Ignored if the repo already exists.  [default: no-private]
* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--exist-ok / --no-exist-ok`: Do not raise an error if repo already exists.  [default: no-exist-ok]
* `--resource-group-id TEXT`: Resource group in which to create the repo. Resource groups are only available for Enterprise Hub organizations.
* `--help`: Show this message and exit.
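
For example, to create a private dataset repo (the repo name is illustrative):

```console
$ hf repo create username/my-dataset --repo-type dataset --private
```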

### `hf repo delete`

Delete a repo from the Hub. This is an irreversible operation.

**Usage**:

```console
$ hf repo delete [OPTIONS] REPO_ID
```

**Arguments**:

* `REPO_ID`: The ID of the repo (e.g. `username/repo-name`).  [required]

**Options**:

* `--repo-type [model|dataset|space]`: The type of repository (model, dataset, or space).  [default: model]
* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--missing-ok / --no-missing-ok`: If set to True, do not raise an error if repo does not exist.  [default: no-missing-ok]
* `--help`: Show this message and exit.

### `hf repo move`

Move a repository from one namespace to another.

**Usage**:

```console
$ hf repo move [OPTIONS] FROM_ID TO_ID
```

**Arguments**:

* `FROM_ID`: The current ID of the repo (e.g. `username/repo-name`).  [required]
* `TO_ID`: The new ID of the repo (e.g. `new-username/new-repo-name`).  [required]

**Options**:

* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--repo-type [model|dataset|space]`: The type of repository (model, dataset, or space).  [default: model]
* `--help`: Show this message and exit.

### `hf repo settings`

Update the settings of a repository.

**Usage**:

```console
$ hf repo settings [OPTIONS] REPO_ID
```

**Arguments**:

* `REPO_ID`: The ID of the repo (e.g. `username/repo-name`).  [required]

**Options**:

* `--gated [auto|manual|false]`: The gated status for the repository.
* `--private / --no-private`: Whether the repository should be private.
* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--repo-type [model|dataset|space]`: The type of repository (model, dataset, or space).  [default: model]
* `--help`: Show this message and exit.

### `hf repo tag`

Manage tags for a repo on the Hub.

**Usage**:

```console
$ hf repo tag [OPTIONS] COMMAND [ARGS]...
```

**Options**:

* `--help`: Show this message and exit.

**Commands**:

* `create`: Create a tag for a repo.
* `delete`: Delete a tag for a repo.
* `list`: List tags for a repo.

#### `hf repo tag create`

Create a tag for a repo.

**Usage**:

```console
$ hf repo tag create [OPTIONS] REPO_ID TAG
```

**Arguments**:

* `REPO_ID`: The ID of the repo (e.g. `username/repo-name`).  [required]
* `TAG`: The name of the tag to create.  [required]

**Options**:

* `-m, --message TEXT`: The description of the tag to create.
* `--revision TEXT`: Git revision id which can be a branch name, a tag, or a commit hash.
* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--repo-type [model|dataset|space]`: The type of repository (model, dataset, or space).  [default: model]
* `--help`: Show this message and exit.

#### `hf repo tag delete`

Delete a tag for a repo.

**Usage**:

```console
$ hf repo tag delete [OPTIONS] REPO_ID TAG
```

**Arguments**:

* `REPO_ID`: The ID of the repo (e.g. `username/repo-name`).  [required]
* `TAG`: The name of the tag to delete.  [required]

**Options**:

* `-y, --yes`: Answer Yes to prompt automatically
* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--repo-type [model|dataset|space]`: The type of repository (model, dataset, or space).  [default: model]
* `--help`: Show this message and exit.

#### `hf repo tag list`

List tags for a repo.

**Usage**:

```console
$ hf repo tag list [OPTIONS] REPO_ID
```

**Arguments**:

* `REPO_ID`: The ID of the repo (e.g. `username/repo-name`).  [required]

**Options**:

* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--repo-type [model|dataset|space]`: The type of repository (model, dataset, or space).  [default: model]
* `--help`: Show this message and exit.

## `hf repo-files`

Manage files in a repo on the Hub.

**Usage**:

```console
$ hf repo-files [OPTIONS] COMMAND [ARGS]...
```

**Options**:

* `--help`: Show this message and exit.

**Commands**:

* `delete`: Delete files from a repo on the Hub.

### `hf repo-files delete`

Delete files from a repo on the Hub.

**Usage**:

```console
$ hf repo-files delete [OPTIONS] REPO_ID PATTERNS...
```

**Arguments**:

* `REPO_ID`: The ID of the repo (e.g. `username/repo-name`).  [required]
* `PATTERNS...`: Glob patterns to match files to delete.  [required]

**Options**:

* `--repo-type [model|dataset|space]`: The type of repository (model, dataset, or space).  [default: model]
* `--revision TEXT`: Git revision id which can be a branch name, a tag, or a commit hash.
* `--commit-message TEXT`: The summary / title / first line of the generated commit.
* `--commit-description TEXT`: The description of the generated commit.
* `--create-pr / --no-create-pr`: Whether to create a new Pull Request for these changes.  [default: no-create-pr]
* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--help`: Show this message and exit.
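
For example, to remove all `.bin` files from a model repo through a Pull Request instead of a direct commit (the repo name is illustrative):

```console
$ hf repo-files delete username/my-model "*.bin" --create-pr
```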

## `hf upload`

Upload a file or a folder to the Hub.

**Usage**:

```console
$ hf upload [OPTIONS] REPO_ID [LOCAL_PATH] [PATH_IN_REPO]
```

**Arguments**:

* `REPO_ID`: The ID of the repo (e.g. `username/repo-name`).  [required]
* `[LOCAL_PATH]`: Local path to the file or folder to upload. Wildcard patterns are supported. Defaults to current directory.
* `[PATH_IN_REPO]`: Path of the file or folder in the repo. Defaults to the relative path of the file or folder.

**Options**:

* `--repo-type [model|dataset|space]`: The type of repository (model, dataset, or space).  [default: model]
* `--revision TEXT`: Git revision id which can be a branch name, a tag, or a commit hash.
* `--private / --no-private`: Whether to create a private repo if repo doesn't exist on the Hub. Ignored if the repo already exists.  [default: no-private]
* `--include TEXT`: Glob patterns to match files to upload.
* `--exclude TEXT`: Glob patterns to exclude from files to upload.
* `--delete TEXT`: Glob patterns for files to be deleted from the repo while committing.
* `--commit-message TEXT`: The summary / title / first line of the generated commit.
* `--commit-description TEXT`: The description of the generated commit.
* `--create-pr / --no-create-pr`: Whether to upload content as a new Pull Request.  [default: no-create-pr]
* `--every FLOAT`: If set, a background job is scheduled to create commits every `every` minutes.
* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--quiet / --no-quiet`: Disable progress bars and warnings; print only the returned path.  [default: no-quiet]
* `--help`: Show this message and exit.
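
For example, to upload a local folder to the root of a model repo, or a data folder to a dataset repo while skipping temporary files (paths and repo names are illustrative):

```console
$ hf upload username/my-model ./checkpoint
$ hf upload username/my-dataset ./data /data --repo-type dataset --exclude "*.tmp"
```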

## `hf upload-large-folder`

Upload a large folder to the Hub. Recommended for resumable uploads.

**Usage**:

```console
$ hf upload-large-folder [OPTIONS] REPO_ID LOCAL_PATH
```

**Arguments**:

* `REPO_ID`: The ID of the repo (e.g. `username/repo-name`).  [required]
* `LOCAL_PATH`: Local path to the folder to upload.  [required]

**Options**:

* `--repo-type [model|dataset|space]`: The type of repository (model, dataset, or space).  [default: model]
* `--revision TEXT`: Git revision id which can be a branch name, a tag, or a commit hash.
* `--private / --no-private`: Whether to create a private repo if repo doesn't exist on the Hub. Ignored if the repo already exists.  [default: no-private]
* `--include TEXT`: Glob patterns to match files to upload.
* `--exclude TEXT`: Glob patterns to exclude from files to upload.
* `--token TEXT`: A User Access Token generated from https://huggingface.co/settings/tokens.
* `--num-workers INTEGER`: Number of workers to use to hash, upload and commit files.
* `--no-report / --no-no-report`: Whether to disable regular status report.  [default: no-no-report]
* `--no-bars / --no-no-bars`: Whether to disable progress bars.  [default: no-no-bars]
* `--help`: Show this message and exit.
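
For example, to upload a large local folder to a dataset repo with eight parallel workers (the repo name and path are illustrative):

```console
$ hf upload-large-folder username/my-dataset ./big-folder --repo-type dataset --num-workers 8
```

If the process is interrupted, re-running the same command resumes the upload.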

## `hf version`

Print information about the hf version.

**Usage**:

```console
$ hf version [OPTIONS]
```

**Options**:

* `--help`: Show this message and exit.


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/package_reference/cli.md" />

### Managing your Space runtime
https://huggingface.co/docs/huggingface_hub/main/package_reference/space_runtime.md

# Managing your Space runtime

Check the [HfApi](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi) documentation page for the reference of methods to manage your Space on the Hub.

- Duplicate a Space: [duplicate_space()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.duplicate_space)
- Fetch current runtime: [get_space_runtime()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.get_space_runtime)
- Manage secrets: [add_space_secret()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.add_space_secret) and [delete_space_secret()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.delete_space_secret)
- Manage hardware: [request_space_hardware()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.request_space_hardware)
- Manage state: [pause_space()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.pause_space), [restart_space()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.restart_space), [set_space_sleep_time()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.set_space_sleep_time)

## Data structures

### SpaceRuntime[[huggingface_hub.SpaceRuntime]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.SpaceRuntime</name><anchor>huggingface_hub.SpaceRuntime</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_space_api.py#L103</source><parameters>[{"name": "data", "val": ": dict"}]</parameters><paramsdesc>- **stage** (`str`) --
  Current stage of the space. Example: RUNNING.
- **hardware** (`str` or `None`) --
  Current hardware of the space. Example: "cpu-basic". Can be `None` if Space
  is `BUILDING` for the first time.
- **requested_hardware** (`str` or `None`) --
  Requested hardware. Can be different from `hardware` especially if the request
  has just been made. Example: "t4-medium". Can be `None` if no hardware has
  been requested yet.
- **sleep_time** (`int` or `None`) --
  Number of seconds the Space will be kept alive after the last request. By default (if value is `None`), the
  Space will never go to sleep if it's running on an upgraded hardware, while it will go to sleep after 48
  hours on a free 'cpu-basic' hardware. For more details, see https://huggingface.co/docs/hub/spaces-gpus#sleep-time.
- **raw** (`dict`) --
  Raw response from the server. Contains more information about the Space
  runtime like number of replicas, number of cpu, memory size,...</paramsdesc><paramgroups>0</paramgroups></docstring>

Contains information about the current runtime of a Space.




</div>

### SpaceHardware[[huggingface_hub.SpaceHardware]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.SpaceHardware</name><anchor>huggingface_hub.SpaceHardware</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_space_api.py#L48</source><parameters>[{"name": "value", "val": ""}, {"name": "names", "val": " = None"}, {"name": "module", "val": " = None"}, {"name": "qualname", "val": " = None"}, {"name": "type", "val": " = None"}, {"name": "start", "val": " = 1"}]</parameters></docstring>

Enumeration of the hardware options available to run your Space on the Hub.

<ExampleCodeBlock anchor="huggingface_hub.SpaceHardware.example">

Value can be compared to a string:
```py
assert SpaceHardware.CPU_BASIC == "cpu-basic"
```

</ExampleCodeBlock>

Taken from https://github.com/huggingface-internal/moon-landing/blob/main/server/repo_types/SpaceHardwareFlavor.ts (private url).


</div>

### SpaceStage[[huggingface_hub.SpaceStage]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.SpaceStage</name><anchor>huggingface_hub.SpaceStage</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_space_api.py#L23</source><parameters>[{"name": "value", "val": ""}, {"name": "names", "val": " = None"}, {"name": "module", "val": " = None"}, {"name": "qualname", "val": " = None"}, {"name": "type", "val": " = None"}, {"name": "start", "val": " = 1"}]</parameters></docstring>

Enumeration of the possible stages of a Space on the Hub.

<ExampleCodeBlock anchor="huggingface_hub.SpaceStage.example">

Value can be compared to a string:
```py
assert SpaceStage.BUILDING == "BUILDING"
```

</ExampleCodeBlock>

Taken from https://github.com/huggingface/moon-landing/blob/main/server/repo_types/SpaceInfo.ts#L61 (private url).


</div>

### SpaceStorage[[huggingface_hub.SpaceStorage]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.SpaceStorage</name><anchor>huggingface_hub.SpaceStorage</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_space_api.py#L85</source><parameters>[{"name": "value", "val": ""}, {"name": "names", "val": " = None"}, {"name": "module", "val": " = None"}, {"name": "qualname", "val": " = None"}, {"name": "type", "val": " = None"}, {"name": "start", "val": " = 1"}]</parameters></docstring>

Enumeration of the persistent storage options available for your Space on the Hub.

<ExampleCodeBlock anchor="huggingface_hub.SpaceStorage.example">

Value can be compared to a string:
```py
assert SpaceStorage.SMALL == "small"
```

</ExampleCodeBlock>

Taken from https://github.com/huggingface/moon-landing/blob/main/server/repo_types/SpaceHardwareFlavor.ts#L24 (private url).


</div>

### SpaceVariable[[huggingface_hub.SpaceVariable]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.SpaceVariable</name><anchor>huggingface_hub.SpaceVariable</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_space_api.py#L143</source><parameters>[{"name": "key", "val": ": str"}, {"name": "values", "val": ": dict"}]</parameters><paramsdesc>- **key** (`str`) --
  Variable key. Example: `"MODEL_REPO_ID"`
- **value** (`str`) --
  Variable value. Example: `"the_model_repo_id"`.
- **description** (`str` or None) --
  Description of the variable. Example: `"Model Repo ID of the implemented model"`.
- **updatedAt** (`datetime` or None) --
  datetime of the last update of the variable (if the variable has been updated at least once).</paramsdesc><paramgroups>0</paramgroups></docstring>

Contains information about the current variables of a Space.




</div>
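A minimal sketch mirroring the documented `SpaceVariable` fields, as a plain dataclass. The class name and field casing here are illustrative; in `huggingface_hub` these objects are returned by the Space-variable APIs rather than built by hand:

```py
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Illustrative stand-in with the same fields as the documented SpaceVariable.
@dataclass
class SpaceVariableSketch:
    key: str
    value: str
    description: Optional[str] = None
    updated_at: Optional[datetime] = None  # None until the variable is updated

var = SpaceVariableSketch(
    key="MODEL_REPO_ID",
    value="the_model_repo_id",
    description="Model Repo ID of the implemented model",
)
assert var.key == "MODEL_REPO_ID"
assert var.updated_at is None
```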

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/package_reference/space_runtime.md" />

### Inference types
https://huggingface.co/docs/huggingface_hub/main/package_reference/inference_types.md

# Inference types

This page lists the types (e.g. dataclasses) available for each task supported on the Hugging Face Hub.
Each task is specified using a JSON schema, and the types are generated from these schemas, with some
customization to meet Python requirements.
Visit [@huggingface.js/tasks](https://github.com/huggingface/huggingface.js/tree/main/packages/tasks/src/tasks)
to find the JSON schemas for each task.

This part of the library is still under development and will be improved in future releases.



## audio_classification[[huggingface_hub.AudioClassificationInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.AudioClassificationInput</name><anchor>huggingface_hub.AudioClassificationInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/audio_classification.py#L25</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.audio_classification.AudioClassificationParameters] = None"}]</parameters></docstring>
Inputs for Audio Classification inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.AudioClassificationOutputElement</name><anchor>huggingface_hub.AudioClassificationOutputElement</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/audio_classification.py#L37</source><parameters>[{"name": "label", "val": ": str"}, {"name": "score", "val": ": float"}]</parameters></docstring>
Outputs for Audio Classification inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.AudioClassificationParameters</name><anchor>huggingface_hub.AudioClassificationParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/audio_classification.py#L15</source><parameters>[{"name": "function_to_apply", "val": ": typing.Optional[ForwardRef('AudioClassificationOutputTransform')] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}]</parameters></docstring>
Additional inference parameters for Audio Classification

</div>
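A hedged sketch of how a client might post-process results shaped like `AudioClassificationOutputElement` (a list of label/score pairs), honoring the `top_k` parameter. The helper name and example labels are illustrative:

```py
# Keep only the top_k highest-scoring (label, score) pairs from a list of
# dicts shaped like AudioClassificationOutputElement.
def top_k_labels(elements, top_k=2):
    ranked = sorted(elements, key=lambda e: e["score"], reverse=True)
    return [(e["label"], e["score"]) for e in ranked[:top_k]]

raw = [
    {"label": "dog_bark", "score": 0.71},
    {"label": "siren", "score": 0.05},
    {"label": "speech", "score": 0.24},
]
assert top_k_labels(raw) == [("dog_bark", 0.71), ("speech", 0.24)]
```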

## audio_to_audio[[huggingface_hub.AudioToAudioInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.AudioToAudioInput</name><anchor>huggingface_hub.AudioToAudioInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/audio_to_audio.py#L12</source><parameters>[{"name": "inputs", "val": ": typing.Any"}]</parameters></docstring>
Inputs for Audio to Audio inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.AudioToAudioOutputElement</name><anchor>huggingface_hub.AudioToAudioOutputElement</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/audio_to_audio.py#L20</source><parameters>[{"name": "blob", "val": ": typing.Any"}, {"name": "content_type", "val": ": str"}, {"name": "label", "val": ": str"}]</parameters></docstring>
Outputs of inference for the Audio To Audio task: a generated audio file with its label.


</div>

## automatic_speech_recognition[[huggingface_hub.AutomaticSpeechRecognitionGenerationParameters]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.AutomaticSpeechRecognitionGenerationParameters</name><anchor>huggingface_hub.AutomaticSpeechRecognitionGenerationParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/automatic_speech_recognition.py#L15</source><parameters>[{"name": "do_sample", "val": ": typing.Optional[bool] = None"}, {"name": "early_stopping", "val": ": typing.Union[bool, ForwardRef('AutomaticSpeechRecognitionEarlyStoppingEnum'), NoneType] = None"}, {"name": "epsilon_cutoff", "val": ": typing.Optional[float] = None"}, {"name": "eta_cutoff", "val": ": typing.Optional[float] = None"}, {"name": "max_length", "val": ": typing.Optional[int] = None"}, {"name": "max_new_tokens", "val": ": typing.Optional[int] = None"}, {"name": "min_length", "val": ": typing.Optional[int] = None"}, {"name": "min_new_tokens", "val": ": typing.Optional[int] = None"}, {"name": "num_beam_groups", "val": ": typing.Optional[int] = None"}, {"name": "num_beams", "val": ": typing.Optional[int] = None"}, {"name": "penalty_alpha", "val": ": typing.Optional[float] = None"}, {"name": "temperature", "val": ": typing.Optional[float] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}, {"name": "top_p", "val": ": typing.Optional[float] = None"}, {"name": "typical_p", "val": ": typing.Optional[float] = None"}, {"name": "use_cache", "val": ": typing.Optional[bool] = None"}]</parameters></docstring>
Parametrization of the text generation process

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.AutomaticSpeechRecognitionInput</name><anchor>huggingface_hub.AutomaticSpeechRecognitionInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/automatic_speech_recognition.py#L85</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.automatic_speech_recognition.AutomaticSpeechRecognitionParameters] = None"}]</parameters></docstring>
Inputs for Automatic Speech Recognition inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.AutomaticSpeechRecognitionOutput</name><anchor>huggingface_hub.AutomaticSpeechRecognitionOutput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/automatic_speech_recognition.py#L105</source><parameters>[{"name": "text", "val": ": str"}, {"name": "chunks", "val": ": typing.Optional[list[huggingface_hub.inference._generated.types.automatic_speech_recognition.AutomaticSpeechRecognitionOutputChunk]] = None"}]</parameters></docstring>
Outputs of inference for the Automatic Speech Recognition task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.AutomaticSpeechRecognitionOutputChunk</name><anchor>huggingface_hub.AutomaticSpeechRecognitionOutputChunk</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/automatic_speech_recognition.py#L97</source><parameters>[{"name": "text", "val": ": str"}, {"name": "timestamp", "val": ": list"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.AutomaticSpeechRecognitionParameters</name><anchor>huggingface_hub.AutomaticSpeechRecognitionParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/automatic_speech_recognition.py#L75</source><parameters>[{"name": "generation_parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.automatic_speech_recognition.AutomaticSpeechRecognitionGenerationParameters] = None"}, {"name": "return_timestamps", "val": ": typing.Optional[bool] = None"}]</parameters></docstring>
Additional inference parameters for Automatic Speech Recognition

</div>
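A sketch of a parameters payload for this task, using field names taken from `AutomaticSpeechRecognitionParameters` and `AutomaticSpeechRecognitionGenerationParameters` above. The specific values chosen here are illustrative:

```py
# Nested payload mirroring the documented parameter dataclasses:
# return_timestamps at the top level, generation knobs one level down.
params = {
    "return_timestamps": True,
    "generation_parameters": {
        "do_sample": False,
        "num_beams": 4,
        "max_new_tokens": 256,
    },
}
assert params["return_timestamps"] is True
assert params["generation_parameters"]["num_beams"] == 4
```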

## chat_completion[[huggingface_hub.ChatCompletionInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionInput</name><anchor>huggingface_hub.ChatCompletionInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L125</source><parameters>[{"name": "messages", "val": ": list"}, {"name": "frequency_penalty", "val": ": typing.Optional[float] = None"}, {"name": "logit_bias", "val": ": typing.Optional[list[float]] = None"}, {"name": "logprobs", "val": ": typing.Optional[bool] = None"}, {"name": "max_tokens", "val": ": typing.Optional[int] = None"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "n", "val": ": typing.Optional[int] = None"}, {"name": "presence_penalty", "val": ": typing.Optional[float] = None"}, {"name": "response_format", "val": ": typing.Union[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputResponseFormatText, huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputResponseFormatJSONSchema, huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputResponseFormatJSONObject, NoneType] = None"}, {"name": "seed", "val": ": typing.Optional[int] = None"}, {"name": "stop", "val": ": typing.Optional[list[str]] = None"}, {"name": "stream", "val": ": typing.Optional[bool] = None"}, {"name": "stream_options", "val": ": typing.Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputStreamOptions] = None"}, {"name": "temperature", "val": ": typing.Optional[float] = None"}, {"name": "tool_choice", "val": ": typing.Union[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputToolChoiceClass, ForwardRef('ChatCompletionInputToolChoiceEnum'), NoneType] = None"}, {"name": "tool_prompt", "val": ": typing.Optional[str] = None"}, {"name": "tools", "val": ": typing.Optional[list[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputTool]] = None"}, {"name": "top_logprobs", 
"val": ": typing.Optional[int] = None"}, {"name": "top_p", "val": ": typing.Optional[float] = None"}]</parameters></docstring>
Chat Completion Input.
Auto-generated from TGI specs.
For more details, check out
https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.


</div>
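A sketch of a request payload matching `ChatCompletionInput`'s fields: `messages` is the only required field, everything else is optional. The message contents here are illustrative:

```py
# Chat-completion request payload; keys mirror ChatCompletionInput fields.
payload = {
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
    "max_tokens": 64,
    "temperature": 0.7,
    "stream": False,
}
assert payload["messages"][1]["role"] == "user"
```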

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionInputFunctionDefinition</name><anchor>huggingface_hub.ChatCompletionInputFunctionDefinition</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L27</source><parameters>[{"name": "name", "val": ": str"}, {"name": "parameters", "val": ": typing.Any"}, {"name": "description", "val": ": typing.Optional[str] = None"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionInputFunctionName</name><anchor>huggingface_hub.ChatCompletionInputFunctionName</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L106</source><parameters>[{"name": "name", "val": ": str"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionInputJSONSchema</name><anchor>huggingface_hub.ChatCompletionInputJSONSchema</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L49</source><parameters>[{"name": "name", "val": ": str"}, {"name": "description", "val": ": typing.Optional[str] = None"}, {"name": "schema", "val": ": typing.Optional[dict[str, object]] = None"}, {"name": "strict", "val": ": typing.Optional[bool] = None"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionInputMessage</name><anchor>huggingface_hub.ChatCompletionInputMessage</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L41</source><parameters>[{"name": "role", "val": ": str"}, {"name": "content", "val": ": typing.Union[list[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputMessageChunk], str, NoneType] = None"}, {"name": "name", "val": ": typing.Optional[str] = None"}, {"name": "tool_calls", "val": ": typing.Optional[list[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputToolCall]] = None"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionInputMessageChunk</name><anchor>huggingface_hub.ChatCompletionInputMessageChunk</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L20</source><parameters>[{"name": "type", "val": ": ChatCompletionInputMessageChunkType"}, {"name": "image_url", "val": ": typing.Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputURL] = None"}, {"name": "text", "val": ": typing.Optional[str] = None"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionInputResponseFormatJSONObject</name><anchor>huggingface_hub.ChatCompletionInputResponseFormatJSONObject</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L84</source><parameters>[{"name": "type", "val": ": typing.Literal['json_object']"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionInputResponseFormatJSONSchema</name><anchor>huggingface_hub.ChatCompletionInputResponseFormatJSONSchema</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L78</source><parameters>[{"name": "type", "val": ": typing.Literal['json_schema']"}, {"name": "json_schema", "val": ": ChatCompletionInputJSONSchema"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionInputResponseFormatText</name><anchor>huggingface_hub.ChatCompletionInputResponseFormatText</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L73</source><parameters>[{"name": "type", "val": ": typing.Literal['text']"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionInputStreamOptions</name><anchor>huggingface_hub.ChatCompletionInputStreamOptions</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L96</source><parameters>[{"name": "include_usage", "val": ": typing.Optional[bool] = None"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionInputTool</name><anchor>huggingface_hub.ChatCompletionInputTool</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L119</source><parameters>[{"name": "function", "val": ": ChatCompletionInputFunctionDefinition"}, {"name": "type", "val": ": str"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionInputToolCall</name><anchor>huggingface_hub.ChatCompletionInputToolCall</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L34</source><parameters>[{"name": "function", "val": ": ChatCompletionInputFunctionDefinition"}, {"name": "id", "val": ": str"}, {"name": "type", "val": ": str"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionInputToolChoiceClass</name><anchor>huggingface_hub.ChatCompletionInputToolChoiceClass</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L111</source><parameters>[{"name": "function", "val": ": ChatCompletionInputFunctionName"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionInputURL</name><anchor>huggingface_hub.ChatCompletionInputURL</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L12</source><parameters>[{"name": "url", "val": ": str"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionOutput</name><anchor>huggingface_hub.ChatCompletionOutput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L263</source><parameters>[{"name": "choices", "val": ": list"}, {"name": "created", "val": ": int"}, {"name": "id", "val": ": str"}, {"name": "model", "val": ": str"}, {"name": "system_fingerprint", "val": ": str"}, {"name": "usage", "val": ": ChatCompletionOutputUsage"}]</parameters></docstring>
Chat Completion Output.
Auto-generated from TGI specs.
For more details, check out
https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.


</div>
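A sketch of extracting the assistant reply from a response shaped like `ChatCompletionOutput` (choices → message → content). The response values below are made up for illustration:

```py
# Dict shaped like ChatCompletionOutput; in practice this comes back from
# the inference endpoint rather than being built by hand.
response = {
    "id": "cmpl-0",
    "created": 1700000000,
    "model": "some-model",
    "system_fingerprint": "fp",
    "choices": [
        {
            "index": 0,
            "finish_reason": "stop",
            "message": {"role": "assistant", "content": "Paris."},
        }
    ],
    "usage": {"prompt_tokens": 12, "completion_tokens": 3, "total_tokens": 15},
}
reply = response["choices"][0]["message"]["content"]
assert reply == "Paris."
```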

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionOutputComplete</name><anchor>huggingface_hub.ChatCompletionOutputComplete</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L248</source><parameters>[{"name": "finish_reason", "val": ": str"}, {"name": "index", "val": ": int"}, {"name": "message", "val": ": ChatCompletionOutputMessage"}, {"name": "logprobs", "val": ": typing.Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionOutputLogprobs] = None"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionOutputFunctionDefinition</name><anchor>huggingface_hub.ChatCompletionOutputFunctionDefinition</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L225</source><parameters>[{"name": "arguments", "val": ": str"}, {"name": "name", "val": ": str"}, {"name": "description", "val": ": typing.Optional[str] = None"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionOutputLogprob</name><anchor>huggingface_hub.ChatCompletionOutputLogprob</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L213</source><parameters>[{"name": "logprob", "val": ": float"}, {"name": "token", "val": ": str"}, {"name": "top_logprobs", "val": ": list"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionOutputLogprobs</name><anchor>huggingface_hub.ChatCompletionOutputLogprobs</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L220</source><parameters>[{"name": "content", "val": ": list"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionOutputMessage</name><anchor>huggingface_hub.ChatCompletionOutputMessage</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L239</source><parameters>[{"name": "role", "val": ": str"}, {"name": "content", "val": ": typing.Optional[str] = None"}, {"name": "reasoning", "val": ": typing.Optional[str] = None"}, {"name": "tool_call_id", "val": ": typing.Optional[str] = None"}, {"name": "tool_calls", "val": ": typing.Optional[list[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionOutputToolCall]] = None"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionOutputToolCall</name><anchor>huggingface_hub.ChatCompletionOutputToolCall</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L232</source><parameters>[{"name": "function", "val": ": ChatCompletionOutputFunctionDefinition"}, {"name": "id", "val": ": str"}, {"name": "type", "val": ": str"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionOutputTopLogprob</name><anchor>huggingface_hub.ChatCompletionOutputTopLogprob</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L207</source><parameters>[{"name": "logprob", "val": ": float"}, {"name": "token", "val": ": str"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionOutputUsage</name><anchor>huggingface_hub.ChatCompletionOutputUsage</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L256</source><parameters>[{"name": "completion_tokens", "val": ": int"}, {"name": "prompt_tokens", "val": ": int"}, {"name": "total_tokens", "val": ": int"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionStreamOutput</name><anchor>huggingface_hub.ChatCompletionStreamOutput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L335</source><parameters>[{"name": "choices", "val": ": list"}, {"name": "created", "val": ": int"}, {"name": "id", "val": ": str"}, {"name": "model", "val": ": str"}, {"name": "system_fingerprint", "val": ": str"}, {"name": "usage", "val": ": typing.Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionStreamOutputUsage] = None"}]</parameters></docstring>
Chat Completion Stream Output.
Auto-generated from TGI specs.
For more details, check out
https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionStreamOutputChoice</name><anchor>huggingface_hub.ChatCompletionStreamOutputChoice</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L320</source><parameters>[{"name": "delta", "val": ": ChatCompletionStreamOutputDelta"}, {"name": "index", "val": ": int"}, {"name": "finish_reason", "val": ": typing.Optional[str] = None"}, {"name": "logprobs", "val": ": typing.Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionStreamOutputLogprobs] = None"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionStreamOutputDelta</name><anchor>huggingface_hub.ChatCompletionStreamOutputDelta</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L293</source><parameters>[{"name": "role", "val": ": str"}, {"name": "content", "val": ": typing.Optional[str] = None"}, {"name": "reasoning", "val": ": typing.Optional[str] = None"}, {"name": "tool_call_id", "val": ": typing.Optional[str] = None"}, {"name": "tool_calls", "val": ": typing.Optional[list[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionStreamOutputDeltaToolCall]] = None"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionStreamOutputDeltaToolCall</name><anchor>huggingface_hub.ChatCompletionStreamOutputDeltaToolCall</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L285</source><parameters>[{"name": "function", "val": ": ChatCompletionStreamOutputFunction"}, {"name": "id", "val": ": str"}, {"name": "index", "val": ": int"}, {"name": "type", "val": ": str"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionStreamOutputFunction</name><anchor>huggingface_hub.ChatCompletionStreamOutputFunction</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L279</source><parameters>[{"name": "arguments", "val": ": str"}, {"name": "name", "val": ": typing.Optional[str] = None"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionStreamOutputLogprob</name><anchor>huggingface_hub.ChatCompletionStreamOutputLogprob</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L308</source><parameters>[{"name": "logprob", "val": ": float"}, {"name": "token", "val": ": str"}, {"name": "top_logprobs", "val": ": list"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionStreamOutputLogprobs</name><anchor>huggingface_hub.ChatCompletionStreamOutputLogprobs</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L315</source><parameters>[{"name": "content", "val": ": list"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionStreamOutputTopLogprob</name><anchor>huggingface_hub.ChatCompletionStreamOutputTopLogprob</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L302</source><parameters>[{"name": "logprob", "val": ": float"}, {"name": "token", "val": ": str"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionStreamOutputUsage</name><anchor>huggingface_hub.ChatCompletionStreamOutputUsage</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L328</source><parameters>[{"name": "completion_tokens", "val": ": int"}, {"name": "prompt_tokens", "val": ": int"}, {"name": "total_tokens", "val": ": int"}]</parameters></docstring>


</div>
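When streaming, each `ChatCompletionStreamOutput` chunk carries a partial `delta` rather than a full message, so clients reassemble the text by concatenating the deltas' `content`. A sketch with made-up chunk data:

```py
# Two chunks shaped like ChatCompletionStreamOutput; the final chunk's
# delta may omit `content`, so fall back to the empty string.
chunks = [
    {"choices": [{"index": 0, "delta": {"role": "assistant", "content": "Par"}}]},
    {"choices": [{"index": 0, "delta": {"content": "is."}, "finish_reason": "stop"}]},
]
text = "".join(c["choices"][0]["delta"].get("content") or "" for c in chunks)
assert text == "Paris."
```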

## depth_estimation[[huggingface_hub.DepthEstimationInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.DepthEstimationInput</name><anchor>huggingface_hub.DepthEstimationInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/depth_estimation.py#L12</source><parameters>[{"name": "inputs", "val": ": typing.Any"}, {"name": "parameters", "val": ": typing.Optional[dict[str, typing.Any]] = None"}]</parameters></docstring>
Inputs for Depth Estimation inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.DepthEstimationOutput</name><anchor>huggingface_hub.DepthEstimationOutput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/depth_estimation.py#L22</source><parameters>[{"name": "depth", "val": ": typing.Any"}, {"name": "predicted_depth", "val": ": typing.Any"}]</parameters></docstring>
Outputs of inference for the Depth Estimation task

</div>

## document_question_answering[[huggingface_hub.DocumentQuestionAnsweringInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.DocumentQuestionAnsweringInput</name><anchor>huggingface_hub.DocumentQuestionAnsweringInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/document_question_answering.py#L56</source><parameters>[{"name": "inputs", "val": ": DocumentQuestionAnsweringInputData"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.document_question_answering.DocumentQuestionAnsweringParameters] = None"}]</parameters></docstring>
Inputs for Document Question Answering inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.DocumentQuestionAnsweringInputData</name><anchor>huggingface_hub.DocumentQuestionAnsweringInputData</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/document_question_answering.py#L12</source><parameters>[{"name": "image", "val": ": typing.Any"}, {"name": "question", "val": ": str"}]</parameters></docstring>
One (document, question) pair to answer

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.DocumentQuestionAnsweringOutputElement</name><anchor>huggingface_hub.DocumentQuestionAnsweringOutputElement</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/document_question_answering.py#L66</source><parameters>[{"name": "answer", "val": ": str"}, {"name": "end", "val": ": int"}, {"name": "score", "val": ": float"}, {"name": "start", "val": ": int"}]</parameters></docstring>
Outputs of inference for the Document Question Answering task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.DocumentQuestionAnsweringParameters</name><anchor>huggingface_hub.DocumentQuestionAnsweringParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/document_question_answering.py#L22</source><parameters>[{"name": "doc_stride", "val": ": typing.Optional[int] = None"}, {"name": "handle_impossible_answer", "val": ": typing.Optional[bool] = None"}, {"name": "lang", "val": ": typing.Optional[str] = None"}, {"name": "max_answer_len", "val": ": typing.Optional[int] = None"}, {"name": "max_question_len", "val": ": typing.Optional[int] = None"}, {"name": "max_seq_len", "val": ": typing.Optional[int] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}, {"name": "word_boxes", "val": ": typing.Optional[list[typing.Union[list[float], str]]] = None"}]</parameters></docstring>
Additional inference parameters for Document Question Answering

</div>
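The task returns a list of `DocumentQuestionAnsweringOutputElement`-shaped results. A minimal sketch of client-side post-processing (the answers and scores below are hypothetical values, not real model output) keeps the `top_k` highest-scoring candidates:

```python
# Illustrative: rank DocumentQuestionAnsweringOutputElement-style results
# (fields: `answer`, `start`, `end`, `score`) and keep the top_k best.
def top_answers(elements, top_k=1):
    return sorted(elements, key=lambda e: e["score"], reverse=True)[:top_k]

results = [
    {"answer": "$12.00", "start": 5, "end": 6, "score": 0.92},
    {"answer": "$1.20", "start": 9, "end": 10, "score": 0.04},
]
best = top_answers(results)[0]
```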

## feature_extraction[[huggingface_hub.FeatureExtractionInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.FeatureExtractionInput</name><anchor>huggingface_hub.FeatureExtractionInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/feature_extraction.py#L15</source><parameters>[{"name": "inputs", "val": ": typing.Union[list[str], str]"}, {"name": "normalize", "val": ": typing.Optional[bool] = None"}, {"name": "prompt_name", "val": ": typing.Optional[str] = None"}, {"name": "truncate", "val": ": typing.Optional[bool] = None"}, {"name": "truncation_direction", "val": ": typing.Optional[ForwardRef('FeatureExtractionInputTruncationDirection')] = None"}]</parameters></docstring>
Feature Extraction Input.
Auto-generated from TEI specs.
For more details, check out
https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tei-import.ts.


</div>
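The `normalize` flag asks the server to scale each embedding to unit L2 norm. A short sketch of that operation (done server-side when the flag is set, shown here only to clarify its meaning):

```python
import math

# Illustrative: what the `normalize` flag of FeatureExtractionInput requests —
# scaling an embedding vector to unit L2 norm.
def l2_normalize(vec):
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec] if norm else vec

emb = l2_normalize([3.0, 4.0])
```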

## fill_mask[[huggingface_hub.FillMaskInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.FillMaskInput</name><anchor>huggingface_hub.FillMaskInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/fill_mask.py#L26</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.fill_mask.FillMaskParameters] = None"}]</parameters></docstring>
Inputs for Fill Mask inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.FillMaskOutputElement</name><anchor>huggingface_hub.FillMaskOutputElement</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/fill_mask.py#L36</source><parameters>[{"name": "score", "val": ": float"}, {"name": "sequence", "val": ": str"}, {"name": "token", "val": ": int"}, {"name": "token_str", "val": ": typing.Any"}, {"name": "fill_mask_output_token_str", "val": ": typing.Optional[str] = None"}]</parameters></docstring>
Outputs of inference for the Fill Mask task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.FillMaskParameters</name><anchor>huggingface_hub.FillMaskParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/fill_mask.py#L12</source><parameters>[{"name": "targets", "val": ": typing.Optional[list[str]] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}]</parameters></docstring>
Additional inference parameters for Fill Mask

</div>
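For each mask, the task returns a list of `FillMaskOutputElement`-shaped candidates. A sketch with hypothetical token ids and scores, picking the most likely completion:

```python
# Illustrative: FillMaskOutputElement-style candidates (fields: `score`,
# `sequence`, `token`, `token_str`) for one mask; token ids are hypothetical.
candidates = [
    {"score": 0.7, "sequence": "Paris is the capital of France.", "token": 812, "token_str": "capital"},
    {"score": 0.1, "sequence": "Paris is the heart of France.", "token": 277, "token_str": "heart"},
]
best = max(candidates, key=lambda c: c["score"])
```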

## image_classification[[huggingface_hub.ImageClassificationInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageClassificationInput</name><anchor>huggingface_hub.ImageClassificationInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_classification.py#L25</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.image_classification.ImageClassificationParameters] = None"}]</parameters></docstring>
Inputs for Image Classification inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageClassificationOutputElement</name><anchor>huggingface_hub.ImageClassificationOutputElement</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_classification.py#L37</source><parameters>[{"name": "label", "val": ": str"}, {"name": "score", "val": ": float"}]</parameters></docstring>
Outputs of inference for the Image Classification task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageClassificationParameters</name><anchor>huggingface_hub.ImageClassificationParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_classification.py#L15</source><parameters>[{"name": "function_to_apply", "val": ": typing.Optional[ForwardRef('ImageClassificationOutputTransform')] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}]</parameters></docstring>
Additional inference parameters for Image Classification

</div>
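The `top_k` parameter limits how many labels are returned. A sketch of the equivalent client-side selection over `ImageClassificationOutputElement`-shaped results (labels and scores below are hypothetical):

```python
# Illustrative: ImageClassificationOutputElement-style results (`label`,
# `score`); keep the `top_k` highest-scoring labels, as the parameter requests.
def top_k_labels(elements, top_k):
    ranked = sorted(elements, key=lambda e: e["score"], reverse=True)
    return [e["label"] for e in ranked[:top_k]]

labels = top_k_labels(
    [{"label": "tabby", "score": 0.8},
     {"label": "tiger cat", "score": 0.15},
     {"label": "lynx", "score": 0.05}],
    top_k=2,
)
```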

## image_segmentation[[huggingface_hub.ImageSegmentationInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageSegmentationInput</name><anchor>huggingface_hub.ImageSegmentationInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_segmentation.py#L29</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.image_segmentation.ImageSegmentationParameters] = None"}]</parameters></docstring>
Inputs for Image Segmentation inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageSegmentationOutputElement</name><anchor>huggingface_hub.ImageSegmentationOutputElement</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_segmentation.py#L41</source><parameters>[{"name": "label", "val": ": str"}, {"name": "mask", "val": ": str"}, {"name": "score", "val": ": typing.Optional[float] = None"}]</parameters></docstring>
Outputs of inference for the Image Segmentation task: a predicted mask / segment


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageSegmentationParameters</name><anchor>huggingface_hub.ImageSegmentationParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_segmentation.py#L15</source><parameters>[{"name": "mask_threshold", "val": ": typing.Optional[float] = None"}, {"name": "overlap_mask_area_threshold", "val": ": typing.Optional[float] = None"}, {"name": "subtask", "val": ": typing.Optional[ForwardRef('ImageSegmentationSubtask')] = None"}, {"name": "threshold", "val": ": typing.Optional[float] = None"}]</parameters></docstring>
Additional inference parameters for Image Segmentation

</div>
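The `threshold` parameter drops low-confidence segments. A sketch of the equivalent filter over `ImageSegmentationOutputElement`-shaped results, keeping segments whose optional `score` is missing or above the cutoff:

```python
# Illustrative: drop ImageSegmentationOutputElement-style segments whose
# `score` falls below a threshold (a None score is kept, since the field is optional).
def filter_segments(segments, threshold):
    return [s for s in segments if s["score"] is None or s["score"] >= threshold]

kept = filter_segments(
    [{"label": "cat", "mask": "<base64 PNG>", "score": 0.95},
     {"label": "dog", "mask": "<base64 PNG>", "score": 0.4}],
    threshold=0.5,
)
```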

## image_to_image[[huggingface_hub.ImageToImageInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageToImageInput</name><anchor>huggingface_hub.ImageToImageInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_to_image.py#L44</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.image_to_image.ImageToImageParameters] = None"}]</parameters></docstring>
Inputs for Image To Image inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageToImageOutput</name><anchor>huggingface_hub.ImageToImageOutput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_to_image.py#L56</source><parameters>[{"name": "image", "val": ": typing.Any"}]</parameters></docstring>
Outputs of inference for the Image To Image task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageToImageParameters</name><anchor>huggingface_hub.ImageToImageParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_to_image.py#L22</source><parameters>[{"name": "guidance_scale", "val": ": typing.Optional[float] = None"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "prompt", "val": ": typing.Optional[str] = None"}, {"name": "target_size", "val": ": typing.Optional[huggingface_hub.inference._generated.types.image_to_image.ImageToImageTargetSize] = None"}]</parameters></docstring>
Additional inference parameters for Image To Image

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageToImageTargetSize</name><anchor>huggingface_hub.ImageToImageTargetSize</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_to_image.py#L12</source><parameters>[{"name": "height", "val": ": int"}, {"name": "width", "val": ": int"}]</parameters></docstring>
The size in pixels of the output image. This parameter is only supported by some
providers and for specific models. It will be ignored when unsupported.


</div>
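A sketch of validating an `ImageToImageTargetSize`-style value before placing it in the parameters payload; both `height` and `width` are required ints:

```python
# Illustrative: validate an ImageToImageTargetSize-style value (`height`,
# `width` are required ints) before putting it in the parameters payload.
def make_target_size(height, width):
    if not (isinstance(height, int) and isinstance(width, int)):
        raise TypeError("height and width must be ints")
    if height <= 0 or width <= 0:
        raise ValueError("height and width must be positive")
    return {"height": height, "width": width}

size = make_target_size(512, 768)
```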

## image_to_text[[huggingface_hub.ImageToTextGenerationParameters]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageToTextGenerationParameters</name><anchor>huggingface_hub.ImageToTextGenerationParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_to_text.py#L15</source><parameters>[{"name": "do_sample", "val": ": typing.Optional[bool] = None"}, {"name": "early_stopping", "val": ": typing.Union[bool, ForwardRef('ImageToTextEarlyStoppingEnum'), NoneType] = None"}, {"name": "epsilon_cutoff", "val": ": typing.Optional[float] = None"}, {"name": "eta_cutoff", "val": ": typing.Optional[float] = None"}, {"name": "max_length", "val": ": typing.Optional[int] = None"}, {"name": "max_new_tokens", "val": ": typing.Optional[int] = None"}, {"name": "min_length", "val": ": typing.Optional[int] = None"}, {"name": "min_new_tokens", "val": ": typing.Optional[int] = None"}, {"name": "num_beam_groups", "val": ": typing.Optional[int] = None"}, {"name": "num_beams", "val": ": typing.Optional[int] = None"}, {"name": "penalty_alpha", "val": ": typing.Optional[float] = None"}, {"name": "temperature", "val": ": typing.Optional[float] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}, {"name": "top_p", "val": ": typing.Optional[float] = None"}, {"name": "typical_p", "val": ": typing.Optional[float] = None"}, {"name": "use_cache", "val": ": typing.Optional[bool] = None"}]</parameters></docstring>
Parametrization of the text generation process

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageToTextInput</name><anchor>huggingface_hub.ImageToTextInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_to_text.py#L85</source><parameters>[{"name": "inputs", "val": ": typing.Any"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.image_to_text.ImageToTextParameters] = None"}]</parameters></docstring>
Inputs for Image To Text inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageToTextOutput</name><anchor>huggingface_hub.ImageToTextOutput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_to_text.py#L95</source><parameters>[{"name": "generated_text", "val": ": typing.Any"}, {"name": "image_to_text_output_generated_text", "val": ": typing.Optional[str] = None"}]</parameters></docstring>
Outputs of inference for the Image To Text task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageToTextParameters</name><anchor>huggingface_hub.ImageToTextParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_to_text.py#L75</source><parameters>[{"name": "generation_parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.image_to_text.ImageToTextGenerationParameters] = None"}, {"name": "max_new_tokens", "val": ": typing.Optional[int] = None"}]</parameters></docstring>
Additional inference parameters for Image To Text

</div>
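Every field of `ImageToTextGenerationParameters` is optional and defaults to `None`. A sketch of serializing only the options a caller actually set (the option values below are hypothetical):

```python
# Illustrative: serialize ImageToTextGenerationParameters-style options,
# keeping only fields that were explicitly set (unset ones default to None).
def serialize_generation_params(**options):
    return {k: v for k, v in options.items() if v is not None}

params = serialize_generation_params(do_sample=True, temperature=0.7, num_beams=None)
```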

## image_to_video[[huggingface_hub.ImageToVideoInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageToVideoInput</name><anchor>huggingface_hub.ImageToVideoInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_to_video.py#L44</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.image_to_video.ImageToVideoParameters] = None"}]</parameters></docstring>
Inputs for Image To Video inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageToVideoOutput</name><anchor>huggingface_hub.ImageToVideoOutput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_to_video.py#L56</source><parameters>[{"name": "video", "val": ": typing.Any"}]</parameters></docstring>
Outputs of inference for the Image To Video task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageToVideoParameters</name><anchor>huggingface_hub.ImageToVideoParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_to_video.py#L20</source><parameters>[{"name": "guidance_scale", "val": ": typing.Optional[float] = None"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "num_frames", "val": ": typing.Optional[float] = None"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "prompt", "val": ": typing.Optional[str] = None"}, {"name": "seed", "val": ": typing.Optional[int] = None"}, {"name": "target_size", "val": ": typing.Optional[huggingface_hub.inference._generated.types.image_to_video.ImageToVideoTargetSize] = None"}]</parameters></docstring>
Additional inference parameters for Image To Video

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageToVideoTargetSize</name><anchor>huggingface_hub.ImageToVideoTargetSize</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_to_video.py#L12</source><parameters>[{"name": "height", "val": ": int"}, {"name": "width", "val": ": int"}]</parameters></docstring>
The size in pixels of the output video frames.
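A sketch of layering caller overrides on top of default `ImageToVideoParameters`-style options; pinning `seed` is the documented way to keep runs reproducible (default values below are hypothetical):

```python
# Illustrative: combine caller overrides with defaults for
# ImageToVideoParameters-style options; a fixed `seed` keeps runs reproducible.
defaults = {"num_inference_steps": 25, "guidance_scale": 7.5}
overrides = {"seed": 42, "num_frames": 16.0}
params = {**defaults, **overrides}
```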

</div>

## object_detection[[huggingface_hub.ObjectDetectionBoundingBox]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ObjectDetectionBoundingBox</name><anchor>huggingface_hub.ObjectDetectionBoundingBox</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/object_detection.py#L32</source><parameters>[{"name": "xmax", "val": ": int"}, {"name": "xmin", "val": ": int"}, {"name": "ymax", "val": ": int"}, {"name": "ymin", "val": ": int"}]</parameters></docstring>
The predicted bounding box. Coordinates are relative to the top left corner of the input
image.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ObjectDetectionInput</name><anchor>huggingface_hub.ObjectDetectionInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/object_detection.py#L20</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.object_detection.ObjectDetectionParameters] = None"}]</parameters></docstring>
Inputs for Object Detection inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ObjectDetectionOutputElement</name><anchor>huggingface_hub.ObjectDetectionOutputElement</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/object_detection.py#L48</source><parameters>[{"name": "box", "val": ": ObjectDetectionBoundingBox"}, {"name": "label", "val": ": str"}, {"name": "score", "val": ": float"}]</parameters></docstring>
Outputs of inference for the Object Detection task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ObjectDetectionParameters</name><anchor>huggingface_hub.ObjectDetectionParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/object_detection.py#L12</source><parameters>[{"name": "threshold", "val": ": typing.Optional[float] = None"}]</parameters></docstring>
Additional inference parameters for Object Detection

</div>
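`ObjectDetectionBoundingBox` coordinates are pixel offsets from the top-left corner, so standard box arithmetic applies directly. A sketch computing intersection-over-union for two boxes in that shape:

```python
# Illustrative: intersection-over-union of two ObjectDetectionBoundingBox-style
# boxes (`xmin`, `ymin`, `xmax`, `ymax`, pixel coordinates from the top left).
def iou(a, b):
    ix = max(0, min(a["xmax"], b["xmax"]) - max(a["xmin"], b["xmin"]))
    iy = max(0, min(a["ymax"], b["ymax"]) - max(a["ymin"], b["ymin"]))
    inter = ix * iy
    def area(r):
        return (r["xmax"] - r["xmin"]) * (r["ymax"] - r["ymin"])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

score = iou({"xmin": 0, "ymin": 0, "xmax": 10, "ymax": 10},
            {"xmin": 5, "ymin": 5, "xmax": 15, "ymax": 15})
```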

## question_answering[[huggingface_hub.QuestionAnsweringInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.QuestionAnsweringInput</name><anchor>huggingface_hub.QuestionAnsweringInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/question_answering.py#L54</source><parameters>[{"name": "inputs", "val": ": QuestionAnsweringInputData"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.question_answering.QuestionAnsweringParameters] = None"}]</parameters></docstring>
Inputs for Question Answering inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.QuestionAnsweringInputData</name><anchor>huggingface_hub.QuestionAnsweringInputData</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/question_answering.py#L12</source><parameters>[{"name": "context", "val": ": str"}, {"name": "question", "val": ": str"}]</parameters></docstring>
One (context, question) pair to answer

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.QuestionAnsweringOutputElement</name><anchor>huggingface_hub.QuestionAnsweringOutputElement</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/question_answering.py#L64</source><parameters>[{"name": "answer", "val": ": str"}, {"name": "end", "val": ": int"}, {"name": "score", "val": ": float"}, {"name": "start", "val": ": int"}]</parameters></docstring>
Outputs of inference for the Question Answering task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.QuestionAnsweringParameters</name><anchor>huggingface_hub.QuestionAnsweringParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/question_answering.py#L22</source><parameters>[{"name": "align_to_words", "val": ": typing.Optional[bool] = None"}, {"name": "doc_stride", "val": ": typing.Optional[int] = None"}, {"name": "handle_impossible_answer", "val": ": typing.Optional[bool] = None"}, {"name": "max_answer_len", "val": ": typing.Optional[int] = None"}, {"name": "max_question_len", "val": ": typing.Optional[int] = None"}, {"name": "max_seq_len", "val": ": typing.Optional[int] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}]</parameters></docstring>
Additional inference parameters for Question Answering

</div>
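A sketch of consuming a `QuestionAnsweringOutputElement`-shaped result; it assumes `start`/`end` are character offsets into the context (mirroring the `transformers` question-answering pipeline), so slicing recovers the answer:

```python
# Illustrative: QuestionAnsweringOutputElement's `start`/`end` treated as
# character offsets into the context (an assumption mirroring the
# transformers pipeline), so the answer can be recovered by slicing.
context = "The Eiffel Tower is located in Paris."
element = {"answer": "Paris", "start": 31, "end": 36, "score": 0.98}
span = context[element["start"]:element["end"]]
```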

## sentence_similarity[[huggingface_hub.SentenceSimilarityInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.SentenceSimilarityInput</name><anchor>huggingface_hub.SentenceSimilarityInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/sentence_similarity.py#L22</source><parameters>[{"name": "inputs", "val": ": SentenceSimilarityInputData"}, {"name": "parameters", "val": ": typing.Optional[dict[str, typing.Any]] = None"}]</parameters></docstring>
Inputs for Sentence similarity inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.SentenceSimilarityInputData</name><anchor>huggingface_hub.SentenceSimilarityInputData</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/sentence_similarity.py#L12</source><parameters>[{"name": "sentences", "val": ": list"}, {"name": "source_sentence", "val": ": str"}]</parameters></docstring>
One (source_sentence, sentences) pair: the similarity of `source_sentence` to each entry in `sentences` is computed

</div>
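The task returns one similarity score per entry in `sentences`. A sketch pairing a `SentenceSimilarityInputData`-shaped request with its scores (the scores below are hypothetical model output) and picking the closest sentence:

```python
# Illustrative: pair a SentenceSimilarityInputData-style request with the
# scores the task returns (one similarity per entry in `sentences`).
data = {"source_sentence": "A cat sits on the mat.",
        "sentences": ["A feline rests on a rug.", "Stocks fell sharply today."]}
scores = [0.83, 0.02]  # hypothetical model output, one score per sentence
ranked = sorted(zip(data["sentences"], scores), key=lambda p: p[1], reverse=True)
most_similar = ranked[0][0]
```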

## summarization[[huggingface_hub.SummarizationInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.SummarizationInput</name><anchor>huggingface_hub.SummarizationInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/summarization.py#L27</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.summarization.SummarizationParameters] = None"}]</parameters></docstring>
Inputs for Summarization inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.SummarizationOutput</name><anchor>huggingface_hub.SummarizationOutput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/summarization.py#L37</source><parameters>[{"name": "summary_text", "val": ": str"}]</parameters></docstring>
Outputs of inference for the Summarization task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.SummarizationParameters</name><anchor>huggingface_hub.SummarizationParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/summarization.py#L15</source><parameters>[{"name": "clean_up_tokenization_spaces", "val": ": typing.Optional[bool] = None"}, {"name": "generate_parameters", "val": ": typing.Optional[dict[str, typing.Any]] = None"}, {"name": "truncation", "val": ": typing.Optional[ForwardRef('SummarizationTruncationStrategy')] = None"}]</parameters></docstring>
Additional inference parameters for Summarization

</div>
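A sketch of building a `SummarizationParameters`-style payload; the guard assumes `truncation` takes the strategy names used by `transformers` tokenizers, which is an assumption here:

```python
# Illustrative: guard a SummarizationParameters-style `truncation` value
# against the strategy names used by transformers tokenizers (an assumption).
ALLOWED = {"do_not_truncate", "longest_first", "only_first", "only_second"}

def make_summarization_params(truncation=None, clean_up_tokenization_spaces=None):
    if truncation is not None and truncation not in ALLOWED:
        raise ValueError(f"unknown truncation strategy: {truncation}")
    params = {"truncation": truncation,
              "clean_up_tokenization_spaces": clean_up_tokenization_spaces}
    return {k: v for k, v in params.items() if v is not None}

params = make_summarization_params(truncation="longest_first")
```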

## table_question_answering[[huggingface_hub.TableQuestionAnsweringInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TableQuestionAnsweringInput</name><anchor>huggingface_hub.TableQuestionAnsweringInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/table_question_answering.py#L40</source><parameters>[{"name": "inputs", "val": ": TableQuestionAnsweringInputData"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.table_question_answering.TableQuestionAnsweringParameters] = None"}]</parameters></docstring>
Inputs for Table Question Answering inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TableQuestionAnsweringInputData</name><anchor>huggingface_hub.TableQuestionAnsweringInputData</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/table_question_answering.py#L12</source><parameters>[{"name": "question", "val": ": str"}, {"name": "table", "val": ": dict"}]</parameters></docstring>
One (table, question) pair to answer

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TableQuestionAnsweringOutputElement</name><anchor>huggingface_hub.TableQuestionAnsweringOutputElement</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/table_question_answering.py#L50</source><parameters>[{"name": "answer", "val": ": str"}, {"name": "cells", "val": ": list"}, {"name": "coordinates", "val": ": list"}, {"name": "aggregator", "val": ": typing.Optional[str] = None"}]</parameters></docstring>
Outputs of inference for the Table Question Answering task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TableQuestionAnsweringParameters</name><anchor>huggingface_hub.TableQuestionAnsweringParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/table_question_answering.py#L25</source><parameters>[{"name": "padding", "val": ": typing.Optional[ForwardRef('Padding')] = None"}, {"name": "sequential", "val": ": typing.Optional[bool] = None"}, {"name": "truncation", "val": ": typing.Optional[bool] = None"}]</parameters></docstring>
Additional inference parameters for Table Question Answering

</div>
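A `TableQuestionAnsweringOutputElement` may carry an `aggregator` applied to the selected `cells`. A sketch assuming TAPAS-style aggregator names (SUM/AVERAGE/COUNT/NONE, an assumption, with hypothetical cell values):

```python
# Illustrative: TableQuestionAnsweringOutputElement may carry an `aggregator`
# (e.g. SUM/AVERAGE/COUNT in TAPAS-style models — an assumption) applied to `cells`.
def apply_aggregator(cells, aggregator):
    values = [float(c) for c in cells]
    if aggregator == "SUM":
        return sum(values)
    if aggregator == "AVERAGE":
        return sum(values) / len(values)
    if aggregator == "COUNT":
        return len(values)
    return cells  # NONE: the answer is the cells themselves

total = apply_aggregator(["36", "53"], "SUM")
```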

## text2text_generation[[huggingface_hub.Text2TextGenerationInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.Text2TextGenerationInput</name><anchor>huggingface_hub.Text2TextGenerationInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text2text_generation.py#L27</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.text2text_generation.Text2TextGenerationParameters] = None"}]</parameters></docstring>
Inputs for Text2text Generation inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.Text2TextGenerationOutput</name><anchor>huggingface_hub.Text2TextGenerationOutput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text2text_generation.py#L37</source><parameters>[{"name": "generated_text", "val": ": typing.Any"}, {"name": "text2_text_generation_output_generated_text", "val": ": typing.Optional[str] = None"}]</parameters></docstring>
Outputs of inference for the Text2text Generation task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.Text2TextGenerationParameters</name><anchor>huggingface_hub.Text2TextGenerationParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text2text_generation.py#L15</source><parameters>[{"name": "clean_up_tokenization_spaces", "val": ": typing.Optional[bool] = None"}, {"name": "generate_parameters", "val": ": typing.Optional[dict[str, typing.Any]] = None"}, {"name": "truncation", "val": ": typing.Optional[ForwardRef('Text2TextGenerationTruncationStrategy')] = None"}]</parameters></docstring>
Additional inference parameters for Text2text Generation

</div>
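A sketch of a complete request in the `Text2TextGenerationInput` shape: a string `inputs` plus an optional `parameters` object that is omitted when no option is set (the prompt below is hypothetical):

```python
# Illustrative: a complete request in the Text2TextGenerationInput shape —
# a string `inputs` plus optional `parameters`, omitted when empty.
def build_request(inputs, **parameters):
    request = {"inputs": inputs}
    params = {k: v for k, v in parameters.items() if v is not None}
    if params:
        request["parameters"] = params
    return request

req = build_request("translate English to German: Hello",
                    clean_up_tokenization_spaces=True)
```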

## text_classification[[huggingface_hub.TextClassificationInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextClassificationInput</name><anchor>huggingface_hub.TextClassificationInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_classification.py#L25</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.text_classification.TextClassificationParameters] = None"}]</parameters></docstring>
Inputs for Text Classification inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextClassificationOutputElement</name><anchor>huggingface_hub.TextClassificationOutputElement</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_classification.py#L35</source><parameters>[{"name": "label", "val": ": str"}, {"name": "score", "val": ": float"}]</parameters></docstring>
Outputs of inference for the Text Classification task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextClassificationParameters</name><anchor>huggingface_hub.TextClassificationParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_classification.py#L15</source><parameters>[{"name": "function_to_apply", "val": ": typing.Optional[ForwardRef('TextClassificationOutputTransform')] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}]</parameters></docstring>
Additional inference parameters for Text Classification

</div>
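The input and parameter dataclasses above map directly onto the JSON request body, with unset (`None`) fields omitted. Below is a minimal sketch of that mapping using local stand-in dataclasses; the real classes live in `huggingface_hub.inference._generated.types`, and the `to_payload` helper is illustrative, not the library's own serializer.

```python
from dataclasses import dataclass, asdict
from typing import Optional

# Local stand-ins mirroring the documented TextClassification* dataclasses.
@dataclass
class TextClassificationParameters:
    function_to_apply: Optional[str] = None  # e.g. "sigmoid", "softmax", "none"
    top_k: Optional[int] = None

@dataclass
class TextClassificationInput:
    inputs: str
    parameters: Optional[TextClassificationParameters] = None

def to_payload(inp: TextClassificationInput) -> dict:
    """Serialize to a request body, dropping unset (None) fields."""
    def prune(d: dict) -> dict:
        return {k: prune(v) if isinstance(v, dict) else v
                for k, v in d.items() if v is not None}
    return prune(asdict(inp))

payload = to_payload(
    TextClassificationInput(
        inputs="I love this movie!",
        parameters=TextClassificationParameters(top_k=2),
    )
)
print(payload)  # {'inputs': 'I love this movie!', 'parameters': {'top_k': 2}}
```

The same `inputs`/`parameters` envelope recurs across most task types in this reference, so the pruning step generalizes.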

## text_generation[[huggingface_hub.TextGenerationInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextGenerationInput</name><anchor>huggingface_hub.TextGenerationInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_generation.py#L76</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.text_generation.TextGenerationInputGenerateParameters] = None"}, {"name": "stream", "val": ": typing.Optional[bool] = None"}]</parameters></docstring>
Text Generation Input.
Auto-generated from TGI specs.
For more details, check out
https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextGenerationInputGenerateParameters</name><anchor>huggingface_hub.TextGenerationInputGenerateParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_generation.py#L25</source><parameters>[{"name": "adapter_id", "val": ": typing.Optional[str] = None"}, {"name": "best_of", "val": ": typing.Optional[int] = None"}, {"name": "decoder_input_details", "val": ": typing.Optional[bool] = None"}, {"name": "details", "val": ": typing.Optional[bool] = None"}, {"name": "do_sample", "val": ": typing.Optional[bool] = None"}, {"name": "frequency_penalty", "val": ": typing.Optional[float] = None"}, {"name": "grammar", "val": ": typing.Optional[huggingface_hub.inference._generated.types.text_generation.TextGenerationInputGrammarType] = None"}, {"name": "max_new_tokens", "val": ": typing.Optional[int] = None"}, {"name": "repetition_penalty", "val": ": typing.Optional[float] = None"}, {"name": "return_full_text", "val": ": typing.Optional[bool] = None"}, {"name": "seed", "val": ": typing.Optional[int] = None"}, {"name": "stop", "val": ": typing.Optional[list[str]] = None"}, {"name": "temperature", "val": ": typing.Optional[float] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}, {"name": "top_n_tokens", "val": ": typing.Optional[int] = None"}, {"name": "top_p", "val": ": typing.Optional[float] = None"}, {"name": "truncate", "val": ": typing.Optional[int] = None"}, {"name": "typical_p", "val": ": typing.Optional[float] = None"}, {"name": "watermark", "val": ": typing.Optional[bool] = None"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextGenerationInputGrammarType</name><anchor>huggingface_hub.TextGenerationInputGrammarType</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_generation.py#L15</source><parameters>[{"name": "type", "val": ": TypeEnum"}, {"name": "value", "val": ": typing.Any"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextGenerationOutput</name><anchor>huggingface_hub.TextGenerationOutput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_generation.py#L129</source><parameters>[{"name": "generated_text", "val": ": str"}, {"name": "details", "val": ": typing.Optional[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputDetails] = None"}]</parameters></docstring>
Text Generation Output.
Auto-generated from TGI specs.
For more details, check out
https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextGenerationOutputBestOfSequence</name><anchor>huggingface_hub.TextGenerationOutputBestOfSequence</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_generation.py#L107</source><parameters>[{"name": "finish_reason", "val": ": TextGenerationOutputFinishReason"}, {"name": "generated_text", "val": ": str"}, {"name": "generated_tokens", "val": ": int"}, {"name": "prefill", "val": ": list"}, {"name": "tokens", "val": ": list"}, {"name": "seed", "val": ": typing.Optional[int] = None"}, {"name": "top_tokens", "val": ": typing.Optional[list[list[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputToken]]] = None"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextGenerationOutputDetails</name><anchor>huggingface_hub.TextGenerationOutputDetails</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_generation.py#L118</source><parameters>[{"name": "finish_reason", "val": ": TextGenerationOutputFinishReason"}, {"name": "generated_tokens", "val": ": int"}, {"name": "prefill", "val": ": list"}, {"name": "tokens", "val": ": list"}, {"name": "best_of_sequences", "val": ": typing.Optional[list[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputBestOfSequence]] = None"}, {"name": "seed", "val": ": typing.Optional[int] = None"}, {"name": "top_tokens", "val": ": typing.Optional[list[list[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputToken]]] = None"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextGenerationOutputPrefillToken</name><anchor>huggingface_hub.TextGenerationOutputPrefillToken</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_generation.py#L92</source><parameters>[{"name": "id", "val": ": int"}, {"name": "logprob", "val": ": float"}, {"name": "text", "val": ": str"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextGenerationOutputToken</name><anchor>huggingface_hub.TextGenerationOutputToken</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_generation.py#L99</source><parameters>[{"name": "id", "val": ": int"}, {"name": "logprob", "val": ": float"}, {"name": "special", "val": ": bool"}, {"name": "text", "val": ": str"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextGenerationStreamOutput</name><anchor>huggingface_hub.TextGenerationStreamOutput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_generation.py#L157</source><parameters>[{"name": "index", "val": ": int"}, {"name": "token", "val": ": TextGenerationStreamOutputToken"}, {"name": "details", "val": ": typing.Optional[huggingface_hub.inference._generated.types.text_generation.TextGenerationStreamOutputStreamDetails] = None"}, {"name": "generated_text", "val": ": typing.Optional[str] = None"}, {"name": "top_tokens", "val": ": typing.Optional[list[huggingface_hub.inference._generated.types.text_generation.TextGenerationStreamOutputToken]] = None"}]</parameters></docstring>
Text Generation Stream Output.
Auto-generated from TGI specs.
For more details, check out
https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextGenerationStreamOutputStreamDetails</name><anchor>huggingface_hub.TextGenerationStreamOutputStreamDetails</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_generation.py#L141</source><parameters>[{"name": "finish_reason", "val": ": TextGenerationOutputFinishReason"}, {"name": "generated_tokens", "val": ": int"}, {"name": "input_length", "val": ": int"}, {"name": "seed", "val": ": typing.Optional[int] = None"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextGenerationStreamOutputToken</name><anchor>huggingface_hub.TextGenerationStreamOutputToken</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_generation.py#L149</source><parameters>[{"name": "id", "val": ": int"}, {"name": "logprob", "val": ": float"}, {"name": "special", "val": ": bool"}, {"name": "text", "val": ": str"}]</parameters></docstring>


</div>
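When `stream=True`, the server emits a sequence of `TextGenerationStreamOutput` chunks: each carries one token, and only the final chunk sets `generated_text`. The sketch below mirrors just the fields needed to accumulate a streamed response (the real dataclasses above also carry token ids, logprobs, and stream details); the chunk values here are made up for illustration.

```python
from dataclasses import dataclass
from typing import Iterable, Optional

# Minimal mirrors of TextGenerationStreamOutputToken / TextGenerationStreamOutput.
@dataclass
class StreamToken:
    id: int
    logprob: float
    special: bool
    text: str

@dataclass
class StreamChunk:
    index: int
    token: StreamToken
    generated_text: Optional[str] = None  # set on the final chunk only

def collect(chunks: Iterable[StreamChunk]) -> str:
    """Accumulate non-special token texts; prefer the final chunk's full text."""
    pieces = []
    for c in chunks:
        if not c.token.special:
            pieces.append(c.token.text)
        if c.generated_text is not None:
            return c.generated_text
    return "".join(pieces)

chunks = [
    StreamChunk(0, StreamToken(11, -0.1, False, "Hello")),
    StreamChunk(1, StreamToken(12, -0.2, False, " world")),
    StreamChunk(2, StreamToken(0, 0.0, True, "</s>"), generated_text="Hello world"),
]
print(collect(chunks))  # Hello world
```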

## text_to_audio[[huggingface_hub.TextToAudioGenerationParameters]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextToAudioGenerationParameters</name><anchor>huggingface_hub.TextToAudioGenerationParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_to_audio.py#L15</source><parameters>[{"name": "do_sample", "val": ": typing.Optional[bool] = None"}, {"name": "early_stopping", "val": ": typing.Union[bool, ForwardRef('TextToAudioEarlyStoppingEnum'), NoneType] = None"}, {"name": "epsilon_cutoff", "val": ": typing.Optional[float] = None"}, {"name": "eta_cutoff", "val": ": typing.Optional[float] = None"}, {"name": "max_length", "val": ": typing.Optional[int] = None"}, {"name": "max_new_tokens", "val": ": typing.Optional[int] = None"}, {"name": "min_length", "val": ": typing.Optional[int] = None"}, {"name": "min_new_tokens", "val": ": typing.Optional[int] = None"}, {"name": "num_beam_groups", "val": ": typing.Optional[int] = None"}, {"name": "num_beams", "val": ": typing.Optional[int] = None"}, {"name": "penalty_alpha", "val": ": typing.Optional[float] = None"}, {"name": "temperature", "val": ": typing.Optional[float] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}, {"name": "top_p", "val": ": typing.Optional[float] = None"}, {"name": "typical_p", "val": ": typing.Optional[float] = None"}, {"name": "use_cache", "val": ": typing.Optional[bool] = None"}]</parameters></docstring>
Parametrization of the text generation process

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextToAudioInput</name><anchor>huggingface_hub.TextToAudioInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_to_audio.py#L83</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.text_to_audio.TextToAudioParameters] = None"}]</parameters></docstring>
Inputs for Text To Audio inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextToAudioOutput</name><anchor>huggingface_hub.TextToAudioOutput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_to_audio.py#L93</source><parameters>[{"name": "audio", "val": ": typing.Any"}, {"name": "sampling_rate", "val": ": float"}]</parameters></docstring>
Outputs of inference for the Text To Audio task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextToAudioParameters</name><anchor>huggingface_hub.TextToAudioParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_to_audio.py#L75</source><parameters>[{"name": "generation_parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.text_to_audio.TextToAudioGenerationParameters] = None"}]</parameters></docstring>
Additional inference parameters for Text To Audio

</div>

## text_to_image[[huggingface_hub.TextToImageInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextToImageInput</name><anchor>huggingface_hub.TextToImageInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_to_image.py#L36</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.text_to_image.TextToImageParameters] = None"}]</parameters></docstring>
Inputs for Text To Image inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextToImageOutput</name><anchor>huggingface_hub.TextToImageOutput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_to_image.py#L46</source><parameters>[{"name": "image", "val": ": typing.Any"}]</parameters></docstring>
Outputs of inference for the Text To Image task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextToImageParameters</name><anchor>huggingface_hub.TextToImageParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_to_image.py#L12</source><parameters>[{"name": "guidance_scale", "val": ": typing.Optional[float] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "scheduler", "val": ": typing.Optional[str] = None"}, {"name": "seed", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}]</parameters></docstring>
Additional inference parameters for Text To Image

</div>
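All fields of `TextToImageParameters` are optional, so a request body only needs the knobs you actually set. A hypothetical helper sketching that, with the allowed parameter names taken from the dataclass above (the helper itself is not part of the library):

```python
import json

def text_to_image_payload(prompt: str, **params) -> dict:
    """Assemble a text-to-image request body, skipping unset values."""
    allowed = {"guidance_scale", "height", "width", "negative_prompt",
               "num_inference_steps", "scheduler", "seed"}
    unknown = set(params) - allowed
    if unknown:
        raise ValueError(f"unknown parameters: {sorted(unknown)}")
    return {"inputs": prompt,
            "parameters": {k: v for k, v in params.items() if v is not None}}

body = text_to_image_payload(
    "a watercolor fox", guidance_scale=7.5, seed=42, negative_prompt="blurry"
)
print(json.dumps(body, sort_keys=True))
```

Fixing `seed` makes repeated calls reproducible on backends that honor it; omitted fields like `height` and `width` fall back to model defaults.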

## text_to_speech[[huggingface_hub.TextToSpeechGenerationParameters]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextToSpeechGenerationParameters</name><anchor>huggingface_hub.TextToSpeechGenerationParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_to_speech.py#L15</source><parameters>[{"name": "do_sample", "val": ": typing.Optional[bool] = None"}, {"name": "early_stopping", "val": ": typing.Union[bool, ForwardRef('TextToSpeechEarlyStoppingEnum'), NoneType] = None"}, {"name": "epsilon_cutoff", "val": ": typing.Optional[float] = None"}, {"name": "eta_cutoff", "val": ": typing.Optional[float] = None"}, {"name": "max_length", "val": ": typing.Optional[int] = None"}, {"name": "max_new_tokens", "val": ": typing.Optional[int] = None"}, {"name": "min_length", "val": ": typing.Optional[int] = None"}, {"name": "min_new_tokens", "val": ": typing.Optional[int] = None"}, {"name": "num_beam_groups", "val": ": typing.Optional[int] = None"}, {"name": "num_beams", "val": ": typing.Optional[int] = None"}, {"name": "penalty_alpha", "val": ": typing.Optional[float] = None"}, {"name": "temperature", "val": ": typing.Optional[float] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}, {"name": "top_p", "val": ": typing.Optional[float] = None"}, {"name": "typical_p", "val": ": typing.Optional[float] = None"}, {"name": "use_cache", "val": ": typing.Optional[bool] = None"}]</parameters></docstring>
Parametrization of the text generation process

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextToSpeechInput</name><anchor>huggingface_hub.TextToSpeechInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_to_speech.py#L83</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.text_to_speech.TextToSpeechParameters] = None"}]</parameters></docstring>
Inputs for Text To Speech inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextToSpeechOutput</name><anchor>huggingface_hub.TextToSpeechOutput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_to_speech.py#L93</source><parameters>[{"name": "audio", "val": ": typing.Any"}, {"name": "sampling_rate", "val": ": typing.Optional[float] = None"}]</parameters></docstring>
Outputs of inference for the Text To Speech task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextToSpeechParameters</name><anchor>huggingface_hub.TextToSpeechParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_to_speech.py#L75</source><parameters>[{"name": "generation_parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.text_to_speech.TextToSpeechGenerationParameters] = None"}]</parameters></docstring>
Additional inference parameters for Text To Speech

</div>
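Note the extra level of nesting for this task: `TextToSpeechParameters` wraps a single `generation_parameters` object (the `TextToSpeechGenerationParameters` documented above). A sketch of the resulting doubly nested body, using a hypothetical helper:

```python
def tts_payload(text: str, **gen) -> dict:
    """Build a text-to-speech body; generation knobs nest one level deeper."""
    return {
        "inputs": text,
        "parameters": {
            "generation_parameters": {k: v for k, v in gen.items() if v is not None}
        },
    }

body = tts_payload("Hello!", do_sample=True, temperature=0.8, max_new_tokens=None)
print(body)
# {'inputs': 'Hello!', 'parameters': {'generation_parameters': {'do_sample': True, 'temperature': 0.8}}}
```

The `text_to_audio` types a few sections up share this exact shape, differing only in that `TextToAudioOutput.sampling_rate` is required while `TextToSpeechOutput.sampling_rate` is optional.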

## text_to_video[[huggingface_hub.TextToVideoInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextToVideoInput</name><anchor>huggingface_hub.TextToVideoInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_to_video.py#L32</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.text_to_video.TextToVideoParameters] = None"}]</parameters></docstring>
Inputs for Text To Video inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextToVideoOutput</name><anchor>huggingface_hub.TextToVideoOutput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_to_video.py#L42</source><parameters>[{"name": "video", "val": ": typing.Any"}]</parameters></docstring>
Outputs of inference for the Text To Video task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextToVideoParameters</name><anchor>huggingface_hub.TextToVideoParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_to_video.py#L12</source><parameters>[{"name": "guidance_scale", "val": ": typing.Optional[float] = None"}, {"name": "negative_prompt", "val": ": typing.Optional[list[str]] = None"}, {"name": "num_frames", "val": ": typing.Optional[float] = None"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "seed", "val": ": typing.Optional[int] = None"}]</parameters></docstring>
Additional inference parameters for Text To Video

</div>

## token_classification[[huggingface_hub.TokenClassificationInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TokenClassificationInput</name><anchor>huggingface_hub.TokenClassificationInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/token_classification.py#L27</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.token_classification.TokenClassificationParameters] = None"}]</parameters></docstring>
Inputs for Token Classification inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TokenClassificationOutputElement</name><anchor>huggingface_hub.TokenClassificationOutputElement</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/token_classification.py#L37</source><parameters>[{"name": "end", "val": ": int"}, {"name": "score", "val": ": float"}, {"name": "start", "val": ": int"}, {"name": "word", "val": ": str"}, {"name": "entity", "val": ": typing.Optional[str] = None"}, {"name": "entity_group", "val": ": typing.Optional[str] = None"}]</parameters></docstring>
Outputs of inference for the Token Classification task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TokenClassificationParameters</name><anchor>huggingface_hub.TokenClassificationParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/token_classification.py#L15</source><parameters>[{"name": "aggregation_strategy", "val": ": typing.Optional[ForwardRef('TokenClassificationAggregationStrategy')] = None"}, {"name": "ignore_labels", "val": ": typing.Optional[list[str]] = None"}, {"name": "stride", "val": ": typing.Optional[int] = None"}]</parameters></docstring>
Additional inference parameters for Token Classification

</div>
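Each `TokenClassificationOutputElement` carries `start`/`end` character offsets into the original input, so entity spans can be recovered directly from the text. A sketch with a local mirror dataclass and a hand-written response (the offsets below are for this example string, not real model output):

```python
from dataclasses import dataclass
from typing import Optional

# Mirror of TokenClassificationOutputElement (subset of fields).
@dataclass
class Entity:
    end: int
    score: float
    start: int
    word: str
    entity_group: Optional[str] = None

text = "Hugging Face is based in New York"
raw = [
    {"entity_group": "ORG", "score": 0.99, "word": "Hugging Face", "start": 0, "end": 12},
    {"entity_group": "LOC", "score": 0.98, "word": "New York", "start": 25, "end": 33},
]
entities = [Entity(**e) for e in raw]

# Slice the input text with the reported offsets to recover each span.
spans = [(e.entity_group, text[e.start:e.end]) for e in entities]
print(spans)  # [('ORG', 'Hugging Face'), ('LOC', 'New York')]
```

Whether results arrive as grouped entities (`entity_group` set) or per-token labels (`entity` set) depends on the `aggregation_strategy` parameter documented above.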

## translation[[huggingface_hub.TranslationInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TranslationInput</name><anchor>huggingface_hub.TranslationInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/translation.py#L35</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.translation.TranslationParameters] = None"}]</parameters></docstring>
Inputs for Translation inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TranslationOutput</name><anchor>huggingface_hub.TranslationOutput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/translation.py#L45</source><parameters>[{"name": "translation_text", "val": ": str"}]</parameters></docstring>
Outputs of inference for the Translation task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TranslationParameters</name><anchor>huggingface_hub.TranslationParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/translation.py#L15</source><parameters>[{"name": "clean_up_tokenization_spaces", "val": ": typing.Optional[bool] = None"}, {"name": "generate_parameters", "val": ": typing.Optional[dict[str, typing.Any]] = None"}, {"name": "src_lang", "val": ": typing.Optional[str] = None"}, {"name": "tgt_lang", "val": ": typing.Optional[str] = None"}, {"name": "truncation", "val": ": typing.Optional[ForwardRef('TranslationTruncationStrategy')] = None"}]</parameters></docstring>
Additional inference parameters for Translation

</div>

## video_classification[[huggingface_hub.VideoClassificationInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.VideoClassificationInput</name><anchor>huggingface_hub.VideoClassificationInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/video_classification.py#L29</source><parameters>[{"name": "inputs", "val": ": typing.Any"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.video_classification.VideoClassificationParameters] = None"}]</parameters></docstring>
Inputs for Video Classification inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.VideoClassificationOutputElement</name><anchor>huggingface_hub.VideoClassificationOutputElement</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/video_classification.py#L39</source><parameters>[{"name": "label", "val": ": str"}, {"name": "score", "val": ": float"}]</parameters></docstring>
Outputs of inference for the Video Classification task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.VideoClassificationParameters</name><anchor>huggingface_hub.VideoClassificationParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/video_classification.py#L15</source><parameters>[{"name": "frame_sampling_rate", "val": ": typing.Optional[int] = None"}, {"name": "function_to_apply", "val": ": typing.Optional[ForwardRef('VideoClassificationOutputTransform')] = None"}, {"name": "num_frames", "val": ": typing.Optional[int] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}]</parameters></docstring>
Additional inference parameters for Video Classification

</div>

## visual_question_answering[[huggingface_hub.VisualQuestionAnsweringInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.VisualQuestionAnsweringInput</name><anchor>huggingface_hub.VisualQuestionAnsweringInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/visual_question_answering.py#L33</source><parameters>[{"name": "inputs", "val": ": VisualQuestionAnsweringInputData"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.visual_question_answering.VisualQuestionAnsweringParameters] = None"}]</parameters></docstring>
Inputs for Visual Question Answering inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.VisualQuestionAnsweringInputData</name><anchor>huggingface_hub.VisualQuestionAnsweringInputData</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/visual_question_answering.py#L12</source><parameters>[{"name": "image", "val": ": typing.Any"}, {"name": "question", "val": ": str"}]</parameters></docstring>
One (image, question) pair to answer

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.VisualQuestionAnsweringOutputElement</name><anchor>huggingface_hub.VisualQuestionAnsweringOutputElement</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/visual_question_answering.py#L43</source><parameters>[{"name": "score", "val": ": float"}, {"name": "answer", "val": ": typing.Optional[str] = None"}]</parameters></docstring>
Outputs of inference for the Visual Question Answering task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.VisualQuestionAnsweringParameters</name><anchor>huggingface_hub.VisualQuestionAnsweringParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/visual_question_answering.py#L22</source><parameters>[{"name": "top_k", "val": ": typing.Optional[int] = None"}]</parameters></docstring>
Additional inference parameters for Visual Question Answering

</div>
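Unlike the text-only tasks, the VQA `inputs` field is itself a structured `(image, question)` pair (`VisualQuestionAnsweringInputData` above). For HTTP transport the image is commonly sent base64-encoded; treat that encoding choice, and the helper below, as assumptions for illustration:

```python
import base64
from typing import Optional

def vqa_payload(image_bytes: bytes, question: str, top_k: Optional[int] = None) -> dict:
    """Nest the (image, question) pair under 'inputs'; image is base64-encoded."""
    body = {
        "inputs": {
            "image": base64.b64encode(image_bytes).decode(),
            "question": question,
        }
    }
    if top_k is not None:
        body["parameters"] = {"top_k": top_k}
    return body

# Truncated PNG magic bytes stand in for a real image file here.
body = vqa_payload(b"\x89PNG...", "What animal is shown?", top_k=3)
```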

## zero_shot_classification[[huggingface_hub.ZeroShotClassificationInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ZeroShotClassificationInput</name><anchor>huggingface_hub.ZeroShotClassificationInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/zero_shot_classification.py#L29</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": ZeroShotClassificationParameters"}]</parameters></docstring>
Inputs for Zero Shot Classification inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ZeroShotClassificationOutputElement</name><anchor>huggingface_hub.ZeroShotClassificationOutputElement</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/zero_shot_classification.py#L39</source><parameters>[{"name": "label", "val": ": str"}, {"name": "score", "val": ": float"}]</parameters></docstring>
Outputs of inference for the Zero Shot Classification task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ZeroShotClassificationParameters</name><anchor>huggingface_hub.ZeroShotClassificationParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/zero_shot_classification.py#L12</source><parameters>[{"name": "candidate_labels", "val": ": list"}, {"name": "hypothesis_template", "val": ": typing.Optional[str] = None"}, {"name": "multi_label", "val": ": typing.Optional[bool] = None"}]</parameters></docstring>
Additional inference parameters for Zero Shot Classification

</div>

## zero_shot_image_classification[[huggingface_hub.ZeroShotImageClassificationInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ZeroShotImageClassificationInput</name><anchor>huggingface_hub.ZeroShotImageClassificationInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/zero_shot_image_classification.py#L24</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": ZeroShotImageClassificationParameters"}]</parameters></docstring>
Inputs for Zero Shot Image Classification inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ZeroShotImageClassificationOutputElement</name><anchor>huggingface_hub.ZeroShotImageClassificationOutputElement</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/zero_shot_image_classification.py#L34</source><parameters>[{"name": "label", "val": ": str"}, {"name": "score", "val": ": float"}]</parameters></docstring>
Outputs of inference for the Zero Shot Image Classification task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ZeroShotImageClassificationParameters</name><anchor>huggingface_hub.ZeroShotImageClassificationParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/zero_shot_image_classification.py#L12</source><parameters>[{"name": "candidate_labels", "val": ": list"}, {"name": "hypothesis_template", "val": ": typing.Optional[str] = None"}]</parameters></docstring>
Additional inference parameters for Zero Shot Image Classification

</div>

## zero_shot_object_detection[[huggingface_hub.ZeroShotObjectDetectionBoundingBox]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ZeroShotObjectDetectionBoundingBox</name><anchor>huggingface_hub.ZeroShotObjectDetectionBoundingBox</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/zero_shot_object_detection.py#L28</source><parameters>[{"name": "xmax", "val": ": int"}, {"name": "xmin", "val": ": int"}, {"name": "ymax", "val": ": int"}, {"name": "ymin", "val": ": int"}]</parameters></docstring>
The predicted bounding box. Coordinates are relative to the top left corner of the input
image.


</div>
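For illustration, the coordinate convention can be captured in a small dataclass. This is a hypothetical helper, not part of `huggingface_hub`: `(xmin, ymin)` is the top-left corner and `(xmax, ymax)` the bottom-right, in pixels from the image's top-left corner.

```python
from dataclasses import dataclass

@dataclass
class BoundingBox:
    """Illustrative mirror of the coordinate convention above:
    (xmin, ymin) is the top-left corner, (xmax, ymax) the bottom-right,
    measured in pixels from the image's top-left corner."""
    xmax: int
    xmin: int
    ymax: int
    ymin: int

    @property
    def width(self) -> int:
        return self.xmax - self.xmin

    @property
    def height(self) -> int:
        return self.ymax - self.ymin

    @property
    def area(self) -> int:
        return self.width * self.height

box = BoundingBox(xmax=110, xmin=10, ymax=70, ymin=20)
assert (box.width, box.height, box.area) == (100, 50, 5000)
```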

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ZeroShotObjectDetectionInput</name><anchor>huggingface_hub.ZeroShotObjectDetectionInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/zero_shot_object_detection.py#L18</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": ZeroShotObjectDetectionParameters"}]</parameters></docstring>
Inputs for Zero Shot Object Detection inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ZeroShotObjectDetectionOutputElement</name><anchor>huggingface_hub.ZeroShotObjectDetectionOutputElement</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/zero_shot_object_detection.py#L40</source><parameters>[{"name": "box", "val": ": ZeroShotObjectDetectionBoundingBox"}, {"name": "label", "val": ": str"}, {"name": "score", "val": ": float"}]</parameters></docstring>
Outputs of inference for the Zero Shot Object Detection task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ZeroShotObjectDetectionParameters</name><anchor>huggingface_hub.ZeroShotObjectDetectionParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/zero_shot_object_detection.py#L10</source><parameters>[{"name": "candidate_labels", "val": ": list"}]</parameters></docstring>
Additional inference parameters for Zero Shot Object Detection

</div>

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/package_reference/inference_types.md" />

### Utilities
https://huggingface.co/docs/huggingface_hub/main/package_reference/utilities.md

# Utilities

## Configure logging[[huggingface_hub.utils.logging.get_verbosity]]

The `huggingface_hub` package exposes a `logging` utility to control the logging level of the package itself.
You can import it as such:

```py
from huggingface_hub import logging
```

Then, set the verbosity to control how many logs you'll see:

```python
from huggingface_hub import logging

logging.set_verbosity_error()
logging.set_verbosity_warning()
logging.set_verbosity_info()
logging.set_verbosity_debug()

logging.set_verbosity(...)
```

The levels should be understood as follows:

- `error`: only show critical logs about usage that may result in an error or unexpected behavior.
- `warning`: show logs that aren't critical but may point to unintended behavior.
  Important informative logs may also be shown.
- `info`: show most logs, including some verbose logging about what is happening under the hood.
  If something behaves unexpectedly, we recommend switching to this level to get more
  information.
- `debug`: show all logs, including internal logs that can be used to track exactly what is
  happening under the hood.
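The four levels above correspond to the standard library's `logging` levels, which `huggingface_hub` re-exports (e.g. `huggingface_hub.logging.ERROR`). A dependency-free sketch of the filtering rule, with illustrative names:

```python
import logging

# The verbosity levels map onto the standard library's logging levels
# (huggingface_hub re-exports these constants, e.g. huggingface_hub.logging.ERROR).
VERBOSITY_LEVELS = {
    "error": logging.ERROR,      # 40 - only critical problems
    "warning": logging.WARNING,  # 30 - the default
    "info": logging.INFO,        # 20 - verbose
    "debug": logging.DEBUG,      # 10 - everything
}

def is_emitted(record_level: int, verbosity: int) -> bool:
    """A log record is emitted when its level is at least the configured verbosity."""
    return record_level >= verbosity

# With verbosity set to "warning", info logs are suppressed but errors pass:
assert not is_emitted(logging.INFO, VERBOSITY_LEVELS["warning"])
assert is_emitted(logging.ERROR, VERBOSITY_LEVELS["warning"])
```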

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.utils.logging.get_verbosity</name><anchor>huggingface_hub.utils.logging.get_verbosity</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/logging.py#L105</source><parameters>[]</parameters><retdesc>Logging level, e.g., `huggingface_hub.logging.DEBUG` and
`huggingface_hub.logging.INFO`.</retdesc></docstring>
Return the current level for the HuggingFace Hub's root logger.



> [!TIP]
> HuggingFace Hub has the following logging levels:
>
> - `huggingface_hub.logging.CRITICAL`, `huggingface_hub.logging.FATAL`
> - `huggingface_hub.logging.ERROR`
> - `huggingface_hub.logging.WARNING`, `huggingface_hub.logging.WARN`
> - `huggingface_hub.logging.INFO`
> - `huggingface_hub.logging.DEBUG`


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.utils.logging.set_verbosity</name><anchor>huggingface_hub.utils.logging.set_verbosity</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/logging.py#L124</source><parameters>[{"name": "verbosity", "val": ": int"}]</parameters><paramsdesc>- **verbosity** (`int`) --
  Logging level, e.g., `huggingface_hub.logging.DEBUG` and
  `huggingface_hub.logging.INFO`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the level for the HuggingFace Hub's root logger.




</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.utils.logging.set_verbosity_info</name><anchor>huggingface_hub.utils.logging.set_verbosity_info</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/logging.py#L136</source><parameters>[]</parameters></docstring>

Sets the verbosity to `logging.INFO`.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.utils.logging.set_verbosity_debug</name><anchor>huggingface_hub.utils.logging.set_verbosity_debug</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/logging.py#L150</source><parameters>[]</parameters></docstring>

Sets the verbosity to `logging.DEBUG`.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.utils.logging.set_verbosity_warning</name><anchor>huggingface_hub.utils.logging.set_verbosity_warning</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/logging.py#L143</source><parameters>[]</parameters></docstring>

Sets the verbosity to `logging.WARNING`.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.utils.logging.set_verbosity_error</name><anchor>huggingface_hub.utils.logging.set_verbosity_error</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/logging.py#L157</source><parameters>[]</parameters></docstring>

Sets the verbosity to `logging.ERROR`.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.utils.logging.disable_propagation</name><anchor>huggingface_hub.utils.logging.disable_propagation</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/logging.py#L164</source><parameters>[]</parameters></docstring>

Disable propagation of the library log outputs. Note that log propagation is
disabled by default.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.utils.logging.enable_propagation</name><anchor>huggingface_hub.utils.logging.enable_propagation</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/logging.py#L172</source><parameters>[]</parameters></docstring>

Enable propagation of the library log outputs. Please disable the
HuggingFace Hub's default handler to prevent double logging if the root
logger has been configured.


</div>
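Why enabling propagation requires removing the library's own handler can be demonstrated with the standard library's `logging` module alone. This is a generic sketch (`mylib` stands in for the hub's logger), not `huggingface_hub`-specific code:

```python
import io
import logging

# Generic stdlib demonstration of why enabling propagation without
# removing the library's own handler produces duplicate log lines.
stream = io.StringIO()
root_handler = logging.StreamHandler(stream)
logging.getLogger().addHandler(root_handler)

lib_logger = logging.getLogger("mylib")   # stands in for the hub's logger
lib_logger.setLevel(logging.INFO)
lib_handler = logging.StreamHandler(stream)
lib_logger.addHandler(lib_handler)
lib_logger.propagate = True               # like enable_propagation()

lib_logger.info("hello")
assert stream.getvalue().count("hello") == 2   # handled twice: lib + root

stream.truncate(0)
stream.seek(0)
lib_logger.removeHandler(lib_handler)     # drop the library's handler...
lib_logger.info("again")
assert stream.getvalue().count("again") == 1   # ...and each record appears once

logging.getLogger().removeHandler(root_handler)  # clean up
```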

### Repo-specific helper methods[[huggingface_hub.utils.logging.get_logger]]

The methods exposed below are relevant when modifying modules of the `huggingface_hub` library itself.
You shouldn't need them if you use `huggingface_hub` without modifying it.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.utils.logging.get_logger</name><anchor>huggingface_hub.utils.logging.get_logger</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/logging.py#L80</source><parameters>[{"name": "name", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **name** (`str`, *optional*) --
  The name of the logger to get, usually the filename</paramsdesc><paramgroups>0</paramgroups></docstring>

Returns a logger with the specified name. This function is not supposed
to be directly accessed by library users.



<ExampleCodeBlock anchor="huggingface_hub.utils.logging.get_logger.example">

Example:

```python
>>> from huggingface_hub import logging

>>> logger = logging.get_logger(__name__)
>>> logger.setLevel(logging.INFO)
```

</ExampleCodeBlock>


</div>

## Configure progress bars

Progress bars are a useful tool to display information to the user while a long-running task is being executed (e.g.
when downloading or uploading files). `huggingface_hub` exposes a `tqdm` wrapper to display progress bars in a
consistent way across the library.

By default, progress bars are enabled. You can disable them globally by setting the `HF_HUB_DISABLE_PROGRESS_BARS`
environment variable. You can also enable/disable them using [enable_progress_bars()](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.utils.enable_progress_bars) and
[disable_progress_bars()](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.utils.disable_progress_bars). If set, the environment variable takes priority over the helpers.

```py
>>> from huggingface_hub import snapshot_download
>>> from huggingface_hub.utils import are_progress_bars_disabled, disable_progress_bars, enable_progress_bars

>>> # Disable progress bars globally
>>> disable_progress_bars()

>>> # Progress bar will not be shown !
>>> snapshot_download("gpt2")

>>> are_progress_bars_disabled()
True

>>> # Re-enable progress bars globally
>>> enable_progress_bars()
```

### Group-specific control of progress bars

You can also enable or disable progress bars for specific groups. This allows you to manage progress bar visibility more granularly within different parts of your application or library. When a progress bar is disabled for a group, all subgroups under it are also affected unless explicitly overridden.

```py
>>> from huggingface_hub.utils import tqdm

# Disable progress bars for a specific group
>>> disable_progress_bars("peft.foo")
>>> assert not are_progress_bars_disabled("peft")
>>> assert not are_progress_bars_disabled("peft.something")
>>> assert are_progress_bars_disabled("peft.foo")
>>> assert are_progress_bars_disabled("peft.foo.bar")

# Re-enable progress bars for a subgroup
>>> enable_progress_bars("peft.foo.bar")
>>> assert are_progress_bars_disabled("peft.foo")
>>> assert not are_progress_bars_disabled("peft.foo.bar")

# Use groups with tqdm
# No progress bar for `name="peft.foo"`
>>> for _ in tqdm(range(5), name="peft.foo"):
...     pass

# Progress bar will be shown for `name="peft.foo.bar"`
>>> for _ in tqdm(range(5), name="peft.foo.bar"):
...     pass
100%|███████████████████████████████████████| 5/5 [00:00<00:00, 117817.53it/s]
```
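The prefix-based resolution shown above (the most specific configured ancestor of a dot-separated group wins, then the global setting applies) can be sketched without `huggingface_hub`. The helper names mirror the real API, but this is not the library's implementation:

```python
from typing import Dict, Optional

# Minimal sketch of group-based progress-bar control, assuming the
# resolution rule described above: the most specific configured ancestor
# of a dot-separated group wins, falling back to the global setting.
# Function names mirror the real API, but this is NOT huggingface_hub's code.
_groups: Dict[str, bool] = {}   # group name -> disabled?
_global_disabled = False

def disable(name: Optional[str] = None) -> None:
    global _global_disabled
    if name is None:
        _global_disabled = True
    else:
        _groups[name] = True

def enable(name: Optional[str] = None) -> None:
    global _global_disabled
    if name is None:
        _global_disabled = False
    else:
        _groups[name] = False

def are_disabled(name: Optional[str] = None) -> bool:
    if name is None:
        return _global_disabled
    parts = name.split(".")
    # Walk from the most specific prefix down to the least specific one.
    for i in range(len(parts), 0, -1):
        prefix = ".".join(parts[:i])
        if prefix in _groups:
            return _groups[prefix]
    return _global_disabled

disable("peft.foo")
assert not are_disabled("peft")       # parent group is unaffected
assert are_disabled("peft.foo.bar")   # subgroups inherit the setting
enable("peft.foo.bar")
assert are_disabled("peft.foo")       # override stays local to the subgroup
assert not are_disabled("peft.foo.bar")
```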

### are_progress_bars_disabled[[huggingface_hub.utils.are_progress_bars_disabled]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.utils.are_progress_bars_disabled</name><anchor>huggingface_hub.utils.are_progress_bars_disabled</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/tqdm.py#L172</source><parameters>[{"name": "name", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **name** (`str`, *optional*) --
  The group name to check; if None, checks the global setting.</paramsdesc><paramgroups>0</paramgroups><rettype>`bool`</rettype><retdesc>True if progress bars are disabled, False otherwise.</retdesc></docstring>

Check if progress bars are disabled globally or for a specific group.

This function returns whether progress bars are disabled for a given group or globally.
It checks the `HF_HUB_DISABLE_PROGRESS_BARS` environment variable first, then the programmatic
settings.








</div>

### disable_progress_bars[[huggingface_hub.utils.disable_progress_bars]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.utils.disable_progress_bars</name><anchor>huggingface_hub.utils.disable_progress_bars</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/tqdm.py#L108</source><parameters>[{"name": "name", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **name** (`str`, *optional*) --
  The name of the group for which to disable the progress bars. If None,
  progress bars are disabled globally.</paramsdesc><paramgroups>0</paramgroups><raises>- ``Warning`` -- If the environment variable precludes changes.</raises><raisederrors>``Warning``</raisederrors></docstring>

Disable progress bars either globally or for a specified group.

This function updates the state of progress bars based on a group name.
If no group name is provided, all progress bars are disabled. The operation
respects the `HF_HUB_DISABLE_PROGRESS_BARS` environment variable's setting.








</div>

### enable_progress_bars[[huggingface_hub.utils.enable_progress_bars]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.utils.enable_progress_bars</name><anchor>huggingface_hub.utils.enable_progress_bars</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/tqdm.py#L140</source><parameters>[{"name": "name", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **name** (`str`, *optional*) --
  The name of the group for which to enable the progress bars. If None,
  progress bars are enabled globally.</paramsdesc><paramgroups>0</paramgroups><raises>- ``Warning`` -- If the environment variable precludes changes.</raises><raisederrors>``Warning``</raisederrors></docstring>

Enable progress bars either globally or for a specified group.

This function sets the progress bars to enabled for the specified group or globally
if no group is specified. The operation is subject to the `HF_HUB_DISABLE_PROGRESS_BARS`
environment setting.








</div>

## Configuring the HTTP Backend[[huggingface_hub.set_client_factory]]

<Tip>

In `huggingface_hub` v0.x, HTTP requests were handled with `requests`, and configuration was done via `configure_http_backend`. Since we now use `httpx`, configuration works differently: you must provide a factory function that takes no arguments and returns an `httpx.Client`. You can review the [default implementation here](https://github.com/huggingface/huggingface_hub/blob/v1.0-release/src/huggingface_hub/utils/_http.py) to see which parameters are used by default.

</Tip>


In some setups, you may need to control how HTTP requests are made, for example when working behind a proxy. The `huggingface_hub` library allows you to configure this globally with [set_client_factory()](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.set_client_factory). After configuration, all requests to the Hub will use your custom settings. Since `huggingface_hub` relies on `httpx.Client` under the hood, you can check the [`httpx` documentation](https://www.python-httpx.org/advanced/clients/) for details on available parameters.

If you are building a third-party library and need to make direct requests to the Hub, use [get_session()](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.get_session) to obtain a correctly configured `httpx` client. Replace any direct `httpx.get(...)` calls with `get_session().get(...)` to ensure proper behavior.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.set_client_factory</name><anchor>huggingface_hub.set_client_factory</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_http.py#L158</source><parameters>[{"name": "client_factory", "val": ": typing.Callable[[], httpx.Client]"}]</parameters></docstring>

Set the HTTP client factory to be used by `huggingface_hub`.

The client factory is a function that returns an `httpx.Client` object. On the first call to `get_session()`, the factory
is used to create a new `httpx.Client` that is then shared between all calls made by `huggingface_hub`.

This can be useful if you are running your scripts in a specific environment requiring custom configuration (e.g. a custom proxy or certificates).

Use `get_session()` to get a correctly configured `httpx.Client`.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.get_session</name><anchor>huggingface_hub.get_session</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_http.py#L194</source><parameters>[]</parameters></docstring>

Get an `httpx.Client` object, using the client factory configured by the user.

This client is shared between all calls made by `huggingface_hub`, so you should not close it manually.

Use [set_client_factory()](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.set_client_factory) to customize the `httpx.Client`.


</div>

In rare cases, you may want to manually close the current session (for example, after a transient `SSLError`). You can do this with [close_session()](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.close_session). A new session will automatically be created on the next call to [get_session()](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.get_session).

Sessions are always closed automatically when the process exits.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.close_session</name><anchor>huggingface_hub.close_session</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_http.py#L225</source><parameters>[]</parameters></docstring>

Close the global `httpx.Client` used by `huggingface_hub`.

If a Client is closed, it will be recreated on the next call to [get_session()](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.get_session).

Can be useful if e.g. an SSL certificate has been updated.


</div>
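The factory/shared-client lifecycle described above can be sketched with a stand-in client class. This is illustrative only, not `huggingface_hub`'s internals, and `FakeClient` replaces `httpx.Client` to keep the sketch dependency-free:

```python
from typing import Callable, Optional

class FakeClient:
    """Stand-in for `httpx.Client`, so the sketch stays dependency-free."""
    def __init__(self, proxy: Optional[str] = None):
        self.proxy = proxy
        self.closed = False
    def close(self) -> None:
        self.closed = True

# Illustrative lifecycle: a zero-argument factory, a lazily created
# shared client, and a close helper that forces re-creation.
# (NOT huggingface_hub's actual internals.)
_factory: Callable[[], FakeClient] = FakeClient
_client: Optional[FakeClient] = None

def set_client_factory(factory: Callable[[], FakeClient]) -> None:
    global _factory, _client
    _factory = factory
    _client = None                 # next get_session() uses the new factory

def get_session() -> FakeClient:
    global _client
    if _client is None or _client.closed:
        _client = _factory()       # created once, then shared
    return _client

def close_session() -> None:
    if _client is not None:
        _client.close()            # re-created lazily on the next call

set_client_factory(lambda: FakeClient(proxy="http://proxy.local:3128"))
assert get_session() is get_session()                     # one shared instance
close_session()
assert get_session().proxy == "http://proxy.local:3128"   # fresh client, same factory
```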

For async code, use [set_async_client_factory()](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.set_async_client_factory) to configure an `httpx.AsyncClient` and [get_async_session()](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.get_async_session) to retrieve one.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.set_async_client_factory</name><anchor>huggingface_hub.set_async_client_factory</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_http.py#L175</source><parameters>[{"name": "async_client_factory", "val": ": typing.Callable[[], httpx.AsyncClient]"}]</parameters></docstring>

Set the HTTP async client factory to be used by `huggingface_hub`.

The async client factory is a function that returns an `httpx.AsyncClient` object.
This can be useful if you are running your scripts in a specific environment requiring custom configuration (e.g. a custom proxy or certificates).
Use `get_async_session()` to get a correctly configured `httpx.AsyncClient`.

<Tip warning={true}>

Contrary to the `httpx.Client` that is shared between all calls made by `huggingface_hub`, the `httpx.AsyncClient` is not shared.
It is recommended to use an async context manager to ensure the client is properly closed when the context is exited.

</Tip>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.get_async_session</name><anchor>huggingface_hub.get_async_session</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_http.py#L209</source><parameters>[]</parameters></docstring>

Return an `httpx.AsyncClient` object, using the client factory configured by the user.

Use [set_async_client_factory()](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.set_async_client_factory) to customize the `httpx.AsyncClient`.

<Tip warning={true}>

Contrary to the `httpx.Client` that is shared between all calls made by `huggingface_hub`, the `httpx.AsyncClient` is not shared.
It is recommended to use an async context manager to ensure the client is properly closed when the context is exited.

</Tip>


</div>

<Tip>

Unlike the synchronous client, the lifecycle of the async client is not managed automatically. Use an async context manager to handle it properly.

</Tip>
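A minimal sketch of the recommended async lifecycle, with a stand-in class in place of `httpx.AsyncClient`:

```python
import asyncio

class FakeAsyncClient:
    """Stand-in for `httpx.AsyncClient`, illustrating the lifecycle only."""
    def __init__(self) -> None:
        self.closed = False
    async def __aenter__(self) -> "FakeAsyncClient":
        return self
    async def __aexit__(self, *exc) -> None:
        self.closed = True       # the context manager closes the client
    async def get(self, url: str) -> str:
        return f"response from {url}"

async def main() -> bool:
    client = FakeAsyncClient()
    async with client:           # recommended: scope the client explicitly
        await client.get("https://huggingface.co")
    return client.closed

assert asyncio.run(main())       # the client is closed once the context exits
```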

## Handle HTTP errors

`huggingface_hub` defines its own HTTP errors to refine the errors raised by the underlying
HTTP client (`httpx`) with additional information sent back by the server.

### Raise for status[[huggingface_hub.hf_raise_for_status]]

[hf_raise_for_status()](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.hf_raise_for_status) is meant to be the central method to "raise for status" for any
request made to the Hub. It wraps the base `Response.raise_for_status` to provide
additional information. Any error thrown is converted into an `HfHubHTTPError`.

```py
from huggingface_hub.utils import get_session, hf_raise_for_status, HfHubHTTPError

response = get_session().post(...)
try:
    hf_raise_for_status(response)
except HfHubHTTPError as e:
    print(str(e))  # formatted message
    e.request_id, e.server_message  # details returned by server

    # Complete the error message with additional information once it's raised
    e.append_to_message("\n`create_commit` expects the repository to exist.")
    raise
```

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.hf_raise_for_status</name><anchor>huggingface_hub.hf_raise_for_status</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_http.py#L519</source><parameters>[{"name": "response", "val": ": Response"}, {"name": "endpoint_name", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **response** (`Response`) --
  Response from the server.
- **endpoint_name** (`str`, *optional*) --
  Name of the endpoint that has been called. If provided, the error message will be more complete.</paramsdesc><paramgroups>0</paramgroups></docstring>

Internal version of `response.raise_for_status()` that refines a potential HTTPError.
The raised exception will be an instance of [HfHubHTTPError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.HfHubHTTPError).

This helper is meant to be the single method used to raise for status when making a call to the Hugging Face Hub.



> [!WARNING]
> Raises when the request has failed:
>
>     - [RepositoryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)
>         If the repository to download from cannot be found. This may be because it
>         doesn't exist, because `repo_type` is not set correctly, or because the repo
>         is `private` and you do not have access.
>     - [GatedRepoError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.GatedRepoError)
>         If the repository exists but is gated and the user is not on the authorized
>         list.
>     - [RevisionNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError)
>         If the repository exists but the revision couldn't be found.
>     - [EntryNotFoundError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.EntryNotFoundError)
>         If the repository exists but the entry (e.g. the requested file) couldn't be
>         found.
>     - [BadRequestError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.BadRequestError)
>         If the request failed with an HTTP 400 BadRequest error.
>     - [HfHubHTTPError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.HfHubHTTPError)
>         If request failed for a reason not listed above.


</div>

### HTTP errors

Here is a list of HTTP errors thrown in `huggingface_hub`.

#### HfHubHTTPError[[huggingface_hub.errors.HfHubHTTPError]]

`HfHubHTTPError` is the parent class for any HF Hub HTTP error. It takes care of parsing
the server response and formatting the error message to provide the user with as much
information as possible.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.errors.HfHubHTTPError</name><anchor>huggingface_hub.errors.HfHubHTTPError</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L40</source><parameters>[{"name": "message", "val": ": str"}, {"name": "response", "val": ": Response"}, {"name": "server_message", "val": ": typing.Optional[str] = None"}]</parameters></docstring>

HTTPError to inherit from for any custom HTTP Error raised in HF Hub.

Any HTTPError is converted at least into an `HfHubHTTPError`. If the server sends back
additional information, it is added to the error message.

Added details:
- Request id from the `"X-Request-Id"` header if present, otherwise from the `"X-Amzn-Trace-Id"` header.
- Server error message from the `"X-Error-Message"` header.
- Server error message from the response body, if one can be found.

<ExampleCodeBlock anchor="huggingface_hub.errors.HfHubHTTPError.example">

Example:
```py
    import httpx
    from huggingface_hub.utils import get_session, hf_raise_for_status, HfHubHTTPError

    response = get_session().post(...)
    try:
        hf_raise_for_status(response)
    except HfHubHTTPError as e:
        print(str(e)) # formatted message
        e.request_id, e.server_message # details returned by server

        # Complete the error message with additional information once it's raised
        e.append_to_message("\n`create_commit` expects the repository to exist.")
        raise
```

</ExampleCodeBlock>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>append_to_message</name><anchor>huggingface_hub.errors.HfHubHTTPError.append_to_message</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L83</source><parameters>[{"name": "additional_message", "val": ": str"}]</parameters></docstring>
Append additional information to the `HfHubHTTPError` initial message.

</div></div>
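The request-id fallback listed above can be sketched as a small helper. This is illustrative, not the library's exact implementation:

```python
from typing import Mapping, Optional

def extract_request_id(headers: Mapping[str, str]) -> Optional[str]:
    """Prefer the "X-Request-Id" header; fall back to "X-Amzn-Trace-Id"; else None.
    Illustrative helper, not huggingface_hub's exact implementation."""
    return headers.get("X-Request-Id") or headers.get("X-Amzn-Trace-Id")

assert extract_request_id({"X-Request-Id": "abc"}) == "abc"
assert extract_request_id({"X-Amzn-Trace-Id": "trace-1"}) == "trace-1"
assert extract_request_id({}) is None
```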

#### RepositoryNotFoundError[[huggingface_hub.errors.RepositoryNotFoundError]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.errors.RepositoryNotFoundError</name><anchor>huggingface_hub.errors.RepositoryNotFoundError</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L181</source><parameters>[{"name": "message", "val": ": str"}, {"name": "response", "val": ": Response"}, {"name": "server_message", "val": ": typing.Optional[str] = None"}]</parameters></docstring>

Raised when trying to access a hf.co URL with an invalid repository name, or
with a private repo name the user does not have access to.

<ExampleCodeBlock anchor="huggingface_hub.errors.RepositoryNotFoundError.example">

Example:

```py
>>> from huggingface_hub import model_info
>>> model_info("<non_existent_repository>")
(...)
huggingface_hub.errors.RepositoryNotFoundError: 401 Client Error. (Request ID: PvMw_VjBMjVdMz53WKIzP)

Repository Not Found for url: https://huggingface.co/api/models/%3Cnon_existent_repository%3E.
Please make sure you specified the correct `repo_id` and `repo_type`.
If the repo is private, make sure you are authenticated.
Invalid username or password.
```

</ExampleCodeBlock>


</div>

#### GatedRepoError[[huggingface_hub.errors.GatedRepoError]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.errors.GatedRepoError</name><anchor>huggingface_hub.errors.GatedRepoError</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L202</source><parameters>[{"name": "message", "val": ": str"}, {"name": "response", "val": ": Response"}, {"name": "server_message", "val": ": typing.Optional[str] = None"}]</parameters></docstring>

Raised when trying to access a gated repository for which the user is not on the
authorized list.

Note: derives from `RepositoryNotFoundError` to ensure backward compatibility.

<ExampleCodeBlock anchor="huggingface_hub.errors.GatedRepoError.example">

Example:

```py
>>> from huggingface_hub import model_info
>>> model_info("<gated_repository>")
(...)
huggingface_hub.errors.GatedRepoError: 403 Client Error. (Request ID: ViT1Bf7O_026LGSQuVqfa)

Cannot access gated repo for url https://huggingface.co/api/models/ardent-figment/gated-model.
Access to model ardent-figment/gated-model is restricted and you are not in the authorized list.
Visit https://huggingface.co/ardent-figment/gated-model to ask for access.
```

</ExampleCodeBlock>


</div>
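
The backward-compatibility note above can be verified directly: because `GatedRepoError` subclasses `RepositoryNotFoundError`, an existing `except RepositoryNotFoundError` handler also catches gated-repo failures.

```python
from huggingface_hub.errors import GatedRepoError, RepositoryNotFoundError

# GatedRepoError derives from RepositoryNotFoundError, so generic
# "repo not found" handlers keep working for gated repos.
print(issubclass(GatedRepoError, RepositoryNotFoundError))  # True
```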

#### RevisionNotFoundError[[huggingface_hub.errors.RevisionNotFoundError]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.errors.RevisionNotFoundError</name><anchor>huggingface_hub.errors.RevisionNotFoundError</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L245</source><parameters>[{"name": "message", "val": ": str"}, {"name": "response", "val": ": Response"}, {"name": "server_message", "val": ": typing.Optional[str] = None"}]</parameters></docstring>

Raised when trying to access a hf.co URL with a valid repository but an invalid
revision.

<ExampleCodeBlock anchor="huggingface_hub.errors.RevisionNotFoundError.example">

Example:

```py
>>> from huggingface_hub import hf_hub_download
>>> hf_hub_download('bert-base-cased', 'config.json', revision='<non-existent-revision>')
(...)
huggingface_hub.errors.RevisionNotFoundError: 404 Client Error. (Request ID: Mwhe_c3Kt650GcdKEFomX)

Revision Not Found for url: https://huggingface.co/bert-base-cased/resolve/%3Cnon-existent-revision%3E/config.json.
```

</ExampleCodeBlock>


</div>

#### BadRequestError[[huggingface_hub.errors.BadRequestError]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.errors.BadRequestError</name><anchor>huggingface_hub.errors.BadRequestError</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L320</source><parameters>[{"name": "message", "val": ": str"}, {"name": "response", "val": ": Response"}, {"name": "server_message", "val": ": typing.Optional[str] = None"}]</parameters></docstring>

Raised by `hf_raise_for_status` when the server returns an HTTP 400 error.

<ExampleCodeBlock anchor="huggingface_hub.errors.BadRequestError.example">

Example:

```py
>>> resp = httpx.post("hf.co/api/check", ...)
>>> hf_raise_for_status(resp, endpoint_name="check")
huggingface_hub.errors.BadRequestError: Bad request for check endpoint: {details} (Request ID: XXX)
```

</ExampleCodeBlock>


</div>

#### EntryNotFoundError[[huggingface_hub.errors.EntryNotFoundError]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.errors.EntryNotFoundError</name><anchor>huggingface_hub.errors.EntryNotFoundError</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L264</source><parameters>""</parameters></docstring>

Raised when an entry is not found, either locally or remotely.

<ExampleCodeBlock anchor="huggingface_hub.errors.EntryNotFoundError.example">

Example:

```py
>>> from huggingface_hub import hf_hub_download
>>> hf_hub_download('bert-base-cased', '<non-existent-file>')
(...)
huggingface_hub.errors.RemoteEntryNotFoundError (...)
>>> hf_hub_download('bert-base-cased', '<non-existent-file>', local_files_only=True)
(...)
huggingface_hub.errors.LocalEntryNotFoundError (...)
```

</ExampleCodeBlock>


</div>

#### RemoteEntryNotFoundError[[huggingface_hub.errors.RemoteEntryNotFoundError]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.errors.RemoteEntryNotFoundError</name><anchor>huggingface_hub.errors.RemoteEntryNotFoundError</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L282</source><parameters>[{"name": "message", "val": ": str"}, {"name": "response", "val": ": Response"}, {"name": "server_message", "val": ": typing.Optional[str] = None"}]</parameters></docstring>

Raised when trying to access a hf.co URL with a valid repository and revision
but an invalid filename.

<ExampleCodeBlock anchor="huggingface_hub.errors.RemoteEntryNotFoundError.example">

Example:

```py
>>> from huggingface_hub import hf_hub_download
>>> hf_hub_download('bert-base-cased', '<non-existent-file>')
(...)
huggingface_hub.errors.RemoteEntryNotFoundError: 404 Client Error. (Request ID: 53pNl6M0MxsnG5Sw8JA6x)

Entry Not Found for url: https://huggingface.co/bert-base-cased/resolve/main/%3Cnon-existent-file%3E.
```

</ExampleCodeBlock>


</div>

#### LocalEntryNotFoundError[[huggingface_hub.errors.LocalEntryNotFoundError]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.errors.LocalEntryNotFoundError</name><anchor>huggingface_hub.errors.LocalEntryNotFoundError</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L300</source><parameters>[{"name": "message", "val": ": str"}]</parameters></docstring>

Raised when trying to access a file or snapshot that is not on disk, when the network is
disabled or unavailable (connection issue). The entry may exist on the Hub.

<ExampleCodeBlock anchor="huggingface_hub.errors.LocalEntryNotFoundError.example">

Example:

```py
>>> from huggingface_hub import hf_hub_download
>>> hf_hub_download('bert-base-cased', '<non-cached-file>',  local_files_only=True)
(...)
huggingface_hub.errors.LocalEntryNotFoundError: Cannot find the requested files in the disk cache and outgoing traffic has been disabled. To enable hf.co look-ups and downloads online, set 'local_files_only' to False.
```

</ExampleCodeBlock>


</div>
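
A common pattern is to try the cache first and only hit the network when this error is raised. A minimal sketch (the helper name is illustrative):

```python
from huggingface_hub import hf_hub_download
from huggingface_hub.errors import LocalEntryNotFoundError

def cached_or_download(repo_id: str, filename: str) -> str:
    # Prefer the local cache; fall back to the network only when the
    # file is not cached locally.
    try:
        return hf_hub_download(repo_id, filename, local_files_only=True)
    except LocalEntryNotFoundError:
        return hf_hub_download(repo_id, filename)
```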

#### OfflineModeIsEnabled[[huggingface_hub.errors.OfflineModeIsEnabled]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.errors.OfflineModeIsEnabled</name><anchor>huggingface_hub.errors.OfflineModeIsEnabled</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L36</source><parameters>""</parameters></docstring>
Raised when a request is made while `HF_HUB_OFFLINE=1` is set as an environment variable.

</div>

## Telemetry[[huggingface_hub.utils.send_telemetry]]

`huggingface_hub` includes a helper to send telemetry data. This information helps us debug issues and prioritize new features.
Users can disable telemetry collection at any time by setting the `HF_HUB_DISABLE_TELEMETRY=1` environment variable.
Telemetry is also disabled in offline mode (i.e. when setting `HF_HUB_OFFLINE=1`).

If you are a maintainer of a third-party library, sending telemetry data is as simple as calling `send_telemetry`.
Data is sent in a separate thread to minimize the impact on users.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.utils.send_telemetry</name><anchor>huggingface_hub.utils.send_telemetry</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_telemetry.py#L20</source><parameters>[{"name": "topic", "val": ": str"}, {"name": "library_name", "val": ": typing.Optional[str] = None"}, {"name": "library_version", "val": ": typing.Optional[str] = None"}, {"name": "user_agent", "val": ": typing.Union[dict, str, NoneType] = None"}]</parameters><paramsdesc>- **topic** (`str`) --
  Name of the topic that is monitored. The topic is directly used to build the URL. If you want to monitor
  subtopics, just use "/" separation. Examples: "gradio", "transformers/examples",...
- **library_name** (`str`, *optional*) --
  The name of the library that is making the HTTP request. Will be added to the user-agent header.
- **library_version** (`str`, *optional*) --
  The version of the library that is making the HTTP request. Will be added to the user-agent header.
- **user_agent** (`str`, `dict`, *optional*) --
  The user agent info in the form of a dictionary or a single string. It will be completed with information about the installed packages.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sends telemetry that helps track usage of different HF libraries.

This usage data helps us debug issues and prioritize new features. However, we understand that not everyone wants
to share additional information, and we respect your privacy. You can disable telemetry collection by setting the
`HF_HUB_DISABLE_TELEMETRY=1` as environment variable. Telemetry is also disabled in offline mode (i.e. when setting
`HF_HUB_OFFLINE=1`).

Telemetry collection is run in a separate thread to minimize impact for the user.



<ExampleCodeBlock anchor="huggingface_hub.utils.send_telemetry.example">

Example:
```py
>>> from huggingface_hub.utils import send_telemetry

# Send telemetry without library information
>>> send_telemetry("ping")

# Send telemetry to subtopic with library information
>>> send_telemetry("gradio/local_link", library_name="gradio", library_version="3.22.1")

# Send telemetry with additional data
>>> send_telemetry(
...     topic="examples",
...     library_name="transformers",
...     library_version="4.26.0",
...     user_agent={"pipeline": "text_classification", "framework": "flax"},
... )
```

</ExampleCodeBlock>


</div>

## Validators

`huggingface_hub` includes custom validators to validate method arguments automatically.
Validation is inspired by the work done in [Pydantic](https://pydantic-docs.helpmanual.io/)
to validate type hints but with more limited features.

### Generic decorator

[validate_hf_hub_args()](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.utils.validate_hf_hub_args) is a generic decorator that wraps
methods whose arguments follow `huggingface_hub`'s naming. By default, every
argument that has a validator implemented is validated.

If an input is invalid, an [HFValidationError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.HFValidationError) is raised. Validation stops at the
first invalid value.

Usage:

```py
>>> from huggingface_hub.utils import validate_hf_hub_args

>>> @validate_hf_hub_args
... def my_cool_method(repo_id: str):
...     print(repo_id)

>>> my_cool_method(repo_id="valid_repo_id")
valid_repo_id

>>> my_cool_method("other..repo..id")
huggingface_hub.utils._validators.HFValidationError: Cannot have -- or .. in repo_id: 'other..repo..id'.

>>> my_cool_method(repo_id="other..repo..id")
huggingface_hub.utils._validators.HFValidationError: Cannot have -- or .. in repo_id: 'other..repo..id'.
```

#### validate_hf_hub_args[[huggingface_hub.utils.validate_hf_hub_args]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.utils.validate_hf_hub_args</name><anchor>huggingface_hub.utils.validate_hf_hub_args</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_validators.py#L42</source><parameters>[{"name": "fn", "val": ": ~CallableT"}]</parameters><raises>- [HFValidationError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.HFValidationError) -- 
  If an input is not valid.</raises><raisederrors>[HFValidationError](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.HFValidationError)</raisederrors></docstring>
Validate values received as arguments for any public method of `huggingface_hub`.

The goal of this decorator is to harmonize validation of arguments reused
everywhere. By default, all defined validators are tested.

Validators:
- [validate_repo_id()](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.utils.validate_repo_id): `repo_id` must be `"repo_name"`
  or `"namespace/repo_name"`. Namespace is a username or an organization.
- `~utils.smoothly_deprecate_legacy_arguments`: Ignore `proxies` when downloading files (should be set globally).

<ExampleCodeBlock anchor="huggingface_hub.utils.validate_hf_hub_args.example">

Example:
```py
>>> from huggingface_hub.utils import validate_hf_hub_args

>>> @validate_hf_hub_args
... def my_cool_method(repo_id: str):
...     print(repo_id)

>>> my_cool_method(repo_id="valid_repo_id")
valid_repo_id

>>> my_cool_method("other..repo..id")
huggingface_hub.utils._validators.HFValidationError: Cannot have -- or .. in repo_id: 'other..repo..id'.

>>> my_cool_method(repo_id="other..repo..id")
huggingface_hub.utils._validators.HFValidationError: Cannot have -- or .. in repo_id: 'other..repo..id'.
```

</ExampleCodeBlock>






</div>

#### HFValidationError[[huggingface_hub.errors.HFValidationError]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.errors.HFValidationError</name><anchor>huggingface_hub.errors.HFValidationError</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L157</source><parameters>""</parameters></docstring>
Generic exception thrown by `huggingface_hub` validators.

Inherits from [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError).


</div>
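
Because this exception inherits from `ValueError`, existing `except ValueError` handlers also catch validation failures:

```python
from huggingface_hub.utils import validate_repo_id

# "datasets/foo/bar" is invalid: a repo_type prefix is forbidden in repo_id.
try:
    validate_repo_id("datasets/foo/bar")
except ValueError as err:  # catches HFValidationError too
    print(type(err).__name__)  # HFValidationError
```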

### Argument validators

Validators can also be used individually. Here is a list of all arguments that can be
validated.

#### repo_id[[huggingface_hub.utils.validate_repo_id]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.utils.validate_repo_id</name><anchor>huggingface_hub.utils.validate_repo_id</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_validators.py#L94</source><parameters>[{"name": "repo_id", "val": ": str"}]</parameters></docstring>
Validate `repo_id` is valid.

This is not meant to replace the proper validation made on the Hub but rather to
avoid local inconsistencies whenever possible (example: passing `repo_type` in the
`repo_id` is forbidden).

Rules:
- Between 1 and 96 characters.
- Either "repo_name" or "namespace/repo_name"
- [a-zA-Z0-9] or "-", "_", "."
- "--" and ".." are forbidden

Valid: `"foo"`, `"foo/bar"`, `"123"`, `"Foo-BAR_foo.bar123"`

Not valid: `"datasets/foo/bar"`, `".repo_id"`, `"foo--bar"`, `"foo.git"`

<ExampleCodeBlock anchor="huggingface_hub.utils.validate_repo_id.example">

Example:
```py
>>> from huggingface_hub.utils import validate_repo_id
>>> validate_repo_id(repo_id="valid_repo_id")
>>> validate_repo_id(repo_id="other..repo..id")
huggingface_hub.utils._validators.HFValidationError: Cannot have -- or .. in repo_id: 'other..repo..id'.
```

</ExampleCodeBlock>

Discussed in https://github.com/huggingface/huggingface_hub/issues/1008.
In moon-landing (internal repository):
- https://github.com/huggingface/moon-landing/blob/main/server/lib/Names.ts#L27
- https://github.com/huggingface/moon-landing/blob/main/server/views/components/NewRepoForm/NewRepoForm.svelte#L138


</div>

#### smoothly_deprecate_legacy_arguments[[huggingface_hub.utils._validators.smoothly_deprecate_legacy_arguments]]

Not exactly a validator, but run as well.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.utils._validators.smoothly_deprecate_legacy_arguments</name><anchor>huggingface_hub.utils._validators.smoothly_deprecate_legacy_arguments</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_validators.py#L148</source><parameters>[{"name": "fn_name", "val": ": str"}, {"name": "kwargs", "val": ": dict"}]</parameters></docstring>
Smoothly deprecate legacy arguments in the `huggingface_hub` codebase.

This function ignores some deprecated arguments from the kwargs and warns the user they are ignored.
The goal is to avoid breaking existing code while guiding the user to the new way of doing things.

List of deprecated arguments:
- `proxies`:
  To set up proxies, user must either use the HTTP_PROXY environment variable or configure the `httpx.Client`
  manually using the [set_client_factory()](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.set_client_factory) function.

  In huggingface_hub 0.x, `proxies` was a dictionary directly passed to `requests.request`.
  In huggingface_hub 1.x, we migrated to `httpx` which does not support `proxies` the same way.
  In particular, it is not possible to configure proxies on a per-request basis. The solution is to configure
  it globally using the [set_client_factory()](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.set_client_factory) function or using the HTTP_PROXY environment variable.

  For more details, see:
  - https://www.python-httpx.org/advanced/proxies/
  - https://www.python-httpx.org/compatibility/#proxy-keys.

- `resume_download`: deprecated without replacement. `huggingface_hub` always resumes downloads whenever possible.
- `force_filename`: deprecated without replacement. Filename is always the same as on the Hub.
- `local_dir_use_symlinks`: deprecated without replacement. Downloading to a local directory does not use symlinks anymore.


</div>

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/package_reference/utilities.md" />

### Inference
https://huggingface.co/docs/huggingface_hub/main/package_reference/inference_client.md

# Inference

Inference is the process of using a trained model to make predictions on new data. Because this process can be compute-intensive, running it on a dedicated or external service can be an appealing option.
The `huggingface_hub` library provides a unified interface to run inference across multiple services for models hosted on the Hugging Face Hub:

1.  [Inference Providers](https://huggingface.co/docs/inference-providers/index): a streamlined, unified access to hundreds of machine learning models, powered by our serverless inference partners. This new approach builds on our previous Serverless Inference API, offering more models, improved performance, and greater reliability thanks to world-class providers. Refer to the [documentation](https://huggingface.co/docs/inference-providers/index#partners) for a list of supported providers.
2.  [Inference Endpoints](https://huggingface.co/docs/inference-endpoints/index): a product to easily deploy models to production. Inference is run by Hugging Face in a dedicated, fully managed infrastructure on a cloud provider of your choice.
3.  Local endpoints: you can also run inference with local inference servers like [llama.cpp](https://github.com/ggerganov/llama.cpp), [Ollama](https://ollama.com/), [vLLM](https://github.com/vllm-project/vllm), [LiteLLM](https://docs.litellm.ai/docs/simple_proxy), or [Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) by connecting the client to these local endpoints.

These services can be called with the [InferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient) object. Please refer to [this guide](../guides/inference)
for more information on how to use it.
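
For orientation, a minimal chat-completion call looks like this. This is a sketch: it requires a valid token with inference access, and the model id and helper name are only examples.

```python
from huggingface_hub import InferenceClient

def ask(prompt: str) -> str:
    # Requires authentication (e.g. a locally saved token).
    client = InferenceClient(model="meta-llama/Meta-Llama-3-8B-Instruct")
    completion = client.chat_completion(
        messages=[{"role": "user", "content": prompt}],
        max_tokens=100,
    )
    return completion.choices[0].message.content
```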

## Inference Client[[huggingface_hub.InferenceClient]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.InferenceClient</name><anchor>huggingface_hub.InferenceClient</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L123</source><parameters>[{"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "provider", "val": ": typing.Union[typing.Literal['black-forest-labs', 'cerebras', 'clarifai', 'cohere', 'fal-ai', 'featherless-ai', 'fireworks-ai', 'groq', 'hf-inference', 'hyperbolic', 'nebius', 'novita', 'nscale', 'openai', 'publicai', 'replicate', 'sambanova', 'scaleway', 'together', 'wavespeed', 'zai-org'], typing.Literal['auto'], NoneType] = None"}, {"name": "token", "val": ": typing.Optional[str] = None"}, {"name": "timeout", "val": ": typing.Optional[float] = None"}, {"name": "headers", "val": ": typing.Optional[dict[str, str]] = None"}, {"name": "cookies", "val": ": typing.Optional[dict[str, str]] = None"}, {"name": "bill_to", "val": ": typing.Optional[str] = None"}, {"name": "base_url", "val": ": typing.Optional[str] = None"}, {"name": "api_key", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **model** (`str`, `optional`) --
  The model to run inference with. Can be a model id hosted on the Hugging Face Hub, e.g. `meta-llama/Meta-Llama-3-8B-Instruct`
  or a URL to a deployed Inference Endpoint. Defaults to None, in which case a recommended model is
  automatically selected for the task.
  Note: for better compatibility with OpenAI's client, `model` has been aliased as `base_url`. Those 2
  arguments are mutually exclusive. If a URL is passed as `model` or `base_url` for chat completion, the `(/v1)/chat/completions` suffix path will be appended to the URL.
- **provider** (`str`, *optional*) --
  Name of the provider to use for inference. Can be `"black-forest-labs"`, `"cerebras"`, `"clarifai"`, `"cohere"`, `"fal-ai"`, `"featherless-ai"`, `"fireworks-ai"`, `"groq"`, `"hf-inference"`, `"hyperbolic"`, `"nebius"`, `"novita"`, `"nscale"`, `"openai"`, `"publicai"`, `"replicate"`, `"sambanova"`, `"scaleway"`, `"together"`, `"wavespeed"` or `"zai-org"`.
  Defaults to "auto" i.e. the first of the providers available for the model, sorted by the user's order in https://hf.co/settings/inference-providers.
  If model is a URL or `base_url` is passed, then `provider` is not used.
- **token** (`str`, *optional*) --
  Hugging Face token. Will default to the locally saved token if not provided.
  Note: for better compatibility with OpenAI's client, `token` has been aliased as `api_key`. Those 2
  arguments are mutually exclusive and have the exact same behavior.
- **timeout** (`float`, `optional`) --
  The maximum number of seconds to wait for a response from the server. Defaults to None, meaning it will loop until the server is available.
- **headers** (`dict[str, str]`, `optional`) --
  Additional headers to send to the server. By default only the authorization and user-agent headers are sent.
  Values in this dictionary will override the default values.
- **bill_to** (`str`, `optional`) --
  The billing account to use for the requests. By default the requests are billed on the user's account.
  Requests can only be billed to an organization the user is a member of, and which has subscribed to Enterprise Hub.
- **cookies** (`dict[str, str]`, `optional`) --
  Additional cookies to send to the server.
- **base_url** (`str`, `optional`) --
  Base URL to run inference. This is a duplicated argument from `model` to make [InferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient)
  follow the same pattern as `openai.OpenAI` client. Cannot be used if `model` is set. Defaults to None.
- **api_key** (`str`, `optional`) --
  Token to use for authentication. This is a duplicated argument from `token` to make [InferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient)
  follow the same pattern as `openai.OpenAI` client. Cannot be used if `token` is set. Defaults to None.</paramsdesc><paramgroups>0</paramgroups></docstring>

Initialize a new Inference Client.

[InferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient) aims to provide a unified experience to perform inference. The client can be used
seamlessly with Inference Providers, self-hosted Inference Endpoints, or local inference servers.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>audio_classification</name><anchor>huggingface_hub.InferenceClient.audio_classification</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L300</source><parameters>[{"name": "audio", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}, {"name": "function_to_apply", "val": ": typing.Optional[ForwardRef('AudioClassificationOutputTransform')] = None"}]</parameters><paramsdesc>- **audio** (Union[str, Path, bytes, BinaryIO]) --
  The audio content to classify. It can be raw audio bytes, a local audio file, or a URL pointing to an
  audio file.
- **model** (`str`, *optional*) --
  The model to use for audio classification. Can be a model ID hosted on the Hugging Face Hub
  or a URL to a deployed Inference Endpoint. If not provided, the default recommended model for
  audio classification will be used.
- **top_k** (`int`, *optional*) --
  When specified, limits the output to the top K most probable classes.
- **function_to_apply** (`"AudioClassificationOutputTransform"`, *optional*) --
  The function to apply to the model outputs in order to retrieve the scores.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[AudioClassificationOutputElement]`</rettype><retdesc>List of [AudioClassificationOutputElement](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.AudioClassificationOutputElement) items containing the predicted labels and their confidence.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Perform audio classification on the provided audio content.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.audio_classification.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.audio_classification("audio.flac")
[
    AudioClassificationOutputElement(score=0.4976358711719513, label='hap'),
    AudioClassificationOutputElement(score=0.3677836060523987, label='neu'),
    ...
]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>audio_to_audio</name><anchor>huggingface_hub.InferenceClient.audio_to_audio</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L357</source><parameters>[{"name": "audio", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **audio** (Union[str, Path, bytes, BinaryIO]) --
  The audio content for the model. It can be raw audio bytes, a local audio file, or a URL pointing to an
  audio file.
- **model** (`str`, *optional*) --
  The model can be any model which takes an audio file and returns another audio file. Can be a model ID hosted on the Hugging Face Hub
  or a URL to a deployed Inference Endpoint. If not provided, the default recommended model for
  audio_to_audio will be used.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[AudioToAudioOutputElement]`</rettype><retdesc>A list of [AudioToAudioOutputElement](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.AudioToAudioOutputElement) items containing audios label, content-type, and audio content in blob.</retdesc><raises>- ``InferenceTimeoutError`` -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>``InferenceTimeoutError`` or `HfHubHTTPError`</raisederrors></docstring>

Performs one of several audio-to-audio tasks, depending on the model (e.g. speech enhancement, source separation).











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.audio_to_audio.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> audio_output = client.audio_to_audio("audio.flac")
>>> for i, item in enumerate(audio_output):
...     with open(f"output_{i}.flac", "wb") as f:
...         f.write(item.blob)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>automatic_speech_recognition</name><anchor>huggingface_hub.InferenceClient.automatic_speech_recognition</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L409</source><parameters>[{"name": "audio", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "extra_body", "val": ": typing.Optional[dict] = None"}]</parameters><paramsdesc>- **audio** (Union[str, Path, bytes, BinaryIO]) --
  The content to transcribe. It can be raw audio bytes, local audio file, or a URL to an audio file.
- **model** (`str`, *optional*) --
  The model to use for ASR. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. If not provided, the default recommended model for ASR will be used.
- **extra_body** (`dict`, *optional*) --
  Additional provider-specific parameters to pass to the model. Refer to the provider's documentation
  for supported parameters.</paramsdesc><paramgroups>0</paramgroups><rettype>[AutomaticSpeechRecognitionOutput](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.AutomaticSpeechRecognitionOutput)</rettype><retdesc>An item containing the transcribed text and optionally the timestamp chunks.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Perform automatic speech recognition (ASR or audio-to-text) on the given audio content.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.automatic_speech_recognition.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.automatic_speech_recognition("hello_world.flac").text
"hello world"
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>chat_completion</name><anchor>huggingface_hub.InferenceClient.chat_completion</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L535</source><parameters>[{"name": "messages", "val": ": list"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "stream", "val": ": bool = False"}, {"name": "frequency_penalty", "val": ": typing.Optional[float] = None"}, {"name": "logit_bias", "val": ": typing.Optional[list[float]] = None"}, {"name": "logprobs", "val": ": typing.Optional[bool] = None"}, {"name": "max_tokens", "val": ": typing.Optional[int] = None"}, {"name": "n", "val": ": typing.Optional[int] = None"}, {"name": "presence_penalty", "val": ": typing.Optional[float] = None"}, {"name": "response_format", "val": ": typing.Union[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputResponseFormatText, huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputResponseFormatJSONSchema, huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputResponseFormatJSONObject, NoneType] = None"}, {"name": "seed", "val": ": typing.Optional[int] = None"}, {"name": "stop", "val": ": typing.Optional[list[str]] = None"}, {"name": "stream_options", "val": ": typing.Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputStreamOptions] = None"}, {"name": "temperature", "val": ": typing.Optional[float] = None"}, {"name": "tool_choice", "val": ": typing.Union[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputToolChoiceClass, ForwardRef('ChatCompletionInputToolChoiceEnum'), NoneType] = None"}, {"name": "tool_prompt", "val": ": typing.Optional[str] = None"}, {"name": "tools", "val": ": typing.Optional[list[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputTool]] = None"}, {"name": "top_logprobs", "val": ": typing.Optional[int] = None"}, {"name": 
"top_p", "val": ": typing.Optional[float] = None"}, {"name": "extra_body", "val": ": typing.Optional[dict] = None"}]</parameters><paramsdesc>- **messages** (List of [ChatCompletionInputMessage](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.ChatCompletionInputMessage)) --
  Conversation history consisting of roles and content pairs.
- **model** (`str`, *optional*) --
  The model to use for chat-completion. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. If not provided, the default recommended model for chat-based text-generation will be used.
  See https://huggingface.co/tasks/text-generation for more details.
  If `model` is a model ID, it is passed to the server as the `model` parameter. If you want to define a
  custom URL while setting `model` in the request payload, you must set `base_url` when initializing [InferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient).
- **frequency_penalty** (`float`, *optional*) --
  Penalizes new tokens based on their existing frequency
  in the text so far. Range: [-2.0, 2.0]. Defaults to 0.0.
- **logit_bias** (`list[float]`, *optional*) --
  Adjusts the likelihood of specific tokens appearing in the generated output.
- **logprobs** (`bool`, *optional*) --
  Whether to return log probabilities of the output tokens or not. If true, returns the log
  probabilities of each output token returned in the content of the message.
- **max_tokens** (`int`, *optional*) --
  Maximum number of tokens allowed in the response. Defaults to 100.
- **n** (`int`, *optional*) --
  The number of completions to generate for each prompt.
- **presence_penalty** (`float`, *optional*) --
  Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the
  text so far, increasing the model's likelihood to talk about new topics.
- **response_format** (`ChatCompletionInputResponseFormatText`, `ChatCompletionInputResponseFormatJSONSchema` or `ChatCompletionInputResponseFormatJSONObject`, *optional*) --
  Constrains the output format. Can be plain text, a JSON object, or a JSON schema.
- **seed** (`int`, *optional*) --
  Seed for reproducible generation. Defaults to None.
- **stop** (`list[str]`, *optional*) --
  Up to four strings which trigger the end of the response.
  Defaults to None.
- **stream** (`bool`, *optional*) --
  Enable realtime streaming of responses. Defaults to False.
- **stream_options** ([ChatCompletionInputStreamOptions](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.ChatCompletionInputStreamOptions), *optional*) --
  Options for streaming completions.
- **temperature** (`float`, *optional*) --
  Controls the randomness of generations. Lower values produce
  less random completions. Range: [0, 2]. Defaults to 1.0.
- **top_logprobs** (`int`, *optional*) --
  An integer between 0 and 5 specifying the number of most likely tokens to return at each token
  position, each with an associated log probability. logprobs must be set to true if this parameter is
  used.
- **top_p** (`float`, *optional*) --
  Nucleus sampling: only the smallest set of most likely tokens whose cumulative
  probability reaches `top_p` is sampled from. Must be between 0 and 1. Defaults to 1.0.
- **tool_choice** ([ChatCompletionInputToolChoiceClass](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.ChatCompletionInputToolChoiceClass) or `ChatCompletionInputToolChoiceEnum()`, *optional*) --
  The tool to use for the completion. Defaults to "auto".
- **tool_prompt** (`str`, *optional*) --
  A prompt to be appended before the tools.
- **tools** (List of [ChatCompletionInputTool](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.ChatCompletionInputTool), *optional*) --
  A list of tools the model may call. Currently, only functions are supported as a tool. Use this to
  provide a list of functions the model may generate JSON inputs for.
- **extra_body** (`dict`, *optional*) --
  Additional provider-specific parameters to pass to the model. Refer to the provider's documentation
  for supported parameters.</paramsdesc><paramgroups>0</paramgroups><rettype>[ChatCompletionOutput](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.ChatCompletionOutput) or Iterable of [ChatCompletionStreamOutput](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.ChatCompletionStreamOutput)</rettype><retdesc>Generated text returned from the server:
- if `stream=False`, the generated text is returned as a [ChatCompletionOutput](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.ChatCompletionOutput) (default).
- if `stream=True`, the generated text is returned token by token as a sequence of [ChatCompletionStreamOutput](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.ChatCompletionStreamOutput).</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

A method for completing conversations using a specified language model.

> [!TIP]
> The `client.chat_completion` method is aliased as `client.chat.completions.create` for compatibility with OpenAI's client.
> Inputs and outputs are strictly the same and using either syntax will yield the same results.
> Check out the [Inference guide](https://huggingface.co/docs/huggingface_hub/guides/inference#openai-compatibility)
> for more details about OpenAI's compatibility.

> [!TIP]
> You can pass provider-specific parameters to the model by using the `extra_body` argument.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.chat_completion.example">

Example:

```py
>>> from huggingface_hub import InferenceClient
>>> messages = [{"role": "user", "content": "What is the capital of France?"}]
>>> client = InferenceClient("meta-llama/Meta-Llama-3-8B-Instruct")
>>> client.chat_completion(messages, max_tokens=100)
ChatCompletionOutput(
    choices=[
        ChatCompletionOutputComplete(
            finish_reason='eos_token',
            index=0,
            message=ChatCompletionOutputMessage(
                role='assistant',
                content='The capital of France is Paris.',
                name=None,
                tool_calls=None
            ),
            logprobs=None
        )
    ],
    created=1719907176,
    id='',
    model='meta-llama/Meta-Llama-3-8B-Instruct',
    object='text_completion',
    system_fingerprint='2.0.4-sha-f426a33',
    usage=ChatCompletionOutputUsage(
        completion_tokens=8,
        prompt_tokens=17,
        total_tokens=25
    )
)
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.chat_completion.example-2">

Example using streaming:
```py
>>> from huggingface_hub import InferenceClient
>>> messages = [{"role": "user", "content": "What is the capital of France?"}]
>>> client = InferenceClient("meta-llama/Meta-Llama-3-8B-Instruct")
>>> for token in client.chat_completion(messages, max_tokens=10, stream=True):
...     print(token)
ChatCompletionStreamOutput(choices=[ChatCompletionStreamOutputChoice(delta=ChatCompletionStreamOutputDelta(content='The', role='assistant'), index=0, finish_reason=None)], created=1710498504)
ChatCompletionStreamOutput(choices=[ChatCompletionStreamOutputChoice(delta=ChatCompletionStreamOutputDelta(content=' capital', role='assistant'), index=0, finish_reason=None)], created=1710498504)
(...)
ChatCompletionStreamOutput(choices=[ChatCompletionStreamOutputChoice(delta=ChatCompletionStreamOutputDelta(content=' may', role='assistant'), index=0, finish_reason=None)], created=1710498504)
```

</ExampleCodeBlock>
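
Example requesting token usage while streaming. This is a sketch: it assumes the server honors `stream_options` (Text-Generation-Inference and OpenAI-compatible providers do), in which case a final chunk carrying the usage stats is emitted.
```py
>>> from huggingface_hub import InferenceClient
>>> messages = [{"role": "user", "content": "What is the capital of France?"}]
>>> client = InferenceClient("meta-llama/Meta-Llama-3-8B-Instruct")
>>> for chunk in client.chat_completion(
...     messages,
...     max_tokens=10,
...     stream=True,
...     stream_options={"include_usage": True},
... ):
...     if chunk.usage is not None:  # only the final chunk carries usage stats
...         print(chunk.usage)
```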

<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.chat_completion.example-3">

Example using OpenAI's syntax:
```py
# instead of `from openai import OpenAI`
from huggingface_hub import InferenceClient

# instead of `client = OpenAI(...)`
client = InferenceClient(
    base_url=...,
    api_key=...,
)

output = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Count to 10"},
    ],
    stream=True,
    max_tokens=1024,
)

for chunk in output:
    print(chunk.choices[0].delta.content)
```

</ExampleCodeBlock>

Example using a third-party provider directly with extra (provider-specific) parameters. Usage will be billed on your Together AI account.
<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.chat_completion.example-4">

```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="together",  # Use Together AI provider
...     api_key="<together_api_key>",  # Pass your Together API key directly
... )
>>> client.chat_completion(
...     model="meta-llama/Meta-Llama-3-8B-Instruct",
...     messages=[{"role": "user", "content": "What is the capital of France?"}],
...     extra_body={"safety_model": "Meta-Llama/Llama-Guard-7b"},
... )
```

</ExampleCodeBlock>

Example using a third-party provider through Hugging Face Routing. Usage will be billed on your Hugging Face account.
<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.chat_completion.example-5">

```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="sambanova",  # Use Sambanova provider
...     api_key="hf_...",  # Pass your HF token
... )
>>> client.chat_completion(
...     model="meta-llama/Meta-Llama-3-8B-Instruct",
...     messages=[{"role": "user", "content": "What is the capital of France?"}],
... )
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.chat_completion.example-6">

Example using Image + Text as input:
```py
>>> import base64
>>> from huggingface_hub import InferenceClient

# provide a remote URL
>>> image_url = "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
# or a base64-encoded image
>>> image_path = "/path/to/image.jpeg"
>>> with open(image_path, "rb") as f:
...     base64_image = base64.b64encode(f.read()).decode("utf-8")
>>> image_url = f"data:image/jpeg;base64,{base64_image}"

>>> client = InferenceClient("meta-llama/Llama-3.2-11B-Vision-Instruct")
>>> output = client.chat.completions.create(
...     messages=[
...         {
...             "role": "user",
...             "content": [
...                 {
...                     "type": "image_url",
...                     "image_url": {"url": image_url},
...                 },
...                 {
...                     "type": "text",
...                     "text": "Describe this image in one sentence.",
...                 },
...             ],
...         },
...     ],
... )
>>> output.choices[0].message.content
'The image depicts the iconic Statue of Liberty situated in New York Harbor, New York, on a clear day.'
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.chat_completion.example-7">

Example using tools:
```py
>>> client = InferenceClient("meta-llama/Meta-Llama-3-70B-Instruct")
>>> messages = [
...     {
...         "role": "system",
...         "content": "Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous.",
...     },
...     {
...         "role": "user",
...         "content": "What's the weather like the next 3 days in San Francisco, CA?",
...     },
... ]
>>> tools = [
...     {
...         "type": "function",
...         "function": {
...             "name": "get_current_weather",
...             "description": "Get the current weather",
...             "parameters": {
...                 "type": "object",
...                 "properties": {
...                     "location": {
...                         "type": "string",
...                         "description": "The city and state, e.g. San Francisco, CA",
...                     },
...                     "format": {
...                         "type": "string",
...                         "enum": ["celsius", "fahrenheit"],
...                         "description": "The temperature unit to use. Infer this from the users location.",
...                     },
...                 },
...                 "required": ["location", "format"],
...             },
...         },
...     },
...     {
...         "type": "function",
...         "function": {
...             "name": "get_n_day_weather_forecast",
...             "description": "Get an N-day weather forecast",
...             "parameters": {
...                 "type": "object",
...                 "properties": {
...                     "location": {
...                         "type": "string",
...                         "description": "The city and state, e.g. San Francisco, CA",
...                     },
...                     "format": {
...                         "type": "string",
...                         "enum": ["celsius", "fahrenheit"],
...                         "description": "The temperature unit to use. Infer this from the users location.",
...                     },
...                     "num_days": {
...                         "type": "integer",
...                         "description": "The number of days to forecast",
...                     },
...                 },
...                 "required": ["location", "format", "num_days"],
...             },
...         },
...     },
... ]

>>> response = client.chat_completion(
...     model="meta-llama/Meta-Llama-3-70B-Instruct",
...     messages=messages,
...     tools=tools,
...     tool_choice="auto",
...     max_tokens=500,
... )
>>> response.choices[0].message.tool_calls[0].function
ChatCompletionOutputFunctionDefinition(
    arguments={
        'location': 'San Francisco, CA',
        'format': 'fahrenheit',
        'num_days': 3
    },
    name='get_n_day_weather_forecast',
    description=None
)
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.chat_completion.example-8">

Example using response_format:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient("meta-llama/Meta-Llama-3-70B-Instruct")
>>> messages = [
...     {
...         "role": "user",
...         "content": "I saw a puppy a cat and a raccoon during my bike ride in the park. What did I see and when?",
...     },
... ]
>>> response_format = {
...     "type": "json",
...     "value": {
...         "properties": {
...             "location": {"type": "string"},
...             "activity": {"type": "string"},
...             "animals_seen": {"type": "integer", "minimum": 1, "maximum": 5},
...             "animals": {"type": "array", "items": {"type": "string"}},
...         },
...         "required": ["location", "activity", "animals_seen", "animals"],
...     },
... }
>>> response = client.chat_completion(
...     messages=messages,
...     response_format=response_format,
...     max_tokens=500,
... )
>>> response.choices[0].message.content
'{"activity": "bike ride", "animals": ["puppy", "cat", "raccoon"], "animals_seen": 3, "location": "park"}'
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>document_question_answering</name><anchor>huggingface_hub.InferenceClient.document_question_answering</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L937</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "question", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "doc_stride", "val": ": typing.Optional[int] = None"}, {"name": "handle_impossible_answer", "val": ": typing.Optional[bool] = None"}, {"name": "lang", "val": ": typing.Optional[str] = None"}, {"name": "max_answer_len", "val": ": typing.Optional[int] = None"}, {"name": "max_question_len", "val": ": typing.Optional[int] = None"}, {"name": "max_seq_len", "val": ": typing.Optional[int] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}, {"name": "word_boxes", "val": ": typing.Optional[list[typing.Union[list[float], str]]] = None"}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO]`) --
  The input image for the context. It can be raw bytes, an image file, or a URL to an online image.
- **question** (`str`) --
  Question to be answered.
- **model** (`str`, *optional*) --
  The model to use for the document question answering task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended document question answering model will be used.
  Defaults to None.
- **doc_stride** (`int`, *optional*) --
  If the words in the document are too long to fit with the question for the model, the document will be split into
  several chunks with some overlap. This argument controls the size of that overlap.
- **handle_impossible_answer** (`bool`, *optional*) --
  Whether to accept an impossible (empty) answer.
- **lang** (`str`, *optional*) --
  Language to use while running OCR. Defaults to English.
- **max_answer_len** (`int`, *optional*) --
  The maximum length of predicted answers (e.g., only answers with a shorter length are considered).
- **max_question_len** (`int`, *optional*) --
  The maximum length of the question after tokenization. It will be truncated if needed.
- **max_seq_len** (`int`, *optional*) --
  The maximum length of the total sentence (context + question) in tokens of each chunk passed to the
  model. The context will be split into several chunks (using doc_stride as overlap) if needed.
- **top_k** (`int`, *optional*) --
  The number of answers to return (chosen by order of likelihood). May return fewer than top_k
  answers if there are not enough options available within the context.
- **word_boxes** (`list[Union[list[float], str]]`, *optional*) --
  A list of words and bounding boxes (normalized 0->1000). If provided, the inference will skip the OCR
  step and use the provided bounding boxes instead.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[DocumentQuestionAnsweringOutputElement]`</rettype><retdesc>a list of [DocumentQuestionAnsweringOutputElement](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.DocumentQuestionAnsweringOutputElement) items containing the predicted label, associated probability, word ids, and page number.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Answer questions on document images.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.document_question_answering.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.document_question_answering(image="https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png", question="What is the invoice number?")
[DocumentQuestionAnsweringOutputElement(answer='us-001', end=16, score=0.9999666213989258, start=16)]
```

</ExampleCodeBlock>
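
The `image` argument also accepts a local file path or raw bytes, and `top_k` returns several candidate answers. A sketch, where the file path and question are placeholders:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.document_question_answering(
...     image="/path/to/invoice.png",  # placeholder local path; raw bytes also work
...     question="What is the total amount?",
...     top_k=2,  # return the two most likely answers
... )
```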


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>feature_extraction</name><anchor>huggingface_hub.InferenceClient.feature_extraction</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L1024</source><parameters>[{"name": "text", "val": ": str"}, {"name": "normalize", "val": ": typing.Optional[bool] = None"}, {"name": "prompt_name", "val": ": typing.Optional[str] = None"}, {"name": "truncate", "val": ": typing.Optional[bool] = None"}, {"name": "truncation_direction", "val": ": typing.Optional[typing.Literal['Left', 'Right']] = None"}, {"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **text** (*str*) --
  The text to embed.
- **model** (*str*, *optional*) --
  The model to use for the feature extraction task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended feature extraction model will be used.
  Defaults to None.
- **normalize** (*bool*, *optional*) --
  Whether to normalize the embeddings or not.
  Only available on servers powered by Text-Embedding-Inference.
- **prompt_name** (*str*, *optional*) --
  The name of the prompt to use for encoding. If not set, no prompt is applied.
  Must be a key in the *Sentence Transformers* configuration *prompts* dictionary.
  For example, if `prompt_name` is "query" and `prompts` is &lcub;"query": "query: ", ...},
  then the sentence "What is the capital of France?" will be encoded as "query: What is the capital of France?"
  because the prompt text is prepended to any text to encode.
- **truncate** (*bool*, *optional*) --
  Whether to truncate the input text if it exceeds the model's maximum length.
  Only available on servers powered by Text-Embedding-Inference.
- **truncation_direction** (*Literal["Left", "Right"]*, *optional*) --
  Which side of the input should be truncated when *truncate=True* is passed.</paramsdesc><paramgroups>0</paramgroups><rettype>*np.ndarray*</rettype><retdesc>The embedding representing the input text as a float32 numpy array.</retdesc><raises>- [*InferenceTimeoutError*] -- 
  If the model is unavailable or the request times out.
- [*HfHubHTTPError*] -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[*InferenceTimeoutError*] or [*HfHubHTTPError*]</raisederrors></docstring>

Generate embeddings for a given text.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.feature_extraction.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.feature_extraction("Hi, who are you?")
array([[ 2.424802  ,  2.93384   ,  1.1750331 , ...,  1.240499, -0.13776633, -0.7889173 ],
[-0.42943227, -0.6364878 , -1.693462  , ...,  0.41978157, -2.4336355 ,  0.6162071 ],
...,
[ 0.28552425, -0.928395  , -1.2077185 , ...,  0.76810825, -2.1069427 ,  0.6236161 ]], dtype=float32)
```

</ExampleCodeBlock>
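
Example using `normalize` and `prompt_name`. This is a sketch: it assumes the server runs Text-Embedding-Inference and serves a *Sentence Transformers* checkpoint whose `prompts` configuration defines a "query" entry.
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.feature_extraction(
...     "What is the capital of France?",
...     normalize=True,       # normalize the returned embedding (TEI servers only)
...     prompt_name="query",  # prepend the model's "query" prompt, if configured
... )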


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fill_mask</name><anchor>huggingface_hub.InferenceClient.fill_mask</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L1097</source><parameters>[{"name": "text", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "targets", "val": ": typing.Optional[list[str]] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **text** (`str`) --
  A string to be filled in; it must contain the mask token (check the model card for the exact mask token).
- **model** (`str`, *optional*) --
  The model to use for the fill mask task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended fill mask model will be used.
- **targets** (`list[str]`, *optional*) --
  When passed, the model will limit the scores to the passed targets instead of looking up the whole
  vocabulary. If the provided targets are not in the model vocab, they will be tokenized and the first
  resulting token will be used (with a warning, which may make inference slower).
- **top_k** (`int`, *optional*) --
  When passed, overrides the number of predictions to return.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[FillMaskOutputElement]`</rettype><retdesc>a list of [FillMaskOutputElement](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.FillMaskOutputElement) items containing the predicted label, associated
probability, token reference, and completed text.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Fill in a masked hole in a text with a predicted word (token, to be precise).











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.fill_mask.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.fill_mask("The goal of life is <mask>.")
[
    FillMaskOutputElement(score=0.06897063553333282, token=11098, token_str=' happiness', sequence='The goal of life is happiness.'),
    FillMaskOutputElement(score=0.06554922461509705, token=45075, token_str=' immortality', sequence='The goal of life is immortality.')
]
```

</ExampleCodeBlock>
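
Example restricting predictions with `targets` and `top_k`. A sketch: whether candidate tokens need a leading space depends on the model's tokenizer.
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.fill_mask(
...     "The goal of life is <mask>.",
...     targets=[" happiness", " love"],  # score only these candidates
...     top_k=1,                          # keep the single best prediction
... )
```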


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_endpoint_info</name><anchor>huggingface_hub.InferenceClient.get_endpoint_info</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L3270</source><parameters>[{"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None.</paramsdesc><paramgroups>0</paramgroups><rettype>`dict[str, Any]`</rettype><retdesc>Information about the endpoint.</retdesc></docstring>

Get information about the deployed endpoint.

This route is only available on Inference Endpoints powered by Text-Generation-Inference (TGI) or Text-Embedding-Inference (TEI).
Endpoints powered by `transformers` return an empty payload.







<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.get_endpoint_info.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient("meta-llama/Meta-Llama-3-70B-Instruct")
>>> client.get_endpoint_info()
{
    'model_id': 'meta-llama/Meta-Llama-3-70B-Instruct',
    'model_sha': None,
    'model_dtype': 'torch.float16',
    'model_device_type': 'cuda',
    'model_pipeline_tag': None,
    'max_concurrent_requests': 128,
    'max_best_of': 2,
    'max_stop_sequences': 4,
    'max_input_length': 8191,
    'max_total_tokens': 8192,
    'waiting_served_ratio': 0.3,
    'max_batch_total_tokens': 1259392,
    'max_waiting_tokens': 20,
    'max_batch_size': None,
    'validation_workers': 32,
    'max_client_batch_size': 4,
    'version': '2.0.2',
    'sha': 'dccab72549635c7eb5ddb17f43f0b7cdff07c214',
    'docker_label': 'sha-dccab72'
}
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>health_check</name><anchor>huggingface_hub.InferenceClient.health_check</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L3328</source><parameters>[{"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **model** (`str`, *optional*) --
  URL of the Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None.</paramsdesc><paramgroups>0</paramgroups><rettype>`bool`</rettype><retdesc>True if everything is working fine.</retdesc></docstring>

Check the health of the deployed endpoint.

Health check is only available with Inference Endpoints powered by Text-Generation-Inference (TGI) or Text-Embedding-Inference (TEI).







<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.health_check.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient("https://jzgu0buei5.us-east-1.aws.endpoints.huggingface.cloud")
>>> client.health_check()
True
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>image_classification</name><anchor>huggingface_hub.InferenceClient.image_classification</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L1153</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "function_to_apply", "val": ": typing.Optional[ForwardRef('ImageClassificationOutputTransform')] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO, PIL.Image.Image]`) --
  The image to classify. It can be raw bytes, an image file, a URL to an online image, or a PIL Image.
- **model** (`str`, *optional*) --
  The model to use for image classification. Can be a model ID hosted on the Hugging Face Hub or a URL to a
  deployed Inference Endpoint. If not provided, the default recommended model for image classification will be used.
- **function_to_apply** (`"ImageClassificationOutputTransform"`, *optional*) --
  The function to apply to the model outputs in order to retrieve the scores.
- **top_k** (`int`, *optional*) --
  When specified, limits the output to the top K most probable classes.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[ImageClassificationOutputElement]`</rettype><retdesc>a list of [ImageClassificationOutputElement](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.ImageClassificationOutputElement) items containing the predicted label and associated probability.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Perform image classification on the given image using the specified model.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.image_classification.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.image_classification("https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg")
[ImageClassificationOutputElement(label='Blenheim spaniel', score=0.9779096841812134), ...]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>image_segmentation</name><anchor>huggingface_hub.InferenceClient.image_segmentation</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L1203</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "mask_threshold", "val": ": typing.Optional[float] = None"}, {"name": "overlap_mask_area_threshold", "val": ": typing.Optional[float] = None"}, {"name": "subtask", "val": ": typing.Optional[ForwardRef('ImageSegmentationSubtask')] = None"}, {"name": "threshold", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO, PIL.Image.Image]`) --
  The image to segment. It can be raw bytes, an image file, a URL to an online image, or a PIL Image.
- **model** (`str`, *optional*) --
  The model to use for image segmentation. Can be a model ID hosted on the Hugging Face Hub or a URL to a
  deployed Inference Endpoint. If not provided, the default recommended model for image segmentation will be used.
- **mask_threshold** (`float`, *optional*) --
  Threshold to use when turning the predicted masks into binary values.
- **overlap_mask_area_threshold** (`float`, *optional*) --
  Mask overlap threshold to eliminate small, disconnected segments.
- **subtask** (`"ImageSegmentationSubtask"`, *optional*) --
  Segmentation task to be performed, depending on model capabilities.
- **threshold** (`float`, *optional*) --
  Probability threshold to filter out predicted masks.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[ImageSegmentationOutputElement]`</rettype><retdesc>A list of [ImageSegmentationOutputElement](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.ImageSegmentationOutputElement) items containing the segmented masks and associated attributes.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Perform image segmentation on the given image using the specified model.

> [!WARNING]
> You must have `PIL` installed if you want to work with images (`pip install Pillow`).











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.image_segmentation.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.image_segmentation("cat.jpg")
[ImageSegmentationOutputElement(score=0.989008, label='LABEL_184', mask=<PIL.PngImagePlugin.PngImageFile image mode=L size=400x300 at 0x7FDD2B129CC0>), ...]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>image_to_image</name><anchor>huggingface_hub.InferenceClient.image_to_image</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L1271</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "prompt", "val": ": typing.Optional[str] = None"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "guidance_scale", "val": ": typing.Optional[float] = None"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "target_size", "val": ": typing.Optional[huggingface_hub.inference._generated.types.image_to_image.ImageToImageTargetSize] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO, PIL.Image.Image]`) --
  The input image for translation. It can be raw bytes, an image file, a URL to an online image, or a PIL Image.
- **prompt** (`str`, *optional*) --
  The text prompt to guide the image generation.
- **negative_prompt** (`str`, *optional*) --
  One prompt to guide what NOT to include in image generation.
- **num_inference_steps** (`int`, *optional*) --
  For diffusion models. The number of denoising steps. More denoising steps usually lead to a higher
  quality image at the expense of slower inference.
- **guidance_scale** (`float`, *optional*) --
  For diffusion models. A higher guidance scale value encourages the model to generate images closely
  linked to the text prompt at the expense of lower image quality.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None.
- **target_size** (`ImageToImageTargetSize`, *optional*) --
  The size in pixels of the output image. This parameter is only supported by some providers and for
  specific models. It will be ignored when unsupported.</paramsdesc><paramgroups>0</paramgroups><rettype>`Image`</rettype><retdesc>The translated image.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Perform image-to-image translation using a specified model.

> [!WARNING]
> You must have `PIL` installed if you want to work with images (`pip install Pillow`).











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.image_to_image.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> image = client.image_to_image("cat.jpg", prompt="turn the cat into a tiger")
>>> image.save("tiger.jpg")
```

</ExampleCodeBlock>



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>image_to_text</name><anchor>huggingface_hub.InferenceClient.image_to_text</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L1426</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO, PIL.Image.Image]`) --
  The input image to caption. It can be raw bytes, an image file, a URL to an online image, or a PIL Image.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImageToTextOutput](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.ImageToTextOutput)</rettype><retdesc>The generated text.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Takes an input image and returns text.

Models can produce very different outputs depending on your use case (image captioning, optical character recognition
(OCR), Pix2Struct, etc.). Please have a look at the model card to learn more about a model's specifics.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.image_to_text.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.image_to_text("cat.jpg")
'a cat standing in a grassy field '
>>> client.image_to_text("https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg")
'a dog laying on the grass next to a flower pot '
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>image_to_video</name><anchor>huggingface_hub.InferenceClient.image_to_video</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L1347</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "prompt", "val": ": typing.Optional[str] = None"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "num_frames", "val": ": typing.Optional[float] = None"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "guidance_scale", "val": ": typing.Optional[float] = None"}, {"name": "seed", "val": ": typing.Optional[int] = None"}, {"name": "target_size", "val": ": typing.Optional[huggingface_hub.inference._generated.types.image_to_video.ImageToVideoTargetSize] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO, PIL.Image.Image]`) --
  The input image to generate a video from. It can be raw bytes, an image file, a URL to an online image, or a PIL Image.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None.
- **prompt** (`str`, *optional*) --
  The text prompt to guide the video generation.
- **negative_prompt** (`str`, *optional*) --
  One prompt to guide what NOT to include in video generation.
- **num_frames** (`float`, *optional*) --
  The number of video frames to generate.
- **num_inference_steps** (`int`, *optional*) --
  For diffusion models. The number of denoising steps. More denoising steps usually lead to a higher
  quality image at the expense of slower inference.
- **guidance_scale** (`float`, *optional*) --
  For diffusion models. A higher guidance scale value encourages the model to generate videos closely
  linked to the text prompt at the expense of lower video quality.
- **seed** (`int`, *optional*) --
  The seed to use for the video generation.
- **target_size** (`ImageToVideoTargetSize`, *optional*) --
  The size in pixels of the output video frames.</paramsdesc><paramgroups>0</paramgroups><rettype>`bytes`</rettype><retdesc>The generated video.</retdesc></docstring>

Generate a video from an input image.







<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.image_to_video.example">

Examples:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> video = client.image_to_video("cat.jpg", model="Wan-AI/Wan2.2-I2V-A14B", prompt="turn the cat into a tiger")
>>> with open("tiger.mp4", "wb") as f:
...     f.write(video)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>object_detection</name><anchor>huggingface_hub.InferenceClient.object_detection</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L1472</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "threshold", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO, PIL.Image.Image]`) --
  The image to detect objects on. It can be raw bytes, an image file, a URL to an online image, or a PIL Image.
- **model** (`str`, *optional*) --
  The model to use for object detection. Can be a model ID hosted on the Hugging Face Hub or a URL to a
  deployed Inference Endpoint. If not provided, the default recommended model for object detection (DETR) will be used.
- **threshold** (`float`, *optional*) --
  The probability necessary to make a prediction.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[ObjectDetectionOutputElement]`</rettype><retdesc>A list of [ObjectDetectionOutputElement](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.ObjectDetectionOutputElement) items containing the bounding boxes and associated attributes.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.
- `ValueError` -- 
  If the request output is not a list.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError` or `ValueError`</raisederrors></docstring>

Perform object detection on the given image using the specified model.

> [!WARNING]
> You must have `PIL` installed if you want to work with images (`pip install Pillow`).











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.object_detection.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.object_detection("people.jpg")
[ObjectDetectionOutputElement(score=0.9486683011054993, label='person', box=ObjectDetectionBoundingBox(xmin=59, ymin=39, xmax=420, ymax=510)), ...]
```

</ExampleCodeBlock>
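The `box` attribute in each output element holds absolute pixel coordinates (`xmin`, `ymin`, `xmax`, `ymax`). A small local sketch, useful e.g. to discard tiny detections, assuming you have unpacked those four coordinates (no network call involved):

```python
# Compute a bounding box's area in pixels from its corner coordinates.
# Degenerate boxes (xmax <= xmin or ymax <= ymin) yield an area of 0.
def box_area(xmin, ymin, xmax, ymax):
    return max(0, xmax - xmin) * max(0, ymax - ymin)

# Coordinates taken from the example output above.
print(box_area(59, 39, 420, 510))  # 170031
```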


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>question_answering</name><anchor>huggingface_hub.InferenceClient.question_answering</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L1520</source><parameters>[{"name": "question", "val": ": str"}, {"name": "context", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "align_to_words", "val": ": typing.Optional[bool] = None"}, {"name": "doc_stride", "val": ": typing.Optional[int] = None"}, {"name": "handle_impossible_answer", "val": ": typing.Optional[bool] = None"}, {"name": "max_answer_len", "val": ": typing.Optional[int] = None"}, {"name": "max_question_len", "val": ": typing.Optional[int] = None"}, {"name": "max_seq_len", "val": ": typing.Optional[int] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **question** (`str`) --
  Question to be answered.
- **context** (`str`) --
  The context of the question.
- **model** (`str`) --
  The model to use for the question answering task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint.
- **align_to_words** (`bool`, *optional*) --
  Attempts to align the answer to real words. Improves quality on space-separated languages. Might hurt
  on non-space-separated languages (like Japanese or Chinese).
- **doc_stride** (`int`, *optional*) --
  If the context is too long to fit with the question for the model, it will be split in several chunks
  with some overlap. This argument controls the size of that overlap.
- **handle_impossible_answer** (`bool`, *optional*) --
  Whether to accept impossible as an answer.
- **max_answer_len** (`int`, *optional*) --
  The maximum length of predicted answers (e.g., only answers with a shorter length are considered).
- **max_question_len** (`int`, *optional*) --
  The maximum length of the question after tokenization. It will be truncated if needed.
- **max_seq_len** (`int`, *optional*) --
  The maximum length of the total sentence (context + question) in tokens of each chunk passed to the
  model. The context will be split in several chunks (using doc_stride as overlap) if needed.
- **top_k** (`int`, *optional*) --
  The number of answers to return (chosen by order of likelihood). Note that fewer than
  top_k answers are returned if there are not enough options available within the context.</paramsdesc><paramgroups>0</paramgroups><rettype>Union[`QuestionAnsweringOutputElement`, list[QuestionAnsweringOutputElement](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.QuestionAnsweringOutputElement)]</rettype><retdesc>When top_k is 1 or not provided, it returns a single `QuestionAnsweringOutputElement`.
When top_k is greater than 1, it returns a list of `QuestionAnsweringOutputElement`.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Retrieve the answer to a question from a given text.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.question_answering.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.question_answering(question="What's my name?", context="My name is Clara and I live in Berkeley.")
QuestionAnsweringOutputElement(answer='Clara', end=16, score=0.9326565265655518, start=11)
```

</ExampleCodeBlock>
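Because the return type depends on `top_k` (a single element when `top_k` is 1 or unset, a list otherwise), downstream code often normalizes the result to a list before iterating. A minimal, hypothetical helper sketching that pattern:

```python
# Normalize question_answering output: wrap a single element in a list so
# callers can always iterate, regardless of the top_k value used.
def as_list(result):
    return result if isinstance(result, list) else [result]
```

For example, `for answer in as_list(client.question_answering(...)): ...` then works for any `top_k`.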


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>sentence_similarity</name><anchor>huggingface_hub.InferenceClient.sentence_similarity</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L1604</source><parameters>[{"name": "sentence", "val": ": str"}, {"name": "other_sentences", "val": ": list"}, {"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **sentence** (`str`) --
  The main sentence to compare to others.
- **other_sentences** (`list[str]`) --
  The list of sentences to compare to.
- **model** (`str`, *optional*) --
  The model to use for the sentence similarity task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended sentence similarity model will be used.
  Defaults to None.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[float]`</rettype><retdesc>The embedding representing the input text.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Compute the semantic similarity between a sentence and a list of other sentences by comparing their embeddings.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.sentence_similarity.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.sentence_similarity(
...     "Machine learning is so easy.",
...     other_sentences=[
...         "Deep learning is so straightforward.",
...         "This is so difficult, like rocket science.",
...         "I can't believe how much I struggled with this.",
...     ],
... )
[0.7785726189613342, 0.45876261591911316, 0.2906220555305481]
```

</ExampleCodeBlock>
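The returned scores line up index-for-index with `other_sentences`, so the closest match is simply the argmax. A pure-Python sketch using the scores from the example above (no API call needed):

```python
# Pick the sentence with the highest similarity score. Assumes scores[i]
# corresponds to other_sentences[i], as in the client's return value.
def most_similar(other_sentences, scores):
    best = max(range(len(scores)), key=scores.__getitem__)
    return other_sentences[best]

sentences = [
    "Deep learning is so straightforward.",
    "This is so difficult, like rocket science.",
    "I can't believe how much I struggled with this.",
]
scores = [0.7785726189613342, 0.45876261591911316, 0.2906220555305481]
print(most_similar(sentences, scores))  # Deep learning is so straightforward.
```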


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>summarization</name><anchor>huggingface_hub.InferenceClient.summarization</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L1657</source><parameters>[{"name": "text", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "clean_up_tokenization_spaces", "val": ": typing.Optional[bool] = None"}, {"name": "generate_parameters", "val": ": typing.Optional[dict[str, typing.Any]] = None"}, {"name": "truncation", "val": ": typing.Optional[ForwardRef('SummarizationTruncationStrategy')] = None"}]</parameters><paramsdesc>- **text** (`str`) --
  The input text to summarize.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. If not provided, the default recommended model for summarization will be used.
- **clean_up_tokenization_spaces** (`bool`, *optional*) --
  Whether to clean up the potential extra spaces in the text output.
- **generate_parameters** (`dict[str, Any]`, *optional*) --
  Additional parametrization of the text generation algorithm.
- **truncation** (`"SummarizationTruncationStrategy"`, *optional*) --
  The truncation strategy to use.</paramsdesc><paramgroups>0</paramgroups><rettype>[SummarizationOutput](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.SummarizationOutput)</rettype><retdesc>The generated summary text.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Generate a summary of a given text using a specified model.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.summarization.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.summarization("The Eiffel tower...")
SummarizationOutput(generated_text="The Eiffel tower is one of the most famous landmarks in the world....")
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>table_question_answering</name><anchor>huggingface_hub.InferenceClient.table_question_answering</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L1715</source><parameters>[{"name": "table", "val": ": dict"}, {"name": "query", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "padding", "val": ": typing.Optional[ForwardRef('Padding')] = None"}, {"name": "sequential", "val": ": typing.Optional[bool] = None"}, {"name": "truncation", "val": ": typing.Optional[bool] = None"}]</parameters><paramsdesc>- **table** (`dict[str, Any]`) --
  A table of data represented as a dict of lists, where keys are the column headers and each list holds
  that column's values; all lists must have the same length.
- **query** (`str`) --
  The query in plain text that you want to ask the table.
- **model** (`str`) --
  The model to use for the table-question-answering task. Can be a model ID hosted on the Hugging Face
  Hub or a URL to a deployed Inference Endpoint.
- **padding** (`"Padding"`, *optional*) --
  Activates and controls padding.
- **sequential** (`bool`, *optional*) --
  Whether to do inference sequentially or as a batch. Batching is faster, but models like SQA require the
  inference to be done sequentially to extract relations within sequences, given their conversational
  nature.
- **truncation** (`bool`, *optional*) --
  Activates and controls truncation.</paramsdesc><paramgroups>0</paramgroups><rettype>[TableQuestionAnsweringOutputElement](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.TableQuestionAnsweringOutputElement)</rettype><retdesc>a table question answering output containing the answer, coordinates, cells and the aggregator used.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Retrieve the answer to a question from information given in a table.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.table_question_answering.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> query = "How many stars does the transformers repository have?"
>>> table = {"Repository": ["Transformers", "Datasets", "Tokenizers"], "Stars": ["36542", "4512", "3934"]}
>>> client.table_question_answering(table, query, model="google/tapas-base-finetuned-wtq")
TableQuestionAnsweringOutputElement(answer='36542', coordinates=[[0, 1]], cells=['36542'], aggregator='AVERAGE')
```

</ExampleCodeBlock>
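Since the table must be a dict of equally sized lists (one list per column), a quick local guard before sending the request can catch malformed payloads early. A hedged sketch of such a check:

```python
# Verify that every column in a dict-of-lists table has the same length,
# matching the shape table_question_answering expects.
def check_table(table):
    lengths = {len(column) for column in table.values()}
    if len(lengths) > 1:
        raise ValueError(f"All columns must have the same length, got {sorted(lengths)}")
    return table

table = {"Repository": ["Transformers", "Datasets", "Tokenizers"], "Stars": ["36542", "4512", "3934"]}
check_table(table)  # passes: every column has 3 entries
```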


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>tabular_classification</name><anchor>huggingface_hub.InferenceClient.tabular_classification</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L1777</source><parameters>[{"name": "table", "val": ": dict"}, {"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **table** (`dict[str, Any]`) --
  Set of attributes to classify.
- **model** (`str`, *optional*) --
  The model to use for the tabular classification task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended tabular classification model will be used.
  Defaults to None.</paramsdesc><paramgroups>0</paramgroups><rettype>`List`</rettype><retdesc>a list of labels, one per row in the initial table.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Classify a target category (a group) based on a set of attributes.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.tabular_classification.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> table = {
...     "fixed_acidity": ["7.4", "7.8", "10.3"],
...     "volatile_acidity": ["0.7", "0.88", "0.32"],
...     "citric_acid": ["0", "0", "0.45"],
...     "residual_sugar": ["1.9", "2.6", "6.4"],
...     "chlorides": ["0.076", "0.098", "0.073"],
...     "free_sulfur_dioxide": ["11", "25", "5"],
...     "total_sulfur_dioxide": ["34", "67", "13"],
...     "density": ["0.9978", "0.9968", "0.9976"],
...     "pH": ["3.51", "3.2", "3.23"],
...     "sulphates": ["0.56", "0.68", "0.82"],
...     "alcohol": ["9.4", "9.8", "12.6"],
... }
>>> client.tabular_classification(table=table, model="julien-c/wine-quality")
["5", "5", "5"]
```

</ExampleCodeBlock>
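The expected payload is column-oriented (one list per attribute), while data often arrives row-oriented. A small pure-Python sketch converting a list of row dicts into the column layout used above:

```python
# Convert row-oriented records into the dict-of-lists (column-oriented)
# shape expected by tabular_classification / tabular_regression.
def rows_to_columns(rows):
    columns = {}
    for row in rows:
        for key, value in row.items():
            columns.setdefault(key, []).append(value)
    return columns

rows = [{"pH": "3.51", "alcohol": "9.4"}, {"pH": "3.2", "alcohol": "9.8"}]
print(rows_to_columns(rows))  # {'pH': ['3.51', '3.2'], 'alcohol': ['9.4', '9.8']}
```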


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>tabular_regression</name><anchor>huggingface_hub.InferenceClient.tabular_regression</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L1832</source><parameters>[{"name": "table", "val": ": dict"}, {"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **table** (`dict[str, Any]`) --
  Set of attributes stored in a table. The attributes used to predict the target can be both numerical and categorical.
- **model** (`str`, *optional*) --
  The model to use for the tabular regression task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended tabular regression model will be used.
  Defaults to None.</paramsdesc><paramgroups>0</paramgroups><rettype>`List`</rettype><retdesc>a list of predicted numerical target values.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Predict a numerical target value given a set of attributes/features in a table.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.tabular_regression.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> table = {
...     "Height": ["11.52", "12.48", "12.3778"],
...     "Length1": ["23.2", "24", "23.9"],
...     "Length2": ["25.4", "26.3", "26.5"],
...     "Length3": ["30", "31.2", "31.1"],
...     "Species": ["Bream", "Bream", "Bream"],
...     "Width": ["4.02", "4.3056", "4.6961"],
... }
>>> client.tabular_regression(table, model="scikit-learn/Fish-Weight")
[110, 120, 130]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>text_classification</name><anchor>huggingface_hub.InferenceClient.text_classification</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L1882</source><parameters>[{"name": "text", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}, {"name": "function_to_apply", "val": ": typing.Optional[ForwardRef('TextClassificationOutputTransform')] = None"}]</parameters><paramsdesc>- **text** (`str`) --
  A string to be classified.
- **model** (`str`, *optional*) --
  The model to use for the text classification task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended text classification model will be used.
  Defaults to None.
- **top_k** (`int`, *optional*) --
  When specified, limits the output to the top K most probable classes.
- **function_to_apply** (`"TextClassificationOutputTransform"`, *optional*) --
  The function to apply to the model outputs in order to retrieve the scores.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[TextClassificationOutputElement]`</rettype><retdesc>a list of [TextClassificationOutputElement](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.TextClassificationOutputElement) items containing the predicted label and associated probability.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Perform text classification (e.g. sentiment-analysis) on the given text.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.text_classification.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.text_classification("I like you")
[
    TextClassificationOutputElement(label='POSITIVE', score=0.9998695850372314),
    TextClassificationOutputElement(label='NEGATIVE', score=0.0001304351753788069),
]
```

</ExampleCodeBlock>
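
When post-processing the results, the most probable label can be selected with `max()`. A minimal sketch using simulated output values (a real call requires network access; the dataclass below is a stand-in for the real output type):

```py
# Post-processing sketch with simulated scores; a real call would use
# client.text_classification(...) and return the library's own dataclass.
from dataclasses import dataclass

@dataclass
class TextClassificationOutputElement:
    label: str
    score: float

# Simulated response, shaped like the example output above.
results = [
    TextClassificationOutputElement(label="POSITIVE", score=0.99987),
    TextClassificationOutputElement(label="NEGATIVE", score=0.00013),
]

# Pick the label with the highest probability.
best = max(results, key=lambda e: e.score)
print(best.label)  # POSITIVE
```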


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>text_generation</name><anchor>huggingface_hub.InferenceClient.text_generation</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L2090</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "details", "val": ": typing.Optional[bool] = None"}, {"name": "stream", "val": ": typing.Optional[bool] = None"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "adapter_id", "val": ": typing.Optional[str] = None"}, {"name": "best_of", "val": ": typing.Optional[int] = None"}, {"name": "decoder_input_details", "val": ": typing.Optional[bool] = None"}, {"name": "do_sample", "val": ": typing.Optional[bool] = None"}, {"name": "frequency_penalty", "val": ": typing.Optional[float] = None"}, {"name": "grammar", "val": ": typing.Optional[huggingface_hub.inference._generated.types.text_generation.TextGenerationInputGrammarType] = None"}, {"name": "max_new_tokens", "val": ": typing.Optional[int] = None"}, {"name": "repetition_penalty", "val": ": typing.Optional[float] = None"}, {"name": "return_full_text", "val": ": typing.Optional[bool] = None"}, {"name": "seed", "val": ": typing.Optional[int] = None"}, {"name": "stop", "val": ": typing.Optional[list[str]] = None"}, {"name": "stop_sequences", "val": ": typing.Optional[list[str]] = None"}, {"name": "temperature", "val": ": typing.Optional[float] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}, {"name": "top_n_tokens", "val": ": typing.Optional[int] = None"}, {"name": "top_p", "val": ": typing.Optional[float] = None"}, {"name": "truncate", "val": ": typing.Optional[int] = None"}, {"name": "typical_p", "val": ": typing.Optional[float] = None"}, {"name": "watermark", "val": ": typing.Optional[bool] = None"}]</parameters><paramsdesc>- **prompt** (`str`) --
  Input text.
- **details** (`bool`, *optional*) --
  By default, text_generation returns a string. Pass `details=True` if you want a detailed output (tokens,
  probabilities, seed, finish reason, etc.). Only available for models running with the
  `text-generation-inference` backend.
- **stream** (`bool`, *optional*) --
  By default, text_generation returns the full generated text. Pass `stream=True` if you want a stream of
  tokens to be returned. Only available for models running with the `text-generation-inference`
  backend.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None.
- **adapter_id** (`str`, *optional*) --
  LoRA adapter ID.
- **best_of** (`int`, *optional*) --
  Generate `best_of` sequences and return the one with the highest token logprobs.
- **decoder_input_details** (`bool`, *optional*) --
  Return the decoder input token logprobs and ids. You must set `details=True` as well for it to be taken
  into account. Defaults to `False`.
- **do_sample** (`bool`, *optional*) --
  Whether to activate logits sampling.
- **frequency_penalty** (`float`, *optional*) --
  Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in
  the text so far, decreasing the model's likelihood to repeat the same line verbatim.
- **grammar** ([TextGenerationInputGrammarType](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.TextGenerationInputGrammarType), *optional*) --
  Grammar constraints. Can be either a JSONSchema or a regex.
- **max_new_tokens** (`int`, *optional*) --
  Maximum number of generated tokens. Defaults to 100.
- **repetition_penalty** (`float`, *optional*) --
  The parameter for repetition penalty. 1.0 means no penalty. See [this
  paper](https://arxiv.org/pdf/1909.05858.pdf) for more details.
- **return_full_text** (`bool`, *optional*) --
  Whether to prepend the prompt to the generated text.
- **seed** (`int`, *optional*) --
  Random sampling seed.
- **stop** (`list[str]`, *optional*) --
  Stop generating tokens if a member of `stop` is generated.
- **stop_sequences** (`list[str]`, *optional*) --
  Deprecated argument. Use `stop` instead.
- **temperature** (`float`, *optional*) --
  The value used to modulate the logits distribution.
- **top_n_tokens** (`int`, *optional*) --
  Return information about the `top_n_tokens` most likely tokens at each generation step, instead of
  just the sampled token.
- **top_k** (`int`, *optional*) --
  The number of highest probability vocabulary tokens to keep for top-k-filtering.
- **top_p** (`float`, *optional*) --
  If set to < 1, only the smallest set of most probable tokens with probabilities that add up to `top_p` or
  higher are kept for generation.
- **truncate** (`int`, *optional*) --
  Truncate input tokens to the given size.
- **typical_p** (`float`, *optional*) --
  Typical decoding mass. See [Typical Decoding for Natural Language Generation](https://arxiv.org/abs/2202.00666) for more information.
- **watermark** (`bool`, *optional*) --
  Watermarking with [A Watermark for Large Language Models](https://arxiv.org/abs/2301.10226)</paramsdesc><paramgroups>0</paramgroups><rettype>`Union[str, TextGenerationOutput, Iterable[str], Iterable[TextGenerationStreamOutput]]`</rettype><retdesc>Generated text returned from the server:
- if `stream=False` and `details=False`, the generated text is returned as a `str` (default)
- if `stream=True` and `details=False`, the generated text is returned token by token as an `Iterable[str]`
- if `stream=False` and `details=True`, the generated text is returned with more details as a [TextGenerationOutput](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.TextGenerationOutput)
- if `details=True` and `stream=True`, the generated text is returned token by token as an iterable of [TextGenerationStreamOutput](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.TextGenerationStreamOutput)</retdesc><raises>- ``ValidationError`` -- 
  If input values are not valid. No HTTP call is made to the server.
- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>``ValidationError`` or [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Given a prompt, generate the following text.

> [!TIP]
> If you want to generate a response from chat messages, you should use the [InferenceClient.chat_completion()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.chat_completion) method.
> It accepts a list of messages instead of a single text prompt and handles the chat templating for you.
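
The `top_k` and `top_p` parameters above can be illustrated with a small standalone sketch of the filtering semantics (a pure-Python illustration; the server-side implementation may differ):

```py
# Illustration of top-k and top-p (nucleus) filtering over a token distribution.
def top_k_filter(probs, k):
    # Keep only the k most probable tokens.
    return dict(sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k])

def top_p_filter(probs, p):
    # Keep the smallest set of tokens whose cumulative probability reaches p.
    kept, total = {}, 0.0
    for tok, prob in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[tok] = prob
        total += prob
        if total >= p:
            break
    return kept

# Hypothetical next-token distribution.
probs = {"the": 0.5, "a": 0.3, "an": 0.15, "zebra": 0.05}
print(top_k_filter(probs, 2))    # {'the': 0.5, 'a': 0.3}
print(top_p_filter(probs, 0.9))  # {'the': 0.5, 'a': 0.3, 'an': 0.15}
```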











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.text_generation.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()

# Case 1: generate text
>>> client.text_generation("The huggingface_hub library is ", max_new_tokens=12)
'100% open source and built to be easy to use.'

# Case 2: iterate over the generated tokens. Useful for large generation.
>>> for token in client.text_generation("The huggingface_hub library is ", max_new_tokens=12, stream=True):
...     print(token)
100
%
open
source
and
built
to
be
easy
to
use
.

# Case 3: get more details about the generation process.
>>> client.text_generation("The huggingface_hub library is ", max_new_tokens=12, details=True)
TextGenerationOutput(
    generated_text='100% open source and built to be easy to use.',
    details=TextGenerationDetails(
        finish_reason='length',
        generated_tokens=12,
        seed=None,
        prefill=[
            TextGenerationPrefillOutputToken(id=487, text='The', logprob=None),
            TextGenerationPrefillOutputToken(id=53789, text=' hugging', logprob=-13.171875),
            (...)
            TextGenerationPrefillOutputToken(id=204, text=' ', logprob=-7.0390625)
        ],
        tokens=[
            TokenElement(id=1425, text='100', logprob=-1.0175781, special=False),
            TokenElement(id=16, text='%', logprob=-0.0463562, special=False),
            (...)
            TokenElement(id=25, text='.', logprob=-0.5703125, special=False)
        ],
        best_of_sequences=None
    )
)

# Case 4: iterate over the generated tokens with more details.
# Last object is more complete, containing the full generated text and the finish reason.
>>> for details in client.text_generation("The huggingface_hub library is ", max_new_tokens=12, details=True, stream=True):
...     print(details)
...
TextGenerationStreamOutput(token=TokenElement(id=1425, text='100', logprob=-1.0175781, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=16, text='%', logprob=-0.0463562, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=1314, text=' open', logprob=-1.3359375, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=3178, text=' source', logprob=-0.28100586, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=273, text=' and', logprob=-0.5961914, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=3426, text=' built', logprob=-1.9423828, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=271, text=' to', logprob=-1.4121094, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=314, text=' be', logprob=-1.5224609, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=1833, text=' easy', logprob=-2.1132812, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=271, text=' to', logprob=-0.08520508, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=745, text=' use', logprob=-0.39453125, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(
    id=25,
    text='.',
    logprob=-0.5703125,
    special=False),
    generated_text='100% open source and built to be easy to use.',
    details=TextGenerationStreamOutputStreamDetails(finish_reason='length', generated_tokens=12, seed=None)
)

# Case 5: generate constrained output using grammar
>>> response = client.text_generation(
...     prompt="I saw a puppy a cat and a raccoon during my bike ride in the park",
...     model="HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1",
...     max_new_tokens=100,
...     repetition_penalty=1.3,
...     grammar={
...         "type": "json",
...         "value": {
...             "properties": {
...                 "location": {"type": "string"},
...                 "activity": {"type": "string"},
...                 "animals_seen": {"type": "integer", "minimum": 1, "maximum": 5},
...                 "animals": {"type": "array", "items": {"type": "string"}},
...             },
...             "required": ["location", "activity", "animals_seen", "animals"],
...         },
...     },
... )
>>> import json
>>> json.loads(response)
{
    "activity": "bike riding",
    "animals": ["puppy", "cat", "raccoon"],
    "animals_seen": 3,
    "location": "park"
}
```

</ExampleCodeBlock>
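
When streaming (Case 2 above), the full text can be rebuilt by concatenating the yielded tokens. A minimal sketch with a simulated token stream (real tokens come from `client.text_generation(..., stream=True)` and require network access):

```py
# Reassembling a streamed generation into the full text.
def accumulate(stream):
    # Each yielded item is a text chunk; concatenation restores the output.
    return "".join(stream)

# Simulated stream, shaped like the tokens printed in Case 2 above.
tokens = ["100", "%", " open", " source", " and", " built",
          " to", " be", " easy", " to", " use", "."]
full_text = accumulate(iter(tokens))
print(full_text)  # 100% open source and built to be easy to use.
```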


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>text_to_image</name><anchor>huggingface_hub.InferenceClient.text_to_image</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L2429</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "guidance_scale", "val": ": typing.Optional[float] = None"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "scheduler", "val": ": typing.Optional[str] = None"}, {"name": "seed", "val": ": typing.Optional[int] = None"}, {"name": "extra_body", "val": ": typing.Optional[dict[str, typing.Any]] = None"}]</parameters><paramsdesc>- **prompt** (`str`) --
  The prompt to generate an image from.
- **negative_prompt** (`str`, *optional*) --
  One prompt to guide what NOT to include in image generation.
- **height** (`int`, *optional*) --
  The height in pixels of the output image.
- **width** (`int`, *optional*) --
  The width in pixels of the output image.
- **num_inference_steps** (`int`, *optional*) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  prompt, but values too high may cause saturation and other artifacts.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. If not provided, the default recommended text-to-image model will be used.
  Defaults to None.
- **scheduler** (`str`, *optional*) --
  Override the scheduler with a compatible one.
- **seed** (`int`, *optional*) --
  Seed for the random number generator.
- **extra_body** (`dict[str, Any]`, *optional*) --
  Additional provider-specific parameters to pass to the model. Refer to the provider's documentation
  for supported parameters.</paramsdesc><paramgroups>0</paramgroups><rettype>`Image`</rettype><retdesc>The generated image.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Generate an image based on a given text using a specified model.

> [!WARNING]
> You must have `PIL` installed if you want to work with images (`pip install Pillow`).

> [!TIP]
> You can pass provider-specific parameters to the model by using the `extra_body` argument.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.text_to_image.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()

>>> image = client.text_to_image("An astronaut riding a horse on the moon.")
>>> image.save("astronaut.png")

>>> image = client.text_to_image(
...     "An astronaut riding a horse on the moon.",
...     negative_prompt="low resolution, blurry",
...     model="stabilityai/stable-diffusion-2-1",
... )
>>> image.save("better_astronaut.png")
```

</ExampleCodeBlock>
Example using a third-party provider directly. Usage will be billed on your fal.ai account.
<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.text_to_image.example-2">

```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="fal-ai",  # Use fal.ai provider
...     api_key="fal-ai-api-key",  # Pass your fal.ai API key
... )
>>> image = client.text_to_image(
...     "A majestic lion in a fantasy forest",
...     model="black-forest-labs/FLUX.1-schnell",
... )
>>> image.save("lion.png")
```

</ExampleCodeBlock>

Example using a third-party provider through Hugging Face Routing. Usage will be billed on your Hugging Face account.
<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.text_to_image.example-3">

```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="replicate",  # Use replicate provider
...     api_key="hf_...",  # Pass your HF token
... )
>>> image = client.text_to_image(
...     "An astronaut riding a horse on the moon.",
...     model="black-forest-labs/FLUX.1-dev",
... )
>>> image.save("astronaut.png")
```

</ExampleCodeBlock>

Example using Replicate provider with extra parameters
<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.text_to_image.example-4">

```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="replicate",  # Use replicate provider
...     api_key="hf_...",  # Pass your HF token
... )
>>> image = client.text_to_image(
...     "An astronaut riding a horse on the moon.",
...     model="black-forest-labs/FLUX.1-schnell",
...     extra_body={"output_quality": 100},
... )
>>> image.save("astronaut.png")
```

</ExampleCodeBlock>



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>text_to_speech</name><anchor>huggingface_hub.InferenceClient.text_to_speech</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L2666</source><parameters>[{"name": "text", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "do_sample", "val": ": typing.Optional[bool] = None"}, {"name": "early_stopping", "val": ": typing.Union[bool, ForwardRef('TextToSpeechEarlyStoppingEnum'), NoneType] = None"}, {"name": "epsilon_cutoff", "val": ": typing.Optional[float] = None"}, {"name": "eta_cutoff", "val": ": typing.Optional[float] = None"}, {"name": "max_length", "val": ": typing.Optional[int] = None"}, {"name": "max_new_tokens", "val": ": typing.Optional[int] = None"}, {"name": "min_length", "val": ": typing.Optional[int] = None"}, {"name": "min_new_tokens", "val": ": typing.Optional[int] = None"}, {"name": "num_beam_groups", "val": ": typing.Optional[int] = None"}, {"name": "num_beams", "val": ": typing.Optional[int] = None"}, {"name": "penalty_alpha", "val": ": typing.Optional[float] = None"}, {"name": "temperature", "val": ": typing.Optional[float] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}, {"name": "top_p", "val": ": typing.Optional[float] = None"}, {"name": "typical_p", "val": ": typing.Optional[float] = None"}, {"name": "use_cache", "val": ": typing.Optional[bool] = None"}, {"name": "extra_body", "val": ": typing.Optional[dict[str, typing.Any]] = None"}]</parameters><paramsdesc>- **text** (`str`) --
  The text to synthesize.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. If not provided, the default recommended text-to-speech model will be used.
  Defaults to None.
- **do_sample** (`bool`, *optional*) --
  Whether to use sampling instead of greedy decoding when generating new tokens.
- **early_stopping** (`Union[bool, "TextToSpeechEarlyStoppingEnum"]`, *optional*) --
  Controls the stopping condition for beam-based methods.
- **epsilon_cutoff** (`float`, *optional*) --
  If set to float strictly between 0 and 1, only tokens with a conditional probability greater than
  epsilon_cutoff will be sampled. In the paper, suggested values range from 3e-4 to 9e-4, depending on
  the size of the model. See [Truncation Sampling as Language Model
  Desmoothing](https://hf.co/papers/2210.15191) for more details.
- **eta_cutoff** (`float`, *optional*) --
  Eta sampling is a hybrid of locally typical sampling and epsilon sampling. If set to float strictly
  between 0 and 1, a token is only considered if it is greater than either eta_cutoff or sqrt(eta_cutoff)
  * exp(-entropy(softmax(next_token_logits))). The latter term is intuitively the expected next token
  probability, scaled by sqrt(eta_cutoff). In the paper, suggested values range from 3e-4 to 2e-3,
  depending on the size of the model. See [Truncation Sampling as Language Model
  Desmoothing](https://hf.co/papers/2210.15191) for more details.
- **max_length** (`int`, *optional*) --
  The maximum length (in tokens) of the generated text, including the input.
- **max_new_tokens** (`int`, *optional*) --
  The maximum number of tokens to generate. Takes precedence over max_length.
- **min_length** (`int`, *optional*) --
  The minimum length (in tokens) of the generated text, including the input.
- **min_new_tokens** (`int`, *optional*) --
  The minimum number of tokens to generate. Takes precedence over min_length.
- **num_beam_groups** (`int`, *optional*) --
  Number of groups to divide num_beams into in order to ensure diversity among different groups of beams.
  See [this paper](https://hf.co/papers/1610.02424) for more details.
- **num_beams** (`int`, *optional*) --
  Number of beams to use for beam search.
- **penalty_alpha** (`float`, *optional*) --
  The value balances the model confidence and the degeneration penalty in contrastive search decoding.
- **temperature** (`float`, *optional*) --
  The value used to modulate the next token probabilities.
- **top_k** (`int`, *optional*) --
  The number of highest probability vocabulary tokens to keep for top-k-filtering.
- **top_p** (`float`, *optional*) --
  If set to float < 1, only the smallest set of most probable tokens with probabilities that add up to
  top_p or higher are kept for generation.
- **typical_p** (`float`, *optional*) --
  Local typicality measures how similar the conditional probability of predicting a target token next is
  to the expected conditional probability of predicting a random token next, given the partial text
  already generated. If set to float < 1, the smallest set of the most locally typical tokens with
  probabilities that add up to typical_p or higher are kept for generation. See [this
  paper](https://hf.co/papers/2202.00666) for more details.
- **use_cache** (`bool`, *optional*) --
  Whether the model should use the past key/values attentions to speed up decoding.
- **extra_body** (`dict[str, Any]`, *optional*) --
  Additional provider-specific parameters to pass to the model. Refer to the provider's documentation
  for supported parameters.</paramsdesc><paramgroups>0</paramgroups><rettype>`bytes`</rettype><retdesc>The generated audio.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Synthesize audio of a voice pronouncing a given text.

> [!TIP]
> You can pass provider-specific parameters to the model by using the `extra_body` argument.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.text_to_speech.example">

Example:
```py
>>> from pathlib import Path
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()

>>> audio = client.text_to_speech("Hello world")
>>> Path("hello_world.flac").write_bytes(audio)
```

</ExampleCodeBlock>

Example using a third-party provider directly. Usage will be billed on your Replicate account.
<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.text_to_speech.example-2">

```py
>>> from pathlib import Path
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="replicate",
...     api_key="your-replicate-api-key",  # Pass your Replicate API key directly
... )
>>> audio = client.text_to_speech(
...     text="Hello world",
...     model="OuteAI/OuteTTS-0.3-500M",
... )
>>> Path("hello_world.flac").write_bytes(audio)
```

</ExampleCodeBlock>

Example using a third-party provider through Hugging Face Routing. Usage will be billed on your Hugging Face account.
<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.text_to_speech.example-3">

```py
>>> from pathlib import Path
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="replicate",
...     api_key="hf_...",  # Pass your HF token
... )
>>> audio = client.text_to_speech(
...     text="Hello world",
...     model="OuteAI/OuteTTS-0.3-500M",
... )
>>> Path("hello_world.flac").write_bytes(audio)
```

</ExampleCodeBlock>
Example using Replicate provider with extra parameters
<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.text_to_speech.example-4">

```py
>>> from pathlib import Path
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="replicate",  # Use replicate provider
...     api_key="hf_...",  # Pass your HF token
... )
>>> audio = client.text_to_speech(
...     "Hello, my name is Kokoro, an awesome text-to-speech model.",
...     model="hexgrad/Kokoro-82M",
...     extra_body={"voice": "af_nicole"},
... )
>>> Path("hello.flac").write_bytes(audio)
```

</ExampleCodeBlock>

Example of music generation using "YuE-s1-7B-anneal-en-cot" on fal.ai
<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.text_to_speech.example-5">

```py
>>> from huggingface_hub import InferenceClient
>>> lyrics = '''
... [verse]
... In the town where I was born
... Lived a man who sailed to sea
... And he told us of his life
... In the land of submarines
... So we sailed on to the sun
... 'Til we found a sea of green
... And we lived beneath the waves
... In our yellow submarine
...
... [chorus]
... We all live in a yellow submarine
... Yellow submarine, yellow submarine
... We all live in a yellow submarine
... Yellow submarine, yellow submarine
... '''
>>> genres = "pavarotti-style tenor voice"
>>> client = InferenceClient(
...     provider="fal-ai",
...     model="m-a-p/YuE-s1-7B-anneal-en-cot",
...     api_key=...,
... )
>>> audio = client.text_to_speech(lyrics, extra_body={"genres": genres})
>>> with open("output.mp3", "wb") as f:
...     f.write(audio)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>text_to_video</name><anchor>huggingface_hub.InferenceClient.text_to_video</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L2569</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "guidance_scale", "val": ": typing.Optional[float] = None"}, {"name": "negative_prompt", "val": ": typing.Optional[list[str]] = None"}, {"name": "num_frames", "val": ": typing.Optional[float] = None"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "seed", "val": ": typing.Optional[int] = None"}, {"name": "extra_body", "val": ": typing.Optional[dict[str, typing.Any]] = None"}]</parameters><paramsdesc>- **prompt** (`str`) --
  The prompt to generate a video from.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. If not provided, the default recommended text-to-video model will be used.
  Defaults to None.
- **guidance_scale** (`float`, *optional*) --
  A higher guidance scale value encourages the model to generate videos closely linked to the text
  prompt, but values too high may cause saturation and other artifacts.
- **negative_prompt** (`list[str]`, *optional*) --
  One or several prompts to guide what NOT to include in video generation.
- **num_frames** (`float`, *optional*) --
  The number of video frames to generate.
- **num_inference_steps** (`int`, *optional*) --
  The number of denoising steps. More denoising steps usually lead to a higher quality video at the
  expense of slower inference.
- **seed** (`int`, *optional*) --
  Seed for the random number generator.
- **extra_body** (`dict[str, Any]`, *optional*) --
  Additional provider-specific parameters to pass to the model. Refer to the provider's documentation
  for supported parameters.</paramsdesc><paramgroups>0</paramgroups><rettype>`bytes`</rettype><retdesc>The generated video.</retdesc></docstring>

Generate a video based on a given text.

> [!TIP]
> You can pass provider-specific parameters to the model by using the `extra_body` argument.








Example using a third-party provider directly. Usage will be billed on your fal.ai account.
<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.text_to_video.example">

```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="fal-ai",  # Using fal.ai provider
...     api_key="fal-ai-api-key",  # Pass your fal.ai API key
... )
>>> video = client.text_to_video(
...     "A majestic lion running in a fantasy forest",
...     model="tencent/HunyuanVideo",
... )
>>> with open("lion.mp4", "wb") as file:
...     file.write(video)
```

</ExampleCodeBlock>

Example using a third-party provider through Hugging Face Routing. Usage will be billed on your Hugging Face account.
<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.text_to_video.example-2">

```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="replicate",  # Using replicate provider
...     api_key="hf_...",  # Pass your HF token
... )
>>> video = client.text_to_video(
...     "A cat running in a park",
...     model="genmo/mochi-1-preview",
... )
>>> with open("cat.mp4", "wb") as file:
...     file.write(video)
```

</ExampleCodeBlock>



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>token_classification</name><anchor>huggingface_hub.InferenceClient.token_classification</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L2874</source><parameters>[{"name": "text", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "aggregation_strategy", "val": ": typing.Optional[ForwardRef('TokenClassificationAggregationStrategy')] = None"}, {"name": "ignore_labels", "val": ": typing.Optional[list[str]] = None"}, {"name": "stride", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **text** (`str`) --
  A string to be classified.
- **model** (`str`, *optional*) --
  The model to use for the token classification task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended token classification model will be used.
  Defaults to None.
- **aggregation_strategy** (`"TokenClassificationAggregationStrategy"`, *optional*) --
  The strategy used to fuse tokens based on model predictions.
- **ignore_labels** (`list[str]`, *optional*) --
  A list of labels to ignore.
- **stride** (`int`, *optional*) --
  The number of overlapping tokens between chunks when splitting the input text.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[TokenClassificationOutputElement]`</rettype><retdesc>List of [TokenClassificationOutputElement](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.TokenClassificationOutputElement) items containing the entity group, confidence score, word, start and end index.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Perform token classification on the given text.
Commonly used for sentence parsing, either grammatical or Named Entity Recognition (NER), to identify the keywords contained in a text.
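The `start` and `end` fields of each output element are character offsets into the input string, so entity substrings can be recovered directly from the original text. A minimal post-processing sketch, using plain dicts that mimic the shape of `TokenClassificationOutputElement` for illustration (the score threshold is an arbitrary choice):

```python
# Sketch: extracting entity substrings from token-classification output
# using the start/end character offsets. The dicts below stand in for
# TokenClassificationOutputElement items returned by the client.
text = "My name is Sarah Jessica Parker but you can call me Jessica"
predictions = [
    {"entity_group": "PER", "score": 0.997, "start": 11, "end": 31},
    {"entity_group": "PER", "score": 0.977, "start": 52, "end": 59},
]

entities = [
    (p["entity_group"], text[p["start"]:p["end"]])
    for p in predictions
    if p["score"] >= 0.9  # drop low-confidence predictions
]
# entities == [("PER", "Sarah Jessica Parker"), ("PER", "Jessica")]
```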











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.token_classification.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.token_classification("My name is Sarah Jessica Parker but you can call me Jessica")
[
    TokenClassificationOutputElement(
        entity_group='PER',
        score=0.9971321225166321,
        word='Sarah Jessica Parker',
        start=11,
        end=31,
    ),
    TokenClassificationOutputElement(
        entity_group='PER',
        score=0.9773476123809814,
        word='Jessica',
        start=52,
        end=59,
    )
]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>translation</name><anchor>huggingface_hub.InferenceClient.translation</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L2949</source><parameters>[{"name": "text", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "src_lang", "val": ": typing.Optional[str] = None"}, {"name": "tgt_lang", "val": ": typing.Optional[str] = None"}, {"name": "clean_up_tokenization_spaces", "val": ": typing.Optional[bool] = None"}, {"name": "truncation", "val": ": typing.Optional[ForwardRef('TranslationTruncationStrategy')] = None"}, {"name": "generate_parameters", "val": ": typing.Optional[dict[str, typing.Any]] = None"}]</parameters><paramsdesc>- **text** (`str`) --
  A string to be translated.
- **model** (`str`, *optional*) --
  The model to use for the translation task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended translation model will be used.
  Defaults to None.
- **src_lang** (`str`, *optional*) --
  The source language of the text. Required for models that can translate from multiple languages.
- **tgt_lang** (`str`, *optional*) --
  Target language to translate to. Required for models that can translate to multiple languages.
- **clean_up_tokenization_spaces** (`bool`, *optional*) --
  Whether to clean up the potential extra spaces in the text output.
- **truncation** (`"TranslationTruncationStrategy"`, *optional*) --
  The truncation strategy to use.
- **generate_parameters** (`dict[str, Any]`, *optional*) --
  Additional parametrization of the text generation algorithm.</paramsdesc><paramgroups>0</paramgroups><rettype>[TranslationOutput](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.TranslationOutput)</rettype><retdesc>The generated translated text.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.
- ``ValueError`` -- 
  If only one of the `src_lang` and `tgt_lang` arguments is provided.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError` or ``ValueError``</raisederrors></docstring>

Convert text from one language to another.

Check out https://huggingface.co/tasks/translation for more information on how to choose the best model for
your specific use case. The source and target languages are usually fixed by the model.
However, some models support multiple source or target languages; when working with one of these models,
use the `src_lang` and `tgt_lang` arguments to specify the language pair.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.translation.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.translation("My name is Wolfgang and I live in Berlin")
'Mein Name ist Wolfgang und ich lebe in Berlin.'
>>> client.translation("My name is Wolfgang and I live in Berlin", model="Helsinki-NLP/opus-mt-en-fr")
TranslationOutput(translation_text="Je m'appelle Wolfgang et je vis à Berlin.")
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.translation.example-2">

Specifying languages:
```py
>>> client.translation("My name is Sarah Jessica Parker but you can call me Jessica", model="facebook/mbart-large-50-many-to-many-mmt", src_lang="en_XX", tgt_lang="fr_XX")
"Mon nom est Sarah Jessica Parker mais vous pouvez m'appeler Jessica"
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>visual_question_answering</name><anchor>huggingface_hub.InferenceClient.visual_question_answering</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L3038</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "question", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO, PIL.Image.Image]`) --
  The input image for the context. It can be raw bytes, an image file, a URL to an online image, or a PIL Image.
- **question** (`str`) --
  Question to be answered.
- **model** (`str`, *optional*) --
  The model to use for the visual question answering task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended visual question answering model will be used.
  Defaults to None.
- **top_k** (`int`, *optional*) --
  The number of answers to return (will be chosen by order of likelihood). Note that fewer than
  `top_k` answers are returned if there are not enough options available within the context.
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>``InferenceTimeoutError`` or `HfHubHTTPError`</raisederrors></docstring>

Answer open-ended questions based on an image.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.visual_question_answering.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.visual_question_answering(
...     image="https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg",
...     question="What is the animal doing?"
... )
[
    VisualQuestionAnsweringOutputElement(score=0.778609573841095, answer='laying down'),
    VisualQuestionAnsweringOutputElement(score=0.6957435607910156, answer='sitting'),
]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>zero_shot_classification</name><anchor>huggingface_hub.InferenceClient.zero_shot_classification</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L3097</source><parameters>[{"name": "text", "val": ": str"}, {"name": "candidate_labels", "val": ": list"}, {"name": "multi_label", "val": ": typing.Optional[bool] = False"}, {"name": "hypothesis_template", "val": ": typing.Optional[str] = None"}, {"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **text** (`str`) --
  The input text to classify.
- **candidate_labels** (`list[str]`) --
  The set of possible class labels to classify the text into.
- **labels** (`list[str]`, *optional*) --
  (deprecated) List of strings. Each string is the verbalization of a possible label for the input text.
- **multi_label** (`bool`, *optional*) --
  Whether multiple candidate labels can be true. If false, the scores are normalized such that the sum of
  the label likelihoods for each sequence is 1. If true, the labels are considered independent and
  probabilities are normalized for each candidate.
- **hypothesis_template** (`str`, *optional*) --
  The sentence used in conjunction with `candidate_labels` to attempt the text classification by
  replacing the placeholder with the candidate labels.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. This parameter overrides the model defined at the instance level. If not provided, the default recommended zero-shot classification model will be used.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[ZeroShotClassificationOutputElement]`</rettype><retdesc>List of [ZeroShotClassificationOutputElement](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.ZeroShotClassificationOutputElement) items containing the predicted labels and their confidence.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Classify an input text against a set of candidate labels.
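The `multi_label` flag changes how scores are normalized: with `multi_label=False` the scores form a single distribution over all candidate labels and sum to 1, while with `multi_label=True` each label is scored independently. A runnable sketch of this normalization, illustrative only and not the library's actual implementation (the logit values are made up):

```python
import math

# Illustrative sketch of zero-shot score normalization.
# multi_label=False: entailment logits are softmaxed across all candidate
# labels, so the scores sum to 1.
# multi_label=True: each label's (contradiction, entailment) pair is
# softmaxed independently, so scores need not sum to 1.
def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

entailment_logits = {"space & cosmos": 2.1, "microbiology": -1.3, "robots": -0.8}
contradiction_logits = {"space & cosmos": -1.5, "microbiology": 1.0, "robots": 0.4}

# multi_label=False: one distribution over all labels
single = dict(zip(entailment_logits, softmax(list(entailment_logits.values()))))

# multi_label=True: independent binary decision per label
multi = {
    label: softmax([contradiction_logits[label], e])[1]
    for label, e in entailment_logits.items()
}
```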











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.zero_shot_classification.example">

Example comparing the default `multi_label=False` with `multi_label=True`:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> text = (
...     "A new model offers an explanation for how the Galilean satellites formed around the solar system's"
...     "largest world. Konstantin Batygin did not set out to solve one of the solar system's most puzzling"
...     " mysteries when he went for a run up a hill in Nice, France."
... )
>>> labels = ["space & cosmos", "scientific discovery", "microbiology", "robots", "archeology"]
>>> client.zero_shot_classification(text, labels)
[
    ZeroShotClassificationOutputElement(label='scientific discovery', score=0.7961668968200684),
    ZeroShotClassificationOutputElement(label='space & cosmos', score=0.18570658564567566),
    ZeroShotClassificationOutputElement(label='microbiology', score=0.00730885099619627),
    ZeroShotClassificationOutputElement(label='archeology', score=0.006258360575884581),
    ZeroShotClassificationOutputElement(label='robots', score=0.004559356719255447),
]
>>> client.zero_shot_classification(text, labels, multi_label=True)
[
    ZeroShotClassificationOutputElement(label='scientific discovery', score=0.9829297661781311),
    ZeroShotClassificationOutputElement(label='space & cosmos', score=0.755190908908844),
    ZeroShotClassificationOutputElement(label='microbiology', score=0.0005462635890580714),
    ZeroShotClassificationOutputElement(label='archeology', score=0.00047131875180639327),
    ZeroShotClassificationOutputElement(label='robots', score=0.00030448526376858354),
]
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.zero_shot_classification.example-2">

Example with `multi_label=True` and a custom `hypothesis_template`:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.zero_shot_classification(
...    text="I really like our dinner and I'm very happy. I don't like the weather though.",
...    labels=["positive", "negative", "pessimistic", "optimistic"],
...    multi_label=True,
...    hypothesis_template="This text is {} towards the weather"
... )
[
    ZeroShotClassificationOutputElement(label='negative', score=0.9231801629066467),
    ZeroShotClassificationOutputElement(label='pessimistic', score=0.8760990500450134),
    ZeroShotClassificationOutputElement(label='optimistic', score=0.0008674879791215062),
    ZeroShotClassificationOutputElement(label='positive', score=0.0005250611575320363)
]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>zero_shot_image_classification</name><anchor>huggingface_hub.InferenceClient.zero_shot_image_classification</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L3203</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "candidate_labels", "val": ": list"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "hypothesis_template", "val": ": typing.Optional[str] = None"}, {"name": "labels", "val": ": list = None"}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO, PIL.Image.Image]`) --
  The input image to caption. It can be raw bytes, an image file, a URL to an online image, or a PIL Image.
- **candidate_labels** (`list[str]`) --
  The candidate labels for this image.
- **labels** (`list[str]`, *optional*) --
  (deprecated) List of string possible labels. There must be at least 2 labels.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. This parameter overrides the model defined at the instance level. If not provided, the default recommended zero-shot image classification model will be used.
- **hypothesis_template** (`str`, *optional*) --
  The sentence used in conjunction with `candidate_labels` to attempt the image classification by
  replacing the placeholder with the candidate labels.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[ZeroShotImageClassificationOutputElement]`</rettype><retdesc>List of [ZeroShotImageClassificationOutputElement](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.ZeroShotImageClassificationOutputElement) items containing the predicted labels and their confidence.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Provide an input image and a set of candidate text labels to predict which labels best describe the image.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.zero_shot_image_classification.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()

>>> client.zero_shot_image_classification(
...     "https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg",
...     candidate_labels=["dog", "cat", "horse"],
... )
[ZeroShotImageClassificationOutputElement(label='dog', score=0.956),...]
```

</ExampleCodeBlock>


</div></div>

## Async Inference Client[[huggingface_hub.AsyncInferenceClient]]

An async version of the client is also provided, based on `asyncio` and `httpx`.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.AsyncInferenceClient</name><anchor>huggingface_hub.AsyncInferenceClient</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L114</source><parameters>[{"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "provider", "val": ": typing.Union[typing.Literal['black-forest-labs', 'cerebras', 'clarifai', 'cohere', 'fal-ai', 'featherless-ai', 'fireworks-ai', 'groq', 'hf-inference', 'hyperbolic', 'nebius', 'novita', 'nscale', 'openai', 'publicai', 'replicate', 'sambanova', 'scaleway', 'together', 'wavespeed', 'zai-org'], typing.Literal['auto'], NoneType] = None"}, {"name": "token", "val": ": typing.Optional[str] = None"}, {"name": "timeout", "val": ": typing.Optional[float] = None"}, {"name": "headers", "val": ": typing.Optional[dict[str, str]] = None"}, {"name": "cookies", "val": ": typing.Optional[dict[str, str]] = None"}, {"name": "bill_to", "val": ": typing.Optional[str] = None"}, {"name": "base_url", "val": ": typing.Optional[str] = None"}, {"name": "api_key", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **model** (`str`, `optional`) --
  The model to run inference with. Can be a model id hosted on the Hugging Face Hub, e.g. `meta-llama/Meta-Llama-3-8B-Instruct`
  or a URL to a deployed Inference Endpoint. Defaults to None, in which case a recommended model is
  automatically selected for the task.
  Note: for better compatibility with OpenAI's client, `model` has been aliased as `base_url`. Those 2
  arguments are mutually exclusive. If a URL is passed as `model` or `base_url` for chat completion, the `(/v1)/chat/completions` suffix path will be appended to the URL.
- **provider** (`str`, *optional*) --
  Name of the provider to use for inference. Can be `"black-forest-labs"`, `"cerebras"`, `"clarifai"`, `"cohere"`, `"fal-ai"`, `"featherless-ai"`, `"fireworks-ai"`, `"groq"`, `"hf-inference"`, `"hyperbolic"`, `"nebius"`, `"novita"`, `"nscale"`, `"openai"`, `"publicai"`, `"replicate"`, `"sambanova"`, `"scaleway"`, `"together"`, `"wavespeed"` or `"zai-org"`.
  Defaults to "auto" i.e. the first of the providers available for the model, sorted by the user's order in https://hf.co/settings/inference-providers.
  If model is a URL or `base_url` is passed, then `provider` is not used.
- **token** (`str`, *optional*) --
  Hugging Face token. Will default to the locally saved token if not provided.
  Note: for better compatibility with OpenAI's client, `token` has been aliased as `api_key`. Those 2
  arguments are mutually exclusive and have the exact same behavior.
- **timeout** (`float`, `optional`) --
  The maximum number of seconds to wait for a response from the server. Defaults to None, meaning it will loop until the server is available.
- **headers** (`dict[str, str]`, `optional`) --
  Additional headers to send to the server. By default only the authorization and user-agent headers are sent.
  Values in this dictionary will override the default values.
- **bill_to** (`str`, `optional`) --
  The billing account to use for the requests. By default the requests are billed on the user's account.
  Requests can only be billed to an organization the user is a member of, and which has subscribed to Enterprise Hub.
- **cookies** (`dict[str, str]`, `optional`) --
  Additional cookies to send to the server.
- **base_url** (`str`, `optional`) --
  Base URL to run inference. This is a duplicated argument from `model` to make [InferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient)
  follow the same pattern as `openai.OpenAI` client. Cannot be used if `model` is set. Defaults to None.
- **api_key** (`str`, `optional`) --
  Token to use for authentication. This is a duplicated argument from `token` to make [InferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient)
  follow the same pattern as `openai.OpenAI` client. Cannot be used if `token` is set. Defaults to None.</paramsdesc><paramgroups>0</paramgroups></docstring>

Initialize a new Inference Client.

[InferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient) aims to provide a unified experience to perform inference. The client can be used
seamlessly with either the (free) Inference API, self-hosted Inference Endpoints, or third-party Inference Providers.
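Because every method on the async client is a coroutine, many requests can be issued concurrently with `asyncio.gather`. A sketch of that pattern, with a stand-in coroutine replacing a real client call so it runs without a token or network access (in practice, substitute e.g. `await client.chat_completion(...)`):

```python
import asyncio

# Stand-in for an AsyncInferenceClient call; it only simulates network
# latency so the concurrency pattern is runnable without credentials.
async def fake_inference(prompt: str) -> str:
    await asyncio.sleep(0.01)
    return f"completion for: {prompt}"

async def main() -> list[str]:
    prompts = ["hello", "world", "hugging face"]
    # Issue all requests at once; gather preserves input order.
    return await asyncio.gather(*(fake_inference(p) for p in prompts))

results = asyncio.run(main())
```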





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>audio_classification</name><anchor>huggingface_hub.AsyncInferenceClient.audio_classification</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L317</source><parameters>[{"name": "audio", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}, {"name": "function_to_apply", "val": ": typing.Optional[ForwardRef('AudioClassificationOutputTransform')] = None"}]</parameters><paramsdesc>- **audio** (Union[str, Path, bytes, BinaryIO]) --
  The audio content to classify. It can be raw audio bytes, a local audio file, or a URL pointing to an
  audio file.
- **model** (`str`, *optional*) --
  The model to use for audio classification. Can be a model ID hosted on the Hugging Face Hub
  or a URL to a deployed Inference Endpoint. If not provided, the default recommended model for
  audio classification will be used.
- **top_k** (`int`, *optional*) --
  When specified, limits the output to the top K most probable classes.
- **function_to_apply** (`"AudioClassificationOutputTransform"`, *optional*) --
  The function to apply to the model outputs in order to retrieve the scores.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[AudioClassificationOutputElement]`</rettype><retdesc>List of [AudioClassificationOutputElement](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.AudioClassificationOutputElement) items containing the predicted labels and their confidence.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Perform audio classification on the provided audio content.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.audio_classification.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.audio_classification("audio.flac")
[
    AudioClassificationOutputElement(score=0.4976358711719513, label='hap'),
    AudioClassificationOutputElement(score=0.3677836060523987, label='neu'),
    ...
]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>audio_to_audio</name><anchor>huggingface_hub.AsyncInferenceClient.audio_to_audio</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L375</source><parameters>[{"name": "audio", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **audio** (Union[str, Path, bytes, BinaryIO]) --
  The audio content for the model. It can be raw audio bytes, a local audio file, or a URL pointing to an
  audio file.
- **model** (`str`, *optional*) --
  The model can be any model which takes an audio file and returns another audio file. Can be a model ID hosted on the Hugging Face Hub
  or a URL to a deployed Inference Endpoint. If not provided, the default recommended model for
  audio_to_audio will be used.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[AudioToAudioOutputElement]`</rettype><retdesc>A list of [AudioToAudioOutputElement](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.AudioToAudioOutputElement) items containing audios label, content-type, and audio content in blob.</retdesc><raises>- ``InferenceTimeoutError`` -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>``InferenceTimeoutError`` or `HfHubHTTPError`</raisederrors></docstring>

Perform multiple audio-to-audio tasks, depending on the model (e.g. speech enhancement, source separation).











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.audio_to_audio.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> audio_output = await client.audio_to_audio("audio.flac")
>>> for i, item in enumerate(audio_output):
...     with open(f"output_{i}.flac", "wb") as f:
...         f.write(item.blob)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>automatic_speech_recognition</name><anchor>huggingface_hub.AsyncInferenceClient.automatic_speech_recognition</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L428</source><parameters>[{"name": "audio", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "extra_body", "val": ": typing.Optional[dict] = None"}]</parameters><paramsdesc>- **audio** (Union[str, Path, bytes, BinaryIO]) --
  The content to transcribe. It can be raw audio bytes, local audio file, or a URL to an audio file.
- **model** (`str`, *optional*) --
  The model to use for ASR. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. If not provided, the default recommended model for ASR will be used.
- **extra_body** (`dict`, *optional*) --
  Additional provider-specific parameters to pass to the model. Refer to the provider's documentation
  for supported parameters.</paramsdesc><paramgroups>0</paramgroups><rettype>[AutomaticSpeechRecognitionOutput](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.AutomaticSpeechRecognitionOutput)</rettype><retdesc>An item containing the transcribed text and optionally the timestamp chunks.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Perform automatic speech recognition (ASR or audio-to-text) on the given audio content.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.automatic_speech_recognition.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> output = await client.automatic_speech_recognition("hello_world.flac")
>>> output.text
"hello world"
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>chat_completion</name><anchor>huggingface_hub.AsyncInferenceClient.chat_completion</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L555</source><parameters>[{"name": "messages", "val": ": list"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "stream", "val": ": bool = False"}, {"name": "frequency_penalty", "val": ": typing.Optional[float] = None"}, {"name": "logit_bias", "val": ": typing.Optional[list[float]] = None"}, {"name": "logprobs", "val": ": typing.Optional[bool] = None"}, {"name": "max_tokens", "val": ": typing.Optional[int] = None"}, {"name": "n", "val": ": typing.Optional[int] = None"}, {"name": "presence_penalty", "val": ": typing.Optional[float] = None"}, {"name": "response_format", "val": ": typing.Union[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputResponseFormatText, huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputResponseFormatJSONSchema, huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputResponseFormatJSONObject, NoneType] = None"}, {"name": "seed", "val": ": typing.Optional[int] = None"}, {"name": "stop", "val": ": typing.Optional[list[str]] = None"}, {"name": "stream_options", "val": ": typing.Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputStreamOptions] = None"}, {"name": "temperature", "val": ": typing.Optional[float] = None"}, {"name": "tool_choice", "val": ": typing.Union[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputToolChoiceClass, ForwardRef('ChatCompletionInputToolChoiceEnum'), NoneType] = None"}, {"name": "tool_prompt", "val": ": typing.Optional[str] = None"}, {"name": "tools", "val": ": typing.Optional[list[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputTool]] = None"}, {"name": "top_logprobs", "val": ": typing.Optional[int] = 
None"}, {"name": "top_p", "val": ": typing.Optional[float] = None"}, {"name": "extra_body", "val": ": typing.Optional[dict] = None"}]</parameters><paramsdesc>- **messages** (List of [ChatCompletionInputMessage](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.ChatCompletionInputMessage)) --
  Conversation history consisting of roles and content pairs.
- **model** (`str`, *optional*) --
  The model to use for chat-completion. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. If not provided, the default recommended model for chat-based text-generation will be used.
  See https://huggingface.co/tasks/text-generation for more details.
  If `model` is a model ID, it is passed to the server as the `model` parameter. If you want to define a
  custom URL while setting `model` in the request payload, you must set `base_url` when initializing [InferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient).
- **frequency_penalty** (`float`, *optional*) --
  Penalizes new tokens based on their existing frequency
  in the text so far. Range: [-2.0, 2.0]. Defaults to 0.0.
- **logit_bias** (`list[float]`, *optional*) --
  Adjusts the likelihood of specific tokens appearing in the generated output.
- **logprobs** (`bool`, *optional*) --
  Whether to return log probabilities of the output tokens or not. If true, returns the log
  probabilities of each output token returned in the content of message.
- **max_tokens** (`int`, *optional*) --
  Maximum number of tokens allowed in the response. Defaults to 100.
- **n** (`int`, *optional*) --
  The number of completions to generate for each prompt.
- **presence_penalty** (`float`, *optional*) --
  Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the
  text so far, increasing the model's likelihood to talk about new topics.
- **response_format** (`ChatCompletionInputResponseFormatText`, `ChatCompletionInputResponseFormatJSONSchema` or `ChatCompletionInputResponseFormatJSONObject`, *optional*) --
  The format of the response. Can be plain text, a free-form JSON object, or JSON constrained by a JSON schema.
- **seed** (`int`, *optional*) --
  Seed for reproducible generation. Defaults to None.
- **stop** (`list[str]`, *optional*) --
  Up to four strings which trigger the end of the response.
  Defaults to None.
- **stream** (`bool`, *optional*) --
  Enable realtime streaming of responses. Defaults to False.
- **stream_options** ([ChatCompletionInputStreamOptions](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.ChatCompletionInputStreamOptions), *optional*) --
  Options for streaming completions.
- **temperature** (`float`, *optional*) --
  Controls the randomness of the generations. Lower values yield
  less random completions. Range: [0, 2]. Defaults to 1.0.
- **top_logprobs** (`int`, *optional*) --
  An integer between 0 and 5 specifying the number of most likely tokens to return at each token
  position, each with an associated log probability. `logprobs` must be set to `true` if this
  parameter is used.
- **top_p** (`float`, *optional*) --
  Fraction of the probability mass to sample from (nucleus sampling): only the most likely tokens
  whose cumulative probability reaches `top_p` are considered. Must be between 0 and 1. Defaults to 1.0.
- **tool_choice** ([ChatCompletionInputToolChoiceClass](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.ChatCompletionInputToolChoiceClass) or `ChatCompletionInputToolChoiceEnum()`, *optional*) --
  The tool to use for the completion. Defaults to "auto".
- **tool_prompt** (`str`, *optional*) --
  A prompt to be appended before the tools.
- **tools** (List of [ChatCompletionInputTool](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.ChatCompletionInputTool), *optional*) --
  A list of tools the model may call. Currently, only functions are supported as a tool. Use this to
  provide a list of functions the model may generate JSON inputs for.
- **extra_body** (`dict`, *optional*) --
  Additional provider-specific parameters to pass to the model. Refer to the provider's documentation
  for supported parameters.</paramsdesc><paramgroups>0</paramgroups><rettype>[ChatCompletionOutput](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.ChatCompletionOutput) or Iterable of [ChatCompletionStreamOutput](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.ChatCompletionStreamOutput)</rettype><retdesc>Generated text returned from the server:
- if `stream=False`, the generated text is returned as a [ChatCompletionOutput](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.ChatCompletionOutput) (default).
- if `stream=True`, the generated text is returned token by token as a sequence of [ChatCompletionStreamOutput](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.ChatCompletionStreamOutput).</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

A method for completing conversations using a specified language model.

> [!TIP]
> The `client.chat_completion` method is aliased as `client.chat.completions.create` for compatibility with OpenAI's client.
> Inputs and outputs are strictly the same and using either syntax will yield the same results.
> Check out the [Inference guide](https://huggingface.co/docs/huggingface_hub/guides/inference#openai-compatibility)
> for more details about OpenAI's compatibility.

> [!TIP]
> You can pass provider-specific parameters to the model by using the `extra_body` argument.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.chat_completion.example">

Example:

```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> messages = [{"role": "user", "content": "What is the capital of France?"}]
>>> client = AsyncInferenceClient("meta-llama/Meta-Llama-3-8B-Instruct")
>>> await client.chat_completion(messages, max_tokens=100)
ChatCompletionOutput(
    choices=[
        ChatCompletionOutputComplete(
            finish_reason='eos_token',
            index=0,
            message=ChatCompletionOutputMessage(
                role='assistant',
                content='The capital of France is Paris.',
                name=None,
                tool_calls=None
            ),
            logprobs=None
        )
    ],
    created=1719907176,
    id='',
    model='meta-llama/Meta-Llama-3-8B-Instruct',
    object='text_completion',
    system_fingerprint='2.0.4-sha-f426a33',
    usage=ChatCompletionOutputUsage(
        completion_tokens=8,
        prompt_tokens=17,
        total_tokens=25
    )
)
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.chat_completion.example-2">

Example using streaming:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> messages = [{"role": "user", "content": "What is the capital of France?"}]
>>> client = AsyncInferenceClient("meta-llama/Meta-Llama-3-8B-Instruct")
>>> async for token in await client.chat_completion(messages, max_tokens=10, stream=True):
...     print(token)
ChatCompletionStreamOutput(choices=[ChatCompletionStreamOutputChoice(delta=ChatCompletionStreamOutputDelta(content='The', role='assistant'), index=0, finish_reason=None)], created=1710498504)
ChatCompletionStreamOutput(choices=[ChatCompletionStreamOutputChoice(delta=ChatCompletionStreamOutputDelta(content=' capital', role='assistant'), index=0, finish_reason=None)], created=1710498504)
(...)
ChatCompletionStreamOutput(choices=[ChatCompletionStreamOutputChoice(delta=ChatCompletionStreamOutputDelta(content=' may', role='assistant'), index=0, finish_reason=None)], created=1710498504)
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.chat_completion.example-3">

Example using OpenAI's syntax:
```py
# Must be run in an async context
# instead of `from openai import OpenAI`
from huggingface_hub import AsyncInferenceClient

# instead of `client = OpenAI(...)`
client = AsyncInferenceClient(
    base_url=...,
    api_key=...,
)

output = await client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Count to 10"},
    ],
    stream=True,
    max_tokens=1024,
)

async for chunk in output:
    print(chunk.choices[0].delta.content)
```

</ExampleCodeBlock>

Example using a third-party provider directly with extra (provider-specific) parameters. Usage will be billed on your Together AI account.
<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.chat_completion.example-4">

```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient(
...     provider="together",  # Use Together AI provider
...     api_key="<together_api_key>",  # Pass your Together API key directly
... )
>>> await client.chat_completion(
...     model="meta-llama/Meta-Llama-3-8B-Instruct",
...     messages=[{"role": "user", "content": "What is the capital of France?"}],
...     extra_body={"safety_model": "Meta-Llama/Llama-Guard-7b"},
... )
```

</ExampleCodeBlock>

Example using a third-party provider through Hugging Face Routing. Usage will be billed on your Hugging Face account.
<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.chat_completion.example-5">

```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient(
...     provider="sambanova",  # Use Sambanova provider
...     api_key="hf_...",  # Pass your HF token
... )
>>> await client.chat_completion(
...     model="meta-llama/Meta-Llama-3-8B-Instruct",
...     messages=[{"role": "user", "content": "What is the capital of France?"}],
... )
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.chat_completion.example-6">

Example using Image + Text as input:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient

# provide a remote URL
>>> image_url = "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
# or a base64-encoded image
>>> import base64
>>> image_path = "/path/to/image.jpeg"
>>> with open(image_path, "rb") as f:
...     base64_image = base64.b64encode(f.read()).decode("utf-8")
>>> image_url = f"data:image/jpeg;base64,{base64_image}"

>>> client = AsyncInferenceClient("meta-llama/Llama-3.2-11B-Vision-Instruct")
>>> output = await client.chat.completions.create(
...     messages=[
...         {
...             "role": "user",
...             "content": [
...                 {
...                     "type": "image_url",
...                     "image_url": {"url": image_url},
...                 },
...                 {
...                     "type": "text",
...                     "text": "Describe this image in one sentence.",
...                 },
...             ],
...         },
...     ],
... )
>>> output.choices[0].message.content
'The image depicts the iconic Statue of Liberty situated in New York Harbor, New York, on a clear day.'
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.chat_completion.example-7">

Example using tools:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient("meta-llama/Meta-Llama-3-70B-Instruct")
>>> messages = [
...     {
...         "role": "system",
...         "content": "Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous.",
...     },
...     {
...         "role": "user",
...         "content": "What's the weather like the next 3 days in San Francisco, CA?",
...     },
... ]
>>> tools = [
...     {
...         "type": "function",
...         "function": {
...             "name": "get_current_weather",
...             "description": "Get the current weather",
...             "parameters": {
...                 "type": "object",
...                 "properties": {
...                     "location": {
...                         "type": "string",
...                         "description": "The city and state, e.g. San Francisco, CA",
...                     },
...                     "format": {
...                         "type": "string",
...                         "enum": ["celsius", "fahrenheit"],
...                         "description": "The temperature unit to use. Infer this from the users location.",
...                     },
...                 },
...                 "required": ["location", "format"],
...             },
...         },
...     },
...     {
...         "type": "function",
...         "function": {
...             "name": "get_n_day_weather_forecast",
...             "description": "Get an N-day weather forecast",
...             "parameters": {
...                 "type": "object",
...                 "properties": {
...                     "location": {
...                         "type": "string",
...                         "description": "The city and state, e.g. San Francisco, CA",
...                     },
...                     "format": {
...                         "type": "string",
...                         "enum": ["celsius", "fahrenheit"],
...                         "description": "The temperature unit to use. Infer this from the users location.",
...                     },
...                     "num_days": {
...                         "type": "integer",
...                         "description": "The number of days to forecast",
...                     },
...                 },
...                 "required": ["location", "format", "num_days"],
...             },
...         },
...     },
... ]

>>> response = await client.chat_completion(
...     model="meta-llama/Meta-Llama-3-70B-Instruct",
...     messages=messages,
...     tools=tools,
...     tool_choice="auto",
...     max_tokens=500,
... )
>>> response.choices[0].message.tool_calls[0].function
ChatCompletionOutputFunctionDefinition(
    arguments={
        'location': 'San Francisco, CA',
        'format': 'fahrenheit',
        'num_days': 3
    },
    name='get_n_day_weather_forecast',
    description=None
)
```

</ExampleCodeBlock>
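A returned tool call can then be routed to a local implementation. The sketch below is illustrative only: the weather functions are hypothetical stand-ins, and `name`/`arguments` mirror the fields of `response.choices[0].message.tool_calls[0].function` shown above.

```python
# Hypothetical local implementations of the two tools declared in the example
def get_current_weather(location: str, format: str) -> str:
    return f"Current weather in {location} ({format})"

def get_n_day_weather_forecast(location: str, format: str, num_days: int) -> str:
    return f"{num_days}-day forecast for {location} ({format})"

TOOLS = {
    "get_current_weather": get_current_weather,
    "get_n_day_weather_forecast": get_n_day_weather_forecast,
}

def dispatch_tool_call(name: str, arguments: dict) -> str:
    # `name` and `arguments` come from the ChatCompletionOutputFunctionDefinition
    # returned in response.choices[0].message.tool_calls[0].function
    return TOOLS[name](**arguments)

result = dispatch_tool_call(
    "get_n_day_weather_forecast",
    {"location": "San Francisco, CA", "format": "fahrenheit", "num_days": 3},
)
```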

<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.chat_completion.example-8">

Example using response_format:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient("meta-llama/Meta-Llama-3-70B-Instruct")
>>> messages = [
...     {
...         "role": "user",
...         "content": "I saw a puppy a cat and a raccoon during my bike ride in the park. What did I see and when?",
...     },
... ]
>>> response_format = {
...     "type": "json",
...     "value": {
...         "properties": {
...             "location": {"type": "string"},
...             "activity": {"type": "string"},
...             "animals_seen": {"type": "integer", "minimum": 1, "maximum": 5},
...             "animals": {"type": "array", "items": {"type": "string"}},
...         },
...         "required": ["location", "activity", "animals_seen", "animals"],
...     },
... }
>>> response = await client.chat_completion(
...     messages=messages,
...     response_format=response_format,
...     max_tokens=500,
... )
>>> response.choices[0].message.content
'{"activity": "bike ride", "animals": ["puppy", "cat", "raccoon"], "animals_seen": 3, "location": "park"}'
```

</ExampleCodeBlock>
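When `response_format` constrains the output to JSON, the returned `content` is a JSON string that can be parsed with the standard library. The string below is a hand-written stand-in with the same shape as the schema in the example, not real model output:

```python
import json

# Hand-written stand-in for response.choices[0].message.content
content = '{"activity": "bike ride", "animals": ["puppy", "cat", "raccoon"], "animals_seen": 3, "location": "park"}'
data = json.loads(content)
print(data["animals_seen"], data["location"])
```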


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>close</name><anchor>huggingface_hub.AsyncInferenceClient.close</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L238</source><parameters>[]</parameters></docstring>
Close the client.

This method is automatically called when using the client as a context manager.
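A minimal lifecycle sketch (offline; no inference calls are made, so no token or network access is needed):

```python
import asyncio
from huggingface_hub import AsyncInferenceClient

async def main() -> bool:
    # Preferred: the async context manager calls close() automatically on exit.
    async with AsyncInferenceClient() as client:
        pass  # ... make inference calls here ...

    # Equivalent manual management of the client lifecycle:
    client = AsyncInferenceClient()
    try:
        pass  # ... make inference calls here ...
    finally:
        await client.close()
    return True

ok = asyncio.run(main())
```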


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>document_question_answering</name><anchor>huggingface_hub.AsyncInferenceClient.document_question_answering</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L963</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "question", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "doc_stride", "val": ": typing.Optional[int] = None"}, {"name": "handle_impossible_answer", "val": ": typing.Optional[bool] = None"}, {"name": "lang", "val": ": typing.Optional[str] = None"}, {"name": "max_answer_len", "val": ": typing.Optional[int] = None"}, {"name": "max_question_len", "val": ": typing.Optional[int] = None"}, {"name": "max_seq_len", "val": ": typing.Optional[int] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}, {"name": "word_boxes", "val": ": typing.Optional[list[typing.Union[list[float], str]]] = None"}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO]`) --
  The input image for the context. It can be raw bytes, an image file, or a URL to an online image.
- **question** (`str`) --
  Question to be answered.
- **model** (`str`, *optional*) --
  The model to use for the document question answering task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended document question answering model will be used.
  Defaults to None.
- **doc_stride** (`int`, *optional*) --
  If the words in the document are too long to fit with the question for the model, it will be split in
  several chunks with some overlap. This argument controls the size of that overlap.
- **handle_impossible_answer** (`bool`, *optional*) --
  Whether to accept "impossible" as an answer.
- **lang** (`str`, *optional*) --
  Language to use while running OCR. Defaults to English.
- **max_answer_len** (`int`, *optional*) --
  The maximum length of predicted answers (e.g., only answers with a shorter length are considered).
- **max_question_len** (`int`, *optional*) --
  The maximum length of the question after tokenization. It will be truncated if needed.
- **max_seq_len** (`int`, *optional*) --
  The maximum length of the total sentence (context + question) in tokens of each chunk passed to the
  model. The context will be split in several chunks (using doc_stride as overlap) if needed.
- **top_k** (`int`, *optional*) --
  The number of answers to return (will be chosen by order of likelihood). Can return less than top_k
  answers if there are not enough options available within the context.
- **word_boxes** (`list[Union[list[float], str]]`, *optional*) --
  A list of words and bounding boxes (normalized 0->1000). If provided, the inference will skip the OCR
  step and use the provided bounding boxes instead.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[DocumentQuestionAnsweringOutputElement]`</rettype><retdesc>a list of [DocumentQuestionAnsweringOutputElement](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.DocumentQuestionAnsweringOutputElement) items containing the predicted label, associated probability, word ids, and page number.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Answer questions on document images.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.document_question_answering.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.document_question_answering(image="https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png", question="What is the invoice number?")
[DocumentQuestionAnsweringOutputElement(answer='us-001', end=16, score=0.9999666213989258, start=16)]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>feature_extraction</name><anchor>huggingface_hub.AsyncInferenceClient.feature_extraction</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L1051</source><parameters>[{"name": "text", "val": ": str"}, {"name": "normalize", "val": ": typing.Optional[bool] = None"}, {"name": "prompt_name", "val": ": typing.Optional[str] = None"}, {"name": "truncate", "val": ": typing.Optional[bool] = None"}, {"name": "truncation_direction", "val": ": typing.Optional[typing.Literal['Left', 'Right']] = None"}, {"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **text** (*str*) --
  The text to embed.
- **model** (*str*, *optional*) --
  The model to use for the feature extraction task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended feature extraction model will be used.
  Defaults to None.
- **normalize** (*bool*, *optional*) --
  Whether to normalize the embeddings or not.
  Only available on servers powered by Text-Embedding-Inference.
- **prompt_name** (*str*, *optional*) --
  The name of the prompt to use for encoding. If not set, no prompt is applied.
  Must be a key in the *Sentence Transformers* configuration *prompts* dictionary.
  For example, if `prompt_name` is "query" and `prompts` is `{"query": "query: ", ...}`,
  then the sentence "What is the capital of France?" will be encoded as "query: What is the capital of France?"
  because the prompt text is prepended to any text to encode.
- **truncate** (*bool*, *optional*) --
  Whether to truncate the input or not.
  Only available on servers powered by Text-Embedding-Inference.
- **truncation_direction** (*Literal["Left", "Right"]*, *optional*) --
  Which side of the input should be truncated when *truncate=True* is passed.</paramsdesc><paramgroups>0</paramgroups><rettype>*np.ndarray*</rettype><retdesc>The embedding representing the input text as a float32 numpy array.</retdesc><raises>- [*InferenceTimeoutError*] -- 
  If the model is unavailable or the request times out.
- [*HfHubHTTPError*] -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[*InferenceTimeoutError*] or [*HfHubHTTPError*]</raisederrors></docstring>

Generate embeddings for a given text.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.feature_extraction.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.feature_extraction("Hi, who are you?")
array([[ 2.424802  ,  2.93384   ,  1.1750331 , ...,  1.240499, -0.13776633, -0.7889173 ],
[-0.42943227, -0.6364878 , -1.693462  , ...,  0.41978157, -2.4336355 ,  0.6162071 ],
...,
[ 0.28552425, -0.928395  , -1.2077185 , ...,  0.76810825, -2.1069427 ,  0.6236161 ]], dtype=float32)
```

</ExampleCodeBlock>
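Embeddings returned by `feature_extraction` are commonly compared with cosine similarity. A minimal numpy sketch; the vectors below are short placeholders, not real model output:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine of the angle between the two embedding vectors, in [-1, 1]
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder embeddings standing in for rows of client.feature_extraction(...) output
emb_a = np.array([0.1, 0.3, -0.2], dtype=np.float32)
emb_b = np.array([0.1, 0.25, -0.15], dtype=np.float32)
score = cosine_similarity(emb_a, emb_b)
```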


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fill_mask</name><anchor>huggingface_hub.AsyncInferenceClient.fill_mask</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L1125</source><parameters>[{"name": "text", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "targets", "val": ": typing.Optional[list[str]] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **text** (`str`) --
  The text to complete; it must contain the model's mask token (check the model card for its exact form, e.g. `[MASK]` or `<mask>`).
- **model** (`str`, *optional*) --
  The model to use for the fill mask task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended fill mask model will be used.
- **targets** (`list[str]`, *optional*) --
  When passed, the model will limit the scores to the passed targets instead of looking up in the whole
  vocabulary. If the provided targets are not in the model vocab, they will be tokenized and the first
  resulting token will be used (with a warning, and that might be slower).
- **top_k** (`int`, *optional*) --
  When passed, overrides the number of predictions to return.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[FillMaskOutputElement]`</rettype><retdesc>a list of [FillMaskOutputElement](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.FillMaskOutputElement) items containing the predicted label, associated
probability, token reference, and completed text.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Fill in a hole with a missing word (a token, to be precise).











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.fill_mask.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.fill_mask("The goal of life is <mask>.")
[
    FillMaskOutputElement(score=0.06897063553333282, token=11098, token_str=' happiness', sequence='The goal of life is happiness.'),
    FillMaskOutputElement(score=0.06554922461509705, token=45075, token_str=' immortality', sequence='The goal of life is immortality.')
]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_endpoint_info</name><anchor>huggingface_hub.AsyncInferenceClient.get_endpoint_info</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L3321</source><parameters>[{"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None.</paramsdesc><paramgroups>0</paramgroups><rettype>`dict[str, Any]`</rettype><retdesc>Information about the endpoint.</retdesc></docstring>

Get information about the deployed endpoint.

This endpoint is only available on endpoints powered by Text-Generation-Inference (TGI) or Text-Embedding-Inference (TEI).
Endpoints powered by `transformers` return an empty payload.







<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.get_endpoint_info.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient("meta-llama/Meta-Llama-3-70B-Instruct")
>>> await client.get_endpoint_info()
{
    'model_id': 'meta-llama/Meta-Llama-3-70B-Instruct',
    'model_sha': None,
    'model_dtype': 'torch.float16',
    'model_device_type': 'cuda',
    'model_pipeline_tag': None,
    'max_concurrent_requests': 128,
    'max_best_of': 2,
    'max_stop_sequences': 4,
    'max_input_length': 8191,
    'max_total_tokens': 8192,
    'waiting_served_ratio': 0.3,
    'max_batch_total_tokens': 1259392,
    'max_waiting_tokens': 20,
    'max_batch_size': None,
    'validation_workers': 32,
    'max_client_batch_size': 4,
    'version': '2.0.2',
    'sha': 'dccab72549635c7eb5ddb17f43f0b7cdff07c214',
    'docker_label': 'sha-dccab72'
}
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>health_check</name><anchor>huggingface_hub.AsyncInferenceClient.health_check</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L3381</source><parameters>[{"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **model** (`str`, *optional*) --
  URL of the Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None.</paramsdesc><paramgroups>0</paramgroups><rettype>`bool`</rettype><retdesc>True if everything is working fine.</retdesc></docstring>

Check the health of the deployed endpoint.

Health check is only available with Inference Endpoints powered by Text-Generation-Inference (TGI) or Text-Embedding-Inference (TEI).







<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.health_check.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient("https://jzgu0buei5.us-east-1.aws.endpoints.huggingface.cloud")
>>> await client.health_check()
True
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>image_classification</name><anchor>huggingface_hub.AsyncInferenceClient.image_classification</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L1182</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "function_to_apply", "val": ": typing.Optional[ForwardRef('ImageClassificationOutputTransform')] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO, PIL.Image.Image]`) --
  The image to classify. It can be raw bytes, an image file, a URL to an online image, or a PIL Image.
- **model** (`str`, *optional*) --
  The model to use for image classification. Can be a model ID hosted on the Hugging Face Hub or a URL to a
  deployed Inference Endpoint. If not provided, the default recommended model for image classification will be used.
- **function_to_apply** (`"ImageClassificationOutputTransform"`, *optional*) --
  The function to apply to the model outputs in order to retrieve the scores.
- **top_k** (`int`, *optional*) --
  When specified, limits the output to the top K most probable classes.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[ImageClassificationOutputElement]`</rettype><retdesc>a list of [ImageClassificationOutputElement](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.ImageClassificationOutputElement) items containing the predicted label and associated probability.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Perform image classification on the given image using the specified model.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.image_classification.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.image_classification("https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg")
[ImageClassificationOutputElement(label='Blenheim spaniel', score=0.9779096841812134), ...]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>image_segmentation</name><anchor>huggingface_hub.AsyncInferenceClient.image_segmentation</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L1233</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "mask_threshold", "val": ": typing.Optional[float] = None"}, {"name": "overlap_mask_area_threshold", "val": ": typing.Optional[float] = None"}, {"name": "subtask", "val": ": typing.Optional[ForwardRef('ImageSegmentationSubtask')] = None"}, {"name": "threshold", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO, PIL.Image.Image]`) --
  The image to segment. It can be raw bytes, an image file, a URL to an online image, or a PIL Image.
- **model** (`str`, *optional*) --
  The model to use for image segmentation. Can be a model ID hosted on the Hugging Face Hub or a URL to a
  deployed Inference Endpoint. If not provided, the default recommended model for image segmentation will be used.
- **mask_threshold** (`float`, *optional*) --
  Threshold to use when turning the predicted masks into binary values.
- **overlap_mask_area_threshold** (`float`, *optional*) --
  Mask overlap threshold to eliminate small, disconnected segments.
- **subtask** (`"ImageSegmentationSubtask"`, *optional*) --
  Segmentation task to be performed, depending on model capabilities.
- **threshold** (`float`, *optional*) --
  Probability threshold to filter out predicted masks.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[ImageSegmentationOutputElement]`</rettype><retdesc>A list of [ImageSegmentationOutputElement](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.ImageSegmentationOutputElement) items containing the segmented masks and associated attributes.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Perform image segmentation on the given image using the specified model.

> [!WARNING]
> You must have `PIL` installed if you want to work with images (`pip install Pillow`).











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.image_segmentation.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.image_segmentation("cat.jpg")
[ImageSegmentationOutputElement(score=0.989008, label='LABEL_184', mask=<PIL.PngImagePlugin.PngImageFile image mode=L size=400x300 at 0x7FDD2B129CC0>), ...]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>image_to_image</name><anchor>huggingface_hub.AsyncInferenceClient.image_to_image</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L1302</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "prompt", "val": ": typing.Optional[str] = None"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "guidance_scale", "val": ": typing.Optional[float] = None"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "target_size", "val": ": typing.Optional[huggingface_hub.inference._generated.types.image_to_image.ImageToImageTargetSize] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO, PIL.Image.Image]`) --
  The input image for translation. It can be raw bytes, an image file, a URL to an online image, or a PIL Image.
- **prompt** (`str`, *optional*) --
  The text prompt to guide the image generation.
- **negative_prompt** (`str`, *optional*) --
  One prompt to guide what NOT to include in image generation.
- **num_inference_steps** (`int`, *optional*) --
  For diffusion models. The number of denoising steps. More denoising steps usually lead to a higher
  quality image at the expense of slower inference.
- **guidance_scale** (`float`, *optional*) --
  For diffusion models. A higher guidance scale value encourages the model to generate images closely
  linked to the text prompt at the expense of lower image quality.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None.
- **target_size** (`ImageToImageTargetSize`, *optional*) --
  The size in pixels of the output image. This parameter is only supported by some providers and for
  specific models. It will be ignored when unsupported.</paramsdesc><paramgroups>0</paramgroups><rettype>`Image`</rettype><retdesc>The translated image.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Perform image-to-image translation using a specified model.

> [!WARNING]
> You must have `PIL` installed if you want to work with images (`pip install Pillow`).











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.image_to_image.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> image = await client.image_to_image("cat.jpg", prompt="turn the cat into a tiger")
>>> image.save("tiger.jpg")
```

</ExampleCodeBlock>



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>image_to_text</name><anchor>huggingface_hub.AsyncInferenceClient.image_to_text</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L1459</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO, PIL.Image.Image]`) --
  The input image to caption. It can be raw bytes, an image file, a URL to an online image, or a PIL Image.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImageToTextOutput](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.ImageToTextOutput)</rettype><retdesc>The generated text.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Takes an input image and returns text.

Models can have very different outputs depending on your use case (image captioning, optical character recognition
(OCR), Pix2Struct, etc.). Please have a look at the model card to learn more about a model's specificities.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.image_to_text.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.image_to_text("cat.jpg")
'a cat standing in a grassy field '
>>> await client.image_to_text("https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg")
'a dog laying on the grass next to a flower pot '
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>image_to_video</name><anchor>huggingface_hub.AsyncInferenceClient.image_to_video</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L1379</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "prompt", "val": ": typing.Optional[str] = None"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "num_frames", "val": ": typing.Optional[float] = None"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "guidance_scale", "val": ": typing.Optional[float] = None"}, {"name": "seed", "val": ": typing.Optional[int] = None"}, {"name": "target_size", "val": ": typing.Optional[huggingface_hub.inference._generated.types.image_to_video.ImageToVideoTargetSize] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO, PIL.Image.Image]`) --
  The input image to generate a video from. It can be raw bytes, an image file, a URL to an online image, or a PIL Image.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None.
- **prompt** (`str`, *optional*) --
  The text prompt to guide the video generation.
- **negative_prompt** (`str`, *optional*) --
  One prompt to guide what NOT to include in video generation.
- **num_frames** (`float`, *optional*) --
  The number of video frames to generate.
- **num_inference_steps** (`int`, *optional*) --
  For diffusion models. The number of denoising steps. More denoising steps usually lead to a higher
  quality image at the expense of slower inference.
- **guidance_scale** (`float`, *optional*) --
  For diffusion models. A higher guidance scale value encourages the model to generate videos closely
  linked to the text prompt at the expense of lower video quality.
- **seed** (`int`, *optional*) --
  The seed to use for the video generation.
- **target_size** (`ImageToVideoTargetSize`, *optional*) --
  The size in pixels of the output video frames.</paramsdesc><paramgroups>0</paramgroups><rettype>`bytes`</rettype><retdesc>The generated video.</retdesc></docstring>

Generate a video from an input image.







<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.image_to_video.example">

Examples:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> video = await client.image_to_video("cat.jpg", model="Wan-AI/Wan2.2-I2V-A14B", prompt="turn the cat into a tiger")
>>> with open("tiger.mp4", "wb") as f:
...     f.write(video)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>object_detection</name><anchor>huggingface_hub.AsyncInferenceClient.object_detection</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L1506</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "threshold", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO, PIL.Image.Image]`) --
  The image to detect objects on. It can be raw bytes, an image file, a URL to an online image, or a PIL Image.
- **model** (`str`, *optional*) --
  The model to use for object detection. Can be a model ID hosted on the Hugging Face Hub or a URL to a
  deployed Inference Endpoint. If not provided, the default recommended model for object detection (DETR) will be used.
- **threshold** (`float`, *optional*) --
  The probability necessary to make a prediction.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[ObjectDetectionOutputElement]`</rettype><retdesc>A list of [ObjectDetectionOutputElement](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.ObjectDetectionOutputElement) items containing the bounding boxes and associated attributes.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.
- `ValueError` -- 
  If the request output is not a list.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError` or `ValueError`</raisederrors></docstring>

Perform object detection on the given image using the specified model.

> [!WARNING]
> You must have `PIL` installed if you want to work with images (`pip install Pillow`).











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.object_detection.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.object_detection("people.jpg")
[ObjectDetectionOutputElement(score=0.9486683011054993, label='person', box=ObjectDetectionBoundingBox(xmin=59, ymin=39, xmax=420, ymax=510)), ...]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>question_answering</name><anchor>huggingface_hub.AsyncInferenceClient.question_answering</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L1555</source><parameters>[{"name": "question", "val": ": str"}, {"name": "context", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "align_to_words", "val": ": typing.Optional[bool] = None"}, {"name": "doc_stride", "val": ": typing.Optional[int] = None"}, {"name": "handle_impossible_answer", "val": ": typing.Optional[bool] = None"}, {"name": "max_answer_len", "val": ": typing.Optional[int] = None"}, {"name": "max_question_len", "val": ": typing.Optional[int] = None"}, {"name": "max_seq_len", "val": ": typing.Optional[int] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **question** (`str`) --
  Question to be answered.
- **context** (`str`) --
  The context of the question.
- **model** (`str`, *optional*) --
  The model to use for the question answering task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint.
- **align_to_words** (`bool`, *optional*) --
  Attempts to align the answer to real words. Improves quality on space-separated languages. Might hurt
  on non-space-separated languages (like Japanese or Chinese).
- **doc_stride** (`int`, *optional*) --
  If the context is too long to fit with the question for the model, it will be split into several chunks
  with some overlap. This argument controls the size of that overlap.
- **handle_impossible_answer** (`bool`, *optional*) --
  Whether to accept impossible as an answer.
- **max_answer_len** (`int`, *optional*) --
  The maximum length of predicted answers (e.g., only answers with a shorter length are considered).
- **max_question_len** (`int`, *optional*) --
  The maximum length of the question after tokenization. It will be truncated if needed.
- **max_seq_len** (`int`, *optional*) --
  The maximum length of the total sentence (context + question) in tokens of each chunk passed to the
  model. The context will be split into several chunks (using `doc_stride` as overlap) if needed.
- **top_k** (`int`, *optional*) --
  The number of answers to return (will be chosen by order of likelihood). Note that fewer than
  top_k answers are returned if there are not enough options available within the context.</paramsdesc><paramgroups>0</paramgroups><rettype>Union[`QuestionAnsweringOutputElement`, list[QuestionAnsweringOutputElement](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.QuestionAnsweringOutputElement)]</rettype><retdesc>When top_k is 1 or not provided, it returns a single `QuestionAnsweringOutputElement`.
When top_k is greater than 1, it returns a list of `QuestionAnsweringOutputElement`.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Retrieve the answer to a question from a given text.
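The interaction between `doc_stride` and `max_seq_len` can be sketched in plain Python (an illustrative chunker, not the actual server-side tokenization):

```py
def chunk_with_stride(tokens, max_seq_len, doc_stride):
    # Split a long token sequence into overlapping chunks: each chunk holds at
    # most max_seq_len tokens, and consecutive chunks overlap by doc_stride tokens.
    if max_seq_len <= doc_stride:
        raise ValueError("max_seq_len must exceed doc_stride")
    chunks, start = [], 0
    while True:
        chunks.append(tokens[start : start + max_seq_len])
        if start + max_seq_len >= len(tokens):
            break
        start += max_seq_len - doc_stride  # step forward, keeping doc_stride tokens of overlap
    return chunks

chunks = chunk_with_stride(list(range(10)), max_seq_len=4, doc_stride=2)
print(chunks)  # consecutive chunks share 2 tokens
```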











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.question_answering.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.question_answering(question="What's my name?", context="My name is Clara and I live in Berkeley.")
QuestionAnsweringOutputElement(answer='Clara', end=16, score=0.9326565265655518, start=11)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>sentence_similarity</name><anchor>huggingface_hub.AsyncInferenceClient.sentence_similarity</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L1640</source><parameters>[{"name": "sentence", "val": ": str"}, {"name": "other_sentences", "val": ": list"}, {"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **sentence** (`str`) --
  The main sentence to compare to others.
- **other_sentences** (`list[str]`) --
  The list of sentences to compare to.
- **model** (`str`, *optional*) --
  The model to use for the sentence similarity task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended sentence similarity model will be used.
  Defaults to None.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[float]`</rettype><retdesc>A list of similarity scores, one per sentence in `other_sentences`.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Compute the semantic similarity between a sentence and a list of other sentences by comparing their embeddings.
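Conceptually, the task embeds each sentence and scores it against the main sentence, typically with cosine similarity. A minimal illustration of that comparison, using made-up embedding vectors (not the service's actual implementation):

```py
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

main = [0.2, 0.8, 0.1]
others = [[0.19, 0.81, 0.09], [0.9, 0.05, 0.4]]
scores = [cosine_similarity(main, o) for o in others]
print(scores)  # the near-identical first vector scores close to 1.0
```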











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.sentence_similarity.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.sentence_similarity(
...     "Machine learning is so easy.",
...     other_sentences=[
...         "Deep learning is so straightforward.",
...         "This is so difficult, like rocket science.",
...         "I can't believe how much I struggled with this.",
...     ],
... )
[0.7785726189613342, 0.45876261591911316, 0.2906220555305481]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>summarization</name><anchor>huggingface_hub.AsyncInferenceClient.summarization</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L1694</source><parameters>[{"name": "text", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "clean_up_tokenization_spaces", "val": ": typing.Optional[bool] = None"}, {"name": "generate_parameters", "val": ": typing.Optional[dict[str, typing.Any]] = None"}, {"name": "truncation", "val": ": typing.Optional[ForwardRef('SummarizationTruncationStrategy')] = None"}]</parameters><paramsdesc>- **text** (`str`) --
  The input text to summarize.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. If not provided, the default recommended model for summarization will be used.
- **clean_up_tokenization_spaces** (`bool`, *optional*) --
  Whether to clean up the potential extra spaces in the text output.
- **generate_parameters** (`dict[str, Any]`, *optional*) --
  Additional parametrization of the text generation algorithm.
- **truncation** (`"SummarizationTruncationStrategy"`, *optional*) --
  The truncation strategy to use.</paramsdesc><paramgroups>0</paramgroups><rettype>[SummarizationOutput](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.SummarizationOutput)</rettype><retdesc>The generated summary text.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Generate a summary of a given text using a specified model.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.summarization.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.summarization("The Eiffel tower...")
SummarizationOutput(generated_text="The Eiffel tower is one of the most famous landmarks in the world....")
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>table_question_answering</name><anchor>huggingface_hub.AsyncInferenceClient.table_question_answering</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L1753</source><parameters>[{"name": "table", "val": ": dict"}, {"name": "query", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "padding", "val": ": typing.Optional[ForwardRef('Padding')] = None"}, {"name": "sequential", "val": ": typing.Optional[bool] = None"}, {"name": "truncation", "val": ": typing.Optional[bool] = None"}]</parameters><paramsdesc>- **table** (`dict[str, Any]`) --
  A table of data represented as a dict of lists, where keys are the column headers and each list holds
  that column's values; all lists must have the same length.
- **query** (`str`) --
  The query in plain text that you want to ask the table.
- **model** (`str`, *optional*) --
  The model to use for the table-question-answering task. Can be a model ID hosted on the Hugging Face
  Hub or a URL to a deployed Inference Endpoint.
- **padding** (`"Padding"`, *optional*) --
  Activates and controls padding.
- **sequential** (`bool`, *optional*) --
  Whether to do inference sequentially or as a batch. Batching is faster, but models like SQA require the
  inference to be done sequentially to extract relations within sequences, given their conversational
  nature.
- **truncation** (`bool`, *optional*) --
  Activates and controls truncation.</paramsdesc><paramgroups>0</paramgroups><rettype>[TableQuestionAnsweringOutputElement](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.TableQuestionAnsweringOutputElement)</rettype><retdesc>a table question answering output containing the answer, coordinates, cells and the aggregator used.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Retrieve the answer to a question from information given in a table.
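Because every column list must have the same length, it can be worth validating the table shape client-side before sending a request. A minimal sketch (`validate_table` is a hypothetical helper, not part of `huggingface_hub`):

```py
def validate_table(table):
    # Check the dict-of-lists table shape: every column list must have the same length
    lengths = {col: len(vals) for col, vals in table.items()}
    if len(set(lengths.values())) > 1:
        raise ValueError(f"Column lengths differ: {lengths}")
    return table

table = {"Repository": ["Transformers", "Datasets"], "Stars": ["36542", "4512"]}
validate_table(table)  # passes: both columns have 2 entries
```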











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.table_question_answering.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> query = "How many stars does the transformers repository have?"
>>> table = {"Repository": ["Transformers", "Datasets", "Tokenizers"], "Stars": ["36542", "4512", "3934"]}
>>> await client.table_question_answering(table, query, model="google/tapas-base-finetuned-wtq")
TableQuestionAnsweringOutputElement(answer='36542', coordinates=[[0, 1]], cells=['36542'], aggregator='AVERAGE')
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>tabular_classification</name><anchor>huggingface_hub.AsyncInferenceClient.tabular_classification</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L1816</source><parameters>[{"name": "table", "val": ": dict"}, {"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **table** (`dict[str, Any]`) --
  Set of attributes to classify.
- **model** (`str`, *optional*) --
  The model to use for the tabular classification task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended tabular classification model will be used.
  Defaults to None.</paramsdesc><paramgroups>0</paramgroups><rettype>`List`</rettype><retdesc>a list of labels, one per row in the initial table.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Classify a target category (a group) based on a set of attributes.
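If your data is row-oriented, it must first be converted into the column-oriented dict of string lists that this task expects. A small helper sketch (`rows_to_table` is a hypothetical name, not part of `huggingface_hub`):

```py
def rows_to_table(rows):
    # Convert a list of row dicts into a column-oriented dict of lists,
    # stringifying values to match the format used in the example below
    if not rows:
        return {}
    columns = rows[0].keys()
    return {col: [str(row[col]) for row in rows] for col in columns}

rows = [
    {"fixed_acidity": 7.4, "alcohol": 9.4},
    {"fixed_acidity": 7.8, "alcohol": 9.8},
]
print(rows_to_table(rows))
# {'fixed_acidity': ['7.4', '7.8'], 'alcohol': ['9.4', '9.8']}
```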











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.tabular_classification.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> table = {
...     "fixed_acidity": ["7.4", "7.8", "10.3"],
...     "volatile_acidity": ["0.7", "0.88", "0.32"],
...     "citric_acid": ["0", "0", "0.45"],
...     "residual_sugar": ["1.9", "2.6", "6.4"],
...     "chlorides": ["0.076", "0.098", "0.073"],
...     "free_sulfur_dioxide": ["11", "25", "5"],
...     "total_sulfur_dioxide": ["34", "67", "13"],
...     "density": ["0.9978", "0.9968", "0.9976"],
...     "pH": ["3.51", "3.2", "3.23"],
...     "sulphates": ["0.56", "0.68", "0.82"],
...     "alcohol": ["9.4", "9.8", "12.6"],
... }
>>> await client.tabular_classification(table=table, model="julien-c/wine-quality")
["5", "5", "5"]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>tabular_regression</name><anchor>huggingface_hub.AsyncInferenceClient.tabular_regression</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L1872</source><parameters>[{"name": "table", "val": ": dict"}, {"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **table** (`dict[str, Any]`) --
  Set of attributes stored in a table. The attributes used to predict the target can be both numerical and categorical.
- **model** (`str`, *optional*) --
  The model to use for the tabular regression task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended tabular regression model will be used.
  Defaults to None.</paramsdesc><paramgroups>0</paramgroups><rettype>`List`</rettype><retdesc>a list of predicted numerical target values.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Predict a numerical target value given a set of attributes/features in a table.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.tabular_regression.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> table = {
...     "Height": ["11.52", "12.48", "12.3778"],
...     "Length1": ["23.2", "24", "23.9"],
...     "Length2": ["25.4", "26.3", "26.5"],
...     "Length3": ["30", "31.2", "31.1"],
...     "Species": ["Bream", "Bream", "Bream"],
...     "Width": ["4.02", "4.3056", "4.6961"],
... }
>>> await client.tabular_regression(table, model="scikit-learn/Fish-Weight")
[110, 120, 130]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>text_classification</name><anchor>huggingface_hub.AsyncInferenceClient.text_classification</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L1923</source><parameters>[{"name": "text", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}, {"name": "function_to_apply", "val": ": typing.Optional[ForwardRef('TextClassificationOutputTransform')] = None"}]</parameters><paramsdesc>- **text** (`str`) --
  A string to be classified.
- **model** (`str`, *optional*) --
  The model to use for the text classification task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended text classification model will be used.
  Defaults to None.
- **top_k** (`int`, *optional*) --
  When specified, limits the output to the top K most probable classes.
- **function_to_apply** (`"TextClassificationOutputTransform"`, *optional*) --
  The function to apply to the model outputs in order to retrieve the scores.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[TextClassificationOutputElement]`</rettype><retdesc>a list of [TextClassificationOutputElement](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.TextClassificationOutputElement) items containing the predicted label and associated probability.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Perform text classification (e.g. sentiment-analysis) on the given text.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.text_classification.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.text_classification("I like you")
[
    TextClassificationOutputElement(label='POSITIVE', score=0.9998695850372314),
    TextClassificationOutputElement(label='NEGATIVE', score=0.0001304351753788069),
]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>text_generation</name><anchor>huggingface_hub.AsyncInferenceClient.text_generation</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L2132</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "details", "val": ": typing.Optional[bool] = None"}, {"name": "stream", "val": ": typing.Optional[bool] = None"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "adapter_id", "val": ": typing.Optional[str] = None"}, {"name": "best_of", "val": ": typing.Optional[int] = None"}, {"name": "decoder_input_details", "val": ": typing.Optional[bool] = None"}, {"name": "do_sample", "val": ": typing.Optional[bool] = None"}, {"name": "frequency_penalty", "val": ": typing.Optional[float] = None"}, {"name": "grammar", "val": ": typing.Optional[huggingface_hub.inference._generated.types.text_generation.TextGenerationInputGrammarType] = None"}, {"name": "max_new_tokens", "val": ": typing.Optional[int] = None"}, {"name": "repetition_penalty", "val": ": typing.Optional[float] = None"}, {"name": "return_full_text", "val": ": typing.Optional[bool] = None"}, {"name": "seed", "val": ": typing.Optional[int] = None"}, {"name": "stop", "val": ": typing.Optional[list[str]] = None"}, {"name": "stop_sequences", "val": ": typing.Optional[list[str]] = None"}, {"name": "temperature", "val": ": typing.Optional[float] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}, {"name": "top_n_tokens", "val": ": typing.Optional[int] = None"}, {"name": "top_p", "val": ": typing.Optional[float] = None"}, {"name": "truncate", "val": ": typing.Optional[int] = None"}, {"name": "typical_p", "val": ": typing.Optional[float] = None"}, {"name": "watermark", "val": ": typing.Optional[bool] = None"}]</parameters><paramsdesc>- **prompt** (`str`) --
  Input text.
- **details** (`bool`, *optional*) --
  By default, text_generation returns a string. Pass `details=True` if you want a detailed output (tokens,
  probabilities, seed, finish reason, etc.). Only available for models running with the
  `text-generation-inference` backend.
- **stream** (`bool`, *optional*) --
  By default, text_generation returns the full generated text. Pass `stream=True` if you want a stream of
  tokens to be returned. Only available for models running with the `text-generation-inference`
  backend.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None.
- **adapter_id** (`str`, *optional*) --
  LoRA adapter ID.
- **best_of** (`int`, *optional*) --
  Generate `best_of` sequences and return the one with the highest token logprobs.
- **decoder_input_details** (`bool`, *optional*) --
  Return the decoder input token logprobs and ids. You must set `details=True` as well for it to be taken
  into account. Defaults to `False`.
- **do_sample** (`bool`, *optional*) --
  Activate logits sampling.
- **frequency_penalty** (`float`, *optional*) --
  Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in
  the text so far, decreasing the model's likelihood to repeat the same line verbatim.
- **grammar** ([TextGenerationInputGrammarType](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.TextGenerationInputGrammarType), *optional*) --
  Grammar constraints. Can be either a JSONSchema or a regex.
- **max_new_tokens** (`int`, *optional*) --
  Maximum number of generated tokens. Defaults to 100.
- **repetition_penalty** (`float`, *optional*) --
  The parameter for repetition penalty. 1.0 means no penalty. See [this
  paper](https://arxiv.org/pdf/1909.05858.pdf) for more details.
- **return_full_text** (`bool`, *optional*) --
  Whether to prepend the prompt to the generated text.
- **seed** (`int`, *optional*) --
  Random sampling seed.
- **stop** (`list[str]`, *optional*) --
  Stop generating tokens if a member of `stop` is generated.
- **stop_sequences** (`list[str]`, *optional*) --
  Deprecated argument. Use `stop` instead.
- **temperature** (`float`, *optional*) --
  The value used to modulate the logits distribution.
- **top_n_tokens** (`int`, *optional*) --
  Return information about the `top_n_tokens` most likely tokens at each generation step, instead of
  just the sampled token.
- **top_k** (`int`, *optional*) --
  The number of highest probability vocabulary tokens to keep for top-k-filtering.
- **top_p** (`float`, *optional*) --
  If set to < 1, only the smallest set of most probable tokens with probabilities that add up to `top_p` or
  higher are kept for generation.
- **truncate** (`int`, *optional*) --
  Truncate input tokens to the given size.
- **typical_p** (`float`, *optional*) --
  Typical Decoding mass.
  See [Typical Decoding for Natural Language Generation](https://arxiv.org/abs/2202.00666) for more information.
- **watermark** (`bool`, *optional*) --
  Watermarking with [A Watermark for Large Language Models](https://arxiv.org/abs/2301.10226)</paramsdesc><paramgroups>0</paramgroups><rettype>`Union[str, TextGenerationOutput, AsyncIterable[str], AsyncIterable[TextGenerationStreamOutput]]`</rettype><retdesc>Generated text returned from the server:
- if `stream=False` and `details=False`, the generated text is returned as a `str` (default)
- if `stream=True` and `details=False`, the generated text is returned token by token as an `AsyncIterable[str]`
- if `stream=False` and `details=True`, the generated text is returned with more details as a [TextGenerationOutput](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.TextGenerationOutput)
- if `details=True` and `stream=True`, the generated text is returned token by token as an iterable of [TextGenerationStreamOutput](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.TextGenerationStreamOutput)</retdesc><raises>- ``ValidationError`` -- 
  If input values are not valid. No HTTP call is made to the server.
- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>``ValidationError`` or [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Given a prompt, generate the text that follows it.

> [!TIP]
> If you want to generate a response from chat messages, you should use the [InferenceClient.chat_completion()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.chat_completion) method.
> It accepts a list of messages instead of a single text prompt and handles the chat templating for you.
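
As a client-side illustration of how the `top_k` and `top_p` sampling parameters prune the next-token distribution (the vocabulary and probabilities below are invented for the sketch; the real filtering happens server-side during decoding):

```python
# Toy next-token distribution; values are invented for illustration.
probs = {"the": 0.45, "a": 0.25, "dog": 0.15, "cat": 0.10, "xyzzy": 0.05}

def top_k_filter(probs, k):
    # top_k: keep only the k most probable tokens.
    kept = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    return dict(kept)

def top_p_filter(probs, p):
    # top_p: keep the smallest set of most probable tokens whose
    # cumulative probability reaches p.
    kept, total = {}, 0.0
    for token, prob in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[token] = prob
        total += prob
        if total >= p:
            break
    return kept

print(sorted(top_k_filter(probs, 2)))    # ['a', 'the']
print(sorted(top_p_filter(probs, 0.8)))  # ['a', 'dog', 'the']
```

Lowering either parameter narrows the set of candidate tokens and makes sampling more deterministic.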











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.text_generation.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()

# Case 1: generate text
>>> await client.text_generation("The huggingface_hub library is ", max_new_tokens=12)
'100% open source and built to be easy to use.'

# Case 2: iterate over the generated tokens. Useful for large generation.
>>> async for token in await client.text_generation("The huggingface_hub library is ", max_new_tokens=12, stream=True):
...     print(token)
100
%
open
source
and
built
to
be
easy
to
use
.

# Case 3: get more details about the generation process.
>>> await client.text_generation("The huggingface_hub library is ", max_new_tokens=12, details=True)
TextGenerationOutput(
    generated_text='100% open source and built to be easy to use.',
    details=TextGenerationDetails(
        finish_reason='length',
        generated_tokens=12,
        seed=None,
        prefill=[
            TextGenerationPrefillOutputToken(id=487, text='The', logprob=None),
            TextGenerationPrefillOutputToken(id=53789, text=' hugging', logprob=-13.171875),
            (...)
            TextGenerationPrefillOutputToken(id=204, text=' ', logprob=-7.0390625)
        ],
        tokens=[
            TokenElement(id=1425, text='100', logprob=-1.0175781, special=False),
            TokenElement(id=16, text='%', logprob=-0.0463562, special=False),
            (...)
            TokenElement(id=25, text='.', logprob=-0.5703125, special=False)
        ],
        best_of_sequences=None
    )
)

# Case 4: iterate over the generated tokens with more details.
# Last object is more complete, containing the full generated text and the finish reason.
>>> async for details in await client.text_generation("The huggingface_hub library is ", max_new_tokens=12, details=True, stream=True):
...     print(details)
...
TextGenerationStreamOutput(token=TokenElement(id=1425, text='100', logprob=-1.0175781, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=16, text='%', logprob=-0.0463562, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=1314, text=' open', logprob=-1.3359375, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=3178, text=' source', logprob=-0.28100586, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=273, text=' and', logprob=-0.5961914, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=3426, text=' built', logprob=-1.9423828, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=271, text=' to', logprob=-1.4121094, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=314, text=' be', logprob=-1.5224609, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=1833, text=' easy', logprob=-2.1132812, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=271, text=' to', logprob=-0.08520508, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=745, text=' use', logprob=-0.39453125, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(
    id=25,
    text='.',
    logprob=-0.5703125,
    special=False),
    generated_text='100% open source and built to be easy to use.',
    details=TextGenerationStreamOutputStreamDetails(finish_reason='length', generated_tokens=12, seed=None)
)

# Case 5: generate constrained output using grammar
>>> response = await client.text_generation(
...     prompt="I saw a puppy a cat and a raccoon during my bike ride in the park",
...     model="HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1",
...     max_new_tokens=100,
...     repetition_penalty=1.3,
...     grammar={
...         "type": "json",
...         "value": {
...             "properties": {
...                 "location": {"type": "string"},
...                 "activity": {"type": "string"},
...                 "animals_seen": {"type": "integer", "minimum": 1, "maximum": 5},
...                 "animals": {"type": "array", "items": {"type": "string"}},
...             },
...             "required": ["location", "activity", "animals_seen", "animals"],
...         },
...     },
... )
>>> import json
>>> json.loads(response)
{
    "activity": "bike riding",
    "animals": ["puppy", "cat", "raccoon"],
    "animals_seen": 3,
    "location": "park"
}
```

</ExampleCodeBlock>
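
The `repetition_penalty` parameter discounts tokens the model has already produced. A toy sketch of the commonly used CTRL-style rule (positive logits divided by the penalty, negative logits multiplied by it, so repeats always become less likely when the penalty is above 1.0); this is illustrative only, not the server's implementation:

```python
def apply_repetition_penalty(logits, generated_ids, penalty):
    """Discount tokens that already appear in the generated sequence.

    CTRL-style rule: positive logits are divided by the penalty and
    negative logits are multiplied by it, so repeated tokens always
    become less likely when penalty > 1.0.
    """
    out = dict(logits)
    for tok in set(generated_ids):
        if out[tok] > 0:
            out[tok] /= penalty
        else:
            out[tok] *= penalty
    return out

logits = {"cat": 2.0, "dog": 1.0, "the": -0.5}
penalized = apply_repetition_penalty(logits, generated_ids=["cat", "the"], penalty=2.0)
print(penalized)  # {'cat': 1.0, 'dog': 1.0, 'the': -1.0}
```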


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>text_to_image</name><anchor>huggingface_hub.AsyncInferenceClient.text_to_image</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L2472</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "guidance_scale", "val": ": typing.Optional[float] = None"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "scheduler", "val": ": typing.Optional[str] = None"}, {"name": "seed", "val": ": typing.Optional[int] = None"}, {"name": "extra_body", "val": ": typing.Optional[dict[str, typing.Any]] = None"}]</parameters><paramsdesc>- **prompt** (`str`) --
  The prompt to generate an image from.
- **negative_prompt** (`str`, *optional*) --
  One prompt to guide what NOT to include in image generation.
- **height** (`int`, *optional*) --
  The height in pixels of the output image.
- **width** (`int`, *optional*) --
  The width in pixels of the output image.
- **num_inference_steps** (`int`, *optional*) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  prompt, but values too high may cause saturation and other artifacts.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. If not provided, the default recommended text-to-image model will be used.
  Defaults to None.
- **scheduler** (`str`, *optional*) --
  Override the scheduler with a compatible one.
- **seed** (`int`, *optional*) --
  Seed for the random number generator.
- **extra_body** (`dict[str, Any]`, *optional*) --
  Additional provider-specific parameters to pass to the model. Refer to the provider's documentation
  for supported parameters.</paramsdesc><paramgroups>0</paramgroups><rettype>`Image`</rettype><retdesc>The generated image.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Generate an image based on a given text using a specified model.

> [!WARNING]
> You must have `PIL` installed if you want to work with images (`pip install Pillow`).

> [!TIP]
> You can pass provider-specific parameters to the model by using the `extra_body` argument.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.text_to_image.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()

>>> image = await client.text_to_image("An astronaut riding a horse on the moon.")
>>> image.save("astronaut.png")

>>> image = await client.text_to_image(
...     "An astronaut riding a horse on the moon.",
...     negative_prompt="low resolution, blurry",
...     model="stabilityai/stable-diffusion-2-1",
... )
>>> image.save("better_astronaut.png")
```

</ExampleCodeBlock>

Example using a third-party provider directly. Usage will be billed on your fal.ai account.
<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.text_to_image.example-2">

```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="fal-ai",  # Use fal.ai provider
...     api_key="fal-ai-api-key",  # Pass your fal.ai API key
... )
>>> image = client.text_to_image(
...     "A majestic lion in a fantasy forest",
...     model="black-forest-labs/FLUX.1-schnell",
... )
>>> image.save("lion.png")
```

</ExampleCodeBlock>

Example using a third-party provider through Hugging Face Routing. Usage will be billed on your Hugging Face account.
<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.text_to_image.example-3">

```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="replicate",  # Use replicate provider
...     api_key="hf_...",  # Pass your HF token
... )
>>> image = client.text_to_image(
...     "An astronaut riding a horse on the moon.",
...     model="black-forest-labs/FLUX.1-dev",
... )
>>> image.save("astronaut.png")
```

</ExampleCodeBlock>

Example using Replicate provider with extra parameters
<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.text_to_image.example-4">

```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="replicate",  # Use replicate provider
...     api_key="hf_...",  # Pass your HF token
... )
>>> image = client.text_to_image(
...     "An astronaut riding a horse on the moon.",
...     model="black-forest-labs/FLUX.1-schnell",
...     extra_body={"output_quality": 100},
... )
>>> image.save("astronaut.png")
```

</ExampleCodeBlock>



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>text_to_speech</name><anchor>huggingface_hub.AsyncInferenceClient.text_to_speech</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L2710</source><parameters>[{"name": "text", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "do_sample", "val": ": typing.Optional[bool] = None"}, {"name": "early_stopping", "val": ": typing.Union[bool, ForwardRef('TextToSpeechEarlyStoppingEnum'), NoneType] = None"}, {"name": "epsilon_cutoff", "val": ": typing.Optional[float] = None"}, {"name": "eta_cutoff", "val": ": typing.Optional[float] = None"}, {"name": "max_length", "val": ": typing.Optional[int] = None"}, {"name": "max_new_tokens", "val": ": typing.Optional[int] = None"}, {"name": "min_length", "val": ": typing.Optional[int] = None"}, {"name": "min_new_tokens", "val": ": typing.Optional[int] = None"}, {"name": "num_beam_groups", "val": ": typing.Optional[int] = None"}, {"name": "num_beams", "val": ": typing.Optional[int] = None"}, {"name": "penalty_alpha", "val": ": typing.Optional[float] = None"}, {"name": "temperature", "val": ": typing.Optional[float] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}, {"name": "top_p", "val": ": typing.Optional[float] = None"}, {"name": "typical_p", "val": ": typing.Optional[float] = None"}, {"name": "use_cache", "val": ": typing.Optional[bool] = None"}, {"name": "extra_body", "val": ": typing.Optional[dict[str, typing.Any]] = None"}]</parameters><paramsdesc>- **text** (`str`) --
  The text to synthesize.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. If not provided, the default recommended text-to-speech model will be used.
  Defaults to None.
- **do_sample** (`bool`, *optional*) --
  Whether to use sampling instead of greedy decoding when generating new tokens.
- **early_stopping** (`Union[bool, "TextToSpeechEarlyStoppingEnum"]`, *optional*) --
  Controls the stopping condition for beam-based methods.
- **epsilon_cutoff** (`float`, *optional*) --
  If set to float strictly between 0 and 1, only tokens with a conditional probability greater than
  epsilon_cutoff will be sampled. In the paper, suggested values range from 3e-4 to 9e-4, depending on
  the size of the model. See [Truncation Sampling as Language Model
  Desmoothing](https://hf.co/papers/2210.15191) for more details.
- **eta_cutoff** (`float`, *optional*) --
  Eta sampling is a hybrid of locally typical sampling and epsilon sampling. If set to float strictly
  between 0 and 1, a token is only considered if it is greater than either eta_cutoff or sqrt(eta_cutoff)
  * exp(-entropy(softmax(next_token_logits))). The latter term is intuitively the expected next token
  probability, scaled by sqrt(eta_cutoff). In the paper, suggested values range from 3e-4 to 2e-3,
  depending on the size of the model. See [Truncation Sampling as Language Model
  Desmoothing](https://hf.co/papers/2210.15191) for more details.
- **max_length** (`int`, *optional*) --
  The maximum length (in tokens) of the generated text, including the input.
- **max_new_tokens** (`int`, *optional*) --
  The maximum number of tokens to generate. Takes precedence over max_length.
- **min_length** (`int`, *optional*) --
  The minimum length (in tokens) of the generated text, including the input.
- **min_new_tokens** (`int`, *optional*) --
  The minimum number of tokens to generate. Takes precedence over min_length.
- **num_beam_groups** (`int`, *optional*) --
  Number of groups to divide num_beams into in order to ensure diversity among different groups of beams.
  See [this paper](https://hf.co/papers/1610.02424) for more details.
- **num_beams** (`int`, *optional*) --
  Number of beams to use for beam search.
- **penalty_alpha** (`float`, *optional*) --
  The value balances the model confidence and the degeneration penalty in contrastive search decoding.
- **temperature** (`float`, *optional*) --
  The value used to modulate the next token probabilities.
- **top_k** (`int`, *optional*) --
  The number of highest probability vocabulary tokens to keep for top-k-filtering.
- **top_p** (`float`, *optional*) --
  If set to float < 1, only the smallest set of most probable tokens with probabilities that add up to
  top_p or higher are kept for generation.
- **typical_p** (`float`, *optional*) --
  Local typicality measures how similar the conditional probability of predicting a target token next is
  to the expected conditional probability of predicting a random token next, given the partial text
  already generated. If set to float < 1, the smallest set of the most locally typical tokens with
  probabilities that add up to typical_p or higher are kept for generation. See [this
  paper](https://hf.co/papers/2202.00666) for more details.
- **use_cache** (`bool`, *optional*) --
  Whether the model should use the past last key/values attentions to speed up decoding
- **extra_body** (`dict[str, Any]`, *optional*) --
  Additional provider-specific parameters to pass to the model. Refer to the provider's documentation
  for supported parameters.</paramsdesc><paramgroups>0</paramgroups><rettype>`bytes`</rettype><retdesc>The generated audio.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Synthesize audio of a voice pronouncing a given text.

> [!TIP]
> You can pass provider-specific parameters to the model by using the `extra_body` argument.
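
The `epsilon_cutoff` and `eta_cutoff` truncation rules described above can be sketched offline on a toy distribution (the tokens and probabilities are invented; the actual filtering happens inside the model's decoding loop):

```python
import math

def epsilon_filter(probs, epsilon):
    # epsilon_cutoff: drop tokens whose conditional probability does
    # not exceed the fixed epsilon threshold.
    return {t: p for t, p in probs.items() if p > epsilon}

def eta_filter(probs, eta):
    # eta_cutoff: the threshold adapts to the entropy of the
    # distribution: min(eta, sqrt(eta) * exp(-entropy)).
    entropy = -sum(p * math.log(p) for p in probs.values() if p > 0)
    threshold = min(eta, math.sqrt(eta) * math.exp(-entropy))
    return {t: p for t, p in probs.items() if p > threshold}

probs = {"hel": 0.6, "lo": 0.3, "##zz": 0.0996, "##xq": 0.0004}
print(sorted(epsilon_filter(probs, 9e-4)))  # ['##zz', 'hel', 'lo']
```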











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.text_to_speech.example">

Example:
```py
# Must be run in an async context
>>> from pathlib import Path
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()

>>> audio = await client.text_to_speech("Hello world")
>>> Path("hello_world.flac").write_bytes(audio)
```

</ExampleCodeBlock>

Example using a third-party provider directly. Usage will be billed on your Replicate account.
<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.text_to_speech.example-2">

```py
>>> from huggingface_hub import InferenceClient
>>> from pathlib import Path
>>> client = InferenceClient(
...     provider="replicate",
...     api_key="your-replicate-api-key",  # Pass your Replicate API key directly
... )
>>> audio = client.text_to_speech(
...     text="Hello world",
...     model="OuteAI/OuteTTS-0.3-500M",
... )
>>> Path("hello_world.flac").write_bytes(audio)
```

</ExampleCodeBlock>

Example using a third-party provider through Hugging Face Routing. Usage will be billed on your Hugging Face account.
<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.text_to_speech.example-3">

```py
>>> from huggingface_hub import InferenceClient
>>> from pathlib import Path
>>> client = InferenceClient(
...     provider="replicate",
...     api_key="hf_...",  # Pass your HF token
... )
>>> audio = client.text_to_speech(
...     text="Hello world",
...     model="OuteAI/OuteTTS-0.3-500M",
... )
>>> Path("hello_world.flac").write_bytes(audio)
```

</ExampleCodeBlock>

Example using Replicate provider with extra parameters
<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.text_to_speech.example-4">

```py
>>> from huggingface_hub import InferenceClient
>>> from pathlib import Path
>>> client = InferenceClient(
...     provider="replicate",  # Use replicate provider
...     api_key="hf_...",  # Pass your HF token
... )
>>> audio = client.text_to_speech(
...     "Hello, my name is Kokoro, an awesome text-to-speech model.",
...     model="hexgrad/Kokoro-82M",
...     extra_body={"voice": "af_nicole"},
... )
>>> Path("hello.flac").write_bytes(audio)
```

</ExampleCodeBlock>

Example of music generation using "YuE-s1-7B-anneal-en-cot" on fal.ai
<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.text_to_speech.example-5">

```py
>>> from huggingface_hub import InferenceClient
>>> lyrics = '''
... [verse]
... In the town where I was born
... Lived a man who sailed to sea
... And he told us of his life
... In the land of submarines
... So we sailed on to the sun
... 'Til we found a sea of green
... And we lived beneath the waves
... In our yellow submarine
...
... [chorus]
... We all live in a yellow submarine
... Yellow submarine, yellow submarine
... We all live in a yellow submarine
... Yellow submarine, yellow submarine
... '''
>>> genres = "pavarotti-style tenor voice"
>>> client = InferenceClient(
...     provider="fal-ai",
...     model="m-a-p/YuE-s1-7B-anneal-en-cot",
...     api_key=...,
... )
>>> audio = client.text_to_speech(lyrics, extra_body={"genres": genres})
>>> with open("output.mp3", "wb") as f:
...     f.write(audio)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>text_to_video</name><anchor>huggingface_hub.AsyncInferenceClient.text_to_video</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L2613</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "guidance_scale", "val": ": typing.Optional[float] = None"}, {"name": "negative_prompt", "val": ": typing.Optional[list[str]] = None"}, {"name": "num_frames", "val": ": typing.Optional[float] = None"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "seed", "val": ": typing.Optional[int] = None"}, {"name": "extra_body", "val": ": typing.Optional[dict[str, typing.Any]] = None"}]</parameters><paramsdesc>- **prompt** (`str`) --
  The prompt to generate a video from.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. If not provided, the default recommended text-to-video model will be used.
  Defaults to None.
- **guidance_scale** (`float`, *optional*) --
  A higher guidance scale value encourages the model to generate videos closely linked to the text
  prompt, but values too high may cause saturation and other artifacts.
- **negative_prompt** (`list[str]`, *optional*) --
  One or several prompts to guide what NOT to include in video generation.
- **num_frames** (`float`, *optional*) --
  The number of video frames to generate.
- **num_inference_steps** (`int`, *optional*) --
  The number of denoising steps. More denoising steps usually lead to a higher quality video at the
  expense of slower inference.
- **seed** (`int`, *optional*) --
  Seed for the random number generator.
- **extra_body** (`dict[str, Any]`, *optional*) --
  Additional provider-specific parameters to pass to the model. Refer to the provider's documentation
  for supported parameters.</paramsdesc><paramgroups>0</paramgroups><rettype>`bytes`</rettype><retdesc>The generated video.</retdesc></docstring>

Generate a video based on a given text.

> [!TIP]
> You can pass provider-specific parameters to the model by using the `extra_body` argument.







Example:

Example using a third-party provider directly. Usage will be billed on your fal.ai account.
<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.text_to_video.example">

```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient(
...     provider="fal-ai",  # Using fal.ai provider
...     api_key="fal-ai-api-key",  # Pass your fal.ai API key
... )
>>> video = await client.text_to_video(
...     "A majestic lion running in a fantasy forest",
...     model="tencent/HunyuanVideo",
... )
>>> with open("lion.mp4", "wb") as file:
...     file.write(video)
```

</ExampleCodeBlock>

Example using a third-party provider through Hugging Face Routing. Usage will be billed on your Hugging Face account.
<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.text_to_video.example-2">

```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient(
...     provider="replicate",  # Using replicate provider
...     api_key="hf_...",  # Pass your HF token
... )
>>> video = await client.text_to_video(
...     "A cat running in a park",
...     model="genmo/mochi-1-preview",
... )
>>> with open("cat.mp4", "wb") as file:
...     file.write(video)
```

</ExampleCodeBlock>



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>token_classification</name><anchor>huggingface_hub.AsyncInferenceClient.token_classification</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L2919</source><parameters>[{"name": "text", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "aggregation_strategy", "val": ": typing.Optional[ForwardRef('TokenClassificationAggregationStrategy')] = None"}, {"name": "ignore_labels", "val": ": typing.Optional[list[str]] = None"}, {"name": "stride", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **text** (`str`) --
  A string to be classified.
- **model** (`str`, *optional*) --
  The model to use for the token classification task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended token classification model will be used.
  Defaults to None.
- **aggregation_strategy** (`"TokenClassificationAggregationStrategy"`, *optional*) --
  The strategy used to fuse tokens based on model predictions.
- **ignore_labels** (`list[str]`, *optional*) --
  A list of labels to ignore.
- **stride** (`int`, *optional*) --
  The number of overlapping tokens between chunks when splitting the input text.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[TokenClassificationOutputElement]`</rettype><retdesc>List of [TokenClassificationOutputElement](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.TokenClassificationOutputElement) items containing the entity group, confidence score, word, start and end index.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Perform token classification on the given text.
Usually used for sentence parsing, either grammatical parsing or Named Entity Recognition (NER), to understand the keywords contained within the text.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.token_classification.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.token_classification("My name is Sarah Jessica Parker but you can call me Jessica")
[
    TokenClassificationOutputElement(
        entity_group='PER',
        score=0.9971321225166321,
        word='Sarah Jessica Parker',
        start=11,
        end=31,
    ),
    TokenClassificationOutputElement(
        entity_group='PER',
        score=0.9773476123809814,
        word='Jessica',
        start=52,
        end=59,
    )
]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>translation</name><anchor>huggingface_hub.AsyncInferenceClient.translation</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L2995</source><parameters>[{"name": "text", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "src_lang", "val": ": typing.Optional[str] = None"}, {"name": "tgt_lang", "val": ": typing.Optional[str] = None"}, {"name": "clean_up_tokenization_spaces", "val": ": typing.Optional[bool] = None"}, {"name": "truncation", "val": ": typing.Optional[ForwardRef('TranslationTruncationStrategy')] = None"}, {"name": "generate_parameters", "val": ": typing.Optional[dict[str, typing.Any]] = None"}]</parameters><paramsdesc>- **text** (`str`) --
  A string to be translated.
- **model** (`str`, *optional*) --
  The model to use for the translation task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended translation model will be used.
  Defaults to None.
- **src_lang** (`str`, *optional*) --
  The source language of the text. Required for models that can translate from multiple languages.
- **tgt_lang** (`str`, *optional*) --
  Target language to translate to. Required for models that can translate to multiple languages.
- **clean_up_tokenization_spaces** (`bool`, *optional*) --
  Whether to clean up the potential extra spaces in the text output.
- **truncation** (`"TranslationTruncationStrategy"`, *optional*) --
  The truncation strategy to use.
- **generate_parameters** (`dict[str, Any]`, *optional*) --
  Additional parametrization of the text generation algorithm.</paramsdesc><paramgroups>0</paramgroups><rettype>[TranslationOutput](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.TranslationOutput)</rettype><retdesc>The generated translated text.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.
- ``ValueError`` -- 
  If only one of the `src_lang` and `tgt_lang` arguments are provided.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError` or ``ValueError``</raisederrors></docstring>

Convert text from one language to another.

Check out https://huggingface.co/tasks/translation for more information on how to choose the best model for
your specific use case. The supported source and target languages usually depend on the model.
Some models can translate between multiple languages; when working with one of these models,
use the `src_lang` and `tgt_lang` arguments to specify the languages to translate from and to.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.translation.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.translation("My name is Wolfgang and I live in Berlin")
'Mein Name ist Wolfgang und ich lebe in Berlin.'
>>> await client.translation("My name is Wolfgang and I live in Berlin", model="Helsinki-NLP/opus-mt-en-fr")
TranslationOutput(translation_text="Je m'appelle Wolfgang et je vis à Berlin.")
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.translation.example-2">

Specifying languages:
```py
# Must be run in an async context
>>> await client.translation("My name is Sarah Jessica Parker but you can call me Jessica", model="facebook/mbart-large-50-many-to-many-mmt", src_lang="en_XX", tgt_lang="fr_XX")
"Mon nom est Sarah Jessica Parker mais vous pouvez m'appeler Jessica"
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>visual_question_answering</name><anchor>huggingface_hub.AsyncInferenceClient.visual_question_answering</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L3085</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "question", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO, PIL.Image.Image]`) --
  The input image for the context. It can be raw bytes, an image file, a URL to an online image, or a PIL Image.
- **question** (`str`) --
  Question to be answered.
- **model** (`str`, *optional*) --
  The model to use for the visual question answering task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended visual question answering model will be used.
  Defaults to None.
- **top_k** (`int`, *optional*) --
  The number of answers to return (will be chosen by order of likelihood). Note that fewer than
  `top_k` answers are returned if there are not enough options available within the context.
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>``InferenceTimeoutError`` or `HfHubHTTPError`</raisederrors></docstring>

Answer open-ended questions based on an image.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.visual_question_answering.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.visual_question_answering(
...     image="https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg",
...     question="What is the animal doing?"
... )
[
    VisualQuestionAnsweringOutputElement(score=0.778609573841095, answer='laying down'),
    VisualQuestionAnsweringOutputElement(score=0.6957435607910156, answer='sitting'),
]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>zero_shot_classification</name><anchor>huggingface_hub.AsyncInferenceClient.zero_shot_classification</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L3145</source><parameters>[{"name": "text", "val": ": str"}, {"name": "candidate_labels", "val": ": list"}, {"name": "multi_label", "val": ": typing.Optional[bool] = False"}, {"name": "hypothesis_template", "val": ": typing.Optional[str] = None"}, {"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **text** (`str`) --
  The input text to classify.
- **candidate_labels** (`list[str]`) --
  The set of possible class labels to classify the text into.
- **labels** (`list[str]`, *optional*) --
  (deprecated) List of strings. Each string is the verbalization of a possible label for the input text.
- **multi_label** (`bool`, *optional*) --
  Whether multiple candidate labels can be true. If false, the scores are normalized such that the sum of
  the label likelihoods for each sequence is 1. If true, the labels are considered independent and
  probabilities are normalized for each candidate.
- **hypothesis_template** (`str`, *optional*) --
  The sentence used in conjunction with `candidate_labels` to attempt the text classification by
  replacing the placeholder with the candidate labels.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. This parameter overrides the model defined at the instance level. If not provided, the default recommended zero-shot classification model will be used.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[ZeroShotClassificationOutputElement]`</rettype><retdesc>List of [ZeroShotClassificationOutputElement](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.ZeroShotClassificationOutputElement) items containing the predicted labels and their confidence.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Classify an input text against a set of candidate labels.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.zero_shot_classification.example">

Example with the default `multi_label=False`, then `multi_label=True`:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> text = (
...     "A new model offers an explanation for how the Galilean satellites formed around the solar system's"
...     " largest world. Konstantin Batygin did not set out to solve one of the solar system's most puzzling"
...     " mysteries when he went for a run up a hill in Nice, France."
... )
>>> labels = ["space & cosmos", "scientific discovery", "microbiology", "robots", "archeology"]
>>> await client.zero_shot_classification(text, labels)
[
    ZeroShotClassificationOutputElement(label='scientific discovery', score=0.7961668968200684),
    ZeroShotClassificationOutputElement(label='space & cosmos', score=0.18570658564567566),
    ZeroShotClassificationOutputElement(label='microbiology', score=0.00730885099619627),
    ZeroShotClassificationOutputElement(label='archeology', score=0.006258360575884581),
    ZeroShotClassificationOutputElement(label='robots', score=0.004559356719255447),
]
>>> await client.zero_shot_classification(text, labels, multi_label=True)
[
    ZeroShotClassificationOutputElement(label='scientific discovery', score=0.9829297661781311),
    ZeroShotClassificationOutputElement(label='space & cosmos', score=0.755190908908844),
    ZeroShotClassificationOutputElement(label='microbiology', score=0.0005462635890580714),
    ZeroShotClassificationOutputElement(label='archeology', score=0.00047131875180639327),
    ZeroShotClassificationOutputElement(label='robots', score=0.00030448526376858354),
]
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.zero_shot_classification.example-2">

Example with `multi_label=True` and a custom `hypothesis_template`:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.zero_shot_classification(
...    text="I really like our dinner and I'm very happy. I don't like the weather though.",
...    labels=["positive", "negative", "pessimistic", "optimistic"],
...    multi_label=True,
...    hypothesis_template="This text is {} towards the weather"
... )
[
    ZeroShotClassificationOutputElement(label='negative', score=0.9231801629066467),
    ZeroShotClassificationOutputElement(label='pessimistic', score=0.8760990500450134),
    ZeroShotClassificationOutputElement(label='optimistic', score=0.0008674879791215062),
    ZeroShotClassificationOutputElement(label='positive', score=0.0005250611575320363)
]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>zero_shot_image_classification</name><anchor>huggingface_hub.AsyncInferenceClient.zero_shot_image_classification</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L3253</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "candidate_labels", "val": ": list"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "hypothesis_template", "val": ": typing.Optional[str] = None"}, {"name": "labels", "val": ": list = None"}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO, PIL.Image.Image]`) --
  The input image to caption. It can be raw bytes, an image file, a URL to an online image, or a PIL Image.
- **candidate_labels** (`list[str]`) --
  The candidate labels for this image
- **labels** (`list[str]`, *optional*) --
  (deprecated) List of string possible labels. There must be at least 2 labels.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. This parameter overrides the model defined at the instance level. If not provided, the default recommended zero-shot image classification model will be used.
- **hypothesis_template** (`str`, *optional*) --
  The sentence used in conjunction with `candidate_labels` to attempt the image classification by
  replacing the placeholder with the candidate labels.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[ZeroShotImageClassificationOutputElement]`</rettype><retdesc>List of [ZeroShotImageClassificationOutputElement](/docs/huggingface_hub/main/en/package_reference/inference_types#huggingface_hub.ZeroShotImageClassificationOutputElement) items containing the predicted labels and their confidence.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Provide an input image and candidate text labels to predict which labels best describe the image.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.zero_shot_image_classification.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()

>>> await client.zero_shot_image_classification(
...     "https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg",
...     labels=["dog", "cat", "horse"],
... )
[ZeroShotImageClassificationOutputElement(label='dog', score=0.956),...]
```

</ExampleCodeBlock>


</div></div>

## InferenceTimeoutError[[huggingface_hub.InferenceTimeoutError]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.InferenceTimeoutError</name><anchor>huggingface_hub.InferenceTimeoutError</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L95</source><parameters>[{"name": "message", "val": ": str"}]</parameters></docstring>
Error raised when a model is unavailable or the request times out.

</div>

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/package_reference/inference_client.md" />

### Environment variables
https://huggingface.co/docs/huggingface_hub/main/package_reference/environment_variables.md

# Environment variables

`huggingface_hub` can be configured using environment variables.

If you are unfamiliar with environment variables, here are generic articles about them
[on macOS and Linux](https://linuxize.com/post/how-to-set-and-list-environment-variables-in-linux/)
and on [Windows](https://phoenixnap.com/kb/windows-set-environment-variable).

This page will guide you through all environment variables specific to `huggingface_hub`
and their meaning.

## Generic

### HF_INFERENCE_ENDPOINT

To configure the Inference API base URL. You might want to set this variable if your organization
points at an API gateway rather than directly at the Inference API.

Defaults to `"https://api-inference.huggingface.co"`.

### HF_HOME

To configure where `huggingface_hub` will locally store data. In particular, your token
and the cache will be stored in this folder.

Defaults to `"~/.cache/huggingface"` unless [XDG_CACHE_HOME](#xdgcachehome) is set.

### HF_HUB_CACHE

To configure where repositories from the Hub will be cached locally (models, datasets and
spaces).

Defaults to `"$HF_HOME/hub"` (e.g. `"~/.cache/huggingface/hub"` by default).

### HF_XET_CACHE

To configure where Xet chunks (byte ranges from files managed by Xet backend) are cached locally.

Defaults to `"$HF_HOME/xet"` (e.g. `"~/.cache/huggingface/xet"` by default).

### HF_ASSETS_CACHE

To configure where [assets](../guides/manage-cache#caching-assets) created by downstream libraries
will be cached locally. Those assets can be preprocessed data, files downloaded from GitHub,
logs,...

Defaults to `"$HF_HOME/assets"` (e.g. `"~/.cache/huggingface/assets"` by default).
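All three cache locations above derive their defaults from `HF_HOME`. As an illustration, the resolution logic described in this section can be sketched as follows (the `default_hf_paths` helper is hypothetical, not part of the library):

```py
import os

def default_hf_paths(environ):
    """Resolve default huggingface_hub cache locations from an environment
    mapping. Illustrative sketch only, not the library's actual code."""
    # HF_HOME falls back to $XDG_CACHE_HOME/huggingface, else ~/.cache/huggingface
    hf_home = environ.get("HF_HOME") or os.path.join(
        environ.get("XDG_CACHE_HOME", os.path.expanduser("~/.cache")),
        "huggingface",
    )
    return {
        "HF_HOME": hf_home,
        "HF_HUB_CACHE": environ.get("HF_HUB_CACHE") or os.path.join(hf_home, "hub"),
        "HF_XET_CACHE": environ.get("HF_XET_CACHE") or os.path.join(hf_home, "xet"),
        "HF_ASSETS_CACHE": environ.get("HF_ASSETS_CACHE") or os.path.join(hf_home, "assets"),
    }

paths = default_hf_paths({"HF_HOME": "/data/hf"})
print(paths["HF_HUB_CACHE"])  # /data/hf/hub
print(paths["HF_XET_CACHE"])  # /data/hf/xet
```

Setting any of the more specific variables overrides the `HF_HOME`-derived default for that location only.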

### HF_TOKEN

To configure the User Access Token to authenticate to the Hub. If set, this value will
overwrite the token stored on the machine (in either `$HF_TOKEN_PATH` or `"$HF_HOME/token"` if the former is not set).

For more details about authentication, check out [this section](../quick-start#authentication).

### HF_TOKEN_PATH

To configure where `huggingface_hub` should store the User Access Token. Defaults to `"$HF_HOME/token"` (e.g. `~/.cache/huggingface/token` by default).


### HF_HUB_VERBOSITY

Set the verbosity level of the `huggingface_hub`'s logger. Must be one of
`{"debug", "info", "warning", "error", "critical"}`.

Defaults to `"warning"`.

For more details, see [logging reference](../package_reference/utilities#huggingface_hub.utils.logging.get_verbosity).

### HF_HUB_ETAG_TIMEOUT

Integer value to define the number of seconds to wait for a server response when fetching the latest metadata from a repo before downloading a file. If the request times out, `huggingface_hub` falls back to the locally cached files. Setting a lower value speeds up the workflow for machines with a slow connection that already have files cached. A higher value allows the metadata call to succeed in more cases. Defaults to `10` (seconds).

### HF_HUB_DOWNLOAD_TIMEOUT

Integer value to define the number of seconds to wait for a server response when downloading a file. If the request times out, a `TimeoutError` is raised. Setting a higher value is beneficial on machines with a slow connection. A smaller value makes the process fail more quickly in case of a complete network outage. Defaults to `10` (seconds).
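Both timeouts follow the same pattern: an integer read from the environment, with a 10-second fallback. A minimal sketch of that lookup (the `get_timeout` helper is hypothetical, not the library's API):

```py
def get_timeout(environ, name, default=10):
    """Read an integer timeout (in seconds) from an environment mapping,
    falling back to the documented 10-second default. Hypothetical helper."""
    raw = environ.get(name)
    return int(raw) if raw else default

print(get_timeout({}, "HF_HUB_ETAG_TIMEOUT"))  # 10
print(get_timeout({"HF_HUB_DOWNLOAD_TIMEOUT": "30"}, "HF_HUB_DOWNLOAD_TIMEOUT"))  # 30
```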

## Xet 

### Other Xet environment variables
* [`HF_HUB_DISABLE_XET`](../package_reference/environment_variables#hfhubdisablexet)
* [`HF_XET_CACHE`](../package_reference/environment_variables#hfxetcache)
* [`HF_XET_HIGH_PERFORMANCE`](../package_reference/environment_variables#hfxethighperformance)
* [`HF_XET_RECONSTRUCT_WRITE_SEQUENTIALLY`](../package_reference/environment_variables#hfxetreconstructwritesequentially)

### HF_XET_CHUNK_CACHE_SIZE_BYTES

To set the size of the local Xet chunk cache. By default, the chunk cache is disabled. The chunk cache can be beneficial if you are generating new revisions of existing models or datasets, as it stores terms/chunks fetched from S3. A larger cache can better take advantage of deduplication across repos & files. To enable the chunk cache, set the environment variable to a large value (10GB or greater). However, in most cases when downloading or uploading new data, disabling the chunk cache yields better performance, which is why it is disabled by default.

Defaults to `0` (0 bytes, means chunk cache is disabled).

### HF_XET_SHARD_CACHE_SIZE_LIMIT

To set the size of the Xet shard cache locally. Increasing this will improve upload efficiency as chunks referenced in cached shard files are not re-uploaded. Note that the default soft limit is likely sufficient for most workloads. 

Defaults to `4000000000` (4GB).

### HF_XET_NUM_CONCURRENT_RANGE_GETS

To set the number of concurrent terms (range of bytes from within a xorb, often called a chunk) downloaded from S3 per file. Increasing this will help with the speed of downloading a file if there is network bandwidth available. 

Defaults to `16`.

## Boolean values

The following environment variables expect a boolean value. The variable will be considered
as `True` if its value is one of `{"1", "ON", "YES", "TRUE"}` (case-insensitive). Any other value
(or undefined) will be considered as `False`.
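This truthiness rule can be sketched in a few lines (the `env_to_bool` helper is hypothetical, not the library's internal function):

```py
def env_to_bool(value):
    """Interpret an environment variable value the way this page documents it:
    only "1", "ON", "YES", "TRUE" (case-insensitive) count as True."""
    if value is None:  # undefined variable
        return False
    return value.upper() in {"1", "ON", "YES", "TRUE"}

print(env_to_bool("yes"))    # True
print(env_to_bool("0"))      # False
print(env_to_bool("false"))  # False
print(env_to_bool(None))     # False
```

Note that, under this rule, values like `"0"` or `"off"` are simply "not true" rather than explicit falsy values.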

### HF_DEBUG

If set, the log level for the `huggingface_hub` logger is set to DEBUG. Additionally, all requests made by HF libraries will be logged as equivalent cURL commands for easier debugging and reproducibility.

### HF_HUB_OFFLINE

If set, no HTTP calls will be made to the Hugging Face Hub. If you try to download files, only the cached files will be accessed. If no cached file is detected, an error is raised. This is useful in case your network is slow and you don't care about having the absolute latest version of a file.

If `HF_HUB_OFFLINE=1` is set as environment variable and you call any method of [HfApi](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi), an [OfflineModeIsEnabled](/docs/huggingface_hub/main/en/package_reference/utilities#huggingface_hub.errors.OfflineModeIsEnabled) exception will be raised.

**Note:** even if the latest version of a file is cached, calling `hf_hub_download` still triggers an HTTP request to check that a new version is not available. Setting `HF_HUB_OFFLINE=1` will skip this call, which speeds up your loading time.
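A minimal sketch of this offline behavior (skip the network when the flag is set, and serve only cached files), using a hypothetical `fetch_file` function and in-memory cache rather than the library's actual internals:

```py
def fetch_file(repo_id, filename, cache, environ):
    """Illustrative sketch of the documented HF_HUB_OFFLINE behavior.
    Names and cache structure are hypothetical, not huggingface_hub's API."""
    offline = environ.get("HF_HUB_OFFLINE", "").upper() in {"1", "ON", "YES", "TRUE"}
    key = (repo_id, filename)
    if offline:
        if key in cache:
            return cache[key]  # serve cached copy, no HTTP call at all
        raise FileNotFoundError(f"{filename} not cached and HF_HUB_OFFLINE is set")
    # Online path: a real client would first make an HTTP call to check
    # whether a newer version exists before reusing the cache.
    cache[key] = f"downloaded:{filename}"
    return cache[key]

cache = {("gpt2", "config.json"): "cached:config.json"}
print(fetch_file("gpt2", "config.json", cache, {"HF_HUB_OFFLINE": "1"}))  # cached:config.json
```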

### HF_HUB_DISABLE_IMPLICIT_TOKEN

Authentication is not mandatory for every request to the Hub. For instance, requesting
details about the `"gpt2"` model does not require authentication. However, if a user is
[logged in](../package_reference/login), the default behavior is to always send the token
in order to ease the user experience (never getting an HTTP 401 Unauthorized) when accessing private or gated repositories. For privacy, you can
disable this behavior by setting `HF_HUB_DISABLE_IMPLICIT_TOKEN=1`. In this case,
the token will be sent only for "write-access" calls (example: creating a commit).

**Note:** disabling the implicit sending of the token can have weird side effects. For example,
if you want to list all models on the Hub, your private models will not be listed. You
would need to explicitly pass the `token=True` argument in your script.

### HF_HUB_DISABLE_PROGRESS_BARS

For time-consuming tasks, `huggingface_hub` displays a progress bar by default (using tqdm).
You can disable all the progress bars at once by setting `HF_HUB_DISABLE_PROGRESS_BARS=1`.

### HF_HUB_DISABLE_SYMLINKS_WARNING

If you are on a Windows machine, it is recommended to enable developer mode or to run
`huggingface_hub` in admin mode. Otherwise, `huggingface_hub` will not be able to create
symlinks in your cache system. You will still be able to execute any script, but your user experience
will be degraded as some huge files might end up duplicated on your hard drive. A warning
message is triggered to warn you about this behavior. Set `HF_HUB_DISABLE_SYMLINKS_WARNING=1`
to disable this warning.

For more details, see [cache limitations](../guides/manage-cache#limitations).

### HF_HUB_DISABLE_EXPERIMENTAL_WARNING

Some features of `huggingface_hub` are experimental. This means you can use them but we do not guarantee they will be
maintained in the future. In particular, we might update the API or behavior of such features without any deprecation
cycle. A warning message is triggered when using an experimental feature to warn you about it. If you're comfortable debugging any potential issues using an experimental feature, you can set `HF_HUB_DISABLE_EXPERIMENTAL_WARNING=1` to disable the warning.

If you are using an experimental feature, please let us know! Your feedback can help us design and improve it.

### HF_HUB_DISABLE_TELEMETRY

By default, some data is collected by HF libraries (`transformers`, `datasets`, `gradio`,..) to monitor usage, debug issues and help prioritize features.
Each library defines its own policy (i.e. which usage to monitor) but the core implementation happens in `huggingface_hub` (see `send_telemetry`).

You can set `HF_HUB_DISABLE_TELEMETRY=1` as environment variable to globally disable telemetry.

### HF_HUB_DISABLE_XET

Set to disable using `hf-xet`, even if it is available in your Python environment. Since `hf-xet` is used automatically whenever it is found, this variable lets you explicitly disable it. If you are disabling Xet, please consider [filing an issue and including the diagnostics information](https://github.com/huggingface/xet-core?tab=readme-ov-file#issues-diagnostics--debugging) to help us understand why Xet is not working for you.

### HF_HUB_ENABLE_HF_TRANSFER

> [!WARNING]
> This is a deprecated environment variable.
> Now that the Hugging Face Hub is fully powered by the Xet storage backend, all file transfers go through the `hf-xet` binary package. It provides efficient transfers using a chunk-based deduplication strategy and integrates seamlessly with `huggingface_hub`.
> This means `hf_transfer` can no longer be used. If you are interested in higher performance, check out the [`HF_XET_HIGH_PERFORMANCE` section](#hf_xet_high_performance).

### HF_XET_HIGH_PERFORMANCE

Set `hf-xet` to operate with increased settings to maximize network and disk resources on the machine. Enabling high performance mode will try to saturate the network bandwidth of this machine and utilize all CPU cores for parallel upload/download activity.

Consider this analogous to the legacy `HF_HUB_ENABLE_HF_TRANSFER=1` environment variable but applied to `hf-xet`.

To learn more about the benefits of Xet storage and `hf_xet`, refer to this [section](https://huggingface.co/docs/hub/xet/index).

### HF_XET_RECONSTRUCT_WRITE_SEQUENTIALLY

To have `hf-xet` write sequentially to local disk, instead of in parallel. `hf-xet` is designed for SSD/NVMe disks (using parallel writes with direct addressing). If you are using an HDD (spinning hard disk), setting this changes disk writes to be sequential instead of parallel. For slower hard disks, this can improve overall write performance, as the disk head does not need to seek between scattered parallel writes.

## Deprecated environment variables

In order to standardize all environment variables within the Hugging Face ecosystem, some variables have been marked as deprecated. Although they remain functional, they no longer take precedence over their replacements. The following table outlines the deprecated variables and their corresponding alternatives:


| Deprecated Variable         | Replacement        |
| --------------------------- | ------------------ |
| `HUGGINGFACE_HUB_CACHE`     | `HF_HUB_CACHE`     |
| `HUGGINGFACE_ASSETS_CACHE`  | `HF_ASSETS_CACHE`  |
| `HUGGING_FACE_HUB_TOKEN`    | `HF_TOKEN`         |
| `HUGGINGFACE_HUB_VERBOSITY` | `HF_HUB_VERBOSITY` |

## From external tools

Some environment variables are not specific to `huggingface_hub` but are still taken into account when they are set.

### DO_NOT_TRACK

Boolean value. Equivalent to `HF_HUB_DISABLE_TELEMETRY`. When set to true, telemetry is globally disabled in the Hugging Face Python ecosystem (`transformers`, `diffusers`, `gradio`, etc.). See https://consoledonottrack.com/ for more details.

### NO_COLOR

Boolean value. When set, `hf` CLI will not print any ANSI color.
See [no-color.org](https://no-color.org/).

### XDG_CACHE_HOME

Used only when `HF_HOME` is not set!

This is the default way to configure where [user-specific non-essential (cached) data should be written](https://wiki.archlinux.org/title/XDG_Base_Directory)
on Linux machines.

If `HF_HOME` is not set, the default home will be `"$XDG_CACHE_HOME/huggingface"` instead
of `"~/.cache/huggingface"`.
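The fallback logic can be sketched as follows (a simplified illustration, not the library's actual code):

```python
import os

def default_hf_home() -> str:
    """Resolve the Hugging Face home directory as described above."""
    if os.environ.get("HF_HOME"):
        return os.environ["HF_HOME"]  # HF_HOME always wins when set
    # Otherwise fall back to the XDG base directory, itself defaulting to ~/.cache
    xdg_cache = os.environ.get("XDG_CACHE_HOME") or os.path.expanduser("~/.cache")
    return os.path.join(xdg_cache, "huggingface")
```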


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/package_reference/environment_variables.md" />

### Mixins & serialization methods
https://huggingface.co/docs/huggingface_hub/main/package_reference/mixins.md

# Mixins & serialization methods

## Mixins

The `huggingface_hub` library offers a range of mixins that can be used as a parent class for your objects, in order to
provide simple uploading and downloading functions. Check out our [integration guide](../guides/integrations) to learn
how to integrate any ML framework with the Hub.

### Generic[[huggingface_hub.ModelHubMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ModelHubMixin</name><anchor>huggingface_hub.ModelHubMixin</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hub_mixin.py#L76</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **repo_url** (`str`, *optional*) --
  URL of the library repository. Used to generate model card.
- **paper_url** (`str`, *optional*) --
  URL of the library paper. Used to generate model card.
- **docs_url** (`str`, *optional*) --
  URL of the library documentation. Used to generate model card.
- **model_card_template** (`str`, *optional*) --
  Template of the model card. Used to generate model card. Defaults to a generic template.
- **language** (`str` or `list[str]`, *optional*) --
  Language supported by the library. Used to generate model card.
- **library_name** (`str`, *optional*) --
  Name of the library integrating ModelHubMixin. Used to generate model card.
- **license** (`str`, *optional*) --
  License of the library integrating ModelHubMixin. Used to generate model card.
  E.g: "apache-2.0"
- **license_name** (`str`, *optional*) --
  Name of the license. Used to generate model card.
  Only used if `license` is set to `other`.
  E.g: "coqui-public-model-license".
- **license_link** (`str`, *optional*) --
  URL to the license of the library integrating ModelHubMixin. Used to generate model card.
  Only used if `license` is set to `other` and `license_name` is set.
  E.g: "https://coqui.ai/cpml".
- **pipeline_tag** (`str`, *optional*) --
  Tag of the pipeline. Used to generate model card. E.g. "text-classification".
- **tags** (`list[str]`, *optional*) --
  Tags to be added to the model card. Used to generate model card. E.g. ["computer-vision"]
- **coders** (`dict[Type, tuple[Callable, Callable]]`, *optional*) --
  Dictionary of custom types and their encoders/decoders. Used to encode/decode arguments that are not
  jsonable by default. E.g. dataclasses, argparse.Namespace, OmegaConf, etc.</paramsdesc><paramgroups>0</paramgroups></docstring>

A generic mixin to integrate ANY machine learning framework with the Hub.

To integrate your framework, your model class must inherit from this class. Custom logic for saving/loading models
has to be implemented in `_from_pretrained` and `_save_pretrained`. [PyTorchModelHubMixin](/docs/huggingface_hub/main/en/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) is a good example
of mixin integration with the Hub. Check out our [integration guide](../guides/integrations) for more instructions.

When inheriting from [ModelHubMixin](/docs/huggingface_hub/main/en/package_reference/mixins#huggingface_hub.ModelHubMixin), you can define class-level attributes. These attributes are not passed to
`__init__` but to the class definition itself. This is useful to define metadata about the library integrating
[ModelHubMixin](/docs/huggingface_hub/main/en/package_reference/mixins#huggingface_hub.ModelHubMixin).

For more details on how to integrate the mixin with your library, check out the [integration guide](../guides/integrations).



<ExampleCodeBlock anchor="huggingface_hub.ModelHubMixin.example">

Example:

```python
>>> from huggingface_hub import ModelHubMixin

# Inherit from ModelHubMixin
>>> class MyCustomModel(
...         ModelHubMixin,
...         library_name="my-library",
...         tags=["computer-vision"],
...         repo_url="https://github.com/huggingface/my-cool-library",
...         paper_url="https://arxiv.org/abs/2304.12244",
...         docs_url="https://huggingface.co/docs/my-cool-library",
...         # ^ optional metadata to generate model card
...     ):
...     def __init__(self, size: int = 512, device: str = "cpu"):
...         # define how to initialize your model
...         super().__init__()
...         ...
...
...     def _save_pretrained(self, save_directory: Path) -> None:
...         # define how to serialize your model
...         ...
...
...     @classmethod
...     def from_pretrained(
...         cls: type[T],
...         pretrained_model_name_or_path: Union[str, Path],
...         *,
...         force_download: bool = False,
...         token: Optional[Union[str, bool]] = None,
...         cache_dir: Optional[Union[str, Path]] = None,
...         local_files_only: bool = False,
...         revision: Optional[str] = None,
...         **model_kwargs,
...     ) -> T:
...         # define how to deserialize your model
...         ...

>>> model = MyCustomModel(size=256, device="gpu")

# Save model weights to local directory
>>> model.save_pretrained("my-awesome-model")

# Push model weights to the Hub
>>> model.push_to_hub("my-awesome-model")

# Download and initialize weights from the Hub
>>> reloaded_model = MyCustomModel.from_pretrained("username/my-awesome-model")
>>> reloaded_model.size
256

# Model card has been correctly populated
>>> from huggingface_hub import ModelCard
>>> card = ModelCard.load("username/my-awesome-model")
>>> card.data.tags
["computer-vision", "model_hub_mixin"]
>>> card.data.library_name
"my-library"
```

</ExampleCodeBlock>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>_save_pretrained</name><anchor>huggingface_hub.ModelHubMixin._save_pretrained</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hub_mixin.py#L449</source><parameters>[{"name": "save_directory", "val": ": Path"}]</parameters><paramsdesc>- **save_directory** (`str` or `Path`) --
  Path to directory in which the model weights and configuration will be saved.</paramsdesc><paramgroups>0</paramgroups></docstring>

Overwrite this method in subclass to define how to save your model.
Check out our [integration guide](../guides/integrations) for instructions.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>_from_pretrained</name><anchor>huggingface_hub.ModelHubMixin._from_pretrained</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hub_mixin.py#L576</source><parameters>[{"name": "model_id", "val": ": str"}, {"name": "revision", "val": ": typing.Optional[str]"}, {"name": "cache_dir", "val": ": typing.Union[str, pathlib.Path, NoneType]"}, {"name": "force_download", "val": ": bool"}, {"name": "local_files_only", "val": ": bool"}, {"name": "token", "val": ": typing.Union[str, bool, NoneType]"}, {"name": "**model_kwargs", "val": ""}]</parameters><paramsdesc>- **model_id** (`str`) --
  ID of the model to load from the Hugging Face Hub (e.g. `bigscience/bloom`).
- **revision** (`str`, *optional*) --
  Revision of the model on the Hub. Can be a branch name, a git tag or any commit id. Defaults to the
  latest commit on `main` branch.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether to force (re-)downloading the model weights and configuration files from the Hub, overriding
  the existing cache.
- **token** (`str` or `bool`, *optional*) --
  The token to use as HTTP bearer authorization for remote files. By default, it will use the token
  cached when running `hf auth login`.
- **cache_dir** (`str`, `Path`, *optional*) --
  Path to the folder where cached files are stored.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  If `True`, avoid downloading the file and return the path to the local cached file if it exists.
- **model_kwargs** --
  Additional keyword arguments passed along to the [_from_pretrained()](/docs/huggingface_hub/main/en/package_reference/mixins#huggingface_hub.ModelHubMixin._from_pretrained) method.</paramsdesc><paramgroups>0</paramgroups></docstring>
Overwrite this method in subclass to define how to load your model from pretrained weights.

Use [hf_hub_download()](/docs/huggingface_hub/main/en/package_reference/file_download#huggingface_hub.hf_hub_download) or [snapshot_download()](/docs/huggingface_hub/main/en/package_reference/file_download#huggingface_hub.snapshot_download) to download files from the Hub before loading them. Most
args taken as input can be directly passed to those 2 methods. If needed, you can add more arguments to this
method using "model_kwargs". For example `PyTorchModelHubMixin._from_pretrained()` takes as input a `map_location`
parameter to set on which device the model should be loaded.

Check out our [integration guide](../guides/integrations) for more instructions.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_pretrained</name><anchor>huggingface_hub.ModelHubMixin.from_pretrained</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hub_mixin.py#L460</source><parameters>[{"name": "pretrained_model_name_or_path", "val": ": typing.Union[str, pathlib.Path]"}, {"name": "force_download", "val": ": bool = False"}, {"name": "token", "val": ": typing.Union[str, bool, NoneType] = None"}, {"name": "cache_dir", "val": ": typing.Union[str, pathlib.Path, NoneType] = None"}, {"name": "local_files_only", "val": ": bool = False"}, {"name": "revision", "val": ": typing.Optional[str] = None"}, {"name": "**model_kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path** (`str`, `Path`) --
  - Either the `model_id` (string) of a model hosted on the Hub, e.g. `bigscience/bloom`.
  - Or a path to a `directory` containing model weights saved using
    [save_pretrained](https://huggingface.co/docs/transformers/main/en/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `../path/to/my_model_directory/`.
- **revision** (`str`, *optional*) --
  Revision of the model on the Hub. Can be a branch name, a git tag or any commit id.
  Defaults to the latest commit on `main` branch.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether to force (re-)downloading the model weights and configuration files from the Hub, overriding
  the existing cache.
- **token** (`str` or `bool`, *optional*) --
  The token to use as HTTP bearer authorization for remote files. By default, it will use the token
  cached when running `hf auth login`.
- **cache_dir** (`str`, `Path`, *optional*) --
  Path to the folder where cached files are stored.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  If `True`, avoid downloading the file and return the path to the local cached file if it exists.
- **model_kwargs** (`dict`, *optional*) --
  Additional kwargs to pass to the model during initialization.</paramsdesc><paramgroups>0</paramgroups></docstring>

Download a model from the Hugging Face Hub and instantiate it.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>push_to_hub</name><anchor>huggingface_hub.ModelHubMixin.push_to_hub</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hub_mixin.py#L618</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "config", "val": ": typing.Union[dict, huggingface_hub.hub_mixin.DataclassInstance, NoneType] = None"}, {"name": "commit_message", "val": ": str = 'Push model using huggingface_hub.'"}, {"name": "private", "val": ": typing.Optional[bool] = None"}, {"name": "token", "val": ": typing.Optional[str] = None"}, {"name": "branch", "val": ": typing.Optional[str] = None"}, {"name": "create_pr", "val": ": typing.Optional[bool] = None"}, {"name": "allow_patterns", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "ignore_patterns", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "delete_patterns", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "model_card_kwargs", "val": ": typing.Optional[dict[str, typing.Any]] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  ID of the repository to push to (example: `"username/my-model"`).
- **config** (`dict` or `DataclassInstance`, *optional*) --
  Model configuration specified as a key/value dictionary or a dataclass instance.
- **commit_message** (`str`, *optional*) --
  Message to commit while pushing.
- **private** (`bool`, *optional*) --
  Whether the repository created should be private.
  If `None` (default), the repo will be public unless the organization's default is private.
- **token** (`str`, *optional*) --
  The token to use as HTTP bearer authorization for remote files. By default, it will use the token
  cached when running `hf auth login`.
- **branch** (`str`, *optional*) --
  The git branch on which to push the model. This defaults to `"main"`.
- **create_pr** (`boolean`, *optional*) --
  Whether or not to create a Pull Request from `branch` with that commit. Defaults to `False`.
- **allow_patterns** (`list[str]` or `str`, *optional*) --
  If provided, only files matching at least one pattern are pushed.
- **ignore_patterns** (`list[str]` or `str`, *optional*) --
  If provided, files matching any of the patterns are not pushed.
- **delete_patterns** (`list[str]` or `str`, *optional*) --
  If provided, remote files matching any of the patterns will be deleted from the repo.
- **model_card_kwargs** (`dict[str, Any]`, *optional*) --
  Additional arguments passed to the model card template to customize the model card.</paramsdesc><paramgroups>0</paramgroups><retdesc>The url of the commit of your model in the given repository.</retdesc></docstring>

Upload model checkpoint to the Hub.

Use `allow_patterns` and `ignore_patterns` to precisely filter which files should be pushed to the hub. Use
`delete_patterns` to delete existing remote files in the same commit. See [upload_folder()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_folder) reference for more
details.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_pretrained</name><anchor>huggingface_hub.ModelHubMixin.save_pretrained</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hub_mixin.py#L381</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, pathlib.Path]"}, {"name": "config", "val": ": typing.Union[dict, huggingface_hub.hub_mixin.DataclassInstance, NoneType] = None"}, {"name": "repo_id", "val": ": typing.Optional[str] = None"}, {"name": "push_to_hub", "val": ": bool = False"}, {"name": "model_card_kwargs", "val": ": typing.Optional[dict[str, typing.Any]] = None"}, {"name": "**push_to_hub_kwargs", "val": ""}]</parameters><paramsdesc>- **save_directory** (`str` or `Path`) --
  Path to directory in which the model weights and configuration will be saved.
- **config** (`dict` or `DataclassInstance`, *optional*) --
  Model configuration specified as a key/value dictionary or a dataclass instance.
- **push_to_hub** (`bool`, *optional*, defaults to `False`) --
  Whether or not to push your model to the Hugging Face Hub after saving it.
- **repo_id** (`str`, *optional*) --
  ID of your repository on the Hub. Used only if `push_to_hub=True`. Will default to the folder name if
  not provided.
- **model_card_kwargs** (`dict[str, Any]`, *optional*) --
  Additional arguments passed to the model card template to customize the model card.
- **push_to_hub_kwargs** --
  Additional key word arguments passed along to the [push_to_hub()](/docs/huggingface_hub/main/en/package_reference/mixins#huggingface_hub.ModelHubMixin.push_to_hub) method.</paramsdesc><paramgroups>0</paramgroups><rettype>`str` or `None`</rettype><retdesc>url of the commit on the Hub if `push_to_hub=True`, `None` otherwise.</retdesc></docstring>

Save weights in local directory.








</div></div>

### PyTorch[[huggingface_hub.PyTorchModelHubMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.PyTorchModelHubMixin</name><anchor>huggingface_hub.PyTorchModelHubMixin</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hub_mixin.py#L701</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

Implementation of [ModelHubMixin](/docs/huggingface_hub/main/en/package_reference/mixins#huggingface_hub.ModelHubMixin) to provide model Hub upload/download capabilities to PyTorch models. The model
is set in evaluation mode by default using `model.eval()` (dropout modules are deactivated). To train the model,
you should first set it back in training mode with `model.train()`.

See [ModelHubMixin](/docs/huggingface_hub/main/en/package_reference/mixins#huggingface_hub.ModelHubMixin) for more details on how to use the mixin.

<ExampleCodeBlock anchor="huggingface_hub.PyTorchModelHubMixin.example">

Example:

```python
>>> import torch
>>> import torch.nn as nn
>>> from huggingface_hub import PyTorchModelHubMixin

>>> class MyModel(
...         nn.Module,
...         PyTorchModelHubMixin,
...         library_name="keras-nlp",
...         repo_url="https://github.com/keras-team/keras-nlp",
...         paper_url="https://arxiv.org/abs/2304.12244",
...         docs_url="https://keras.io/keras_nlp/",
...         # ^ optional metadata to generate model card
...     ):
...     def __init__(self, hidden_size: int = 512, vocab_size: int = 30000, output_size: int = 4):
...         super().__init__()
...         self.hidden_size = hidden_size
...         self.param = nn.Parameter(torch.rand(hidden_size, vocab_size))
...         self.linear = nn.Linear(vocab_size, output_size)
...
...     def forward(self, x):
...         return self.linear(x + self.param)

>>> model = MyModel(hidden_size=256)

# Save model weights to local directory
>>> model.save_pretrained("my-awesome-model")

# Push model weights to the Hub
>>> model.push_to_hub("my-awesome-model")

# Download and initialize weights from the Hub
>>> model = MyModel.from_pretrained("username/my-awesome-model")
>>> model.hidden_size
256
```

</ExampleCodeBlock>


</div>

### Fastai[[huggingface_hub.from_pretrained_fastai]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.from_pretrained_fastai</name><anchor>huggingface_hub.from_pretrained_fastai</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/fastai_utils.py#L289</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "revision", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The location of the pickled fastai.Learner. It can be one of the following:
  - Hosted on the Hugging Face Hub. E.g.: 'espejelomar/fastai-pet-breeds-classification' or 'distilgpt2'.
    You can add a `revision` by appending `@` at the end of `repo_id`. E.g.: `dbmdz/bert-base-german-cased@main`.
    Revision is the specific model version to use. Since we use a git-based system for storing models and other
    artifacts on the Hugging Face Hub, it can be a branch name, a tag name, or a commit id.
  - Hosted locally. `repo_id` would be a directory containing the pickle and a pyproject.toml
    indicating the fastai and fastcore versions used to build the `fastai.Learner`. E.g.: `./my_model_directory/`.
- **revision** (`str`, *optional*) --
  Revision at which the repo's files are downloaded. See documentation of `snapshot_download`.</paramsdesc><paramgroups>0</paramgroups><retdesc>The `fastai.Learner` model in the `repo_id` repo.</retdesc></docstring>

Load pretrained fastai model from the Hub or from a local directory.






</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.push_to_hub_fastai</name><anchor>huggingface_hub.push_to_hub_fastai</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/fastai_utils.py#L334</source><parameters>[{"name": "learner", "val": ""}, {"name": "repo_id", "val": ": str"}, {"name": "commit_message", "val": ": str = 'Push FastAI model using huggingface_hub.'"}, {"name": "private", "val": ": typing.Optional[bool] = None"}, {"name": "token", "val": ": typing.Optional[str] = None"}, {"name": "config", "val": ": typing.Optional[dict] = None"}, {"name": "branch", "val": ": typing.Optional[str] = None"}, {"name": "create_pr", "val": ": typing.Optional[bool] = None"}, {"name": "allow_patterns", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "ignore_patterns", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "delete_patterns", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "api_endpoint", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **learner** (*Learner*) --
  The *fastai.Learner* you'd like to push to the Hub.
- **repo_id** (*str*) --
  The repository id for your model in the Hub in the format "namespace/repo_name". The namespace can be your individual account or an organization to which you have write access (for example, 'stanfordnlp/stanza-de').
- **commit_message** (*str*, *optional*) -- Message to commit while pushing. Defaults to "Push FastAI model using huggingface_hub.".
- **private** (*bool*, *optional*) --
  Whether or not the repository created should be private.
  If *None* (default), the repo will be public unless the organization's default is private.
- **token** (*str*, *optional*) --
  The Hugging Face account token to use as HTTP bearer authorization for remote files. If `None`, you will be prompted for a token.
- **config** (*dict*, *optional*) --
  Configuration object to be saved alongside the model weights.
- **branch** (*str*, *optional*) --
  The git branch on which to push the model. This defaults to
  the default branch as specified in your repository, which
  defaults to *"main"*.
- **create_pr** (*boolean*, *optional*) --
  Whether or not to create a Pull Request from *branch* with that commit.
  Defaults to *False*.
- **api_endpoint** (*str*, *optional*) --
  The API endpoint to use when pushing the model to the hub.
- **allow_patterns** (*list[str]* or *str*, *optional*) --
  If provided, only files matching at least one pattern are pushed.
- **ignore_patterns** (*list[str]* or *str*, *optional*) --
  If provided, files matching any of the patterns are not pushed.
- **delete_patterns** (*list[str]* or *str*, *optional*) --
  If provided, remote files matching any of the patterns will be deleted from the repo.</paramsdesc><paramgroups>0</paramgroups><retdesc>The url of the commit of your model in the given repository.</retdesc></docstring>

Upload learner checkpoint files to the Hub.

Use *allow_patterns* and *ignore_patterns* to precisely filter which files should be pushed to the Hub. Use
*delete_patterns* to delete existing remote files in the same commit. See the [upload_folder()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_folder) reference for more
details.





> [!TIP]
> Raises the following error:
>
>     - [*ValueError*](https://docs.python.org/3/library/exceptions.html#ValueError)
>       if the user is not logged in to the Hugging Face Hub.


</div>

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/package_reference/mixins.md" />

### Serialization
https://huggingface.co/docs/huggingface_hub/main/package_reference/serialization.md

# Serialization

`huggingface_hub` provides helpers to save and load ML model weights in a standardized way. This part of the library is still under development and will be improved in future releases. The goal is to harmonize how weights are saved and loaded across the Hub, both to remove code duplication across libraries and to establish consistent conventions.

## DDUF file format

DDUF is a file format designed for diffusion models. It allows saving all the information to run a model in a single file. This work is inspired by the [GGUF](https://github.com/ggerganov/ggml/blob/master/docs/gguf.md) format. `huggingface_hub` provides helpers to save and load DDUF files, ensuring the file format is respected.

> [!WARNING]
> This is a very early version of the parser. The API and implementation can evolve in the near future.
>
> The parser currently does very little validation. For more details about the file format, check out https://github.com/huggingface/huggingface.js/tree/main/packages/dduf.
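
On disk, a DDUF file is a standard ZIP archive whose entries are stored uncompressed, which is what makes memory-mapping and inspection with ordinary tools possible. A quick sketch with the standard library (the archive contents here are illustrative, not a valid model):

```python
import zipfile

# Build a tiny DDUF-like archive: a ZIP whose entries are STORED (no compression)
with zipfile.ZipFile("demo.dduf", "w", compression=zipfile.ZIP_STORED) as zf:
    zf.writestr("model_index.json", '{"_class_name": "DemoPipeline"}')
    zf.writestr("vae/config.json", "{}")

# Any ZIP tool can list the entries
with zipfile.ZipFile("demo.dduf") as zf:
    print(zf.namelist())  # → ['model_index.json', 'vae/config.json']
```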

### How to write a DDUF file?

Here is how to export a folder containing different parts of a diffusion model using [export_folder_as_dduf()](/docs/huggingface_hub/main/en/package_reference/serialization#huggingface_hub.export_folder_as_dduf):

```python
# Export a folder as a DDUF file
>>> from huggingface_hub import export_folder_as_dduf
>>> export_folder_as_dduf("FLUX.1-dev.dduf", folder_path="path/to/FLUX.1-dev")
```

For more flexibility, you can use [export_entries_as_dduf()](/docs/huggingface_hub/main/en/package_reference/serialization#huggingface_hub.export_entries_as_dduf) and pass a list of files to include in the final DDUF file:

```python
# Export specific files from the local disk.
>>> from huggingface_hub import export_entries_as_dduf
>>> export_entries_as_dduf(
...     dduf_path="stable-diffusion-v1-4-FP16.dduf",
...     entries=[ # List entries to add to the DDUF file (here, only FP16 weights)
...         ("model_index.json", "path/to/model_index.json"),
...         ("vae/config.json", "path/to/vae/config.json"),
...         ("vae/diffusion_pytorch_model.fp16.safetensors", "path/to/vae/diffusion_pytorch_model.fp16.safetensors"),
...         ("text_encoder/config.json", "path/to/text_encoder/config.json"),
...         ("text_encoder/model.fp16.safetensors", "path/to/text_encoder/model.fp16.safetensors"),
...         # ... add more entries here
...     ]
... )
```

The `entries` parameter also supports passing an iterable of paths or bytes. This can prove useful if you have a loaded model and want to serialize it directly into a DDUF file instead of serializing each component to disk first and then bundling them into a DDUF file. Here is an example of how a `StableDiffusionPipeline` can be serialized as DDUF:


```python
# Export state_dicts one by one from a loaded pipeline 
>>> from diffusers import DiffusionPipeline
>>> from typing import Generator, Tuple
>>> import safetensors.torch
>>> from huggingface_hub import export_entries_as_dduf
>>> pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
... # ... do some work with the pipeline

>>> def as_entries(pipe: DiffusionPipeline) -> Generator[Tuple[str, bytes], None, None]:
...     # Build a generator that yields the entries to add to the DDUF file.
...     # The first element of the tuple is the filename in the DDUF archive (must use UNIX separator!). The second element is the content of the file.
...     # Entries will be evaluated lazily when the DDUF file is created (only 1 entry is loaded in memory at a time)
...     yield "vae/config.json", pipe.vae.to_json_string().encode()
...     yield "vae/diffusion_pytorch_model.safetensors", safetensors.torch.save(pipe.vae.state_dict())
...     yield "text_encoder/config.json", pipe.text_encoder.config.to_json_string().encode()
...     yield "text_encoder/model.safetensors", safetensors.torch.save(pipe.text_encoder.state_dict())
...     # ... add more entries here

>>> export_entries_as_dduf(dduf_path="stable-diffusion-v1-4.dduf", entries=as_entries(pipe))
```

**Note:** in practice, `diffusers` provides a method to directly serialize a pipeline in a DDUF file. The snippet above is only meant as an example.

### How to read a DDUF file?

```python
>>> import json
>>> import safetensors.torch
>>> from huggingface_hub import read_dduf_file

# Read DDUF metadata
>>> dduf_entries = read_dduf_file("FLUX.1-dev.dduf")

# Returns a mapping filename <> DDUFEntry
>>> dduf_entries["model_index.json"]
DDUFEntry(filename='model_index.json', offset=66, length=587)

# Load model index as JSON
>>> json.loads(dduf_entries["model_index.json"].read_text())
{'_class_name': 'FluxPipeline', '_diffusers_version': '0.32.0.dev0', '_name_or_path': 'black-forest-labs/FLUX.1-dev', 'scheduler': ['diffusers', 'FlowMatchEulerDiscreteScheduler'], 'text_encoder': ['transformers', 'CLIPTextModel'], 'text_encoder_2': ['transformers', 'T5EncoderModel'], 'tokenizer': ['transformers', 'CLIPTokenizer'], 'tokenizer_2': ['transformers', 'T5TokenizerFast'], 'transformer': ['diffusers', 'FluxTransformer2DModel'], 'vae': ['diffusers', 'AutoencoderKL']}

# Load VAE weights using safetensors
>>> with dduf_entries["vae/diffusion_pytorch_model.safetensors"].as_mmap() as mm:
...     state_dict = safetensors.torch.load(mm)
```

### Helpers[[huggingface_hub.export_entries_as_dduf]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.export_entries_as_dduf</name><anchor>huggingface_hub.export_entries_as_dduf</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/serialization/_dduf.py#L159</source><parameters>[{"name": "dduf_path", "val": ": typing.Union[str, os.PathLike]"}, {"name": "entries", "val": ": typing.Iterable[tuple[str, typing.Union[str, pathlib.Path, bytes]]]"}]</parameters><paramsdesc>- **dduf_path** (`str` or `os.PathLike`) --
  The path to the DDUF file to write.
- **entries** (`Iterable[tuple[str, Union[str, Path, bytes]]]`) --
  An iterable of entries to write in the DDUF file. Each entry is a tuple with the filename and the content.
  The filename should be the path to the file in the DDUF archive.
  The content can be a string or a `pathlib.Path` representing a path to a file on the local disk, or the content directly as bytes.</paramsdesc><paramgroups>0</paramgroups><raises>- `DDUFExportError` -- If anything goes wrong during the export (e.g. invalid entry name, missing 'model_index.json', etc.).</raises><raisederrors>-</raisederrors></docstring>
Write a DDUF file from an iterable of entries.

This is a lower-level helper than [export_folder_as_dduf()](/docs/huggingface_hub/main/en/package_reference/serialization#huggingface_hub.export_folder_as_dduf) that allows more flexibility when serializing data.
In particular, you don't need to save the data on disk before exporting it to the DDUF file.







<ExampleCodeBlock anchor="huggingface_hub.export_entries_as_dduf.example">

Example:
```python
# Export specific files from the local disk.
>>> from huggingface_hub import export_entries_as_dduf
>>> export_entries_as_dduf(
...     dduf_path="stable-diffusion-v1-4-FP16.dduf",
...     entries=[ # List entries to add to the DDUF file (here, only FP16 weights)
...         ("model_index.json", "path/to/model_index.json"),
...         ("vae/config.json", "path/to/vae/config.json"),
...         ("vae/diffusion_pytorch_model.fp16.safetensors", "path/to/vae/diffusion_pytorch_model.fp16.safetensors"),
...         ("text_encoder/config.json", "path/to/text_encoder/config.json"),
...         ("text_encoder/model.fp16.safetensors", "path/to/text_encoder/model.fp16.safetensors"),
...         # ... add more entries here
...     ]
... )
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.export_entries_as_dduf.example-2">

```python
# Export state_dicts one by one from a loaded pipeline
>>> from diffusers import DiffusionPipeline
>>> from typing import Generator, Tuple
>>> import safetensors.torch
>>> from huggingface_hub import export_entries_as_dduf
>>> pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
... # ... do some work with the pipeline

>>> def as_entries(pipe: DiffusionPipeline) -> Generator[tuple[str, bytes], None, None]:
...     # Build a generator that yields the entries to add to the DDUF file.
...     # The first element of the tuple is the filename in the DDUF archive (must use UNIX separator!). The second element is the content of the file.
...     # Entries will be evaluated lazily when the DDUF file is created (only 1 entry is loaded in memory at a time)
...     yield "vae/config.json", pipe.vae.to_json_string().encode()
...     yield "vae/diffusion_pytorch_model.safetensors", safetensors.torch.save(pipe.vae.state_dict())
...     yield "text_encoder/config.json", pipe.text_encoder.config.to_json_string().encode()
...     yield "text_encoder/model.safetensors", safetensors.torch.save(pipe.text_encoder.state_dict())
...     # ... add more entries here

>>> export_entries_as_dduf(dduf_path="stable-diffusion-v1-4.dduf", entries=as_entries(pipe))
```

</ExampleCodeBlock>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.export_folder_as_dduf</name><anchor>huggingface_hub.export_folder_as_dduf</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/serialization/_dduf.py#L250</source><parameters>[{"name": "dduf_path", "val": ": typing.Union[str, os.PathLike]"}, {"name": "folder_path", "val": ": typing.Union[str, os.PathLike]"}]</parameters><paramsdesc>- **dduf_path** (`str` or `os.PathLike`) --
  The path to the DDUF file to write.
- **folder_path** (`str` or `os.PathLike`) --
  The path to the folder containing the diffusion model.</paramsdesc><paramgroups>0</paramgroups></docstring>

Export a folder as a DDUF file.

Uses [export_entries_as_dduf()](/docs/huggingface_hub/main/en/package_reference/serialization#huggingface_hub.export_entries_as_dduf) under the hood.



<ExampleCodeBlock anchor="huggingface_hub.export_folder_as_dduf.example">

Example:
```python
>>> from huggingface_hub import export_folder_as_dduf
>>> export_folder_as_dduf(dduf_path="FLUX.1-dev.dduf", folder_path="path/to/FLUX.1-dev")
```

</ExampleCodeBlock>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.read_dduf_file</name><anchor>huggingface_hub.read_dduf_file</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/serialization/_dduf.py#L90</source><parameters>[{"name": "dduf_path", "val": ": typing.Union[os.PathLike, str]"}]</parameters><paramsdesc>- **dduf_path** (`str` or `os.PathLike`) --
  The path to the DDUF file to read.</paramsdesc><paramgroups>0</paramgroups><rettype>`dict[str, DDUFEntry]`</rettype><retdesc>A dictionary of [DDUFEntry](/docs/huggingface_hub/main/en/package_reference/serialization#huggingface_hub.DDUFEntry) indexed by filename.</retdesc><raises>- `DDUFCorruptedFileError` -- If the DDUF file is corrupted (i.e. doesn't follow the DDUF format).</raises><raisederrors>``DDUFCorruptedFileError``</raisederrors></docstring>

Read a DDUF file and return a dictionary of entries.

Only the metadata is read; the file data is not loaded into memory.











<ExampleCodeBlock anchor="huggingface_hub.read_dduf_file.example">

Example:
```python
>>> import json
>>> import safetensors.torch
>>> from huggingface_hub import read_dduf_file

# Read DDUF metadata
>>> dduf_entries = read_dduf_file("FLUX.1-dev.dduf")

# Returns a mapping filename <> DDUFEntry
>>> dduf_entries["model_index.json"]
DDUFEntry(filename='model_index.json', offset=66, length=587)

# Load model index as JSON
>>> json.loads(dduf_entries["model_index.json"].read_text())
{'_class_name': 'FluxPipeline', '_diffusers_version': '0.32.0.dev0', '_name_or_path': 'black-forest-labs/FLUX.1-dev', ...

# Load VAE weights using safetensors
>>> with dduf_entries["vae/diffusion_pytorch_model.safetensors"].as_mmap() as mm:
...     state_dict = safetensors.torch.load(mm)
```

</ExampleCodeBlock>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.DDUFEntry</name><anchor>huggingface_hub.DDUFEntry</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/serialization/_dduf.py#L35</source><parameters>[{"name": "filename", "val": ": str"}, {"name": "length", "val": ": int"}, {"name": "offset", "val": ": int"}, {"name": "dduf_path", "val": ": Path"}]</parameters><paramsdesc>- **filename** (`str`) --
  The name of the file in the DDUF archive.
- **offset** (`int`) --
  The offset of the file in the DDUF archive.
- **length** (`int`) --
  The length of the file in the DDUF archive.
- **dduf_path** (`Path`) --
  The path to the DDUF archive (for internal use).</paramsdesc><paramgroups>0</paramgroups></docstring>
Object representing a file entry in a DDUF file.

See [read_dduf_file()](/docs/huggingface_hub/main/en/package_reference/serialization#huggingface_hub.read_dduf_file) for how to read a DDUF file.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>as_mmap</name><anchor>huggingface_hub.DDUFEntry.as_mmap</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/serialization/_dduf.py#L57</source><parameters>[]</parameters></docstring>
Open the file as a memory-mapped file.

Useful to load safetensors directly from the file.

<ExampleCodeBlock anchor="huggingface_hub.DDUFEntry.as_mmap.example">

Example:
```py
>>> import safetensors.torch
>>> with entry.as_mmap() as mm:
...     tensors = safetensors.torch.load(mm)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>read_text</name><anchor>huggingface_hub.DDUFEntry.read_text</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/serialization/_dduf.py#L74</source><parameters>[{"name": "encoding", "val": ": str = 'utf-8'"}]</parameters></docstring>
Read the file as text.

Useful for '.txt' and '.json' entries.

<ExampleCodeBlock anchor="huggingface_hub.DDUFEntry.read_text.example">

Example:
```py
>>> import json
>>> index = json.loads(entry.read_text())
```

</ExampleCodeBlock>


</div></div>

### Errors[[huggingface_hub.errors.DDUFError]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.errors.DDUFError</name><anchor>huggingface_hub.errors.DDUFError</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L337</source><parameters>""</parameters></docstring>
Base exception for errors related to the DDUF format.

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.errors.DDUFCorruptedFileError</name><anchor>huggingface_hub.errors.DDUFCorruptedFileError</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L341</source><parameters>""</parameters></docstring>
Exception thrown when the DDUF file is corrupted.

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.errors.DDUFExportError</name><anchor>huggingface_hub.errors.DDUFExportError</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L345</source><parameters>""</parameters></docstring>
Base exception for errors during DDUF export.

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.errors.DDUFInvalidEntryNameError</name><anchor>huggingface_hub.errors.DDUFInvalidEntryNameError</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L349</source><parameters>""</parameters></docstring>
Exception thrown when the entry name is invalid.

</div>

## Saving tensors

The main helper of the `serialization` module takes a torch `nn.Module` as input and saves it to disk. It handles the logic to save shared tensors (see the [safetensors explanation](https://huggingface.co/docs/safetensors/torch_shared_tensors)) as well as the logic to split the state dictionary into shards, using [split_torch_state_dict_into_shards()](/docs/huggingface_hub/main/en/package_reference/serialization#huggingface_hub.split_torch_state_dict_into_shards) under the hood. At the moment, only the `torch` framework is supported.

If you want to save a state dictionary (e.g. a mapping between layer names and related tensors) instead of a `nn.Module`, you can use [save_torch_state_dict()](/docs/huggingface_hub/main/en/package_reference/serialization#huggingface_hub.save_torch_state_dict) which provides the same features. This is useful for example if you want to apply custom logic to the state dict before saving it.

### save_torch_model[[huggingface_hub.save_torch_model]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.save_torch_model</name><anchor>huggingface_hub.save_torch_model</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/serialization/_torch.py#L39</source><parameters>[{"name": "model", "val": ": torch.nn.Module"}, {"name": "save_directory", "val": ": typing.Union[str, pathlib.Path]"}, {"name": "filename_pattern", "val": ": typing.Optional[str] = None"}, {"name": "force_contiguous", "val": ": bool = True"}, {"name": "max_shard_size", "val": ": typing.Union[int, str] = '5GB'"}, {"name": "metadata", "val": ": typing.Optional[dict[str, str]] = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "shared_tensors_to_discard", "val": ": typing.Optional[list[str]] = None"}]</parameters><paramsdesc>- **model** (`torch.nn.Module`) --
  The model to save on disk.
- **save_directory** (`str` or `Path`) --
  The directory in which the model will be saved.
- **filename_pattern** (`str`, *optional*) --
  The pattern used to generate the file names in which the model will be saved. The pattern must be a string
  that can be formatted with `filename_pattern.format(suffix=...)` and must contain the keyword `suffix`.
  Defaults to `"model{suffix}.safetensors"` or `"pytorch_model{suffix}.bin"` depending on the
  `safe_serialization` parameter.
- **force_contiguous** (`bool`, *optional*) --
  Whether to force the state dict to be saved as contiguous tensors. This has no effect on the correctness of
  the model, but it could change performance if a tensor's layout was chosen specifically for that reason.
  Defaults to `True`.
- **max_shard_size** (`int` or `str`, *optional*) --
  The maximum size of each shard, in bytes. Defaults to 5GB.
- **metadata** (`dict[str, str]`, *optional*) --
  Extra information to save along with the model. Some metadata is added for each dropped tensor. This
  information is not enough to recover the entire shared structure, but it might help in understanding it.
- **safe_serialization** (`bool`, *optional*) --
  Whether to save as safetensors, which is the default behavior. If `False`, the shards are saved as pickle.
  Safe serialization is recommended for security reasons. Saving as pickle is deprecated and will be removed
  in a future version.
- **is_main_process** (`bool`, *optional*) --
  Whether the calling process is the main process. Useful in distributed training (e.g. on TPUs) when this
  function must be called from all processes. In that case, set `is_main_process=True` only on the main
  process to avoid race conditions. Defaults to `True`.
- **shared_tensors_to_discard** (`list[str]`, *optional*) --
  List of tensor names to drop when saving shared tensors. If not provided and shared tensors are
  detected, it will drop the first name alphabetically.</paramsdesc><paramgroups>0</paramgroups></docstring>

Saves a given torch model to disk, handling sharding and shared-tensor issues.

See also [save_torch_state_dict()](/docs/huggingface_hub/main/en/package_reference/serialization#huggingface_hub.save_torch_state_dict) to save a state dict with more flexibility.

For more information about tensor sharing, check out [this guide](https://huggingface.co/docs/safetensors/torch_shared_tensors).

The model state dictionary is split into shards so that each shard is smaller than a given size. The shards are
saved in the `save_directory` with the given `filename_pattern`. If the model is too big to fit in a single shard,
an index file is saved in the `save_directory` to indicate where each tensor is saved. This helper uses
[split_torch_state_dict_into_shards()](/docs/huggingface_hub/main/en/package_reference/serialization#huggingface_hub.split_torch_state_dict_into_shards) under the hood. If `safe_serialization` is `True`, the shards are saved as
safetensors (the default). Otherwise, the shards are saved as pickle.

Before saving the model, the `save_directory` is cleared of any previous shard files.

> [!WARNING]
> If one of the model's tensors is bigger than `max_shard_size`, it will end up in its own shard, which will
> have a size greater than `max_shard_size`.

> [!WARNING]
> If your model is a `transformers.PreTrainedModel`, you should pass `model._tied_weights_keys` as `shared_tensors_to_discard` to properly handle shared tensors saving. This ensures the correct duplicate tensors are discarded during saving.



<ExampleCodeBlock anchor="huggingface_hub.save_torch_model.example">

Example:

```py
>>> from huggingface_hub import save_torch_model
>>> model = ... # A PyTorch model

# Save state dict to "path/to/folder". The model will be split into shards of 5GB each and saved as safetensors.
>>> save_torch_model(model, "path/to/folder")

# Load model back
>>> from huggingface_hub import load_torch_model
>>> load_torch_model(model, "path/to/folder")
```

</ExampleCodeBlock>


</div>

### save_torch_state_dict[[huggingface_hub.save_torch_state_dict]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.save_torch_state_dict</name><anchor>huggingface_hub.save_torch_state_dict</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/serialization/_torch.py#L133</source><parameters>[{"name": "state_dict", "val": ": dict"}, {"name": "save_directory", "val": ": typing.Union[str, pathlib.Path]"}, {"name": "filename_pattern", "val": ": typing.Optional[str] = None"}, {"name": "force_contiguous", "val": ": bool = True"}, {"name": "max_shard_size", "val": ": typing.Union[int, str] = '5GB'"}, {"name": "metadata", "val": ": typing.Optional[dict[str, str]] = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "shared_tensors_to_discard", "val": ": typing.Optional[list[str]] = None"}]</parameters><paramsdesc>- **state_dict** (`dict[str, torch.Tensor]`) --
  The state dictionary to save.
- **save_directory** (`str` or `Path`) --
  The directory in which the model will be saved.
- **filename_pattern** (`str`, *optional*) --
  The pattern used to generate the file names in which the model will be saved. The pattern must be a string
  that can be formatted with `filename_pattern.format(suffix=...)` and must contain the keyword `suffix`.
  Defaults to `"model{suffix}.safetensors"` or `"pytorch_model{suffix}.bin"` depending on the
  `safe_serialization` parameter.
- **force_contiguous** (`bool`, *optional*) --
  Whether to force the state dict to be saved as contiguous tensors. This has no effect on the correctness of
  the model, but it could change performance if a tensor's layout was chosen specifically for that reason.
  Defaults to `True`.
- **max_shard_size** (`int` or `str`, *optional*) --
  The maximum size of each shard, in bytes. Defaults to 5GB.
- **metadata** (`dict[str, str]`, *optional*) --
  Extra information to save along with the model. Some metadata is added for each dropped tensor. This
  information is not enough to recover the entire shared structure, but it might help in understanding it.
- **safe_serialization** (`bool`, *optional*) --
  Whether to save as safetensors, which is the default behavior. If `False`, the shards are saved as pickle.
  Safe serialization is recommended for security reasons. Saving as pickle is deprecated and will be removed
  in a future version.
- **is_main_process** (`bool`, *optional*) --
  Whether the calling process is the main process. Useful in distributed training (e.g. on TPUs) when this
  function must be called from all processes. In that case, set `is_main_process=True` only on the main
  process to avoid race conditions. Defaults to `True`.
- **shared_tensors_to_discard** (`list[str]`, *optional*) --
  List of tensor names to drop when saving shared tensors. If not provided and shared tensors are
  detected, it will drop the first name alphabetically.</paramsdesc><paramgroups>0</paramgroups></docstring>

Save a model state dictionary to disk, handling sharding and shared-tensor issues.

See also [save_torch_model()](/docs/huggingface_hub/main/en/package_reference/serialization#huggingface_hub.save_torch_model) to directly save a PyTorch model.

For more information about tensor sharing, check out [this guide](https://huggingface.co/docs/safetensors/torch_shared_tensors).

The model state dictionary is split into shards so that each shard is smaller than a given size. The shards are
saved in the `save_directory` with the given `filename_pattern`. If the model is too big to fit in a single shard,
an index file is saved in the `save_directory` to indicate where each tensor is saved. This helper uses
[split_torch_state_dict_into_shards()](/docs/huggingface_hub/main/en/package_reference/serialization#huggingface_hub.split_torch_state_dict_into_shards) under the hood. If `safe_serialization` is `True`, the shards are saved as
safetensors (the default). Otherwise, the shards are saved as pickle.

Before saving the model, the `save_directory` is cleared of any previous shard files.

> [!WARNING]
> If one of the model's tensors is bigger than `max_shard_size`, it will end up in its own shard, which will
> have a size greater than `max_shard_size`.

> [!WARNING]
> If your model is a `transformers.PreTrainedModel`, you should pass `model._tied_weights_keys` as `shared_tensors_to_discard` to properly handle shared tensors saving. This ensures the correct duplicate tensors are discarded during saving.



<ExampleCodeBlock anchor="huggingface_hub.save_torch_state_dict.example">

Example:

```py
>>> from huggingface_hub import save_torch_state_dict
>>> model = ... # A PyTorch model

# Save state dict to "path/to/folder". The model will be split into shards of 5GB each and saved as safetensors.
>>> state_dict = model.state_dict()
>>> save_torch_state_dict(state_dict, "path/to/folder")
```

</ExampleCodeBlock>


</div>

The `serialization` module also contains low-level helpers to split a state dictionary into several shards, while creating a proper index in the process. These helpers are available for `torch` tensors and are designed to be easily extended to any other ML frameworks.

### split_torch_state_dict_into_shards[[huggingface_hub.split_torch_state_dict_into_shards]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.split_torch_state_dict_into_shards</name><anchor>huggingface_hub.split_torch_state_dict_into_shards</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/serialization/_torch.py#L290</source><parameters>[{"name": "state_dict", "val": ": dict"}, {"name": "filename_pattern", "val": ": str = 'model{suffix}.safetensors'"}, {"name": "max_shard_size", "val": ": typing.Union[int, str] = '5GB'"}]</parameters><paramsdesc>- **state_dict** (`dict[str, torch.Tensor]`) --
  The state dictionary to save.
- **filename_pattern** (`str`, *optional*) --
  The pattern used to generate the file names in which the model will be saved. The pattern must be a string
  that can be formatted with `filename_pattern.format(suffix=...)` and must contain the keyword `suffix`.
  Defaults to `"model{suffix}.safetensors"`.
- **max_shard_size** (`int` or `str`, *optional*) --
  The maximum size of each shard, in bytes. Defaults to 5GB.</paramsdesc><paramgroups>0</paramgroups><rettype>`StateDictSplit`</rettype><retdesc>A `StateDictSplit` object containing the shards and the index to retrieve them.</retdesc></docstring>

Split a model state dictionary into shards so that each shard is smaller than a given size.

The shards are determined by iterating through the `state_dict` in the order of its keys. There is no optimization
made to make each shard as close as possible to the maximum size passed. For example, if the limit is 10GB and we
have tensors of sizes [6GB, 6GB, 2GB, 6GB, 2GB, 2GB] they will get sharded as [6GB], [6+2GB], [6+2+2GB] and not
[6+2+2GB], [6+2GB], [6GB].
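The greedy strategy described above can be sketched in plain Python (a simplified illustration only; the real helper also deduplicates shared storage and builds the index file):

```python
def greedy_split(sizes: dict[str, int], max_shard_size: int) -> list[list[str]]:
    """Iterate tensors in key order; start a new shard when adding the
    next tensor would exceed the limit. No bin-packing optimization."""
    shards: list[list[str]] = []
    current: list[str] = []
    current_size = 0
    for name, size in sizes.items():
        # An oversized tensor gets its own shard, larger than the limit
        if size > max_shard_size:
            shards.append([name])
            continue
        if current and current_size + size > max_shard_size:
            shards.append(current)
            current, current_size = [], 0
        current.append(name)
        current_size += size
    if current:
        shards.append(current)
    return shards

# Sizes from the example above (in GB), with a 10GB limit
print(greedy_split({"a": 6, "b": 6, "c": 2, "d": 6, "e": 2, "f": 2}, 10))
# → [['a'], ['b', 'c'], ['d', 'e', 'f']]
```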


> [!TIP]
> To save a model state dictionary to the disk, see [save_torch_state_dict()](/docs/huggingface_hub/main/en/package_reference/serialization#huggingface_hub.save_torch_state_dict). This helper uses
> `split_torch_state_dict_into_shards` under the hood.

> [!WARNING]
> If one of the model's tensors is bigger than `max_shard_size`, it will end up in its own shard, which will
> have a size greater than `max_shard_size`.







<ExampleCodeBlock anchor="huggingface_hub.split_torch_state_dict_into_shards.example">

Example:
```py
>>> import json
>>> import os
>>> import torch
>>> from safetensors.torch import save_file as safe_save_file
>>> from huggingface_hub import split_torch_state_dict_into_shards

>>> def save_state_dict(state_dict: dict[str, torch.Tensor], save_directory: str):
...     state_dict_split = split_torch_state_dict_into_shards(state_dict)
...     for filename, tensors in state_dict_split.filename_to_tensors.items():
...         shard = {tensor: state_dict[tensor] for tensor in tensors}
...         safe_save_file(
...             shard,
...             os.path.join(save_directory, filename),
...             metadata={"format": "pt"},
...         )
...     if state_dict_split.is_sharded:
...         index = {
...             "metadata": state_dict_split.metadata,
...             "weight_map": state_dict_split.tensor_to_filename,
...         }
...         with open(os.path.join(save_directory, "model.safetensors.index.json"), "w") as f:
...             f.write(json.dumps(index, indent=2))
```

</ExampleCodeBlock>


</div>

### split_state_dict_into_shards_factory[[huggingface_hub.split_state_dict_into_shards_factory]]

This is the underlying factory from which each framework-specific helper is derived. In practice, you are not expected to use this factory directly unless you need to adapt it to a framework that is not yet supported. If that is the case, please let us know by [opening a new issue](https://github.com/huggingface/huggingface_hub/issues/new) on the `huggingface_hub` repo.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.split_state_dict_into_shards_factory</name><anchor>huggingface_hub.split_state_dict_into_shards_factory</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/serialization/_base.py#L49</source><parameters>[{"name": "state_dict", "val": ": dict"}, {"name": "get_storage_size", "val": ": typing.Callable[[~TensorT], int]"}, {"name": "filename_pattern", "val": ": str"}, {"name": "get_storage_id", "val": ": typing.Callable[[~TensorT], typing.Optional[typing.Any]] = <function <lambda> at 0x7f1786e33910>"}, {"name": "max_shard_size", "val": ": typing.Union[int, str] = '5GB'"}]</parameters><paramsdesc>- **state_dict** (`dict[str, Tensor]`) --
  The state dictionary to save.
- **get_storage_size** (`Callable[[Tensor], int]`) --
  A function that returns the size of a tensor when saved on disk in bytes.
- **get_storage_id** (`Callable[[Tensor], Optional[Any]]`, *optional*) --
  A function that returns a unique identifier to a tensor storage. Multiple different tensors can share the
  same underlying storage. This identifier is guaranteed to be unique and constant for this tensor's storage
  during its lifetime. Two tensor storages with non-overlapping lifetimes may have the same id.
- **filename_pattern** (`str`, *optional*) --
  The pattern used to generate the file names in which the model will be saved. The pattern must be a string
  that can be formatted with `filename_pattern.format(suffix=...)` and must contain the keyword `suffix`.
- **max_shard_size** (`int` or `str`, *optional*) --
  The maximum size of each shard, in bytes. Defaults to 5GB.</paramsdesc><paramgroups>0</paramgroups><rettype>`StateDictSplit`</rettype><retdesc>A `StateDictSplit` object containing the shards and the index to retrieve them.</retdesc></docstring>

Split a model state dictionary into shards so that each shard is smaller than a given size.

The shards are determined by iterating through the `state_dict` in the order of its keys. There is no optimization
made to make each shard as close as possible to the maximum size passed. For example, if the limit is 10GB and we
have tensors of sizes [6GB, 6GB, 2GB, 6GB, 2GB, 2GB] they will get sharded as [6GB], [6+2GB], [6+2+2GB] and not
[6+2+2GB], [6+2GB], [6GB].

> [!WARNING]
> If one of the model's tensors is bigger than `max_shard_size`, it will end up in its own shard, which will
> have a size greater than `max_shard_size`.
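As a hedged sketch of adapting the factory to a new "framework" (here, plain `bytes` objects stand in for tensors and `len()` serves as the on-disk size function; a real adapter would use the framework's own size and storage-id functions):

```python
from huggingface_hub import split_state_dict_into_shards_factory

# Toy state dict: raw bytes objects instead of real tensors
state_dict = {
    "layer.0": b"\x00" * 6,
    "layer.1": b"\x00" * 6,
    "layer.2": b"\x00" * 2,
}

split = split_state_dict_into_shards_factory(
    state_dict,
    get_storage_size=len,                 # size on disk, in bytes
    filename_pattern="data{suffix}.bin",  # must contain the "suffix" keyword
    max_shard_size=10,
)

# Greedy split: ["layer.0"] (6 bytes), then ["layer.1", "layer.2"] (6+2 bytes)
print(split.is_sharded)
for filename, tensors in split.filename_to_tensors.items():
    print(filename, tensors)
```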








</div>

## Loading tensors

The loading helpers support both single-file and sharded checkpoints in either safetensors or pickle format. [load_torch_model()](/docs/huggingface_hub/main/en/package_reference/serialization#huggingface_hub.load_torch_model) takes an `nn.Module` and a checkpoint path (either a single file or a directory) as input and loads the weights into the model.

### load_torch_model[[huggingface_hub.load_torch_model]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.load_torch_model</name><anchor>huggingface_hub.load_torch_model</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/serialization/_torch.py#L363</source><parameters>[{"name": "model", "val": ": torch.nn.Module"}, {"name": "checkpoint_path", "val": ": typing.Union[str, os.PathLike]"}, {"name": "strict", "val": ": bool = False"}, {"name": "safe", "val": ": bool = True"}, {"name": "weights_only", "val": ": bool = False"}, {"name": "map_location", "val": ": typing.Union[str, ForwardRef('torch.device'), NoneType] = None"}, {"name": "mmap", "val": ": bool = False"}, {"name": "filename_pattern", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **model** (`torch.nn.Module`) --
  The model in which to load the checkpoint.
- **checkpoint_path** (`str` or `os.PathLike`) --
  Path to either the checkpoint file or directory containing the checkpoint(s).
- **strict** (`bool`, *optional*, defaults to `False`) --
  Whether to strictly enforce that the keys in the model state dict match the keys in the checkpoint.
- **safe** (`bool`, *optional*, defaults to `True`) --
  If `safe` is True, the safetensors files will be loaded. If `safe` is False, the function
  will first attempt to load safetensors files if they are available, otherwise it will fall back to loading
  pickle files. `filename_pattern` parameter takes precedence over `safe` parameter.
- **weights_only** (`bool`, *optional*, defaults to `False`) --
  If True, only loads the model weights without optimizer states and other metadata.
  Only supported in PyTorch >= 1.13.
- **map_location** (`str` or `torch.device`, *optional*) --
  A `torch.device` object, string or a dict specifying how to remap storage locations. It
  indicates the location where all tensors should be loaded.
- **mmap** (`bool`, *optional*, defaults to `False`) --
  Whether to use memory-mapped file loading. Memory mapping can improve loading performance
  for large models in PyTorch >= 2.1.0 with zipfile-based checkpoints.
- **filename_pattern** (`str`, *optional*) --
  The pattern used to look for the index file. The pattern must be a string that can be formatted with
  `filename_pattern.format(suffix=...)` and must contain the keyword `suffix`.
  Defaults to `"model{suffix}.safetensors"`.</paramsdesc><paramgroups>0</paramgroups><rettype>`NamedTuple`</rettype><retdesc>A named tuple with `missing_keys` and `unexpected_keys` fields.
- `missing_keys` is a list of str containing the missing keys, i.e. keys that are in the model but not in the checkpoint.
- `unexpected_keys` is a list of str containing the unexpected keys, i.e. keys that are in the checkpoint but not in the model.</retdesc><raises>- [`FileNotFoundError`](https://docs.python.org/3/library/exceptions.html#FileNotFoundError) -- 
  If the checkpoint file or directory does not exist.
- [`ImportError`](https://docs.python.org/3/library/exceptions.html#ImportError) -- 
  If safetensors or torch is not installed when trying to load a .safetensors file or a PyTorch checkpoint respectively.
- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If the checkpoint path is invalid or if the checkpoint format cannot be determined.</raises><raisederrors>``FileNotFoundError`` or ``ImportError`` or ``ValueError``</raisederrors></docstring>

Load a checkpoint into a model, handling both sharded and non-sharded checkpoints.











<ExampleCodeBlock anchor="huggingface_hub.load_torch_model.example">

Example:
```python
>>> from huggingface_hub import load_torch_model
>>> model = ... # A PyTorch model
>>> load_torch_model(model, "path/to/checkpoint")
```

</ExampleCodeBlock>


</div>

### load_state_dict_from_file[[huggingface_hub.load_state_dict_from_file]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.load_state_dict_from_file</name><anchor>huggingface_hub.load_state_dict_from_file</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/serialization/_torch.py#L540</source><parameters>[{"name": "checkpoint_file", "val": ": typing.Union[str, os.PathLike]"}, {"name": "map_location", "val": ": typing.Union[str, ForwardRef('torch.device'), NoneType] = None"}, {"name": "weights_only", "val": ": bool = False"}, {"name": "mmap", "val": ": bool = False"}]</parameters><paramsdesc>- **checkpoint_file** (`str` or `os.PathLike`) --
  Path to the checkpoint file to load. Can be either a safetensors or pickle (`.bin`) checkpoint.
- **map_location** (`str` or `torch.device`, *optional*) --
  A `torch.device` object, string or a dict specifying how to remap storage locations. It
  indicates the location where all tensors should be loaded.
- **weights_only** (`bool`, *optional*, defaults to `False`) --
  If True, only loads the model weights without optimizer states and other metadata.
  Only supported for pickle (`.bin`) checkpoints with PyTorch >= 1.13. Has no effect when
  loading safetensors files.
- **mmap** (`bool`, *optional*, defaults to `False`) --
  Whether to use memory-mapped file loading. Memory mapping can improve loading performance
  for large models in PyTorch >= 2.1.0 with zipfile-based checkpoints. Has no effect when
  loading safetensors files, as the `safetensors` library uses memory mapping by default.</paramsdesc><paramgroups>0</paramgroups><rettype>`Union[dict[str, "torch.Tensor"], Any]`</rettype><retdesc>The loaded checkpoint.
- For safetensors files: always returns a dictionary mapping parameter names to tensors.
- For pickle files: returns any Python object that was pickled (commonly a state dict, but could be
  an entire model, optimizer state, or any other Python object).</retdesc><raises>- [`FileNotFoundError`](https://docs.python.org/3/library/exceptions.html#FileNotFoundError) -- 
  If the checkpoint file does not exist.
- [`ImportError`](https://docs.python.org/3/library/exceptions.html#ImportError) -- 
  If safetensors or torch is not installed when trying to load a .safetensors file or a PyTorch checkpoint respectively.
- [`OSError`](https://docs.python.org/3/library/exceptions.html#OSError) -- 
  If the checkpoint file format is invalid or if git-lfs files are not properly downloaded.
- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If the checkpoint file path is empty or invalid.</raises><raisederrors>``FileNotFoundError`` or ``ImportError`` or ``OSError`` or ``ValueError``</raisederrors></docstring>

Loads a checkpoint file, handling both safetensors and pickle checkpoint formats.











<ExampleCodeBlock anchor="huggingface_hub.load_state_dict_from_file.example">

Example:
```python
>>> from huggingface_hub import load_state_dict_from_file

# Load a PyTorch checkpoint
>>> state_dict = load_state_dict_from_file("path/to/model.bin", map_location="cpu")
>>> model.load_state_dict(state_dict)

# Load a safetensors checkpoint
>>> state_dict = load_state_dict_from_file("path/to/model.safetensors")
>>> model.load_state_dict(state_dict)
```

</ExampleCodeBlock>


</div>

## Tensors helpers

### get_torch_storage_id[[huggingface_hub.get_torch_storage_id]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.get_torch_storage_id</name><anchor>huggingface_hub.get_torch_storage_id</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/serialization/_torch.py#L726</source><parameters>[{"name": "tensor", "val": ": torch.Tensor"}]</parameters></docstring>

Return a unique identifier for a tensor's storage.

Multiple different tensors can share the same underlying storage. This identifier is
guaranteed to be unique and constant for this tensor's storage during its lifetime. Two tensor storages with
non-overlapping lifetimes may have the same id.
In the case of meta tensors, we return None since we can't tell if they share the same storage.

Taken from https://github.com/huggingface/transformers/blob/1ecf5f7c982d761b4daaa96719d162c324187c64/src/transformers/pytorch_utils.py#L278.
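
To make the shared-storage behavior concrete, here is a minimal sketch (assuming `torch` and `huggingface_hub` are installed):

```python
import torch
from huggingface_hub import get_torch_storage_id

base = torch.zeros(10)
view = base[2:5]           # a view: shares the same underlying storage as `base`
other = torch.zeros(10)    # a separate tensor with its own storage

# Tensors sharing storage get the same id; independent storages get different ids.
assert get_torch_storage_id(base) == get_torch_storage_id(view)
assert get_torch_storage_id(base) != get_torch_storage_id(other)
```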


</div>

### get_torch_storage_size[[huggingface_hub.get_torch_storage_size]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.get_torch_storage_size</name><anchor>huggingface_hub.get_torch_storage_size</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/serialization/_torch.py#L743</source><parameters>[{"name": "tensor", "val": ": torch.Tensor"}]</parameters></docstring>

Return the size in bytes of the tensor's underlying storage.

Taken from https://github.com/huggingface/safetensors/blob/08db34094e9e59e2f9218f2df133b7b4aaff5a99/bindings/python/py_src/safetensors/torch.py#L31C1-L41C59
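
As a quick illustration of what this helper returns (assuming `torch` and `huggingface_hub` are installed), the size is reported in bytes:

```python
import torch
from huggingface_hub import get_torch_storage_size

t = torch.zeros(10, dtype=torch.float32)
# 10 elements * 4 bytes per float32 = 40 bytes
print(get_torch_storage_size(t))  # 40
```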


</div>

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/package_reference/serialization.md" />

### Create and manage a repository
https://huggingface.co/docs/huggingface_hub/main/guides/repository.md

# Create and manage a repository

The Hugging Face Hub is a collection of git repositories. [Git](https://git-scm.com/) is a widely used tool in software
development to easily version projects when working collaboratively. This guide will show you how to interact with the
repositories on the Hub, especially:

- Create and delete a repository.
- Manage branches and tags.
- Rename your repository.
- Update your repository visibility.
- Manage a local copy of your repository.

> [!WARNING]
> If you are used to working with platforms such as GitLab/GitHub/Bitbucket, your first instinct
> might be to use `git` CLI to clone your repo (`git clone`), commit changes (`git add, git commit`) and push them
> (`git push`). This is valid when using the Hugging Face Hub. However, software engineering and machine learning do
> not share the same requirements and workflows. Model repositories often contain large weight files for different
> frameworks and tools, so cloning a repository can leave you maintaining massive local folders. As
> a result, it may be more efficient to use our custom HTTP methods. You can read our [Git vs HTTP paradigm](../concepts/git_vs_http)
> explanation page for more details.

If you want to create and manage a repository on the Hub, your machine must be logged in. If you are not, please refer to
[this section](../quick-start#authentication). In the rest of this guide, we will assume that your machine is logged in.

## Repo creation and deletion

The first step is to know how to create and delete repositories. You can only manage repositories that you own (under
your username namespace) or repositories of organizations in which you have write permissions.

### Create a repository

Create an empty repository with [create_repo()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_repo) and give it a name with the `repo_id` parameter. The `repo_id` is your namespace followed by the repository name: `username_or_org/repo_name`.

```py
>>> from huggingface_hub import create_repo
>>> create_repo("lysandre/test-model")
'https://huggingface.co/lysandre/test-model'
```

Or via CLI:

```bash
>>> hf repo create lysandre/test-model
Successfully created lysandre/test-model on the Hub.
Your repo is now available at https://huggingface.co/lysandre/test-model
```

By default, [create_repo()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_repo) creates a model repository. But you can use the `repo_type` parameter to specify another repository type. For example, if you want to create a dataset repository:

```py
>>> from huggingface_hub import create_repo
>>> create_repo("lysandre/test-dataset", repo_type="dataset")
'https://huggingface.co/datasets/lysandre/test-dataset'
```

Or via CLI:

```bash
>>> hf repo create lysandre/test-dataset --repo-type dataset
```

When you create a repository, you can set your repository visibility with the `private` parameter.

```py
>>> from huggingface_hub import create_repo
>>> create_repo("lysandre/test-private", private=True)
```

Or via CLI:

```bash
>>> hf repo create lysandre/test-private --private
```

If you want to change the repository visibility at a later time, you can use the [update_repo_settings()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.update_repo_settings) function.

> [!TIP]
> If you are part of an organization with an Enterprise plan, you can create a repo in a specific resource group by passing `resource_group_id` as parameter to [create_repo()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_repo). Resource groups are a security feature to control which members from your org can access a given resource. You can get the resource group ID by copying it from your org settings page url on the Hub (e.g. `"https://huggingface.co/organizations/huggingface/settings/resource-groups/66670e5163145ca562cb1988"` => `"66670e5163145ca562cb1988"`). For more details about resource group, check out this [guide](https://huggingface.co/docs/hub/en/security-resource-groups).
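
The ID extraction described in the tip above is just the last path segment of the settings URL; a quick sketch:

```python
# Extract the resource group ID from an org settings page URL (example URL from the tip above)
url = "https://huggingface.co/organizations/huggingface/settings/resource-groups/66670e5163145ca562cb1988"
resource_group_id = url.rstrip("/").split("/")[-1]
print(resource_group_id)  # 66670e5163145ca562cb1988
```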

### Delete a repository

Delete a repository with [delete_repo()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.delete_repo). Make sure you want to delete a repository because this is an irreversible process!

Specify the `repo_id` of the repository you want to delete:

```py
>>> delete_repo(repo_id="lysandre/my-corrupted-dataset", repo_type="dataset")
```

Or via CLI:

```bash
>>> hf repo delete lysandre/my-corrupted-dataset --repo-type dataset
```

### Duplicate a repository (only for Spaces)

In some cases, you want to copy someone else's repo to adapt it to your use case.
This is possible for Spaces using the [duplicate_space()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.duplicate_space) method. It will duplicate the whole repository.
You will still need to configure your own settings (hardware, sleep-time, storage, variables and secrets). Check out our [Manage your Space](./manage-spaces) guide for more details.

```py
>>> from huggingface_hub import duplicate_space
>>> duplicate_space("multimodalart/dreambooth-training", private=False)
RepoUrl('https://huggingface.co/spaces/nateraw/dreambooth-training',...)
```

## Upload and download files

Now that you have created your repository, you are interested in pushing changes to it and downloading files from it.

These two topics deserve their own guides. Please refer to the [upload](./upload) and the [download](./download) guides
to learn how to use your repository.


## Branches and tags

Git repositories often make use of branches to store different versions of the same repository.
Tags can also be used to flag a specific state of your repository, for example, when releasing a version.
More generally, branches and tags are referred to as [git references](https://git-scm.com/book/en/v2/Git-Internals-Git-References).

### Create branches and tags

You can create new branches and tags using [create_branch()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_branch) and [create_tag()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_tag):

```py
>>> from huggingface_hub import create_branch, create_tag

# Create a branch on a Space repo from `main` branch
>>> create_branch("Matthijs/speecht5-tts-demo", repo_type="space", branch="handle-dog-speaker")

# Create a tag on a Dataset repo from `v0.1-release` branch
>>> create_tag("bigcode/the-stack", repo_type="dataset", revision="v0.1-release", tag="v0.1.1", tag_message="Bump release version.")
```

Or via CLI:

```bash
>>> hf repo branch create Matthijs/speecht5-tts-demo handle-dog-speaker --repo-type space
>>> hf repo tag create bigcode/the-stack v0.1.1 --repo-type dataset --revision v0.1-release -m "Bump release version."
```

You can use the [delete_branch()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.delete_branch) and [delete_tag()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.delete_tag) functions in the same way to delete a branch or a tag, or `hf repo branch delete` and `hf repo tag delete` respectively in CLI.


### List all branches and tags

You can also list the existing git refs from a repository using [list_repo_refs()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.list_repo_refs):

```py
>>> from huggingface_hub import list_repo_refs
>>> list_repo_refs("bigcode/the-stack", repo_type="dataset")
GitRefs(
   branches=[
         GitRefInfo(name='main', ref='refs/heads/main', target_commit='18edc1591d9ce72aa82f56c4431b3c969b210ae3'),
         GitRefInfo(name='v1.1.a1', ref='refs/heads/v1.1.a1', target_commit='f9826b862d1567f3822d3d25649b0d6d22ace714')
   ],
   converts=[],
   tags=[
         GitRefInfo(name='v1.0', ref='refs/tags/v1.0', target_commit='c37a8cd1e382064d8aced5e05543c5f7753834da')
   ]
)
```

## Change repository settings

Repositories come with some settings that you can configure. Most of the time, you will want to do that manually in the
repo settings page in your browser. You must have write access to a repo to configure it (either by owning it or by being part of
an organization). In this section, we will see the settings that you can also configure programmatically using `huggingface_hub`.

Some settings are specific to Spaces (hardware, environment variables,...). To configure those, please refer to our [Manage your Spaces](../guides/manage-spaces) guide.

### Update visibility

A repository can be public or private. A private repository is only visible to you or members of the organization in which the repository is located. Change a repository to private as follows:

```py
>>> from huggingface_hub import update_repo_settings
>>> update_repo_settings(repo_id=repo_id, private=True)
```

Or via CLI:

```bash
>>> hf repo settings lysandre/test-private --private true
```

### Setup gated access

To give more control over how repos are used, the Hub allows repo authors to enable **access requests** for their repos. When enabled, users must agree to share their contact information (username and email address) with the repo authors in order to access the files. A repo with access requests enabled is called a **gated repo**.

You can set a repo as gated using [update_repo_settings()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.update_repo_settings):

```py
>>> from huggingface_hub import HfApi

>>> api = HfApi()
>>> api.update_repo_settings(repo_id=repo_id, gated="auto")  # Set automatic gating for a model
```

Or via CLI:

```bash
>>> hf repo settings lysandre/test-private --gated auto
```

### Rename your repository

You can rename your repository on the Hub using [move_repo()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.move_repo). Using this method, you can also move the repo from a user to
an organization. When doing so, there are a [few limitations](https://hf.co/docs/hub/repositories-settings#renaming-or-transferring-a-repo)
that you should be aware of. For example, you can't transfer your repo to another user.

```py
>>> from huggingface_hub import move_repo
>>> move_repo(from_id="Wauplin/cool-model", to_id="huggingface/cool-model")
```

Or via CLI:

```bash
>>> hf repo move Wauplin/cool-model huggingface/cool-model
```


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/guides/repository.md" />

### Collections
https://huggingface.co/docs/huggingface_hub/main/guides/collections.md

# Collections

A collection is a group of related items on the Hub (models, datasets, Spaces, papers) that are organized together on the same page. Collections are useful for creating your own portfolio, bookmarking content in categories, or presenting a curated list of items you want to share. Check out this [guide](https://huggingface.co/docs/hub/collections) to understand in more detail what collections are and how they look on the Hub.

You can directly manage collections in the browser, but in this guide, we will focus on how to manage them programmatically.

## Fetch a collection

Use [get_collection()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.get_collection) to fetch your collections or any public ones. You must have the collection's *slug* to retrieve a collection. A slug is an identifier for a collection based on the title and a unique ID. You can find the slug in the URL of the collection page.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hfh_collection_slug.png"/>
</div>
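
As a rough sketch of the slug's shape (a title-derived prefix followed by a unique ID), the owner is the first path segment and the ID is the last dash-separated component:

```python
# Split a collection slug into its owner and unique ID (illustrative parsing, not an official API)
slug = "TheBloke/recent-models-64f9a55bb3115b4f513ec026"
owner, name = slug.split("/", 1)
unique_id = name.rsplit("-", 1)[-1]
print(owner)      # TheBloke
print(unique_id)  # 64f9a55bb3115b4f513ec026
```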

Let's fetch the collection with the slug `"TheBloke/recent-models-64f9a55bb3115b4f513ec026"`:

```py
>>> from huggingface_hub import get_collection
>>> collection = get_collection("TheBloke/recent-models-64f9a55bb3115b4f513ec026")
>>> collection
Collection(
  slug='TheBloke/recent-models-64f9a55bb3115b4f513ec026',
  title='Recent models',
  owner='TheBloke',
  items=[...],
  last_updated=datetime.datetime(2023, 10, 2, 22, 56, 48, 632000, tzinfo=datetime.timezone.utc),
  position=1,
  private=False,
  theme='green',
  upvotes=90,
  description="Models I've recently quantized. Please note that currently this list has to be updated manually, and therefore is not guaranteed to be up-to-date."
)
>>> collection.items[0]
CollectionItem(
  item_object_id='651446103cd773a050bf64c2',
  item_id='TheBloke/U-Amethyst-20B-AWQ',
  item_type='model',
  position=88,
  note=None
)
```

The [Collection](/docs/huggingface_hub/main/en/package_reference/collections#huggingface_hub.Collection) object returned by [get_collection()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.get_collection) contains:
- high-level metadata: `slug`, `owner`, `title`, `description`, etc.
- a list of [CollectionItem](/docs/huggingface_hub/main/en/package_reference/collections#huggingface_hub.CollectionItem) objects; each item represents a model, a dataset, a Space, or a paper.

All collection items are guaranteed to have:
- a unique `item_object_id`: this is the id of the collection item in the database
- an `item_id`: this is the id on the Hub of the underlying item (model, dataset, Space, paper); it is not necessarily unique, and only the `item_id`/`item_type` pair is unique
- an `item_type`: model, dataset, Space, paper
- the `position` of the item in the collection, which can be updated to reorganize your collection (see [update_collection_item()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.update_collection_item) below)

A `note` can also be attached to the item. This is useful to add additional information about the item (a comment, a link to a blog post, etc.). If an item has no note, the attribute is simply `None`.

In addition to these base attributes, returned items can have additional attributes depending on their type: `author`, `private`, `lastModified`, `gated`, `title`, `likes`, `upvotes`, etc. None of these attributes are guaranteed to be returned.
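
To make the uniqueness rule concrete, here is a small sketch with hypothetical item data: the same `item_id` can appear twice, but each `(item_id, item_type)` pair occurs at most once.

```python
# Hypothetical collection items, mirroring the attributes described above
items = [
    {"item_id": "warp-ai/wuerstchen", "item_type": "model"},
    {"item_id": "warp-ai/wuerstchen", "item_type": "space"},  # same item_id, different item_type
    {"item_id": "coqui/xtts", "item_type": "space"},
]

pairs = {(item["item_id"], item["item_type"]) for item in items}
assert len(pairs) == len(items)  # every (item_id, item_type) pair is unique
```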

## List collections

We can also retrieve collections using [list_collections()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.list_collections). Collections can be filtered using some parameters. Let's list all the collections from the user [`teknium`](https://huggingface.co/teknium).
```py
>>> from huggingface_hub import list_collections

>>> collections = list_collections(owner="teknium")
```

This returns an iterable of `Collection` objects. We can iterate over them to print, for example, the number of upvotes for each collection.

```py
>>> for collection in collections:
...   print("Number of upvotes:", collection.upvotes)
Number of upvotes: 1
Number of upvotes: 5
```

> [!WARNING]
> When listing collections, the item list per collection is truncated to 4 items maximum. To retrieve all items from a collection, you must use [get_collection()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.get_collection).

It is possible to do more advanced filtering. Let's get all collections containing the model [TheBloke/OpenHermes-2.5-Mistral-7B-GGUF](https://huggingface.co/TheBloke/OpenHermes-2.5-Mistral-7B-GGUF), sorted by trending and limited to 5 results.
```py
>>> collections = list_collections(item="models/TheBloke/OpenHermes-2.5-Mistral-7B-GGUF", sort="trending", limit=5)
>>> for collection in collections:
...   print(collection.slug)
teknium/quantized-models-6544690bb978e0b0f7328748
AmeerH/function-calling-65560a2565d7a6ef568527af
PostArchitekt/7bz-65479bb8c194936469697d8c
gnomealone/need-to-test-652007226c6ce4cdacf9c233
Crataco/favorite-7b-models-651944072b4fffcb41f8b568
```

The `sort` parameter must be one of `"last_modified"`, `"trending"` or `"upvotes"`. The `item` parameter accepts any particular item. For example:
* `"models/teknium/OpenHermes-2.5-Mistral-7B"`
* `"spaces/julien-c/open-gpt-rhyming-robot"`
* `"datasets/squad"`
* `"papers/2311.12983"`

For more details, please check out [list_collections()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.list_collections) reference.

## Create a new collection

Now that we know how to get a [Collection](/docs/huggingface_hub/main/en/package_reference/collections#huggingface_hub.Collection), let's create our own! Use [create_collection()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_collection) with a title and description. To create a collection on an organization page, pass `namespace="my-cool-org"` when creating the collection. Finally, you can also create private collections by passing `private=True`.

```py
>>> from huggingface_hub import create_collection

>>> collection = create_collection(
...     title="ICCV 2023",
...     description="Portfolio of models, papers and demos I presented at ICCV 2023",
... )
```

It will return a [Collection](/docs/huggingface_hub/main/en/package_reference/collections#huggingface_hub.Collection) object with the high-level metadata (title, description, owner, etc.) and an empty list of items. You will now be able to refer to this collection using its `slug`.

```py
>>> collection.slug
'owner/iccv-2023-15e23b46cb98efca45'
>>> collection.title
"ICCV 2023"
>>> collection.owner
"username"
>>> collection.url
'https://huggingface.co/collections/owner/iccv-2023-15e23b46cb98efca45'
```

## Manage items in a collection

Now that we have a [Collection](/docs/huggingface_hub/main/en/package_reference/collections#huggingface_hub.Collection), we want to add items to it and organize them.

### Add items

Items have to be added one by one using [add_collection_item()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.add_collection_item). You only need to know the `collection_slug`, `item_id` and `item_type`. Optionally, you can also add a `note` to the item (500 characters maximum).

```py
>>> from huggingface_hub import create_collection, add_collection_item

>>> collection = create_collection(title="OS Week Highlights - Sept 18 - 24", namespace="osanseviero")
>>> collection.slug
"osanseviero/os-week-highlights-sept-18-24-650bfed7f795a59f491afb80"

>>> add_collection_item(collection.slug, item_id="coqui/xtts", item_type="space")
>>> add_collection_item(
...     collection.slug,
...     item_id="warp-ai/wuerstchen",
...     item_type="model",
...     note="Würstchen is a new fast and efficient high resolution text-to-image architecture and model"
... )
>>> add_collection_item(collection.slug, item_id="lmsys/lmsys-chat-1m", item_type="dataset")
>>> add_collection_item(collection.slug, item_id="warp-ai/wuerstchen", item_type="space") # same item_id, different item_type
```

If an item already exists in a collection (same `item_id`/`item_type` pair), an HTTP 409 error will be raised. You can choose to ignore this error by setting `exists_ok=True`.

### Add a note to an existing item

You can modify an existing item to add or modify the note attached to it using [update_collection_item()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.update_collection_item). Let's reuse the example above:

```py
>>> from huggingface_hub import get_collection, update_collection_item

# Fetch collection with newly added items
>>> collection_slug = "osanseviero/os-week-highlights-sept-18-24-650bfed7f795a59f491afb80"
>>> collection = get_collection(collection_slug)

# Add a note to the `lmsys-chat-1m` dataset
>>> update_collection_item(
...     collection_slug=collection_slug,
...     item_object_id=collection.items[2].item_object_id,
...     note="This dataset contains one million real-world conversations with 25 state-of-the-art LLMs.",
... )
```

### Reorder items

Items in a collection are ordered. The order is determined by the `position` attribute of each item. By default, new items are appended to the end of the collection. You can update the order using [update_collection_item()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.update_collection_item) the same way you would add a note.

Let's reuse our example above:

```py
>>> from huggingface_hub import get_collection, update_collection_item

# Fetch collection
>>> collection_slug = "osanseviero/os-week-highlights-sept-18-24-650bfed7f795a59f491afb80"
>>> collection = get_collection(collection_slug)

# Reorder to place the two `Wuerstchen` items together
>>> update_collection_item(
...     collection_slug=collection_slug,
...     item_object_id=collection.items[3].item_object_id,
...     position=2,
... )
```

### Remove items

Finally, you can also remove an item using [delete_collection_item()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.delete_collection_item).

```py
>>> from huggingface_hub import get_collection, delete_collection_item

# Fetch collection
>>> collection_slug = "osanseviero/os-week-highlights-sept-18-24-650bfed7f795a59f491afb80"
>>> collection = get_collection(collection_slug)

# Remove `coqui/xtts` Space from the list
>>> delete_collection_item(collection_slug=collection_slug, item_object_id=collection.items[0].item_object_id)
```

## Delete collection

A collection can be deleted using [delete_collection()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.delete_collection).

> [!WARNING]
> This is a non-revertible action. A deleted collection cannot be restored.

```py
>>> from huggingface_hub import delete_collection
>>> collection = delete_collection("username/useless-collection-64f9a55bb3115b4f513ec026", missing_ok=True)
```


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/guides/collections.md" />

### Download files from the Hub
https://huggingface.co/docs/huggingface_hub/main/guides/download.md

# Download files from the Hub

The `huggingface_hub` library provides functions to download files from the repositories
stored on the Hub. You can use these functions independently or integrate them into your
own library, making it more convenient for your users to interact with the Hub. This
guide will show you how to:

* Download and cache a single file.
* Download and cache an entire repository.
* Download files to a local folder.

## Download a single file

The [hf_hub_download()](/docs/huggingface_hub/main/en/package_reference/file_download#huggingface_hub.hf_hub_download) function is the main function for downloading files from the Hub.
It downloads the remote file, caches it on disk (in a version-aware way), and returns its local file path.

> [!TIP]
> The returned filepath is a pointer to the HF local cache. Therefore, it is important not to modify the file to avoid
> corrupting the cache. If you are interested in learning more about how files are cached, please refer to our
> [caching guide](./manage-cache).

### From latest version

Select the file to download using the `repo_id`, `repo_type` and `filename` parameters. By default, the file is
assumed to be part of a `model` repo.

```python
>>> from huggingface_hub import hf_hub_download
>>> hf_hub_download(repo_id="lysandre/arxiv-nlp", filename="config.json")
'/root/.cache/huggingface/hub/models--lysandre--arxiv-nlp/snapshots/894a9adde21d9a3e3843e6d5aeaaf01875c7fade/config.json'

# Download from a dataset
>>> hf_hub_download(repo_id="google/fleurs", filename="fleurs.py", repo_type="dataset")
'/root/.cache/huggingface/hub/datasets--google--fleurs/snapshots/199e4ae37915137c555b1765c01477c216287d34/fleurs.py'
```

### From specific version

By default, the latest version from the `main` branch is downloaded. However, in some cases you want to download a file
at a particular version (e.g. from a specific branch, a PR, a tag or a commit hash).
To do so, use the `revision` parameter:

```python
# Download from the `v1.0` tag
>>> hf_hub_download(repo_id="lysandre/arxiv-nlp", filename="config.json", revision="v1.0")

# Download from the `test-branch` branch
>>> hf_hub_download(repo_id="lysandre/arxiv-nlp", filename="config.json", revision="test-branch")

# Download from Pull Request #3
>>> hf_hub_download(repo_id="lysandre/arxiv-nlp", filename="config.json", revision="refs/pr/3")

# Download from a specific commit hash
>>> hf_hub_download(repo_id="lysandre/arxiv-nlp", filename="config.json", revision="877b84a8f93f2d619faa2a6e514a32beef88ab0a")
```

**Note:** When using the commit hash, it must be the full-length hash instead of a 7-character commit hash.
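
A full commit hash is 40 hexadecimal characters; here is a quick sketch (a hypothetical helper, not part of `huggingface_hub`) to check a revision string before passing it:

```python
import re

def is_full_commit_hash(revision: str) -> bool:
    """Return True if `revision` looks like a full 40-character commit hash."""
    return re.fullmatch(r"[0-9a-f]{40}", revision) is not None

print(is_full_commit_hash("877b84a8f93f2d619faa2a6e514a32beef88ab0a"))  # True
print(is_full_commit_hash("877b84a"))  # False: short hashes are not accepted
```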

### Construct a download URL

In case you want to construct the URL used to download a file from a repo, you can use [hf_hub_url()](/docs/huggingface_hub/main/en/package_reference/file_download#huggingface_hub.hf_hub_url) which returns a URL.
Note that it is used internally by [hf_hub_download()](/docs/huggingface_hub/main/en/package_reference/file_download#huggingface_hub.hf_hub_download).
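
For instance (assuming `huggingface_hub` is installed), the resulting URL points at the Hub's `resolve` endpoint:

```python
from huggingface_hub import hf_hub_url

# Build the download URL without performing any network request
url = hf_hub_url(repo_id="lysandre/arxiv-nlp", filename="config.json", revision="main")
print(url)  # https://huggingface.co/lysandre/arxiv-nlp/resolve/main/config.json
```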

## Download an entire repository

[snapshot_download()](/docs/huggingface_hub/main/en/package_reference/file_download#huggingface_hub.snapshot_download) downloads an entire repository at a given revision. It internally uses [hf_hub_download()](/docs/huggingface_hub/main/en/package_reference/file_download#huggingface_hub.hf_hub_download), which
means all downloaded files are also cached on your local disk. Downloads are performed concurrently to speed up the process.

To download a whole repository, just pass the `repo_id` and `repo_type`:

```python
>>> from huggingface_hub import snapshot_download
>>> snapshot_download(repo_id="lysandre/arxiv-nlp")
'/home/lysandre/.cache/huggingface/hub/models--lysandre--arxiv-nlp/snapshots/894a9adde21d9a3e3843e6d5aeaaf01875c7fade'

# Or from a dataset
>>> snapshot_download(repo_id="google/fleurs", repo_type="dataset")
'/home/lysandre/.cache/huggingface/hub/datasets--google--fleurs/snapshots/199e4ae37915137c555b1765c01477c216287d34'
```

[snapshot_download()](/docs/huggingface_hub/main/en/package_reference/file_download#huggingface_hub.snapshot_download) downloads the latest revision by default. If you want a specific repository revision, use the
`revision` parameter:

```python
>>> from huggingface_hub import snapshot_download
>>> snapshot_download(repo_id="lysandre/arxiv-nlp", revision="refs/pr/1")
```

### Filter files to download

[snapshot_download()](/docs/huggingface_hub/main/en/package_reference/file_download#huggingface_hub.snapshot_download) provides an easy way to download a repository. However, you don't always want to download the
entire content of a repository. For example, you might want to prevent downloading all `.bin` files if you know you'll
only use the `.safetensors` weights. You can do that using `allow_patterns` and `ignore_patterns` parameters.

These parameters accept either a single pattern or a list of patterns. Patterns are Standard Wildcards (globbing
patterns) as documented [here](https://tldp.org/LDP/GNU-Linux-Tools-Summary/html/x11655.htm). The pattern matching is
based on [`fnmatch`](https://docs.python.org/3/library/fnmatch.html).
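
To illustrate how these patterns combine, here is a simplified sketch of the filtering logic using `fnmatch` directly (the actual implementation in `huggingface_hub` may differ):

```python
import fnmatch

# A hypothetical file listing for a repo
files = ["config.json", "vocab.json", "model.safetensors", "model.msgpack", "README.md"]
allow_patterns = ["*.md", "*.json"]
ignore_patterns = ["vocab.json"]

# A file is kept if it matches at least one allow pattern and no ignore pattern
selected = [
    f
    for f in files
    if any(fnmatch.fnmatch(f, p) for p in allow_patterns)
    and not any(fnmatch.fnmatch(f, p) for p in ignore_patterns)
]
print(selected)  # ['config.json', 'README.md']
```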

For example, you can use `allow_patterns` to only download JSON configuration files:

```python
>>> from huggingface_hub import snapshot_download
>>> snapshot_download(repo_id="lysandre/arxiv-nlp", allow_patterns="*.json")
```

On the other hand, `ignore_patterns` can exclude certain files from being downloaded. The
following example ignores the `.msgpack` and `.h5` file extensions:

```python
>>> from huggingface_hub import snapshot_download
>>> snapshot_download(repo_id="lysandre/arxiv-nlp", ignore_patterns=["*.msgpack", "*.h5"])
```

Finally, you can combine both to precisely filter your download. Here is an example to download all json and markdown
files except `vocab.json`.

```python
>>> from huggingface_hub import snapshot_download
>>> snapshot_download(repo_id="gpt2", allow_patterns=["*.md", "*.json"], ignore_patterns="vocab.json")
```
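
To see how `allow_patterns` and `ignore_patterns` interact, here is an offline sketch of the matching logic. The `filter_files` helper is a local illustration built on `fnmatch`, not a `huggingface_hub` API, and the real implementation may differ in details (e.g. how paths relative to the repo root are matched):

```python
# Local sketch of allow/ignore pattern filtering, based on fnmatch.
from fnmatch import fnmatch

def filter_files(files, allow_patterns=None, ignore_patterns=None):
    """Keep files matching at least one allow pattern and no ignore pattern."""
    selected = []
    for path in files:
        if allow_patterns is not None and not any(fnmatch(path, p) for p in allow_patterns):
            continue  # no allow pattern matched: skip
        if ignore_patterns is not None and any(fnmatch(path, p) for p in ignore_patterns):
            continue  # an ignore pattern matched: skip
        selected.append(path)
    return selected

files = ["config.json", "vocab.json", "README.md", "model.safetensors"]
print(filter_files(files, allow_patterns=["*.md", "*.json"], ignore_patterns=["vocab.json"]))
# ['config.json', 'README.md']
```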

## Download file(s) to a local folder

By default, we recommend using the [cache system](./manage-cache) to download files from the Hub. You can specify a custom cache location using the `cache_dir` parameter in [hf_hub_download()](/docs/huggingface_hub/main/en/package_reference/file_download#huggingface_hub.hf_hub_download) and [snapshot_download()](/docs/huggingface_hub/main/en/package_reference/file_download#huggingface_hub.snapshot_download), or by setting the [`HF_HOME`](../package_reference/environment_variables#hf_home) environment variable.

However, if you need to download files to a specific folder, you can pass a `local_dir` parameter to the download function. This is useful to get a workflow closer to what the `git` command offers. The downloaded files will maintain their original file structure within the specified folder. For example, if `filename="data/train.csv"` and `local_dir="path/to/folder"`, the resulting filepath will be `"path/to/folder/data/train.csv"`.
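
The resulting location is plain path joining, which you can verify without downloading anything:

```python
# How a repo filename maps onto local_dir (pure path arithmetic, no download needed)
from pathlib import Path

local_dir = Path("path/to/folder")
filename = "data/train.csv"  # path of the file inside the repo
print((local_dir / filename).as_posix())  # path/to/folder/data/train.csv
```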

A `.cache/huggingface/` folder is created at the root of your local directory, containing metadata about the downloaded files. This prevents re-downloading files if they're already up-to-date. If the metadata has changed, the new file version is downloaded. This makes `local_dir` well suited for pulling only the latest changes.

After completing the download, you can safely remove the `.cache/huggingface/` folder if you no longer need it. However, be aware that re-running your script without this folder may result in longer recovery times, as metadata will be lost. Rest assured that your local data will remain intact and unaffected.

> [!TIP]
> Don't worry about the `.cache/huggingface/` folder when committing changes to the Hub! This folder is automatically ignored by both `git` and [upload_folder()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_folder).

## Download from the CLI

You can use the `hf download` command from the terminal to directly download files from the Hub.
Internally, it uses the same [hf_hub_download()](/docs/huggingface_hub/main/en/package_reference/file_download#huggingface_hub.hf_hub_download) and [snapshot_download()](/docs/huggingface_hub/main/en/package_reference/file_download#huggingface_hub.snapshot_download) helpers described above and prints the
returned path to the terminal.

```bash
>>> hf download gpt2 config.json
/home/wauplin/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/config.json
```

You can download multiple files at once, which displays a progress bar and returns the snapshot path in which the files
are located:

```bash
>>> hf download gpt2 config.json model.safetensors
Fetching 2 files: 100%|████████████████████████████████████████████| 2/2 [00:00<00:00, 23831.27it/s]
/home/wauplin/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10
```

For more details about the CLI download command, please refer to the [CLI guide](./cli#hf-download).

## Dry-run mode

In some cases, you may want to check which files would be downloaded before actually downloading them. You can do this with the `--dry-run` option: it lists all the files to download from the repo and checks whether each one is already cached. This gives you an idea of how many files would be downloaded and their sizes.

Here is an example, checking on a single file:

```sh
>>> hf download openai-community/gpt2 onnx/decoder_model_merged.onnx --dry-run
[dry-run] Will download 1 files (out of 1) totalling 655.2M
File                           Bytes to download
------------------------------ -----------------
onnx/decoder_model_merged.onnx 655.2M
```

And if the file is already cached:

```sh
>>> hf download openai-community/gpt2 onnx/decoder_model_merged.onnx --dry-run
[dry-run] Will download 0 files (out of 1) totalling 0.0.
File                           Bytes to download
------------------------------ -----------------
onnx/decoder_model_merged.onnx -
```

You can also execute a dry-run on an entire repository:

```sh
>>> hf download openai-community/gpt2 --dry-run
[dry-run] Fetching 26 files: 100%|█████████████| 26/26 [00:04<00:00,  6.26it/s]
[dry-run] Will download 11 files (out of 26) totalling 5.6G.
File                              Bytes to download
--------------------------------- -----------------
.gitattributes                    -
64-8bits.tflite                   125.2M
64-fp16.tflite                    248.3M
64.tflite                         495.8M
README.md                         -
config.json                       -
flax_model.msgpack                497.8M
generation_config.json            -
merges.txt                        -
model.safetensors                 548.1M
onnx/config.json                  -
onnx/decoder_model.onnx           653.7M
onnx/decoder_model_merged.onnx    655.2M
onnx/decoder_with_past_model.onnx 653.7M
onnx/generation_config.json       -
onnx/merges.txt                   -
onnx/special_tokens_map.json      -
onnx/tokenizer.json               -
onnx/tokenizer_config.json        -
onnx/vocab.json                   -
pytorch_model.bin                 548.1M
rust_model.ot                     702.5M
tf_model.h5                       497.9M
tokenizer.json                    -
tokenizer_config.json             -
vocab.json                        -
```

And with files filtering:

```sh
>>> hf download openai-community/gpt2 --include "*.json"  --dry-run
[dry-run] Fetching 11 files: 100%|█████████████| 11/11 [00:00<00:00, 80518.92it/s]
[dry-run] Will download 0 files (out of 11) totalling 0.0.
File                         Bytes to download
---------------------------- -----------------
config.json                  -
generation_config.json       -
onnx/config.json             -
onnx/generation_config.json  -
onnx/special_tokens_map.json -
onnx/tokenizer.json          -
onnx/tokenizer_config.json   -
onnx/vocab.json              -
tokenizer.json               -
tokenizer_config.json        -
vocab.json                   -
```

Finally, you can also perform a dry-run programmatically by passing `dry_run=True` to [hf_hub_download()](/docs/huggingface_hub/main/en/package_reference/file_download#huggingface_hub.hf_hub_download) and [snapshot_download()](/docs/huggingface_hub/main/en/package_reference/file_download#huggingface_hub.snapshot_download). [hf_hub_download()](/docs/huggingface_hub/main/en/package_reference/file_download#huggingface_hub.hf_hub_download) returns a [DryRunFileInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.DryRunFileInfo) (and [snapshot_download()](/docs/huggingface_hub/main/en/package_reference/file_download#huggingface_hub.snapshot_download) a list of [DryRunFileInfo](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.DryRunFileInfo)) describing, for each file, its commit hash, file name and size, whether the file is cached, and whether it would be downloaded. In practice, a file is downloaded if it is not cached or if `force_download=True` is passed.
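
For instance, you could aggregate a dry-run result to reproduce the CLI summary. The `FileInfo` dataclass below is an offline stand-in for `DryRunFileInfo` (with a real repo, you would get the list from `snapshot_download(repo_id=..., dry_run=True)`); the attribute names on the real class may differ:

```python
# Summarize a dry-run: count and total the files that would actually be downloaded.
from dataclasses import dataclass

@dataclass
class FileInfo:  # offline stand-in for huggingface_hub.DryRunFileInfo
    filename: str
    size: int  # bytes
    is_cached: bool
    will_download: bool

def summarize(infos):
    """Return (number of files to download, total bytes to download)."""
    to_fetch = [f for f in infos if f.will_download]
    return len(to_fetch), sum(f.size for f in to_fetch)

infos = [
    FileInfo("config.json", 665, is_cached=True, will_download=False),
    FileInfo("model.safetensors", 548_105_171, is_cached=False, will_download=True),
]
n, total = summarize(infos)
print(f"Will download {n} file(s) totalling {total} bytes")
```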

## Faster downloads

Take advantage of faster transfers through `hf_xet`, the Python binding to the [`xet-core`](https://github.com/huggingface/xet-core) library, which enables
chunk-based deduplication for faster downloads and uploads. `hf_xet` integrates seamlessly with `huggingface_hub`, but uses the Rust `xet-core` library and Xet storage instead of LFS.

`hf_xet` uses the Xet storage system, which breaks files down into immutable chunks, storing collections of these chunks (called blocks or xorbs) remotely and retrieving them to reassemble the file when requested. When downloading, after confirming the user is authorized to access the files, `hf_xet` will query the Xet content-addressable service (CAS) with the LFS SHA256 hash for this file to receive the reconstruction metadata (ranges within xorbs) to assemble these files, along with presigned URLs to download the xorbs directly. Then `hf_xet` will efficiently download the xorb ranges necessary and will write out the files on disk.
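
Conceptually, content-addressed chunking can be sketched in a few lines. This toy example uses tiny fixed-size chunks; real Xet chunking is content-defined, uses much larger chunks grouped into xorbs, and is implemented in Rust by `xet-core`:

```python
# Toy illustration of chunk-based deduplication: split content into chunks,
# address each chunk by its SHA256, and store each unique chunk only once.
import hashlib

CHUNK_SIZE = 4  # tiny for demonstration purposes only

def store_chunks(data: bytes, store: dict) -> list:
    """Store unique chunks; return the list of chunk hashes (the 'reconstruction metadata')."""
    hashes = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        h = hashlib.sha256(chunk).hexdigest()
        store.setdefault(h, chunk)  # deduplicated: an identical chunk is stored once
        hashes.append(h)
    return hashes

def reassemble(hashes: list, store: dict) -> bytes:
    """Rebuild the original file from its chunk hashes."""
    return b"".join(store[h] for h in hashes)

store = {}
recipe = store_chunks(b"abcdabcdxyz!", store)
assert reassemble(recipe, store) == b"abcdabcdxyz!"
print(f"{len(recipe)} chunks referenced, {len(store)} stored")  # 3 chunks referenced, 2 stored
```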

To enable it, simply install the latest version of `huggingface_hub`:

```bash
pip install -U "huggingface_hub"
```

As of `huggingface_hub` 0.32.0, this will also install `hf_xet`.

All other `huggingface_hub` APIs will continue to work without any modification. To learn more about the benefits of Xet storage and `hf_xet`, refer to this [section](https://huggingface.co/docs/hub/xet/index).

Note: `hf_transfer` was formerly used with the LFS storage backend and is now deprecated; use `hf_xet` instead.


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/guides/download.md" />

### Run and manage Jobs
https://huggingface.co/docs/huggingface_hub/main/guides/jobs.md

# Run and manage Jobs

The Hugging Face Hub provides compute for AI and data workflows via Jobs.
A job runs on Hugging Face infrastructure and is defined by a command to run (e.g. a Python command), a Docker image from Hugging Face Spaces or Docker Hub, and a hardware flavor (CPU, GPU, TPU). This guide will show you how to interact with Jobs on the Hub, in particular how to:

- Run a job.
- Check job status.
- Select the hardware.
- Configure environment variables and secrets.
- Run UV scripts.

If you want to run and manage a job on the Hub, your machine must be logged in. If you are not, please refer to
[this section](../quick-start#authentication). In the rest of this guide, we will assume that your machine is logged in.

> [!TIP]
> **Hugging Face Jobs** are available only to [Pro users](https://huggingface.co/pro) and [Team or Enterprise organizations](https://huggingface.co/enterprise). Upgrade your plan to get started!

## Jobs Command Line Interface

Use the [`hf jobs` CLI](./cli#hf-jobs) to run Jobs from the command line, and pass `--flavor` to specify your hardware.

`hf jobs run` runs Jobs with a Docker image and a command with a familiar Docker-like interface. Think `docker run`, but for running code on any hardware:

```bash
>>> hf jobs run python:3.12 python -c "print('Hello world!')"
>>> hf jobs run --flavor a10g-small pytorch/pytorch:2.6.0-cuda12.4-cudnn9-devel python -c "import torch; print(torch.cuda.get_device_name())"
```

Use `hf jobs uv run` to run local or remote UV scripts:

```bash
>>> hf jobs uv run my_script.py
>>> hf jobs uv run --flavor a10g-small "https://raw.githubusercontent.com/huggingface/trl/main/trl/scripts/sft.py" 
```

UV scripts are Python scripts that include their dependencies directly in the file using a special comment syntax defined in the [UV documentation](https://docs.astral.sh/uv/guides/scripts/).

The rest of this guide covers the Python API.
If you would like to view all the available `hf jobs` commands and options instead, check out the [guide on the `hf jobs` command line interface](./cli#hf-jobs).

## Run a Job

Run compute Jobs defined with a command and a Docker Image on Hugging Face infrastructure (including GPUs and TPUs).

You can only manage Jobs that you own (under your username namespace) or from organizations in which you have write permissions.
This feature is pay-as-you-go: you only pay for the seconds you use.

[run_job()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.run_job) lets you run any command on Hugging Face's infrastructure:

```python
# Directly run Python code
>>> from huggingface_hub import run_job
>>> run_job(
...     image="python:3.12",
...     command=["python", "-c", "print('Hello from the cloud!')"],
... )

# Use GPUs without any setup
>>> run_job(
...     image="pytorch/pytorch:2.6.0-cuda12.4-cudnn9-devel",
...     command=["python", "-c", "import torch; print(torch.cuda.get_device_name())"],
...     flavor="a10g-small",
... )

# Run in an organization account
>>> run_job(
...     image="python:3.12",
...     command=["python", "-c", "print('Running in an org account')"],
...     namespace="my-org-name",
... )

# Run from Hugging Face Spaces
>>> run_job(
...     image="hf.co/spaces/lhoestq/duckdb",
...     command=["duckdb", "-c", "select 'hello world'"],
... )

# Run a Python script with `uv` (experimental)
>>> from huggingface_hub import run_uv_job
>>> run_uv_job("my_script.py")
```

> [!WARNING]
> **Important**: Jobs have a default timeout (30 minutes), after which they will automatically stop. For long-running tasks like model training, make sure to set a custom timeout using the `timeout` parameter. See [Configure Job Timeout](#configure-job-timeout) for details.

[run_job()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.run_job) returns the [JobInfo](/docs/huggingface_hub/main/en/package_reference/jobs#huggingface_hub.JobInfo) which has the URL of the Job on Hugging Face, where you can see the Job status and the logs.
Save the Job ID from [JobInfo](/docs/huggingface_hub/main/en/package_reference/jobs#huggingface_hub.JobInfo) to manage the job:

```python
>>> from huggingface_hub import run_job
>>> job = run_job(
...     image="python:3.12",
...     command=["python", "-c", "print('Hello from the cloud!')"]
... )
>>> job.url
https://huggingface.co/jobs/lhoestq/687f911eaea852de79c4a50a
>>> job.id
687f911eaea852de79c4a50a
```

Jobs run in the background. The next section guides you through [inspect_job()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.inspect_job) to check a job's status and [fetch_job_logs()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.fetch_job_logs) to view the logs.

## Check Job status

```python
# List your jobs
>>> from huggingface_hub import list_jobs
>>> jobs = list_jobs()
>>> jobs[0]
JobInfo(id='687f911eaea852de79c4a50a', created_at=datetime.datetime(2025, 7, 22, 13, 24, 46, 909000, tzinfo=datetime.timezone.utc), docker_image='python:3.12', space_id=None, command=['python', '-c', "print('Hello from the cloud!')"], arguments=[], environment={}, secrets={}, flavor='cpu-basic', status=JobStatus(stage='COMPLETED', message=None), owner=JobOwner(id='5e9ecfc04957053f60648a3e', name='lhoestq'), endpoint='https://huggingface.co', url='https://huggingface.co/jobs/lhoestq/687f911eaea852de79c4a50a')

# List your running jobs
>>> running_jobs = [job for job in list_jobs() if job.status.stage == "RUNNING"]

# Inspect the status of a job
>>> from huggingface_hub import inspect_job
>>> inspect_job(job_id=job_id)
JobInfo(id='687f911eaea852de79c4a50a', created_at=datetime.datetime(2025, 7, 22, 13, 24, 46, 909000, tzinfo=datetime.timezone.utc), docker_image='python:3.12', space_id=None, command=['python', '-c', "print('Hello from the cloud!')"], arguments=[], environment={}, secrets={}, flavor='cpu-basic', status=JobStatus(stage='COMPLETED', message=None), owner=JobOwner(id='5e9ecfc04957053f60648a3e', name='lhoestq'), endpoint='https://huggingface.co', url='https://huggingface.co/jobs/lhoestq/687f911eaea852de79c4a50a')

# View logs from a job
>>> from huggingface_hub import fetch_job_logs
>>> for log in fetch_job_logs(job_id=job_id):
...     print(log)
Hello from the cloud!

# Cancel a job
>>> from huggingface_hub import cancel_job
>>> cancel_job(job_id=job_id)
```

Check the status of multiple jobs to know when they're all finished using a loop and [inspect_job()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.inspect_job):

```python
# Run multiple jobs in parallel and wait for their completion
>>> import time
>>> from huggingface_hub import inspect_job, run_job
>>> image = "python:3.12"
>>> commands = [
...     ["python", "-c", "print('Job 1')"],
...     ["python", "-c", "print('Job 2')"],
... ]
>>> jobs = [run_job(image=image, command=command) for command in commands]
>>> for job in jobs:
...     while inspect_job(job_id=job.id).status.stage not in ("COMPLETED", "ERROR"):
...         time.sleep(10)
```
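
The polling pattern above can be factored into a small helper. The status check is injected as a callable so this sketch runs offline; in practice you would pass `lambda job_id: inspect_job(job_id=job_id).status.stage`:

```python
# Block until a set of jobs reach a terminal stage, polling a status callable.
import time

def wait_for_jobs(job_ids, get_stage, poll_interval=10.0):
    """Return {job_id: final_stage} once every job is COMPLETED or ERROR."""
    terminal = {"COMPLETED", "ERROR"}
    stages = {}
    for job_id in job_ids:
        stage = get_stage(job_id)
        while stage not in terminal:
            time.sleep(poll_interval)
            stage = get_stage(job_id)
        stages[job_id] = stage
    return stages

# Offline demo: job-1 flips to COMPLETED on its second poll
polls = {"job-1": iter(["RUNNING", "COMPLETED"]), "job-2": iter(["COMPLETED"])}
print(wait_for_jobs(["job-1", "job-2"], lambda j: next(polls[j]), poll_interval=0.01))
# {'job-1': 'COMPLETED', 'job-2': 'COMPLETED'}
```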

## Select the hardware

There are numerous cases where running Jobs on GPUs is useful:

- **Model Training**: Fine-tune or train models on GPUs (T4, A10G, A100) without managing infrastructure
- **Synthetic Data Generation**: Generate large-scale datasets using LLMs on powerful hardware
- **Data Processing**: Process massive datasets with high-CPU configurations for parallel workloads
- **Batch Inference**: Run offline inference on thousands of samples using optimized GPU setups
- **Experiments & Benchmarks**: Run ML experiments on consistent hardware for reproducible results
- **Development & Debugging**: Test GPU code without local CUDA setup

Run jobs on GPUs or TPUs with the `flavor` argument. For example, to run a PyTorch job on an A10G GPU:

```python
# Use an A10G GPU to check PyTorch CUDA
>>> from huggingface_hub import run_job
>>> run_job(
...     image="pytorch/pytorch:2.6.0-cuda12.4-cudnn9-devel",
...     command=["python", "-c", "import torch; print(f'This code ran with the following GPU: {torch.cuda.get_device_name()}')"],
...     flavor="a10g-small",
... )
```

Running this will show the following output:

```bash
This code ran with the following GPU: NVIDIA A10G
```

Use this to run a fine-tuning script like [trl/scripts/sft.py](https://github.com/huggingface/trl/blob/main/trl/scripts/sft.py) with UV:

```python
>>> from huggingface_hub import run_uv_job
>>> run_uv_job(
...     "sft.py",
...     script_args=["--model_name_or_path", "Qwen/Qwen2-0.5B", ...],
...     dependencies=["trl"],
...     env={"HF_TOKEN": ...},
...     flavor="a10g-small",
... )
```

> [!TIP]
> For comprehensive guidance on running model training jobs with TRL on Hugging Face infrastructure, check out the [TRL Jobs Training documentation](https://huggingface.co/docs/trl/main/en/jobs_training). It covers fine-tuning recipes, hardware selection, and best practices for training models efficiently.

Available `flavor` options:

- CPU: `cpu-basic`, `cpu-upgrade`
- GPU: `t4-small`, `t4-medium`, `l4x1`, `l4x4`, `a10g-small`, `a10g-large`, `a10g-largex2`, `a10g-largex4`, `a100-large`
- TPU: `v5e-1x1`, `v5e-2x2`, `v5e-2x4`

(updated in July 2025 from the Hugging Face [suggested_hardware docs](https://huggingface.co/docs/hub/en/spaces-config-reference))

That's it! You're now running code on Hugging Face's infrastructure.

## Configure Job Timeout

Jobs have a default timeout (30 minutes), after which they will automatically stop. This is important to know when running long-running tasks like model training.

### Setting a custom timeout

You can specify a custom timeout value using the `timeout` parameter when running a job. The timeout can be specified in two ways:

1. **As a number** (interpreted as seconds):
```python
>>> from huggingface_hub import run_job
>>> job = run_job(
...     image="pytorch/pytorch:2.6.0-cuda12.4-cudnn9-devel",
...     command=["python", "train_model.py"],
...     flavor="a10g-large",
...     timeout=7200,  # 2 hours in seconds
... )
```

2. **As a string with time units**:
```python
>>> # Using different time units
>>> job = run_job(
...     image="pytorch/pytorch:2.6.0-cuda12.4-cudnn9-devel",
...     command=["python", "train_model.py"],
...     flavor="a10g-large",
...     timeout="2h",  # 2 hours
... )

>>> # Other examples:
>>> # timeout="30m"    # 30 minutes
>>> # timeout="1.5h"   # 1.5 hours
>>> # timeout="1d"     # 1 day
>>> # timeout="3600s"  # 3600 seconds
```

Supported time units:
- `s` - seconds
- `m` - minutes  
- `h` - hours
- `d` - days
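
As an illustration, the unit conversion boils down to a multiplier table. The helper below is a local sketch, not the parser `huggingface_hub` uses internally:

```python
# Convert a timeout value ("2h", "30m", 7200, ...) to seconds.
UNITS = {"s": 1, "m": 60, "h": 3600, "d": 86400}

def timeout_to_seconds(timeout) -> float:
    """Accept a number (seconds) or a string with an optional trailing unit."""
    if isinstance(timeout, (int, float)):
        return float(timeout)
    if timeout[-1] in UNITS:
        return float(timeout[:-1]) * UNITS[timeout[-1]]
    return float(timeout)  # bare numeric string, e.g. "7200"

print(timeout_to_seconds("2h"))    # 7200.0
print(timeout_to_seconds("1.5h"))  # 5400.0
print(timeout_to_seconds("1d"))    # 86400.0
```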

### Using timeout with UV jobs

For UV jobs, you can also specify the timeout:

```python
>>> from huggingface_hub import run_uv_job
>>> job = run_uv_job(
...     "training_script.py",
...     flavor="a10g-large",
...     timeout="90m",  # 90 minutes
... )
```

> [!WARNING]
> If you don't specify a timeout, a default timeout will be applied to your job. For long-running tasks like model training that may take hours, make sure to set an appropriate timeout to avoid unexpected job terminations.

### Monitoring job duration

When running long tasks, it's good practice to:
- Estimate your job's expected duration and set a timeout with some buffer
- Monitor your job's progress through the logs
- Check the job status to ensure it hasn't timed out

```python
>>> from huggingface_hub import inspect_job, fetch_job_logs
>>> # Check job status
>>> job_info = inspect_job(job_id=job.id)
>>> if job_info.status.stage == "ERROR":
...     print(f"Job failed: {job_info.status.message}")
...     # Check logs for more details
...     for log in fetch_job_logs(job_id=job.id):
...         print(log)
```

For more details about the timeout parameter, see the [`run_job` API reference](https://huggingface.co/docs/huggingface_hub/package_reference/hf_api#huggingface_hub.HfApi.run_job.timeout).

## Pass Environment variables and Secrets

You can pass environment variables to your job using `env` and `secrets`:

```python
# Pass environment variables
>>> from huggingface_hub import run_job
>>> run_job(
...     image="python:3.12",
...     command=["python", "-c", "import os; print(os.environ['FOO'], os.environ['BAR'])"],
...     env={"FOO": "foo", "BAR": "bar"},
... )
```


```python
# Pass secrets - they will be encrypted server side
>>> from huggingface_hub import run_job
>>> run_job(
...     image="python:3.12",
...     command=["python", "-c", "import os; print(os.environ['MY_SECRET'])"],
...     secrets={"MY_SECRET": "psswrd"},
... )
```


### UV Scripts (Experimental)

> [!TIP]
> Looking for ready-to-use UV scripts? Check out the [uv-scripts organization](https://huggingface.co/uv-scripts) on the Hugging Face Hub, which offers a community collection of UV scripts for tasks like model training, synthetic data generation, data processing, and more.

Run UV scripts (Python scripts with inline dependencies) on HF infrastructure:

```python
# Run a UV script (creates temporary repo)
>>> from huggingface_hub import run_uv_job
>>> run_uv_job("my_script.py")

# Run with GPU
>>> run_uv_job("ml_training.py", flavor="t4-small")

# Run with dependencies
>>> run_uv_job("inference.py", dependencies=["transformers", "torch"])

# Run a script directly from a URL
>>> run_uv_job("https://huggingface.co/datasets/username/scripts/resolve/main/example.py")

# Run a command
>>> run_uv_job("python", script_args=["-c", "import lighteval"], dependencies=["lighteval"])
```

UV scripts are Python scripts that include their dependencies directly in the file using a special comment syntax. This makes them perfect for self-contained tasks that don't require complex project setups. Learn more about UV scripts in the [UV documentation](https://docs.astral.sh/uv/guides/scripts/).
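
For reference, the inline metadata lives in a comment block at the top of the script (the format is standardized as PEP 723; `huggingface_hub` is used here only as an example dependency):

```python
# my_script.py: dependencies are declared in the file itself
# /// script
# requires-python = ">=3.9"
# dependencies = [
#     "huggingface_hub",
# ]
# ///
# uv reads the block above and installs the dependencies before running the script.
import sys

print(f"running on Python {sys.version_info.major}.{sys.version_info.minor}")
```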


#### Docker Images for UV Scripts

While UV scripts can specify their dependencies inline, ML workloads often have complex dependencies. Using pre-built Docker images with these libraries already installed can significantly speed up job startup and avoid dependency issues.

By default, when you run `hf jobs uv run` the `astral-sh/uv:python3.12-bookworm` image is used. This image is based on the Python 3.12 Bookworm distribution with uv pre-installed.

You can specify a different image using the `--image` flag:

```bash
hf jobs uv run \
 --flavor a10g-large \
 --image vllm/vllm-openai:latest \
...
```

The above command will run using the `vllm/vllm-openai:latest` image. This approach could be useful if you are using vLLM for synthetic data generation.

> [!TIP]
> Many inference frameworks provide optimized Docker images. As uv is increasingly adopted in the Python ecosystem, more of these will also ship with uv pre-installed, meaning they will work with `hf jobs uv run`.

### Scheduled Jobs

Schedule and manage jobs that will run on HF infrastructure.

Use [create_scheduled_job()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_scheduled_job) or [create_scheduled_uv_job()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_scheduled_uv_job) with a schedule of `@annually`, `@yearly`, `@monthly`, `@weekly`, `@daily`, `@hourly`, or a CRON schedule expression (e.g., `"0 9 * * 1"` for 9 AM every Monday):

```python
# Schedule a job that runs every hour
>>> from huggingface_hub import create_scheduled_job
>>> create_scheduled_job(
...     image="python:3.12",
...     command=["python",  "-c", "print('This runs every hour!')"],
...     schedule="@hourly"
... )

# Use the CRON syntax
>>> create_scheduled_job(
...     image="python:3.12",
...     command=["python",  "-c", "print('This runs every 5 minutes!')"],
...     schedule="*/5 * * * *"
... )

# Schedule with GPU
>>> create_scheduled_job(
...     image="pytorch/pytorch:2.6.0-cuda12.4-cudnn9-devel",
...     command=["python",  "-c", 'import torch; print(f"This code ran with the following GPU: {torch.cuda.get_device_name()}")'],
...     schedule="@hourly",
...     flavor="a10g-small",
... )

# Schedule a UV script
>>> from huggingface_hub import create_scheduled_uv_job
>>> create_scheduled_uv_job("my_script.py", schedule="@hourly")
```
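
A CRON expression has five fields (minute, hour, day of month, month, day of week). Scheduling itself happens server-side; the tiny checker below only illustrates how the syntax reads, supporting just the `*`, `*/n`, and comma-list forms:

```python
# Minimal illustration of how a five-field CRON expression is read.
def cron_field_matches(field: str, value: int) -> bool:
    """Match one CRON field ('*', '*/5', '9', '1,3') against a value."""
    if field == "*":
        return True
    if field.startswith("*/"):
        return value % int(field[2:]) == 0
    return value in {int(part) for part in field.split(",")}

# "0 9 * * 1" -> 9 AM every Monday (minute=0, hour=9, weekday=1)
minute, hour, _, _, weekday = "0 9 * * 1".split()
assert cron_field_matches(minute, 0) and cron_field_matches(hour, 9) and cron_field_matches(weekday, 1)
print("matches 09:00 on Monday")
```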

Use the same parameters as [run_job()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.run_job) and [run_uv_job()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.run_uv_job) to pass environment variables, secrets, timeout, etc.

Manage scheduled jobs using `list_scheduled_jobs`, [inspect_scheduled_job()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.inspect_scheduled_job), [suspend_scheduled_job()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.suspend_scheduled_job), [resume_scheduled_job()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.resume_scheduled_job), and [delete_scheduled_job()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.delete_scheduled_job):

```python
# List your active scheduled jobs
>>> from huggingface_hub import list_scheduled_jobs
>>> list_scheduled_jobs()

# Inspect the status of a job
>>> from huggingface_hub import inspect_scheduled_job
>>> inspect_scheduled_job(scheduled_job_id)

# Suspend (pause) a scheduled job
>>> from huggingface_hub import suspend_scheduled_job
>>> suspend_scheduled_job(scheduled_job_id)

# Resume a scheduled job
>>> from huggingface_hub import resume_scheduled_job
>>> resume_scheduled_job(scheduled_job_id)

# Delete a scheduled job
>>> from huggingface_hub import delete_scheduled_job
>>> delete_scheduled_job(scheduled_job_id)
```

### Trigger Jobs with webhooks

Webhooks allow you to listen for new changes on specific repos or to all repos belonging to a particular set of users/organizations (not just your repos, but any repo).

Use [create_webhook()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_webhook) to create a webhook that triggers a Job when a change happens in a Hugging Face repository:

```python
from huggingface_hub import create_webhook

# Example: Creating a webhook that triggers a Job
webhook = create_webhook(
    job_id=job_id,
    watched=[{"type": "user", "name": "your-username"}, {"type": "org", "name": "your-org-name"}],
    domains=["repo", "discussion"],
    secret="your-secret"
)
```

The webhook triggers the Job with the webhook payload in the environment variable `WEBHOOK_PAYLOAD`.
You can find more information on webhooks in the [Webhooks documentation](./webhooks).
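
Inside the triggered Job, the payload can be read back from that environment variable. A minimal sketch (the `repo.name` field is shown only as an example; see the Webhooks documentation for the actual payload schema):

```python
# Read the JSON payload the webhook passes to the triggered Job.
import json
import os

def read_webhook_payload(env=os.environ) -> dict:
    """Parse the WEBHOOK_PAYLOAD environment variable, or return {} if absent."""
    return json.loads(env.get("WEBHOOK_PAYLOAD", "{}"))

payload = read_webhook_payload()
repo_name = payload.get("repo", {}).get("name", "unknown")
print(f"Event on repo: {repo_name}")
```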


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/guides/jobs.md" />

### Webhooks
https://huggingface.co/docs/huggingface_hub/main/guides/webhooks.md

# Webhooks

Webhooks are a foundation for MLOps-related features. They allow you to listen for new changes on specific repos or to all repos belonging to particular users/organizations you're interested in following. This guide will first explain how to manage webhooks programmatically. Then we'll see how to leverage `huggingface_hub` to create a server listening to webhooks and deploy it to a Space.

This guide assumes you are familiar with the concept of webhooks on the Hugging Face Hub. To learn more about webhooks themselves, you should read this [guide](https://huggingface.co/docs/hub/webhooks) first.

## Managing Webhooks

`huggingface_hub` allows you to manage your webhooks programmatically. You can list your existing webhooks, create new ones, and update, enable, disable or delete them. This section guides you through the procedures using the Hugging Face Hub's API functions.

### Creating a Webhook

To create a new webhook, use [create_webhook()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_webhook) and specify the URL where payloads should be sent, what events should be watched, and optionally set a domain and a secret for security.

```python
from huggingface_hub import create_webhook

# Example: Creating a webhook
webhook = create_webhook(
    url="https://webhook.site/your-custom-url",
    watched=[{"type": "user", "name": "your-username"}, {"type": "org", "name": "your-org-name"}],
    domains=["repo", "discussion"],
    secret="your-secret"
)
```

A webhook can also trigger a Job to run on Hugging Face infrastructure instead of sending the payload to a URL.
In this case, you need to pass the ID of a source Job.

```python
from huggingface_hub import create_webhook

# Example: Creating a webhook that triggers a Job
webhook = create_webhook(
    job_id=job_id,
    watched=[{"type": "user", "name": "your-username"}, {"type": "org", "name": "your-org-name"}],
    domains=["repo", "discussion"],
    secret="your-secret"
)
```

The webhook triggers the Job with the webhook payload in the environment variable `WEBHOOK_PAYLOAD`.
For more information on Hugging Face Jobs, available hardware (CPU, GPU) and UV scripts, see the [Jobs documentation](./jobs).

### Listing Webhooks

To see all the webhooks you have configured, you can list them with [list_webhooks()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.list_webhooks). This is useful to review their IDs, URLs, and statuses.

```python
from huggingface_hub import list_webhooks

# Example: Listing all webhooks
webhooks = list_webhooks()
for webhook in webhooks:
    print(webhook)
```

### Updating a Webhook

If you need to change the configuration of an existing webhook, such as the URL or the events it watches, you can update it using [update_webhook()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.update_webhook).

```python
from huggingface_hub import update_webhook

# Example: Updating a webhook
updated_webhook = update_webhook(
    webhook_id="your-webhook-id",
    url="https://new.webhook.site/url",
    watched=[{"type": "user", "name": "new-username"}],
    domains=["repo"]
)
```

### Enabling and Disabling Webhooks

You might want to temporarily disable a webhook without deleting it. This can be done using [disable_webhook()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.disable_webhook), and the webhook can be re-enabled later with [enable_webhook()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.enable_webhook).

```python
from huggingface_hub import enable_webhook, disable_webhook

# Example: Enabling a webhook
enabled_webhook = enable_webhook("your-webhook-id")
print("Enabled:", enabled_webhook)

# Example: Disabling a webhook
disabled_webhook = disable_webhook("your-webhook-id")
print("Disabled:", disabled_webhook)
```

### Deleting a Webhook

When a webhook is no longer needed, it can be permanently deleted using [delete_webhook()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.delete_webhook).

```python
from huggingface_hub import delete_webhook

# Example: Deleting a webhook
delete_webhook("your-webhook-id")
```

## Webhooks Server

The base class that we will use in this guide is [WebhooksServer()](/docs/huggingface_hub/main/en/package_reference/webhooks_server#huggingface_hub.WebhooksServer). It is a class for easily configuring a server that
can receive webhooks from the Hugging Face Hub. The server is based on a [Gradio](https://gradio.app/) app. It has a UI
to display instructions for you or your users and an API to listen to webhooks.

> [!TIP]
> To see a running example of a webhook server, check out the [Spaces CI Bot](https://huggingface.co/spaces/spaces-ci-bot/webhook)
> one. It is a Space that launches ephemeral environments when a PR is opened on a Space.

> [!WARNING]
> This is an [experimental feature](../package_reference/environment_variables#hfhubdisableexperimentalwarning). This
> means that we are still working on improving the API. Breaking changes might be introduced in the future without prior
> notice. Make sure to pin the version of `huggingface_hub` in your requirements.


### Create an endpoint

Implementing a webhook endpoint is as simple as decorating a function. Let's see a first example to explain the main
concepts:

```python
# app.py
from huggingface_hub import webhook_endpoint, WebhookPayload

@webhook_endpoint
async def trigger_training(payload: WebhookPayload) -> None:
    if payload.repo.type == "dataset" and payload.event.action == "update":
        # Trigger a training job if a dataset is updated
        ...
```

Save this snippet in a file called `app.py` and run it with `python app.py`. You should see a message like this:

```text
Webhook secret is not defined. This means your webhook endpoints will be open to everyone.
To add a secret, set `WEBHOOK_SECRET` as environment variable or pass it at initialization:
        `app = WebhooksServer(webhook_secret='my_secret', ...)`
For more details about webhook secrets, please refer to https://huggingface.co/docs/hub/webhooks#webhook-secret.
Running on local URL:  http://127.0.0.1:7860
Running on public URL: https://1fadb0f52d8bf825fc.gradio.live

This share link expires in 72 hours. For free permanent hosting and GPU upgrades (NEW!), check out Spaces: https://huggingface.co/spaces

Webhooks are correctly setup and ready to use:
  - POST https://1fadb0f52d8bf825fc.gradio.live/webhooks/trigger_training
Go to https://huggingface.co/settings/webhooks to setup your webhooks.
```

Good job! You just launched a webhook server! Let's break down what happened exactly:

1. By decorating a function with [webhook_endpoint()](/docs/huggingface_hub/main/en/package_reference/webhooks_server#huggingface_hub.webhook_endpoint), a [WebhooksServer()](/docs/huggingface_hub/main/en/package_reference/webhooks_server#huggingface_hub.WebhooksServer) object has been created in the background.
As you can see, this server is a Gradio app running on http://127.0.0.1:7860. If you open this URL in your browser, you
will see a landing page with instructions about the registered webhooks.
2. A Gradio app is a FastAPI server under the hood. A new POST route `/webhooks/trigger_training` has been added to it.
This is the route that will listen to webhooks and run the `trigger_training` function when triggered. FastAPI will
automatically parse the payload and pass it to the function as a [WebhookPayload](/docs/huggingface_hub/main/en/package_reference/webhooks_server#huggingface_hub.WebhookPayload) object. This is a `pydantic` object
that contains all the information about the event that triggered the webhook.
3. The Gradio app also opened a tunnel to receive requests from the internet. This is the interesting part: you can
configure a Webhook on https://huggingface.co/settings/webhooks pointing to your local machine. This is useful for
debugging your webhook server and quickly iterating before deploying it to a Space.
4. Finally, the logs also tell you that your server is currently not secured by a secret. This is not problematic for
local debugging, but something to keep in mind for later.
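To make the filtering condition concrete, here is the same check applied to a plain dict (a sketch only: the real endpoint receives a parsed `WebhookPayload` object, but the field layout mirrors the webhook payload JSON):

```python
def should_trigger_training(payload: dict) -> bool:
    """Mirror of the example endpoint's condition, on a raw payload dict."""
    return (
        payload.get("repo", {}).get("type") == "dataset"
        and payload.get("event", {}).get("action") == "update"
    )

# Made-up payloads following the shape used above
assert should_trigger_training({"repo": {"type": "dataset"}, "event": {"action": "update"}})
assert not should_trigger_training({"repo": {"type": "model"}, "event": {"action": "update"}})
```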

> [!WARNING]
> By default, the server is started at the end of your script. If you are running it in a notebook, you can start the
> server manually by calling `decorated_function.run()`. Since a single server is used, you only have to start it
> once even if you have multiple endpoints.


### Configure a Webhook

Now that you have a webhook server running, you want to configure a Webhook to start receiving messages.
Go to https://huggingface.co/settings/webhooks, click on "Add a new webhook" and configure your Webhook. Set the target
repositories you want to watch and the Webhook URL, here `https://1fadb0f52d8bf825fc.gradio.live/webhooks/trigger_training`.

<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/configure_webhook.png"/>
</div>

And that's it! You can now trigger that webhook by updating the target repository (e.g. push a commit). Check the
Activity tab of your Webhook to see the events that have been triggered. Now that you have a working setup, you can
test it and quickly iterate. If you modify your code and restart the server, your public URL might change. Make sure
to update the webhook configuration on the Hub if needed.

### Deploy to a Space

Now that you have a working webhook server, the goal is to deploy it to a Space. Go to https://huggingface.co/new-space
to create a Space. Give it a name, select the Gradio SDK and click on "Create Space". Upload your code to the Space
in a file called `app.py`. Your Space will start automatically! For more details about Spaces, please refer to this
[guide](https://huggingface.co/docs/hub/spaces-overview).

Your webhook server is now running on a public Space. In most cases, you will want to secure it with a secret. Go to
your Space settings > Section "Repository secrets" > "Add a secret". Set the `WEBHOOK_SECRET` environment variable to
the value of your choice. Go back to the [Webhooks settings](https://huggingface.co/settings/webhooks) and set the
secret in the webhook configuration. Now, only requests with the correct secret will be accepted by your server.

And this is it! Your Space is now ready to receive webhooks from the Hub. Keep in mind that if you run the Space
on the free 'cpu-basic' hardware, it will be shut down after 48 hours of inactivity. If you need a permanent Space, you
should consider switching to [upgraded hardware](https://huggingface.co/docs/hub/spaces-gpus#hardware-specs).

### Advanced usage

The guide above explained the quickest way to set up a [WebhooksServer()](/docs/huggingface_hub/main/en/package_reference/webhooks_server#huggingface_hub.WebhooksServer). In this section, we will see how to customize
it further.

#### Multiple endpoints

You can register multiple endpoints on the same server. For example, you might want to have one endpoint to trigger
a training job and another one to trigger a model evaluation. You can do this by adding multiple `@webhook_endpoint`
decorators:

```python
# app.py
from huggingface_hub import webhook_endpoint, WebhookPayload

@webhook_endpoint
async def trigger_training(payload: WebhookPayload) -> None:
    if payload.repo.type == "dataset" and payload.event.action == "update":
        # Trigger a training job if a dataset is updated
        ...

@webhook_endpoint
async def trigger_evaluation(payload: WebhookPayload) -> None:
    if payload.repo.type == "model" and payload.event.action == "update":
        # Trigger an evaluation job if a model is updated
        ...
```

This will create two endpoints:

```text
(...)
Webhooks are correctly setup and ready to use:
  - POST https://1fadb0f52d8bf825fc.gradio.live/webhooks/trigger_training
  - POST https://1fadb0f52d8bf825fc.gradio.live/webhooks/trigger_evaluation
```

#### Custom server

To get more flexibility, you can also create a [WebhooksServer()](/docs/huggingface_hub/main/en/package_reference/webhooks_server#huggingface_hub.WebhooksServer) object directly. This is useful if you want to
customize the landing page of your server. You can do this by passing a [Gradio UI](https://gradio.app/docs/#blocks)
that will overwrite the default one. For example, you can add instructions for your users or add a form to manually
trigger the webhooks. When creating a [WebhooksServer()](/docs/huggingface_hub/main/en/package_reference/webhooks_server#huggingface_hub.WebhooksServer), you can register new webhooks using the
`add_webhook()` decorator.

Here is a complete example:

```python
import gradio as gr
from fastapi import Request
from huggingface_hub import WebhooksServer, WebhookPayload

# 1. Define the UI
with gr.Blocks() as ui:
    ...

# 2. Create WebhooksServer with custom UI and secret
app = WebhooksServer(ui=ui, webhook_secret="my_secret_key")

# 3. Register webhook with explicit name
@app.add_webhook("/say_hello")
async def hello(payload: WebhookPayload):
    return {"message": "hello"}

# 4. Register webhook with implicit name
@app.add_webhook
async def goodbye(payload: WebhookPayload):
    return {"message": "goodbye"}

# 5. Start server (optional)
app.run()
```

1. We define a custom UI using Gradio blocks. This UI will be displayed on the landing page of the server.
2. We create a [WebhooksServer()](/docs/huggingface_hub/main/en/package_reference/webhooks_server#huggingface_hub.WebhooksServer) object with a custom UI and a secret. The secret is optional and can be set with
the `WEBHOOK_SECRET` environment variable.
3. We register a webhook with an explicit name. This will create an endpoint at `/webhooks/say_hello`.
4. We register a webhook with an implicit name. This will create an endpoint at `/webhooks/goodbye`.
5. We start the server. This is optional as your server will automatically be started at the end of the script.


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/guides/webhooks.md" />

### Manage your Space
https://huggingface.co/docs/huggingface_hub/main/guides/manage-spaces.md

# Manage your Space

In this guide, we will see how to manage your Space runtime
([secrets](https://huggingface.co/docs/hub/spaces-overview#managing-secrets),
[hardware](https://huggingface.co/docs/hub/spaces-gpus), and [storage](https://huggingface.co/docs/hub/spaces-storage#persistent-storage)) using `huggingface_hub`.

## A simple example: configure secrets and hardware

Here is an end-to-end example to create and set up a Space on the Hub.

**1. Create a Space on the Hub.**

```py
>>> from huggingface_hub import HfApi
>>> repo_id = "Wauplin/my-cool-training-space"
>>> api = HfApi()

# For example with a Gradio SDK
>>> api.create_repo(repo_id=repo_id, repo_type="space", space_sdk="gradio")
```

**1. (bis) Duplicate a Space.**

This can prove useful if you want to build up from an existing Space instead of starting from scratch.
It is also useful if you want control over the configuration/settings of a public Space. See [duplicate_space()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.duplicate_space) for more details.

```py
>>> api.duplicate_space("multimodalart/dreambooth-training")
```

**2. Upload your code using your preferred solution.**

Here is an example to upload the local folder `src/` from your machine to your Space:

```py
>>> api.upload_folder(repo_id=repo_id, repo_type="space", folder_path="src/")
```

At this step, your app should already be running on the Hub for free!
However, you might want to configure it further with secrets and upgraded hardware.

**3. Configure secrets and variables**

Your Space might require some secret keys, tokens or variables to work.
See [docs](https://huggingface.co/docs/hub/spaces-overview#managing-secrets) for more details.
For example, an HF token to upload an image dataset to the Hub once generated from your Space.

```py
>>> api.add_space_secret(repo_id=repo_id, key="HF_TOKEN", value="hf_api_***")
>>> api.add_space_variable(repo_id=repo_id, key="MODEL_REPO_ID", value="user/repo")
```

Secrets and variables can be deleted as well:
```py
>>> api.delete_space_secret(repo_id=repo_id, key="HF_TOKEN")
>>> api.delete_space_variable(repo_id=repo_id, key="MODEL_REPO_ID")
```

> [!TIP]
> From within your Space, secrets are available as environment variables (or
> Streamlit Secrets Management if using Streamlit). No need to fetch them via the API!
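For example (a sketch: the variable name matches the `HF_TOKEN` secret configured above; outside a Space it will simply be unset):

```python
import os

# Inside the Space, the `HF_TOKEN` secret set above is injected as an environment variable.
hf_token = os.environ.get("HF_TOKEN")  # None when running outside the Space
if hf_token is None:
    print("HF_TOKEN is not set - are you running outside the Space?")
```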

> [!WARNING]
> Any change in your Space configuration (secrets or hardware) will trigger a restart of your app.

**Bonus: set secrets and variables when creating or duplicating the Space!**

Secrets and variables can be set when creating or duplicating a space:

```py
>>> api.create_repo(
...     repo_id=repo_id,
...     repo_type="space",
...     space_sdk="gradio",
...     space_secrets=[{"key": "HF_TOKEN", "value": "hf_api_***"}, ...],
...     space_variables=[{"key": "MODEL_REPO_ID", "value": "user/repo"}, ...],
... )
```

```py
>>> api.duplicate_space(
...     from_id=repo_id,
...     secrets=[{"key": "HF_TOKEN", "value": "hf_api_***"}, ...],
...     variables=[{"key": "MODEL_REPO_ID", "value": "user/repo"}, ...],
... )
```

**4. Configure the hardware**

By default, your Space will run on a CPU environment for free. You can upgrade the hardware
to run it on GPUs. A payment card or a community grant is required to upgrade your
Space. See [docs](https://huggingface.co/docs/hub/spaces-gpus) for more details.

```py
# Use `SpaceHardware` enum
>>> from huggingface_hub import SpaceHardware
>>> api.request_space_hardware(repo_id=repo_id, hardware=SpaceHardware.T4_MEDIUM)

# Or simply pass a string value
>>> api.request_space_hardware(repo_id=repo_id, hardware="t4-medium")
```

Hardware updates are not done immediately as your Space has to be reloaded on our servers.
At any time, you can check on which hardware your Space is running to see if your request
has been met.

```py
>>> runtime = api.get_space_runtime(repo_id=repo_id)
>>> runtime.stage
"RUNNING_BUILDING"
>>> runtime.hardware
"cpu-basic"
>>> runtime.requested_hardware
"t4-medium"
```
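Since the upgrade is asynchronous, a small polling helper can wait until the requested hardware is live. This is a sketch: `wait_for_hardware` is a hypothetical helper, `api` is an `HfApi` instance as above, and the attribute names match `get_space_runtime()`'s return value.

```python
import time

def wait_for_hardware(api, repo_id, expected="t4-medium", interval=10, timeout=600):
    """Poll the Space runtime until it reports the expected hardware."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        runtime = api.get_space_runtime(repo_id=repo_id)
        if runtime.hardware == expected:
            return runtime
        time.sleep(interval)
    raise TimeoutError(f"{repo_id} is still not running on {expected}")
```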

You now have a Space fully configured. Make sure to downgrade your Space back to "cpu-basic"
when you are done using it.

**Bonus: request hardware when creating or duplicating the Space!**

Upgraded hardware will be automatically assigned to your Space once it's built.

```py
>>> api.create_repo(
...     repo_id=repo_id,
...     repo_type="space",
...     space_sdk="gradio",
...     space_hardware="cpu-upgrade",
...     space_storage="small",
...     space_sleep_time="7200", # 2 hours in secs
... )
```
```py
>>> api.duplicate_space(
...     from_id=repo_id,
...     hardware="cpu-upgrade",
...     storage="small",
...     sleep_time="7200", # 2 hours in secs
... )
```

**5. Pause and restart your Space**

By default, if your Space is running on upgraded hardware, it will never be stopped. However, to avoid getting billed,
you might want to pause it when you are not using it. This is possible using [pause_space()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.pause_space). A paused Space will be
inactive until the owner of the Space restarts it, either with the UI or via API using [restart_space()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.restart_space).
For more details about paused mode, please refer to [this section](https://huggingface.co/docs/hub/spaces-gpus#pause).

```py
# Pause your Space to avoid getting billed
>>> api.pause_space(repo_id=repo_id)
# (...)
# Restart it when you need it
>>> api.restart_space(repo_id=repo_id)
```

Another possibility is to set a timeout for your Space. If your Space is inactive for more than the timeout duration,
it will go to sleep. Any visitor landing on your Space will start it back up. You can set a timeout using
[set_space_sleep_time()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.set_space_sleep_time). For more details about sleeping mode, please refer to [this section](https://huggingface.co/docs/hub/spaces-gpus#sleep-time).

```py
# Put your Space to sleep after 1h of inactivity
>>> api.set_space_sleep_time(repo_id=repo_id, sleep_time=3600)
```

Note: if you are using a 'cpu-basic' hardware, you cannot configure a custom sleep time. Your Space will automatically
be paused after 48h of inactivity.

**Bonus: set a sleep time while requesting hardware**

```py
>>> api.request_space_hardware(repo_id=repo_id, hardware=SpaceHardware.T4_MEDIUM, sleep_time=3600)
```

**Bonus: set a sleep time when creating or duplicating the Space!**

```py
>>> api.create_repo(
...     repo_id=repo_id,
...     repo_type="space",
...     space_sdk="gradio",
...     space_hardware="t4-medium",
...     space_sleep_time="3600",
... )
```
```py
>>> api.duplicate_space(
...     from_id=repo_id,
...     hardware="t4-medium",
...     sleep_time="3600",
... )
```

**6. Add persistent storage to your Space**

You can choose the storage tier of your choice to access disk space that persists across restarts of your Space. This means you can read and write from disk like you would with a traditional hard drive. See [docs](https://huggingface.co/docs/hub/spaces-storage#persistent-storage) for more details.

```py
>>> from huggingface_hub import SpaceStorage
>>> api.request_space_storage(repo_id=repo_id, storage=SpaceStorage.LARGE)
```

You can also delete your storage, losing all the data permanently.
```py
>>> api.delete_space_storage(repo_id=repo_id)
```

Note: You cannot decrease the storage tier of your Space once it has been granted. To do so,
you must delete the storage first and then request the new desired tier.
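The two calls can be combined into a small helper (a sketch: `downgrade_storage` is a hypothetical name, and `api` is an `HfApi` instance as in the examples above):

```python
def downgrade_storage(api, repo_id: str, new_tier: str = "small"):
    """Storage tiers cannot be decreased in place: delete the storage, then request the smaller tier."""
    api.delete_space_storage(repo_id=repo_id)  # warning: permanently deletes existing data
    api.request_space_storage(repo_id=repo_id, storage=new_tier)
```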

**Bonus: request storage when creating or duplicating the Space!**

```py
>>> api.create_repo(
...     repo_id=repo_id,
...     repo_type="space",
...     space_sdk="gradio",
...     space_storage="large",
... )
```
```py
>>> api.duplicate_space(
...     from_id=repo_id,
...     storage="large",
... )
```

## More advanced: temporarily upgrade your Space!

Spaces allow for a lot of different use cases. Sometimes, you might want
to temporarily run a Space on specific hardware, do something, and then shut it down. In
this section, we will explore how to benefit from Spaces to finetune a model on demand.
This is only one way of solving this particular problem. It should be taken as a suggestion
and adapted to your use case.

Let's assume we have a Space to finetune a model. It is a Gradio app that takes as input
a model id and a dataset id. The workflow is as follows:

0. (Prompt the user for a model and a dataset)
1. Load the model from the Hub.
2. Load the dataset from the Hub.
3. Finetune the model on the dataset.
4. Upload the new model to the Hub.

Step 3 requires custom hardware, but you don't want your Space to be running all the time on a paid
GPU. A solution is to dynamically request hardware for the training and shut it
down afterwards. Since requesting hardware restarts your Space, your app must somehow "remember"
the current task it is performing. There are multiple ways of doing this. In this guide
we will see one solution using a Dataset as "task scheduler".

### App skeleton

Here is what your app would look like. On startup, it checks whether a task is scheduled and, if so,
runs it on the correct hardware. Once done, it sets the hardware back to the free-plan CPU and
prompts the user for a new task.

> [!WARNING]
> Such a workflow does not support concurrent access as normal demos do.
> In particular, the interface will be disabled when training occurs.
> It is preferable to set your repo as private to ensure you are the only user.

```py
import os

import gradio as gr
from huggingface_hub import HfApi, SpaceHardware

# Space will need your token to request hardware: set it as a Secret!
HF_TOKEN = os.environ.get("HF_TOKEN")

# Space own repo_id
TRAINING_SPACE_ID = "Wauplin/dreambooth-training"

api = HfApi(token=HF_TOKEN)

# On Space startup, check if a task is scheduled. If yes, finetune the model. If not,
# display an interface to request a new task.
task = get_task()
if task is None:
    # Start Gradio app
    def gradio_fn(task):
        # On user request, add task and request hardware
        add_task(task)
        api.request_space_hardware(repo_id=TRAINING_SPACE_ID, hardware=SpaceHardware.T4_MEDIUM)

    gr.Interface(fn=gradio_fn, ...).launch()
else:
    runtime = api.get_space_runtime(repo_id=TRAINING_SPACE_ID)
    # Check if Space is loaded with a GPU.
    if runtime.hardware == SpaceHardware.T4_MEDIUM:
        # If yes, finetune base model on dataset !
        train_and_upload(task)

        # Then, mark the task as "DONE"
        mark_as_done(task)

        # DO NOT FORGET: set back CPU hardware
        api.request_space_hardware(repo_id=TRAINING_SPACE_ID, hardware=SpaceHardware.CPU_BASIC)
    else:
        api.request_space_hardware(repo_id=TRAINING_SPACE_ID, hardware=SpaceHardware.T4_MEDIUM)
```

### Task scheduler

Scheduling tasks can be done in many ways. Here is an example of how it could be done using
a simple CSV file stored as a Dataset.

```py
import csv

from huggingface_hub import hf_hub_download

# Dataset ID in which a `tasks.csv` file contains the tasks to perform.
# Here is a basic example for `tasks.csv` containing inputs (base model and dataset)
# and status (PENDING or DONE).
#     multimodalart/sd-fine-tunable,Wauplin/concept-1,DONE
#     multimodalart/sd-fine-tunable,Wauplin/concept-2,PENDING
TASK_DATASET_ID = "Wauplin/dreambooth-task-scheduler"

def _get_csv_file():
    return hf_hub_download(repo_id=TASK_DATASET_ID, filename="tasks.csv", repo_type="dataset", token=HF_TOKEN)

def get_task():
    with open(_get_csv_file()) as csv_file:
        csv_reader = csv.reader(csv_file, delimiter=',')
        for row in csv_reader:
            if row[2] == "PENDING":
                return row[0], row[1] # model_id, dataset_id

def add_task(task):
    model_id, dataset_id = task
    with open(_get_csv_file()) as f:
        tasks = f.read()

    api.upload_file(
        repo_id=TASK_DATASET_ID,
        repo_type="dataset",
        path_in_repo="tasks.csv",
        # Quick and dirty way to add a task
        path_or_fileobj=(tasks + f"\n{model_id},{dataset_id},PENDING").encode()
    )

def mark_as_done(task):
    model_id, dataset_id = task
    with open(_get_csv_file()) as f:
        tasks = f.read()

    api.upload_file(
        repo_id=TASK_DATASET_ID,
        repo_type="dataset",
        path_in_repo="tasks.csv",
        # Quick and dirty way to set the task as DONE
        path_or_fileobj=tasks.replace(
            f"{model_id},{dataset_id},PENDING",
            f"{model_id},{dataset_id},DONE"
        ).encode()
    )
```


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/guides/manage-spaces.md" />

### Run Inference on servers
https://huggingface.co/docs/huggingface_hub/main/guides/inference.md

# Run Inference on servers

Inference is the process of using a trained model to make predictions on new data. Because this process can be compute-intensive, running on a dedicated or external service can be an interesting option.
The `huggingface_hub` library provides a unified interface to run inference across multiple services for models hosted on the Hugging Face Hub:

1.  [Inference Providers](https://huggingface.co/docs/inference-providers/index): a streamlined, unified access to hundreds of machine learning models, powered by our serverless inference partners. This new approach builds on our previous Serverless Inference API, offering more models, improved performance, and greater reliability thanks to world-class providers. Refer to the [documentation](https://huggingface.co/docs/inference-providers/index#partners) for a list of supported providers.
2.  [Inference Endpoints](https://huggingface.co/docs/inference-endpoints/index): a product to easily deploy models to production. Inference is run by Hugging Face in a dedicated, fully managed infrastructure on a cloud provider of your choice.
3.  Local endpoints: you can also run inference with local inference servers like [llama.cpp](https://github.com/ggerganov/llama.cpp), [Ollama](https://ollama.com/), [vLLM](https://github.com/vllm-project/vllm), [LiteLLM](https://docs.litellm.ai/docs/simple_proxy), or [Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) by connecting the client to these local endpoints.

> [!TIP]
> [InferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient) is a Python client making HTTP calls to our APIs. If you want to make the HTTP calls directly using
> your preferred tool (curl, postman,...), please refer to the [Inference Providers](https://huggingface.co/docs/inference-providers/index) documentation
> or to the [Inference Endpoints](https://huggingface.co/docs/inference-endpoints/index) documentation pages.
>
> For web development, a [JS client](https://huggingface.co/docs/huggingface.js/inference/README) has been released.
> If you are interested in game development, you might have a look at our [C# project](https://github.com/huggingface/unity-api).

## Getting started

Let's get started with a text-to-image task:

```python
>>> from huggingface_hub import InferenceClient

# Example with an external provider (e.g. replicate)
>>> replicate_client = InferenceClient(
    provider="replicate",
    api_key="my_replicate_api_key",
)
>>> replicate_image = replicate_client.text_to_image(
    "A flying car crossing a futuristic cityscape.",
    model="black-forest-labs/FLUX.1-schnell",
)
>>> replicate_image.save("flying_car.png")
```

In the example above, we initialized an [InferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient) with a third-party provider, [Replicate](https://replicate.com/). When using a provider, you must specify the model you want to use. The model id must be the id of the model on the Hugging Face Hub, not the id of the model from the third-party provider.
In our example, we generated an image from a text prompt. The returned value is a `PIL.Image` object that can be saved to a file. For more details, check out the [text_to_image()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.text_to_image) documentation.

Let's now see an example using the [chat_completion()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.chat_completion) API. This task uses an LLM to generate a response from a list of messages:

```python
>>> from huggingface_hub import InferenceClient
>>> messages = [
    {
        "role": "user",
        "content": "What is the capital of France?",
    }
]
>>> client = InferenceClient(
    provider="together",
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    api_key="my_together_api_key",
)
>>> client.chat_completion(messages, max_tokens=100)
ChatCompletionOutput(
    choices=[
        ChatCompletionOutputComplete(
            finish_reason="eos_token",
            index=0,
            message=ChatCompletionOutputMessage(
                role="assistant", content="The capital of France is Paris.", name=None, tool_calls=None
            ),
            logprobs=None,
        )
    ],
    created=1719907176,
    id="",
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    object="text_completion",
    system_fingerprint="2.0.4-sha-f426a33",
    usage=ChatCompletionOutputUsage(completion_tokens=8, prompt_tokens=17, total_tokens=25),
)
```

In the example above, we used a third-party provider ([Together AI](https://www.together.ai/)) and specified which model we want to use (`"meta-llama/Meta-Llama-3-8B-Instruct"`). We then gave a list of messages to complete (here, a single question) and passed an additional parameter to the API (`max_tokens=100`). The output is a `ChatCompletionOutput` object that follows the OpenAI specification. The generated content can be accessed with `output.choices[0].message.content`. For more details, check out the [chat_completion()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.chat_completion) documentation.
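To illustrate that access path without calling the API, here is the same navigation on a stand-in object built with `types.SimpleNamespace` (a sketch mirroring the nesting printed above, not the library's real dataclasses):

```python
from types import SimpleNamespace

# Stand-in with the same nesting as the ChatCompletionOutput shown above
output = SimpleNamespace(
    choices=[
        SimpleNamespace(
            message=SimpleNamespace(role="assistant", content="The capital of France is Paris.")
        )
    ]
)
print(output.choices[0].message.content)  # The capital of France is Paris.
```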


> [!WARNING]
> The API is designed to be simple. Not all parameters and options are available or described for the end user. Check out
> [this page](https://huggingface.co/docs/api-inference/detailed_parameters) if you are interested in learning more about
> all the parameters available for each task.

### Using a specific provider

If you want to use a specific provider, you can specify it when initializing the client. The default value is "auto", which selects the first available provider for the model, sorted by your preferred order in https://hf.co/settings/inference-providers. Refer to the [Supported providers and tasks](#supported-providers-and-tasks) section for a list of supported providers.

```python
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(provider="replicate", api_key="my_replicate_api_key")
```

### Using a specific model

What if you want to use a specific model? You can specify it either as a parameter or directly at an instance level:

```python
>>> from huggingface_hub import InferenceClient
# Initialize client for a specific model
>>> client = InferenceClient(provider="together", model="meta-llama/Llama-3.1-8B-Instruct")
>>> client.text_to_image(...)
# Or use a generic client but pass your model as an argument
>>> client = InferenceClient(provider="together")
>>> client.text_to_image(..., model="meta-llama/Llama-3.1-8B-Instruct")
```

> [!TIP]
> When using the "hf-inference" provider, each task comes with a recommended model from the 1M+ models available on the Hub.
> However, this recommendation can change over time, so it's best to explicitly set a model once you've decided which one to use.
> For third-party providers, you must always specify a model that is compatible with that provider.
>
> Visit the [Models](https://huggingface.co/models?inference=warm) page on the Hub to explore models available through Inference Providers.

### Using Inference Endpoints

The examples we saw above use Inference Providers. While these are very useful for prototyping
and testing things quickly, once you're ready to deploy your model to production, you'll need to use dedicated infrastructure.
That's where [Inference Endpoints](https://huggingface.co/docs/inference-endpoints/index) comes into play. It allows you to deploy
any model and expose it as a private API. Once deployed, you'll get a URL that you can connect to using exactly the same
code as before, changing only the `model` parameter:

```python
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(model="https://uu149rez6gw9ehej.eu-west-1.aws.endpoints.huggingface.cloud/deepfloyd-if")
# or
>>> client = InferenceClient()
>>> client.text_to_image(..., model="https://uu149rez6gw9ehej.eu-west-1.aws.endpoints.huggingface.cloud/deepfloyd-if")
```

Note that you cannot specify both a URL and a provider - they are mutually exclusive. URLs are used to connect directly to deployed endpoints.

### Using local endpoints

You can use [InferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient) to run chat completion with local inference servers (llama.cpp, vllm, litellm server, TGI, mlx, etc.) running on your own machine. The API should be OpenAI API-compatible.

```python
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(model="http://localhost:8080")

>>> response = client.chat.completions.create(
...     messages=[
...         {"role": "user", "content": "What is the capital of France?"}
...     ],
...     max_tokens=100
... )
>>> print(response.choices[0].message.content)
```

> [!TIP]
> Similarly to the OpenAI Python client, [InferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient) can be used to run Chat Completion inference with any OpenAI REST API-compatible endpoint.

### Authentication

Authentication can be done in two ways:

**Routed through Hugging Face**: Use Hugging Face as a proxy to access third-party providers. The calls will be routed through Hugging Face's infrastructure using our provider keys, and the usage will be billed directly to your Hugging Face account.

You can authenticate using a [User Access Token](https://huggingface.co/docs/hub/security-tokens). You can provide your Hugging Face token directly using the `api_key` parameter:

```python
>>> client = InferenceClient(
    provider="replicate",
    api_key="hf_****"  # Your HF token
)
```

If you *don't* pass an `api_key`, the client will attempt to find and use a token stored locally on your machine. This typically happens if you've previously logged in. See the [Authentication Guide](https://huggingface.co/docs/huggingface_hub/quick-start#authentication) for details on login. Alternatively, you can pass your token via the `token` parameter, for which `api_key` is an alias:

```python
>>> client = InferenceClient(
    provider="replicate",
    token="hf_****"  # Your HF token
)
```

**Direct access to provider**: Use your own API key to interact directly with the provider's service:
```python
>>> client = InferenceClient(
    provider="replicate",
    api_key="r8_****"  # Your Replicate API key
)
```

For more details, refer to the [Inference Providers pricing documentation](https://huggingface.co/docs/inference-providers/pricing#routed-requests-vs-direct-calls).

## Supported providers and tasks

[InferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient)'s goal is to provide the easiest interface to run inference on Hugging Face models, on any provider. It has a simple API that supports the most common tasks. Here is a table showing which providers support which tasks:

| Task                                                | Black Forest Labs | Cerebras | Clarifai | Cohere | fal-ai | Featherless AI | Fireworks AI | Groq | HF Inference | Hyperbolic | Nebius AI Studio | Novita AI | Nscale | Public AI | Replicate | Sambanova | Scaleway | Together | Wavespeed | Zai |
| --------------------------------------------------- | ----------------- | -------- | -------- | ------ | ------ | -------------- | ------------ | ---- | ------------ | ---------- | ---------------- | --------- | ------ | ---------- | --------- | --------- | --------- | -------- | --------- | ---- |
| [audio_classification()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.audio_classification)           | ❌                 | ❌        | ❌        | ❌      | ❌      | ❌              | ❌            | ❌    | ✅            | ❌          | ❌                | ❌         | ❌         | ❌         | ❌        | ❌      | ❌          | ❌         | ❌         | ❌   |
| [audio_to_audio()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.audio_to_audio)                 | ❌                 | ❌        | ❌        | ❌      | ❌      | ❌              | ❌            | ❌    | ✅            | ❌          | ❌                | ❌         | ❌         | ❌         | ❌        | ❌      | ❌          | ❌         | ❌         | ❌   |
| [automatic_speech_recognition()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.automatic_speech_recognition)   | ❌                 | ❌        | ❌        | ❌      | ✅      | ❌              | ❌            | ❌    | ✅            | ❌          | ❌                | ❌         | ❌         | ❌         | ❌        | ❌      | ❌          | ❌         | ❌         | ❌   |
| [chat_completion()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.chat_completion)                | ❌                 | ✅        | ✅        | ✅      | ❌      | ✅              | ✅            | ✅    | ✅            | ✅          | ✅                | ✅         | ✅      | ✅          | ❌         | ✅         | ✅         | ✅        | ❌         | ✅   |
| [document_question_answering()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.document_question_answering)    | ❌                 | ❌        | ❌        | ❌      | ❌      | ❌              | ❌            | ❌    | ✅            | ❌          | ❌                | ❌         | ❌         | ❌         | ❌        | ❌      | ❌          | ❌         | ❌         | ❌   |
| [feature_extraction()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.feature_extraction)             | ❌                 | ❌        | ❌        | ❌      | ❌      | ❌              | ❌            | ❌    | ✅            | ❌          | ✅                | ❌         | ❌      | ❌          | ❌         | ✅         | ✅         | ❌        | ❌         | ❌   |
| [fill_mask()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.fill_mask)                      | ❌                 | ❌        | ❌        | ❌      | ❌      | ❌              | ❌            | ❌    | ✅            | ❌          | ❌                | ❌         | ❌         | ❌         | ❌        | ❌      | ❌          | ❌         | ❌         | ❌   |
| [image_classification()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.image_classification)           | ❌                 | ❌        | ❌        | ❌      | ❌      | ❌              | ❌            | ❌    | ✅            | ❌          | ❌                | ❌         | ❌         | ❌         | ❌        | ❌      | ❌          | ❌         | ❌         | ❌   |
| [image_segmentation()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.image_segmentation)             | ❌                 | ❌        | ❌        | ❌      | ✅      | ❌              | ❌            | ❌    | ✅            | ❌          | ❌                | ❌         | ❌         | ❌         | ❌        | ❌      | ❌          | ❌         | ❌         | ❌   |
| [image_to_image()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.image_to_image)                 | ❌                 | ❌        | ❌        | ❌      | ✅      | ❌              | ❌            | ❌    | ✅            | ❌          | ❌                | ❌         | ❌      | ❌          | ✅         | ❌         | ❌         | ❌        | ✅         | ❌   |
| [image_to_video()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.image_to_video)                 | ❌                 | ❌        | ❌        | ❌      | ✅      | ❌              | ❌            | ❌    | ❌            | ❌          | ❌                | ❌         | ❌         | ❌         | ❌        | ❌      | ❌          | ❌         | ✅         | ❌   |
| [image_to_text()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.image_to_text)                  | ❌                 | ❌        | ❌        | ❌      | ❌      | ❌              | ❌            | ❌    | ✅            | ❌          | ❌                | ❌         | ❌         | ❌         | ❌        | ❌      | ❌          | ❌         | ❌         | ❌   |
| [object_detection()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.object_detection)               | ❌                 | ❌        | ❌        | ❌      | ❌      | ❌              | ❌            | ❌    | ✅            | ❌          | ❌                | ❌         | ❌         | ❌         | ❌        | ❌      | ❌          | ❌         | ❌         | ❌   |
| [question_answering()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.question_answering)             | ❌                 | ❌        | ❌        | ❌      | ❌      | ❌              | ❌            | ❌    | ✅            | ❌          | ❌                | ❌         | ❌         | ❌         | ❌        | ❌      | ❌          | ❌         | ❌         | ❌   |
| [sentence_similarity()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.sentence_similarity)            | ❌                 | ❌        | ❌        | ❌      | ❌      | ❌              | ❌            | ❌    | ✅            | ❌          | ❌                | ❌         | ❌         | ❌         | ❌        | ❌      | ❌          | ❌         | ❌         | ❌   |
| [summarization()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.summarization)                  | ❌                 | ❌        | ❌        | ❌      | ❌      | ❌              | ❌            | ❌    | ✅            | ❌          | ❌                | ❌         | ❌         | ❌         | ❌        | ❌      | ❌          | ❌         | ❌         | ❌   |
| [table_question_answering()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.table_question_answering)       | ❌                 | ❌        | ❌        | ❌      | ❌      | ❌              | ❌            | ❌    | ✅            | ❌          | ❌                | ❌         | ❌         | ❌         | ❌        | ❌      | ❌          | ❌         | ❌         | ❌   |
| [text_classification()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.text_classification)            | ❌                 | ❌        | ❌        | ❌      | ❌      | ❌              | ❌            | ❌    | ✅            | ❌          | ❌                | ❌         | ❌         | ❌         | ❌        | ❌      | ❌          | ❌         | ❌         | ❌   |
| [text_generation()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.text_generation)                | ❌                 | ❌        | ❌        | ❌      | ❌      | ✅              | ❌            | ❌    | ✅            | ✅          | ✅                | ✅         | ❌      | ❌          | ❌         | ❌         | ❌         | ✅        | ❌         | ❌   |
| [text_to_image()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.text_to_image)                  | ✅                 | ❌        | ❌        | ❌      | ✅      | ❌              | ❌            | ❌    | ✅            | ✅          | ✅                | ❌         | ✅      | ❌          | ✅         | ❌         | ❌         | ✅        | ✅         | ❌   |
| [text_to_speech()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.text_to_speech)                 | ❌                 | ❌        | ❌        | ❌      | ❌      | ❌              | ❌            | ❌    | ✅            | ❌          | ❌                | ❌         | ❌      | ❌          | ✅         | ❌         | ❌         | ❌        | ❌         | ❌   |
| [text_to_video()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.text_to_video)                  | ❌                 | ❌        | ❌        | ❌      | ✅      | ❌              | ❌            | ❌    | ❌            | ❌          | ❌                | ✅         | ❌      | ❌          | ✅         | ❌         | ❌         | ❌        | ✅         | ❌   |
| [tabular_classification()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.tabular_classification)         | ❌                 | ❌        | ❌        | ❌      | ❌      | ❌              | ❌            | ❌    | ✅            | ❌          | ❌                | ❌         | ❌         | ❌         | ❌        | ❌      | ❌          | ❌         | ❌         | ❌   |
| [tabular_regression()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.tabular_regression)             | ❌                 | ❌        | ❌        | ❌      | ❌      | ❌              | ❌            | ❌    | ✅            | ❌          | ❌                | ❌         | ❌         | ❌         | ❌        | ❌      | ❌          | ❌         | ❌         | ❌   |
| [token_classification()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.token_classification)           | ❌                 | ❌        | ❌        | ❌      | ❌      | ❌              | ❌            | ❌    | ✅            | ❌          | ❌                | ❌         | ❌         | ❌         | ❌        | ❌      | ❌          | ❌         | ❌         | ❌   |
| [translation()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.translation)                    | ❌                 | ❌        | ❌        | ❌      | ❌      | ❌              | ❌            | ❌    | ✅            | ❌          | ❌                | ❌         | ❌         | ❌         | ❌        | ❌      | ❌          | ❌         | ❌         | ❌   |
| [visual_question_answering()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.visual_question_answering)      | ❌                 | ❌        | ❌        | ❌      | ❌      | ❌              | ❌            | ❌    | ✅            | ❌          | ❌                | ❌         | ❌         | ❌         | ❌        | ❌      | ❌          | ❌         | ❌         | ❌   |
| [zero_shot_image_classification()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.zero_shot_image_classification) | ❌                 | ❌        | ❌        | ❌      | ❌      | ❌              | ❌            | ❌    | ✅            | ❌          | ❌                | ❌         | ❌         | ❌         | ❌        | ❌      | ❌          | ❌         | ❌         | ❌   |
| [zero_shot_classification()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.zero_shot_classification)       | ❌                 | ❌        | ❌        | ❌      | ❌      | ❌              | ❌            | ❌    | ✅            | ❌          | ❌                | ❌         | ❌         | ❌         | ❌        | ❌      | ❌          | ❌         | ❌         | ❌   |

> [!TIP]
> Check out the [Tasks](https://huggingface.co/tasks) page to learn more about each task.

## OpenAI compatibility

The `chat_completion` task follows [OpenAI's Python client](https://github.com/openai/openai-python) syntax. What does that mean for you? It means that if you are used to playing with `OpenAI`'s APIs, you will be able to switch to `huggingface_hub.InferenceClient` to work with open-source models by updating just two lines of code!

```diff
- from openai import OpenAI
+ from huggingface_hub import InferenceClient

- client = OpenAI(
+ client = InferenceClient(
    base_url=...,
    api_key=...,
)


output = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Count to 10"},
    ],
    stream=True,
    max_tokens=1024,
)

for chunk in output:
    print(chunk.choices[0].delta.content)
```

And that's it! The only required changes are to replace `from openai import OpenAI` with `from huggingface_hub import InferenceClient` and `client = OpenAI(...)` with `client = InferenceClient(...)`. You can choose any LLM from the Hugging Face Hub by passing its model id as the `model` parameter. [Here is a list](https://huggingface.co/models?pipeline_tag=text-generation&other=conversational,text-generation-inference&sort=trending) of supported models. For authentication, you should pass a valid [User Access Token](https://huggingface.co/settings/tokens) as `api_key` or authenticate using `huggingface_hub` (see the [authentication guide](https://huggingface.co/docs/huggingface_hub/quick-start#authentication)).

All input parameters and output format are strictly the same. In particular, you can pass `stream=True` to receive tokens as they are generated. You can also use the [AsyncInferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.AsyncInferenceClient) to run inference using `asyncio`:

```diff
import asyncio
- from openai import AsyncOpenAI
+ from huggingface_hub import AsyncInferenceClient

- client = AsyncOpenAI()
+ client = AsyncInferenceClient()

async def main():
    stream = await client.chat.completions.create(
        model="meta-llama/Meta-Llama-3-8B-Instruct",
        messages=[{"role": "user", "content": "Say this is a test"}],
        stream=True,
    )
    async for chunk in stream:
        print(chunk.choices[0].delta.content or "", end="")

asyncio.run(main())
```

You might wonder why you should use [InferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient) instead of OpenAI's client. There are a few reasons:
1. [InferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient) is configured for Hugging Face services. You don't need to provide a `base_url` to run models with Inference Providers. You also don't need to provide a `token` or `api_key` if your machine is already correctly logged in.
2. [InferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient) is tailored for both Text-Generation-Inference (TGI) and `transformers` frameworks, meaning you are assured it will always be on-par with the latest updates.
3. [InferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient) is integrated with our Inference Endpoints service, making it easier to launch an Inference Endpoint, check its status and run inference on it. Check out the [Inference Endpoints](./inference_endpoints.md) guide for more details.

> [!TIP]
> `InferenceClient.chat.completions.create` is simply an alias for `InferenceClient.chat_completion`. Check out the package reference of [chat_completion()](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.chat_completion) for more details. `base_url` and `api_key` parameters when instantiating the client are also aliases for `model` and `token`. These aliases have been defined to reduce friction when switching from `OpenAI` to `InferenceClient`.

## Function Calling

Function calling allows LLMs to interact with external tools, such as defined functions or APIs. This enables users to easily build applications tailored to specific use cases and real-world tasks.
`InferenceClient` implements the same tool calling interface as the OpenAI Chat Completions API. Here is a simple example of tool calling using [Nebius](https://nebius.com/) as the inference provider:

```python
from huggingface_hub import InferenceClient

tools = [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get current temperature for a given location.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "City and country e.g. Paris, France"
                        }
                    },
                    "required": ["location"],
                },
            }
        }
]

client = InferenceClient(provider="nebius")

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-72B-Instruct",
    messages=[
    {
        "role": "user",
        "content": "What's the weather like the next 3 days in London, UK?"
    }
    ],
    tools=tools,
    tool_choice="auto",
)

print(response.choices[0].message.tool_calls[0].function.arguments)
```
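
The model does not execute anything itself: it only returns the tool name and a JSON string of arguments, and your code is responsible for running the matching function. A minimal dispatcher might look like this (the `get_weather` implementation below is a hypothetical stand-in for the tool declared in the schema above):

```python
import json

# Hypothetical local implementation of the tool declared in the schema above.
def get_weather(location: str) -> str:
    return f"Sunny in {location}"

AVAILABLE_TOOLS = {"get_weather": get_weather}

def dispatch_tool_call(name: str, arguments: str) -> str:
    """Parse the JSON-encoded arguments returned by the model and run the matching local function."""
    kwargs = json.loads(arguments)
    return AVAILABLE_TOOLS[name](**kwargs)

# `name` and `arguments` come from response.choices[0].message.tool_calls[0].function
print(dispatch_tool_call("get_weather", '{"location": "London, UK"}'))
# Sunny in London, UK
```

In a full conversation loop, you would append the tool result as a `"tool"` message and call the model again so it can produce the final answer.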

> [!TIP]
> Please refer to the providers' documentation to verify which models are supported by them for Function/Tool Calling.

## Structured Outputs & JSON Mode

InferenceClient supports JSON mode for syntactically valid JSON responses and Structured Outputs for schema-enforced responses. JSON mode provides machine-readable data without strict structure, while Structured Outputs guarantee both valid JSON and adherence to a predefined schema for reliable downstream processing.

We follow the OpenAI API specs for both JSON mode and Structured Outputs. You can enable them via the `response_format` argument. Here is an example of Structured Outputs using [Cerebras](https://www.cerebras.ai/) as the inference provider:

```python
from huggingface_hub import InferenceClient

json_schema = {
    "name": "book",
    "schema": {
        "properties": {
            "name": {
                "title": "Name",
                "type": "string",
            },
            "authors": {
                "items": {"type": "string"},
                "title": "Authors",
                "type": "array",
            },
        },
        "required": ["name", "authors"],
        "title": "Book",
        "type": "object",
    },
    "strict": True,
}

client = InferenceClient(provider="cerebras")


completion = client.chat.completions.create(
    model="Qwen/Qwen3-32B",
    messages=[
        {"role": "system", "content": "Extract the books information."},
        {"role": "user", "content": "I recently read 'The Great Gatsby' by F. Scott Fitzgerald."},
    ],
    response_format={
        "type": "json_schema",
        "json_schema": json_schema,
    },
)

print(completion.choices[0].message)
```
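
Because the response is guaranteed to match the schema, you can parse it with plain `json` and rely on the required fields being present. A small sketch, using a hard-coded payload shaped like the `book` schema above in place of `completion.choices[0].message.content`:

```python
import json

# Stand-in for completion.choices[0].message.content returned by the provider.
content = '{"name": "The Great Gatsby", "authors": ["F. Scott Fitzgerald"]}'

book = json.loads(content)
# With "strict": True, both required fields are guaranteed to be present.
print(f"{book['name']} by {', '.join(book['authors'])}")
# The Great Gatsby by F. Scott Fitzgerald
```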
> [!TIP]
> Please refer to the providers' documentation to verify which models are supported by them for Structured Outputs and JSON Mode.

## Async client

An async version of the client is also provided, based on `asyncio` and `httpx`. All async API endpoints are available via [AsyncInferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.AsyncInferenceClient). Its initialization and APIs are strictly the same as the sync-only version.

```py
# Code must be run in an asyncio concurrent context.
# $ python -m asyncio
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()

>>> image = await client.text_to_image("An astronaut riding a horse on the moon.")
>>> image.save("astronaut.png")

>>> async for token in await client.text_generation("The Huggingface Hub is", stream=True):
...     print(token, end="")
 a platform for sharing and discussing ML-related content.
```
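
A common pattern with the async client is fanning out many requests concurrently with `asyncio.gather`. The sketch below uses a stub coroutine in place of an actual client call so it runs without a network connection; in practice, you would swap in e.g. `client.chat_completion(...)`:

```python
import asyncio

async def fake_generate(prompt: str) -> str:
    # Stand-in for an actual AsyncInferenceClient call, e.g. `await client.chat_completion(...)`.
    await asyncio.sleep(0.01)  # simulated network latency
    return f"answer to: {prompt}"

async def main() -> list:
    prompts = ["Q1", "Q2", "Q3"]
    # All requests run concurrently; results come back in the original order.
    return await asyncio.gather(*(fake_generate(p) for p in prompts))

print(asyncio.run(main()))
# ['answer to: Q1', 'answer to: Q2', 'answer to: Q3']
```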

For more information about the `asyncio` module, please refer to the [official documentation](https://docs.python.org/3/library/asyncio.html).

## MCP Client

The `huggingface_hub` library now includes an experimental [MCPClient](/docs/huggingface_hub/main/en/package_reference/mcp#huggingface_hub.MCPClient), designed to empower Large Language Models (LLMs) with the ability to interact with external Tools via the [Model Context Protocol](https://modelcontextprotocol.io) (MCP). This client extends an [AsyncInferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.AsyncInferenceClient) to seamlessly integrate Tool usage.

The [MCPClient](/docs/huggingface_hub/main/en/package_reference/mcp#huggingface_hub.MCPClient) connects to MCP servers (either local `stdio` scripts or remote `http`/`sse` services) that expose tools. It feeds these tools to an LLM (via [AsyncInferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.AsyncInferenceClient)). If the LLM decides to use a tool, [MCPClient](/docs/huggingface_hub/main/en/package_reference/mcp#huggingface_hub.MCPClient) manages the execution request to the MCP server and relays the Tool's output back to the LLM, often streaming results in real-time.

In the following example, we use the [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) model via the [Nebius](https://nebius.com/) inference provider. We then add a remote MCP server, in this case an SSE server that makes the Flux image generation tool available to the LLM.

```python
import os

from huggingface_hub import ChatCompletionInputMessage, ChatCompletionStreamOutput, MCPClient


async def main():
    async with MCPClient(
        provider="nebius",
        model="Qwen/Qwen2.5-72B-Instruct",
        api_key=os.environ["HF_TOKEN"],
    ) as client:
        await client.add_mcp_server(type="sse", url="https://evalstate-flux1-schnell.hf.space/gradio_api/mcp/sse")

        messages = [
            {
                "role": "user",
                "content": "Generate a picture of a cat on the moon",
            }
        ]

        async for chunk in client.process_single_turn_with_tools(messages):
            # Log messages
            if isinstance(chunk, ChatCompletionStreamOutput):
                delta = chunk.choices[0].delta
                if delta.content:
                    print(delta.content, end="")

            # Or tool calls
            elif isinstance(chunk, ChatCompletionInputMessage):
                print(
                    f"\nCalled tool '{chunk.name}'. Result: '{chunk.content if len(chunk.content) < 1000 else chunk.content[:1000] + '...'}'"
                )


if __name__ == "__main__":
    import asyncio

    asyncio.run(main())
```


For even simpler development, we offer a higher-level [Agent](/docs/huggingface_hub/main/en/package_reference/mcp#huggingface_hub.Agent) class. This 'Tiny Agent' simplifies creating conversational Agents by managing the chat loop and state, essentially acting as a wrapper around [MCPClient](/docs/huggingface_hub/main/en/package_reference/mcp#huggingface_hub.MCPClient). It's designed to be a simple while loop built right on top of an [MCPClient](/docs/huggingface_hub/main/en/package_reference/mcp#huggingface_hub.MCPClient). You can run these Agents directly from the command line:


```bash
# Install the latest version of huggingface_hub with the mcp extra
pip install -U "huggingface_hub[mcp]"
# Run an agent that uses the Flux image generation tool
tiny-agents run julien-c/flux-schnell-generator
```

When launched, the Agent will load, list the Tools it has discovered from its connected MCP servers, and then it's ready for your prompts!

## Advanced tips

In the above section, we saw the main aspects of [InferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient). Let's dive into some more advanced tips.

### Billing

As an HF user, you get monthly credits to run inference through various providers on the Hub. The amount of credits you get depends on your type of account (Free, PRO, or Enterprise Hub). You are charged for every inference request, depending on the provider's pricing table. By default, requests are billed to your personal account. However, it is possible to have requests charged to an organization you are part of by simply passing `bill_to="<your_org_name>"` to `InferenceClient`. For this to work, your organization must be subscribed to Enterprise Hub. For more details about billing, check out [this guide](https://huggingface.co/docs/api-inference/pricing#features-using-inference-providers).

```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(provider="fal-ai", bill_to="openai")
>>> image = client.text_to_image(
...     "A majestic lion in a fantasy forest",
...     model="black-forest-labs/FLUX.1-schnell",
... )
>>> image.save("lion.png")
```

Note that it is NOT possible to charge another user or an organization you are not part of. If you want to grant someone else some credits, you must create a joint organization with them.


### Timeout

Inference calls can take a significant amount of time. By default, [InferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient) will wait indefinitely until the inference completes. If you want more control over your workflow, you can set the `timeout` parameter to a value in seconds. If the timeout expires, an [InferenceTimeoutError](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) is raised, which you can catch in your code:

```python
>>> from huggingface_hub import InferenceClient, InferenceTimeoutError
>>> client = InferenceClient(timeout=30)
>>> try:
...     client.text_to_image(...)
... except InferenceTimeoutError:
...     print("Inference timed out after 30s.")
```
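
For transient failures, you may want to retry with backoff instead of giving up on the first timeout. A generic sketch (when wrapping an actual client call, pass `InferenceTimeoutError` as `retry_on`; a plain `TimeoutError` keeps this example self-contained):

```python
import time

def with_retries(fn, max_attempts=3, base_delay=1.0, retry_on=(TimeoutError,)):
    """Call fn(), retrying with exponential backoff on the given exception types."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except retry_on:
            if attempt == max_attempts - 1:
                raise  # out of attempts: propagate the last error
            time.sleep(base_delay * 2**attempt)

# Usage sketch: with_retries(lambda: client.text_to_image(...), retry_on=(InferenceTimeoutError,))
```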

### Binary inputs

Some tasks require binary inputs, for example, when dealing with images or audio files. In this case, [InferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient)
tries to be as permissive as possible and accepts several input types:
- raw `bytes`
- a file-like object, opened as binary (`with open("audio.flac", "rb") as f: ...`)
- a path (`str` or `Path`) pointing to a local file
- a URL (`str`) pointing to a remote file (e.g. `https://...`). In this case, the file will be downloaded locally before
being sent to the API.

```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.image_classification("https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg")
[{'score': 0.9779096841812134, 'label': 'Blenheim spaniel'}, ...]
```
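
To illustrate, here is a simplified sketch of the kind of normalization the client performs for these inputs (`to_bytes` is a hypothetical helper, not a public API; the real implementation also handles streaming and content types):

```python
from pathlib import Path
from urllib.request import urlopen

def to_bytes(data) -> bytes:
    """Normalize the accepted binary input types down to raw bytes (simplified sketch)."""
    if isinstance(data, bytes):
        return data
    if hasattr(data, "read"):  # file-like object opened in binary mode
        return data.read()
    if isinstance(data, (str, Path)):
        s = str(data)
        if s.startswith(("http://", "https://")):
            with urlopen(s) as response:  # remote file: download it before sending
                return response.read()
        return Path(s).read_bytes()  # local file path
    raise TypeError(f"Unsupported input type: {type(data)}")
```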


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/guides/inference.md" />

### How-to guides
https://huggingface.co/docs/huggingface_hub/main/guides/overview.md

# How-to guides

In this section, you will find practical guides to help you achieve a specific goal.
Take a look at these guides to learn how to use huggingface_hub to solve real-world problems:

<div class="mt-10">
  <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-3 md:gap-y-4 md:gap-x-5">

    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg"
       href="./repository">
      <div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">
        Repository
      </div><p class="text-gray-700">
        How to create a repository on the Hub? How to configure it? How to interact with it?
      </p>
    </a>

    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg"
       href="./download">
      <div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">
        Download files
      </div><p class="text-gray-700">
        How do I download a file from the Hub? How do I download a repository?
      </p>
    </a>

    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg"
       href="./upload">
      <div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">
        Upload files
      </div><p class="text-gray-700">
        How to upload a file or a folder? How to make changes to an existing repository on the Hub?
      </p>
    </a>

    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg"
       href="./search">
      <div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">
        Search
      </div><p class="text-gray-700">
        How to efficiently search through the 200k+ public models, datasets and spaces?
      </p>
    </a>

    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg"
       href="./hf_file_system">
      <div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">
        HfFileSystem
      </div><p class="text-gray-700">
        How to interact with the Hub through a convenient interface that mimics Python's file interface?
      </p>
    </a>

    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg"
       href="./inference">
      <div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">
        Inference
      </div><p class="text-gray-700">
        How to make predictions using Hugging Face Inference Providers?
      </p>
    </a>

    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg"
       href="./community">
      <div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">
        Community Tab
      </div><p class="text-gray-700">
        How to interact with the Community tab (Discussions and Pull Requests)?
      </p>
    </a>

    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg"
       href="./collections">
      <div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">
        Collections
      </div><p class="text-gray-700">
        How to programmatically build collections?
      </p>
    </a>

    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg"
       href="./manage-cache">
      <div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">
        Cache
      </div><p class="text-gray-700">
        How does the cache-system work? How to benefit from it?
      </p>
    </a>

    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg"
       href="./model-cards">
      <div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">
        Model Cards
      </div><p class="text-gray-700">
        How to create and share Model Cards?
      </p>
    </a>

    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg"
       href="./manage-spaces">
      <div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">
        Manage your Space
      </div><p class="text-gray-700">
        How to manage your Space hardware and configuration?
      </p>
    </a>

    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg"
       href="./integrations">
      <div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">
        Integrate a library
      </div><p class="text-gray-700">
        What does it mean to integrate a library with the Hub? And how to do it?
      </p>
    </a>

    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg"
       href="./webhooks_server">
      <div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">
        Webhooks server
      </div><p class="text-gray-700">
        How to create a server to receive Webhooks and deploy it as a Space?
      </p>
    </a>

    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg"
       href="./jobs">
      <div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">
        Jobs
      </div><p class="text-gray-700">
        How to run and manage compute Jobs on Hugging Face infrastructure and select the hardware?
      </p>
    </a>

  </div>
</div>


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/guides/overview.md" />

### Interact with Discussions and Pull Requests
https://huggingface.co/docs/huggingface_hub/main/guides/community.md

# Interact with Discussions and Pull Requests

The `huggingface_hub` library provides a Python interface to interact with Pull Requests and Discussions on the Hub.
Visit [the dedicated documentation page](https://huggingface.co/docs/hub/repositories-pull-requests-discussions)
for a deeper view of what Discussions and Pull Requests on the Hub are, and how they work under the hood.

## Retrieve Discussions and Pull Requests from the Hub

The `HfApi` class allows you to retrieve Discussions and Pull Requests on a given repo:

```python
>>> from huggingface_hub import get_repo_discussions
>>> for discussion in get_repo_discussions(repo_id="bigscience/bloom"):
...     print(f"{discussion.num} - {discussion.title}, pr: {discussion.is_pull_request}")

# 11 - Add Flax weights, pr: True
# 10 - Update README.md, pr: True
# 9 - Training languages in the model card, pr: True
# 8 - Update tokenizer_config.json, pr: True
# 7 - Slurm training script, pr: False
[...]
```

`HfApi.get_repo_discussions` supports filtering by author, type (Pull Request or Discussion) and status (`open` or `closed`):

```python
>>> from huggingface_hub import get_repo_discussions
>>> for discussion in get_repo_discussions(
...    repo_id="bigscience/bloom",
...    author="ArthurZ",
...    discussion_type="pull_request",
...    discussion_status="open",
... ):
...     print(f"{discussion.num} - {discussion.title} by {discussion.author}, pr: {discussion.is_pull_request}")

# 19 - Add Flax weights by ArthurZ, pr: True
```

`HfApi.get_repo_discussions` returns a [generator](https://docs.python.org/3.7/howto/functional.html#generators) that yields
[Discussion](/docs/huggingface_hub/main/en/package_reference/community#huggingface_hub.Discussion) objects. To get all the Discussions in a single list, run:

```python
>>> from huggingface_hub import get_repo_discussions
>>> discussions_list = list(get_repo_discussions(repo_id="bert-base-uncased"))
```

The [Discussion](/docs/huggingface_hub/main/en/package_reference/community#huggingface_hub.Discussion) object returned by [HfApi.get_repo_discussions()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.get_repo_discussions) contains a high-level overview of the
Discussion or Pull Request. You can get more detailed information using [HfApi.get_discussion_details()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.get_discussion_details):

```python
>>> from huggingface_hub import get_discussion_details

>>> get_discussion_details(
...     repo_id="bigscience/bloom-1b3",
...     discussion_num=2
... )
DiscussionWithDetails(
    num=2,
    author='cakiki',
    title='Update VRAM memory for the V100s',
    status='open',
    is_pull_request=True,
    events=[
        DiscussionComment(type='comment', author='cakiki', ...),
        DiscussionCommit(type='commit', author='cakiki', summary='Update VRAM memory for the V100s', oid='1256f9d9a33fa8887e1c1bf0e09b4713da96773a', ...),
    ],
    conflicting_files=[],
    target_branch='refs/heads/main',
    merge_commit_oid=None,
    diff='diff --git a/README.md b/README.md\nindex a6ae3b9294edf8d0eda0d67c7780a10241242a7e..3a1814f212bc3f0d3cc8f74bdbd316de4ae7b9e3 100644\n--- a/README.md\n+++ b/README.md\n@@ -132,7 +132,7 [...]',
)
```

[HfApi.get_discussion_details()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.get_discussion_details) returns a [DiscussionWithDetails](/docs/huggingface_hub/main/en/package_reference/community#huggingface_hub.DiscussionWithDetails) object, which is a subclass of [Discussion](/docs/huggingface_hub/main/en/package_reference/community#huggingface_hub.Discussion)
with more detailed information about the Discussion or Pull Request. Information includes all the comments, status changes,
and renames of the Discussion via `DiscussionWithDetails.events`.

In the case of a Pull Request, you can retrieve the raw git diff with `DiscussionWithDetails.diff`. All the commits of the
Pull Request are listed in `DiscussionWithDetails.events`.
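Since every event carries a `type` field, separating comments from commits is a one-liner. A minimal sketch using stand-in objects that mirror the output above (real events are `DiscussionComment` and `DiscussionCommit` instances):

```python
from types import SimpleNamespace

# Stand-ins mirroring DiscussionWithDetails.events from the example above.
events = [
    SimpleNamespace(type="comment", author="cakiki"),
    SimpleNamespace(type="commit", author="cakiki", summary="Update VRAM memory for the V100s"),
]

comments = [e for e in events if e.type == "comment"]
commits = [e for e in events if e.type == "commit"]
print(f"{len(comments)} comment(s), {len(commits)} commit(s)")  # 1 comment(s), 1 commit(s)
```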


## Create and edit a Discussion or Pull Request programmatically

The [HfApi](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi) class also offers ways to create and edit Discussions and Pull Requests.
You will need an [access token](https://huggingface.co/docs/hub/security-tokens) to create and edit Discussions
or Pull Requests.

The simplest way to propose changes on a repo on the Hub is via the [create_commit()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_commit) API: just
set the `create_pr` parameter to `True`. This parameter is also available on other methods that wrap [create_commit()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_commit):

* [upload_file()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_file)
* [upload_folder()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_folder)
* [delete_file()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.delete_file)
* [delete_folder()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.delete_folder)
* [metadata_update()](/docs/huggingface_hub/main/en/package_reference/cards#huggingface_hub.metadata_update)

```python
>>> from huggingface_hub import metadata_update

>>> metadata_update(
...     repo_id="username/repo_name",
...     metadata={"tags": ["computer-vision", "awesome-model"]},
...     create_pr=True,
... )
```

You can also use [HfApi.create_discussion()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_discussion) (respectively [HfApi.create_pull_request()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_pull_request)) to create a Discussion (respectively a Pull Request) on a repo.
Opening a Pull Request this way is useful if you need to work on changes locally; such Pull Requests are created in `"draft"` mode.

```python
>>> from huggingface_hub import create_discussion, create_pull_request

>>> create_discussion(
...     repo_id="username/repo-name",
...     title="Hi from the huggingface_hub library!",
...     token="<insert your access token here>",
... )
DiscussionWithDetails(...)

>>> create_pull_request(
...     repo_id="username/repo-name",
...     title="Hi from the huggingface_hub library!",
...     token="<insert your access token here>",
... )
DiscussionWithDetails(..., is_pull_request=True)
```
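Each Pull Request on the Hub is backed by a git reference of the form `refs/pr/<num>` (exposed on the returned object as `Discussion.git_reference`), which you can pass as the `revision` argument of commit methods such as `upload_file()` to push changes onto the Pull Request. A minimal, illustrative helper (`pr_git_reference` is a hypothetical name, not part of the library):

```python
def pr_git_reference(discussion_num: int) -> str:
    """Build the git reference backing a Pull Request, e.g. "refs/pr/42"."""
    return f"refs/pr/{discussion_num}"

ref = pr_git_reference(2)
print(ref)  # refs/pr/2
# e.g. upload_file(..., repo_id="username/repo-name", revision=ref)  # commits onto PR #2
```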

Managing Pull Requests and Discussions can be done entirely with the [HfApi](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi) class. For example:

* [comment_discussion()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.comment_discussion) to add comments
* [edit_discussion_comment()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.edit_discussion_comment) to edit comments
* [rename_discussion()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.rename_discussion) to rename a Discussion or Pull Request
* [change_discussion_status()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.change_discussion_status) to open or close a Discussion / Pull Request
* [merge_pull_request()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.merge_pull_request) to merge a Pull Request


Visit the [HfApi](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi) documentation page for an exhaustive reference of all available methods.

## Push changes to a Pull Request

*Coming soon!*

## See also

For a more detailed reference, visit the [Discussions and Pull Requests](../package_reference/community) and the [hf_api](../package_reference/hf_api) documentation pages.


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/guides/community.md" />

### Interact with the Hub through the Filesystem API
https://huggingface.co/docs/huggingface_hub/main/guides/hf_file_system.md

# Interact with the Hub through the Filesystem API

In addition to the [HfApi](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi), the `huggingface_hub` library provides [HfFileSystem](/docs/huggingface_hub/main/en/package_reference/hf_file_system#huggingface_hub.HfFileSystem), a pythonic [fsspec-compatible](https://filesystem-spec.readthedocs.io/en/latest/) file interface to the Hugging Face Hub. The [HfFileSystem](/docs/huggingface_hub/main/en/package_reference/hf_file_system#huggingface_hub.HfFileSystem) builds on top of the [HfApi](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi) and offers typical filesystem style operations like `cp`, `mv`, `ls`, `du`, `glob`, `get_file`, and `put_file`.

> [!WARNING]
> [HfFileSystem](/docs/huggingface_hub/main/en/package_reference/hf_file_system#huggingface_hub.HfFileSystem) provides fsspec compatibility, which is useful for libraries that require it (e.g., reading
>   Hugging Face datasets directly with `pandas`). However, it introduces additional overhead due to this compatibility
>   layer. For better performance and reliability, it's recommended to use [HfApi](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi) methods when possible.

## Usage

```python
>>> from huggingface_hub import HfFileSystem
>>> fs = HfFileSystem()

>>> # List all files in a directory
>>> fs.ls("datasets/my-username/my-dataset-repo/data", detail=False)
['datasets/my-username/my-dataset-repo/data/train.csv', 'datasets/my-username/my-dataset-repo/data/test.csv']

>>> # List all ".csv" files in a repo
>>> fs.glob("datasets/my-username/my-dataset-repo/**/*.csv")
['datasets/my-username/my-dataset-repo/data/train.csv', 'datasets/my-username/my-dataset-repo/data/test.csv']

>>> # Read a remote file
>>> with fs.open("datasets/my-username/my-dataset-repo/data/train.csv", "r") as f:
...     train_data = f.readlines()

>>> # Read the content of a remote file as a string
>>> train_data = fs.read_text("datasets/my-username/my-dataset-repo/data/train.csv", revision="dev")

>>> # Write a remote file
>>> with fs.open("datasets/my-username/my-dataset-repo/data/validation.csv", "w") as f:
...     f.write("text,label")
...     f.write("Fantastic movie!,good")
```

The optional `revision` argument can be passed to run an operation against a specific revision, such as a branch, a tag name, or a commit hash.

Unlike Python's built-in `open`, `fsspec`'s `open` defaults to binary mode, `"rb"`. This means you must explicitly set the mode to `"r"` for reading and `"w"` for writing in text mode. Appending to a file (modes `"a"` and `"ab"`) is not supported yet.

## Integrations

The [HfFileSystem](/docs/huggingface_hub/main/en/package_reference/hf_file_system#huggingface_hub.HfFileSystem) can be used with any library that integrates `fsspec`, provided the URL follows the scheme:

```
hf://[<repo_type_prefix>]<repo_id>[@<revision>]/<path/in/repo>
```

<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/huggingface_hub/hf_urls.png"/>
</div>

The `repo_type_prefix` is `datasets/` for datasets, `spaces/` for spaces, and models don't need a prefix in the URL.
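For illustration, the scheme can be captured in a small helper that assembles such URLs (`build_hf_url` is a hypothetical name, not part of `huggingface_hub`):

```python
from typing import Optional

def build_hf_url(repo_id: str, path_in_repo: str, repo_type: str = "model", revision: Optional[str] = None) -> str:
    """Assemble an hf:// URL: hf://[<repo_type_prefix>]<repo_id>[@<revision>]/<path/in/repo>."""
    prefix = {"model": "", "dataset": "datasets/", "space": "spaces/"}[repo_type]
    at_revision = f"@{revision}" if revision else ""
    return f"hf://{prefix}{repo_id}{at_revision}/{path_in_repo}"

print(build_hf_url("my-username/my-dataset-repo", "data/train.csv", repo_type="dataset"))
# hf://datasets/my-username/my-dataset-repo/data/train.csv
print(build_hf_url("my-username/my-model-repo", "config.json", revision="dev"))
# hf://my-username/my-model-repo@dev/config.json
```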

Some interesting integrations where [HfFileSystem](/docs/huggingface_hub/main/en/package_reference/hf_file_system#huggingface_hub.HfFileSystem) simplifies interacting with the Hub are listed below:

* Reading/writing a [Pandas](https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#reading-writing-remote-files) DataFrame from/to a Hub repository:

  ```python
  >>> import pandas as pd

  >>> # Read a remote CSV file into a dataframe
  >>> df = pd.read_csv("hf://datasets/my-username/my-dataset-repo/train.csv")

  >>> # Write a dataframe to a remote CSV file
  >>> df.to_csv("hf://datasets/my-username/my-dataset-repo/test.csv")
  ```

The same workflow can also be used for [Dask](https://docs.dask.org/en/stable/how-to/connect-to-remote-data.html) and [Polars](https://pola-rs.github.io/polars/py-polars/html/reference/io.html) DataFrames.

* Querying (remote) Hub files with [DuckDB](https://duckdb.org/docs/guides/python/filesystems):

  ```python
  >>> from huggingface_hub import HfFileSystem
  >>> import duckdb

  >>> fs = HfFileSystem()
  >>> duckdb.register_filesystem(fs)
  >>> # Query a remote file and get the result back as a dataframe
  >>> fs_query_file = "hf://datasets/my-username/my-dataset-repo/data_dir/data.parquet"
  >>> df = duckdb.query(f"SELECT * FROM '{fs_query_file}' LIMIT 10").df()
  ```

* Using the Hub as an array store with [Zarr](https://zarr.readthedocs.io/en/stable/tutorial.html#io-with-fsspec):

  ```python
  >>> import numpy as np
  >>> import zarr

  >>> embeddings = np.random.randn(50000, 1000).astype("float32")

  >>> # Write an array to a repo
  >>> with zarr.open_group("hf://my-username/my-model-repo/array-store", mode="w") as root:
  ...    foo = root.create_group("embeddings")
  ...    foobar = foo.zeros('experiment_0', shape=(50000, 1000), chunks=(10000, 1000), dtype='f4')
  ...    foobar[:] = embeddings

  >>> # Read an array from a repo
  >>> with zarr.open_group("hf://my-username/my-model-repo/array-store", mode="r") as root:
  ...    first_row = root["embeddings/experiment_0"][0]
  ```

## Authentication

In many cases, you must be logged in with a Hugging Face account to interact with the Hub. Refer to the [Authentication](../quick-start#authentication) section of the documentation to learn more about authentication methods on the Hub.

It is also possible to log in programmatically by passing your `token` as an argument to [HfFileSystem](/docs/huggingface_hub/main/en/package_reference/hf_file_system#huggingface_hub.HfFileSystem):

```python
>>> from huggingface_hub import HfFileSystem
>>> fs = HfFileSystem(token=token)
```

If you log in this way, be careful not to accidentally leak the token when sharing your source code!
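A common way to avoid leaking tokens is to read them from the environment instead of hardcoding them (`huggingface_hub` itself also picks up the `HF_TOKEN` environment variable by default). A minimal sketch, with the lookup factored out so it is easy to test:

```python
import os
from typing import Optional

def get_hf_token(env) -> Optional[str]:
    """Return the Hugging Face token from an environment mapping, or None if unset."""
    return env.get("HF_TOKEN")

token = get_hf_token(os.environ)
# fs = HfFileSystem(token=token)  # the token never appears in the source code
```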


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/guides/hf_file_system.md" />

### Inference Endpoints
https://huggingface.co/docs/huggingface_hub/main/guides/inference_endpoints.md

# Inference Endpoints

Inference Endpoints provides a secure production solution to easily deploy any `transformers`, `sentence-transformers`, and `diffusers` models on a dedicated and autoscaling infrastructure managed by Hugging Face. An Inference Endpoint is built from a model from the [Hub](https://huggingface.co/models).
In this guide, we will learn how to programmatically manage Inference Endpoints with `huggingface_hub`. For more information about the Inference Endpoints product itself, check out its [official documentation](https://huggingface.co/docs/inference-endpoints/index).

This guide assumes `huggingface_hub` is correctly installed and that your machine is logged in. Check out the [Quick Start guide](https://huggingface.co/docs/huggingface_hub/quick-start#quickstart) if that's not the case yet. The minimal version supporting Inference Endpoints API is `v0.19.0`.


> [!TIP]
> **New:** it is now possible to deploy an Inference Endpoint from the [HF model catalog](https://endpoints.huggingface.co/catalog) with a simple API call. The catalog is a carefully curated list of models that can be deployed with optimized settings. You don't need to configure anything: we take care of the heavy lifting for you! All models and settings are tested to provide the best cost/performance balance. [create_inference_endpoint_from_catalog()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_inference_endpoint_from_catalog) works the same as [create_inference_endpoint()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_inference_endpoint), with far fewer parameters to pass. You can use [list_inference_catalog()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.list_inference_catalog) to programmatically retrieve the catalog.
>
> Note that this is still an experimental feature. Let us know what you think if you use it!


## Create an Inference Endpoint

The first step is to create an Inference Endpoint using [create_inference_endpoint()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_inference_endpoint):

```py
>>> from huggingface_hub import create_inference_endpoint

>>> endpoint = create_inference_endpoint(
...     "my-endpoint-name",
...     repository="gpt2",
...     framework="pytorch",
...     task="text-generation",
...     accelerator="cpu",
...     vendor="aws",
...     region="us-east-1",
...     type="protected",
...     instance_size="x2",
...     instance_type="intel-icl"
... )
```

Or via CLI:

```bash
hf endpoints deploy my-endpoint-name --repo gpt2 --framework pytorch --accelerator cpu --vendor aws --region us-east-1 --instance-size x2 --instance-type intel-icl --task text-generation

# Deploy from the catalog with a single command
hf endpoints catalog deploy my-endpoint-name --repo openai/gpt-oss-120b
```


In this example, we created a `protected` Inference Endpoint named `"my-endpoint-name"`, to serve [gpt2](https://huggingface.co/gpt2) for `text-generation`. A `protected` Inference Endpoint means your token is required to access the API. We also need to provide additional information to configure the hardware requirements, such as vendor, region, accelerator, instance type, and size. You can check out the list of available resources [here](https://api.endpoints.huggingface.cloud/#/v2%3A%3Aprovider/list_vendors). Alternatively, you can create an Inference Endpoint manually using the [Web interface](https://ui.endpoints.huggingface.co/new) for convenience. Refer to this [guide](https://huggingface.co/docs/inference-endpoints/guides/advanced) for details on advanced settings and their usage.

The value returned by [create_inference_endpoint()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_inference_endpoint) is an [InferenceEndpoint](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint) object:

```py
>>> endpoint
InferenceEndpoint(name='my-endpoint-name', namespace='Wauplin', repository='gpt2', status='pending', url=None)
```

Or via CLI:

```bash
hf endpoints describe my-endpoint-name
```

It's a dataclass that holds information about the endpoint. You can access important attributes such as `name`, `repository`, `status`, `task`, `created_at`, `updated_at`, etc. If you need it, you can also access the raw response from the server with `endpoint.raw`.

Once your Inference Endpoint is created, you can find it on your [personal dashboard](https://ui.endpoints.huggingface.co/).

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/huggingface_hub/inference_endpoints_created.png)

#### Using a custom image

By default the Inference Endpoint is built from a docker image provided by Hugging Face. However, it is possible to specify any docker image using the `custom_image` parameter. A common use case is to run LLMs using the [text-generation-inference](https://github.com/huggingface/text-generation-inference) framework. This can be done like this:

```python
# Start an Inference Endpoint running Zephyr-7b-beta on TGI
>>> from huggingface_hub import create_inference_endpoint
>>> endpoint = create_inference_endpoint(
...     "aws-zephyr-7b-beta-0486",
...     repository="HuggingFaceH4/zephyr-7b-beta",
...     framework="pytorch",
...     task="text-generation",
...     accelerator="gpu",
...     vendor="aws",
...     region="us-east-1",
...     type="protected",
...     instance_size="x1",
...     instance_type="nvidia-a10g",
...     custom_image={
...         "health_route": "/health",
...         "env": {
...             "MAX_BATCH_PREFILL_TOKENS": "2048",
...             "MAX_INPUT_LENGTH": "1024",
...             "MAX_TOTAL_TOKENS": "1512",
...             "MODEL_ID": "/repository"
...         },
...         "url": "ghcr.io/huggingface/text-generation-inference:1.1.0",
...     },
... )
```

The value to pass as `custom_image` is a dictionary containing the URL of the docker image and the configuration to run it. For more details, check out the [Swagger documentation](https://api.endpoints.huggingface.cloud/#/v2%3A%3Aendpoint/create_endpoint).
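If you deploy several TGI endpoints, it can help to factor the dictionary out into a helper. The sketch below only covers the fields used in the example above (`url`, `health_route`, `env`); the helper name and defaults are illustrative, not part of the library:

```python
def tgi_custom_image(image_url, env=None, health_route="/health"):
    """Build a custom_image dict for create_inference_endpoint (fields as in the example above)."""
    return {
        "health_route": health_route,
        "env": {"MODEL_ID": "/repository", **(env or {})},  # extra env vars override nothing but MODEL_ID default
        "url": image_url,
    }

image = tgi_custom_image(
    "ghcr.io/huggingface/text-generation-inference:1.1.0",
    env={"MAX_TOTAL_TOKENS": "1512"},
)
```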

### Get or list existing Inference Endpoints

In some cases, you might need to manage Inference Endpoints you created previously. If you know the name, you can fetch it using [get_inference_endpoint()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.get_inference_endpoint), which returns an [InferenceEndpoint](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint) object. Alternatively, you can use [list_inference_endpoints()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.list_inference_endpoints) to retrieve a list of all Inference Endpoints. Both methods accept an optional `namespace` parameter. You can set the `namespace` to any organization you are a part of. Otherwise, it defaults to your username.

```py
>>> from huggingface_hub import get_inference_endpoint, list_inference_endpoints

# Get one
>>> get_inference_endpoint("my-endpoint-name")
InferenceEndpoint(name='my-endpoint-name', namespace='Wauplin', repository='gpt2', status='pending', url=None)

# List all endpoints from an organization
>>> list_inference_endpoints(namespace="huggingface")
[InferenceEndpoint(name='aws-starchat-beta', namespace='huggingface', repository='HuggingFaceH4/starchat-beta', status='paused', url=None), ...]

# List all endpoints from all organizations the user belongs to
>>> list_inference_endpoints(namespace="*")
[InferenceEndpoint(name='aws-starchat-beta', namespace='huggingface', repository='HuggingFaceH4/starchat-beta', status='paused', url=None), ...]
```

Or via CLI: 

```bash
hf endpoints describe my-endpoint-name
hf endpoints ls --namespace huggingface
hf endpoints ls --namespace '*'
```

## Check deployment status

In the rest of this guide, we will assume that we have a [InferenceEndpoint](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint) object called `endpoint`. You might have noticed that the endpoint has a `status` attribute of type [InferenceEndpointStatus](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpointStatus). When the Inference Endpoint is deployed and accessible, the status should be `"running"` and the `url` attribute is set:

```py
>>> endpoint
InferenceEndpoint(name='my-endpoint-name', namespace='Wauplin', repository='gpt2', status='running', url='https://jpj7k2q4j805b727.us-east-1.aws.endpoints.huggingface.cloud')
```

Before reaching a `"running"` state, the Inference Endpoint typically goes through an `"initializing"` or `"pending"` phase. You can fetch the new state of the endpoint by running [fetch()](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.fetch). Like every other method from [InferenceEndpoint](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint) that makes a request to the server, the internal attributes of `endpoint` are mutated in place:

```py
>>> endpoint.fetch()
InferenceEndpoint(name='my-endpoint-name', namespace='Wauplin', repository='gpt2', status='pending', url=None)
```

Or via CLI:

```bash
hf endpoints describe my-endpoint-name
```

Instead of fetching the Inference Endpoint status while waiting for it to run, you can directly call [wait()](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.wait). This helper takes as input a `timeout` and a `fetch_every` parameter (in seconds) and will block the thread until the Inference Endpoint is deployed. Default values are respectively `None` (no timeout) and `5` seconds.

```py
# Pending endpoint
>>> endpoint
InferenceEndpoint(name='my-endpoint-name', namespace='Wauplin', repository='gpt2', status='pending', url=None)

# Wait 10s => raises an InferenceEndpointTimeoutError
>>> endpoint.wait(timeout=10)
    raise InferenceEndpointTimeoutError("Timeout while waiting for Inference Endpoint to be deployed.")
huggingface_hub._inference_endpoints.InferenceEndpointTimeoutError: Timeout while waiting for Inference Endpoint to be deployed.

# Wait more
>>> endpoint.wait()
InferenceEndpoint(name='my-endpoint-name', namespace='Wauplin', repository='gpt2', status='running', url='https://jpj7k2q4j805b727.us-east-1.aws.endpoints.huggingface.cloud')
```

If `timeout` is set and the Inference Endpoint takes too long to deploy, an `InferenceEndpointTimeoutError` is raised.
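Under the hood, this kind of helper boils down to a polling loop. A simplified stand-in (not the library implementation; `fetch_status` is injected here so the sketch stays self-contained):

```python
import time

def wait_until_running(fetch_status, timeout=None, fetch_every=5):
    """Poll fetch_status() every fetch_every seconds until it returns "running";
    raise TimeoutError after timeout seconds (None means wait forever)."""
    start = time.time()
    while True:
        status = fetch_status()
        if status == "running":
            return status
        if timeout is not None and time.time() - start >= timeout:
            raise TimeoutError("Timeout while waiting for Inference Endpoint to be deployed.")
        time.sleep(fetch_every)

# Simulate an endpoint that becomes ready on the third poll.
statuses = iter(["pending", "initializing", "running"])
result = wait_until_running(lambda: next(statuses), timeout=10, fetch_every=0)
print(result)  # running
```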

## Run inference

Once your Inference Endpoint is up and running, you can finally run inference on it!

[InferenceEndpoint](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint) has two properties, `client` and `async_client`, returning an [InferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient) and an [AsyncInferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.AsyncInferenceClient) object respectively.

```py
# Run text_generation task:
>>> endpoint.client.text_generation("I am")
' not a fan of the idea of a "big-budget" movie. I think it\'s a'

# Or in an asyncio context:
>>> await endpoint.async_client.text_generation("I am")
```

If the Inference Endpoint is not running, an [InferenceEndpointError](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpointError) exception is raised:

```py
>>> endpoint.client
huggingface_hub._inference_endpoints.InferenceEndpointError: Cannot create a client for this Inference Endpoint as it is not yet deployed. Please wait for the Inference Endpoint to be deployed using `endpoint.wait()` and try again.
```

For more details about how to use the [InferenceClient](/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient), check out the [Inference guide](../guides/inference).

## Manage lifecycle

Now that we saw how to create an Inference Endpoint and run inference on it, let's see how to manage its lifecycle.

> [!TIP]
> In this section, we will see methods like [pause()](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.pause), [resume()](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.resume), [scale_to_zero()](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.scale_to_zero), [update()](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.update) and [delete()](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.delete). All of those methods are aliases added to [InferenceEndpoint](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint) for convenience. If you prefer, you can also use the generic methods defined in `HfApi`: [pause_inference_endpoint()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.pause_inference_endpoint), [resume_inference_endpoint()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.resume_inference_endpoint), [scale_to_zero_inference_endpoint()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.scale_to_zero_inference_endpoint), [update_inference_endpoint()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.update_inference_endpoint), and [delete_inference_endpoint()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.delete_inference_endpoint).

### Pause or scale to zero

To reduce costs when your Inference Endpoint is not in use, you can choose to either pause it using [pause()](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.pause) or scale it to zero using [scale_to_zero()](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.scale_to_zero).

> [!TIP]
> An Inference Endpoint that is *paused* or *scaled to zero* doesn't cost anything. The difference between those two is that a *paused* endpoint needs to be explicitly *resumed* using [resume()](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.resume). On the contrary, a *scaled to zero* endpoint will automatically start if an inference call is made to it, with an additional cold start delay. An Inference Endpoint can also be configured to scale to zero automatically after a certain period of inactivity.

```py
# Pause and resume endpoint
>>> endpoint.pause()
InferenceEndpoint(name='my-endpoint-name', namespace='Wauplin', repository='gpt2', status='paused', url=None)
>>> endpoint.resume()
InferenceEndpoint(name='my-endpoint-name', namespace='Wauplin', repository='gpt2', status='pending', url=None)
>>> endpoint.wait().client.text_generation(...)
...

# Scale to zero
>>> endpoint.scale_to_zero()
InferenceEndpoint(name='my-endpoint-name', namespace='Wauplin', repository='gpt2', status='scaledToZero', url='https://jpj7k2q4j805b727.us-east-1.aws.endpoints.huggingface.cloud')
# Endpoint is not 'running' but still has a URL and will restart on first call.
```

Or via CLI:

```bash
hf endpoints pause my-endpoint-name
hf endpoints resume my-endpoint-name
hf endpoints scale-to-zero my-endpoint-name
```

### Update model or hardware requirements

In some cases, you might also want to update your Inference Endpoint without creating a new one. You can either update the hosted model or the hardware requirements to run the model. You can do this using [update()](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.update):

```py
# Change target model
>>> endpoint.update(repository="gpt2-large")
InferenceEndpoint(name='my-endpoint-name', namespace='Wauplin', repository='gpt2-large', status='pending', url=None)

# Update number of replicas
>>> endpoint.update(min_replica=2, max_replica=6)
InferenceEndpoint(name='my-endpoint-name', namespace='Wauplin', repository='gpt2-large', status='pending', url=None)

# Update to larger instance
>>> endpoint.update(accelerator="cpu", instance_size="x4", instance_type="intel-icl")
InferenceEndpoint(name='my-endpoint-name', namespace='Wauplin', repository='gpt2-large', status='pending', url=None)
```

Or via CLI:

```bash
hf endpoints update my-endpoint-name --repo gpt2-large
hf endpoints update my-endpoint-name --min-replica 2 --max-replica 6
hf endpoints update my-endpoint-name --accelerator cpu --instance-size x4 --instance-type intel-icl
```

### Delete the endpoint

Finally, if you won't use the Inference Endpoint anymore, you can simply call [delete()](/docs/huggingface_hub/main/en/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.delete).

> [!WARNING]
> This is a non-revertible action that will completely remove the endpoint, including its configuration, logs and usage metrics. You cannot restore a deleted Inference Endpoint.
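
As with the other lifecycle methods, this takes a single call (a sketch; it assumes an endpoint named `my-endpoint-name` exists in your namespace):

```py
>>> from huggingface_hub import get_inference_endpoint
>>> endpoint = get_inference_endpoint("my-endpoint-name")
>>> endpoint.delete()
```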


## An end-to-end example

A typical use case of Inference Endpoints is to process a batch of jobs at once to limit the infrastructure costs. You can automate this process using what we saw in this guide:

```py
>>> import asyncio
>>> from huggingface_hub import create_inference_endpoint

# Start endpoint + wait until initialized
>>> endpoint = create_inference_endpoint(name="batch-endpoint",...).wait()

# Run inference
>>> client = endpoint.client
>>> results = [client.text_generation(...) for job in jobs]

# Or with asyncio
>>> async_client = endpoint.async_client
>>> results = asyncio.gather(*[async_client.text_generation(...) for job in jobs])

# Pause endpoint
>>> endpoint.pause()
```

Or if your Inference Endpoint already exists and is paused:

```py
>>> import asyncio
>>> from huggingface_hub import get_inference_endpoint

# Get endpoint + wait until initialized
>>> endpoint = get_inference_endpoint("batch-endpoint").resume().wait()

# Run inference
>>> async_client = endpoint.async_client
>>> results = asyncio.gather(*[async_client.text_generation(...) for job in jobs])

# Pause endpoint
>>> endpoint.pause()
```


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/guides/inference_endpoints.md" />

### Search the Hub
https://huggingface.co/docs/huggingface_hub/main/guides/search.md

# Search the Hub

In this tutorial, you will learn how to search models, datasets and spaces on the Hub using `huggingface_hub`.

## How to list repositories?

The `huggingface_hub` library includes an HTTP client [HfApi](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi) to interact with the Hub.
Among other things, it can list models, datasets and spaces stored on the Hub:

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> models = api.list_models()
```

The output of [list_models()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.list_models) is an iterator over the models stored on the Hub.

Similarly, you can use [list_datasets()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.list_datasets) to list datasets and [list_spaces()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.list_spaces) to list Spaces.

## How to filter repositories?

Listing repositories is great, but now you might want to filter your search.
The list helpers accept several parameters such as:
- `filter`
- `author`
- `search`
- ...

Let's see an example to get all models on the Hub that do image classification, have been trained on the ImageNet dataset, and run with PyTorch.

```py
models = api.list_models(filter=["image-classification", "pytorch", "imagenet"])
```

While filtering, you can also sort the models and take only the top results. For example,
the following example fetches the top 5 most downloaded datasets on the Hub:

```py
>>> from huggingface_hub import list_datasets
>>> list(list_datasets(sort="downloads", direction=-1, limit=5))
[DatasetInfo(
	id='argilla/databricks-dolly-15k-curated-en',
	author='argilla',
	sha='4dcd1dedbe148307a833c931b21ca456a1fc4281',
	last_modified=datetime.datetime(2023, 10, 2, 12, 32, 53, tzinfo=datetime.timezone.utc),
	private=False,
	downloads=8889377,
	(...)
```



To explore available filters on the Hub, visit [models](https://huggingface.co/models) and [datasets](https://huggingface.co/datasets) pages
in your browser, search for some parameters and look at the values in the URL.


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/guides/search.md" />

### Understand caching
https://huggingface.co/docs/huggingface_hub/main/guides/manage-cache.md

# Understand caching

`huggingface_hub` uses the local disk for two caches that avoid re-downloading items. The first cache is a file-based cache, which caches individual files downloaded from the Hub and ensures that the same file is not downloaded again when a repo gets updated. The second cache is a chunk cache, where each chunk represents a byte range of a file; chunks shared across files are only downloaded once.

## File-based caching

The Hugging Face Hub cache-system is designed to be the central cache shared across libraries
that depend on the Hub. It was updated in v0.8.0 to prevent re-downloading the same files
across revisions.

The caching system is designed as follows:

```
<CACHE_DIR>
├─ <MODELS>
├─ <DATASETS>
├─ <SPACES>
```

The default `<CACHE_DIR>` is `~/.cache/huggingface/hub`. However, it is customizable with the `cache_dir` argument on all methods, or by specifying either `HF_HOME` or `HF_HUB_CACHE` environment variable.
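
The precedence between these options can be sketched in a few lines (illustrative only; `resolve_hub_cache` is a hypothetical helper, not part of the library):

```python
import os
from pathlib import Path

def resolve_hub_cache() -> Path:
    """Mimic the precedence used to locate the hub cache directory."""
    # 1. An explicit HF_HUB_CACHE wins over everything else
    if os.environ.get("HF_HUB_CACHE"):
        return Path(os.environ["HF_HUB_CACHE"]).expanduser()
    # 2. Otherwise derive it from HF_HOME (which defaults to ~/.cache/huggingface)
    hf_home = os.environ.get("HF_HOME", "~/.cache/huggingface")
    return Path(hf_home).expanduser() / "hub"
```

Passing `cache_dir=...` to a download method bypasses this resolution entirely for that call.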

Models, datasets and spaces share a common root. Each repository folder is named after
the repository type, the namespace (organization or username) if it exists, and the
repository name:

```
<CACHE_DIR>
├─ models--julien-c--EsperBERTo-small
├─ models--lysandrejik--arxiv-nlp
├─ models--bert-base-cased
├─ datasets--glue
├─ datasets--huggingface--DataMeasurementsFiles
├─ spaces--dalle-mini--dalle-mini
```

It is within these folders that all files will now be downloaded from the Hub. Caching ensures that
a file isn't downloaded twice if it already exists and wasn't updated; but if it was updated,
and you're asking for the latest file, then it will download the latest file (while keeping
the previous file intact in case you need it again).

In order to achieve this, all folders contain the same skeleton:

```
<CACHE_DIR>
├─ datasets--glue
│  ├─ refs
│  ├─ blobs
│  ├─ snapshots
...
```

Each folder is designed to contain the following:

### Refs

The `refs` folder contains files which indicate the latest revision of the given reference. For example,
if we have previously fetched a file from the `main` branch of a repository, the `refs`
folder will contain a file named `main`, which will itself contain the commit identifier of the current head.

If the latest commit of `main` has `aaaaaa` as identifier, then it will contain `aaaaaa`.

If that same branch gets updated with a new commit, that has `bbbbbb` as an identifier, then
re-downloading a file from that reference will update the `refs/main` file to contain `bbbbbb`.
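
Resolving a reference to a commit hash therefore amounts to reading one small file. A minimal sketch (not the library's actual code; `resolve_revision` is hypothetical):

```python
from pathlib import Path
import tempfile

def resolve_revision(repo_dir: Path, revision: str) -> str:
    """Resolve a ref name (e.g. 'main') to a commit hash via the refs folder."""
    ref_file = repo_dir / "refs" / revision
    if ref_file.is_file():
        return ref_file.read_text().strip()
    # Not a known ref: assume the caller passed a commit hash directly
    return revision

# Toy cache layout: refs/main points at commit 'aaaaaa'
repo = Path(tempfile.mkdtemp()) / "models--user--repo"
(repo / "refs").mkdir(parents=True)
(repo / "refs" / "main").write_text("aaaaaa")
print(resolve_revision(repo, "main"))  # aaaaaa
```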

### Blobs

The `blobs` folder contains the actual files that we have downloaded. Each file is named after its hash.

### Snapshots

The `snapshots` folder contains symlinks to the blobs mentioned above. It is itself made up of several folders:
one per known revision!

In the explanation above, we had initially fetched a file from the `aaaaaa` revision, before fetching a file from
the `bbbbbb` revision. In this situation, we would now have two folders in the `snapshots` folder: `aaaaaa`
and `bbbbbb`.

Each of these folders contains symlinks named after the files we have downloaded. For example,
if we had downloaded the `README.md` file at revision `aaaaaa`, we would have the following path:

```
<CACHE_DIR>/<REPO_NAME>/snapshots/aaaaaa/README.md
```

That `README.md` file is actually a symlink linking to the blob that has the hash of the file.

By creating the skeleton this way we open the mechanism to file sharing: if the same file was fetched in
revision `bbbbbb`, it would have the same hash and the file would not need to be re-downloaded.
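
This sharing mechanism can be illustrated with a toy sketch (the real cache computes hashes differently and handles many more edge cases):

```python
import hashlib
import os
import tempfile
from pathlib import Path

def store_file(repo_dir: Path, revision: str, name: str, content: bytes) -> Path:
    """Store content as a blob keyed by its hash and symlink it from the snapshot."""
    blob_dir = repo_dir / "blobs"
    snapshot_dir = repo_dir / "snapshots" / revision
    blob_dir.mkdir(parents=True, exist_ok=True)
    snapshot_dir.mkdir(parents=True, exist_ok=True)
    blob = blob_dir / hashlib.sha256(content).hexdigest()
    if not blob.exists():  # identical content is only written once
        blob.write_bytes(content)
    link = snapshot_dir / name
    if not link.exists():
        os.symlink(blob, link)
    return blob

repo = Path(tempfile.mkdtemp()) / "models--user--repo"
b1 = store_file(repo, "aaaaaa", "README.md", b"hello")
b2 = store_file(repo, "bbbbbb", "README.md", b"hello")  # unchanged file
print(b1 == b2)  # True: both revisions point at the same blob
```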

### .no_exist (advanced)

In addition to the `blobs`, `refs` and `snapshots` folders, you might also find a `.no_exist` folder
in your cache. This folder keeps track of files that you've tried to download once but don't exist
on the Hub. Its structure is the same as the `snapshots` folder with 1 subfolder per known revision:

```
<CACHE_DIR>/<REPO_NAME>/.no_exist/aaaaaa/config_that_does_not_exist.json
```

Unlike the `snapshots` folder, the files here are simple empty files (no symlinks). In this example,
the file `"config_that_does_not_exist.json"` does not exist on the Hub for the revision `"aaaaaa"`.
As it only stores empty files, this folder is negligible in terms of disk usage.

So now you might wonder, why is this information even relevant?
In some cases, a framework tries to load optional files for a model. Saving the non-existence
of optional files makes it faster to load a model as it saves 1 HTTP call per possible optional file.
This is for example the case in `transformers` where each tokenizer can support additional files.
The first time you load the tokenizer on your machine, it will cache which optional files exist (and
which don't) to make the loading time faster for the next initializations.

To test if a file is cached locally (without making any HTTP request), you can use the [try_to_load_from_cache()](/docs/huggingface_hub/main/en/package_reference/cache#huggingface_hub.try_to_load_from_cache)
helper. It will either return the filepath (if the file exists and is cached), the object `_CACHED_NO_EXIST` (if its
non-existence is cached) or `None` (if we don't know).

```python
from huggingface_hub import try_to_load_from_cache, _CACHED_NO_EXIST

filepath = try_to_load_from_cache(repo_id="bert-base-cased", filename="config.json")
if isinstance(filepath, str):
    # file exists and is cached
    ...
elif filepath is _CACHED_NO_EXIST:
    # non-existence of file is cached
    ...
else:
    # file is not cached
    ...
```

### In practice

In practice, your cache should look like the following tree:

```text
    [  96]  .
    └── [ 160]  models--julien-c--EsperBERTo-small
        ├── [ 160]  blobs
        │   ├── [321M]  403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd
        │   ├── [ 398]  7cb18dc9bafbfcf74629a4b760af1b160957a83e
        │   └── [1.4K]  d7edf6bd2a681fb0175f7735299831ee1b22b812
        ├── [  96]  refs
        │   └── [  40]  main
        └── [ 128]  snapshots
            ├── [ 128]  2439f60ef33a0d46d85da5001d52aeda5b00ce9f
            │   ├── [  52]  README.md -> ../../blobs/d7edf6bd2a681fb0175f7735299831ee1b22b812
            │   └── [  76]  pytorch_model.bin -> ../../blobs/403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd
            └── [ 128]  bbc77c8132af1cc5cf678da3f1ddf2de43606d48
                ├── [  52]  README.md -> ../../blobs/7cb18dc9bafbfcf74629a4b760af1b160957a83e
                └── [  76]  pytorch_model.bin -> ../../blobs/403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd
```

### Limitations

In order to have an efficient cache-system, `huggingface_hub` uses symlinks. However,
symlinks are not supported on all machines. This is a known limitation especially on
Windows. When this is the case, `huggingface_hub` does not use the `blobs/` directory and
directly stores the files in the `snapshots/` directory instead. This workaround allows
users to download and cache files from the Hub exactly the same way. Tools to inspect
and delete the cache (see below) are also supported. However, the cache-system is less
efficient as a single file might be downloaded several times if multiple revisions of
the same repo are downloaded.

If you want to benefit from the symlink-based cache-system on a Windows machine, you
either need to [activate Developer Mode](https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development)
or to run Python as an administrator.

When symlinks are not supported, a warning message is displayed to the user to alert
them they are using a degraded version of the cache-system. This warning can be disabled
by setting the `HF_HUB_DISABLE_SYMLINKS_WARNING` environment variable to true.

## Chunk-based caching (Xet)

To provide more efficient file transfers, `hf_xet` adds a `xet` directory to the existing `huggingface_hub` cache, creating an additional caching layer to enable chunk-based deduplication. This cache holds chunks (immutable byte ranges of files, ~64KB in size) and shards (a data structure that maps files to chunks). For more information on the Xet Storage system, see this [section](https://huggingface.co/docs/hub/xet/index).

The `xet` directory, located at `~/.cache/huggingface/xet` by default, contains two caches, utilized for uploads and downloads. It has the following structure:

```bash
<CACHE_DIR>
├─ xet
│  ├─ environment_identifier
│  │  ├─ chunk_cache
│  │  ├─ shard_cache
│  │  ├─ staging
```

The `environment_identifier` directory is an encoded string (it may appear on your machine as `https___cas_serv-tGqkUaZf_CBPHQ6h`). It is used during development, allowing local and production versions of the cache to exist alongside each other, and when downloading from repositories that reside in different [storage regions](https://huggingface.co/docs/hub/storage-regions). You may see multiple such entries in the `xet` directory, each corresponding to a different environment, but their internal structure is the same.

The internal directories serve the following purposes:
* `chunk_cache` contains cached data chunks that are used to speed up downloads.
* `shard_cache` contains cached shards that are utilized on the upload path.
* `staging` is a workspace designed to support resumable uploads.

These are documented below.

Note that the `xet` caching system, like the rest of `hf_xet`, is fully integrated with `huggingface_hub`. If you use the existing APIs for interacting with cached assets, there is no need to update your workflow. The `xet` caches are built as an optimization layer on top of the existing `hf_xet` chunk-based deduplication and the `huggingface_hub` cache system.


### `chunk_cache`

This cache is used on the download path. The cache directory structure is based on a base64-encoded hash from the content-addressed store (CAS) that backs each Xet-enabled repository. A CAS hash serves as the key to look up the offsets of where the data is stored. Note: as of `hf_xet` 1.2.0, the `chunk_cache` is disabled by default. To enable it, set the `HF_XET_CHUNK_CACHE_SIZE_BYTES` environment variable to the appropriate size prior to launching the Python process.

At the topmost level, the first two letters of the base64-encoded CAS hash are used to create a subdirectory in the `chunk_cache` (keys that share these first two letters are grouped here). The inner levels are made up of subdirectories with the full key as the directory name. At the base are the cache items, which are ranges of blocks that contain the cached chunks.

```bash
<CACHE_DIR>
├─ xet
│  ├─ chunk_cache
│  │  ├─ A1
│  │  │  ├─ A1GerURLUcISVivdseeoY1PnYifYkOaCCJ7V5Q9fjgxkZWZhdWx0
│  │  │  │  ├─ AAAAAAEAAAA5DQAAAAAAAIhRLjDI3SS5jYs4ysNKZiJy9XFI8CN7Ww0UyEA9KPD9
│  │  │  │  ├─ AQAAAAIAAABzngAAAAAAAPNqPjd5Zby5aBvabF7Z1itCx0ryMwoCnuQcDwq79jlB

```
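
The key-to-path mapping above boils down to a simple prefix scheme (an illustrative sketch; the actual key encoding details differ):

```python
from pathlib import Path

def chunk_cache_path(cache_dir: Path, cas_key: str) -> Path:
    """Group cache entries under a subdirectory named after the key's first two characters."""
    return cache_dir / "xet" / "chunk_cache" / cas_key[:2] / cas_key

key = "A1GerURLUcISVivdseeoY1PnYifYkOaCCJ7V5Q9fjgxkZWZhdWx0"
print(chunk_cache_path(Path("/cache"), key))
```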

When requesting a file, the first thing `hf_xet` does is communicate with Xet storage’s content addressed store (CAS) for reconstruction information. The reconstruction information contains information about the CAS keys required to download the file in its entirety. 

Before executing the requests for the CAS keys, the `chunk_cache` is consulted. If a key in the cache matches a CAS key, then there is no reason to issue a request for that content. `hf_xet` uses the chunks stored in the directory instead.

As the `chunk_cache` is purely an optimization, not a guarantee, `hf_xet` utilizes a computationally efficient eviction policy. When the `chunk_cache` is full (see `Limits and Limitations` below), `hf_xet` implements a random eviction policy when selecting an eviction candidate. This significantly reduces the overhead of managing a robust caching system (e.g., LRU) while still providing most of the benefits of caching chunks. 

### `shard_cache`

This cache is used when uploading content to the Hub. The directory is flat, consisting only of shard files, each named after its ID.

```sh
<CACHE_DIR>
├─ xet
│  ├─ shard_cache
│  │  ├─ 1fe4ffd5cf0c3375f1ef9aec5016cf773ccc5ca294293d3f92d92771dacfc15d.mdb
│  │  ├─ 906ee184dc1cd0615164a89ed64e8147b3fdccd1163d80d794c66814b3b09992.mdb
│  │  ├─ ceeeb7ea4cf6c0a8d395a2cf9c08871211fbbd17b9b5dc1005811845307e6b8f.mdb
│  │  ├─ e8535155b1b11ebd894c908e91a1e14e3461dddd1392695ddc90ae54a548d8b2.mdb
```

The `shard_cache` contains shards that are: 

- Locally generated and successfully uploaded to the CAS
- Downloaded from CAS as part of the global deduplication algorithm

Shards provide a mapping between files and chunks. During uploads, each file is chunked and the hash of the chunk is saved. Every shard in the cache is then consulted. If a shard contains a chunk hash that is present in the local file being uploaded, then that chunk can be discarded as it is already stored in CAS. 
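
The consultation step can be sketched with fixed-size chunks (a toy illustration; real Xet chunking is content-defined, ~64KB on average, and shards store more than bare hashes):

```python
import hashlib

CHUNK_SIZE = 64 * 1024  # toy fixed-size chunking for illustration

def chunks_to_upload(data: bytes, known_hashes: set[str]) -> list[bytes]:
    """Return only the chunks whose hashes are not already recorded in a shard."""
    missing = []
    for start in range(0, len(data), CHUNK_SIZE):
        chunk = data[start:start + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in known_hashes:
            missing.append(chunk)
            known_hashes.add(digest)  # later duplicates within the file are skipped too
    return missing

data = b"a" * CHUNK_SIZE + b"b" * CHUNK_SIZE + b"a" * CHUNK_SIZE
print(len(chunks_to_upload(data, set())))  # 2: the repeated 'a' chunk is deduplicated
```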

All shards have an expiration date of 3-4 weeks from when they are downloaded. Shards that are expired are not loaded during upload and are deleted one week after expiration. 

### `staging`

When an upload terminates before the new content has been committed to the repository, you will need to resume the file transfer. However, it is possible that some chunks were successfully uploaded prior to the interruption. 

So that you do not have to restart from the beginning, the `staging` directory acts as a workspace during uploads, storing metadata for successfully uploaded chunks. The `staging` directory has the following shape:

```
<CACHE_DIR>
├─ xet
│  ├─ staging
│  │  ├─ shard-session
│  │  │  ├─ 906ee184dc1cd0615164a89ed64e8147b3fdccd1163d80d794c66814b3b09992.mdb
│  │  │  ├─ xorb-metadata
│  │  │  │  ├─ 1fe4ffd5cf0c3375f1ef9aec5016cf773ccc5ca294293d3f92d92771dacfc15d.mdb
```

As files are processed and chunks successfully uploaded, their metadata is stored in `xorb-metadata` as a shard. Upon resuming an upload session, each file is processed again and the shards in this directory are consulted. Any content that was successfully uploaded is skipped, and any new content is uploaded (and its metadata saved). 

Meanwhile, `shard-session` stores file and chunk information for processed files. On successful completion of an upload, the content from these shards is moved to the more persistent `shard-cache`.

### Limits and Limitations

The `chunk_cache` is limited to 10GB in size while the `shard_cache` has a soft limit of 4GB. By design, neither cache exposes high-level APIs, although their sizes are configurable through the `HF_XET_CHUNK_CACHE_SIZE_BYTES` and `HF_XET_SHARD_CACHE_SIZE_LIMIT` environment variables.

These caches are used primarily to facilitate the reconstruction (download) or upload of a file. To interact with the assets themselves, it’s recommended that you use the [`huggingface_hub` cache system APIs](https://huggingface.co/docs/huggingface_hub/guides/manage-cache).

If you need to reclaim the space utilized by either cache or need to debug any potential cache-related issues, simply remove the `xet` cache entirely by running `rm -rf <cache_dir>/xet`, where `<cache_dir>` is the location of your Hugging Face cache, typically `~/.cache/huggingface`.

Example of a full `xet` cache directory tree:

```sh
<CACHE_DIR>
├─ xet
│  ├─ chunk_cache
│  │  ├─ L1
│  │  │  ├─ L1GerURLUcISVivdseeoY1PnYifYkOaCCJ7V5Q9fjgxkZWZhdWx0
│  │  │  │  ├─ AAAAAAEAAAA5DQAAAAAAAIhRLjDI3SS5jYs4ysNKZiJy9XFI8CN7Ww0UyEA9KPD9
│  │  │  │  ├─ AQAAAAIAAABzngAAAAAAAPNqPjd5Zby5aBvabF7Z1itCx0ryMwoCnuQcDwq79jlB
│  ├─ shard_cache
│  │  ├─ 1fe4ffd5cf0c3375f1ef9aec5016cf773ccc5ca294293d3f92d92771dacfc15d.mdb
│  │  ├─ 906ee184dc1cd0615164a89ed64e8147b3fdccd1163d80d794c66814b3b09992.mdb
│  │  ├─ ceeeb7ea4cf6c0a8d395a2cf9c08871211fbbd17b9b5dc1005811845307e6b8f.mdb
│  │  ├─ e8535155b1b11ebd894c908e91a1e14e3461dddd1392695ddc90ae54a548d8b2.mdb
│  ├─ staging
│  │  ├─ shard-session
│  │  │  ├─ 906ee184dc1cd0615164a89ed64e8147b3fdccd1163d80d794c66814b3b09992.mdb
│  │  │  ├─ xorb-metadata
│  │  │  │  ├─ 1fe4ffd5cf0c3375f1ef9aec5016cf773ccc5ca294293d3f92d92771dacfc15d.mdb
```

To learn more about Xet Storage, see this [section](https://huggingface.co/docs/hub/xet/index).

## Caching assets

In addition to caching files from the Hub, downstream libraries often need to cache
other files related to HF but not handled directly by `huggingface_hub` (for example: files
downloaded from GitHub, preprocessed data, logs,...). In order to cache those files,
called `assets`, one can use [cached_assets_path()](/docs/huggingface_hub/main/en/package_reference/cache#huggingface_hub.cached_assets_path). This small helper generates paths
in the HF cache in a unified way, based on the name of the library requesting it and
optionally on a namespace and a subfolder name. The goal is to let every downstream
library manage its assets its own way (e.g. no rule on the structure) as long as it
stays in the right assets folder. Those libraries can then leverage tools from
`huggingface_hub` to manage the cache, in particular scanning and deleting parts of the
assets from a CLI command.

```py
from huggingface_hub import cached_assets_path

assets_path = cached_assets_path(library_name="datasets", namespace="SQuAD", subfolder="download")
something_path = assets_path / "something.json"  # Do anything you like in your assets folder!
```

> [!TIP]
> [cached_assets_path()](/docs/huggingface_hub/main/en/package_reference/cache#huggingface_hub.cached_assets_path) is the recommended way to store assets but is not mandatory. If
> your library already uses its own cache, feel free to use it!

### Assets in practice

In practice, your assets cache should look like the following tree:

```text
    assets/
    └── datasets/
    │   ├── SQuAD/
    │   │   ├── downloaded/
    │   │   ├── extracted/
    │   │   └── processed/
    │   ├── Helsinki-NLP--tatoeba_mt/
    │       ├── downloaded/
    │       ├── extracted/
    │       └── processed/
    └── transformers/
        ├── default/
        │   ├── something/
        ├── bert-base-cased/
        │   ├── default/
        │   └── training/
    hub/
    └── models--julien-c--EsperBERTo-small/
        ├── blobs/
        │   ├── (...)
        │   ├── (...)
        ├── refs/
        │   └── (...)
        └── [ 128]  snapshots/
            ├── 2439f60ef33a0d46d85da5001d52aeda5b00ce9f/
            │   ├── (...)
            └── bbc77c8132af1cc5cf678da3f1ddf2de43606d48/
                └── (...)
```

## Manage your file-based cache

### Inspect your cache

At the moment, cached files are never deleted from your local directory: when you download
a new revision of a branch, previous files are kept in case you need them again.
Therefore it can be useful to inspect your cache directory in order to know which repos
and revisions are taking the most disk space. `huggingface_hub` provides helpers you can
use from the `hf` CLI or from Python.

**Inspect cache from the terminal**

Run `hf cache ls` to explore what is stored locally. By default the command aggregates
information by repository:

```text
➜ hf cache ls
ID                                   SIZE   LAST_ACCESSED LAST_MODIFIED REFS
------------------------------------ ------- ------------- ------------- -------------------
dataset/glue                         116.3K 4 days ago     4 days ago     2.4.0 main 1.17.0
dataset/google/fleurs                 64.9M 1 week ago     1 week ago     main refs/pr/1
model/Jean-Baptiste/camembert-ner    441.0M 2 weeks ago    16 hours ago   main
model/bert-base-cased                  1.9G 1 week ago     2 years ago
model/t5-base                          10.1K 3 months ago   3 months ago   main
model/t5-small                        970.7M 3 days ago     3 days ago     main refs/pr/1

Found 6 repo(s) for a total of 12 revision(s) and 3.4G on disk.
```

Add `--revisions` to list every cached snapshot and chain filters to focus on what
matters. Filters understand human-friendly sizes and durations, so expressions such as
`size>1GB` or `accessed>30d` work out of the box:

```text
➜ hf cache ls --revisions --filter "size>1GB" --filter "accessed>30d"
ID                                   REVISION            SIZE   LAST_MODIFIED REFS
------------------------------------ ------------------ ------- ------------- -------------------
model/bert-base-cased                6d1d7a1a2a6cf4c2    1.9G  2 years ago
model/t5-small                       1c610f6b3f5e7d8a    1.1G  3 months ago  main

Found 2 repo(s) for a total of 2 revision(s) and 3.0G on disk.
```

Need machine-friendly output? Use `--format json` to get structured objects or
`--format csv` for spreadsheets. Alternatively `--quiet` prints only identifiers (one
per line) so you can pipe them into other tooling. Use `--sort` to order entries by `accessed`, `modified`, `name`, or `size` (append `:asc` or `:desc` to control order), and `--limit` to restrict results to the top N entries. Combine these options with
`--cache-dir` when you need to inspect a cache stored outside of `HF_HOME`.
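
A hedged sketch of how such a human-friendly size expression might be parsed (`parse_size` is a hypothetical helper, not the CLI's actual parser):

```python
import re

UNITS = {"K": 1e3, "M": 1e6, "G": 1e9, "T": 1e12}

def parse_size(expr: str) -> float:
    """Parse a human-friendly size like '1GB' or '500M' into a number of bytes."""
    m = re.fullmatch(r"([\d.]+)\s*([KMGT])?B?", expr.strip(), re.IGNORECASE)
    if not m:
        raise ValueError(f"invalid size: {expr!r}")
    value, unit = m.groups()
    return float(value) * (UNITS[unit.upper()] if unit else 1)

print(parse_size("1GB"))   # 1000000000.0
print(parse_size("500M"))  # 500000000.0
```

A duration filter like `accessed>30d` can be handled the same way, mapping unit suffixes to seconds instead of bytes.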

**Filter with common shell tools**

Tabular output means you can keep using the tooling you already know. For instance, the
snippet below finds every cached revision related to `t5-small`:

```text
➜ hf cache ls --revisions | grep "t5-small"
model/t5-small                       1c610f6b3f5e7d8a    1.1G  3 months ago  main
model/t5-small                       8f3ad1c90fed7a62    820.1M 2 weeks ago   refs/pr/1
```

**Inspect cache from Python**

For more advanced usage, use [scan_cache_dir()](/docs/huggingface_hub/main/en/package_reference/cache#huggingface_hub.scan_cache_dir), which is the Python utility called by
the CLI tool.

You can use it to get a detailed report structured around 4 dataclasses:

- [HFCacheInfo](/docs/huggingface_hub/main/en/package_reference/cache#huggingface_hub.HFCacheInfo): complete report returned by [scan_cache_dir()](/docs/huggingface_hub/main/en/package_reference/cache#huggingface_hub.scan_cache_dir)
- [CachedRepoInfo](/docs/huggingface_hub/main/en/package_reference/cache#huggingface_hub.CachedRepoInfo): information about a cached repo
- [CachedRevisionInfo](/docs/huggingface_hub/main/en/package_reference/cache#huggingface_hub.CachedRevisionInfo): information about a cached revision (e.g. "snapshot") inside a repo
- [CachedFileInfo](/docs/huggingface_hub/main/en/package_reference/cache#huggingface_hub.CachedFileInfo): information about a cached file in a snapshot

Here is a simple usage example. See reference for details.

```py
>>> from huggingface_hub import scan_cache_dir

>>> hf_cache_info = scan_cache_dir()
HFCacheInfo(
    size_on_disk=3398085269,
    repos=frozenset({
        CachedRepoInfo(
            repo_id='t5-small',
            repo_type='model',
            repo_path=PosixPath(...),
            size_on_disk=970726914,
            nb_files=11,
            last_accessed=1662971707.3567169,
            last_modified=1662971107.3567169,
            revisions=frozenset({
                CachedRevisionInfo(
                    commit_hash='d78aea13fa7ecd06c29e3e46195d6341255065d5',
                    size_on_disk=970726339,
                    snapshot_path=PosixPath(...),
                    # No `last_accessed` as blobs are shared among revisions
                    last_modified=1662971107.3567169,
                    files=frozenset({
                        CachedFileInfo(
                            file_name='config.json',
size_on_disk=1197,
                            file_path=PosixPath(...),
                            blob_path=PosixPath(...),
                            blob_last_accessed=1662971707.3567169,
                            blob_last_modified=1662971107.3567169,
                        ),
                        CachedFileInfo(...),
                        ...
                    }),
                ),
                CachedRevisionInfo(...),
                ...
            }),
        ),
        CachedRepoInfo(...),
        ...
    }),
    warnings=[
        CorruptedCacheException("Snapshots dir doesn't exist in cached repo: ..."),
        CorruptedCacheException(...),
        ...
    ],
)
```

### Verify your cache

`huggingface_hub` can verify that your cached files match the checksums on the Hub. Use the `hf cache verify` CLI command to validate file consistency for a specific revision of a specific repository:


```bash
>>> hf cache verify meta-llama/Llama-3.2-1B-Instruct
✅ Verified 13 file(s) for 'meta-llama/Llama-3.2-1B-Instruct' (model) in ~/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B-Instruct/snapshots/9213176726f574b556790deb65791e0c5aa438b6
  All checksums match.
```

Verify a specific cached revision:

```bash
>>> hf cache verify meta-llama/Llama-3.1-8B-Instruct --revision 0e9e39f249a16976918f6564b8830bc894c89659
```

> [!TIP]
> Check the [`hf cache verify` CLI reference](../package_reference/cli#hf-cache-verify) for more details about the usage and a complete list of options.

### Clean your cache

Scanning your cache is useful, but the next step is usually to delete some portions
of it to free up space on your drive. This is possible using the
`hf cache rm` and `hf cache prune` CLI commands. You can also programmatically use the
[delete_revisions()](/docs/huggingface_hub/main/en/package_reference/cache#huggingface_hub.HFCacheInfo.delete_revisions) helper from the [HFCacheInfo](/docs/huggingface_hub/main/en/package_reference/cache#huggingface_hub.HFCacheInfo) object returned when
scanning the cache.

**Delete strategy**

To delete part of the cache, you need to pass a list of revisions to delete. The tool
defines a strategy to free up space based on this list and returns a
[DeleteCacheStrategy](/docs/huggingface_hub/main/en/package_reference/cache#huggingface_hub.DeleteCacheStrategy) object that describes which files and folders will be deleted.
The [DeleteCacheStrategy](/docs/huggingface_hub/main/en/package_reference/cache#huggingface_hub.DeleteCacheStrategy) also tells you how much space is expected to be freed.
Once you agree with the deletion, you must execute the strategy to make it effective. To
avoid discrepancies, you cannot edit a strategy object manually.

The strategy to delete revisions is the following:

- the `snapshot` folder containing the revision symlinks is deleted.
- blob files that are targeted only by revisions to be deleted are deleted as well.
- if a revision is linked to one or more `refs`, those references are deleted.
- if all revisions from a repo are deleted, the entire cached repository is deleted.

> [!TIP]
> Revision hashes are unique across all repositories. `hf cache rm` therefore accepts either
> a repo identifier (for example `model/bert-base-uncased`) or a bare revision hash; when
> passing a hash you don't need to specify the repo separately.

> [!WARNING]
> If a revision is not found in the cache, it will be silently ignored. Likewise, if a file
> or folder cannot be found while trying to delete it, a warning will be logged but no
> error is thrown. The deletion continues for other paths contained in the
> [DeleteCacheStrategy](/docs/huggingface_hub/main/en/package_reference/cache#huggingface_hub.DeleteCacheStrategy) object.

**Clean cache from the terminal**

Use `hf cache rm` to permanently delete repositories or revisions from your cache. Pass
one or more repo identifiers (for example `model/bert-base-uncased`) or revision hashes:

```text
➜ hf cache rm model/bert-base-cased
About to delete 1 repo(s) totalling 1.9G.
  - model/bert-base-cased (entire repo)
Proceed with deletion? [y/N]: y
Deleted 1 repo(s) and 1 revision(s); freed 1.9G.
```

You can also use `hf cache rm` in combination with `hf cache ls --quiet` to bulk-delete entries identified by a filter:

```bash
>>> hf cache rm $(hf cache ls --filter "accessed>1y" -q) -y
About to delete 2 repo(s) totalling 5.31G.
  - model/meta-llama/Llama-3.2-1B-Instruct (entire repo)
  - model/hexgrad/Kokoro-82M (entire repo)
Delete repo: ~/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B-Instruct
Delete repo: ~/.cache/huggingface/hub/models--hexgrad--Kokoro-82M
Cache deletion done. Saved 5.31G.
Deleted 2 repo(s) and 2 revision(s); freed 5.31G.
```

Mix repositories and revisions in the same call. Add `--dry-run` to preview the impact,
or `--yes` to skip the confirmation prompt when scripting:

```text
➜ hf cache rm model/t5-small 8f3ad1c --dry-run
About to delete 1 repo(s) and 1 revision(s) totalling 1.1G.
  - model/t5-small:
      8f3ad1c [main] 1.1G
Dry run: no files were deleted.
```

When working outside the default cache location, pair the command with
`--cache-dir PATH`.

To clean up detached snapshots in bulk, run `hf cache prune`. It automatically selects
revisions that are no longer referenced by a branch or tag:

```text
➜ hf cache prune
About to delete 3 unreferenced revision(s) (2.4G total).
  - model/t5-small:
      1c610f6b [refs/pr/1] 820.1M
      d4ec9b72 [(detached)] 640.5M
  - dataset/google/fleurs:
      2b91c8dd [(detached)] 937.6M
Proceed? [y/N]: y
Deleted 3 unreferenced revision(s); freed 2.4G.
```

Both commands support `--dry-run`, `--yes`, and `--cache-dir` so you can preview, automate,
and target alternate cache directories as needed.

**Clean cache from Python**

For more flexibility, you can also use the [delete_revisions()](/docs/huggingface_hub/main/en/package_reference/cache#huggingface_hub.HFCacheInfo.delete_revisions) method
programmatically. Here is a simple example; see the reference for details.

```py
>>> from huggingface_hub import scan_cache_dir

>>> delete_strategy = scan_cache_dir().delete_revisions(
...     "81fd1d6e7847c99f5862c9fb81387956d99ec7aa",
...     "e2983b237dccf3ab4937c97fa717319a9ca1a96d",
...     "6c0e6080953db56375760c0471a8c5f2929baf11",
... )
>>> print("Will free " + delete_strategy.expected_freed_size_str)
Will free 8.6G

>>> delete_strategy.execute()
Cache deletion done. Saved 8.6G.
```


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/guides/manage-cache.md" />

### Command Line Interface (CLI)
https://huggingface.co/docs/huggingface_hub/main/guides/cli.md

# Command Line Interface (CLI)

The `huggingface_hub` Python package comes with a built-in CLI called `hf`. This tool allows you to interact with the Hugging Face Hub directly from a terminal. For example, you can log in to your account, create a repository, upload and download files, etc. It also comes with handy features to configure your machine or manage your cache. In this guide, we will have a look at the main features of the CLI and how to use them.

> [!TIP]
> This guide covers the most important features of the `hf` CLI.
> For a complete reference of all commands and options, see the [CLI reference](../package_reference/cli.md).

## Getting started

First of all, let's install the CLI:

```
>>> pip install -U "huggingface_hub"
```

> [!TIP]
> The CLI ships with the core `huggingface_hub` package.

Alternatively, you can install the `hf` CLI with a single command:

On macOS and Linux:

```bash
>>> curl -LsSf https://hf.co/cli/install.sh | bash
```

On Windows:

```powershell
>>> powershell -ExecutionPolicy ByPass -c "irm https://hf.co/cli/install.ps1 | iex"
```

Once installed, you can check that the CLI is correctly set up:

```
>>> hf --help
Usage: hf [OPTIONS] COMMAND [ARGS]...

  Hugging Face Hub CLI

Options:
  --install-completion  Install completion for the current shell.
  --show-completion     Show completion for the current shell, to copy it or
                        customize the installation.
  --help                Show this message and exit.

Commands:
  auth                 Manage authentication (login, logout, etc.).
  cache                Manage local cache directory.
  download             Download files from the Hub.
  env                  Print information about the environment.
  jobs                 Run and manage Jobs on the Hub.
  repo                 Manage repos on the Hub.
  repo-files           Manage files in a repo on the Hub.
  upload               Upload a file or a folder to the Hub.
  upload-large-folder  Upload a large folder to the Hub.
  version              Print information about the hf version.
```

If the CLI is correctly installed, you should see a list of all the options available in the CLI. If you get an error message such as `command not found: hf`, please refer to the [Installation](../installation) guide.

> [!TIP]
> The `--help` option is very convenient for getting more details about a command. You can use it anytime to list all available options and their details. For example, `hf upload --help` provides more information on how to upload files using the CLI.

### Other installation methods

#### Using uv

You can install and run the `hf` CLI with [uv](https://docs.astral.sh/uv/). 

Make sure uv is installed (adds `uv` and `uvx` to your PATH):

```bash
>>> curl -LsSf https://astral.sh/uv/install.sh | sh
```

Then install the CLI globally and use it anywhere:

```bash
>>> uv tool install "huggingface_hub"
>>> hf auth whoami
```

Alternatively, run the CLI ephemerally with `uvx` (no global install):

```bash
>>> uvx --from huggingface_hub hf auth whoami
```

#### Using Homebrew

You can also install the CLI using [Homebrew](https://brew.sh/):

```bash
>>> brew install huggingface-cli
```

Check out the Homebrew huggingface page [here](https://formulae.brew.sh/formula/huggingface-cli) for more details.

## hf auth login

In many cases, you must be logged in to a Hugging Face account to interact with the Hub (download private repos, upload files, create PRs, etc.). To do so, you need a [User Access Token](https://huggingface.co/docs/hub/security-tokens) from your [Settings page](https://huggingface.co/settings/tokens). The User Access Token is used to authenticate your identity to the Hub. Make sure to set a token with write access if you want to upload or modify content.

Once you have your token, run the following command in your terminal:

```bash
>>> hf auth login
```

This command will prompt you for a token. Copy-paste yours and press *Enter*. Then, you'll be asked whether the token should also be saved as a git credential. Press *Enter* again (defaults to yes) if you plan to use `git` locally. Finally, the CLI calls the Hub to check that your token is valid and saves it locally.

```
_|    _|  _|    _|    _|_|_|    _|_|_|  _|_|_|  _|      _|    _|_|_|      _|_|_|_|    _|_|      _|_|_|  _|_|_|_|
_|    _|  _|    _|  _|        _|          _|    _|_|    _|  _|            _|        _|    _|  _|        _|
_|_|_|_|  _|    _|  _|  _|_|  _|  _|_|    _|    _|  _|  _|  _|  _|_|      _|_|_|    _|_|_|_|  _|        _|_|_|
_|    _|  _|    _|  _|    _|  _|    _|    _|    _|    _|_|  _|    _|      _|        _|    _|  _|        _|
_|    _|    _|_|      _|_|_|    _|_|_|  _|_|_|  _|      _|    _|_|_|      _|        _|    _|    _|_|_|  _|_|_|_|

To log in, `huggingface_hub` requires a token generated from https://huggingface.co/settings/tokens .
Enter your token (input will not be visible):
Add token as git credential? (Y/n)
Token is valid (permission: write).
Your token has been saved in your configured git credential helpers (store).
Your token has been saved to /home/wauplin/.cache/huggingface/token
Login successful
```

Alternatively, if you want to log in without being prompted, you can pass the token directly from the command line. To be more secure, we recommend passing your token as an environment variable to avoid pasting it into your command history.

```bash
# Or using an environment variable
>>> hf auth login --token $HF_TOKEN --add-to-git-credential
Token is valid (permission: write).
The token `token_name` has been saved to /home/wauplin/.cache/huggingface/stored_tokens
Your token has been saved in your configured git credential helpers (store).
Your token has been saved to /home/wauplin/.cache/huggingface/token
Login successful
The current active token is: `token_name`
```

For more details about authentication, check out [this section](../quick-start#authentication).

## hf auth whoami

If you want to know if you are logged in, you can use `hf auth whoami`. This command doesn't have any options and simply prints your username and the organizations you are a part of on the Hub:

```bash
hf auth whoami
Wauplin
orgs:  huggingface,eu-test,OAuthTesters,hf-accelerate,HFSmolCluster
```

If you are not logged in, an error message will be printed.

## hf auth logout

This command logs you out. In practice, it will delete all tokens stored on your machine. If you want to remove a specific token, you can specify the token name as an argument.

This command will not log you out if you are logged in using the `HF_TOKEN` environment variable (see [reference](../package_reference/environment_variables#hftoken)). If that is the case, you must unset the environment variable in your machine configuration.
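For example, if you logged in via the environment variable, you can clear it for the current shell session with standard shell syntax (shown here for illustration):

```bash
# `hf auth logout` cannot remove a token provided via HF_TOKEN;
# unset the variable in the current shell instead:
unset HF_TOKEN
```

To make the change permanent, also remove the `export HF_TOKEN=...` line from your shell configuration file.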

## hf download


Use the `hf download` command to download files from the Hub directly. Internally, it uses the same [hf_hub_download()](/docs/huggingface_hub/main/en/package_reference/file_download#huggingface_hub.hf_hub_download) and [snapshot_download()](/docs/huggingface_hub/main/en/package_reference/file_download#huggingface_hub.snapshot_download) helpers described in the [Download](./download) guide and prints the returned path to the terminal. In the examples below, we will walk through the most common use cases. For a full list of available options, you can run:

```bash
hf download --help
```

### Download a single file

To download a single file from a repo, simply provide the repo_id and filename as follows:

```bash
>>> hf download gpt2 config.json
downloading https://huggingface.co/gpt2/resolve/main/config.json to /home/wauplin/.cache/huggingface/hub/tmpwrq8dm5o
(…)ingface.co/gpt2/resolve/main/config.json: 100%|██████████████████████████████████| 665/665 [00:00<00:00, 2.49MB/s]
/home/wauplin/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/config.json
```

The last line printed by the command is always the path to the file on your local machine.

To download a file located in a subdirectory of the repo, provide the path of the file within the repo in POSIX format:

```bash
>>> hf download HiDream-ai/HiDream-I1-Full text_encoder/model.safetensors
```

### Download an entire repository

In some cases, you just want to download all the files from a repository. This can be done by just specifying the repo id:

```bash
>>> hf download HuggingFaceH4/zephyr-7b-beta
Fetching 23 files:   0%|                                                | 0/23 [00:00<?, ?it/s]
...
...
/home/wauplin/.cache/huggingface/hub/models--HuggingFaceH4--zephyr-7b-beta/snapshots/3bac358730f8806e5c3dc7c7e19eb36e045bf720
```

### Download multiple files

You can also download a subset of the files from a repository with a single command. This can be done in two ways. If you already have a precise list of the files you want to download, you can simply provide them sequentially:

```bash
>>> hf download gpt2 config.json model.safetensors
Fetching 2 files:   0%|                                                                        | 0/2 [00:00<?, ?it/s]
downloading https://huggingface.co/gpt2/resolve/11c5a3d5811f50298f278a704980280950aedb10/model.safetensors to /home/wauplin/.cache/huggingface/hub/tmpdachpl3o
(…)8f278a7049802950aedb10/model.safetensors: 100%|██████████████████████████████| 8.09k/8.09k [00:00<00:00, 40.5MB/s]
Fetching 2 files: 100%|████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00,  3.76it/s]
/home/wauplin/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10
```

The other approach is to provide patterns to filter which files you want to download using `--include` and `--exclude`. For example, if you want to download all safetensors files from [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0), except the files in FP16 precision:

```bash
>>> hf download stabilityai/stable-diffusion-xl-base-1.0 --include "*.safetensors" --exclude "*.fp16.*"
Fetching 8 files:   0%|                                                                         | 0/8 [00:00<?, ?it/s]
...
...
Fetching 8 files: 100%|█████████████████████████████████████████████████████████████████████████| 8/8 (...)
/home/wauplin/.cache/huggingface/hub/models--stabilityai--stable-diffusion-xl-base-1.0/snapshots/462165984030d82259a11f4367a4eed129e94a7b
```
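The `--include`/`--exclude` patterns are standard Unix-style wildcards. As a rough illustration (not the CLI's exact matching code), Python's `fnmatch` shows which filenames the patterns above would select:

```py
from fnmatch import fnmatch

files = [
    "vae/diffusion_pytorch_model.safetensors",
    "vae/diffusion_pytorch_model.fp16.safetensors",
    "README.md",
]

# Keep files matching "*.safetensors" unless they also match "*.fp16.*"
selected = [
    f for f in files
    if fnmatch(f, "*.safetensors") and not fnmatch(f, "*.fp16.*")
]
print(selected)  # only the full-precision safetensors file remains
```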

### Download a dataset or a Space

The examples above show how to download from a model repository. To download a dataset or a Space, use the `--repo-type` option:

```bash
# https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k
>>> hf download HuggingFaceH4/ultrachat_200k --repo-type dataset

# https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat
>>> hf download HuggingFaceH4/zephyr-chat --repo-type space

...
```

### Download a specific revision

The examples above show how to download from the latest commit on the main branch. To download from a specific revision (commit hash, branch name or tag), use the `--revision` option:

```bash
>>> hf download bigcode/the-stack --repo-type dataset --revision v1.1
...
```

### Download to a local folder

The recommended (and default) way to download files from the Hub is to use the cache-system. However, in some cases you want to download files and move them to a specific folder. This is useful to get a workflow closer to what git commands offer. You can do that using the `--local-dir` option.

A `.cache/huggingface/` folder is created at the root of your local directory, containing metadata about the downloaded files. This prevents re-downloading files that are already up-to-date; if the metadata has changed, the new file version is downloaded. This makes `--local-dir` optimized for pulling only the latest changes.

> [!TIP]
> For more details on how downloading to a local file works, check out the [download](./download#download-files-to-a-local-folder) guide.

```bash
>>> hf download adept/fuyu-8b model-00001-of-00002.safetensors --local-dir fuyu
...
fuyu/model-00001-of-00002.safetensors
```

### Dry-run mode

In some cases, you may want to check which files would be downloaded before actually downloading them. The `--dry-run` option lists all files in the repo and checks whether each one is already downloaded, giving you an idea of how many files need to be downloaded and their total size.

```sh
>>> hf download openai-community/gpt2 --dry-run
[dry-run] Fetching 26 files: 100%|█████████████| 26/26 [00:04<00:00,  6.26it/s]
[dry-run] Will download 11 files (out of 26) totalling 5.6G.
File                              Bytes to download
--------------------------------- -----------------
.gitattributes                    -
64-8bits.tflite                   125.2M
64-fp16.tflite                    248.3M
64.tflite                         495.8M
README.md                         -
config.json                       -
flax_model.msgpack                497.8M
generation_config.json            -
merges.txt                        -
model.safetensors                 548.1M
onnx/config.json                  -
onnx/decoder_model.onnx           653.7M
onnx/decoder_model_merged.onnx    655.2M
onnx/decoder_with_past_model.onnx 653.7M
onnx/generation_config.json       -
onnx/merges.txt                   -
onnx/special_tokens_map.json      -
onnx/tokenizer.json               -
onnx/tokenizer_config.json        -
onnx/vocab.json                   -
pytorch_model.bin                 548.1M
rust_model.ot                     702.5M
tf_model.h5                       497.9M
tokenizer.json                    -
tokenizer_config.json             -
vocab.json                        -
```

For more details, check out the [download guide](./download.md#dry-run-mode).

### Specify cache directory

If not using `--local-dir`, all files will be downloaded by default to the cache directory defined by the `HF_HOME` [environment variable](../package_reference/environment_variables#hfhome). You can specify a custom cache using `--cache-dir`:

```bash
>>> hf download adept/fuyu-8b --cache-dir ./path/to/cache
...
./path/to/cache/models--adept--fuyu-8b/snapshots/ddcacbcf5fdf9cc59ff01f6be6d6662624d9c745
```

### Specify a token

To access private or gated repositories, you must use a token. By default, the token saved locally (using `hf auth login`) will be used. If you want to authenticate explicitly, use the `--token` option:

```bash
>>> hf download gpt2 config.json --token=hf_****
/home/wauplin/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/config.json
```

### Quiet mode

By default, the `hf download` command will be verbose. It will print details such as warning messages, information about the downloaded files, and progress bars. If you want to silence all of this, use the `--quiet` option. Only the last line (i.e. the path to the downloaded files) is printed. This can prove useful if you want to pass the output to another command in a script.

```bash
>>> hf download gpt2 --quiet
/home/wauplin/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10
```

### Download timeout

On machines with slow connections, you might encounter timeout issues like this one:
```bash
httpx.TimeoutException: (TimeoutException("HTTPSConnectionPool(host='cdn-lfs-us-1.huggingface.co', port=443): Read timed out. (read timeout=10)"), '(Request ID: a33d910c-84c6-4514-8362-c705e2039d38)')
```

To mitigate this issue, you can set the `HF_HUB_DOWNLOAD_TIMEOUT` environment variable to a higher value (default is 10):
```bash
export HF_HUB_DOWNLOAD_TIMEOUT=30
```
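If you prefer configuring this from Python, set the variable before importing `huggingface_hub`, since the library reads it at import time (a sketch; the value is in seconds):

```py
import os

# Must be set before `import huggingface_hub` to take effect,
# as the library reads the variable at import time.
os.environ["HF_HUB_DOWNLOAD_TIMEOUT"] = "30"

print(os.environ["HF_HUB_DOWNLOAD_TIMEOUT"])
```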

Then rerun your download command. For more details, check out the [environment variables reference](../package_reference/environment_variables#hfhubdownloadtimeout).

## hf upload

Use the `hf upload` command to upload files to the Hub directly. Internally, it uses the same [upload_file()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_file) and [upload_folder()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_folder) helpers described in the [Upload](./upload) guide. In the examples below, we will walk through the most common use cases. For a full list of available options, you can run:

```bash
>>> hf upload --help
```

### Upload an entire folder

The default usage for this command is:

```bash
# Usage:  hf upload [repo_id] [local_path] [path_in_repo]
```

To upload the current directory at the root of the repo, use:

```bash
>>> hf upload my-cool-model . .
https://huggingface.co/Wauplin/my-cool-model/tree/main/
```

> [!TIP]
> If the repo doesn't exist yet, it will be created automatically.

You can also upload a specific folder:

```bash
>>> hf upload my-cool-model ./models .
https://huggingface.co/Wauplin/my-cool-model/tree/main/
```

Finally, you can upload a folder to a specific destination on the repo:

```bash
>>> hf upload my-cool-model ./path/to/curated/data /data/train
https://huggingface.co/Wauplin/my-cool-model/tree/main/data/train
```

### Upload a single file

You can also upload a single file by setting `local_path` to point to a file on your machine. If that's the case, `path_in_repo` is optional and will default to the name of your local file:

```bash
>>> hf upload Wauplin/my-cool-model ./models/model.safetensors
https://huggingface.co/Wauplin/my-cool-model/blob/main/model.safetensors
```

If you want to upload a single file to a specific directory, set `path_in_repo` accordingly:

```bash
>>> hf upload Wauplin/my-cool-model ./models/model.safetensors /vae/model.safetensors
https://huggingface.co/Wauplin/my-cool-model/blob/main/vae/model.safetensors
```

### Upload multiple files

To upload multiple files from a folder at once without uploading the entire folder, use the `--include` and `--exclude` patterns. It can also be combined with the `--delete` option to delete files on the repo while uploading new ones. In the example below, we sync the local Space by deleting remote files and uploading all files except the ones in `/logs`:

```bash
# Sync local Space with Hub (upload new files except from logs/, delete removed files)
>>> hf upload Wauplin/space-example --repo-type=space --exclude="/logs/*" --delete="*" --commit-message="Sync local Space with Hub"
...
```

### Upload to a dataset or Space

To upload to a dataset or a Space, use the `--repo-type` option:

```bash
>>> hf upload Wauplin/my-cool-dataset ./data /train --repo-type=dataset
...
```

### Upload to an organization

To upload content to a repo owned by an organization instead of a personal repo, you must explicitly specify it in the `repo_id`:

```bash
>>> hf upload MyCoolOrganization/my-cool-model . .
https://huggingface.co/MyCoolOrganization/my-cool-model/tree/main/
```

### Upload to a specific revision

By default, files are uploaded to the `main` branch. If you want to upload files to another branch or reference, use the `--revision` option:

```bash
# Upload files to a PR
>>> hf upload bigcode/the-stack . . --repo-type dataset --revision refs/pr/104
...
```

**Note:** if `revision` does not exist and `--create-pr` is not set, a branch will be created automatically from the `main` branch.

### Upload and create a PR

If you don't have the permission to push to a repo, you must open a PR and let the authors know about the changes you want to make. This can be done by setting the `--create-pr` option:

```bash
# Create a PR and upload the files to it
>>> hf upload bigcode/the-stack . . --repo-type dataset --create-pr
https://huggingface.co/datasets/bigcode/the-stack/blob/refs%2Fpr%2F104/
```

### Upload at regular intervals

In some cases, you might want to push regular updates to a repo. For example, this is useful if you're training a model and you want to upload the logs folder every 10 minutes. You can do this using the `--every` option:

```bash
# Upload new logs every 10 minutes
hf upload training-model logs/ --every=10
```

### Specify a commit message

Use the `--commit-message` and `--commit-description` options to set a custom message and description for your commit instead of the default ones:

```bash
>>> hf upload Wauplin/my-cool-model ./models . --commit-message="Epoch 34/50" --commit-description="Val accuracy: 68%. Check tensorboard for more details."
...
https://huggingface.co/Wauplin/my-cool-model/tree/main
```

### Specify a token

To upload files, you must use a token. By default, the token saved locally (using `hf auth login`) will be used. If you want to authenticate explicitly, use the `--token` option:

```bash
>>> hf upload Wauplin/my-cool-model ./models . --token=hf_****
...
https://huggingface.co/Wauplin/my-cool-model/tree/main
```

### Quiet mode

By default, the `hf upload` command will be verbose. It will print details such as warning messages, information about the uploaded files, and progress bars. If you want to silence all of this, use the `--quiet` option. Only the last line (i.e. the URL to the uploaded files) is printed. This can prove useful if you want to pass the output to another command in a script.

```bash
>>> hf upload Wauplin/my-cool-model ./models . --quiet
https://huggingface.co/Wauplin/my-cool-model/tree/main
```

## hf repo

`hf repo` lets you create, delete, and move repositories and update their settings on the Hugging Face Hub. It also includes subcommands to manage branches and tags.

### Create a repo

```bash
>>> hf repo create Wauplin/my-cool-model
Successfully created Wauplin/my-cool-model on the Hub.
Your repo is now available at https://huggingface.co/Wauplin/my-cool-model
```

Create a private dataset or a Space:

```bash
>>> hf repo create my-cool-dataset --repo-type dataset --private
>>> hf repo create my-gradio-space --repo-type space --space-sdk gradio
```

Use `--exist-ok` if the repo may already exist, and `--resource-group-id` to target an Enterprise resource group.

### Delete a repo

```bash
>>> hf repo delete Wauplin/my-cool-model
```

Datasets and Spaces:

```bash
>>> hf repo delete my-cool-dataset --repo-type dataset
>>> hf repo delete my-gradio-space --repo-type space
```

### Move a repo

```bash
>>> hf repo move old-namespace/my-model new-namespace/my-model
```

### Update repo settings

```bash
>>> hf repo settings Wauplin/my-cool-model --gated auto
>>> hf repo settings Wauplin/my-cool-model --private true
>>> hf repo settings Wauplin/my-cool-model --private false
```

- `--gated`: one of `auto`, `manual`, `false`
- `--private true|false`: set repository privacy

### Manage branches

```bash
>>> hf repo branch create Wauplin/my-cool-model dev
>>> hf repo branch create Wauplin/my-cool-model release-1 --revision refs/pr/104
>>> hf repo branch delete Wauplin/my-cool-model dev
```

> [!TIP]
> All commands accept `--repo-type` (one of `model`, `dataset`, `space`) and `--token` if you need to authenticate explicitly. Use `--help` on any command to see all options.


## hf repo-files

If you want to delete files from a Hugging Face repository, use the `hf repo-files` command.

### Delete files

The `hf repo-files delete <repo_id>` sub-command allows you to delete files from a repository. Here are some usage examples.

Delete a folder:
```bash
>>> hf repo-files delete Wauplin/my-cool-model folder/
Files correctly deleted from repo. Commit: https://huggingface.co/Wauplin/my-cool-mo...
```

Delete multiple files:
```bash
>>> hf repo-files delete Wauplin/my-cool-model file.txt folder/pytorch_model.bin
Files correctly deleted from repo. Commit: https://huggingface.co/Wauplin/my-cool-mo...
```

Use Unix-style wildcards to delete sets of files:
```bash
>>> hf repo-files delete Wauplin/my-cool-model "*.txt" "folder/*.bin"
Files correctly deleted from repo. Commit: https://huggingface.co/Wauplin/my-cool-mo...
```

### Specify a token

To delete files from a repo you must be authenticated and authorized. By default, the token saved locally (using `hf auth login`) will be used. If you want to authenticate explicitly, use the `--token` option:

```bash
>>> hf repo-files delete --token=hf_**** Wauplin/my-cool-model file.txt
```

## hf cache ls

Use `hf cache ls` to inspect what is stored locally in your Hugging Face cache. By default it aggregates information by repository:

```bash
>>> hf cache ls
ID                          SIZE     LAST_ACCESSED LAST_MODIFIED REFS        
--------------------------- -------- ------------- ------------- ----------- 
dataset/nyu-mll/glue          157.4M 2 days ago    2 days ago    main script 
model/LiquidAI/LFM2-VL-1.6B     3.2G 4 days ago    4 days ago    main        
model/microsoft/UserLM-8b      32.1G 4 days ago    4 days ago    main  

Found 3 repo(s) for a total of 5 revision(s) and 35.5G on disk.
```

Add `--revisions` to drill down to specific snapshots, and chain filters to focus on what matters:

```bash
>>> hf cache ls --filter "size>30g" --revisions
ID                        REVISION                                 SIZE     LAST_MODIFIED REFS 
------------------------- ---------------------------------------- -------- ------------- ---- 
model/microsoft/UserLM-8b be8f2069189bdf443e554c24e488ff3ff6952691    32.1G 4 days ago    main 

Found 1 repo(s) for a total of 1 revision(s) and 32.1G on disk.
```

The command supports several output formats for scripting: `--format json` prints structured objects, `--format csv` writes comma-separated rows, and `--quiet` prints only IDs. Use `--sort` to order entries by `accessed`, `modified`, `name`, or `size` (append `:asc` or `:desc` to control order), and `--limit` to restrict results to the top N entries. Combine these with `--cache-dir` to target alternative cache locations. See the [Manage your cache](./manage-cache) guide for advanced workflows.
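`--format json` makes the output easy to post-process in scripts. As a sketch (the field names below are assumptions about the JSON shape, so inspect the output of `hf cache ls --format json` on your installed version), you could filter repos by size in Python:

```py
import json

# Illustrative sample; the field names ("id", "size_on_disk") are
# assumptions -- check the actual JSON emitted by your CLI version.
sample = """[
  {"id": "model/microsoft/UserLM-8b", "size_on_disk": 34466000000},
  {"id": "dataset/nyu-mll/glue", "size_on_disk": 157400000}
]"""

repos = json.loads(sample)
# Keep only repos larger than 30 GB.
big = [r["id"] for r in repos if r["size_on_disk"] > 30_000_000_000]
print(big)
```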

Delete cache entries selected with `hf cache ls -q` by passing the IDs to `hf cache rm`:

```bash
>>> hf cache rm $(hf cache ls --filter "accessed>1y" -q) -y
About to delete 2 repo(s) totalling 5.31G.
  - model/meta-llama/Llama-3.2-1B-Instruct (entire repo)
  - model/hexgrad/Kokoro-82M (entire repo)
Delete repo: ~/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B-Instruct
Delete repo: ~/.cache/huggingface/hub/models--hexgrad--Kokoro-82M
Cache deletion done. Saved 5.31G.
Deleted 2 repo(s) and 2 revision(s); freed 5.31G.
```

## hf cache rm

`hf cache rm` removes cached repositories or individual revisions. Pass one or more repo IDs (`model/bert-base-uncased`) or revision hashes:

```bash
>>> hf cache rm model/LiquidAI/LFM2-VL-1.6B
About to delete 1 repo(s) totalling 3.2G.
  - model/LiquidAI/LFM2-VL-1.6B (entire repo)
Proceed with deletion? [y/N]: y
Delete repo: ~/.cache/huggingface/hub/models--LiquidAI--LFM2-VL-1.6B
Cache deletion done. Saved 3.2G.
Deleted 1 repo(s) and 2 revision(s); freed 3.2G.
```

Mix repositories and specific revisions in the same call. Use `--dry-run` to preview the impact, or `--yes` to skip the confirmation prompt, which is handy in automated scripts:

```bash
>>> hf cache rm model/t5-small 8f3ad1c --dry-run
About to delete 1 repo(s) and 1 revision(s) totalling 1.1G.
  - model/t5-small:
      8f3ad1c [main] 1.1G
Dry run: no files were deleted.
```

When working outside the default cache location, pair the command with `--cache-dir PATH`.
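If you prefer Python, the same cleanup can be scripted with `scan_cache_dir()` and the `delete_revisions()` method of the returned report, which builds a deletion plan you can review before executing. A hedged sketch (`<revision_hash>` is a placeholder):

```python
from huggingface_hub import scan_cache_dir
from huggingface_hub.errors import CacheNotFound

try:
    cache_info = scan_cache_dir()
except CacheNotFound:
    cache_info = None  # no cache directory on this machine yet

# Build a deletion plan for one or more revision hashes, inspect it, then run it:
# strategy = cache_info.delete_revisions("<revision_hash>")
# print("Will free", strategy.expected_freed_size_str)
# strategy.execute()
```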

## hf cache prune

`hf cache prune` is a convenience shortcut that deletes every detached (unreferenced) revision in your cache. This keeps only revisions that are still reachable through a branch or tag:

```bash
>>> hf cache prune
About to delete 3 unreferenced revision(s) (2.4G total).
  - model/t5-small:
      1c610f6b [refs/pr/1] 820.1M
      d4ec9b72 [(detached)] 640.5M
  - dataset/google/fleurs:
      2b91c8dd [(detached)] 937.6M
Proceed? [y/N]: y
Deleted 3 unreferenced revision(s); freed 2.4G.
```

As with the other cache commands, `--dry-run`, `--yes`, and `--cache-dir` are available. Refer to the [Manage your cache](./manage-cache) guide for more examples.
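In Python, the detached revisions can be collected by checking each cached revision's `refs` set. The sketch below is a simplified version: it keeps any revision that still has a ref, whereas the CLI (as shown above) also prunes revisions only reachable from PR refs:

```python
from huggingface_hub import scan_cache_dir

def unreferenced_hashes(cache_info):
    """Commit hashes of cached revisions not reachable from any ref."""
    return [
        rev.commit_hash
        for repo in cache_info.repos
        for rev in repo.revisions
        if not rev.refs  # empty set => detached revision
    ]

# Usage (equivalent in spirit to `hf cache prune`):
# cache_info = scan_cache_dir()
# cache_info.delete_revisions(*unreferenced_hashes(cache_info)).execute()
```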

## hf cache verify

Use `hf cache verify` to validate local files against their checksums on the Hub. You can verify either a cache snapshot or a regular local directory.

Examples:

```bash
# Verify main revision of a model in cache
>>> hf cache verify deepseek-ai/DeepSeek-OCR

# Verify a specific revision
>>> hf cache verify deepseek-ai/DeepSeek-OCR --revision refs/pr/5
>>> hf cache verify deepseek-ai/DeepSeek-OCR --revision ef93bf4a377c5d5ed9dca78e0bc4ea50b26fe6a4

# Verify a private repo
>>> hf cache verify me/private-model --token hf_***

# Verify a dataset
>>> hf cache verify karpathy/fineweb-edu-100b-shuffle --repo-type dataset

# Verify files in a local directory
>>> hf cache verify deepseek-ai/DeepSeek-OCR --local-dir /path/to/repo
```

By default, the command warns about missing or extra files. Use flags to turn these warnings into errors:

```bash
>>> hf cache verify deepseek-ai/DeepSeek-OCR --fail-on-missing-files --fail-on-extra-files
```

On success, you will see a summary:

```text
✅ Verified 13 file(s) for 'deepseek-ai/DeepSeek-OCR' (model) in ~/.cache/huggingface/hub/models--deepseek-ai--DeepSeek-OCR/snapshots/ef93bf4a377c5d5ed9dca78e0bc4ea50b26fe6a4
  All checksums match.
```

If mismatches are detected, the command prints a detailed list and exits with a non-zero status.

## hf repo tag create

The `hf repo tag create` command allows you to tag, untag, and list tags for repositories.

### Tag a model

To tag a repo, you need to provide the `repo_id` and the `tag` name:

```bash
>>> hf repo tag create Wauplin/my-cool-model v1.0
You are about to create tag v1.0 on model Wauplin/my-cool-model
Tag v1.0 created on Wauplin/my-cool-model
```

### Tag a model at a specific revision

If you want to tag a specific revision, you can use the `--revision` option. By default, the tag will be created on the `main` branch:

```bash
>>> hf repo tag create Wauplin/my-cool-model v1.0 --revision refs/pr/104
You are about to create tag v1.0 on model Wauplin/my-cool-model
Tag v1.0 created on Wauplin/my-cool-model
```

### Tag a dataset or a Space

If you want to tag a dataset or Space, you must specify the `--repo-type` option:

```bash
>>> hf repo tag create bigcode/the-stack v1.0 --repo-type dataset
You are about to create tag v1.0 on dataset bigcode/the-stack
Tag v1.0 created on bigcode/the-stack
```

### List tags

To list all tags for a repository, use the `-l` or `--list` option:

```bash
>>> hf repo tag create Wauplin/gradio-space-ci -l --repo-type space
Tags for space Wauplin/gradio-space-ci:
0.2.2
0.2.1
0.2.0
0.1.2
0.0.2
0.0.1
```

### Delete a tag

To delete a tag, use the `-d` or `--delete` option:

```bash
>>> hf repo tag create -d Wauplin/my-cool-model v1.0
You are about to delete tag v1.0 on model Wauplin/my-cool-model
Proceed? [Y/n] y
Tag v1.0 deleted on Wauplin/my-cool-model
```

You can also pass `-y` to skip the confirmation step.
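The same operations are available programmatically through `HfApi`: `create_tag()`, `delete_tag()`, and `list_repo_refs()` (whose result exposes the repo's tags). A sketch with the calls commented out since they hit the Hub and require authentication:

```python
from huggingface_hub import HfApi

api = HfApi()  # uses your saved token by default

# Create a tag (optionally at a specific revision, or on a dataset/Space
# via repo_type):
# api.create_tag("Wauplin/my-cool-model", tag="v1.0", revision="refs/pr/104")

# List existing tags:
# refs = api.list_repo_refs("Wauplin/gradio-space-ci", repo_type="space")
# print([tag.name for tag in refs.tags])

# Delete a tag:
# api.delete_tag("Wauplin/my-cool-model", tag="v1.0")
```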

## hf env

The `hf env` command prints details about your machine setup. This is useful when you open an issue on [GitHub](https://github.com/huggingface/huggingface_hub) to help the maintainers investigate your problem.

```bash
>>> hf env

Copy-and-paste the text below in your GitHub issue.

- huggingface_hub version: 1.0.0.rc6
- Platform: Linux-6.8.0-85-generic-x86_64-with-glibc2.35
- Python version: 3.11.14
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Running in Google Colab Enterprise ?: No
- Token path ?: /home/wauplin/.cache/huggingface/token
- Has saved token ?: True
- Who am I ?: Wauplin
- Configured git credential helpers: store
- Installation method: unknown
- Torch: N/A
- httpx: 0.28.1
- hf_xet: 1.1.10
- gradio: 5.41.1
- tensorboard: N/A
- pydantic: 2.11.7
- ENDPOINT: https://huggingface.co
- HF_HUB_CACHE: /home/wauplin/.cache/huggingface/hub
- HF_ASSETS_CACHE: /home/wauplin/.cache/huggingface/assets
- HF_TOKEN_PATH: /home/wauplin/.cache/huggingface/token
- HF_STORED_TOKENS_PATH: /home/wauplin/.cache/huggingface/stored_tokens
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_DISABLE_XET: False
- HF_HUB_ETAG_TIMEOUT: 10
- HF_HUB_DOWNLOAD_TIMEOUT: 10
```

## hf jobs

Run compute jobs on Hugging Face infrastructure with a familiar Docker-like interface.

`hf jobs` is a command-line tool that lets you run anything on Hugging Face's infrastructure (including GPUs and TPUs!) with simple commands. Think `docker run`, but for running code on A100s.

```bash
# Directly run Python code
>>> hf jobs run python:3.12 python -c 'print("Hello from the cloud!")'

# Use GPUs without any setup
>>> hf jobs run --flavor a10g-small pytorch/pytorch:2.6.0-cuda12.4-cudnn9-devel \
... python -c "import torch; print(torch.cuda.get_device_name())"

# Run in an organization account
>>> hf jobs run --namespace my-org-name python:3.12 python -c "print('Running in an org account')"

# Run from Hugging Face Spaces
>>> hf jobs run hf.co/spaces/lhoestq/duckdb duckdb -c "select 'hello world'"

# Run a Python script with `uv` (experimental)
>>> hf jobs uv run my_script.py
```

### ✨ Key Features

- 🐳 **Docker-like CLI**: Familiar commands (`run`, `ps`, `logs`, `inspect`) to run and manage jobs
- 🔥 **Any Hardware**: From CPUs to A100 GPUs and TPU pods - switch with a simple flag
- 📦 **Run Anything**: Use Docker images, HF Spaces, or your custom containers
- 🔐 **Simple Auth**: Just use your HF token
- 📊 **Live Monitoring**: Stream logs in real-time, just like running locally
- 💰 **Pay-as-you-go**: Only pay for the seconds you use

> [!TIP]
> **Hugging Face Jobs** are available only to [Pro users](https://huggingface.co/pro) and [Team or Enterprise organizations](https://huggingface.co/enterprise). Upgrade your plan to get started!

### Quick Start

#### 1. Run your first job

```bash
# Run a simple Python script
>>> hf jobs run python:3.12 python -c "print('Hello from HF compute!')"
```

This command runs the job and shows the logs. You can pass `--detach` to run the job in the background and only print the job ID.

#### 2. Check job status

```bash
# List your running jobs
>>> hf jobs ps

# Inspect the status of a job
>>> hf jobs inspect <job_id>

# View logs from a job
>>> hf jobs logs <job_id>

# Cancel a job
>>> hf jobs cancel <job_id>
```

#### 3. Run on GPU

You can also run jobs on GPUs or TPUs with the `--flavor` option. For example, to run a PyTorch job on an A10G GPU:

```bash
# Use an A10G GPU to check PyTorch CUDA
>>> hf jobs run --flavor a10g-small pytorch/pytorch:2.6.0-cuda12.4-cudnn9-devel \
... python -c "import torch; print(f'This code ran with the following GPU: {torch.cuda.get_device_name()}')"
```

Running this will show the following output!

```bash
This code ran with the following GPU: NVIDIA A10G
```

That's it! You're now running code on Hugging Face's infrastructure.

### Common Use Cases

- **Model Training**: Fine-tune or train models on GPUs (T4, A10G, A100) without managing infrastructure
- **Synthetic Data Generation**: Generate large-scale datasets using LLMs on powerful hardware
- **Data Processing**: Process massive datasets with high-CPU configurations for parallel workloads
- **Batch Inference**: Run offline inference on thousands of samples using optimized GPU setups
- **Experiments & Benchmarks**: Run ML experiments on consistent hardware for reproducible results
- **Development & Debugging**: Test GPU code without local CUDA setup

### Pass Environment variables and Secrets

You can pass environment variables and secrets to your job using the following options:

```bash
# Pass environment variables
>>> hf jobs run -e FOO=foo -e BAR=bar python:3.12 python -c "import os; print(os.environ['FOO'], os.environ['BAR'])"
```

```bash
# Pass an environment from a local .env file
>>> hf jobs run --env-file .env python:3.12 python -c "import os; print(os.environ['FOO'], os.environ['BAR'])"
```

```bash
# Pass secrets - they will be encrypted server side
>>> hf jobs run -s MY_SECRET=psswrd python:3.12 python -c "import os; print(os.environ['MY_SECRET'])"
```

```bash
# Pass secrets from a local .env.secrets file - they will be encrypted server side
>>> hf jobs run --secrets-file .env.secrets python:3.12 python -c "import os; print(os.environ['MY_SECRET'])"
```

> [!TIP]
> Use `--secrets HF_TOKEN` to pass your local Hugging Face token implicitly.
> With this syntax, the secret is retrieved from the environment variable.
> For `HF_TOKEN`, it may read the token file located in the Hugging Face home folder if the environment variable is unset.

### Hardware

Available `--flavor` options:

- CPU: `cpu-basic`, `cpu-upgrade`
- GPU: `t4-small`, `t4-medium`, `l4x1`, `l4x4`, `a10g-small`, `a10g-large`, `a10g-largex2`, `a10g-largex4`, `a100-large`
- TPU: `v5e-1x1`, `v5e-2x2`, `v5e-2x4`

(updated in 07/2025 from Hugging Face [suggested_hardware docs](https://huggingface.co/docs/hub/en/spaces-config-reference))

### UV Scripts (Experimental)

Run UV scripts (Python scripts with inline dependencies) on HF infrastructure:

```bash
# Run a UV script (creates temporary repo)
>>> hf jobs uv run my_script.py

# Run with persistent repo
>>> hf jobs uv run my_script.py --repo my-uv-scripts

# Run with GPU
>>> hf jobs uv run ml_training.py --flavor t4-small

# Pass arguments to script
>>> hf jobs uv run process.py input.csv output.parquet

# Add dependencies
>>> hf jobs uv run --with transformers --with torch train.py

# Run a script directly from a URL
>>> hf jobs uv run https://huggingface.co/datasets/username/scripts/resolve/main/example.py

# Run a command
>>> hf jobs uv run --with lighteval python -c "import lighteval"
```

UV scripts are Python scripts that include their dependencies directly in the file using a special comment syntax. This makes them perfect for self-contained tasks that don't require complex project setups. Learn more about UV scripts in the [UV documentation](https://docs.astral.sh/uv/guides/scripts/).

### Scheduled Jobs

Schedule and manage jobs that will run on HF infrastructure.

The schedule should be one of `@annually`, `@yearly`, `@monthly`, `@weekly`, `@daily`, `@hourly`, or a CRON schedule expression (e.g., `"0 9 * * 1"` for 9 AM every Monday).

```bash
# Schedule a job that runs every hour
>>> hf jobs scheduled run @hourly python:3.12 python -c 'print("This runs every hour!")'

# Use the CRON syntax
>>> hf jobs scheduled run "*/5 * * * *" python:3.12 python -c 'print("This runs every 5 minutes!")'

# Schedule with GPU
>>> hf jobs scheduled run @hourly --flavor a10g-small pytorch/pytorch:2.6.0-cuda12.4-cudnn9-devel \
... python -c "import torch; print(f'This code ran with the following GPU: {torch.cuda.get_device_name()}')"

# Schedule a UV script
>>> hf jobs scheduled uv run @hourly my_script.py
```
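The shorthand schedules follow the classic cron nicknames. A small sketch of the standard cron expansion (the exact server-side expansion is an implementation detail and an assumption here):

```python
# Standard cron equivalents of the shorthand schedules:
SHORTHANDS = {
    "@annually": "0 0 1 1 *",
    "@yearly":   "0 0 1 1 *",
    "@monthly":  "0 0 1 * *",
    "@weekly":   "0 0 * * 0",
    "@daily":    "0 0 * * *",
    "@hourly":   "0 * * * *",
}

def expand_schedule(schedule: str) -> str:
    """Expand a shorthand like '@hourly'; pass CRON expressions through."""
    return SHORTHANDS.get(schedule, schedule)
```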

Use the same parameters as `hf jobs run` to pass environment variables, secrets, timeout, etc.

Manage scheduled jobs with the following commands:

```bash
# List your active scheduled jobs
>>> hf jobs scheduled ps

# Inspect the status of a job
>>> hf jobs scheduled inspect <scheduled_job_id>

# Suspend (pause) a scheduled job
>>> hf jobs scheduled suspend <scheduled_job_id>

# Resume a scheduled job
>>> hf jobs scheduled resume <scheduled_job_id>

# Delete a scheduled job
>>> hf jobs scheduled delete <scheduled_job_id>
```

## hf endpoints

Use `hf endpoints` to list, deploy, describe, and manage Inference Endpoints directly from the terminal. The legacy
`hf inference-endpoints` alias remains available for compatibility.

```bash
# Lists endpoints in your namespace
>>> hf endpoints ls

# Deploy an endpoint from Model Catalog
>>> hf endpoints catalog deploy --repo openai/gpt-oss-120b --name my-endpoint

# Deploy an endpoint from the Hugging Face Hub 
>>> hf endpoints deploy my-endpoint --repo gpt2 --framework pytorch --accelerator cpu --instance-size x2 --instance-type intel-icl

# List catalog entries
>>> hf endpoints catalog ls

# Show status and metadata
>>> hf endpoints describe my-endpoint

# Pause the endpoint
>>> hf endpoints pause my-endpoint

# Delete without confirmation prompt
>>> hf endpoints delete my-endpoint --yes
```

> [!TIP]
> Add `--namespace` to target an organization, and `--token` to override authentication.
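The same lifecycle can be driven from Python with `list_inference_endpoints()` and `get_inference_endpoint()`; the returned `InferenceEndpoint` objects expose management methods such as `pause()` and `delete()`. A sketch with the network calls commented out:

```python
from huggingface_hub import get_inference_endpoint, list_inference_endpoints

# List endpoints in your namespace (network call, requires authentication):
# for endpoint in list_inference_endpoints():
#     print(endpoint.name, endpoint.status)

# Fetch one endpoint and manage it:
# endpoint = get_inference_endpoint("my-endpoint")
# endpoint.pause()
# endpoint.delete()
```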


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/guides/cli.md" />

### Integrate any ML framework with the Hub
https://huggingface.co/docs/huggingface_hub/main/guides/integrations.md

# Integrate any ML framework with the Hub

The Hugging Face Hub makes hosting and sharing models with the community easy. It supports
[dozens of libraries](https://huggingface.co/docs/hub/models-libraries) in the Open Source ecosystem. We are always
working on expanding this support to push collaborative Machine Learning forward. The `huggingface_hub` library plays a
key role in this process, allowing any Python script to easily push and load files.

There are three main ways to integrate a library with the Hub:
1. **Push to Hub:** implement a method to upload a model to the Hub. This includes the model weights, as well as
   [the model card](https://huggingface.co/docs/huggingface_hub/how-to-model-cards) and any other relevant information
   or data necessary to run the model (for example, training logs). This method is often called `push_to_hub()`.
2. **Download from Hub:** implement a method to load a model from the Hub. The method should download the model
   configuration/weights and load the model. This method is often called `from_pretrained` or `load_from_hub()`.
3. **Widgets:** display a widget on the landing page of your models on the Hub. It allows users to quickly try a model
   from the browser.

In this guide, we will focus on the first two topics. We will present the two main approaches you can use to integrate
a library, with their advantages and drawbacks. Everything is summarized at the end of the guide to help you choose
between the two. Please keep in mind that these are only guidelines that you are free to adapt to your requirements.

If you are interested in Inference and Widgets, you can follow [this guide](https://huggingface.co/docs/hub/models-adding-libraries#set-up-the-inference-api).
In both cases, you can reach out to us if you are integrating a library with the Hub and want to be listed
[in our docs](https://huggingface.co/docs/hub/models-libraries).

## A flexible approach: helpers

The first approach to integrate a library to the Hub is to actually implement the `push_to_hub` and `from_pretrained`
methods by yourself. This gives you full flexibility on which files you need to upload/download and how to handle inputs
specific to your framework. You can refer to the two [upload files](./upload) and [download files](./download) guides
to learn more about how to do that. This is, for example, how the FastAI integration is implemented (see [push_to_hub_fastai()](/docs/huggingface_hub/main/en/package_reference/mixins#huggingface_hub.push_to_hub_fastai)
and [from_pretrained_fastai()](/docs/huggingface_hub/main/en/package_reference/mixins#huggingface_hub.from_pretrained_fastai)).

Implementation can differ between libraries, but the workflow is often similar.

### from_pretrained

This is what a `from_pretrained` method usually looks like:

```python
def from_pretrained(model_id: str) -> MyModelClass:
    # Download model from Hub
    cached_model = hf_hub_download(
        repo_id=model_id,
        filename="model.pkl",
        library_name="fastai",
        library_version=get_fastai_version(),
    )

    # Load model
    return load_model(cached_model)
```

### push_to_hub

The `push_to_hub` method is usually more involved: it needs to create the repo, generate the model card, and save the weights.
A common approach is to save all of these files in a temporary folder, upload them in a single commit, and then delete the folder.

```python
from pathlib import Path
from tempfile import TemporaryDirectory

from huggingface_hub import HfApi

def push_to_hub(model: MyModelClass, repo_name: str) -> None:
    api = HfApi()

    # Create repo if not existing yet and get the associated repo_id
    repo_id = api.create_repo(repo_name, exist_ok=True).repo_id

    # Save all files in a temporary directory and push them in a single commit
    with TemporaryDirectory() as tmpdir:
        tmpdir = Path(tmpdir)

        # Save weights
        save_model(model, tmpdir / "model.safetensors")

        # Generate model card
        card = generate_model_card(model)
        (tmpdir / "README.md").write_text(card)

        # Save logs
        # Save figures
        # Save evaluation metrics
        # ...

        # Push to hub
        return api.upload_folder(repo_id=repo_id, folder_path=tmpdir)
```

This is of course only an example. If you are interested in more complex manipulations (delete remote files, upload
weights on the fly, persist weights locally, etc.) please refer to the [upload files](./upload) guide.

### Limitations

While flexible, this approach has some drawbacks, especially in terms of maintenance. Hugging Face users are used to
additional features when working with `huggingface_hub`. For example, when loading files from the Hub, it is
common to offer parameters like:
- `token`: to download from a private repo
- `revision`: to download from a specific branch
- `cache_dir`: to cache files in a specific directory
- `force_download`/`local_files_only`: to reuse the cache or not
- `proxies`: configure HTTP session

When pushing models, similar parameters are supported:
- `commit_message`: custom commit message
- `private`: create a private repo if missing
- `create_pr`: create a PR instead of pushing to `main`
- `branch`: push to a branch instead of the `main` branch
- `allow_patterns`/`ignore_patterns`: filter which files to upload
- `token`
- ...
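One way to keep such a helper future-proof is to forward Hub-related options wholesale instead of mirroring each one. A hedged sketch of the earlier loader (`load_model` is the hypothetical framework-specific function from the previous example):

```python
from huggingface_hub import hf_hub_download

def from_pretrained(model_id: str, **hub_kwargs):
    """Hypothetical loader that forwards Hub options untouched."""
    cached_model = hf_hub_download(
        repo_id=model_id,
        filename="model.pkl",
        **hub_kwargs,  # token, revision, cache_dir, force_download, ...
    )
    return load_model(cached_model)  # framework-specific, defined elsewhere
```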

All of these parameters can be added to the implementations we saw above and passed to the `huggingface_hub` methods.
However, if a parameter changes or a new feature is added, you will need to update your package. Supporting those
parameters also means more documentation to maintain on your side. To see how to mitigate these limitations, let's jump
to our next section **class inheritance**.

## A more complex approach: class inheritance

As we saw above, there are two main methods to include in your library to integrate it with the Hub: upload files
(`push_to_hub`) and download files (`from_pretrained`). You can implement those methods by yourself but it comes with
caveats. To tackle this, `huggingface_hub` provides a tool that uses class inheritance. Let's see how it works!

In a lot of cases, a library already implements its model using a Python class. The class contains the properties of
the model and methods to load, run, train, and evaluate it. Our approach is to extend this class to include upload and
download features using mixins. A [Mixin](https://stackoverflow.com/a/547714) is a class that is meant to extend an
existing class with a set of specific features using multiple inheritance. `huggingface_hub` provides its own mixin,
the [ModelHubMixin](/docs/huggingface_hub/main/en/package_reference/mixins#huggingface_hub.ModelHubMixin). The key here is to understand its behavior and how to customize it.

The [ModelHubMixin](/docs/huggingface_hub/main/en/package_reference/mixins#huggingface_hub.ModelHubMixin) class implements 3 *public* methods (`push_to_hub`, `save_pretrained` and `from_pretrained`). Those
are the methods that your users will call to load/save models with your library. [ModelHubMixin](/docs/huggingface_hub/main/en/package_reference/mixins#huggingface_hub.ModelHubMixin) also defines 2
*private* methods (`_save_pretrained` and `_from_pretrained`). Those are the ones you must implement. So to integrate
your library, you should:

1. Make your Model class inherit from [ModelHubMixin](/docs/huggingface_hub/main/en/package_reference/mixins#huggingface_hub.ModelHubMixin).
2. Implement the private methods:
    - [_save_pretrained()](/docs/huggingface_hub/main/en/package_reference/mixins#huggingface_hub.ModelHubMixin._save_pretrained): method taking as input a path to a directory and saving the model to it.
    You must write all the logic to dump your model in this method: model card, model weights, configuration files,
    training logs, and figures. Any relevant information for this model must be handled by this method.
    [Model Cards](https://huggingface.co/docs/hub/model-cards) are particularly important to describe your model. Check
    out [our implementation guide](./model-cards) for more details.
    - [_from_pretrained()](/docs/huggingface_hub/main/en/package_reference/mixins#huggingface_hub.ModelHubMixin._from_pretrained): **class method** taking as input a `model_id` and returning an instantiated
    model. The method must download the relevant files and load them.
3. You are done!

The advantage of using [ModelHubMixin](/docs/huggingface_hub/main/en/package_reference/mixins#huggingface_hub.ModelHubMixin) is that once you take care of the serialization/loading of the files, you are ready to go. You don't need to worry about stuff like repo creation, commits, PRs, or revisions. The [ModelHubMixin](/docs/huggingface_hub/main/en/package_reference/mixins#huggingface_hub.ModelHubMixin) also ensures public methods are documented and type annotated, and you'll be able to view your model's download count on the Hub. All of this is handled by the [ModelHubMixin](/docs/huggingface_hub/main/en/package_reference/mixins#huggingface_hub.ModelHubMixin) and available to your users.

### A concrete example: PyTorch

A good example of what we saw above is [PyTorchModelHubMixin](/docs/huggingface_hub/main/en/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin), our integration for the PyTorch framework. This is a ready-to-use integration.

#### How to use it?

Here is how any user can load/save a PyTorch model from/to the Hub:

```python
>>> import torch
>>> import torch.nn as nn
>>> from huggingface_hub import PyTorchModelHubMixin


# Define your Pytorch model exactly the same way you are used to
>>> class MyModel(
...         nn.Module,
...         PyTorchModelHubMixin, # multiple inheritance
...         library_name="keras-nlp",
...         tags=["keras"],
...         repo_url="https://github.com/keras-team/keras-nlp",
...         docs_url="https://keras.io/keras_nlp/",
...         # ^ optional metadata to generate model card
...     ):
...     def __init__(self, hidden_size: int = 512, vocab_size: int = 30000, output_size: int = 4):
...         super().__init__()
...         self.param = nn.Parameter(torch.rand(hidden_size, vocab_size))
...         self.linear = nn.Linear(output_size, vocab_size)

...     def forward(self, x):
...         return self.linear(x + self.param)

# 1. Create model
>>> model = MyModel(hidden_size=128)

# Config is automatically created based on input + default values
>>> model.param.shape[0]
128

# 2. (optional) Save model to local directory
>>> model.save_pretrained("path/to/my-awesome-model")

# 3. Push model weights to the Hub
>>> model.push_to_hub("my-awesome-model")

# 4. Initialize model from the Hub => config has been preserved
>>> model = MyModel.from_pretrained("username/my-awesome-model")
>>> model.param.shape[0]
128

# Model card has been correctly populated
>>> from huggingface_hub import ModelCard
>>> card = ModelCard.load("username/my-awesome-model")
>>> card.data.tags
["keras", "pytorch_model_hub_mixin", "model_hub_mixin"]
>>> card.data.library_name
"keras-nlp"
```

#### Implementation

The implementation is actually very straightforward, and the full implementation can be found [here](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hub_mixin.py).

1. First, inherit your class from `ModelHubMixin`:

```python
from huggingface_hub import ModelHubMixin

class PyTorchModelHubMixin(ModelHubMixin):
   (...)
```

2. Implement the `_save_pretrained` method:

```py
from huggingface_hub import ModelHubMixin

class PyTorchModelHubMixin(ModelHubMixin):
    (...)

    def _save_pretrained(self, save_directory: Path) -> None:
        """Save weights from a Pytorch model to a local directory."""
        save_model_as_safetensor(self.module, str(save_directory / SAFETENSORS_SINGLE_FILE))
```

3. Implement the `_from_pretrained` method:

```python
class PyTorchModelHubMixin(ModelHubMixin):
    (...)

    @classmethod  # Must be a classmethod!
    def _from_pretrained(
        cls,
        *,
        model_id: str,
        revision: str,
        cache_dir: str,
        force_download: bool,
        local_files_only: bool,
        token: Union[str, bool, None],
        map_location: str = "cpu",  # additional argument
        strict: bool = False,  # additional argument
        **model_kwargs,
    ):
        """Load Pytorch pretrained weights and return the loaded model."""
        model = cls(**model_kwargs)
        if os.path.isdir(model_id):
            print("Loading weights from local directory")
            model_file = os.path.join(model_id, SAFETENSORS_SINGLE_FILE)
            return cls._load_as_safetensor(model, model_file, map_location, strict)

        model_file = hf_hub_download(
            repo_id=model_id,
            filename=SAFETENSORS_SINGLE_FILE,
            revision=revision,
            cache_dir=cache_dir,
            force_download=force_download,
            token=token,
            local_files_only=local_files_only,
        )
        return cls._load_as_safetensor(model, model_file, map_location, strict)
```

And that's it! Your library now enables users to upload and download files to and from the Hub.

### Advanced usage

In the section above, we quickly discussed how the [ModelHubMixin](/docs/huggingface_hub/main/en/package_reference/mixins#huggingface_hub.ModelHubMixin) works. In this section, we will see some of its more advanced features to improve your library integration with the Hugging Face Hub.

#### Model card

[ModelHubMixin](/docs/huggingface_hub/main/en/package_reference/mixins#huggingface_hub.ModelHubMixin) generates the model card for you. Model cards are files that accompany the models and provide important information about them. Under the hood, model cards are simple Markdown files with additional metadata. Model cards are essential for discoverability, reproducibility, and sharing! Check out the [Model Cards guide](https://huggingface.co/docs/hub/model-cards) for more details.

Generating model cards semi-automatically is a good way to ensure that all models pushed with your library will share common metadata: `library_name`, `tags`, `license`, `pipeline_tag`, etc. This makes all models backed by your library easily searchable on the Hub and provides some resource links for users landing on the Hub. You can define the metadata directly when inheriting from [ModelHubMixin](/docs/huggingface_hub/main/en/package_reference/mixins#huggingface_hub.ModelHubMixin):

```py
class UniDepthV1(
   nn.Module,
   PyTorchModelHubMixin,
   library_name="unidepth",
   repo_url="https://github.com/lpiccinelli-eth/UniDepth",
   docs_url=...,
   pipeline_tag="depth-estimation",
   license="cc-by-nc-4.0",
   tags=["monocular-metric-depth-estimation", "arxiv:1234.56789"]
):
   ...
```

By default, a generic model card will be generated with the info you've provided (example: [pyp1/VoiceCraft_giga830M](https://huggingface.co/pyp1/VoiceCraft_giga830M)). But you can define your own model card template as well!

For example, with the template below, all models pushed with the `VoiceCraft` class will automatically include a citation section and license details. For more details on how to define a model card template, please check the [Model Cards guide](./model-cards).

```py
MODEL_CARD_TEMPLATE = """
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{{ card_data }}
---

This is a VoiceCraft model. For more details, please check out the official Github repo: https://github.com/jasonppy/VoiceCraft. This model is shared under an Attribution-NonCommercial-ShareAlike 4.0 International license.

## Citation

@article{peng2024voicecraft,
  author    = {Peng, Puyuan and Huang, Po-Yao and Li, Daniel and Mohamed, Abdelrahman and Harwath, David},
  title     = {VoiceCraft: Zero-Shot Speech Editing and Text-to-Speech in the Wild},
  journal   = {arXiv},
  year      = {2024},
}
"""

class VoiceCraft(
   nn.Module,
   PyTorchModelHubMixin,
   library_name="voicecraft",
   model_card_template=MODEL_CARD_TEMPLATE,
   ...
):
   ...
```


Finally, if you want to extend the model card generation process with dynamic values, you can override the `generate_model_card()` method:

```py
from huggingface_hub import ModelCard, PyTorchModelHubMixin

class UniDepthV1(nn.Module, PyTorchModelHubMixin, ...):
   (...)

   def generate_model_card(self, *args, **kwargs) -> ModelCard:
      card = super().generate_model_card(*args, **kwargs)
      card.data.metrics = ...  # add metrics to the metadata
      card.text += ... # append section to the modelcard
      return card
```

#### Config

[ModelHubMixin](/docs/huggingface_hub/main/en/package_reference/mixins#huggingface_hub.ModelHubMixin) handles the model configuration for you. It automatically checks the input values when you instantiate the model and serializes them in a `config.json` file. This provides 2 benefits:
1. Users will be able to reload the model with the exact same parameters as you.
2. Having a `config.json` file automatically enables analytics on the Hub (i.e. the "downloads" count).

But how does it work in practice? Several rules make the process as smooth as possible from a user perspective:
- if your `__init__` method expects a `config` input, it will be automatically saved in the repo as `config.json`.
- if the `config` input parameter is annotated with a dataclass type (e.g. `config: Optional[MyConfigClass] = None`), then the `config` value will be correctly deserialized for you.
- all values passed at initialization will also be stored in the config file. This means you don't necessarily have to expect a `config` input to benefit from it.

Example:

```py
class MyModel(ModelHubMixin):
   def __init__(self, value: str, size: int = 3):
      self.value = value
      self.size = size

   (...) # implement _save_pretrained / _from_pretrained

model = MyModel(value="my_value")
model.save_pretrained(...)

# config.json contains passed and default values
{"value": "my_value", "size": 3}
```

But what if a value cannot be serialized as JSON? By default, the value will be ignored when saving the config file. However, in some cases your library already expects a custom object as input that cannot be serialized, and you don't want to update your internal logic to update its type. No worries! You can pass custom encoders/decoders for any type when inheriting from [ModelHubMixin](/docs/huggingface_hub/main/en/package_reference/mixins#huggingface_hub.ModelHubMixin). This is a bit more work but ensures your internal logic is untouched when integrating your library with the Hub.

Here is a concrete example where a class expects an `argparse.Namespace` config as input:

```py
class VoiceCraft(nn.Module):
    def __init__(self, args):
        self.args = args
        self.pattern = self.args.pattern
        self.hidden_size = self.args.hidden_size
        ...
```

One solution is to update the `__init__` signature to `def __init__(self, pattern: str, hidden_size: int)` and update all snippets that instantiate your class. This is a perfectly valid fix, but it might break downstream applications using your library.

Another solution is to provide a simple encoder/decoder to convert `argparse.Namespace` to a dictionary.

```py
from argparse import Namespace

class VoiceCraft(
   nn.Module,
   PyTorchModelHubMixin,  # inherit from mixin
   coders={
      Namespace : (
         lambda x: vars(x),  # Encoder: how to convert a `Namespace` to a valid jsonable value?
         lambda data: Namespace(**data),  # Decoder: how to reconstruct a `Namespace` from a dictionary?
      )
   }
):
    def __init__(self, args: Namespace): # annotate `args`
        self.args = args
        self.pattern = self.args.pattern
        self.hidden_size = self.args.hidden_size
        ...
```

In the snippet above, neither the internal logic nor the `__init__` signature of the class changed. This means all existing code snippets for your library will continue to work. To achieve this, we had to:
1. Inherit from the mixin (`PyTorchModelHubMixin` in this case).
2. Pass a `coders` parameter in the inheritance. This is a dictionary where keys are custom types you want to process. Values are a tuple `(encoder, decoder)`.
   - The encoder expects an object of the specified type as input and returns a jsonable value. This will be used when saving a model with `save_pretrained`.
   - The decoder expects raw data (typically a dictionary) as input and reconstructs the initial object. This will be used when loading the model with `from_pretrained`.
3. Add a type annotation to the `__init__` signature. This is important to let the mixin know which type is expected by the class and, therefore, which decoder to use.

For the sake of simplicity, the encoder/decoder functions in the example above are not robust. For a concrete implementation, you would most likely have to handle corner cases properly.
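
For instance, a slightly more defensive pair of encoder/decoder functions might drop values that cannot be serialized instead of failing (a sketch only; the function names are illustrative):

```python
import json
from argparse import Namespace

def encode_namespace(ns: Namespace) -> dict:
    """Encoder: convert a Namespace to a jsonable dict, skipping non-serializable values."""
    output = {}
    for key, value in vars(ns).items():
        try:
            json.dumps(value)  # keep only values that serialize cleanly
            output[key] = value
        except (TypeError, ValueError):
            pass  # drop non-jsonable entries (e.g. callables, file handles)
    return output

def decode_namespace(data: dict) -> Namespace:
    """Decoder: rebuild a Namespace from a plain dict."""
    return Namespace(**data)
```

These could then be passed as `coders={Namespace: (encode_namespace, decode_namespace)}` when inheriting from the mixin.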

## Quick comparison

Let's quickly sum up the two approaches we saw with their advantages and drawbacks. The table below is only indicative.
Your framework might have some specificities that you need to address. This guide is only here to give guidelines and
ideas on how to handle integration. In any case, feel free to contact us if you have any questions!

<!-- Generated using https://www.tablesgenerator.com/markdown_tables -->
|           Integration           |                                                      Using helpers                                                       |                                     Using [ModelHubMixin](/docs/huggingface_hub/main/en/package_reference/mixins#huggingface_hub.ModelHubMixin)                                     |
| :-----------------------------: | :----------------------------------------------------------------------------------------------------------------------: | :---------------------------------------------------------------------------------------------: |
|         User experience         |                                `model = load_from_hub(...)`<br>`push_to_hub(model, ...)`                                 |               `model = MyModel.from_pretrained(...)`<br>`model.push_to_hub(...)`                |
|           Flexibility           |                                 Very flexible.<br>You fully control the implementation.                                  |                    Less flexible.<br>Your framework must have a model class.                    |
|           Maintenance           | More maintenance to add support for configuration, and new features. Might also require fixing issues reported by users. | Less maintenance as most of the interactions with the Hub are implemented in `huggingface_hub`. |
| Documentation / Type annotation |                                                 To be written manually.                                                  |                             Partially handled by `huggingface_hub`.                             |
|        Download counter         |                                                 To be handled manually.                                                  |                      Enabled by default if class has a `config` attribute.                      |
|           Model card            |                                                  To be handled manually                                                  |                       Generated by default with library_name, tags, etc.                        |


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/guides/integrations.md" />

### Create and share Model Cards
https://huggingface.co/docs/huggingface_hub/main/guides/model-cards.md

# Create and share Model Cards

The `huggingface_hub` library provides a Python interface to create, share, and update Model Cards.
Visit [the dedicated documentation page](https://huggingface.co/docs/hub/models-cards)
for a deeper view of what Model Cards on the Hub are, and how they work under the hood.

## Load a Model Card from the Hub

To load an existing card from the Hub, you can use the [ModelCard.load()](/docs/huggingface_hub/main/en/package_reference/cards#huggingface_hub.RepoCard.load) function. Here, we'll load the card from [`nateraw/vit-base-beans`](https://huggingface.co/nateraw/vit-base-beans).

```python
from huggingface_hub import ModelCard

card = ModelCard.load('nateraw/vit-base-beans')
```

This card has some helpful attributes that you may want to access/leverage:
  - `card.data`: Returns a [ModelCardData](/docs/huggingface_hub/main/en/package_reference/cards#huggingface_hub.ModelCardData) instance with the model card's metadata. Call `.to_dict()` on this instance to get the representation as a dictionary.
  - `card.text`: Returns the text of the card, *excluding the metadata header*.
  - `card.content`: Returns the text content of the card, *including the metadata header*.
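
For instance, building a card from a small content string (so no network access is needed) shows how the three attributes relate:

```python
from huggingface_hub import ModelCard

content = """
---
language: en
license: mit
---

# My cool model
"""

card = ModelCard(content)

card.data.to_dict() == {'language': 'en', 'license': 'mit'}  # True
'# My cool model' in card.text     # True: body only
'language: en' in card.content     # True: includes the metadata header
```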

## Create Model Cards

### From Text

To initialize a Model Card from text, just pass the text content of the card to `ModelCard` on init.

```python
content = """
---
language: en
license: mit
---

# My Model Card
"""

card = ModelCard(content)
card.data.to_dict() == {'language': 'en', 'license': 'mit'}  # True
```

Another way you might want to do this is with f-strings. In the following example, we:

- Use [ModelCardData.to_yaml()](/docs/huggingface_hub/main/en/package_reference/cards#huggingface_hub.CardData.to_yaml) to convert metadata we defined to YAML so we can use it to insert the YAML block in the model card.
- Show how you might use a template variable via Python f-strings.

```python
card_data = ModelCardData(language='en', license='mit', library='timm')

example_template_var = 'nateraw'
content = f"""
---
{ card_data.to_yaml() }
---

# My Model Card

This model created by [@{example_template_var}](https://github.com/{example_template_var})
"""

card = ModelCard(content)
print(card)
```

The above example would leave us with a card that looks like this:

```
---
language: en
license: mit
library: timm
---

# My Model Card

This model created by [@nateraw](https://github.com/nateraw)
```

### From a Jinja Template

If you have `Jinja2` installed, you can create Model Cards from a Jinja template file. Let's see a basic example:

```python
from pathlib import Path

from huggingface_hub import ModelCard, ModelCardData

# Define your jinja template
template_text = """
---
{{ card_data }}
---

# Model Card for MyCoolModel

This model does this and that.

This model was created by [@{{ author }}](https://hf.co/{{author}}).
""".strip()

# Write the template to a file
Path('custom_template.md').write_text(template_text)

# Define card metadata
card_data = ModelCardData(language='en', license='mit', library_name='keras')

# Create card from template, passing it any jinja template variables you want.
# In our case, we'll pass author
card = ModelCard.from_template(card_data, template_path='custom_template.md', author='nateraw')
card.save('my_model_card_1.md')
print(card)
```

The resulting card's markdown looks like this:

```
---
language: en
license: mit
library_name: keras
---

# Model Card for MyCoolModel

This model does this and that.

This model was created by [@nateraw](https://hf.co/nateraw).
```

If you update any of the `card.data` attributes, the changes will be reflected in the card itself.

```python
card.data.library_name = 'timm'
card.data.language = 'fr'
card.data.license = 'apache-2.0'
print(card)
```

Now, as you can see, the metadata header has been updated:

```
---
language: fr
license: apache-2.0
library_name: timm
---

# Model Card for MyCoolModel

This model does this and that.

This model was created by [@nateraw](https://hf.co/nateraw).
```

As you update the card data, you can validate the card is still valid against the Hub by calling [ModelCard.validate()](/docs/huggingface_hub/main/en/package_reference/cards#huggingface_hub.RepoCard.validate). This ensures that the card passes any validation rules set up on the Hugging Face Hub.

### From the Default Template

Instead of using your own template, you can also use the [default template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md), which is a fully featured model card with tons of sections you may want to fill out. Under the hood, it uses [Jinja2](https://jinja.palletsprojects.com/en/3.1.x/) to fill out a template file.

> [!TIP]
> Note that you will have to have Jinja2 installed to use `from_template`. You can do so with `pip install Jinja2`.

```python
card_data = ModelCardData(language='en', license='mit', library_name='keras')
card = ModelCard.from_template(
    card_data,
    model_id='my-cool-model',
    model_description="this model does this and that",
    developers="Nate Raw",
    repo="https://github.com/huggingface/huggingface_hub",
)
card.save('my_model_card_2.md')
print(card)
```

## Share Model Cards

If you're authenticated with the Hugging Face Hub (either by using `hf auth login` or [login()](/docs/huggingface_hub/main/en/package_reference/authentication#huggingface_hub.login)), you can push cards to the Hub by simply calling [ModelCard.push_to_hub()](/docs/huggingface_hub/main/en/package_reference/cards#huggingface_hub.RepoCard.push_to_hub). Let's take a look at how to do that...

First, we'll create a new repo called 'hf-hub-modelcards-pr-test' under the authenticated user's namespace:

```python
from huggingface_hub import whoami, create_repo

user = whoami()['name']
repo_id = f'{user}/hf-hub-modelcards-pr-test'
url = create_repo(repo_id, exist_ok=True)
```

Then, we'll create a card from the default template (same as the one defined in the section above):

```python
card_data = ModelCardData(language='en', license='mit', library_name='keras')
card = ModelCard.from_template(
    card_data,
    model_id='my-cool-model',
    model_description="this model does this and that",
    developers="Nate Raw",
    repo="https://github.com/huggingface/huggingface_hub",
)
```

Finally, we'll push that up to the Hub:

```python
card.push_to_hub(repo_id)
```

You can check out the resulting card [here](https://huggingface.co/nateraw/hf-hub-modelcards-pr-test/blob/main/README.md).

If you instead wanted to push a card as a pull request, you can just pass `create_pr=True` when calling `push_to_hub`:

```python
card.push_to_hub(repo_id, create_pr=True)
```

A resulting PR created from this command can be seen [here](https://huggingface.co/nateraw/hf-hub-modelcards-pr-test/discussions/3).

## Update metadata

In this section, we will see what metadata is stored in repo cards and how to update it.

`metadata` refers to a hash map (or key-value) context that provides some high-level information about a model, dataset, or Space. That information can include details such as the model's `pipeline type`, `model_id`, or `model_description`. For more detail, you can take a look at these guides: [Model Card](https://huggingface.co/docs/hub/model-cards#model-card-metadata), [Dataset Card](https://huggingface.co/docs/hub/datasets-cards#dataset-card-metadata) and [Spaces Settings](https://huggingface.co/docs/hub/spaces-settings#spaces-settings).
Now let's see some examples of how to update this metadata.


Let's start with a first example:

```python
>>> from huggingface_hub import metadata_update
>>> metadata_update("username/my-cool-model", {"pipeline_tag": "image-classification"})
```

With these two lines of code you will update the metadata to set a new `pipeline_tag`.

By default, you cannot update a key that already exists on the card. If you want to do so, you must pass
`overwrite=True` explicitly:


```python
>>> from huggingface_hub import metadata_update
>>> metadata_update("username/my-cool-model", {"pipeline_tag": "text-generation"}, overwrite=True)
```

It often happens that you want to suggest some changes to a repository
on which you don't have write permission. You can do that by creating a PR on that repo, which will allow the owners to
review and merge your suggestions.

```python
>>> from huggingface_hub import metadata_update
>>> metadata_update("someone/model", {"pipeline_tag": "text-classification"}, create_pr=True)
```

## Include Evaluation Results

To include evaluation results in the metadata `model-index`, you can pass an [EvalResult](/docs/huggingface_hub/main/en/package_reference/cards#huggingface_hub.EvalResult) or a list of `EvalResult` with your associated evaluation results. Under the hood it'll create the `model-index` when you call `card.data.to_dict()`. For more information on how this works, you can check out [this section of the Hub docs](https://huggingface.co/docs/hub/models-cards#evaluation-results).

> [!TIP]
> Note that using this function requires you to include the `model_name` attribute in [ModelCardData](/docs/huggingface_hub/main/en/package_reference/cards#huggingface_hub.ModelCardData).

```python
card_data = ModelCardData(
    language='en',
    license='mit',
    model_name='my-cool-model',
    eval_results = EvalResult(
        task_type='image-classification',
        dataset_type='beans',
        dataset_name='Beans',
        metric_type='accuracy',
        metric_value=0.7
    )
)

card = ModelCard.from_template(card_data)
print(card.data)
```

The resulting `card.data` should look like this:

```
language: en
license: mit
model-index:
- name: my-cool-model
  results:
  - task:
      type: image-classification
    dataset:
      name: Beans
      type: beans
    metrics:
    - type: accuracy
      value: 0.7
```

If you have more than one evaluation result you'd like to share, just pass a list of `EvalResult`:

```python
card_data = ModelCardData(
    language='en',
    license='mit',
    model_name='my-cool-model',
    eval_results = [
        EvalResult(
            task_type='image-classification',
            dataset_type='beans',
            dataset_name='Beans',
            metric_type='accuracy',
            metric_value=0.7
        ),
        EvalResult(
            task_type='image-classification',
            dataset_type='beans',
            dataset_name='Beans',
            metric_type='f1',
            metric_value=0.65
        )
    ]
)
card = ModelCard.from_template(card_data)
card.data
```

Which should leave you with the following `card.data`:

```
language: en
license: mit
model-index:
- name: my-cool-model
  results:
  - task:
      type: image-classification
    dataset:
      name: Beans
      type: beans
    metrics:
    - type: accuracy
      value: 0.7
    - type: f1
      value: 0.65
```


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/guides/model-cards.md" />

### Upload files to the Hub
https://huggingface.co/docs/huggingface_hub/main/guides/upload.md

# Upload files to the Hub

Sharing your files and work is an important aspect of the Hub. The `huggingface_hub` library offers several options for uploading your files to the Hub. You can use these functions independently or integrate them into your library, making it more convenient for your users to interact with the Hub.

Whenever you want to upload files to the Hub, you need to log in to your Hugging Face account. For more details about authentication, check out [this section](../quick-start#authentication).

## Upload a file

Once you've created a repository with [create_repo()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_repo), you can upload a file to your repository using [upload_file()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_file).

Specify the path of the file to upload, where you want to upload the file to in the repository, and the name of the repository you want to add the file to. Depending on your repository type, you can optionally set the repository type as a `dataset`, `model`, or `space`.

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.upload_file(
...     path_or_fileobj="/path/to/local/folder/README.md",
...     path_in_repo="README.md",
...     repo_id="username/test-dataset",
...     repo_type="dataset",
... )
```

## Upload a folder

Use the [upload_folder()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_folder) function to upload a local folder to an existing repository. Specify the path of the local folder
to upload, where you want to upload the folder to in the repository, and the name of the repository you want to add the
folder to. Depending on your repository type, you can optionally set the repository type as a `dataset`, `model`, or `space`.

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()

# Upload all the content from the local folder to your remote Space.
# By default, files are uploaded at the root of the repo
>>> api.upload_folder(
...     folder_path="/path/to/local/space",
...     repo_id="username/my-cool-space",
...     repo_type="space",
... )
```

By default, the `.gitignore` file is taken into account to determine which files should be committed. We first check if a `.gitignore` file is part of the commit, and if not, we check if one exists on the Hub. Please be aware that only a `.gitignore` file present at the root of the directory will be used. We do not check for `.gitignore` files in subdirectories.

If you don't want to use a hardcoded `.gitignore` file, you can use the `allow_patterns` and `ignore_patterns` arguments to filter which files to upload. These parameters accept either a single pattern or a list of patterns. Patterns are Standard Wildcards (globbing patterns) as documented [here](https://tldp.org/LDP/GNU-Linux-Tools-Summary/html/x11655.htm). If both `allow_patterns` and `ignore_patterns` are provided, both constraints apply.

Besides the `.gitignore` file and the allow/ignore patterns, any `.git/` folder present in any subdirectory will be ignored.

```py
>>> api.upload_folder(
...     folder_path="/path/to/local/folder",
...     path_in_repo="my-dataset/train", # Upload to a specific folder
...     repo_id="username/test-dataset",
...     repo_type="dataset",
...     ignore_patterns="**/logs/*.txt", # Ignore all text logs
... )
```

You can also use the `delete_patterns` argument to specify files you want to delete from the repo in the same commit.
This can prove useful if you want to clean a remote folder before pushing files to it and you don't know which files
already exist.

The example below uploads the local `./logs` folder to the remote `/experiment/logs/` folder. Only txt files are uploaded,
but before that, all previous logs on the repo are deleted, all in a single commit.
```py
>>> api.upload_folder(
...     folder_path="/path/to/local/folder/logs",
...     repo_id="username/trained-model",
...     path_in_repo="experiment/logs/",
...     allow_patterns="*.txt", # Upload all local text files
...     delete_patterns="*.txt", # Delete all remote text files before
... )
```

## Upload from the CLI

You can use the `hf upload` command from the terminal to directly upload files to the Hub. Internally it uses the same [upload_file()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_file) and [upload_folder()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_folder) helpers described above.

You can either upload a single file or an entire folder:

```bash
# Usage:  hf upload [repo_id] [local_path] [path_in_repo]
>>> hf upload Wauplin/my-cool-model ./models/model.safetensors model.safetensors
https://huggingface.co/Wauplin/my-cool-model/blob/main/model.safetensors

>>> hf upload Wauplin/my-cool-model ./models .
https://huggingface.co/Wauplin/my-cool-model/tree/main
```

`local_path` and `path_in_repo` are optional and can be implicitly inferred. If `local_path` is not set, the tool will
check if a local folder or file has the same name as the `repo_id`. If that's the case, its content will be uploaded.
Otherwise, an exception is raised asking the user to explicitly set `local_path`. In any case, if `path_in_repo` is not
set, files are uploaded at the root of the repo.

For more details about the CLI upload command, please refer to the [CLI guide](./cli#hf-upload).

## Upload a large folder

In most cases, the [upload_folder()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_folder) method and `hf upload` command should be the go-to solutions to upload files to the Hub. They ensure a single commit will be made, handle a lot of use cases, and fail explicitly when something goes wrong. However, when dealing with a large amount of data, you will usually prefer a resilient process, even if it leads to more commits or requires more CPU usage. The [upload_large_folder()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_large_folder) method has been implemented in that spirit:
- it is resumable: the upload process is split into many small tasks (hashing files, pre-uploading them, and committing them). Each time a task is completed, the result is cached locally in a `./.cache/huggingface` folder inside the folder you are trying to upload. This way, restarting the process after an interruption will skip all completed tasks and resume from where it left off.
- it is multi-threaded: hashing large files and pre-uploading them benefits a lot from multithreading if your machine allows it.
- it is resilient to errors: a high-level retry mechanism has been added to retry each independent task indefinitely until it passes (no matter if it's an `OSError`, `ConnectionError`, `PermissionError`, etc.). This mechanism is double-edged: if transient errors happen, the process will continue and retry; if permanent errors happen (e.g. permission denied), it will retry indefinitely without solving the root cause.

If you want more technical details about how `upload_large_folder` is implemented under the hood, please have a look at the [upload_large_folder()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_large_folder) package reference.

Here is how to use [upload_large_folder()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_large_folder) in a script. The method signature is very similar to [upload_folder()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_folder):

```py
>>> api.upload_large_folder(
...     repo_id="HuggingFaceM4/Docmatix",
...     repo_type="dataset",
...     folder_path="/path/to/local/docmatix",
... )
```

You will see the following output in your terminal:
```
Repo created: https://huggingface.co/datasets/HuggingFaceM4/Docmatix
Found 5 candidate files to upload
Recovering from metadata files: 100%|█████████████████████████████████████| 5/5 [00:00<00:00, 542.66it/s]

---------- 2024-07-22 17:23:17 (0:00:00) ----------
Files:   hashed 5/5 (5.0G/5.0G) | pre-uploaded: 0/5 (0.0/5.0G) | committed: 0/5 (0.0/5.0G) | ignored: 0
Workers: hashing: 0 | get upload mode: 0 | pre-uploading: 5 | committing: 0 | waiting: 11
---------------------------------------------------
```

First, the repo is created if it didn't exist before. Then, the local folder is scanned for files to upload. For each file, we try to recover metadata information (from a previously interrupted upload). From there, workers are launched and a status update is printed every minute. Here, we can see that 5 files have already been hashed but not pre-uploaded. 5 workers are pre-uploading files while the other 11 are waiting for a task.

A command line is also provided. You can define the number of workers and the level of verbosity in the terminal:

```sh
hf upload-large-folder HuggingFaceM4/Docmatix --repo-type=dataset /path/to/local/docmatix --num-workers=16
```

> [!TIP]
> For large uploads, you have to set `repo_type="model"` or `--repo-type=model` explicitly. Usually, this information is implicit in all other `HfApi` methods. Requiring it here avoids data being uploaded to a repository with the wrong type; if that happens, you'll have to re-upload everything.

> [!WARNING]
> While being much more robust to upload large folders, `upload_large_folder` is more limited than [upload_folder()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_folder) feature-wise. In practice:
> - you cannot set a custom `path_in_repo`. If you want to upload to a subfolder, you need to set the proper structure locally.
> - you cannot set a custom `commit_message` and `commit_description` since multiple commits are created.
> - you cannot delete from the repo while uploading. Please make a separate commit first.
> - you cannot create a PR directly. Please create a PR first (from the UI or using [create_pull_request()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_pull_request)) and then commit to it by passing `revision`.

### Tips and tricks for large uploads

There are some limitations to be aware of when dealing with a large amount of data in your repo. Given the time it takes to stream the data, getting an upload/push to fail at the end of the process or encountering a degraded experience, be it on hf.co or when working locally, can be very annoying.

Check out our [Repository limitations and recommendations](https://huggingface.co/docs/hub/repositories-recommendations) guide for best practices on how to structure your repositories on the Hub. Let's move on with some practical tips to make your upload process as smooth as possible.

- **Start small**: We recommend starting with a small amount of data to test your upload script. It's easier to iterate on a script when failing takes only a little time.
- **Expect failures**: Streaming large amounts of data is challenging. You don't know what can happen, but it's always best to consider that something will fail at least once, no matter if it's due to your machine, your connection, or our servers. For example, if you plan to upload a large number of files, it's best to keep track locally of which files you already uploaded before uploading the next batch. You are ensured that an LFS file that is already committed will never be uploaded twice, but checking it client-side can still save some time. This is what [upload_large_folder()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_large_folder) does for you.
- **Use `hf_xet`**: this leverages the new storage backend for the Hub, is written in Rust, and is now available for everyone to use. In fact, `hf_xet` is already enabled by default when using `huggingface_hub`! For maximum performance, set [`HF_XET_HIGH_PERFORMANCE=1`](../package_reference/environment_variables.md#hf_xet_high_performance) as an environment variable. Be aware that when high performance mode is enabled, the tool will try to use all available bandwidth and CPU cores.
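
As a quick sketch (repo id and path below are placeholders), enabling high performance mode for a shell session could look like this:

```shell
# Enable hf_xet high performance mode for this shell session.
# Warning: it will try to use all available bandwidth and CPU cores.
export HF_XET_HIGH_PERFORMANCE=1

# Then run your upload as usual, e.g. (placeholder repo id and path):
#   hf upload username/my-large-model ./checkpoints
```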

## Advanced features

In most cases, you won't need more than [upload_file()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_file) and [upload_folder()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_folder) to upload your files to the Hub.
However, `huggingface_hub` has more advanced features to make things easier. Let's have a look at them!

### Faster Uploads

Take advantage of faster uploads through `hf_xet`, the Python binding to the [`xet-core`](https://github.com/huggingface/xet-core) library that enables chunk-based deduplication for faster uploads and downloads. `hf_xet` integrates seamlessly with `huggingface_hub`, but uses the Rust `xet-core` library and Xet storage instead of LFS. 

`hf_xet` uses the Xet storage system, which breaks files down into immutable chunks and stores collections of these chunks (called blocks or xorbs) remotely, retrieving them to reassemble the file when requested. When uploading, after confirming the user is authorized to write to the repo, `hf_xet` scans the files, breaks them down into chunks, and collects those chunks into xorbs (deduplicating across known chunks). It then uploads these xorbs to the Xet content-addressable service (CAS), which verifies the integrity of the xorbs, registers the xorb metadata along with the LFS SHA256 hash (to support lookup/download), and writes the xorbs to remote storage.

To enable it, simply install the latest version of `huggingface_hub`:

```bash
pip install -U "huggingface_hub"
```

As of `huggingface_hub` 0.32.0, this will also install `hf_xet`. 

All other `huggingface_hub` APIs will continue to work without any modification. To learn more about the benefits of Xet storage and `hf_xet`, refer to this [section](https://huggingface.co/docs/hub/xet/index).

**Cluster / Distributed Filesystem Upload Considerations**

When uploading from a cluster, the files being uploaded often reside on a distributed or networked filesystem (NFS, EBS, Lustre, FSx, etc.). Xet storage will chunk those files and write them into blocks (also called xorbs) locally, uploading each block once it is completed. For better performance when uploading from a distributed filesystem, make sure to set [`HF_XET_CACHE`](../package_reference/environment_variables#hfxetcache) to a directory on a local disk (e.g. a local NVMe or SSD disk). The default location for the Xet cache is under `HF_HOME` (`~/.cache/huggingface/xet`), which, being in the user's home directory, is often also located on the distributed filesystem.
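
For example (the cache path below is a placeholder; pick any directory on a node-local disk):

```shell
# Point the Xet cache at a node-local disk instead of the networked home directory.
export HF_XET_CACHE=/tmp/xet-cache   # placeholder: use your local NVMe/SSD path
mkdir -p "$HF_XET_CACHE"
```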

### Non-blocking uploads

In some cases, you want to push data without blocking your main thread. This is particularly useful for uploading logs and
artifacts while a training is running. To do so, you can use the `run_as_future` argument in both [upload_file()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_file) and
[upload_folder()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_folder). This will return a [`concurrent.futures.Future`](https://docs.python.org/3/library/concurrent.futures.html#future-objects)
object that you can use to check the status of the upload.

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> future = api.upload_folder( # Upload in the background (non-blocking action)
...     repo_id="username/my-model",
...     folder_path="checkpoints-001",
...     run_as_future=True,
... )
>>> future
Future(...)
>>> future.done()
False
>>> future.result() # Wait for the upload to complete (blocking action)
...
```

> [!TIP]
> Background jobs are queued when using `run_as_future=True`. This means that you are guaranteed that the jobs will be
> executed in the correct order.

Even though background jobs are mostly useful to upload data/create commits, you can queue any method you like using
[run_as_future()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.run_as_future). For instance, you can use it to create a repo and then upload data to it in the background. The
built-in `run_as_future` argument in upload methods is just a convenience alias for it.

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.run_as_future(api.create_repo, "username/my-model", exists_ok=True)
Future(...)
>>> api.upload_file(
...     repo_id="username/my-model",
...     path_in_repo="file.txt",
...     path_or_fileobj=b"file content",
...     run_as_future=True,
... )
Future(...)
```

### Upload a folder by chunks

[upload_folder()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_folder) makes it easy to upload an entire folder to the Hub. However, for large folders (thousands of files or
hundreds of GB), we recommend using [upload_large_folder()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_large_folder), which splits the upload into multiple commits. See the [Upload a large folder](#upload-a-large-folder) section for more details.
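
The call mirrors `upload_folder()`, with the difference that `repo_type` is required. A minimal sketch, with hypothetical repo and folder names:

```py
from huggingface_hub import HfApi

def upload_checkpoints(repo_id: str, folder_path: str) -> None:
    """Upload a large local folder in a resumable, multi-commit fashion."""
    api = HfApi()
    api.upload_large_folder(
        repo_id=repo_id,
        repo_type="dataset",  # required for upload_large_folder
        folder_path=folder_path,
    )

# upload_checkpoints("username/my-large-dataset", "path/to/local/folder")
```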


### Scheduled uploads

The Hugging Face Hub makes it easy to save and version data. However, there are some limitations when updating the same file thousands of times. For instance, you might want to save logs of a training process or user
feedback on a deployed Space. In these cases, uploading the data as a dataset on the Hub makes sense, but it can be hard to do properly. The main reason is that you don't want to version every update of your data because it'll make the git repository unusable. The [CommitScheduler](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.CommitScheduler) class offers a solution to this problem.

The idea is to run a background job that regularly pushes a local folder to the Hub. Let's assume you have a
Gradio Space that takes as input some text and generates two translations of it. Then, the user can select their preferred translation. For each run, you want to save the input, output, and user preference to analyze the results. This is a
perfect use case for [CommitScheduler](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.CommitScheduler); you want to save data to the Hub (potentially millions of user feedback entries), but
you don't _need_ to save in real-time each user's input. Instead, you can save the data locally in a JSON file and
upload it every 10 minutes. For example:

```py
>>> import json
>>> import uuid
>>> from pathlib import Path
>>> import gradio as gr
>>> from huggingface_hub import CommitScheduler

# Define the file where to save the data. Use UUID to make sure not to overwrite existing data from a previous run.
>>> feedback_file = Path("user_feedback/") / f"data_{uuid.uuid4()}.json"
>>> feedback_folder = feedback_file.parent

# Schedule regular uploads. Remote repo and local folder are created if they don't already exist.
>>> scheduler = CommitScheduler(
...     repo_id="report-translation-feedback",
...     repo_type="dataset",
...     folder_path=feedback_folder,
...     path_in_repo="data",
...     every=10,
... )

# Define the function that will be called when a user submits feedback (to be called in Gradio)
>>> def save_feedback(input_text: str, output_1: str, output_2: str, user_choice: int) -> None:
...     """
...     Append input/outputs and user feedback to a JSON Lines file using a thread lock to avoid concurrent writes from different users.
...     """
...     with scheduler.lock:
...         with feedback_file.open("a") as f:
...             f.write(json.dumps({"input": input_text, "output_1": output_1, "output_2": output_2, "user_choice": user_choice}))
...             f.write("\n")

# Start Gradio
>>> with gr.Blocks() as demo:
>>>     ... # define Gradio demo + use `save_feedback`
>>> demo.launch()
```

And that's it! User inputs/outputs and feedback will be available as a dataset on the Hub. By using a unique JSON file name, you are guaranteed not to overwrite data from a previous run or from other
Spaces/replicas pushing concurrently to the same repository.

For more details about the [CommitScheduler](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.CommitScheduler), here is what you need to know:
- **append-only:**
    It is assumed that you will only add content to the folder. You must only append data to existing files or create
    new files. Deleting or overwriting a file might corrupt your repository.
- **git history**:
    The scheduler will commit the folder every `every` minutes. To avoid polluting the git repository too much, it is
    recommended to set a minimum value of 5 minutes. Besides, the scheduler is designed to avoid empty commits: if no
    new content is detected in the folder, the scheduled commit is dropped.
- **errors:**
    The scheduler runs as a background thread. It is started when you instantiate the class and never stops. In particular,
    if an error occurs during the upload (for example, a connection issue), the scheduler will silently ignore it and retry
    at the next scheduled commit.
- **thread-safety:**
    In most cases it is safe to assume that you can write to a file without having to worry about a lock file. The
    scheduler will not crash or be corrupted if you write content to the folder while it's uploading. In practice,
    _it is possible_ that concurrency issues happen for heavily loaded apps. In this case, we advise using the
    `scheduler.lock` lock to ensure thread-safety. The lock is held only while the scheduler scans the folder for
    changes, not while it uploads data. You can safely assume that it will not affect the user experience on your Space.

#### Space persistence demo

Persisting data from a Space to a Dataset on the Hub is the main use case for [CommitScheduler](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.CommitScheduler). Depending on the use
case, you might want to structure your data differently. The structure has to be robust to concurrent users and
restarts which often implies generating UUIDs. Besides robustness, you should upload data in a format readable by the 🤗 Datasets library for later reuse. We created a [Space](https://huggingface.co/spaces/Wauplin/space_to_dataset_saver)
that demonstrates how to save several different data formats (you may need to adapt it for your own specific needs).

#### Custom uploads

[CommitScheduler](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.CommitScheduler) assumes your data is append-only and should be uploaded "as is". However, you
might want to customize the way data is uploaded. You can do that by creating a class inheriting from [CommitScheduler](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.CommitScheduler)
and overriding the `push_to_hub` method (feel free to override it any way you want). You are guaranteed it will
be called every `every` minutes in a background thread. You don't have to worry about concurrency and errors, but you
must be careful about other aspects, such as pushing empty commits or duplicated data.

In the (simplified) example below, we override `push_to_hub` to zip all PNG files into a single archive to avoid
overloading the repo on the Hub:

```py
import tempfile
import zipfile
from pathlib import Path

from huggingface_hub import CommitScheduler


class ZipScheduler(CommitScheduler):
    def push_to_hub(self):
        # 1. List PNG files
        png_files = list(self.folder_path.glob("*.png"))
        if len(png_files) == 0:
            return None  # return early if nothing to commit

        # 2. Zip png files in a single archive
        with tempfile.TemporaryDirectory() as tmpdir:
            archive_path = Path(tmpdir) / "train.zip"
            with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as archive:
                for png_file in png_files:
                    archive.write(filename=png_file, arcname=png_file.name)

            # 3. Upload archive
            self.api.upload_file(..., path_or_fileobj=archive_path)

        # 4. Delete local png files to avoid re-uploading them later
        for png_file in png_files:
            png_file.unlink()
```

When you overwrite `push_to_hub`, you have access to the attributes of [CommitScheduler](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.CommitScheduler) and especially:
- [HfApi](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi) client: `api`
- Folder parameters: `folder_path` and `path_in_repo`
- Repo parameters: `repo_id`, `repo_type`, `revision`
- The thread lock: `lock`

> [!TIP]
> For more examples of custom schedulers, check out our [demo Space](https://huggingface.co/spaces/Wauplin/space_to_dataset_saver)
> containing different implementations depending on your use cases.

### create_commit

The [upload_file()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_file) and [upload_folder()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_folder) functions are high-level APIs that are generally convenient to use. We recommend
trying these functions first if you don't need to work at a lower level. However, if you want to work at the commit level,
you can use the [create_commit()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_commit) function directly.

There are three types of operations supported by [create_commit()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_commit):

- [CommitOperationAdd](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.CommitOperationAdd) uploads a file to the Hub. If the file already exists, the file contents are overwritten. This operation accepts two arguments:

  - `path_in_repo`: the repository path to upload a file to.
  - `path_or_fileobj`: either a path to a file on your filesystem or a file-like object. This is the content of the file to upload to the Hub.

- [CommitOperationDelete](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.CommitOperationDelete) removes a file or a folder from a repository. This operation accepts `path_in_repo` as an argument.

- [CommitOperationCopy](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.CommitOperationCopy) copies a file within a repository. This operation accepts three arguments:

  - `src_path_in_repo`: the repository path of the file to copy.
  - `path_in_repo`: the repository path where the file should be copied.
  - `src_revision`: optional - the revision of the file to copy if you want to copy a file from a different branch/revision.

For example, if you want to upload two files, delete a file and a folder, and copy a file in a Hub repository:

1. Use the appropriate `CommitOperation` for each change:

```py
>>> from huggingface_hub import HfApi, CommitOperationAdd, CommitOperationCopy, CommitOperationDelete
>>> api = HfApi()
>>> operations = [
...     CommitOperationAdd(path_in_repo="LICENSE.md", path_or_fileobj="~/repo/LICENSE.md"),
...     CommitOperationAdd(path_in_repo="weights.h5", path_or_fileobj="~/repo/weights-final.h5"),
...     CommitOperationDelete(path_in_repo="old-weights.h5"),
...     CommitOperationDelete(path_in_repo="logs/"),
...     CommitOperationCopy(src_path_in_repo="image.png", path_in_repo="duplicate_image.png"),
... ]
```

2. Pass your operations to [create_commit()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_commit):

```py
>>> api.create_commit(
...     repo_id="lysandre/test-model",
...     operations=operations,
...     commit_message="Upload my model weights and license",
... )
```

In addition to [upload_file()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_file) and [upload_folder()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_folder), the following functions also use [create_commit()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_commit) under the hood:

- [delete_file()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.delete_file) deletes a single file from a repository on the Hub.
- [delete_folder()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.delete_folder) deletes an entire folder from a repository on the Hub.
- [metadata_update()](/docs/huggingface_hub/main/en/package_reference/cards#huggingface_hub.metadata_update) updates a repository's metadata.
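
For instance, a helper that removes an outdated weights file and a logs folder could look like this (repo and paths are illustrative):

```py
from huggingface_hub import HfApi

def prune_repo(repo_id: str) -> None:
    """Delete an outdated weights file and a logs folder (names illustrative)."""
    api = HfApi()
    api.delete_file(path_in_repo="old-weights.h5", repo_id=repo_id)
    api.delete_folder(path_in_repo="logs/", repo_id=repo_id)
```

Each call creates one commit under the hood, so grouping several deletions into a single [create_commit()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_commit) call keeps the git history cleaner.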

For more detailed information, take a look at the [HfApi](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi) reference.

### Preupload LFS files before commit

In some cases, you might want to upload huge files to S3 **before** making the commit call. For example, if you are
committing a dataset in several shards that are generated in-memory, you would need to upload the shards one by one
to avoid an out-of-memory issue. A solution is to upload each shard as a separate commit on the repo. While
perfectly valid, this solution has the drawback of potentially messing up the git history by generating tens of commits.
To overcome this issue, you can upload your files one by one to S3 and then create a single commit at the end. This
is possible using [preupload_lfs_files()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.preupload_lfs_files) in combination with [create_commit()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_commit).

> [!WARNING]
> This is a power-user method. Directly using [upload_file()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_file), [upload_folder()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_folder) or [create_commit()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_commit) instead of handling
> the low-level logic of pre-uploading files is the way to go in the vast majority of cases. The main caveat of
> [preupload_lfs_files()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.preupload_lfs_files) is that until the commit is actually made, the uploaded files are not accessible on the repo on
> the Hub. If you have a question, feel free to ping us on our Discord or in a GitHub issue.

Here is a simple example illustrating how to pre-upload files:

```py
>>> from huggingface_hub import CommitOperationAdd, preupload_lfs_files, create_commit, create_repo

>>> repo_id = create_repo("test_preupload").repo_id

>>> operations = [] # List of all `CommitOperationAdd` objects that will be generated
>>> for i in range(5):
...     content = ... # generate binary content
...     addition = CommitOperationAdd(path_in_repo=f"shard_{i}_of_5.bin", path_or_fileobj=content)
...     preupload_lfs_files(repo_id, additions=[addition])
...     operations.append(addition)

>>> # Create commit
>>> create_commit(repo_id, operations=operations, commit_message="Commit all shards")
```

First, we create the [CommitOperationAdd](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.CommitOperationAdd) objects one by one. In a real-world example, those would contain the
generated shards. Each file is uploaded before generating the next one. During the [preupload_lfs_files()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.preupload_lfs_files) step, **the
`CommitOperationAdd` object is mutated**. You should only use it to pass it directly to [create_commit()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_commit). The main
update of the object is that **the binary content is removed** from it, meaning that it will be garbage-collected if
you don't store another reference to it. This is expected, as we don't want to keep in memory content that has
already been uploaded. Finally, we create the commit by passing all the operations to [create_commit()](/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.create_commit). You can pass
additional operations (add, delete or copy) that have not been processed yet, and they will be handled correctly.


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/en/guides/upload.md" />
