# Huggingface_Hub

## Docs

- [🤗 Hub client library [[hub-client-library]]](https://huggingface.co/docs/huggingface_hub/main/ko/index.md)
- [Installation [[installation]]](https://huggingface.co/docs/huggingface_hub/main/ko/installation.md)
- [Quickstart [[quickstart]]](https://huggingface.co/docs/huggingface_hub/main/ko/quick-start.md)
- [TensorBoard logger[[tensorboard-logger]]](https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/tensorboard.md)
- [Webhooks server[[webhooks-server]]](https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/webhooks_server.md)
- [HfApi Client[[hfapi-client]]](https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/hf_api.md)
- [Managing collections[[managing-collections]]](https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/collections.md)
- [Cache-system reference[[cache-system-reference]]](https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/cache.md)
- [Repository Cards[[repository-cards]]](https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/cards.md)
- [Downloading files[[downloading-files]]](https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/file_download.md)
- [Overview[[overview]]](https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/overview.md)
- [Interacting with Discussions and Pull Requests[[interacting-with-discussions-and-pull-requests]]](https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/community.md)
- [Filesystem API[[filesystem-api]]](https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/hf_file_system.md)
- [Inference Endpoints [[inference-endpoints]]](https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/inference_endpoints.md)
- [Managing your Space runtime[[managing-your-space-runtime]]](https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/space_runtime.md)
- [Inference types[[inference-types]]](https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/inference_types.md)
- [Utilities[[utilities]]](https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/utilities.md)
- [Inference[[inference]]](https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/inference_client.md)
- [Environment variables[[environment-variables]]](https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/environment_variables.md)
- [Mixins & serialization methods[[mixins--serialization-methods]]](https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/mixins.md)
- [Serialization[[serialization]]](https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/serialization.md)
- [Login and logout[[login-and-logout]]](https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/login.md)
- [Webhooks server[[webhooks-server]]](https://huggingface.co/docs/huggingface_hub/main/ko/guides/webhooks_server.md)
- [Collections[[collections]]](https://huggingface.co/docs/huggingface_hub/main/ko/guides/collections.md)
- [Download files from the Hub[[download-files-from-the-hub]]](https://huggingface.co/docs/huggingface_hub/main/ko/guides/download.md)
- [Manage your Space[[manage-your-space]]](https://huggingface.co/docs/huggingface_hub/main/ko/guides/manage-spaces.md)
- [Run inference on servers[[run-inference-on-servers]]](https://huggingface.co/docs/huggingface_hub/main/ko/guides/inference.md)
- [How-to guides [[howto-guides]]](https://huggingface.co/docs/huggingface_hub/main/ko/guides/overview.md)
- [Interact with Discussions and Pull Requests[[interact-with-discussions-and-pull-requests]]](https://huggingface.co/docs/huggingface_hub/main/ko/guides/community.md)
- [Interact with the Hub through the Filesystem API[[interact-with-the-hub-through-the-filesystem-api]]](https://huggingface.co/docs/huggingface_hub/main/ko/guides/hf_file_system.md)
- [Inference Endpoints[[inference-endpoints]]](https://huggingface.co/docs/huggingface_hub/main/ko/guides/inference_endpoints.md)
- [Search the Hub[[search-the-hub]]](https://huggingface.co/docs/huggingface_hub/main/ko/guides/search.md)
- [Manage the `huggingface_hub` cache system[[manage-huggingfacehub-cache-system]]](https://huggingface.co/docs/huggingface_hub/main/ko/guides/manage-cache.md)
- [Command Line Interface (CLI) [[command-line-interface]]](https://huggingface.co/docs/huggingface_hub/main/ko/guides/cli.md)
- [Integrate any ML framework with the Hub[[integrate-any-ml-framework-with-the-hub]]](https://huggingface.co/docs/huggingface_hub/main/ko/guides/integrations.md)
- [Create and share Model Cards[[create-and-share-model-cards]]](https://huggingface.co/docs/huggingface_hub/main/ko/guides/model-cards.md)
- [Upload files to the Hub[[upload-files-to-the-hub]]](https://huggingface.co/docs/huggingface_hub/main/ko/guides/upload.md)

### 🤗 Hub client library [[hub-client-library]]
https://huggingface.co/docs/huggingface_hub/main/ko/index.md

# 🤗 Hub client library [[hub-client-library]]

The `huggingface_hub` library lets you interact with the [Hugging Face Hub](https://hf.co), a machine learning platform for creators and collaborators. Discover pre-trained models and datasets for your projects, or try out the hundreds of machine learning apps hosted on the Hub. You can also share your own models and datasets with the community. The `huggingface_hub` library provides a simple way to do all of this with Python.

Read the [quick start guide](quick-start) to get up and running with the `huggingface_hub` library. You will learn how to download files from the Hub, create a repository, and upload files. Keep reading to learn how to manage your repositories on the 🤗 Hub, how to participate in discussions, and how to access the Inference API.


<div class="mt-10">
  <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5">

    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./guides/overview">
      <div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">How-to guides</div>
      <p class="text-gray-700">Practical guides to help you achieve a specific goal. Take a look at these guides to learn how to use huggingface_hub to solve real-world problems.</p>
    </a>

    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./package_reference/overview">
      <div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Reference</div>
      <p class="text-gray-700">Exhaustive and technical description of the classes and methods of huggingface_hub.</p>
    </a>

    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./concepts/git_vs_http">
      <div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Conceptual guides</div>
      <p class="text-gray-700">High-level explanations for building a better understanding of the philosophy behind huggingface_hub.</p>
    </a>

  </div>
</div>

<!--
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./tutorials/overview"
  ><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Tutorials</div>
  <p class="text-gray-700">Learn the basics and become familiar with using huggingface_hub to programmatically interact with the 🤗 Hub!</p>
</a> -->

## Contribute [[contribute]]

All contributions to `huggingface_hub` are welcomed and equally valued! 🤗 Besides adding to or fixing existing issues in the code, you can also help the community by improving the documentation so it stays accurate and up-to-date, answering questions on issues, and requesting new features you think would improve the library. Take a look at the [contribution guide](https://github.com/huggingface/huggingface_hub/blob/main/CONTRIBUTING.md) to learn more about how to submit a new issue or feature request, how to submit a pull request, and how to test your contributions to make sure everything works as expected.

Contributors should also follow our [code of conduct](https://github.com/huggingface/huggingface_hub/blob/main/CODE_OF_CONDUCT.md) to create an inclusive and welcoming collaborative space for everyone.


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/index.md" />

### Installation [[installation]]
https://huggingface.co/docs/huggingface_hub/main/ko/installation.md

# Installation [[installation]]

Before you start, you will need to set up your environment by installing the appropriate packages.

`huggingface_hub` is tested on **Python 3.9+**.

## Install with pip [[install-with-pip]]

It is highly recommended to install `huggingface_hub` in a [virtual environment](https://docs.python.org/3/library/venv.html).
If you are unfamiliar with Python virtual environments, take a look at this [guide](https://packaging.python.org/en/latest/guides/installing-using-pip-and-virtual-environments/).
A virtual environment makes it easier to manage different projects and avoid compatibility issues between dependencies.

Start by creating a virtual environment in your project directory:

```bash
python -m venv .env
```

Activate the virtual environment. On Linux and macOS:

```bash
source .env/bin/activate
```

On Windows:

```bash
.env/Scripts/activate
```

Now you're ready to install `huggingface_hub` from the [PyPI registry](https://pypi.org/project/huggingface-hub/):

```bash
pip install --upgrade huggingface_hub
```

Once done, [check the installation](#check-installation) to make sure it works correctly.

### Install optional dependencies [[install-optional-dependencies]]

Some dependencies of `huggingface_hub` are [optional](https://setuptools.pypa.io/en/latest/userguide/dependency_management.html#optional-dependencies) because they are not required to run the core features of `huggingface_hub`. However, some features of `huggingface_hub` may not be available if the optional dependencies aren't installed.

Optional dependencies can be installed via `pip`:
```bash
# Install dependencies for both torch-specific and CLI-specific features.
pip install 'huggingface_hub[cli,torch]'
```

Here is the list of optional dependencies in `huggingface_hub`:
- `cli`: provides a more convenient CLI interface for `huggingface_hub`.
- `fastai`, `torch`: dependencies required to run framework-specific features.
- `dev`: dependencies required to contribute to the library. Includes `testing` (to run tests), `typing` (to run the type checker) and `quality` (to run linters).

### Install from source [[install-from-source]]

In some cases, it is better to install `huggingface_hub` directly from source.
This allows you to use the bleeding-edge `main` version rather than the latest stable release.
The `main` version is useful for staying up-to-date with the latest developments, for instance when a bug has been fixed since the last official release but a new release hasn't been rolled out yet.

However, this also means the `main` version may not always be stable. We strive to keep the `main` version operational, and most issues are usually resolved within a few hours or a day. If you run into a problem, please open an issue so we can fix it even sooner!

```bash
pip install git+https://github.com/huggingface/huggingface_hub
```

When installing from source, you can also specify a particular branch. This is useful if you want to test a new feature or a new bug fix that has not been merged yet:

```bash
pip install git+https://github.com/huggingface/huggingface_hub@my-feature-branch
```

Once done, [check the installation](#check-installation) to make sure it works correctly.

### Editable install [[editable-install]]

Installing from source allows you to set up an [editable install](https://pip.pypa.io/en/stable/topics/local-project-installs/#editable-installs).
This is a more advanced installation, used when you want to contribute to `huggingface_hub` and need to test changes in the code. You need to clone a local copy of `huggingface_hub` on your machine.

```bash
# First, clone the repository locally.
git clone https://github.com/huggingface/huggingface_hub.git

# Then, install it with the -e flag.
cd huggingface_hub
pip install -e .
```

These commands link the folder you cloned the repository into with your Python library paths.
Python will now look inside the cloned folder in addition to the normal library paths.
For example, if your Python packages are typically installed in `./.venv/lib/python3.13/site-packages/`, Python will also search the cloned folder `./huggingface_hub/`.
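
To confirm which copy of a package Python actually resolves, which is handy after an editable install, you can ask the import system directly. A minimal standard-library sketch (the stdlib `json` package is used as a stand-in here, since the exact `huggingface_hub` path depends on your machine):

```python
import importlib.util

# Ask Python where it would load a package from. After an editable
# install of huggingface_hub, find_spec("huggingface_hub").origin
# would point into your cloned folder instead of site-packages.
spec = importlib.util.find_spec("json")  # stand-in for "huggingface_hub"
print(spec.origin)  # a path ending in .../json/__init__.py
```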

## Install with conda [[install-with-conda]]

If you are more familiar with it, you can also install `huggingface_hub` through the [conda-forge channel](https://anaconda.org/conda-forge/huggingface_hub):


```bash
conda install -c conda-forge huggingface_hub
```

Once done, [check the installation](#check-installation) to make sure it works correctly.

## Check installation [[check-installation]]

Once installed, check that `huggingface_hub` works properly by running the following command:

```bash
python -c "from huggingface_hub import model_info; print(model_info('gpt2'))"
```

This command fetches information from the Hub about the [gpt2](https://huggingface.co/gpt2) model.
The output should look like this:

```text
Model Name: gpt2
Tags: ['pytorch', 'tf', 'jax', 'tflite', 'rust', 'safetensors', 'gpt2', 'text-generation', 'en', 'doi:10.57967/hf/0039', 'transformers', 'exbert', 'license:mit', 'has_space']
Task: text-generation
```

## Windows limitations [[windows-limitations]]

With our goal of making good ML accessible everywhere, we built `huggingface_hub` to be a cross-platform library and in particular to work correctly on both Unix-based and Windows systems. However, there are a few cases where `huggingface_hub` has limitations when run on Windows. Here is an exhaustive list of known issues. Please let us know if you encounter any undocumented problem by opening [an issue on GitHub](https://github.com/huggingface/huggingface_hub/issues/new/choose).

- `huggingface_hub`'s cache system relies on symlinks to efficiently cache files downloaded from the Hub. On Windows, you must activate developer mode or run your script as an administrator to enable symlinks. If they are not activated, the cache system still works, but in a non-optimized manner. Please read the [cache limitations](./guides/manage-cache#limitations) section for more details.
- File paths on the Hub can contain special characters (e.g. `"path/to?/my/file"`). Windows is more restrictive about [special characters](https://learn.microsoft.com/en-us/windows/win32/intl/character-sets-used-in-file-names), which makes those files impossible to download on Windows. Hopefully this is a rare case. Please reach out to the repo owner if you think this is a mistake, or contact us so we can find a solution.


## Next steps [[next-steps]]

Once `huggingface_hub` is properly installed on your machine, you might want to [configure environment variables](package_reference/environment_variables) or [check out one of our guides](guides/overview) to get started.


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/installation.md" />

### Quickstart [[quickstart]]
https://huggingface.co/docs/huggingface_hub/main/ko/quick-start.md

# Quickstart [[quickstart]]

The [Hugging Face Hub](https://huggingface.co/) is the go-to place for sharing machine learning models, demos, datasets, and metrics. The `huggingface_hub` library helps you interact with the Hub without leaving your development environment. You can easily create and manage repositories, download and upload files, and get useful metadata about models and datasets.

## Installation [[installation]]

To get started, install the `huggingface_hub` library:

```bash
pip install --upgrade huggingface_hub
```

For more details, check out the [installation](./installation) guide.

## Download files [[download-files]]

Repositories on the Hub are git version-controlled, and users can download a single file or a whole repository. You can use the `hf_hub_download()` function to download files.
This function downloads a file and caches it on your local disk, so the next time you need that file it is loaded from the cache and you don't need to re-download it.

You will need the repository ID and the filename of the file you want to download. For example, to download the [Pegasus](https://huggingface.co/google/pegasus-xsum) model configuration file:

```py
>>> from huggingface_hub import hf_hub_download
>>> hf_hub_download(repo_id="google/pegasus-xsum", filename="config.json")
```
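
The caching behavior described above can be pictured with a toy stand-in (an illustration of the idea only, not the actual implementation; the real cache stores files on disk in a structured cache directory):

```python
from functools import lru_cache

# Toy model of the cache: the first call "downloads" the file, and
# later calls for the same (repo_id, filename) are served from cache.
@lru_cache(maxsize=None)
def cached_download(repo_id: str, filename: str) -> str:
    # The real library would fetch the file from the Hub here.
    return f"/fake-cache/{repo_id}/{filename}"

first = cached_download("google/pegasus-xsum", "config.json")
second = cached_download("google/pegasus-xsum", "config.json")
assert first is second  # the second call hit the cache
```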

To download a specific version of a file, use the `revision` parameter to specify a branch name, tag, or commit hash. If you choose a commit hash, it must be the full-length hash rather than the shorter 7-character commit hash:

```py
>>> from huggingface_hub import hf_hub_download
>>> hf_hub_download(
...     repo_id="google/pegasus-xsum",
...     filename="config.json",
...     revision="4d33b01d79672f27f001f6abade33f22d993b151"
... )
```

For more details and options, see the API reference for `hf_hub_download()`.

## Login [[login]]

In many cases, you must be logged in with a Hugging Face account to interact with the Hub: download private repos, upload files, create PRs, and so on.
If you don't already have an account, [create one](https://huggingface.co/join), then log in to get your [User Access Token](https://huggingface.co/docs/hub/security-tokens) from your [Settings page](https://huggingface.co/settings/tokens). The User Access Token is used to authenticate to the Hub.

Once you have your User Access Token, run the following command in your terminal:

```bash
hf auth login
# or using an environment variable
hf auth login --token $HUGGINGFACE_TOKEN
```

Alternatively, you can log in programmatically with [login()](/docs/huggingface_hub/main/ko/package_reference/login#huggingface_hub.login) in a Jupyter notebook or a script:

```py
>>> from huggingface_hub import login
>>> login()
```

It is also possible to log in programmatically without being prompted for your token by passing it directly to [login()](/docs/huggingface_hub/main/ko/package_reference/login#huggingface_hub.login), as in `login(token="hf_xxx")`. If you do so, be careful when sharing your source code. It is best practice to load the token from a secure vault instead of storing it explicitly in your codebase.
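
One common pattern, sketched below, is to read the token from an environment variable rather than writing it into the source file (the variable name `HF_TOKEN` and the commented-out `login` call are illustrative):

```python
import os
from typing import Optional

def get_hf_token() -> Optional[str]:
    # Read the token from the environment instead of hard-coding it.
    return os.environ.get("HF_TOKEN")

token = get_hf_token()
if token is not None:
    # In a real script you would now authenticate, e.g.:
    # from huggingface_hub import login
    # login(token=token)
    pass
```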

You can only be logged in to one account at a time. Logging in with a new account automatically logs you out of the previous one. Always check which account you are using with the `hf auth whoami` command.
If you want to handle several accounts in the same script, you can provide your token when calling each method. This is also useful if you don't want to store any token on your machine.

> [!WARNING]
> Once you are logged in, all requests to the Hub (even methods that don't strictly require authentication) will use your access token by default. If you want to disable the implicit use of your token, you should set the `HF_HUB_DISABLE_IMPLICIT_TOKEN` environment variable.
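
For example, the variable can be set from Python before any Hub call (a minimal sketch; `"1"` is a typical truthy value for this kind of boolean environment variable):

```python
import os

# Disable the implicit use of the stored access token for requests
# that don't strictly require authentication. Set this before
# importing or calling huggingface_hub.
os.environ["HF_HUB_DISABLE_IMPLICIT_TOKEN"] = "1"
```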

## Create a repository [[create-a-repository]]

Once you've registered and logged in, create a repository with the [create_repo()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_repo) function:

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.create_repo(repo_id="super-cool-model")
```

If you want your repository to be private, then:

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.create_repo(repo_id="super-cool-model", private=True)
```

Private repositories are not visible to anyone except yourself.

> [!TIP]
> To create a repository or to push content to the Hub, you must provide a User Access Token that has the `write` permission. You can choose the permission when creating the token in your [Settings page](https://huggingface.co/settings/tokens).

## Upload files [[upload-files]]

Use the [upload_file()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.upload_file) function to add a file to your newly created repository. You need to specify:

1. The path of the file to upload.
2. The path the file should have in the repository.
3. The repository ID of the repository you want to add the file to.

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.upload_file(
...     path_or_fileobj="/home/lysandre/dummy-test/README.md",
...     path_in_repo="README.md",
...     repo_id="lysandre/test-model",
... )
```

To upload more than one file at a time, take a look at the [Upload](./guides/upload) guide, which introduces several methods for uploading files (with or without git).

## Next steps [[next-steps]]

The `huggingface_hub` library provides an easy way for users to interact with the Hub with Python. To learn more about how to manage your files and repositories on the Hub, we recommend reading our [how-to guides](./guides/overview):

- [Manage your repository](./guides/repository) more easily.
- [Download](./guides/download) files from the Hub.
- [Upload](./guides/upload) files to the Hub.
- [Search the Hub](./guides/search) for your desired model or dataset.
- Use the [Inference API](./guides/inference) for fast inference.


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/quick-start.md" />

### TensorBoard logger[[tensorboard-logger]]
https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/tensorboard.md

# TensorBoard logger[[tensorboard-logger]]

TensorBoard is a visualization toolkit for machine learning experiments. It lets you track and visualize metrics such as loss and accuracy, visualize the model graph, view histograms, display images, and much more. TensorBoard is also well integrated with the Hugging Face Hub: when you push TensorBoard traces (such as `tfevents` files) to the Hub, it automatically detects them and starts a visualization instance.
For more information about the TensorBoard integration on the Hub, check out [this guide](https://huggingface.co/docs/hub/tensorboard).

To benefit from this integration, `huggingface_hub` provides a custom logger that pushes logs to the Hub.
It can be used as a drop-in replacement for [SummaryWriter](https://tensorboardx.readthedocs.io/en/latest/tensorboard.html) with no extra code needed.
Traces are still saved locally, and a background job pushes them to the Hub at regular intervals.

## HFSummaryWriter[[huggingface_hub.HFSummaryWriter]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.HFSummaryWriter</name><anchor>huggingface_hub.HFSummaryWriter</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_tensorboard_logger.py#L46</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The id of the repo to which the logs will be pushed.
- **logdir** (`str`, *optional*) --
  The directory where the logs will be written. If not specified, a local directory will be created by the
  underlying `SummaryWriter` object.
- **commit_every** (`int` or `float`, *optional*) --
  The frequency (in minutes) at which the logs will be pushed to the Hub. Defaults to 5 minutes.
- **squash_history** (`bool`, *optional*) --
  Whether to squash the history of the repo after each commit. Defaults to `False`. Squashing commits is
  useful to avoid degraded performances on the repo when it grows too large.
- **repo_type** (`str`, *optional*) --
  The type of the repo to which the logs will be pushed. Defaults to "model".
- **repo_revision** (`str`, *optional*) --
  The revision of the repo to which the logs will be pushed. Defaults to "main".
- **repo_private** (`bool`, *optional*) --
  Whether to make the repo private. If `None` (default), the repo will be public unless the organization's default is private. This value is ignored if the repo already exists.
- **path_in_repo** (`str`, *optional*) --
  The path to the folder in the repo where the logs will be pushed. Defaults to "tensorboard/".
- **repo_allow_patterns** (`list[str]` or `str`, *optional*) --
  A list of patterns to include in the upload. Defaults to `"*.tfevents.*"`. Check out the
  [upload guide](https://huggingface.co/docs/huggingface_hub/guides/upload#upload-a-folder) for more details.
- **repo_ignore_patterns** (`list[str]` or `str`, *optional*) --
  A list of patterns to exclude in the upload. Check out the
  [upload guide](https://huggingface.co/docs/huggingface_hub/guides/upload#upload-a-folder) for more details.
- **token** (`str`, *optional*) --
  Authentication token. Will default to the stored token. See https://huggingface.co/settings/token for more
  details
- **kwargs** --
  Additional keyword arguments passed to `SummaryWriter`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Wrapper around the tensorboard's `SummaryWriter` to push training logs to the Hub.

Data is logged locally and then pushed to the Hub asynchronously. Pushing data to the Hub is done in a separate
thread to avoid blocking the training script. In particular, if the upload fails for any reason (e.g. a connection
issue), the main script will not be interrupted. Data is automatically pushed to the Hub every `commit_every`
minutes (default to every 5 minutes).

> [!WARNING]
> `HFSummaryWriter` is experimental. Its API is subject to change in the future without prior notice.



<ExampleCodeBlock anchor="huggingface_hub.HFSummaryWriter.example">

Examples:
```diff
# Taken from https://pytorch.org/docs/stable/tensorboard.html
- from torch.utils.tensorboard import SummaryWriter
+ from huggingface_hub import HFSummaryWriter

import numpy as np

- writer = SummaryWriter()
+ writer = HFSummaryWriter(repo_id="username/my-trained-model")

for n_iter in range(100):
    writer.add_scalar('Loss/train', np.random.random(), n_iter)
    writer.add_scalar('Loss/test', np.random.random(), n_iter)
    writer.add_scalar('Accuracy/train', np.random.random(), n_iter)
    writer.add_scalar('Accuracy/test', np.random.random(), n_iter)
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.HFSummaryWriter.example-2">

```py
>>> from huggingface_hub import HFSummaryWriter

# Logs are automatically pushed every 15 minutes (5 by default) + when exiting the context manager
>>> with HFSummaryWriter(repo_id="test_hf_logger", commit_every=15) as logger:
...     logger.add_scalar("a", 1)
...     logger.add_scalar("b", 2)
```

</ExampleCodeBlock>


</div>

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/package_reference/tensorboard.md" />

### Webhooks server[[webhooks-server]]
https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/webhooks_server.md

# Webhooks server[[webhooks-server]]

Webhooks are a foundation for MLOps-related features. They allow you to listen for new changes on specific repositories, or on all repositories belonging to particular users or organizations you're interested in.
To learn more about webhooks on the Huggingface Hub, read this [guide](https://huggingface.co/docs/hub/webhooks).

> [!TIP]
> Check out this step-by-step [guide](../guides/webhooks_server) on how to set up your webhooks server and deploy it as a Space.

> [!WARNING]
> This is an experimental feature, which means we are still working on improving the API. Breaking changes may be introduced in the future without prior notice. Make sure to pin the version of `huggingface_hub` in your requirements. Note that using an experimental feature triggers a warning. If you want to disable it, set the `HF_HUB_DISABLE_EXPERIMENTAL_WARNING=1` environment variable.

## Server[[server]]

The server is a [Gradio](https://gradio.app/) app. It has a UI to display instructions for you or your users, and an API to listen to webhooks. Implementing a webhook endpoint is as simple as decorating a function. You can debug it before deploying it to a Space by redirecting webhooks to your machine with a Gradio tunnel.

### WebhooksServer[[huggingface_hub.WebhooksServer]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.WebhooksServer</name><anchor>huggingface_hub.WebhooksServer</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_webhooks_server.py#L43</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **ui** (`gradio.Blocks`, optional) --
  A Gradio UI instance to be used as the Space landing page. If `None`, a UI displaying instructions
  about the configured webhooks is created.
- **webhook_secret** (`str`, optional) --
  A secret key to verify incoming webhook requests. You can set this value to any secret you want as long as
  you also configure it in your [webhooks settings panel](https://huggingface.co/settings/webhooks). You
  can also set this value as the `WEBHOOK_SECRET` environment variable. If no secret is provided, the
  webhook endpoints are opened without any security.</paramsdesc><paramgroups>0</paramgroups></docstring>

The [WebhooksServer()](/docs/huggingface_hub/main/ko/package_reference/webhooks_server#huggingface_hub.WebhooksServer) class lets you create an instance of a Gradio app that can receive Huggingface webhooks.
These webhooks can be registered using the `add_webhook()` decorator. Webhook endpoints are added to
the app as a POST endpoint to the FastAPI router. Once all the webhooks are registered, the `launch` method has to be
called to start the app.

It is recommended to accept [WebhookPayload](/docs/huggingface_hub/main/ko/package_reference/webhooks_server#huggingface_hub.WebhookPayload) as the first argument of the webhook function. It is a Pydantic
model that contains all the information about the webhook event. The data will be parsed automatically for you.

Check out the [webhooks guide](../guides/webhooks_server) for a step-by-step tutorial on how to set up your
WebhooksServer and deploy it on a Space.

> [!WARNING]
> `WebhooksServer` is experimental. Its API is subject to change in the future.

> [!WARNING]
> You must have `gradio` installed to use `WebhooksServer` (`pip install --upgrade gradio`).



<ExampleCodeBlock anchor="huggingface_hub.WebhooksServer.example">

Example:

```python
import gradio as gr
from huggingface_hub import WebhooksServer, WebhookPayload

with gr.Blocks() as ui:
    ...

app = WebhooksServer(ui=ui, webhook_secret="my_secret_key")

@app.add_webhook("/say_hello")
async def hello(payload: WebhookPayload):
    return {"message": "hello"}

app.launch()
```

</ExampleCodeBlock>


</div>

### @webhook_endpoint[[huggingface_hub.webhook_endpoint]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.webhook_endpoint</name><anchor>huggingface_hub.webhook_endpoint</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_webhooks_server.py#L226</source><parameters>[{"name": "path", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **path** (`str`, optional) --
  The URL path to register the webhook function. If not provided, the function name will be used as the path.
  In any case, all webhooks are registered under `/webhooks`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Decorator to start a [WebhooksServer()](/docs/huggingface_hub/main/ko/package_reference/webhooks_server#huggingface_hub.WebhooksServer) and register the decorated function as a webhook endpoint.

This is a helper to get started quickly. If you need more flexibility (custom landing page or webhook secret),
you can use [WebhooksServer()](/docs/huggingface_hub/main/ko/package_reference/webhooks_server#huggingface_hub.WebhooksServer) directly. You can register multiple webhook endpoints (to the same server) by using
this decorator multiple times.

Check out the [webhooks guide](../guides/webhooks_server) for a step-by-step tutorial on how to set up your
server and deploy it on a Space.

> [!WARNING]
> `webhook_endpoint` is experimental. Its API is subject to change in the future.

> [!WARNING]
> You must have `gradio` installed to use `webhook_endpoint` (`pip install --upgrade gradio`).



Examples:
The default usage is to register a function as a webhook endpoint. The function name will be used as the path.
The server will be started automatically at exit (i.e. at the end of the script).

<ExampleCodeBlock anchor="huggingface_hub.webhook_endpoint.example">

```python
from huggingface_hub import webhook_endpoint, WebhookPayload

@webhook_endpoint
async def trigger_training(payload: WebhookPayload):
    if payload.repo.type == "dataset" and payload.event.action == "update":
        # Trigger a training job if a dataset is updated
        ...

# Server is automatically started at the end of the script.
```

</ExampleCodeBlock>

Advanced usage: register a function as a webhook endpoint and start the server manually. This is useful if you
are running it in a notebook.

<ExampleCodeBlock anchor="huggingface_hub.webhook_endpoint.example-2">

```python
from huggingface_hub import webhook_endpoint, WebhookPayload

@webhook_endpoint
async def trigger_training(payload: WebhookPayload):
    if payload.repo.type == "dataset" and payload.event.action == "update":
        # Trigger a training job if a dataset is updated
        ...

# Start the server manually
trigger_training.launch()
```

</ExampleCodeBlock>


</div>

## Payload[[huggingface_hub.WebhookPayload]]

[WebhookPayload](/docs/huggingface_hub/main/ko/package_reference/webhooks_server#huggingface_hub.WebhookPayload) is the main data structure that contains the payload of webhooks. It is a `pydantic` class, which makes it very easy to use with FastAPI: if you pass it as a parameter to a webhook endpoint, it is automatically validated and parsed as a Python object.

For more details about webhook payloads, refer to this [guide](https://huggingface.co/docs/hub/webhooks#webhook-payloads).
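
The automatic validation and parsing can be pictured with a small stand-in (a sketch only; the real `WebhookPayload` is a `pydantic` model with the fields documented in this section):

```python
from dataclasses import dataclass

# Stand-in for the pydantic parsing step: the raw JSON payload
# arrives as a dict and becomes a typed object with attribute access.
@dataclass
class EventStub:
    action: str
    scope: str

raw = {"action": "update", "scope": "repo.content"}
event = EventStub(**raw)
assert event.action == "update"
```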


### WebhookPayload[[huggingface_hub.WebhookPayload]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.WebhookPayload</name><anchor>huggingface_hub.WebhookPayload</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_webhooks_payload.py#L130</source><parameters>[{"name": "event", "val": ": WebhookPayloadEvent"}, {"name": "repo", "val": ": WebhookPayloadRepo"}, {"name": "discussion", "val": ": typing.Optional[huggingface_hub._webhooks_payload.WebhookPayloadDiscussion] = None"}, {"name": "comment", "val": ": typing.Optional[huggingface_hub._webhooks_payload.WebhookPayloadComment] = None"}, {"name": "webhook", "val": ": WebhookPayloadWebhook"}, {"name": "movedTo", "val": ": typing.Optional[huggingface_hub._webhooks_payload.WebhookPayloadMovedTo] = None"}, {"name": "updatedRefs", "val": ": typing.Optional[list[huggingface_hub._webhooks_payload.WebhookPayloadUpdatedRef]] = None"}]</parameters></docstring>


</div>

### WebhookPayloadComment[[huggingface_hub.WebhookPayloadComment]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.WebhookPayloadComment</name><anchor>huggingface_hub.WebhookPayloadComment</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_webhooks_payload.py#L95</source><parameters>[{"name": "id", "val": ": str"}, {"name": "author", "val": ": ObjectId"}, {"name": "hidden", "val": ": bool"}, {"name": "content", "val": ": typing.Optional[str] = None"}, {"name": "url", "val": ": WebhookPayloadUrl"}]</parameters></docstring>


</div>

### WebhookPayloadDiscussion[[huggingface_hub.WebhookPayloadDiscussion]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.WebhookPayloadDiscussion</name><anchor>huggingface_hub.WebhookPayloadDiscussion</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_webhooks_payload.py#L102</source><parameters>[{"name": "id", "val": ": str"}, {"name": "num", "val": ": int"}, {"name": "author", "val": ": ObjectId"}, {"name": "url", "val": ": WebhookPayloadUrl"}, {"name": "title", "val": ": str"}, {"name": "isPullRequest", "val": ": bool"}, {"name": "status", "val": ": typing.Literal['closed', 'draft', 'open', 'merged']"}, {"name": "changes", "val": ": typing.Optional[huggingface_hub._webhooks_payload.WebhookPayloadDiscussionChanges] = None"}, {"name": "pinned", "val": ": typing.Optional[bool] = None"}]</parameters></docstring>


</div>

### WebhookPayloadDiscussionChanges[[huggingface_hub.WebhookPayloadDiscussionChanges]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.WebhookPayloadDiscussionChanges</name><anchor>huggingface_hub.WebhookPayloadDiscussionChanges</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_webhooks_payload.py#L90</source><parameters>[{"name": "base", "val": ": str"}, {"name": "mergeCommitId", "val": ": typing.Optional[str] = None"}]</parameters></docstring>


</div>

### WebhookPayloadEvent[[huggingface_hub.WebhookPayloadEvent]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.WebhookPayloadEvent</name><anchor>huggingface_hub.WebhookPayloadEvent</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_webhooks_payload.py#L85</source><parameters>[{"name": "action", "val": ": typing.Literal['create', 'delete', 'move', 'update']"}, {"name": "scope", "val": ": str"}]</parameters></docstring>


</div>

### WebhookPayloadMovedTo[[huggingface_hub.WebhookPayloadMovedTo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.WebhookPayloadMovedTo</name><anchor>huggingface_hub.WebhookPayloadMovedTo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_webhooks_payload.py#L76</source><parameters>[{"name": "name", "val": ": str"}, {"name": "owner", "val": ": ObjectId"}]</parameters></docstring>


</div>

### WebhookPayloadRepo[[huggingface_hub.WebhookPayloadRepo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.WebhookPayloadRepo</name><anchor>huggingface_hub.WebhookPayloadRepo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_webhooks_payload.py#L113</source><parameters>[{"name": "id", "val": ": str"}, {"name": "owner", "val": ": ObjectId"}, {"name": "head_sha", "val": ": typing.Optional[str] = None"}, {"name": "name", "val": ": str"}, {"name": "private", "val": ": bool"}, {"name": "subdomain", "val": ": typing.Optional[str] = None"}, {"name": "tags", "val": ": typing.Optional[list[str]] = None"}, {"name": "type", "val": ": typing.Literal['dataset', 'model', 'space']"}, {"name": "url", "val": ": WebhookPayloadUrl"}]</parameters></docstring>


</div>

### WebhookPayloadUrl[[huggingface_hub.WebhookPayloadUrl]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.WebhookPayloadUrl</name><anchor>huggingface_hub.WebhookPayloadUrl</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_webhooks_payload.py#L71</source><parameters>[{"name": "web", "val": ": str"}, {"name": "api", "val": ": typing.Optional[str] = None"}]</parameters></docstring>


</div>

### WebhookPayloadWebhook[[huggingface_hub.WebhookPayloadWebhook]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.WebhookPayloadWebhook</name><anchor>huggingface_hub.WebhookPayloadWebhook</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_webhooks_payload.py#L81</source><parameters>[{"name": "id", "val": ": str"}, {"name": "version", "val": ": typing.Literal[3]"}]</parameters></docstring>


</div>
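The dataclasses above mirror the JSON body the Hub POSTs to a webhook endpoint. The payload below is a hand-written illustration (field values are hypothetical, not captured from a real event) showing how the nested structure maps onto these classes, using only the standard library:

```python
import json

# Hypothetical webhook payload, shaped like the dataclasses documented above:
# event + repo + webhook are always present; discussion/comment/movedTo are optional.
raw = """
{
  "event": {"action": "update", "scope": "repo.content"},
  "repo": {
    "id": "6397f4a95a57cfd5a331e8b7",
    "name": "user/my-cool-model",
    "type": "model",
    "private": false,
    "owner": {"id": "61d2f90c3c2083e1c08af22d"},
    "url": {"web": "https://huggingface.co/user/my-cool-model"}
  },
  "webhook": {"id": "6390e855e30d9391411571bc", "version": 3}
}
"""

payload = json.loads(raw)
# The Literal types documented above constrain these fields:
assert payload["event"]["action"] in ("create", "delete", "move", "update")
assert payload["repo"]["type"] in ("dataset", "model", "space")
assert payload["webhook"]["version"] == 3  # only version-3 payloads are modeled
```

In a real server you would let `WebhookPayload` (or `@webhook_endpoint`) validate this structure for you instead of inspecting the dict by hand.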

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/package_reference/webhooks_server.md" />

### HfApi Client[[hfapi-client]]
https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/hf_api.md

# HfApi Client[[hfapi-client]]

Below is the documentation for the `HfApi` class, a Python wrapper for the Hugging Face Hub's API.

All methods of `HfApi` can also be accessed directly from the package root. Both approaches are detailed below.

Using the root methods is simpler, but the [HfApi](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi) class gives you more flexibility.
In particular, you can pass a token that will be reused across all HTTP calls.
Unlike `hf auth login` or [login()](/docs/huggingface_hub/main/ko/package_reference/login#huggingface_hub.login), this approach does not persist the token on your machine.
You can also point the client at a different endpoint or configure a custom user agent.

```python
from huggingface_hub import HfApi, list_models

# Use root methods directly.
models = list_models()

# 또는 HfApi client를 구성하세요.
hf_api = HfApi(
    endpoint="https://huggingface.co", # Can be a Private Hub endpoint.
    token="hf_xxx", # Token is not persisted on the machine.
)
models = hf_api.list_models()
```

## HfApi[[huggingface_hub.HfApi]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.HfApi</name><anchor>huggingface_hub.HfApi</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L1647</source><parameters>[{"name": "endpoint", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[str, bool, None] = None"}, {"name": "library_name", "val": ": Optional[str] = None"}, {"name": "library_version", "val": ": Optional[str] = None"}, {"name": "user_agent", "val": ": Union[dict, str, None] = None"}, {"name": "headers", "val": ": Optional[dict[str, str]] = None"}]</parameters><paramsdesc>- **endpoint** (`str`, *optional*) --
  Endpoint of the Hub. Defaults to <https://huggingface.co>.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **library_name** (`str`, *optional*) --
  The name of the library that is making the HTTP request. Will be added to
  the user-agent header. Example: `"transformers"`.
- **library_version** (`str`, *optional*) --
  The version of the library that is making the HTTP request. Will be added
  to the user-agent header. Example: `"4.24.0"`.
- **user_agent** (`str`, `dict`, *optional*) --
  The user agent info in the form of a dictionary or a single string. It will
  be completed with information about the installed packages.
- **headers** (`dict`, *optional*) --
  Additional headers to be sent with each request. Example: `{"X-My-Header": "value"}`.
  Headers passed here are taking precedence over the default headers.</paramsdesc><paramgroups>0</paramgroups></docstring>

Client to interact with the Hugging Face Hub via HTTP.

The client is initialized with some high-level settings used in all requests
made to the Hub (HF endpoint, authentication, user agents...). Using the `HfApi`
client is preferred but not mandatory as all of its public methods are exposed
directly at the root of `huggingface_hub`.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>accept_access_request</name><anchor>huggingface_hub.HfApi.accept_access_request</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L8675</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "user", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The id of the repo to accept access request for.
- **user** (`str`) --
  The username of the user which access request should be accepted.
- **repo_type** (`str`, *optional*) --
  The type of the repo to accept access request for. Must be one of `model`, `dataset` or `space`.
  Defaults to `model`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><raises>- `HfHubHTTPError` -- 
  HTTP 400 if the repo is not gated.
- `HfHubHTTPError` -- 
  HTTP 403 if you only have read-only access to the repo. This can be the case if you don't have `write`
  or `admin` role in the organization the repo belongs to or if you passed a `read` token.
- `HfHubHTTPError` -- 
  HTTP 404 if the user does not exist on the Hub.
- `HfHubHTTPError` -- 
  HTTP 404 if the user access request cannot be found.
- `HfHubHTTPError` -- 
  HTTP 404 if the user access request is already in the accepted list.</raises><raisederrors>`HfHubHTTPError`</raisederrors></docstring>

Accept an access request from a user for a given gated repo.

Once the request is accepted, the user will be able to download any file of the repo and access the community
tab. If the approval mode is automatic, you don't have to accept requests manually. An accepted request can be
cancelled or rejected at any time using [cancel_access_request()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.cancel_access_request) and [reject_access_request()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.reject_access_request).

For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.
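The request lifecycle described above (accepting moves a request out of the pending list, cancelling sends it back) can be sketched as a small transition table. This is an illustration of the documented semantics only, not huggingface_hub code; the allowed transitions are limited to what `accept_access_request()`, `cancel_access_request()` and `reject_access_request()` describe:

```python
# Illustrative sketch of the gated-repo access-request lifecycle.
# "pending", "accepted" and "rejected" mirror the Hub's request lists.
TRANSITIONS = {
    ("pending", "accept"): "accepted",   # user gains download access
    ("accepted", "cancel"): "pending",   # cancel revokes access, back to pending
    ("accepted", "reject"): "rejected",  # accepted requests can be rejected at any time
}

def apply(status: str, action: str) -> str:
    """Return the new status, or raise if the transition is not allowed
    (the Hub answers such calls with an HTTP 4xx error)."""
    try:
        return TRANSITIONS[(status, action)]
    except KeyError:
        raise ValueError(f"cannot {action} a request in status {status!r}") from None

status = apply("pending", "accept")  # the user can now download the repo
status = apply(status, "cancel")     # access revoked, request is pending again
```

For example, accepting a request that is already in the accepted list has no valid transition, which matches the HTTP 404 documented above.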








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>add_collection_item</name><anchor>huggingface_hub.HfApi.add_collection_item</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L8227</source><parameters>[{"name": "collection_slug", "val": ": str"}, {"name": "item_id", "val": ": str"}, {"name": "item_type", "val": ": CollectionItemType_T"}, {"name": "note", "val": ": Optional[str] = None"}, {"name": "exists_ok", "val": ": bool = False"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **collection_slug** (`str`) --
  Slug of the collection to update. Example: `"TheBloke/recent-models-64f9a55bb3115b4f513ec026"`.
- **item_id** (`str`) --
  ID of the item to add to the collection. It can be the ID of a repo on the Hub (e.g. `"facebook/bart-large-mnli"`)
  or a paper id (e.g. `"2307.09288"`).
- **item_type** (`str`) --
  Type of the item to add. Can be one of `"model"`, `"dataset"`, `"space"` or `"paper"`.
- **note** (`str`, *optional*) --
  A note to attach to the item in the collection. The maximum size for a note is 500 characters.
- **exists_ok** (`bool`, *optional*) --
  If `True`, do not raise an error if item already exists.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><raises>- `HfHubHTTPError` -- 
  HTTP 403 if you only have read-only access to the repo. This can be the case if you don't have `write`
  or `admin` role in the organization the repo belongs to or if you passed a `read` token.
- `HfHubHTTPError` -- 
  HTTP 404 if the item you try to add to the collection does not exist on the Hub.
- `HfHubHTTPError` -- 
  HTTP 409 if the item you try to add to the collection is already in the collection (and exists_ok=False)</raises><raisederrors>`HfHubHTTPError`</raisederrors></docstring>
Add an item to a collection on the Hub.



Returns: [Collection](/docs/huggingface_hub/main/ko/package_reference/collections#huggingface_hub.Collection)





<ExampleCodeBlock anchor="huggingface_hub.HfApi.add_collection_item.example">

Example:

```py
>>> from huggingface_hub import add_collection_item
>>> collection = add_collection_item(
...     collection_slug="davanstrien/climate-64f99dc2a5067f6b65531bab",
...     item_id="pierre-loic/climate-news-articles",
...     item_type="dataset"
... )
>>> collection.items[-1].item_id
"pierre-loic/climate-news-articles"
# ^ the item was added at the last position of the collection

# Add item with a note
>>> add_collection_item(
...     collection_slug="davanstrien/climate-64f99dc2a5067f6b65531bab",
...     item_id="datasets/climate_fever",
...     item_type="dataset",
...     note="This dataset adopts the FEVER methodology that consists of 1,535 real-world claims regarding climate-change collected on the internet."
... )
(...)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>add_space_secret</name><anchor>huggingface_hub.HfApi.add_space_secret</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6695</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "key", "val": ": str"}, {"name": "value", "val": ": str"}, {"name": "description", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  ID of the repo to update. Example: `"bigcode/in-the-stack"`.
- **key** (`str`) --
  Secret key. Example: `"GITHUB_API_KEY"`
- **value** (`str`) --
  Secret value. Example: `"your_github_api_key"`.
- **description** (`str`, *optional*) --
  Secret description. Example: `"Github API key to access the Github API"`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Adds or updates a secret in a Space.

Secrets let you set secret keys or tokens on a Space without hardcoding them.
For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>add_space_variable</name><anchor>huggingface_hub.HfApi.add_space_variable</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6784</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "key", "val": ": str"}, {"name": "value", "val": ": str"}, {"name": "description", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  ID of the repo to update. Example: `"bigcode/in-the-stack"`.
- **key** (`str`) --
  Variable key. Example: `"MODEL_REPO_ID"`
- **value** (`str`) --
  Variable value. Example: `"the_model_repo_id"`.
- **description** (`str`) --
  Description of the variable. Example: `"Model Repo ID of the implemented model"`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Adds or updates a variable in a Space.

Variables let you set environment variables on a Space without hardcoding them.
For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets-and-environment-variables




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>auth_check</name><anchor>huggingface_hub.HfApi.auth_check</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L9705</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The repository to check for access. Format should be `"user/repo_name"`.
  Example: `"user/my-cool-model"`.

- **repo_type** (`str`, *optional*) --
  The type of the repository. Should be one of `"model"`, `"dataset"`, or `"space"`.
  If not specified, the default is `"model"`.

- **token** `(Union[bool, str, None]`, *optional*) --
  A valid user access token. If not provided, the locally saved token will be used, which is the
  recommended authentication method. Set to `False` to disable authentication.
  Refer to: https://huggingface.co/docs/huggingface_hub/quick-start#authentication.</paramsdesc><paramgroups>0</paramgroups><raises>- [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  Raised if the repository does not exist, is private, or the user does not have access. This can
  occur if the `repo_id` or `repo_type` is incorrect or if the repository is private but the user
  is not authenticated.

- [GatedRepoError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.GatedRepoError) -- 
  Raised if the repository exists but is gated and the user is not authorized to access it.</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) or [GatedRepoError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.GatedRepoError)</raisederrors></docstring>

Check if the provided user token has access to a specific repository on the Hugging Face Hub.

This method verifies whether the user, authenticated via the provided token, has access to the specified
repository. If the repository is not found or if the user lacks the required permissions to access it,
the method raises an appropriate exception.







Example:
<ExampleCodeBlock anchor="huggingface_hub.HfApi.auth_check.example">

Check if the user has access to a repository:

```python
>>> from huggingface_hub import auth_check
>>> from huggingface_hub.utils import GatedRepoError, RepositoryNotFoundError

try:
    auth_check("user/my-cool-model")
except GatedRepoError:
    # Handle gated repository error
    print("You do not have permission to access this gated repository.")
except RepositoryNotFoundError:
    # Handle repository not found error
    print("The repository was not found or you do not have access.")
```

</ExampleCodeBlock>

In this example:
- If the user has access, the method completes successfully.
- If the repository is gated or does not exist, appropriate exceptions are raised, allowing the user
to handle them accordingly.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>cancel_access_request</name><anchor>huggingface_hub.HfApi.cancel_access_request</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L8635</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "user", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The id of the repo to cancel access request for.
- **user** (`str`) --
  The username of the user which access request should be cancelled.
- **repo_type** (`str`, *optional*) --
  The type of the repo to cancel access request for. Must be one of `model`, `dataset` or `space`.
  Defaults to `model`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><raises>- `HfHubHTTPError` -- 
  HTTP 400 if the repo is not gated.
- `HfHubHTTPError` -- 
  HTTP 403 if you only have read-only access to the repo. This can be the case if you don't have `write`
  or `admin` role in the organization the repo belongs to or if you passed a `read` token.
- `HfHubHTTPError` -- 
  HTTP 404 if the user does not exist on the Hub.
- `HfHubHTTPError` -- 
  HTTP 404 if the user access request cannot be found.
- `HfHubHTTPError` -- 
  HTTP 404 if the user access request is already in the pending list.</raises><raisederrors>`HfHubHTTPError`</raisederrors></docstring>

Cancel an access request from a user for a given gated repo.

A cancelled request will go back to the pending list and the user will lose access to the repo.

For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>cancel_job</name><anchor>huggingface_hub.HfApi.cancel_job</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L10026</source><parameters>[{"name": "job_id", "val": ": str"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **job_id** (`str`) --
  ID of the Job.

- **namespace** (`str`, *optional*) --
  The namespace where the Job is running. Defaults to the current user's namespace.

- **token** `(Union[bool, str, None]`, *optional*) --
  A valid user access token. If not provided, the locally saved token will be used, which is the
  recommended authentication method. Set to `False` to disable authentication.
  Refer to: https://huggingface.co/docs/huggingface_hub/quick-start#authentication.</paramsdesc><paramgroups>0</paramgroups></docstring>

Cancel a compute Job on Hugging Face infrastructure.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>change_discussion_status</name><anchor>huggingface_hub.HfApi.change_discussion_status</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6450</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "discussion_num", "val": ": int"}, {"name": "new_status", "val": ": Literal['open', 'closed']"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "comment", "val": ": Optional[str] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **discussion_num** (`int`) --
  The number of the Discussion or Pull Request. Must be a strictly positive integer.
- **new_status** (`str`) --
  The new status for the discussion, either `"open"` or `"closed"`.
- **comment** (`str`, *optional*) --
  An optional comment to post with the status change.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if uploading to a dataset or
  space, `None` or `"model"` if uploading to a model. Default is
  `None`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[DiscussionStatusChange](/docs/huggingface_hub/main/ko/package_reference/community#huggingface_hub.DiscussionStatusChange)</rettype><retdesc>the status change event</retdesc></docstring>
Closes or re-opens a Discussion or Pull Request.







<ExampleCodeBlock anchor="huggingface_hub.HfApi.change_discussion_status.example">

Examples:
```python
>>> HfApi().change_discussion_status(
...     repo_id="username/repo_name",
...     discussion_num=34,
...     new_status="closed"
... )
# DiscussionStatusChange(id='deadbeef0000000', type='status-change', ...)

```

</ExampleCodeBlock>

> [!TIP]
> Raises the following errors:
>
>     - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError)
>       if the HuggingFace API returned an error
>     - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
>       if some parameter value is invalid
>     - [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)
>       If the repository to download from cannot be found. This may be because it doesn't exist,
>       or because it is set to `private` and you do not have access.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>comment_discussion</name><anchor>huggingface_hub.HfApi.comment_discussion</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6307</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "discussion_num", "val": ": int"}, {"name": "comment", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **discussion_num** (`int`) --
  The number of the Discussion or Pull Request. Must be a strictly positive integer.
- **comment** (`str`) --
  The content of the comment to create. Comments support markdown formatting.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if uploading to a dataset or
  space, `None` or `"model"` if uploading to a model. Default is
  `None`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[DiscussionComment](/docs/huggingface_hub/main/ko/package_reference/community#huggingface_hub.DiscussionComment)</rettype><retdesc>the newly created comment</retdesc></docstring>
Creates a new comment on the given Discussion.







<ExampleCodeBlock anchor="huggingface_hub.HfApi.comment_discussion.example">

Examples:
```python

>>> comment = """
... Hello @otheruser!
...
... # This is a title
...
... **This is bold**, *this is italic* and ~this is strikethrough~
... And [this](http://url) is a link
... """

>>> HfApi().comment_discussion(
...     repo_id="username/repo_name",
...     discussion_num=34,
...     comment=comment
... )
# DiscussionComment(id='deadbeef0000000', type='comment', ...)

```

</ExampleCodeBlock>

> [!TIP]
> Raises the following errors:
>
>     - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError)
>       if the HuggingFace API returned an error
>     - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
>       if some parameter value is invalid
>     - [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)
>       If the repository to download from cannot be found. This may be because it doesn't exist,
>       or because it is set to `private` and you do not have access.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_branch</name><anchor>huggingface_hub.HfApi.create_branch</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L5657</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "branch", "val": ": str"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "exist_ok", "val": ": bool = False"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The repository in which the branch will be created.
  Example: `"user/my-cool-model"`.

- **branch** (`str`) --
  The name of the branch to create.

- **revision** (`str`, *optional*) --
  The git revision to create the branch from. It can be a branch name or
  the OID/SHA of a commit, as a hexadecimal string. Defaults to the head
  of the `"main"` branch.

- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.

- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if creating a branch on a dataset or
  space, `None` or `"model"` if tagging a model. Default is `None`.

- **exist_ok** (`bool`, *optional*, defaults to `False`) --
  If `True`, do not raise an error if branch already exists.</paramsdesc><paramgroups>0</paramgroups><raises>- [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  If repository is not found (error 404): wrong repo_id/repo_type, private
  but not authenticated or repo does not exist.
- [BadRequestError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.BadRequestError) -- 
  If invalid reference for a branch. Ex: `refs/pr/5` or `refs/foo/bar`.
- [HfHubHTTPError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.HfHubHTTPError) -- 
  If the branch already exists on the repo (error 409) and `exist_ok` is
  set to `False`.</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) or [BadRequestError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.BadRequestError) or [HfHubHTTPError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.HfHubHTTPError)</raisederrors></docstring>

Create a new branch for a repo on the Hub, starting from the specified revision (defaults to `main`).
To find a revision suiting your needs, you can use [list_repo_refs()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.list_repo_refs) or [list_repo_commits()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.list_repo_commits).








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_collection</name><anchor>huggingface_hub.HfApi.create_collection</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L8053</source><parameters>[{"name": "title", "val": ": str"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "description", "val": ": Optional[str] = None"}, {"name": "private", "val": ": bool = False"}, {"name": "exists_ok", "val": ": bool = False"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **title** (`str`) --
  Title of the collection to create. Example: `"Recent models"`.
- **namespace** (`str`, *optional*) --
  Namespace of the collection to create (username or org). Will default to the owner name.
- **description** (`str`, *optional*) --
  Description of the collection to create.
- **private** (`bool`, *optional*) --
  Whether the collection should be private or not. Defaults to `False` (i.e. public collection).
- **exists_ok** (`bool`, *optional*) --
  If `True`, do not raise an error if collection already exists.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Create a new Collection on the Hub.



Returns: [Collection](/docs/huggingface_hub/main/ko/package_reference/collections#huggingface_hub.Collection)

<ExampleCodeBlock anchor="huggingface_hub.HfApi.create_collection.example">

Example:

```py
>>> from huggingface_hub import create_collection
>>> collection = create_collection(
...     title="ICCV 2023",
...     description="Portfolio of models, papers and demos I presented at ICCV 2023",
... )
>>> collection.slug
"username/iccv-2023-64f9a55bb3115b4f513ec026"
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_commit</name><anchor>huggingface_hub.HfApi.create_commit</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L3885</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "operations", "val": ": Iterable[CommitOperation]"}, {"name": "commit_message", "val": ": str"}, {"name": "commit_description", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[str, bool, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "create_pr", "val": ": Optional[bool] = None"}, {"name": "num_threads", "val": ": int = 5"}, {"name": "parent_commit", "val": ": Optional[str] = None"}, {"name": "run_as_future", "val": ": bool = False"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The repository in which the commit will be created, for example:
  `"username/custom_transformers"`

- **operations** (`Iterable` of `CommitOperation()`) --
  An iterable of operations to include in the commit, either:

  - [CommitOperationAdd](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.CommitOperationAdd) to upload a file
  - [CommitOperationDelete](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.CommitOperationDelete) to delete a file
  - [CommitOperationCopy](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.CommitOperationCopy) to copy a file

  Operation objects will be mutated to include information relative to the upload. Do not reuse the
  same objects for multiple commits.

- **commit_message** (`str`) --
  The summary (first line) of the commit that will be created.

- **commit_description** (`str`, *optional*) --
  The description of the commit that will be created.

- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.

- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if uploading to a dataset or
  space, `None` or `"model"` if uploading to a model. Default is
  `None`.

- **revision** (`str`, *optional*) --
  The git revision to commit from. Defaults to the head of the `"main"` branch.

- **create_pr** (`bool`, *optional*) --
  Whether or not to create a Pull Request with that commit. Defaults to `False`.
  If `revision` is not set, PR is opened against the `"main"` branch. If
  `revision` is set and is a branch, PR is opened against this branch. If
  `revision` is set and is not a branch name (example: a commit oid), a
  `RevisionNotFoundError` is returned by the server.

- **num_threads** (`int`, *optional*) --
  Number of concurrent threads for uploading files. Defaults to 5.
  Setting it to 2 means at most 2 files will be uploaded concurrently.

- **parent_commit** (`str`, *optional*) --
  The OID / SHA of the parent commit, as a hexadecimal string.
  Shorthands (7 first characters) are also supported. If specified and `create_pr` is `False`,
  the commit will fail if `revision` does not point to `parent_commit`. If specified and `create_pr`
  is `True`, the pull request will be created from `parent_commit`. Specifying `parent_commit`
  ensures the repo has not changed before committing the changes, and can be especially useful
  if the repo is updated / committed to concurrently.
- **run_as_future** (`bool`, *optional*) --
  Whether or not to run this method in the background. Background jobs are run sequentially without
  blocking the main thread. Passing `run_as_future=True` will return a [Future](https://docs.python.org/3/library/concurrent.futures.html#future-objects)
  object. Defaults to `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[CommitInfo](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.CommitInfo) or `Future`</rettype><retdesc>Instance of [CommitInfo](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.CommitInfo) containing information about the newly created commit (commit hash, commit
url, pr url, commit message,...). If `run_as_future=True` is passed, returns a Future object which will
contain the result when executed.</retdesc><raises>- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If commit message is empty.
- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If parent commit is not a valid commit OID.
- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If a README.md file with an invalid metadata section is committed. In this case, the commit will fail
  early, before trying to upload any file.
- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If `create_pr` is `True` and revision is neither `None` nor `"main"`.
- [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  If repository is not found (error 404): wrong repo_id/repo_type, private
  but not authenticated or repo does not exist.</raises><raisederrors>``ValueError`` or [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)</raisederrors></docstring>

Creates a commit in the given repo, deleting & uploading files as needed.

> [!WARNING]
> The input list of `CommitOperation` will be mutated during the commit process. Do not reuse the same objects
> for multiple commits.

> [!WARNING]
> `create_commit` assumes that the repo already exists on the Hub. If you get a
> Client error 404, please make sure you are authenticated and that `repo_id` and
> `repo_type` are set correctly. If repo does not exist, create it first using
> [create_repo()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_repo).

> [!WARNING]
> `create_commit` is limited to 25k LFS files and a 1GB payload for regular files.
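
Example (a minimal sketch; the `repo_id` and file paths are placeholders):

```py
>>> from huggingface_hub import HfApi, CommitOperationAdd, CommitOperationDelete
>>> api = HfApi()
>>> operations = [
...     CommitOperationAdd(path_in_repo="config.json", path_or_fileobj="./config.json"),
...     CommitOperationDelete(path_in_repo="old_weights.bin"),
... ]
>>> commit_info = api.create_commit(
...     repo_id="username/my-model",
...     operations=operations,
...     commit_message="Add config, remove old weights",
... )
```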












</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_discussion</name><anchor>huggingface_hub.HfApi.create_discussion</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6134</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "title", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "description", "val": ": Optional[str] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "pull_request", "val": ": bool = False"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **title** (`str`) --
  The title of the discussion. It can be up to 200 characters long,
  and must be at least 3 characters long. Leading and trailing whitespaces
  will be stripped.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **description** (`str`, *optional*) --
  An optional description for the Pull Request.
  Defaults to `"Discussion opened with the huggingface_hub Python library"`
- **pull_request** (`bool`, *optional*) --
  Whether to create a Pull Request or discussion. If `True`, creates a Pull Request.
  If `False`, creates a discussion. Defaults to `False`.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if uploading to a dataset or
  space, `None` or `"model"` if uploading to a model. Default is
  `None`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Creates a Discussion or Pull Request.

Pull Requests created programmatically will be in `"draft"` status.

Creating a Pull Request with changes can also be done at once with [HfApi.create_commit()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_commit).



Returns: [DiscussionWithDetails](/docs/huggingface_hub/main/ko/package_reference/community#huggingface_hub.DiscussionWithDetails)

> [!TIP]
> Raises the following errors:
>
>     - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError)
>       if the HuggingFace API returned an error
>     - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
>       if some parameter value is invalid
>     - [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)
>       If the repository to download from cannot be found. This may be because it doesn't exist,
>       or because it is set to `private` and you do not have access.
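
Example (a minimal sketch; the `repo_id` and texts are placeholders):

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> discussion = api.create_discussion(
...     repo_id="username/my-model",
...     title="Question about the training data",
...     description="Which dataset was this model fine-tuned on?",
... )
>>> discussion.is_pull_request
False
```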

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_inference_endpoint</name><anchor>huggingface_hub.HfApi.create_inference_endpoint</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L7305</source><parameters>[{"name": "name", "val": ": str"}, {"name": "repository", "val": ": str"}, {"name": "framework", "val": ": str"}, {"name": "accelerator", "val": ": str"}, {"name": "instance_size", "val": ": str"}, {"name": "instance_type", "val": ": str"}, {"name": "region", "val": ": str"}, {"name": "vendor", "val": ": str"}, {"name": "account_id", "val": ": Optional[str] = None"}, {"name": "min_replica", "val": ": int = 1"}, {"name": "max_replica", "val": ": int = 1"}, {"name": "scale_to_zero_timeout", "val": ": Optional[int] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "task", "val": ": Optional[str] = None"}, {"name": "custom_image", "val": ": Optional[dict] = None"}, {"name": "env", "val": ": Optional[dict[str, str]] = None"}, {"name": "secrets", "val": ": Optional[dict[str, str]] = None"}, {"name": "type", "val": ": InferenceEndpointType = <InferenceEndpointType.PROTECTED: 'protected'>"}, {"name": "domain", "val": ": Optional[str] = None"}, {"name": "path", "val": ": Optional[str] = None"}, {"name": "cache_http_responses", "val": ": Optional[bool] = None"}, {"name": "tags", "val": ": Optional[list[str]] = None"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **name** (`str`) --
  The unique name for the new Inference Endpoint.
- **repository** (`str`) --
  The name of the model repository associated with the Inference Endpoint (e.g. `"gpt2"`).
- **framework** (`str`) --
  The machine learning framework used for the model (e.g. `"custom"`).
- **accelerator** (`str`) --
  The hardware accelerator to be used for inference (e.g. `"cpu"`).
- **instance_size** (`str`) --
  The size or type of the instance to be used for hosting the model (e.g. `"x4"`).
- **instance_type** (`str`) --
  The cloud instance type where the Inference Endpoint will be deployed (e.g. `"intel-icl"`).
- **region** (`str`) --
  The cloud region in which the Inference Endpoint will be created (e.g. `"us-east-1"`).
- **vendor** (`str`) --
  The cloud provider or vendor where the Inference Endpoint will be hosted (e.g. `"aws"`).
- **account_id** (`str`, *optional*) --
  The account ID used to link a VPC to a private Inference Endpoint (if applicable).
- **min_replica** (`int`, *optional*) --
  The minimum number of replicas (instances) to keep running for the Inference Endpoint. To enable
  scaling to zero, set this value to 0 and adjust `scale_to_zero_timeout` accordingly. Defaults to 1.
- **max_replica** (`int`, *optional*) --
  The maximum number of replicas (instances) to scale to for the Inference Endpoint. Defaults to 1.
- **scale_to_zero_timeout** (`int`, *optional*) --
  The duration in minutes before an inactive endpoint is scaled to zero. If set to `None` while
  `min_replica` is not 0, scaling to zero is disabled. Defaults to `None`.
- **revision** (`str`, *optional*) --
  The specific model revision to deploy on the Inference Endpoint (e.g. `"6c0e6080953db56375760c0471a8c5f2929baf11"`).
- **task** (`str`, *optional*) --
  The task on which to deploy the model (e.g. `"text-classification"`).
- **custom_image** (`dict`, *optional*) --
  A custom Docker image to use for the Inference Endpoint. This is useful if you want to deploy an
  Inference Endpoint running on the `text-generation-inference` (TGI) framework (see examples).
- **env** (`dict[str, str]`, *optional*) --
  Non-secret environment variables to inject in the container environment.
- **secrets** (`dict[str, str]`, *optional*) --
  Secret values to inject in the container environment.
- **type** (`InferenceEndpointType`, *optional*) --
  The type of the Inference Endpoint, which can be `"protected"` (default), `"public"` or `"private"`.
- **domain** (`str`, *optional*) --
  The custom domain for the Inference Endpoint deployment. If set, the Inference Endpoint will be available at this domain (e.g. `"my-new-domain.cool-website.woof"`).
- **path** (`str`, *optional*) --
  The custom path to the deployed model, should start with a `/` (e.g. `"/models/google-bert/bert-base-uncased"`).
- **cache_http_responses** (`bool`, *optional*) --
  Whether to cache HTTP responses from the Inference Endpoint. Defaults to `False`.
- **tags** (`list[str]`, *optional*) --
  A list of tags to associate with the Inference Endpoint.
- **namespace** (`str`, *optional*) --
  The namespace where the Inference Endpoint will be created. Defaults to the current user's namespace.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[InferenceEndpoint](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint)</rettype><retdesc>information about the updated Inference Endpoint.</retdesc></docstring>
Create a new Inference Endpoint.







<ExampleCodeBlock anchor="huggingface_hub.HfApi.create_inference_endpoint.example">

Example:
```python
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> endpoint = api.create_inference_endpoint(
...     "my-endpoint-name",
...     repository="gpt2",
...     framework="pytorch",
...     task="text-generation",
...     accelerator="cpu",
...     vendor="aws",
...     region="us-east-1",
...     type="protected",
...     instance_size="x2",
...     instance_type="intel-icl",
... )
>>> endpoint
InferenceEndpoint(name='my-endpoint-name', status="pending",...)

# Run inference on the endpoint
>>> endpoint.client.text_generation(...)
"..."
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.HfApi.create_inference_endpoint.example-2">

```python
# Start an Inference Endpoint running Zephyr-7b-beta on TGI
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> endpoint = api.create_inference_endpoint(
...     "aws-zephyr-7b-beta-0486",
...     repository="HuggingFaceH4/zephyr-7b-beta",
...     framework="pytorch",
...     task="text-generation",
...     accelerator="gpu",
...     vendor="aws",
...     region="us-east-1",
...     type="protected",
...     instance_size="x1",
...     instance_type="nvidia-a10g",
...     env={
...           "MAX_BATCH_PREFILL_TOKENS": "2048",
...           "MAX_INPUT_LENGTH": "1024",
...           "MAX_TOTAL_TOKENS": "1512",
...           "MODEL_ID": "/repository"
...         },
...     custom_image={
...         "health_route": "/health",
...         "url": "ghcr.io/huggingface/text-generation-inference:1.1.0",
...     },
...     secrets={"MY_SECRET_KEY": "secret_value"},
...     tags=["dev", "text-generation"],
... )
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.HfApi.create_inference_endpoint.example-3">

```python
# Start an Inference Endpoint running ProsusAI/finbert while scaling to zero in 15 minutes
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> endpoint = api.create_inference_endpoint(
...     "finbert-classifier",
...     repository="ProsusAI/finbert",
...     framework="pytorch",
...     task="text-classification",
...     min_replica=0,
...     scale_to_zero_timeout=15,
...     accelerator="cpu",
...     vendor="aws",
...     region="us-east-1",
...     type="protected",
...     instance_size="x2",
...     instance_type="intel-icl",
... )
>>> endpoint.wait(timeout=300)
# Run inference on the endpoint
>>> endpoint.client.text_generation(...)
TextClassificationOutputElement(label='positive', score=0.8983615040779114)
```

</ExampleCodeBlock>



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_inference_endpoint_from_catalog</name><anchor>huggingface_hub.HfApi.create_inference_endpoint_from_catalog</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L7534</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "name", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "namespace", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The ID of the model in the catalog to deploy as an Inference Endpoint.
- **name** (`str`, *optional*) --
  The unique name for the new Inference Endpoint. If not provided, a random name will be generated.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
- **namespace** (`str`, *optional*) --
  The namespace where the Inference Endpoint will be created. Defaults to the current user's namespace.</paramsdesc><paramgroups>0</paramgroups><rettype>[InferenceEndpoint](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint)</rettype><retdesc>information about the new Inference Endpoint.</retdesc></docstring>
Create a new Inference Endpoint from a model in the Hugging Face Inference Catalog.

The goal of the Inference Catalog is to provide a curated list of models that are optimized for inference
and for which default configurations have been tested. See https://endpoints.huggingface.co/catalog for a list
of available models in the catalog.







> [!WARNING]
> `create_inference_endpoint_from_catalog` is experimental. Its API is subject to change in the future. Please provide feedback
> if you have any suggestions or requests.
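
Example (a minimal sketch; the catalog model ID and endpoint name are placeholders, and the model must actually be listed in the catalog):

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> endpoint = api.create_inference_endpoint_from_catalog(
...     "meta-llama/Llama-3.1-8B-Instruct",  # hypothetical catalog entry
...     name="my-llama-endpoint",
... )
>>> endpoint.wait()  # block until the endpoint is deployed
```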


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_pull_request</name><anchor>huggingface_hub.HfApi.create_pull_request</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6223</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "title", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "description", "val": ": Optional[str] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **title** (`str`) --
  The title of the discussion. It can be up to 200 characters long,
  and must be at least 3 characters long. Leading and trailing whitespaces
  will be stripped.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **description** (`str`, *optional*) --
  An optional description for the Pull Request.
  Defaults to `"Discussion opened with the huggingface_hub Python library"`
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if uploading to a dataset or
  space, `None` or `"model"` if uploading to a model. Default is
  `None`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Creates a Pull Request. Pull Requests created programmatically will be in `"draft"` status.

Creating a Pull Request with changes can also be done at once with [HfApi.create_commit()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_commit).

This is a wrapper around [HfApi.create_discussion()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_discussion).



Returns: [DiscussionWithDetails](/docs/huggingface_hub/main/ko/package_reference/community#huggingface_hub.DiscussionWithDetails)

> [!TIP]
> Raises the following errors:
>
>     - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError)
>       if the HuggingFace API returned an error
>     - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
>       if some parameter value is invalid
>     - [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)
>       If the repository to download from cannot be found. This may be because it doesn't exist,
>       or because it is set to `private` and you do not have access.
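
Example (a minimal sketch; the `repo_id` and title are placeholders):

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> pr = api.create_pull_request(
...     repo_id="username/my-model",
...     title="Fix typo in model card",
... )
>>> pr.is_pull_request
True
```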

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_repo</name><anchor>huggingface_hub.HfApi.create_repo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L3520</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "token", "val": ": Union[str, bool, None] = None"}, {"name": "private", "val": ": Optional[bool] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "exist_ok", "val": ": bool = False"}, {"name": "resource_group_id", "val": ": Optional[str] = None"}, {"name": "space_sdk", "val": ": Optional[str] = None"}, {"name": "space_hardware", "val": ": Optional[SpaceHardware] = None"}, {"name": "space_storage", "val": ": Optional[SpaceStorage] = None"}, {"name": "space_sleep_time", "val": ": Optional[int] = None"}, {"name": "space_secrets", "val": ": Optional[list[dict[str, str]]] = None"}, {"name": "space_variables", "val": ": Optional[list[dict[str, str]]] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **private** (`bool`, *optional*) --
  Whether to make the repo private. If `None` (default), the repo will be public unless the organization's default is private. This value is ignored if the repo already exists.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if uploading to a dataset or
  space, `None` or `"model"` if uploading to a model. Default is
  `None`.
- **exist_ok** (`bool`, *optional*, defaults to `False`) --
  If `True`, do not raise an error if repo already exists.
- **resource_group_id** (`str`, *optional*) --
  Resource group in which to create the repo. Resource groups are only available for Enterprise Hub organizations and
  allow you to define which members of the organization can access the resource. The ID of a resource group
  can be found in the URL of the resource's page on the Hub (e.g. `"66670e5163145ca562cb1988"`).
  To learn more about resource groups, see https://huggingface.co/docs/hub/en/security-resource-groups.
- **space_sdk** (`str`, *optional*) --
  Choice of SDK to use if repo_type is "space". Can be "streamlit", "gradio", "docker", or "static".
- **space_hardware** (`SpaceHardware` or `str`, *optional*) --
  Choice of Hardware if repo_type is "space". See [SpaceHardware](/docs/huggingface_hub/main/ko/package_reference/space_runtime#huggingface_hub.SpaceHardware) for a complete list.
- **space_storage** (`SpaceStorage` or `str`, *optional*) --
  Choice of persistent storage tier. Example: `"small"`. See [SpaceStorage](/docs/huggingface_hub/main/ko/package_reference/space_runtime#huggingface_hub.SpaceStorage) for a complete list.
- **space_sleep_time** (`int`, *optional*) --
  Number of seconds of inactivity to wait before a Space is put to sleep. Set to `-1` if you don't want
  your Space to sleep (default behavior for upgraded hardware). For free hardware, you can't configure
  the sleep time (value is fixed to 48 hours of inactivity).
  See https://huggingface.co/docs/hub/spaces-gpus#sleep-time for more details.
- **space_secrets** (`list[dict[str, str]]`, *optional*) --
  A list of secret keys to set in your Space. Each item is in the form `{"key": ..., "value": ..., "description": ...}` where description is optional.
  For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets.
- **space_variables** (`list[dict[str, str]]`, *optional*) --
  A list of public environment variables to set in your Space. Each item is in the form `{"key": ..., "value": ..., "description": ...}` where description is optional.
  For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets-and-environment-variables.</paramsdesc><paramgroups>0</paramgroups><rettype>[RepoUrl](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.RepoUrl)</rettype><retdesc>URL to the newly created repo. Value is a subclass of `str` containing
attributes like `endpoint`, `repo_type` and `repo_id`.</retdesc></docstring>
Create an empty repo on the Hugging Face Hub.
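
Example (a minimal sketch; the `repo_id` values are placeholders):

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
# Create a public model repo
>>> api.create_repo(repo_id="username/my-model")
# Create a private dataset repo, ignoring the error if it already exists
>>> api.create_repo(repo_id="username/my-dataset", repo_type="dataset", private=True, exist_ok=True)
```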








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_scheduled_job</name><anchor>huggingface_hub.HfApi.create_scheduled_job</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L10170</source><parameters>[{"name": "image", "val": ": str"}, {"name": "command", "val": ": list[str]"}, {"name": "schedule", "val": ": str"}, {"name": "suspend", "val": ": Optional[bool] = None"}, {"name": "concurrency", "val": ": Optional[bool] = None"}, {"name": "env", "val": ": Optional[dict[str, Any]] = None"}, {"name": "secrets", "val": ": Optional[dict[str, Any]] = None"}, {"name": "flavor", "val": ": Optional[SpaceHardware] = None"}, {"name": "timeout", "val": ": Optional[Union[int, float, str]] = None"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **image** (`str`) --
  The Docker image to use.
  Examples: `"ubuntu"`, `"python:3.12"`, `"pytorch/pytorch:2.6.0-cuda12.4-cudnn9-devel"`.
  Example with an image from a Space: `"hf.co/spaces/lhoestq/duckdb"`.

- **command** (`list[str]`) --
  The command to run. Example: `["echo", "hello"]`.

- **schedule** (`str`) --
  One of "@annually", "@yearly", "@monthly", "@weekly", "@daily", "@hourly", or a
  CRON schedule expression (e.g., '0 9 * * 1' for 9 AM every Monday).

- **suspend** (`bool`, *optional*) --
  If True, the scheduled Job is suspended (paused). Defaults to False.

- **concurrency** (`bool`, *optional*) --
  If True, multiple instances of this Job can run concurrently. Defaults to False.

- **env** (`dict[str, Any]`, *optional*) --
  Defines the environment variables for the Job.

- **secrets** (`dict[str, Any]`, *optional*) --
  Defines the secret environment variables for the Job.

- **flavor** (`str`, *optional*) --
  Flavor for the hardware, as in Hugging Face Spaces. See [SpaceHardware](/docs/huggingface_hub/main/ko/package_reference/space_runtime#huggingface_hub.SpaceHardware) for possible values.
  Defaults to `"cpu-basic"`.

- **timeout** (`Union[int, float, str]`, *optional*) --
  Max duration for the Job: int/float with s (seconds, default), m (minutes), h (hours) or d (days).
  Example: `300` or `"5m"` for 5 minutes.

- **namespace** (`str`, *optional*) --
  The namespace where the Job will be created. Defaults to the current user's namespace.

- **token** (`Union[bool, str, None]`, *optional*) --
  A valid user access token. If not provided, the locally saved token will be used, which is the
  recommended authentication method. Set to `False` to disable authentication.
  Refer to: https://huggingface.co/docs/huggingface_hub/quick-start#authentication.</paramsdesc><paramgroups>0</paramgroups></docstring>

Create scheduled compute Jobs on Hugging Face infrastructure.



Example:
<ExampleCodeBlock anchor="huggingface_hub.HfApi.create_scheduled_job.example">

Create your first scheduled Job:

```python
>>> from huggingface_hub import create_scheduled_job
>>> create_scheduled_job(image="python:3.12", command=["python", "-c", "print('Hello from HF compute!')"], schedule="@hourly")
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.HfApi.create_scheduled_job.example-2">

Use a CRON schedule expression:

```python
>>> from huggingface_hub import create_scheduled_job
>>> create_scheduled_job(image="python:3.12", command=["python", "-c", "print('this runs every 5min')"], schedule="*/5 * * * *")
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.HfApi.create_scheduled_job.example-3">

Create a scheduled GPU Job:

```python
>>> from huggingface_hub import create_scheduled_job
>>> image = "pytorch/pytorch:2.6.0-cuda12.4-cudnn9-devel"
>>> command = ["python", "-c", "import torch; print(f'This code ran with the following GPU: {torch.cuda.get_device_name()}')"]
>>> create_scheduled_job(image, command, flavor="a10g-small", schedule="@hourly")
```

</ExampleCodeBlock>



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_scheduled_uv_job</name><anchor>huggingface_hub.HfApi.create_scheduled_uv_job</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L10442</source><parameters>[{"name": "script", "val": ": str"}, {"name": "script_args", "val": ": Optional[list[str]] = None"}, {"name": "schedule", "val": ": str"}, {"name": "suspend", "val": ": Optional[bool] = None"}, {"name": "concurrency", "val": ": Optional[bool] = None"}, {"name": "dependencies", "val": ": Optional[list[str]] = None"}, {"name": "python", "val": ": Optional[str] = None"}, {"name": "image", "val": ": Optional[str] = None"}, {"name": "env", "val": ": Optional[dict[str, Any]] = None"}, {"name": "secrets", "val": ": Optional[dict[str, Any]] = None"}, {"name": "flavor", "val": ": Optional[SpaceHardware] = None"}, {"name": "timeout", "val": ": Optional[Union[int, float, str]] = None"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "_repo", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **script** (`str`) --
  Path or URL of the UV script, or a command.

- **script_args** (`list[str]`, *optional*) --
  Arguments to pass to the script or command.

- **schedule** (`str`) --
  One of "@annually", "@yearly", "@monthly", "@weekly", "@daily", "@hourly", or a
  CRON schedule expression (e.g., '0 9 * * 1' for 9 AM every Monday).

- **suspend** (`bool`, *optional*) --
  If True, the scheduled Job is suspended (paused). Defaults to False.

- **concurrency** (`bool`, *optional*) --
  If True, multiple instances of this Job can run concurrently. Defaults to False.

- **dependencies** (`list[str]`, *optional*) --
  Dependencies to use to run the UV script.

- **python** (`str`, *optional*) --
  Use a specific Python version. Default is 3.12.

- **image** (`str`, *optional*, defaults to `"ghcr.io/astral-sh/uv:python3.12-bookworm"`) --
  Use a custom Docker image with `uv` installed.

- **env** (`dict[str, Any]`, *optional*) --
  Defines the environment variables for the Job.

- **secrets** (`dict[str, Any]`, *optional*) --
  Defines the secret environment variables for the Job.

- **flavor** (`str`, *optional*) --
  Flavor for the hardware, as in Hugging Face Spaces. See [SpaceHardware](/docs/huggingface_hub/main/ko/package_reference/space_runtime#huggingface_hub.SpaceHardware) for possible values.
  Defaults to `"cpu-basic"`.

- **timeout** (`Union[int, float, str]`, *optional*) --
  Max duration for the Job: int/float with s (seconds, default), m (minutes), h (hours) or d (days).
  Example: `300` or `"5m"` for 5 minutes.

- **namespace** (`str`, *optional*) --
  The namespace where the Job will be created. Defaults to the current user's namespace.

- **token** (`Union[bool, str, None]`, *optional*) --
  A valid user access token. If not provided, the locally saved token will be used, which is the
  recommended authentication method. Set to `False` to disable authentication.
  Refer to: https://huggingface.co/docs/huggingface_hub/quick-start#authentication.</paramsdesc><paramgroups>0</paramgroups></docstring>

Create a scheduled Job that runs a UV script on Hugging Face infrastructure.



Example:

<ExampleCodeBlock anchor="huggingface_hub.HfApi.create_scheduled_uv_job.example">

Schedule a script from a URL:

```python
>>> from huggingface_hub import create_scheduled_uv_job
>>> script = "https://raw.githubusercontent.com/huggingface/trl/refs/heads/main/trl/scripts/sft.py"
>>> script_args = ["--model_name_or_path", "Qwen/Qwen2-0.5B", "--dataset_name", "trl-lib/Capybara", "--push_to_hub"]
>>> create_scheduled_uv_job(script, script_args=script_args, dependencies=["trl"], flavor="a10g-small", schedule="@weekly")
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.HfApi.create_scheduled_uv_job.example-2">

Schedule a local script:

```python
>>> from huggingface_hub import create_scheduled_uv_job
>>> script = "my_sft.py"
>>> script_args = ["--model_name_or_path", "Qwen/Qwen2-0.5B", "--dataset_name", "trl-lib/Capybara", "--push_to_hub"]
>>> create_scheduled_uv_job(script, script_args=script_args, dependencies=["trl"], flavor="a10g-small", schedule="@weekly")
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.HfApi.create_scheduled_uv_job.example-3">

Schedule a command:

```python
>>> from huggingface_hub import create_scheduled_uv_job
>>> script = "lighteval"
>>> script_args = ["endpoint", "inference-providers", "model_name=openai/gpt-oss-20b,provider=auto", "lighteval|gsm8k|0|0"]
>>> create_scheduled_uv_job(script, script_args=script_args, dependencies=["lighteval"], flavor="a10g-small", schedule="@weekly")
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_tag</name><anchor>huggingface_hub.HfApi.create_tag</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L5789</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "tag", "val": ": str"}, {"name": "tag_message", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "exist_ok", "val": ": bool = False"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The repository in which a commit will be tagged.
  Example: `"user/my-cool-model"`.

- **tag** (`str`) --
  The name of the tag to create.

- **tag_message** (`str`, *optional*) --
  The description of the tag to create.

- **revision** (`str`, *optional*) --
  The git revision to tag. It can be a branch name or the OID/SHA of a
  commit, as a hexadecimal string. Shorthands (7 first characters) are
  also supported. Defaults to the head of the `"main"` branch.

- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.

- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if tagging a dataset or
  space, `None` or `"model"` if tagging a model. Default is
  `None`.

- **exist_ok** (`bool`, *optional*, defaults to `False`) --
  If `True`, do not raise an error if tag already exists.</paramsdesc><paramgroups>0</paramgroups><raises>- [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  If repository is not found (error 404): wrong repo_id/repo_type, private
  but not authenticated or repo does not exist.
- [RevisionNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError) -- 
  If revision is not found (error 404) on the repo.
- [HfHubHTTPError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.HfHubHTTPError) -- 
  If the tag already exists on the repo (error 409) and `exist_ok` is
  set to `False`.</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) or [RevisionNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError) or [HfHubHTTPError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.HfHubHTTPError)</raisederrors></docstring>

Tag a given commit of a repo on the Hub.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_webhook</name><anchor>huggingface_hub.HfApi.create_webhook</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L8957</source><parameters>[{"name": "url", "val": ": Optional[str] = None"}, {"name": "job_id", "val": ": Optional[str] = None"}, {"name": "watched", "val": ": list[Union[dict, WebhookWatchedItem]]"}, {"name": "domains", "val": ": Optional[list[constants.WEBHOOK_DOMAIN_T]] = None"}, {"name": "secret", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **url** (`str`) --
  URL to send the payload to.
- **job_id** (`str`) --
  ID of the source Job to trigger with the webhook payload in the environment variable WEBHOOK_PAYLOAD.
  Additional environment variables are available for convenience: WEBHOOK_REPO_ID, WEBHOOK_REPO_TYPE and WEBHOOK_SECRET.
- **watched** (`list[WebhookWatchedItem]`) --
  List of `WebhookWatchedItem` to be watched by the webhook. It can be users, orgs, models, datasets or spaces.
  Watched items can also be provided as plain dictionaries.
- **domains** (`list[Literal["repo", "discussion"]]`, *optional*) --
  List of domains to watch. It can be "repo", "discussion" or both.
- **secret** (`str`, *optional*) --
  A secret to sign the payload with.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved token, which is the recommended
  method for authentication (see https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`WebhookInfo`</rettype><retdesc>Info about the newly created webhook.</retdesc></docstring>
Create a new webhook.

The webhook can either send a payload to a URL, or trigger a Job to run on Hugging Face infrastructure.
This function should be called with one of `url` or `job_id`, but not both.







Example:

Create a webhook that sends a payload to a URL
<ExampleCodeBlock anchor="huggingface_hub.HfApi.create_webhook.example">

```python
>>> from huggingface_hub import create_webhook
>>> payload = create_webhook(
...     watched=[{"type": "user", "name": "julien-c"}, {"type": "org", "name": "HuggingFaceH4"}],
...     url="https://webhook.site/a2176e82-5720-43ee-9e06-f91cb4c91548",
...     domains=["repo", "discussion"],
...     secret="my-secret",
... )
>>> print(payload)
WebhookInfo(
    id="654bbbc16f2ec14d77f109cc",
    url="https://webhook.site/a2176e82-5720-43ee-9e06-f91cb4c91548",
    job=None,
    watched=[WebhookWatchedItem(type="user", name="julien-c"), WebhookWatchedItem(type="org", name="HuggingFaceH4")],
    domains=["repo", "discussion"],
    secret="my-secret",
    disabled=False,
)
```

</ExampleCodeBlock>

Run a Job and then create a webhook that triggers this Job
<ExampleCodeBlock anchor="huggingface_hub.HfApi.create_webhook.example-2">

```python
>>> from huggingface_hub import create_webhook, run_job
>>> job = run_job(
...     image="ubuntu",
...     command=["bash", "-c", r"echo An event occurred in $WEBHOOK_REPO_ID: $WEBHOOK_PAYLOAD"],
... )
>>> payload = create_webhook(
...     watched=[{"type": "user", "name": "julien-c"}, {"type": "org", "name": "HuggingFaceH4"}],
...     job_id=job.id,
...     domains=["repo", "discussion"],
...     secret="my-secret",
... )
>>> print(payload)
WebhookInfo(
    id="654bbbc16f2ec14d77f109cc",
    url=None,
    job=JobSpec(
        docker_image='ubuntu',
        space_id=None,
        command=['bash', '-c', 'echo An event occurred in $WEBHOOK_REPO_ID: $WEBHOOK_PAYLOAD'],
        arguments=[],
        environment={},
        secrets=[],
        flavor='cpu-basic',
        timeout=None,
        tags=None,
        arch=None
    ),
    watched=[WebhookWatchedItem(type="user", name="julien-c"), WebhookWatchedItem(type="org", name="HuggingFaceH4")],
    domains=["repo", "discussion"],
    secret="my-secret",
    disabled=False,
)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>dataset_info</name><anchor>huggingface_hub.HfApi.dataset_info</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L2554</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "timeout", "val": ": Optional[float] = None"}, {"name": "files_metadata", "val": ": bool = False"}, {"name": "expand", "val": ": Optional[list[ExpandDatasetProperty_T]] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **revision** (`str`, *optional*) --
  The revision of the dataset repository from which to get the
  information.
- **timeout** (`float`, *optional*) --
  Timeout in seconds for the request to the Hub.
- **files_metadata** (`bool`, *optional*) --
  Whether or not to retrieve metadata for files in the repository
  (size, LFS metadata, etc). Defaults to `False`.
- **expand** (`list[ExpandDatasetProperty_T]`, *optional*) --
  List properties to return in the response. When used, only the properties in the list will be returned.
  This parameter cannot be used if `files_metadata` is passed.
  Possible values are `"author"`, `"cardData"`, `"citation"`, `"createdAt"`, `"disabled"`, `"description"`, `"downloads"`, `"downloadsAllTime"`, `"gated"`, `"lastModified"`, `"likes"`, `"paperswithcode_id"`, `"private"`, `"siblings"`, `"sha"`, `"tags"`, `"trendingScore"`, `"usedStorage"`, and `"resourceGroup"`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[hf_api.DatasetInfo](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.DatasetInfo)</rettype><retdesc>The dataset repository information.</retdesc></docstring>

Get info on one specific dataset on huggingface.co.

The dataset can be private if you pass a token with sufficient access.







> [!TIP]
> Raises the following errors:
>
>     - [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)
>       If the repository to download from cannot be found. This may be because it doesn't exist,
>       or because it is set to `private` and you do not have access.
>     - [RevisionNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError)
>       If the revision to download from cannot be found.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_branch</name><anchor>huggingface_hub.HfApi.delete_branch</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L5737</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "branch", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The repository in which a branch will be deleted.
  Example: `"user/my-cool-model"`.

- **branch** (`str`) --
  The name of the branch to delete.

- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.

- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if deleting a branch from a dataset or
  space, `None` or `"model"` if from a model. Default is `None`.
  If repository is not found (error 404): wrong repo_id/repo_type, private
  but not authenticated or repo does not exist.
- [HfHubHTTPError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.HfHubHTTPError) -- 
  If trying to delete a protected branch. Ex: `main` cannot be deleted.
- [HfHubHTTPError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.HfHubHTTPError) -- 
  If trying to delete a branch that does not exist.</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) or [HfHubHTTPError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.HfHubHTTPError)</raisederrors></docstring>

Delete a branch from a repo on the Hub.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_collection</name><anchor>huggingface_hub.HfApi.delete_collection</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L8189</source><parameters>[{"name": "collection_slug", "val": ": str"}, {"name": "missing_ok", "val": ": bool = False"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **collection_slug** (`str`) --
  Slug of the collection to delete. Example: `"TheBloke/recent-models-64f9a55bb3115b4f513ec026"`.
- **missing_ok** (`bool`, *optional*) --
  If `True`, do not raise an error if the collection doesn't exist.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Delete a collection on the Hub.



<ExampleCodeBlock anchor="huggingface_hub.HfApi.delete_collection.example">

Example:

```py
>>> from huggingface_hub import delete_collection
>>> collection = delete_collection("username/useless-collection-64f9a55bb3115b4f513ec026", missing_ok=True)
```

</ExampleCodeBlock>

> [!WARNING]
> This is a non-revertible action. A deleted collection cannot be restored.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_collection_item</name><anchor>huggingface_hub.HfApi.delete_collection_item</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L8362</source><parameters>[{"name": "collection_slug", "val": ": str"}, {"name": "item_object_id", "val": ": str"}, {"name": "missing_ok", "val": ": bool = False"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **collection_slug** (`str`) --
  Slug of the collection to update. Example: `"TheBloke/recent-models-64f9a55bb3115b4f513ec026"`.
- **item_object_id** (`str`) --
  ID of the item in the collection. This is not the id of the item on the Hub (repo_id or paper id).
  It must be retrieved from a [CollectionItem](/docs/huggingface_hub/main/ko/package_reference/collections#huggingface_hub.CollectionItem) object. Example: `collection.items[0].item_object_id`.
- **missing_ok** (`bool`, *optional*) --
  If `True`, do not raise an error if the item doesn't exist.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Delete an item from a collection.



<ExampleCodeBlock anchor="huggingface_hub.HfApi.delete_collection_item.example">

Example:

```py
>>> from huggingface_hub import get_collection, delete_collection_item

# Get collection first
>>> collection = get_collection("TheBloke/recent-models-64f9a55bb3115b4f513ec026")

# Delete item based on its ID
>>> delete_collection_item(
...     collection_slug="TheBloke/recent-models-64f9a55bb3115b4f513ec026",
...     item_object_id=collection.items[-1].item_object_id,
... )
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_file</name><anchor>huggingface_hub.HfApi.delete_file</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L4744</source><parameters>[{"name": "path_in_repo", "val": ": str"}, {"name": "repo_id", "val": ": str"}, {"name": "token", "val": ": Union[str, bool, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "commit_message", "val": ": Optional[str] = None"}, {"name": "commit_description", "val": ": Optional[str] = None"}, {"name": "create_pr", "val": ": Optional[bool] = None"}, {"name": "parent_commit", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **path_in_repo** (`str`) --
  Relative filepath in the repo, for example:
  `"checkpoints/1fec34a/weights.bin"`
- **repo_id** (`str`) --
  The repository from which the file will be deleted, for example:
  `"username/custom_transformers"`
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if the file is in a dataset or
  space, `None` or `"model"` if in a model. Default is `None`.
- **revision** (`str`, *optional*) --
  The git revision to commit from. Defaults to the head of the `"main"` branch.
- **commit_message** (`str`, *optional*) --
  The summary / title / first line of the generated commit. Defaults to
  `f"Delete {path_in_repo} with huggingface_hub"`.
- **commit_description** (`str`, *optional*) --
  The description of the generated commit
- **create_pr** (`boolean`, *optional*) --
  Whether or not to create a Pull Request with that commit. Defaults to `False`.
  If `revision` is not set, PR is opened against the `"main"` branch. If
  `revision` is set and is a branch, PR is opened against this branch. If
  `revision` is set and is not a branch name (example: a commit oid), a
  `RevisionNotFoundError` is returned by the server.
- **parent_commit** (`str`, *optional*) --
  The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported.
  If specified and `create_pr` is `False`, the commit will fail if `revision` does not point to `parent_commit`.
  If specified and `create_pr` is `True`, the pull request will be created from `parent_commit`.
  Specifying `parent_commit` ensures the repo has not changed before committing the changes, and can be
  especially useful if the repo is updated / committed to concurrently.</paramsdesc><paramgroups>0</paramgroups></docstring>

Deletes a file in the given repo.



> [!TIP]
> Raises the following errors:
>
>     - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError)
>       if the HuggingFace API returned an error
>     - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
>       if some parameter value is invalid
>     - [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)
>       If the repository to download from cannot be found. This may be because it doesn't exist,
>       or because it is set to `private` and you do not have access.
>     - [RevisionNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError)
>       If the revision to download from cannot be found.
>     - [EntryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.EntryNotFoundError)
>       If the file to download cannot be found.



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_files</name><anchor>huggingface_hub.HfApi.delete_files</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L4831</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "delete_patterns", "val": ": list[str]"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "commit_message", "val": ": Optional[str] = None"}, {"name": "commit_description", "val": ": Optional[str] = None"}, {"name": "create_pr", "val": ": Optional[bool] = None"}, {"name": "parent_commit", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The repository from which the folder will be deleted, for example:
  `"username/custom_transformers"`
- **delete_patterns** (`list[str]`) --
  List of files or folders to delete. Each string can either be
  a file path, a folder path or a Unix shell-style wildcard.
  E.g. `["file.txt", "folder/", "data/*.parquet"]`
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **repo_type** (`str`, *optional*) --
  Type of the repo to delete files from. Can be `"model"`,
  `"dataset"` or `"space"`. Defaults to `"model"`.
- **revision** (`str`, *optional*) --
  The git revision to commit from. Defaults to the head of the `"main"` branch.
- **commit_message** (`str`, *optional*) --
  The summary (first line) of the generated commit. Defaults to
  `f"Delete files using huggingface_hub"`.
- **commit_description** (`str`, *optional*) --
  The description of the generated commit.
- **create_pr** (`boolean`, *optional*) --
  Whether or not to create a Pull Request with that commit. Defaults to `False`.
  If `revision` is not set, PR is opened against the `"main"` branch. If
  `revision` is set and is a branch, PR is opened against this branch. If
  `revision` is set and is not a branch name (example: a commit oid), a
  `RevisionNotFoundError` is returned by the server.
- **parent_commit** (`str`, *optional*) --
  The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported.
  If specified and `create_pr` is `False`, the commit will fail if `revision` does not point to `parent_commit`.
  If specified and `create_pr` is `True`, the pull request will be created from `parent_commit`.
  Specifying `parent_commit` ensures the repo has not changed before committing the changes, and can be
  especially useful if the repo is updated / committed to concurrently.</paramsdesc><paramgroups>0</paramgroups></docstring>

Delete files from a repository on the Hub.

If a folder path is provided, the entire folder is deleted as well as
all files it contained.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_folder</name><anchor>huggingface_hub.HfApi.delete_folder</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L4907</source><parameters>[{"name": "path_in_repo", "val": ": str"}, {"name": "repo_id", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "commit_message", "val": ": Optional[str] = None"}, {"name": "commit_description", "val": ": Optional[str] = None"}, {"name": "create_pr", "val": ": Optional[bool] = None"}, {"name": "parent_commit", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **path_in_repo** (`str`) --
  Relative folder path in the repo, for example: `"checkpoints/1fec34a"`.
- **repo_id** (`str`) --
  The repository from which the folder will be deleted, for example:
  `"username/custom_transformers"`
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if the folder is in a dataset or
  space, `None` or `"model"` if in a model. Default is `None`.
- **revision** (`str`, *optional*) --
  The git revision to commit from. Defaults to the head of the `"main"` branch.
- **commit_message** (`str`, *optional*) --
  The summary / title / first line of the generated commit. Defaults to
  `f"Delete folder {path_in_repo} with huggingface_hub"`.
- **commit_description** (`str`, *optional*) --
  The description of the generated commit.
- **create_pr** (`boolean`, *optional*) --
  Whether or not to create a Pull Request with that commit. Defaults to `False`.
  If `revision` is not set, PR is opened against the `"main"` branch. If
  `revision` is set and is a branch, PR is opened against this branch. If
  `revision` is set and is not a branch name (example: a commit oid), a
  `RevisionNotFoundError` is returned by the server.
- **parent_commit** (`str`, *optional*) --
  The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported.
  If specified and `create_pr` is `False`, the commit will fail if `revision` does not point to `parent_commit`.
  If specified and `create_pr` is `True`, the pull request will be created from `parent_commit`.
  Specifying `parent_commit` ensures the repo has not changed before committing the changes, and can be
  especially useful if the repo is updated / committed to concurrently.</paramsdesc><paramgroups>0</paramgroups></docstring>

Deletes a folder in the given repo.

Simple wrapper around [create_commit()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_commit) method.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_inference_endpoint</name><anchor>huggingface_hub.HfApi.delete_inference_endpoint</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L7800</source><parameters>[{"name": "name", "val": ": str"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **name** (`str`) --
  The name of the Inference Endpoint to delete.
- **namespace** (`str`, *optional*) --
  The namespace in which the Inference Endpoint is located. Defaults to the current user.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Delete an Inference Endpoint.

This operation is not reversible. If you don't want to be charged for an Inference Endpoint, it is preferable
to pause it with [pause_inference_endpoint()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.pause_inference_endpoint) or scale it to zero with [scale_to_zero_inference_endpoint()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.scale_to_zero_inference_endpoint).

For convenience, you can also delete an Inference Endpoint using [InferenceEndpoint.delete()](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.delete).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_repo</name><anchor>huggingface_hub.HfApi.delete_repo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L3664</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "token", "val": ": Union[str, bool, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "missing_ok", "val": ": bool = False"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if deleting a dataset or
  space, `None` or `"model"` if deleting a model.
- **missing_ok** (`bool`, *optional*, defaults to `False`) --
  If `True`, do not raise an error if repo does not exist.</paramsdesc><paramgroups>0</paramgroups><raises>- [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  If the repository to delete from cannot be found and `missing_ok` is set to False (default).</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)</raisederrors></docstring>

Delete a repo from the HuggingFace Hub. CAUTION: this is irreversible.
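The `missing_ok` semantics can be pictured with a small pure-Python stand-in (`delete_repo_sketch`, the exception class, and the repo names below are illustrative, not part of the library):

```python
class RepositoryNotFoundError(Exception):
    """Stand-in for huggingface_hub.errors.RepositoryNotFoundError."""

def delete_repo_sketch(existing: set, repo_id: str, missing_ok: bool = False) -> None:
    # Mirrors the documented behavior: deleting a missing repo raises
    # RepositoryNotFoundError unless missing_ok=True.
    if repo_id not in existing:
        if missing_ok:
            return
        raise RepositoryNotFoundError(repo_id)
    existing.remove(repo_id)

repos = {"me/old-model"}
delete_repo_sketch(repos, "me/old-model")                   # removed
delete_repo_sketch(repos, "me/old-model", missing_ok=True)  # no error
```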








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_scheduled_job</name><anchor>huggingface_hub.HfApi.delete_scheduled_job</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L10354</source><parameters>[{"name": "scheduled_job_id", "val": ": str"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **scheduled_job_id** (`str`) --
  ID of the scheduled Job.

- **namespace** (`str`, *optional*) --
  The namespace where the scheduled Job is. Defaults to the current user's namespace.

- **token** (`Union[bool, str, None]`, *optional*) --
  A valid user access token. If not provided, the locally saved token will be used, which is the
  recommended authentication method. Set to `False` to disable authentication.
  Refer to: https://huggingface.co/docs/huggingface_hub/quick-start#authentication.</paramsdesc><paramgroups>0</paramgroups></docstring>

Delete a scheduled compute Job on Hugging Face infrastructure.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_space_secret</name><anchor>huggingface_hub.HfApi.delete_space_secret</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6735</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "key", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  ID of the repo to update. Example: `"bigcode/in-the-stack"`.
- **key** (`str`) --
  Secret key. Example: `"GITHUB_API_KEY"`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Deletes a secret from a Space.

Secrets let you set secret keys or tokens on a Space without hardcoding them.
For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_space_storage</name><anchor>huggingface_hub.HfApi.delete_space_storage</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L7212</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  ID of the Space to update. Example: `"open-llm-leaderboard/open_llm_leaderboard"`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[SpaceRuntime](/docs/huggingface_hub/main/ko/package_reference/space_runtime#huggingface_hub.SpaceRuntime)</rettype><retdesc>Runtime information about a Space including Space stage and hardware.</retdesc><raises>- `BadRequestError` -- 
  If space has no persistent storage.</raises><raisederrors>`BadRequestError`</raisederrors></docstring>
Delete persistent storage for a Space.












</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_space_variable</name><anchor>huggingface_hub.HfApi.delete_space_variable</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6825</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "key", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  ID of the repo to update. Example: `"bigcode/in-the-stack"`.
- **key** (`str`) --
  Variable key. Example: `"MODEL_REPO_ID"`
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Deletes a variable from a Space.

Variables let you set environment variables on a Space without hardcoding them.
For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets-and-environment-variables




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_tag</name><anchor>huggingface_hub.HfApi.delete_tag</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L5863</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "tag", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The repository in which a tag will be deleted.
  Example: `"user/my-cool-model"`.

- **tag** (`str`) --
  The name of the tag to delete.

- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.

- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if tagging a dataset or space, `None` or
  `"model"` if tagging a model. Default is `None`.</paramsdesc><paramgroups>0</paramgroups><raises>- [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  If repository is not found (error 404): wrong repo_id/repo_type, private
  but not authenticated or repo does not exist.
- [RevisionNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError) -- 
  If tag is not found.</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) or [RevisionNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError)</raisederrors></docstring>

Delete a tag from a repo on the Hub.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_webhook</name><anchor>huggingface_hub.HfApi.delete_webhook</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L9274</source><parameters>[{"name": "webhook_id", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **webhook_id** (`str`) --
  The unique identifier of the webhook to delete.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved token, which is the recommended
  method for authentication (see https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`None`</rettype></docstring>
Delete a webhook.







<ExampleCodeBlock anchor="huggingface_hub.HfApi.delete_webhook.example">

Example:
```python
>>> from huggingface_hub import delete_webhook
>>> delete_webhook("654bbbc16f2ec14d77f109cc")
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_webhook</name><anchor>huggingface_hub.HfApi.disable_webhook</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L9221</source><parameters>[{"name": "webhook_id", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **webhook_id** (`str`) --
  The unique identifier of the webhook to disable.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved token, which is the recommended
  method for authentication (see https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`WebhookInfo`</rettype><retdesc>Info about the disabled webhook.</retdesc></docstring>
Disable a webhook (makes it "disabled").







<ExampleCodeBlock anchor="huggingface_hub.HfApi.disable_webhook.example">

Example:
```python
>>> from huggingface_hub import disable_webhook
>>> disabled_webhook = disable_webhook("654bbbc16f2ec14d77f109cc")
>>> disabled_webhook
WebhookInfo(
    id="654bbbc16f2ec14d77f109cc",
    url="https://webhook.site/a2176e82-5720-43ee-9e06-f91cb4c91548",
    job=None,
    watched=[WebhookWatchedItem(type="user", name="julien-c"), WebhookWatchedItem(type="org", name="HuggingFaceH4")],
    domains=["repo", "discussion"],
    secret="my-secret",
    disabled=True,
)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>duplicate_space</name><anchor>huggingface_hub.HfApi.duplicate_space</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L7058</source><parameters>[{"name": "from_id", "val": ": str"}, {"name": "to_id", "val": ": Optional[str] = None"}, {"name": "private", "val": ": Optional[bool] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "exist_ok", "val": ": bool = False"}, {"name": "hardware", "val": ": Optional[SpaceHardware] = None"}, {"name": "storage", "val": ": Optional[SpaceStorage] = None"}, {"name": "sleep_time", "val": ": Optional[int] = None"}, {"name": "secrets", "val": ": Optional[list[dict[str, str]]] = None"}, {"name": "variables", "val": ": Optional[list[dict[str, str]]] = None"}]</parameters><paramsdesc>- **from_id** (`str`) --
  ID of the Space to duplicate. Example: `"pharma/CLIP-Interrogator"`.
- **to_id** (`str`, *optional*) --
  ID of the new Space. Example: `"dog/CLIP-Interrogator"`. If not provided, the new Space will have the same
  name as the original Space, but in your account.
- **private** (`bool`, *optional*) --
  Whether the new Space should be private or not. Defaults to the same privacy as the original Space.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **exist_ok** (`bool`, *optional*, defaults to `False`) --
  If `True`, do not raise an error if repo already exists.
- **hardware** (`SpaceHardware` or `str`, *optional*) --
  Choice of Hardware. Example: `"t4-medium"`. See [SpaceHardware](/docs/huggingface_hub/main/ko/package_reference/space_runtime#huggingface_hub.SpaceHardware) for a complete list.
- **storage** (`SpaceStorage` or `str`, *optional*) --
  Choice of persistent storage tier. Example: `"small"`. See [SpaceStorage](/docs/huggingface_hub/main/ko/package_reference/space_runtime#huggingface_hub.SpaceStorage) for a complete list.
- **sleep_time** (`int`, *optional*) --
  Number of seconds of inactivity to wait before a Space is put to sleep. Set to `-1` if you don't want
  your Space to sleep (default behavior for upgraded hardware). For free hardware, you can't configure
  the sleep time (value is fixed to 48 hours of inactivity).
  See https://huggingface.co/docs/hub/spaces-gpus#sleep-time for more details.
- **secrets** (`list[dict[str, str]]`, *optional*) --
  A list of secret keys to set in your Space. Each item is in the form `{"key": ..., "value": ..., "description": ...}` where description is optional.
  For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets.
- **variables** (`list[dict[str, str]]`, *optional*) --
  A list of public environment variables to set in your Space. Each item is in the form `{"key": ..., "value": ..., "description": ...}` where description is optional.
  For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets-and-environment-variables.</paramsdesc><paramgroups>0</paramgroups><rettype>[RepoUrl](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.RepoUrl)</rettype><retdesc>URL to the newly created repo. Value is a subclass of `str` containing
attributes like `endpoint`, `repo_type` and `repo_id`.</retdesc><raises>- [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  If one of `from_id` or `to_id` cannot be found. This may be because it doesn't exist,
  or because it is set to `private` and you do not have access.
- `HfHubHTTPError` -- 
  If the HuggingFace API returned an error</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) or `HfHubHTTPError`</raisederrors></docstring>
Duplicate a Space.

Programmatically duplicate a Space. The new Space will be created in your account and will be in the same state
as the original Space (running or paused). You can duplicate a Space regardless of its current state.
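The `secrets` and `variables` payloads described above are plain lists of dicts; the keys and values below are placeholders, not real credentials:

```python
# Shape expected by duplicate_space(secrets=..., variables=...).
# "description" is optional in each item; all values here are placeholders.
secrets = [
    {"key": "HF_TOKEN", "value": "hf_xxx", "description": "token used by the Space"},
]
variables = [
    {"key": "MODEL_REPO_ID", "value": "user/repo"},
]

# duplicate_space("multimodalart/dreambooth-training",
#                 secrets=secrets, variables=variables)
```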











<ExampleCodeBlock anchor="huggingface_hub.HfApi.duplicate_space.example">

Example:
```python
>>> from huggingface_hub import duplicate_space

# Duplicate a Space to your account
>>> duplicate_space("multimodalart/dreambooth-training")
RepoUrl('https://huggingface.co/spaces/nateraw/dreambooth-training',...)

# Can set custom destination id and visibility flag.
>>> duplicate_space("multimodalart/dreambooth-training", to_id="my-dreambooth", private=True)
RepoUrl('https://huggingface.co/spaces/nateraw/my-dreambooth',...)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>edit_discussion_comment</name><anchor>huggingface_hub.HfApi.edit_discussion_comment</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6578</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "discussion_num", "val": ": int"}, {"name": "comment_id", "val": ": str"}, {"name": "new_content", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **discussion_num** (`int`) --
  The number of the Discussion or Pull Request. Must be a strictly positive integer.
- **comment_id** (`str`) --
  The ID of the comment to edit.
- **new_content** (`str`) --
  The new content of the comment. Comments support markdown formatting.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if the comment is on a dataset or
  space, `None` or `"model"` if it is on a model. Default is
  `None`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[DiscussionComment](/docs/huggingface_hub/main/ko/package_reference/community#huggingface_hub.DiscussionComment)</rettype><retdesc>the edited comment</retdesc></docstring>
Edits a comment on a Discussion / Pull Request.







> [!TIP]
> Raises the following errors:
>
>     - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError)
>       if the HuggingFace API returned an error
>     - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
>       if some parameter value is invalid
>     - [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)
>       If the repository to download from cannot be found. This may be because it doesn't exist,
>       or because it is set to `private` and you do not have access.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_webhook</name><anchor>huggingface_hub.HfApi.enable_webhook</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L9168</source><parameters>[{"name": "webhook_id", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **webhook_id** (`str`) --
  The unique identifier of the webhook to enable.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved token, which is the recommended
  method for authentication (see https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`WebhookInfo`</rettype><retdesc>Info about the enabled webhook.</retdesc></docstring>
Enable a webhook (makes it "active").







<ExampleCodeBlock anchor="huggingface_hub.HfApi.enable_webhook.example">

Example:
```python
>>> from huggingface_hub import enable_webhook
>>> enabled_webhook = enable_webhook("654bbbc16f2ec14d77f109cc")
>>> enabled_webhook
WebhookInfo(
    id="654bbbc16f2ec14d77f109cc",
    job=None,
    url="https://webhook.site/a2176e82-5720-43ee-9e06-f91cb4c91548",
    watched=[WebhookWatchedItem(type="user", name="julien-c"), WebhookWatchedItem(type="org", name="HuggingFaceH4")],
    domains=["repo", "discussion"],
    secret="my-secret",
    disabled=False,
)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fetch_job_logs</name><anchor>huggingface_hub.HfApi.fetch_job_logs</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L9852</source><parameters>[{"name": "job_id", "val": ": str"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **job_id** (`str`) --
  ID of the Job.

- **namespace** (`str`, *optional*) --
  The namespace where the Job is running. Defaults to the current user's namespace.

- **token** (`Union[bool, str, None]`, *optional*) --
  A valid user access token. If not provided, the locally saved token will be used, which is the
  recommended authentication method. Set to `False` to disable authentication.
  Refer to: https://huggingface.co/docs/huggingface_hub/quick-start#authentication.</paramsdesc><paramgroups>0</paramgroups></docstring>

Fetch all the logs from a compute Job on Hugging Face infrastructure.



<ExampleCodeBlock anchor="huggingface_hub.HfApi.fetch_job_logs.example">

Example:

```python
>>> from huggingface_hub import fetch_job_logs, run_job
>>> job = run_job(image="python:3.12", command=["python", "-c" ,"print('Hello from HF compute!')"])
>>> for log in fetch_job_logs(job.id):
...     print(log)
Hello from HF compute!
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>file_exists</name><anchor>huggingface_hub.HfApi.file_exists</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L2856</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "filename", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[str, bool, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **filename** (`str`) --
  The name of the file to check, for example:
  `"config.json"`
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if getting repository info from a dataset or a space,
  `None` or `"model"` if getting repository info from a model. Default is `None`.
- **revision** (`str`, *optional*) --
  The revision of the repository from which to get the information. Defaults to `"main"` branch.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><retdesc>True if the file exists, False otherwise.</retdesc></docstring>

Checks if a file exists in a repository on the Hugging Face Hub.





<ExampleCodeBlock anchor="huggingface_hub.HfApi.file_exists.example">

Examples:
```py
>>> from huggingface_hub import file_exists
>>> file_exists("bigcode/starcoder", "config.json")
True
>>> file_exists("bigcode/starcoder", "not-a-file")
False
>>> file_exists("bigcode/not-a-repo", "config.json")
False
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_collection</name><anchor>huggingface_hub.HfApi.get_collection</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L8014</source><parameters>[{"name": "collection_slug", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **collection_slug** (`str`) --
  Slug of the collection of the Hub. Example: `"TheBloke/recent-models-64f9a55bb3115b4f513ec026"`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Gets information about a Collection on the Hub.



Returns: [Collection](/docs/huggingface_hub/main/ko/package_reference/collections#huggingface_hub.Collection)

<ExampleCodeBlock anchor="huggingface_hub.HfApi.get_collection.example">

Example:

```py
>>> from huggingface_hub import get_collection
>>> collection = get_collection("TheBloke/recent-models-64f9a55bb3115b4f513ec026")
>>> collection.title
'Recent models'
>>> len(collection.items)
37
>>> collection.items[0]
CollectionItem(
    item_object_id='651446103cd773a050bf64c2',
    item_id='TheBloke/U-Amethyst-20B-AWQ',
    item_type='model',
    position=88,
    note=None
)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_dataset_tags</name><anchor>huggingface_hub.HfApi.get_dataset_tags</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L1781</source><parameters>[]</parameters></docstring>

List all valid dataset tags as a nested namespace object.
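What a "nested namespace object" looks like can be sketched with `types.SimpleNamespace` (the tag names below are invented; the real set comes from the Hub):

```python
from types import SimpleNamespace

# Tags are reached by attribute access instead of string keys;
# all names and values here are illustrative.
tags = SimpleNamespace(
    language=SimpleNamespace(en="language:en", fr="language:fr"),
    license=SimpleNamespace(mit="license:mit"),
)
print(tags.language.en)  # -> language:en
```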


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_discussion_details</name><anchor>huggingface_hub.HfApi.get_discussion_details</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6058</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "discussion_num", "val": ": int"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **discussion_num** (`int`) --
  The number of the Discussion or Pull Request. Must be a strictly positive integer.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if the Discussion is on a dataset or
  space, `None` or `"model"` if it is on a model. Default is
  `None`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Fetches a Discussion's / Pull Request's details from the Hub.



Returns: [DiscussionWithDetails](/docs/huggingface_hub/main/ko/package_reference/community#huggingface_hub.DiscussionWithDetails)

> [!TIP]
> Raises the following errors:
>
>     - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError)
>       if the HuggingFace API returned an error
>     - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
>       if some parameter value is invalid
>     - [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)
>       If the repository to download from cannot be found. This may be because it doesn't exist,
>       or because it is set to `private` and you do not have access.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_full_repo_name</name><anchor>huggingface_hub.HfApi.get_full_repo_name</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L5912</source><parameters>[{"name": "model_id", "val": ": str"}, {"name": "organization", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **model_id** (`str`) --
  The name of the model.
- **organization** (`str`, *optional*) --
  If passed, the repository name will be in the organization
  namespace instead of the user namespace.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`str`</rettype><retdesc>The repository name in the user's namespace
({username}/{model_id}) if no organization is passed, and under the
organization namespace ({organization}/{model_id}) otherwise.</retdesc></docstring>

Returns the repository name for a given model ID and optional
organization.
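The naming rule reduces to a one-liner. An illustrative helper (not the library function; the real `get_full_repo_name` resolves the username from the token, which this sketch takes as an argument):

```python
from typing import Optional

def full_repo_name(model_id: str, username: str, organization: Optional[str] = None) -> str:
    # {organization}/{model_id} if an organization is given,
    # otherwise {username}/{model_id}.
    namespace = organization if organization is not None else username
    return f"{namespace}/{model_id}"

print(full_repo_name("bert-base", "alice"))             # -> alice/bert-base
print(full_repo_name("bert-base", "alice", "big-org"))  # -> big-org/bert-base
```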








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_hf_file_metadata</name><anchor>huggingface_hub.HfApi.get_hf_file_metadata</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L5091</source><parameters>[{"name": "url", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "timeout", "val": ": Optional[float] = 10"}]</parameters><paramsdesc>- **url** (`str`) --
  File url, for example returned by `hf_hub_url()`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **timeout** (`float`, *optional*, defaults to 10) --
  How many seconds to wait for the server to send metadata before giving up.</paramsdesc><paramgroups>0</paramgroups><retdesc>A `HfFileMetadata` object containing metadata such as location, etag, size and commit_hash.</retdesc></docstring>
Fetch metadata of a file versioned on the Hub for a given url.
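The returned object carries the four fields named above. A dataclass mirror makes the shape concrete (illustrative only; the real class is `huggingface_hub.HfFileMetadata`, and the values below are placeholders):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FileMetadataSketch:
    location: str               # URL the file resolves to
    etag: Optional[str]         # identifies the exact file content
    size: Optional[int]         # size in bytes
    commit_hash: Optional[str]  # revision the file belongs to

meta = FileMetadataSketch(
    location="https://huggingface.co/user/repo/resolve/main/config.json",
    etag='"abc123"',
    size=665,
    commit_hash="deadbeef",
)
```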






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_inference_endpoint</name><anchor>huggingface_hub.HfApi.get_inference_endpoint</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L7616</source><parameters>[{"name": "name", "val": ": str"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **name** (`str`) --
  The name of the Inference Endpoint to retrieve information about.
- **namespace** (`str`, *optional*) --
  The namespace in which the Inference Endpoint is located. Defaults to the current user.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[InferenceEndpoint](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint)</rettype><retdesc>information about the requested Inference Endpoint.</retdesc></docstring>
Get information about an Inference Endpoint.







<ExampleCodeBlock anchor="huggingface_hub.HfApi.get_inference_endpoint.example">

Example:
```python
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> endpoint = api.get_inference_endpoint("my-text-to-image")
>>> endpoint
InferenceEndpoint(name='my-text-to-image', ...)

# Get status
>>> endpoint.status
'running'
>>> endpoint.url
'https://my-text-to-image.region.vendor.endpoints.huggingface.cloud'

# Run inference
>>> endpoint.client.text_to_image(...)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_model_tags</name><anchor>huggingface_hub.HfApi.get_model_tags</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L1772</source><parameters>[]</parameters></docstring>

List all valid model tags as a nested namespace object.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_organization_overview</name><anchor>huggingface_hub.HfApi.get_organization_overview</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L9499</source><parameters>[{"name": "organization", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **organization** (`str`) --
  Name of the organization to get an overview of.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved token, which is the recommended method
  for authentication (see https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`Organization`</rettype><retdesc>An `Organization` object with the organization's overview.</retdesc><raises>- [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError) -- 
  HTTP 404 If the organization does not exist on the Hub.</raises><raisederrors>``HTTPError``</raisederrors></docstring>

Get an overview of an organization on the Hub.












</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_paths_info</name><anchor>huggingface_hub.HfApi.get_paths_info</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L3241</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "paths", "val": ": Union[list[str], str]"}, {"name": "expand", "val": ": bool = False"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[str, bool, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated by a `/`.
- **paths** (`Union[list[str], str]`) --
  The paths to get information about. If a path does not exist, it is ignored without raising
  an exception.
- **expand** (`bool`, *optional*, defaults to `False`) --
  Whether to fetch more information about the paths (e.g. last commit and files' security scan results). This
  operation is more expensive for the server so only 50 results are returned per page (instead of 1000).
  As pagination is implemented in `huggingface_hub`, this is transparent for you except for the time it
  takes to get the results.
- **revision** (`str`, *optional*) --
  The revision of the repository from which to get the information. Defaults to `"main"` branch.
- **repo_type** (`str`, *optional*) --
  The type of the repository from which to get the information (`"model"`, `"dataset"` or `"space"`).
  Defaults to `"model"`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[Union[RepoFile, RepoFolder]]`</rettype><retdesc>The information about the paths, as a list of `RepoFile` and `RepoFolder` objects.</retdesc><raises>- [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo
  does not exist.
- [RevisionNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError) -- 
  If revision is not found (error 404) on the repo.</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) or [RevisionNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError)</raisederrors></docstring>

Get information about a repo's paths.











<ExampleCodeBlock anchor="huggingface_hub.HfApi.get_paths_info.example">

Example:
```py
>>> from huggingface_hub import get_paths_info
>>> paths_info = get_paths_info("allenai/c4", ["README.md", "en"], repo_type="dataset")
>>> paths_info
[
    RepoFile(path='README.md', size=2379, blob_id='f84cb4c97182890fc1dbdeaf1a6a468fd27b4fff', lfs=None, last_commit=None, security=None),
    RepoFolder(path='en', tree_id='dc943c4c40f53d02b31ced1defa7e5f438d5862e', last_commit=None)
]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_repo_discussions</name><anchor>huggingface_hub.HfApi.get_repo_discussions</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L5950</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "author", "val": ": Optional[str] = None"}, {"name": "discussion_type", "val": ": Optional[constants.DiscussionTypeFilter] = None"}, {"name": "discussion_status", "val": ": Optional[constants.DiscussionStatusFilter] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **author** (`str`, *optional*) --
  Pass a value to filter by discussion author. `None` means no filter.
  Default is `None`.
- **discussion_type** (`str`, *optional*) --
  Set to `"pull_request"` to fetch only pull requests, `"discussion"`
  to fetch only discussions. Set to `"all"` or `None` to fetch both.
  Default is `None`.
- **discussion_status** (`str`, *optional*) --
  Set to `"open"` (respectively `"closed"`) to fetch only open
  (respectively closed) discussions. Set to `"all"` or `None`
  to fetch both.
  Default is `None`.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if fetching from a dataset or
  space, `None` or `"model"` if fetching from a model. Default is
  `None`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`Iterator[Discussion]`</rettype><retdesc>An iterator of [Discussion](/docs/huggingface_hub/main/ko/package_reference/community#huggingface_hub.Discussion) objects.</retdesc></docstring>

Fetches Discussions and Pull Requests for the given repo.







Example:
<ExampleCodeBlock anchor="huggingface_hub.HfApi.get_repo_discussions.example">

Collecting all discussions of a repo in a list:

```python
>>> from huggingface_hub import get_repo_discussions
>>> discussions_list = list(get_repo_discussions(repo_id="bert-base-uncased"))
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.HfApi.get_repo_discussions.example-2">

Iterating over discussions of a repo:

```python
>>> from huggingface_hub import get_repo_discussions
>>> for discussion in get_repo_discussions(repo_id="bert-base-uncased"):
...     print(discussion.num, discussion.title)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_safetensors_metadata</name><anchor>huggingface_hub.HfApi.get_safetensors_metadata</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L5414</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A user or an organization name and a repo name separated by a `/`.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if the file is in a dataset or space, `None` or `"model"` if in a
  model. Default is `None`.
- **revision** (`str`, *optional*) --
  The git revision to fetch the file from. Can be a branch name, a tag, or a commit hash. Defaults to the
  head of the `"main"` branch.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`SafetensorsRepoMetadata`</rettype><retdesc>information related to safetensors repo.</retdesc><raises>- `NotASafetensorsRepoError` -- 
  If the repo is not a safetensors repo i.e. doesn't have either a
  `model.safetensors` or a `model.safetensors.index.json` file.
- `SafetensorsParsingError` -- 
  If a safetensors file header couldn't be parsed correctly.</raises><raisederrors>`NotASafetensorsRepoError` or `SafetensorsParsingError`</raisederrors></docstring>

Parse metadata for a safetensors repo on the Hub.

We first check if the repo has a single safetensors file or a sharded safetensors repo. If it's a single
safetensors file, we parse the metadata from this file. If it's a sharded safetensors repo, we parse the
metadata from the index file and then parse the metadata from each shard.

To parse metadata from a single safetensors file, use [parse_safetensors_file_metadata()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.parse_safetensors_file_metadata).

For more details regarding the safetensors format, check out https://huggingface.co/docs/safetensors/index#format.











<ExampleCodeBlock anchor="huggingface_hub.HfApi.get_safetensors_metadata.example">

Example:
```py
# Parse repo with single weights file
>>> metadata = get_safetensors_metadata("bigscience/bloomz-560m")
>>> metadata
SafetensorsRepoMetadata(
    metadata=None,
    sharded=False,
    weight_map={'h.0.input_layernorm.bias': 'model.safetensors', ...},
    files_metadata={'model.safetensors': SafetensorsFileMetadata(...)}
)
>>> metadata.files_metadata["model.safetensors"].metadata
{'format': 'pt'}

# Parse repo with sharded model
>>> metadata = get_safetensors_metadata("bigscience/bloom")
Parse safetensors files: 100%|██████████████████████████████████████████| 72/72 [00:12<00:00,  5.78it/s]
>>> metadata
SafetensorsRepoMetadata(metadata={'total_size': 352494542848}, sharded=True, weight_map={...}, files_metadata={...})
>>> len(metadata.files_metadata)
72  # All safetensors files have been fetched

# Parse repo with sharded model
>>> get_safetensors_metadata("runwayml/stable-diffusion-v1-5")
NotASafetensorsRepoError: 'runwayml/stable-diffusion-v1-5' is not a safetensors repo. Couldn't find 'model.safetensors.index.json' or 'model.safetensors' files.
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_space_runtime</name><anchor>huggingface_hub.HfApi.get_space_runtime</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6854</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  ID of the repo to query. Example: `"bigcode/in-the-stack"`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[SpaceRuntime](/docs/huggingface_hub/main/ko/package_reference/space_runtime#huggingface_hub.SpaceRuntime)</rettype><retdesc>Runtime information about a Space including Space stage and hardware.</retdesc></docstring>
Gets runtime information about a Space.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_space_variables</name><anchor>huggingface_hub.HfApi.get_space_variables</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6761</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  ID of the repo to query. Example: `"bigcode/in-the-stack"`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Gets all variables from a Space.

Variables allow you to set environment variables for a Space without hardcoding them.
For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets-and-environment-variables




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_user_overview</name><anchor>huggingface_hub.HfApi.get_user_overview</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L9473</source><parameters>[{"name": "username", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **username** (`str`) --
  Username of the user to get an overview of.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`User`</rettype><retdesc>A [User](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.User) object with the user's overview.</retdesc><raises>- `HfHubHTTPError` -- 
  HTTP 404 If the user does not exist on the Hub.</raises><raisederrors>`HfHubHTTPError`</raisederrors></docstring>

Get an overview of a user on the Hub.












</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_webhook</name><anchor>huggingface_hub.HfApi.get_webhook</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L8853</source><parameters>[{"name": "webhook_id", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **webhook_id** (`str`) --
  The unique identifier of the webhook to get.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved token, which is the recommended
  method for authentication (see https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`WebhookInfo`</rettype><retdesc>Info about the webhook.</retdesc></docstring>
Get a webhook by its id.







<ExampleCodeBlock anchor="huggingface_hub.HfApi.get_webhook.example">

Example:
```python
>>> from huggingface_hub import get_webhook
>>> webhook = get_webhook("654bbbc16f2ec14d77f109cc")
>>> print(webhook)
WebhookInfo(
    id="654bbbc16f2ec14d77f109cc",
    job=None,
    watched=[WebhookWatchedItem(type="user", name="julien-c"), WebhookWatchedItem(type="org", name="HuggingFaceH4")],
    url="https://webhook.site/a2176e82-5720-43ee-9e06-f91cb4c91548",
    secret="my-secret",
    domains=["repo", "discussion"],
    disabled=False,
)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>grant_access</name><anchor>huggingface_hub.HfApi.grant_access</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L8798</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "user", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The id of the repo to grant access to.
- **user** (`str`) --
  The username of the user to grant access.
- **repo_type** (`str`, *optional*) --
  The type of the repo to grant access to. Must be one of `model`, `dataset` or `space`.
  Defaults to `model`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><raises>- `HfHubHTTPError` -- 
  HTTP 400 if the repo is not gated.
- `HfHubHTTPError` -- 
  HTTP 400 if the user already has access to the repo.
- `HfHubHTTPError` -- 
  HTTP 403 if you only have read-only access to the repo. This can be the case if you don't have `write`
  or `admin` role in the organization the repo belongs to or if you passed a `read` token.
- `HfHubHTTPError` -- 
  HTTP 404 if the user does not exist on the Hub.</raises><raisederrors>`HfHubHTTPError`</raisederrors></docstring>

Grant access to a user for a given gated repo.

Granting access does not require the user to send an access request themselves. The user is automatically
added to the accepted list, meaning they can download the repo's files. You can revoke the granted access at any time
using [cancel_access_request()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.cancel_access_request) or [reject_access_request()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.reject_access_request).

For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>hf_hub_download</name><anchor>huggingface_hub.HfApi.hf_hub_download</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L5165</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "filename", "val": ": str"}, {"name": "subfolder", "val": ": Optional[str] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "cache_dir", "val": ": Union[str, Path, None] = None"}, {"name": "local_dir", "val": ": Union[str, Path, None] = None"}, {"name": "force_download", "val": ": bool = False"}, {"name": "etag_timeout", "val": ": float = 10"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "local_files_only", "val": ": bool = False"}, {"name": "dry_run", "val": ": bool = False"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A user or an organization name and a repo name separated by a `/`.
- **filename** (`str`) --
  The name of the file in the repo.
- **subfolder** (`str`, *optional*) --
  An optional value corresponding to a folder inside the repository.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if downloading from a dataset or space,
  `None` or `"model"` if downloading from a model. Default is `None`.
- **revision** (`str`, *optional*) --
  An optional Git revision id which can be a branch name, a tag, or a
  commit hash.
- **cache_dir** (`str`, `Path`, *optional*) --
  Path to the folder where cached files are stored.
- **local_dir** (`str` or `Path`, *optional*) --
  If provided, the downloaded file will be placed under this directory.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether the file should be downloaded even if it already exists in
  the local cache.
- **etag_timeout** (`float`, *optional*, defaults to `10`) --
  When fetching the ETag, how many seconds to wait for the server to send
  data before giving up. This value is passed to `httpx.request`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  If `True`, avoid downloading the file and return the path to the
  local cached file if it exists.
- **dry_run** (`bool`, *optional*, defaults to `False`) --
  If `True`, perform a dry run without actually downloading the file. Returns a
  `DryRunFileInfo` object containing information about what would be downloaded.</paramsdesc><paramgroups>0</paramgroups><rettype>`str` or `DryRunFileInfo`</rettype><retdesc>- If `dry_run=False`: Local path of file or if networking is off, last version of file cached on disk.
- If `dry_run=True`: A `DryRunFileInfo` object containing download information.</retdesc><raises>- [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  If the repository to download from cannot be found. This may be because it doesn't exist,
  or because it is set to `private` and you do not have access.
- [RevisionNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError) -- 
  If the revision to download from cannot be found.
- `~utils.RemoteEntryNotFoundError` -- 
  If the file to download cannot be found.
- [LocalEntryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.LocalEntryNotFoundError) -- 
  If network is disabled or unavailable and file is not found in cache.
- [`EnvironmentError`](https://docs.python.org/3/library/exceptions.html#EnvironmentError) -- 
  If `token=True` but the token cannot be found.
- [`OSError`](https://docs.python.org/3/library/exceptions.html#OSError) -- 
  If ETag cannot be determined.
- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If some parameter value is invalid.</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) or [RevisionNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError) or `~utils.RemoteEntryNotFoundError` or [LocalEntryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.LocalEntryNotFoundError) or ``EnvironmentError`` or ``OSError`` or ``ValueError``</raisederrors></docstring>
Download a given file if it's not already present in the local cache.

The new cache file layout looks like this:
- The cache directory contains one subfolder per repo_id (namespaced by repo type)
- inside each repo folder:
  - refs is a list of the latest known revision => commit_hash pairs
  - blobs contains the actual file blobs (identified by their git-sha or sha256, depending on
  whether they're LFS files or not)
  - snapshots contains one subfolder per commit, each "commit" contains the subset of the files
  that have been resolved at that particular commit. Each filename is a symlink to the blob
  at that particular commit.

<ExampleCodeBlock anchor="huggingface_hub.HfApi.hf_hub_download.example">

```
[  96]  .
└── [ 160]  models--julien-c--EsperBERTo-small
    ├── [ 160]  blobs
    │   ├── [321M]  403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd
    │   ├── [ 398]  7cb18dc9bafbfcf74629a4b760af1b160957a83e
    │   └── [1.4K]  d7edf6bd2a681fb0175f7735299831ee1b22b812
    ├── [  96]  refs
    │   └── [  40]  main
    └── [ 128]  snapshots
        ├── [ 128]  2439f60ef33a0d46d85da5001d52aeda5b00ce9f
        │   ├── [  52]  README.md -> ../../blobs/d7edf6bd2a681fb0175f7735299831ee1b22b812
        │   └── [  76]  pytorch_model.bin -> ../../blobs/403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd
        └── [ 128]  bbc77c8132af1cc5cf678da3f1ddf2de43606d48
            ├── [  52]  README.md -> ../../blobs/7cb18dc9bafbfcf74629a4b760af1b160957a83e
            └── [  76]  pytorch_model.bin -> ../../blobs/403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd
```

</ExampleCodeBlock>

If `local_dir` is provided, the file structure from the repo will be replicated in this location. When using this
option, the `cache_dir` will not be used and a `.cache/huggingface/` folder will be created at the root of `local_dir`
to store some metadata related to the downloaded files. While this mechanism is not as robust as the main
cache-system, it's optimized for regularly pulling the latest version of a repository.












</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>hide_discussion_comment</name><anchor>huggingface_hub.HfApi.hide_discussion_comment</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6635</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "discussion_num", "val": ": int"}, {"name": "comment_id", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **discussion_num** (`int`) --
  The number of the Discussion or Pull Request. Must be a strictly positive integer.
- **comment_id** (`str`) --
  The ID of the comment to hide.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if uploading to a dataset or
  space, `None` or `"model"` if uploading to a model. Default is
  `None`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[DiscussionComment](/docs/huggingface_hub/main/ko/package_reference/community#huggingface_hub.DiscussionComment)</rettype><retdesc>the hidden comment</retdesc></docstring>
Hides a comment on a Discussion / Pull Request.

> [!WARNING]
> Hidden comments' content cannot be retrieved anymore. Hiding a comment is irreversible.







> [!TIP]
> Raises the following errors:
>
>     - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError)
>       if the HuggingFace API returned an error
>     - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
>       if some parameter value is invalid
>     - [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)
>       If the repository to download from cannot be found. This may be because it doesn't exist,
>       or because it is set to `private` and you do not have access.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>inspect_job</name><anchor>huggingface_hub.HfApi.inspect_job</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L9975</source><parameters>[{"name": "job_id", "val": ": str"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **job_id** (`str`) --
  ID of the Job.

- **namespace** (`str`, *optional*) --
  The namespace where the Job is running. Defaults to the current user's namespace.

- **token** `(Union[bool, str, None]`, *optional*) --
  A valid user access token. If not provided, the locally saved token will be used, which is the
  recommended authentication method. Set to `False` to disable authentication.
  Refer to: https://huggingface.co/docs/huggingface_hub/quick-start#authentication.</paramsdesc><paramgroups>0</paramgroups></docstring>

Inspect a compute Job on Hugging Face infrastructure.



<ExampleCodeBlock anchor="huggingface_hub.HfApi.inspect_job.example">

Example:

```python
>>> from huggingface_hub import inspect_job, run_job
>>> job = run_job(image="python:3.12", command=["python", "-c" ,"print('Hello from HF compute!')"])
>>> inspect_job(job.id)
JobInfo(
    id='68780d00bbe36d38803f645f',
    created_at=datetime.datetime(2025, 7, 16, 20, 35, 12, 808000, tzinfo=datetime.timezone.utc),
    docker_image='python:3.12',
    space_id=None,
    command=['python', '-c', "print('Hello from HF compute!')"],
    arguments=[],
    environment={},
    secrets={},
    flavor='cpu-basic',
    status=JobStatus(stage='RUNNING', message=None)
)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>inspect_scheduled_job</name><anchor>huggingface_hub.HfApi.inspect_scheduled_job</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L10315</source><parameters>[{"name": "scheduled_job_id", "val": ": str"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **scheduled_job_id** (`str`) --
  ID of the scheduled Job.

- **namespace** (`str`, *optional*) --
  The namespace where the scheduled Job is. Defaults to the current user's namespace.

- **token** `(Union[bool, str, None]`, *optional*) --
  A valid user access token. If not provided, the locally saved token will be used, which is the
  recommended authentication method. Set to `False` to disable authentication.
  Refer to: https://huggingface.co/docs/huggingface_hub/quick-start#authentication.</paramsdesc><paramgroups>0</paramgroups></docstring>

Inspect a scheduled compute Job on Hugging Face infrastructure.



<ExampleCodeBlock anchor="huggingface_hub.HfApi.inspect_scheduled_job.example">

Example:

```python
>>> from huggingface_hub import inspect_job, create_scheduled_job
>>> scheduled_job = create_scheduled_job(image="python:3.12", command=["python", "-c" ,"print('Hello from HF compute!')"], schedule="@hourly")
>>> inspect_scheduled_job(scheduled_job.id)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_accepted_access_requests</name><anchor>huggingface_hub.HfApi.list_accepted_access_requests</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L8482</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The id of the repo to get access requests for.
- **repo_type** (`str`, *optional*) --
  The type of the repo to get access requests for. Must be one of `model`, `dataset` or `space`.
  Defaults to `model`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[AccessRequest]`</rettype><retdesc>A list of `AccessRequest` objects. Each item contains a `username`, `email`,
`status` and `timestamp` attribute. If the gated repo has a custom form, the `fields` attribute will
be populated with the user's answers.</retdesc><raises>- `HfHubHTTPError` -- 
  HTTP 400 if the repo is not gated.
- `HfHubHTTPError` -- 
  HTTP 403 if you only have read-only access to the repo. This can be the case if you don't have `write`
  or `admin` role in the organization the repo belongs to or if you passed a `read` token.</raises><raisederrors>`HfHubHTTPError`</raisederrors></docstring>

Get accepted access requests for a given gated repo.

An accepted request means the user has requested access to the repo and the request has been accepted. The user
can download any file of the repo. If the approval mode is automatic, this list should by default contain all
requests. Accepted requests can be cancelled or rejected at any time using [cancel_access_request()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.cancel_access_request) and
[reject_access_request()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.reject_access_request). A cancelled request will go back to the pending list while a rejected request will
go to the rejected list. In both cases, the user will lose access to the repo.

For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.











<ExampleCodeBlock anchor="huggingface_hub.HfApi.list_accepted_access_requests.example">

Example:
```py
>>> from huggingface_hub import list_accepted_access_requests

>>> requests = list_accepted_access_requests("meta-llama/Llama-2-7b")
>>> len(requests)
411
>>> requests[0]
[
    AccessRequest(
        username='clem',
        fullname='Clem 🤗',
        email='***',
        timestamp=datetime.datetime(2023, 11, 23, 18, 4, 53, 828000, tzinfo=datetime.timezone.utc),
        status='accepted',
        fields=None,
    ),
    ...
]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_collections</name><anchor>huggingface_hub.HfApi.list_collections</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L7958</source><parameters>[{"name": "owner", "val": ": Union[list[str], str, None] = None"}, {"name": "item", "val": ": Union[list[str], str, None] = None"}, {"name": "sort", "val": ": Optional[Literal['lastModified', 'trending', 'upvotes']] = None"}, {"name": "limit", "val": ": Optional[int] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **owner** (`list[str]` or `str`, *optional*) --
  Filter by owner's username.
- **item** (`list[str]` or `str`, *optional*) --
  Filter collections containing a particular item. Example: `"models/teknium/OpenHermes-2.5-Mistral-7B"`, `"datasets/squad"` or `"papers/2311.12983"`.
- **sort** (`Literal["lastModified", "trending", "upvotes"]`, *optional*) --
  Sort collections by last modified, trending or upvotes.
- **limit** (`int`, *optional*) --
  Maximum number of collections to be returned.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`Iterable[Collection]`</rettype><retdesc>an iterable of [Collection](/docs/huggingface_hub/main/ko/package_reference/collections#huggingface_hub.Collection) objects.</retdesc></docstring>
List collections on the Hugging Face Hub, given some filters.

> [!WARNING]
> When listing collections, the item list per collection is truncated to 4 items maximum. To retrieve all items
> from a collection, you must use [get_collection()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.get_collection).
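For instance, a minimal sketch combining both calls (the owner name and filters below are illustrative):

```python
>>> from huggingface_hub import HfApi

>>> api = HfApi()

# List the 5 most upvoted collections of a given user
>>> collections = api.list_collections(owner="teknium", sort="upvotes", limit=5)

# Item lists are truncated to 4 entries; fetch each full collection by its slug
>>> for collection in collections:
...     full_collection = api.get_collection(collection.slug)
```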








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_datasets</name><anchor>huggingface_hub.HfApi.list_datasets</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L1990</source><parameters>[{"name": "filter", "val": ": Union[str, Iterable[str], None] = None"}, {"name": "author", "val": ": Optional[str] = None"}, {"name": "benchmark", "val": ": Optional[Union[str, list[str]]] = None"}, {"name": "dataset_name", "val": ": Optional[str] = None"}, {"name": "gated", "val": ": Optional[bool] = None"}, {"name": "language_creators", "val": ": Optional[Union[str, list[str]]] = None"}, {"name": "language", "val": ": Optional[Union[str, list[str]]] = None"}, {"name": "multilinguality", "val": ": Optional[Union[str, list[str]]] = None"}, {"name": "size_categories", "val": ": Optional[Union[str, list[str]]] = None"}, {"name": "task_categories", "val": ": Optional[Union[str, list[str]]] = None"}, {"name": "task_ids", "val": ": Optional[Union[str, list[str]]] = None"}, {"name": "search", "val": ": Optional[str] = None"}, {"name": "sort", "val": ": Optional[Union[Literal['last_modified'], str]] = None"}, {"name": "direction", "val": ": Optional[Literal[-1]] = None"}, {"name": "limit", "val": ": Optional[int] = None"}, {"name": "expand", "val": ": Optional[list[ExpandDatasetProperty_T]] = None"}, {"name": "full", "val": ": Optional[bool] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "tags", "val": ": Optional[Union[str, list[str]]] = None"}]</parameters><paramsdesc>- **filter** (`str` or `Iterable[str]`, *optional*) --
  A string or list of strings to filter datasets on the Hub.
- **author** (`str`, *optional*) --
  A string that identifies the author of the returned datasets.
- **benchmark** (`str` or `List`, *optional*) --
  A string or list of strings that can be used to identify datasets on
  the Hub by their official benchmark.
- **dataset_name** (`str`, *optional*) --
  A string that can be used to identify datasets on
  the Hub by name, such as `SQAC` or `wikineural`.
- **gated** (`bool`, *optional*) --
  A boolean to filter datasets on the Hub that are gated or not. By default, all datasets are returned.
  If `gated=True` is passed, only gated datasets are returned.
  If `gated=False` is passed, only non-gated datasets are returned.
- **language_creators** (`str` or `List`, *optional*) --
  A string or list of strings that can be used to identify datasets on
  the Hub with how the data was curated, such as `crowdsourced` or
  `machine_generated`.
- **language** (`str` or `List`, *optional*) --
  A string or list of strings representing a two-character language to
  filter datasets by on the Hub.
- **multilinguality** (`str` or `List`, *optional*) --
  A string or list of strings representing a filter for datasets that
  contain multiple languages.
- **size_categories** (`str` or `List`, *optional*) --
  A string or list of strings that can be used to identify datasets on
  the Hub by the size of the dataset such as `100K<n<1M` or
  `1M<n<10M`.
- **tags** (`str` or `List`, *optional*) --
  Deprecated. Pass tags in `filter` to filter datasets by tags.
- **task_categories** (`str` or `List`, *optional*) --
  A string or list of strings that can be used to identify datasets on
  the Hub by the designed task, such as `audio_classification` or
  `named_entity_recognition`.
- **task_ids** (`str` or `List`, *optional*) --
  A string or list of strings that can be used to identify datasets on
  the Hub by the specific task such as `speech_emotion_recognition` or
  `paraphrase`.
- **search** (`str`, *optional*) --
  A string that will be contained in the returned datasets.
- **sort** (`Literal["last_modified"]` or `str`, *optional*) --
  The key with which to sort the resulting datasets. Possible values are "last_modified", "trending_score",
  "created_at", "downloads" and "likes".
- **direction** (`Literal[-1]` or `int`, *optional*) --
  Direction in which to sort. The value `-1` sorts by descending
  order while all other values sort by ascending order.
- **limit** (`int`, *optional*) --
  The limit on the number of datasets fetched. Leaving this option
  to `None` fetches all datasets.
- **expand** (`list[ExpandDatasetProperty_T]`, *optional*) --
  List properties to return in the response. When used, only the properties in the list will be returned.
  This parameter cannot be used if `full` is passed.
  Possible values are `"author"`, `"cardData"`, `"citation"`, `"createdAt"`, `"disabled"`, `"description"`, `"downloads"`, `"downloadsAllTime"`, `"gated"`, `"lastModified"`, `"likes"`, `"paperswithcode_id"`, `"private"`, `"siblings"`, `"sha"`, `"tags"`, `"trendingScore"`, `"usedStorage"`, and `"resourceGroup"`.
- **full** (`bool`, *optional*) --
  Whether to fetch all dataset data, including the `last_modified`,
  the `card_data` and  the files. Can contain useful information such as the
  PapersWithCode ID.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`Iterable[DatasetInfo]`</rettype><retdesc>an iterable of [huggingface_hub.hf_api.DatasetInfo](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.DatasetInfo) objects.</retdesc></docstring>

List datasets hosted on the Hugging Face Hub, given some filters.







<ExampleCodeBlock anchor="huggingface_hub.HfApi.list_datasets.example">

Example usage with the `filter` argument:

```python
>>> from huggingface_hub import HfApi

>>> api = HfApi()

# List all datasets
>>> api.list_datasets()


# List only the text classification datasets
>>> api.list_datasets(filter="task_categories:text-classification")


# List only the datasets in russian for language modeling
>>> api.list_datasets(
...     filter=("language:ru", "task_ids:language-modeling")
... )

# List FiftyOne datasets (identified by the tag "fiftyone" in dataset card)
>>> api.list_datasets(tags="fiftyone")
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.HfApi.list_datasets.example-2">

Example usage with the `search` argument:

```python
>>> from huggingface_hub import HfApi

>>> api = HfApi()

# List all datasets with "text" in their name
>>> api.list_datasets(search="text")

# List all datasets with "text" in their name made by google
>>> api.list_datasets(search="text", author="google")
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_inference_catalog</name><anchor>huggingface_hub.HfApi.list_inference_catalog</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L7586</source><parameters>[{"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).</paramsdesc><paramgroups>0</paramgroups><rettype>`list[str]`</rettype><retdesc>A list of model IDs available in the catalog.</retdesc></docstring>
List models available in the Hugging Face Inference Catalog.

The goal of the Inference Catalog is to provide a curated list of models that are optimized for inference
and for which default configurations have been tested. See https://endpoints.huggingface.co/catalog for a list
of available models in the catalog.

Use [create_inference_endpoint_from_catalog()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_inference_endpoint_from_catalog) to deploy a model from the catalog.







> [!WARNING]
> `list_inference_catalog` is experimental. Its API is subject to change in the future. Please provide feedback
> if you have any suggestions or requests.
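A minimal sketch of listing the catalog and deploying one of its entries, assuming the first catalog entry can be deployed with default settings (the catalog content itself changes over time):

```python
>>> from huggingface_hub import HfApi

>>> api = HfApi()

# List model IDs available in the Inference Catalog
>>> model_ids = api.list_inference_catalog()

# Deploy the first catalog model as an Inference Endpoint
>>> endpoint = api.create_inference_endpoint_from_catalog(model_ids[0])
```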


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_inference_endpoints</name><anchor>huggingface_hub.HfApi.list_inference_endpoints</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L7247</source><parameters>[{"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **namespace** (`str`, *optional*) --
  The namespace to list endpoints for. Defaults to the current user. Set to `"*"` to list all endpoints
  from all namespaces (i.e. personal namespace and all orgs the user belongs to).
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>list[InferenceEndpoint](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint)</rettype><retdesc>A list of all inference endpoints for the given namespace.</retdesc></docstring>
Lists all inference endpoints for the given namespace.







<ExampleCodeBlock anchor="huggingface_hub.HfApi.list_inference_endpoints.example">

Example:
```python
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.list_inference_endpoints()
[InferenceEndpoint(name='my-endpoint', ...), ...]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_jobs</name><anchor>huggingface_hub.HfApi.list_jobs</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L9943</source><parameters>[{"name": "timeout", "val": ": Optional[int] = None"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **timeout** (`float`, *optional*) --
  Timeout in seconds for the request to the Hub.

- **namespace** (`str`, *optional*) --
  The namespace from which to list Jobs. Defaults to the current user's namespace.

- **token** (`Union[bool, str, None]`, *optional*) --
  A valid user access token. If not provided, the locally saved token will be used, which is the
  recommended authentication method. Set to `False` to disable authentication.
  Refer to: https://huggingface.co/docs/huggingface_hub/quick-start#authentication.</paramsdesc><paramgroups>0</paramgroups></docstring>

List compute Jobs on Hugging Face infrastructure.
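
For instance (the organization name below is illustrative):

```python
>>> from huggingface_hub import HfApi

>>> api = HfApi()

# List Jobs in the current user's namespace
>>> jobs = api.list_jobs()

# Or list Jobs from a specific organization
>>> org_jobs = api.list_jobs(namespace="my-org-name")
```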




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_lfs_files</name><anchor>huggingface_hub.HfApi.list_lfs_files</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L3398</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The repository for which you are listing LFS files.
- **repo_type** (`str`, *optional*) --
  Type of repository. Set to `"dataset"` or `"space"` if listing from a dataset or space, `None` or
  `"model"` if listing from a model. Default is `None`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`Iterable[LFSFileInfo]`</rettype><retdesc>An iterator of `LFSFileInfo` objects.</retdesc></docstring>

List all LFS files in a repo on the Hub.

This is primarily useful to count how much storage a repo is using and to eventually clean up large files
with [permanently_delete_lfs_files()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.permanently_delete_lfs_files). Note that this is a permanent action that affects all commits
referencing the deleted files and cannot be undone.







<ExampleCodeBlock anchor="huggingface_hub.HfApi.list_lfs_files.example">

Example:
```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> lfs_files = api.list_lfs_files("username/my-cool-repo")

# Filter files to delete based on a combination of `filename`, `pushed_at`, `ref` or `size`.
# e.g. select only LFS files in the "checkpoints" folder
>>> lfs_files_to_delete = (lfs_file for lfs_file in lfs_files if lfs_file.filename.startswith("checkpoints/"))

# Permanently delete LFS files
>>> api.permanently_delete_lfs_files("username/my-cool-repo", lfs_files_to_delete)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_liked_repos</name><anchor>huggingface_hub.HfApi.list_liked_repos</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L2364</source><parameters>[{"name": "user", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **user** (`str`, *optional*) --
  Name of the user for which you want to fetch the likes.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[UserLikes](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.UserLikes)</rettype><retdesc>object containing the user name and 3 lists of repo ids (1 for
models, 1 for datasets and 1 for Spaces).</retdesc><raises>- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If `user` is not passed and no token found (either from argument or from machine).</raises><raisederrors>``ValueError``</raisederrors></docstring>

List all public repos liked by a user on huggingface.co.

This list is public, so a token is optional. If `user` is not passed, it defaults to
the logged-in user.

See also [unlike()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.unlike).











<ExampleCodeBlock anchor="huggingface_hub.HfApi.list_liked_repos.example">

Example:
```python
>>> from huggingface_hub import list_liked_repos

>>> likes = list_liked_repos("julien-c")

>>> likes.user
"julien-c"

>>> likes.models
["osanseviero/streamlit_1.15", "Xhaheen/ChatGPT_HF", ...]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_models</name><anchor>huggingface_hub.HfApi.list_models</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L1790</source><parameters>[{"name": "filter", "val": ": Union[str, Iterable[str], None] = None"}, {"name": "author", "val": ": Optional[str] = None"}, {"name": "apps", "val": ": Optional[Union[str, list[str]]] = None"}, {"name": "gated", "val": ": Optional[bool] = None"}, {"name": "inference", "val": ": Optional[Literal['warm']] = None"}, {"name": "inference_provider", "val": ": Optional[Union[Literal['all'], 'PROVIDER_T', list['PROVIDER_T']]] = None"}, {"name": "model_name", "val": ": Optional[str] = None"}, {"name": "trained_dataset", "val": ": Optional[Union[str, list[str]]] = None"}, {"name": "search", "val": ": Optional[str] = None"}, {"name": "pipeline_tag", "val": ": Optional[str] = None"}, {"name": "emissions_thresholds", "val": ": Optional[tuple[float, float]] = None"}, {"name": "sort", "val": ": Union[Literal['last_modified'], str, None] = None"}, {"name": "direction", "val": ": Optional[Literal[-1]] = None"}, {"name": "limit", "val": ": Optional[int] = None"}, {"name": "expand", "val": ": Optional[list[ExpandModelProperty_T]] = None"}, {"name": "full", "val": ": Optional[bool] = None"}, {"name": "cardData", "val": ": bool = False"}, {"name": "fetch_config", "val": ": bool = False"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **filter** (`str` or `Iterable[str]`, *optional*) --
  A string or list of strings to filter models on the Hub.
  Models can be filtered by library, language, task, tags, and more.
- **author** (`str`, *optional*) --
  A string that identifies the author (user or organization) of the
  returned models.
- **apps** (`str` or `List`, *optional*) --
  A string or list of strings to filter models on the Hub that
  support the specified apps. Example values include `"ollama"` or `["ollama", "vllm"]`.
- **gated** (`bool`, *optional*) --
  A boolean to filter models on the Hub that are gated or not. By default, all models are returned.
  If `gated=True` is passed, only gated models are returned.
  If `gated=False` is passed, only non-gated models are returned.
- **inference** (`Literal["warm"]`, *optional*) --
  If "warm", filter models on the Hub currently served by at least one provider.
- **inference_provider** (`Literal["all"]` or `str`, *optional*) --
  A string to filter models on the Hub that are served by a specific provider.
  Pass `"all"` to get all models served by at least one provider.
- **model_name** (`str`, *optional*) --
  A string that contains complete or partial names for models on the
  Hub, such as "bert" or "bert-base-cased".
- **trained_dataset** (`str` or `List`, *optional*) --
  A string tag or a list of string tags of the trained dataset for a
  model on the Hub.
- **search** (`str`, *optional*) --
  A string that will be contained in the returned model ids.
- **pipeline_tag** (`str`, *optional*) --
  A string pipeline tag to filter models on the Hub by, such as `summarization`.
- **emissions_thresholds** (`Tuple`, *optional*) --
  A tuple of two ints or floats representing the minimum and maximum
  carbon footprint, in grams, used to filter the resulting models.
- **sort** (`Literal["last_modified"]` or `str`, *optional*) --
  The key with which to sort the resulting models. Possible values are "last_modified", "trending_score",
  "created_at", "downloads" and "likes".
- **direction** (`Literal[-1]` or `int`, *optional*) --
  Direction in which to sort. The value `-1` sorts by descending
  order while all other values sort by ascending order.
- **limit** (`int`, *optional*) --
  The limit on the number of models fetched. Leaving this option
  to `None` fetches all models.
- **expand** (`list[ExpandModelProperty_T]`, *optional*) --
  List properties to return in the response. When used, only the properties in the list will be returned.
  This parameter cannot be used if `full`, `cardData` or `fetch_config` are passed.
  Possible values are `"author"`, `"cardData"`, `"config"`, `"createdAt"`, `"disabled"`, `"downloads"`, `"downloadsAllTime"`, `"gated"`, `"gguf"`, `"inference"`, `"inferenceProviderMapping"`, `"lastModified"`, `"library_name"`, `"likes"`, `"mask_token"`, `"model-index"`, `"pipeline_tag"`, `"private"`, `"safetensors"`, `"sha"`, `"siblings"`, `"spaces"`, `"tags"`, `"transformersInfo"`, `"trendingScore"`, `"widgetData"`, and `"resourceGroup"`.
- **full** (`bool`, *optional*) --
  Whether to fetch all model data, including the `last_modified`,
  the `sha`, the files and the `tags`. This is set to `True` by
  default when using a filter.
- **cardData** (`bool`, *optional*) --
  Whether to grab the metadata for the model as well. Can contain
  useful information such as carbon emissions, metrics, and
  datasets trained on.
- **fetch_config** (`bool`, *optional*) --
  Whether to fetch the model configs as well. This is not included
  in `full` due to its size.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`Iterable[ModelInfo]`</rettype><retdesc>an iterable of [huggingface_hub.hf_api.ModelInfo](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.ModelInfo) objects.</retdesc></docstring>

List models hosted on the Hugging Face Hub, given some filters.







<ExampleCodeBlock anchor="huggingface_hub.HfApi.list_models.example">

Example:

```python
>>> from huggingface_hub import HfApi

>>> api = HfApi()

# List all models
>>> api.list_models()

# List text classification models
>>> api.list_models(filter="text-classification")

# List models from the KerasHub library
>>> api.list_models(filter="keras-hub")

# List models served by Cohere
>>> api.list_models(inference_provider="cohere")

# List models with "bert" in their name
>>> api.list_models(search="bert")

# List models with "bert" in their name and pushed by google
>>> api.list_models(search="bert", author="google")
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_organization_followers</name><anchor>huggingface_hub.HfApi.list_organization_followers</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L9526</source><parameters>[{"name": "organization", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **organization** (`str`) --
  Name of the organization to get the followers of.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`Iterable[User]`</rettype><retdesc>A list of [User](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.User) objects with the followers of the organization.</retdesc><raises>- `HfHubHTTPError` -- 
  HTTP 404 If the organization does not exist on the Hub.</raises><raisederrors>`HfHubHTTPError`</raisederrors></docstring>

List followers of an organization on the Hub.
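
For instance, a minimal sketch (the organization name is illustrative, and the `username` attribute refers to the returned `User` objects):

```python
>>> from huggingface_hub import HfApi

>>> api = HfApi()

# Iterate over the followers of an organization
>>> for follower in api.list_organization_followers("huggingface"):
...     print(follower.username)
```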












</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_organization_members</name><anchor>huggingface_hub.HfApi.list_organization_members</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L9555</source><parameters>[{"name": "organization", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **organization** (`str`) --
  Name of the organization to get the members of.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`Iterable[User]`</rettype><retdesc>A list of [User](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.User) objects with the members of the organization.</retdesc><raises>- `HfHubHTTPError` -- 
  HTTP 404 If the organization does not exist on the Hub.</raises><raisederrors>`HfHubHTTPError`</raisederrors></docstring>

List members of an organization on the Hub.
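
For instance, a minimal sketch (the organization name is illustrative):

```python
>>> from huggingface_hub import HfApi

>>> api = HfApi()

# Collect the members of an organization into a list
>>> members = list(api.list_organization_members("huggingface"))
```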












</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_papers</name><anchor>huggingface_hub.HfApi.list_papers</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L9639</source><parameters>[{"name": "query", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **query** (`str`, *optional*) --
  A search query string to find papers.
  If provided, returns papers that match the query.
- **token** (Union[bool, str, None], *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`Iterable[PaperInfo]`</rettype><retdesc>an iterable of `huggingface_hub.hf_api.PaperInfo` objects.</retdesc></docstring>

List daily papers on the Hugging Face Hub given a search query.







<ExampleCodeBlock anchor="huggingface_hub.HfApi.list_papers.example">

Example:

```python
>>> from huggingface_hub import HfApi

>>> api = HfApi()

# List all papers with "attention" in their title
>>> api.list_papers(query="attention")
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_pending_access_requests</name><anchor>huggingface_hub.HfApi.list_pending_access_requests</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L8418</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The id of the repo to get access requests for.
- **repo_type** (`str`, *optional*) --
  The type of the repo to get access requests for. Must be one of `model`, `dataset` or `space`.
  Defaults to `model`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[AccessRequest]`</rettype><retdesc>A list of `AccessRequest` objects. Each one contains a `username`, `email`,
`status` and `timestamp` attribute. If the gated repo has a custom form, the `fields` attribute will
be populated with the user's answers.</retdesc><raises>- `HfHubHTTPError` -- 
  HTTP 400 if the repo is not gated.
- `HfHubHTTPError` -- 
  HTTP 403 if you only have read-only access to the repo. This can be the case if you don't have `write`
  or `admin` role in the organization the repo belongs to or if you passed a `read` token.</raises><raisederrors>`HfHubHTTPError`</raisederrors></docstring>

Get pending access requests for a given gated repo.

A pending request means the user has requested access to the repo but the request has not been processed yet.
If the approval mode is automatic, this list should be empty. Pending requests can be accepted or rejected
using [accept_access_request()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.accept_access_request) and [reject_access_request()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.reject_access_request).

For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.











<ExampleCodeBlock anchor="huggingface_hub.HfApi.list_pending_access_requests.example">

Example:
```py
>>> from huggingface_hub import list_pending_access_requests, accept_access_request

# List pending requests
>>> requests = list_pending_access_requests("meta-llama/Llama-2-7b")
>>> len(requests)
411
>>> requests[0]
AccessRequest(
    username='clem',
    fullname='Clem 🤗',
    email='***',
    timestamp=datetime.datetime(2023, 11, 23, 18, 4, 53, 828000, tzinfo=datetime.timezone.utc),
    status='pending',
    fields=None,
)

# Accept Clem's request
>>> accept_access_request("meta-llama/Llama-2-7b", "clem")
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_rejected_access_requests</name><anchor>huggingface_hub.HfApi.list_rejected_access_requests</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L8544</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The id of the repo to get access requests for.
- **repo_type** (`str`, *optional*) --
  The type of the repo to get access requests for. Must be one of `model`, `dataset` or `space`.
  Defaults to `model`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[AccessRequest]`</rettype><retdesc>A list of `AccessRequest` objects. Each request contains a `username`, `email`,
`status` and `timestamp` attribute. If the gated repo has a custom form, the `fields` attribute will
be populated with the user's answers.</retdesc><raises>- `HfHubHTTPError` -- 
  HTTP 400 if the repo is not gated.
- `HfHubHTTPError` -- 
  HTTP 403 if you only have read-only access to the repo. This can be the case if you don't have `write`
  or `admin` role in the organization the repo belongs to or if you passed a `read` token.</raises><raisederrors>`HfHubHTTPError`</raisederrors></docstring>

Get rejected access requests for a given gated repo.

A rejected request means the user has requested access to the repo and the request has been explicitly rejected
by a repo owner (either you or another user from your organization). The user cannot download any file of the
repo. Rejected requests can be accepted or cancelled at any time using [accept_access_request()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.accept_access_request) and
[cancel_access_request()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.cancel_access_request). A cancelled request will go back to the pending list while an accepted request will
go to the accepted list.

For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.











<ExampleCodeBlock anchor="huggingface_hub.HfApi.list_rejected_access_requests.example">

Example:
```py
>>> from huggingface_hub import list_rejected_access_requests

>>> requests = list_rejected_access_requests("meta-llama/Llama-2-7b")
>>> len(requests)
411
>>> requests[0]
AccessRequest(
    username='clem',
    fullname='Clem 🤗',
    email='***',
    timestamp=datetime.datetime(2023, 11, 23, 18, 4, 53, 828000, tzinfo=datetime.timezone.utc),
    status='rejected',
    fields=None,
)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_repo_commits</name><anchor>huggingface_hub.HfApi.list_repo_commits</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L3155</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "formatted", "val": ": bool = False"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated by a `/`.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if listing commits from a dataset or a Space, `None` or `"model"` if
  listing from a model. Default is `None`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **revision** (`str`, *optional*) --
  The git revision to list commits from. Defaults to the head of the `"main"` branch.
- **formatted** (`bool`) --
  Whether to return the HTML-formatted title and description of the commits. Defaults to False.</paramsdesc><paramgroups>0</paramgroups><rettype>list[[GitCommitInfo](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.GitCommitInfo)]</rettype><retdesc>list of objects containing information about the commits for a repo on the Hub.</retdesc><raises>- [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo
  does not exist.
- [RevisionNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError) -- 
  If revision is not found (error 404) on the repo.</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) or [RevisionNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError)</raisederrors></docstring>

Get the list of commits of a given revision for a repo on the Hub.

Commits are sorted by date (last commit first).



<ExampleCodeBlock anchor="huggingface_hub.HfApi.list_repo_commits.example">

Example:
```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()

# Commits are sorted by date (last commit first)
>>> initial_commit = api.list_repo_commits("gpt2")[-1]

# Initial commit is always a system commit containing the `.gitattributes` file.
>>> initial_commit
GitCommitInfo(
    commit_id='9b865efde13a30c13e0a33e536cf3e4a5a9d71d8',
    authors=['system'],
    created_at=datetime.datetime(2019, 2, 18, 10, 36, 15, tzinfo=datetime.timezone.utc),
    title='initial commit',
    message='',
    formatted_title=None,
    formatted_message=None
)

# Create an empty branch by deriving from initial commit
>>> api.create_branch("gpt2", "new_empty_branch", revision=initial_commit.commit_id)
```

</ExampleCodeBlock>










</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_repo_files</name><anchor>huggingface_hub.HfApi.list_repo_files</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L2914</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[str, bool, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated by a `/`.
- **revision** (`str`, *optional*) --
  The revision of the repository from which to get the information.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if listing files from a dataset or a Space, `None` or `"model"` if
  listing from a model. Default is `None`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[str]`</rettype><retdesc>the list of files in a given repository.</retdesc></docstring>

Get the list of files in a given repo.
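
Example (a minimal usage sketch; the file list shown is illustrative and truncated):

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.list_repo_files("gpt2")
['.gitattributes', 'README.md', 'config.json', 'merges.txt', 'vocab.json', ...]
```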








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_repo_likers</name><anchor>huggingface_hub.HfApi.list_repo_likers</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L2440</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The repository to retrieve likers from. Example: `"user/my-cool-model"`.

- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.

- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if listing likers from a dataset or
  a Space, `None` or `"model"` if listing from a model. Default is
  `None`.</paramsdesc><paramgroups>0</paramgroups><rettype>`Iterable[User]`</rettype><retdesc>an iterable of [huggingface_hub.hf_api.User](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.User) objects.</retdesc></docstring>

List all users who liked a given repo on the Hugging Face Hub.

See also [list_liked_repos()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.list_liked_repos).
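
Example (an illustrative sketch; the repo name is a placeholder):

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> likers = api.list_repo_likers("gpt2")
>>> for user in likers:
...     print(user.username)
```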








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_repo_refs</name><anchor>huggingface_hub.HfApi.list_repo_refs</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L3083</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "include_pull_requests", "val": ": bool = False"}, {"name": "token", "val": ": Union[str, bool, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if listing refs from a dataset or a Space,
  `None` or `"model"` if listing from a model. Default is `None`.
- **include_pull_requests** (`bool`, *optional*) --
  Whether to include refs from pull requests in the list. Defaults to `False`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[GitRefs](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.GitRefs)</rettype><retdesc>object containing all information about branches and tags for a
repo on the Hub.</retdesc></docstring>

Get the list of refs of a given repo (both tags and branches).



<ExampleCodeBlock anchor="huggingface_hub.HfApi.list_repo_refs.example">

Example:
```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.list_repo_refs("gpt2")
GitRefs(branches=[GitRefInfo(name='main', ref='refs/heads/main', target_commit='e7da7f221d5bf496a48136c0cd264e630fe9fcc8')], converts=[], tags=[])

>>> api.list_repo_refs("bigcode/the-stack", repo_type='dataset')
GitRefs(
    branches=[
        GitRefInfo(name='main', ref='refs/heads/main', target_commit='18edc1591d9ce72aa82f56c4431b3c969b210ae3'),
        GitRefInfo(name='v1.1.a1', ref='refs/heads/v1.1.a1', target_commit='f9826b862d1567f3822d3d25649b0d6d22ace714')
    ],
    converts=[],
    tags=[
        GitRefInfo(name='v1.0', ref='refs/tags/v1.0', target_commit='c37a8cd1e382064d8aced5e05543c5f7753834da')
    ]
)
```

</ExampleCodeBlock>






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_repo_tree</name><anchor>huggingface_hub.HfApi.list_repo_tree</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L2951</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "path_in_repo", "val": ": Optional[str] = None"}, {"name": "recursive", "val": ": bool = False"}, {"name": "expand", "val": ": bool = False"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[str, bool, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated by a `/`.
- **path_in_repo** (`str`, *optional*) --
  Relative path of the tree (folder) in the repo, for example:
  `"checkpoints/1fec34a/results"`. Will default to the root tree (folder) of the repository.
- **recursive** (`bool`, *optional*, defaults to `False`) --
  Whether to list tree's files and folders recursively.
- **expand** (`bool`, *optional*, defaults to `False`) --
  Whether to fetch more information about the tree's files and folders (e.g. last commit and files' security scan results). This
  operation is more expensive for the server so only 50 results are returned per page (instead of 1000).
  As pagination is implemented in `huggingface_hub`, this is transparent for you except for the time it
  takes to get the results.
- **revision** (`str`, *optional*) --
  The revision of the repository from which to get the tree. Defaults to `"main"` branch.
- **repo_type** (`str`, *optional*) --
  The type of the repository from which to get the tree (`"model"`, `"dataset"` or `"space"`).
  Defaults to `"model"`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`Iterable[Union[RepoFile, RepoFolder]]`</rettype><retdesc>The information about the tree's files and folders, as an iterable of `RepoFile` and `RepoFolder` objects. The order of the files and folders is
not guaranteed.</retdesc><raises>- [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo
  does not exist.
- [RevisionNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError) -- 
  If revision is not found (error 404) on the repo.
- `~utils.RemoteEntryNotFoundError` -- 
  If the tree (folder) does not exist (error 404) on the repo.</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) or [RevisionNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError) or `~utils.RemoteEntryNotFoundError`</raisederrors></docstring>

List a repo tree's files and folders and get information about them.











Examples:

Get information about a repo's tree.
<ExampleCodeBlock anchor="huggingface_hub.HfApi.list_repo_tree.example">

```py
>>> from huggingface_hub import list_repo_tree
>>> repo_tree = list_repo_tree("lysandre/arxiv-nlp")
>>> repo_tree
<generator object HfApi.list_repo_tree at 0x7fa4088e1ac0>
>>> list(repo_tree)
[
    RepoFile(path='.gitattributes', size=391, blob_id='ae8c63daedbd4206d7d40126955d4e6ab1c80f8f', lfs=None, last_commit=None, security=None),
    RepoFile(path='README.md', size=391, blob_id='43bd404b159de6fba7c2f4d3264347668d43af25', lfs=None, last_commit=None, security=None),
    RepoFile(path='config.json', size=554, blob_id='2f9618c3a19b9a61add74f70bfb121335aeef666', lfs=None, last_commit=None, security=None),
    RepoFile(
        path='flax_model.msgpack', size=497764107, blob_id='8095a62ccb4d806da7666fcda07467e2d150218e',
        lfs={'size': 497764107, 'sha256': 'd88b0d6a6ff9c3f8151f9d3228f57092aaea997f09af009eefd7373a77b5abb9', 'pointer_size': 134}, last_commit=None, security=None
    ),
    RepoFile(path='merges.txt', size=456318, blob_id='226b0752cac7789c48f0cb3ec53eda48b7be36cc', lfs=None, last_commit=None, security=None),
    RepoFile(
        path='pytorch_model.bin', size=548123560, blob_id='64eaa9c526867e404b68f2c5d66fd78e27026523',
        lfs={'size': 548123560, 'sha256': '9be78edb5b928eba33aa88f431551348f7466ba9f5ef3daf1d552398722a5436', 'pointer_size': 134}, last_commit=None, security=None
    ),
    RepoFile(path='vocab.json', size=898669, blob_id='b00361fece0387ca34b4b8b8539ed830d644dbeb', lfs=None, last_commit=None, security=None)
]
```

</ExampleCodeBlock>

Get even more information about a repo's tree (last commit and files' security scan results)
<ExampleCodeBlock anchor="huggingface_hub.HfApi.list_repo_tree.example-2">

```py
>>> from huggingface_hub import list_repo_tree
>>> repo_tree = list_repo_tree("prompthero/openjourney-v4", expand=True)
>>> list(repo_tree)
[
    RepoFolder(
        path='feature_extractor',
        tree_id='aa536c4ea18073388b5b0bc791057a7296a00398',
        last_commit={
            'oid': '47b62b20b20e06b9de610e840282b7e6c3d51190',
            'title': 'Upload diffusers weights (#48)',
            'date': datetime.datetime(2023, 3, 21, 9, 5, 27, tzinfo=datetime.timezone.utc)
        }
    ),
    RepoFolder(
        path='safety_checker',
        tree_id='65aef9d787e5557373fdf714d6c34d4fcdd70440',
        last_commit={
            'oid': '47b62b20b20e06b9de610e840282b7e6c3d51190',
            'title': 'Upload diffusers weights (#48)',
            'date': datetime.datetime(2023, 3, 21, 9, 5, 27, tzinfo=datetime.timezone.utc)
        }
    ),
    RepoFile(
        path='model_index.json',
        size=582,
        blob_id='d3d7c1e8c3e78eeb1640b8e2041ee256e24c9ee1',
        lfs=None,
        last_commit={
            'oid': 'b195ed2d503f3eb29637050a886d77bd81d35f0e',
            'title': 'Fix deprecation warning by changing `CLIPFeatureExtractor` to `CLIPImageProcessor`. (#54)',
            'date': datetime.datetime(2023, 5, 15, 21, 41, 59, tzinfo=datetime.timezone.utc)
        },
        security={
            'safe': True,
            'av_scan': {'virusFound': False, 'virusNames': None},
            'pickle_import_scan': None
        }
    ),
    ...
]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_scheduled_jobs</name><anchor>huggingface_hub.HfApi.list_scheduled_jobs</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L10283</source><parameters>[{"name": "timeout", "val": ": Optional[int] = None"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **timeout** (`float`, *optional*) --
  Timeout (in seconds) for the request to the Hub.

- **namespace** (`str`, *optional*) --
  The namespace from where it lists the jobs. Defaults to the current user's namespace.

- **token** `(Union[bool, str, None]`, *optional*) --
  A valid user access token. If not provided, the locally saved token will be used, which is the
  recommended authentication method. Set to `False` to disable authentication.
  Refer to: https://huggingface.co/docs/huggingface_hub/quick-start#authentication.</paramsdesc><paramgroups>0</paramgroups></docstring>

List scheduled compute Jobs on Hugging Face infrastructure.
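
Example (an illustrative sketch; the `id` attribute on the returned job objects is an assumption):

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> for scheduled_job in api.list_scheduled_jobs():
...     print(scheduled_job.id)
```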




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_spaces</name><anchor>huggingface_hub.HfApi.list_spaces</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L2203</source><parameters>[{"name": "filter", "val": ": Union[str, Iterable[str], None] = None"}, {"name": "author", "val": ": Optional[str] = None"}, {"name": "search", "val": ": Optional[str] = None"}, {"name": "datasets", "val": ": Union[str, Iterable[str], None] = None"}, {"name": "models", "val": ": Union[str, Iterable[str], None] = None"}, {"name": "linked", "val": ": bool = False"}, {"name": "sort", "val": ": Union[Literal['last_modified'], str, None] = None"}, {"name": "direction", "val": ": Optional[Literal[-1]] = None"}, {"name": "limit", "val": ": Optional[int] = None"}, {"name": "expand", "val": ": Optional[list[ExpandSpaceProperty_T]] = None"}, {"name": "full", "val": ": Optional[bool] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **filter** (`str` or `Iterable`, *optional*) --
  A string tag or list of tags that can be used to identify Spaces on the Hub.
- **author** (`str`, *optional*) --
  A string identifying the author of the returned Spaces.
- **search** (`str`, *optional*) --
  A string that will be contained in the returned Spaces.
- **datasets** (`str` or `Iterable`, *optional*) --
  Whether to return Spaces that make use of a dataset.
  The name of a specific dataset can be passed as a string.
- **models** (`str` or `Iterable`, *optional*) --
  Whether to return Spaces that make use of a model.
  The name of a specific model can be passed as a string.
- **linked** (`bool`, *optional*) --
  Whether to return Spaces that make use of either a model or a dataset.
- **sort** (`Literal["last_modified"]` or `str`, *optional*) --
  The key with which to sort the resulting Spaces. Possible values are "last_modified", "trending_score",
  "created_at" and "likes".
- **direction** (`Literal[-1]` or `int`, *optional*) --
  Direction in which to sort. The value `-1` sorts by descending
  order while all other values sort by ascending order.
- **limit** (`int`, *optional*) --
  The limit on the number of Spaces fetched. Leaving this option
  to `None` fetches all Spaces.
- **expand** (`list[ExpandSpaceProperty_T]`, *optional*) --
  List properties to return in the response. When used, only the properties in the list will be returned.
  This parameter cannot be used if `full` is passed.
  Possible values are `"author"`, `"cardData"`, `"datasets"`, `"disabled"`, `"lastModified"`, `"createdAt"`, `"likes"`, `"models"`, `"private"`, `"runtime"`, `"sdk"`, `"siblings"`, `"sha"`, `"subdomain"`, `"tags"`, `"trendingScore"`, `"usedStorage"`, and `"resourceGroup"`.
- **full** (`bool`, *optional*) --
  Whether to fetch all Spaces data, including the `last_modified`, `siblings`
  and `card_data` fields.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`Iterable[SpaceInfo]`</rettype><retdesc>an iterable of [huggingface_hub.hf_api.SpaceInfo](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.SpaceInfo) objects.</retdesc></docstring>

List Spaces hosted on the Hugging Face Hub, given some filters.
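
Example (an illustrative sketch of common filters; the author and model names are placeholders):

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()

# List the 5 most-liked Spaces from a given author
>>> spaces = api.list_spaces(author="facebook", sort="likes", direction=-1, limit=5)

# List Spaces that make use of a specific model
>>> spaces = api.list_spaces(models="bert-base-uncased")
```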








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_user_followers</name><anchor>huggingface_hub.HfApi.list_user_followers</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L9583</source><parameters>[{"name": "username", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **username** (`str`) --
  Username of the user to get the followers of.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`Iterable[User]`</rettype><retdesc>A list of [User](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.User) objects with the followers of the user.</retdesc><raises>- `HfHubHTTPError` -- 
  HTTP 404 If the user does not exist on the Hub.</raises><raisederrors>`HfHubHTTPError`</raisederrors></docstring>

Get the list of followers of a user on the Hub.
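
Example (an illustrative sketch; the username is a placeholder):

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> followers = api.list_user_followers("julien-c")
>>> usernames = [follower.username for follower in followers]
```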












</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_user_following</name><anchor>huggingface_hub.HfApi.list_user_following</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L9611</source><parameters>[{"name": "username", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **username** (`str`) --
  Username of the user to get the users followed by.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`Iterable[User]`</rettype><retdesc>A list of [User](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.User) objects with the users followed by the user.</retdesc><raises>- `HfHubHTTPError` -- 
  HTTP 404 If the user does not exist on the Hub.</raises><raisederrors>`HfHubHTTPError`</raisederrors></docstring>

Get the list of users followed by a user on the Hub.
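
Example (an illustrative sketch; the username is a placeholder):

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> following = api.list_user_following("julien-c")
>>> usernames = [user.username for user in following]
```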












</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>list_webhooks</name><anchor>huggingface_hub.HfApi.list_webhooks</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L8906</source><parameters>[{"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved token, which is the recommended
  method for authentication (see https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[WebhookInfo]`</rettype><retdesc>List of webhook info objects.</retdesc></docstring>
List all configured webhooks.







<ExampleCodeBlock anchor="huggingface_hub.HfApi.list_webhooks.example">

Example:
```python
>>> from huggingface_hub import list_webhooks
>>> webhooks = list_webhooks()
>>> len(webhooks)
2
>>> webhooks[0]
WebhookInfo(
    id="654bbbc16f2ec14d77f109cc",
    watched=[WebhookWatchedItem(type="user", name="julien-c"), WebhookWatchedItem(type="org", name="HuggingFaceH4")],
    url="https://webhook.site/a2176e82-5720-43ee-9e06-f91cb4c91548",
    secret="my-secret",
    domains=["repo", "discussion"],
    disabled=False,
)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>merge_pull_request</name><anchor>huggingface_hub.HfApi.merge_pull_request</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6525</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "discussion_num", "val": ": int"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "comment", "val": ": Optional[str] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **discussion_num** (`int`) --
  The number of the Discussion or Pull Request. Must be a strictly positive integer.
- **comment** (`str`, *optional*) --
  An optional comment to post with the status change.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if the Pull Request is on a dataset or
  a Space, `None` or `"model"` if it is on a model. Default is
  `None`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[DiscussionStatusChange](/docs/huggingface_hub/main/ko/package_reference/community#huggingface_hub.DiscussionStatusChange)</rettype><retdesc>the status change event</retdesc></docstring>
Merges a Pull Request.







> [!TIP]
> Raises the following errors:
>
>     - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError)
>       if the HuggingFace API returned an error
>     - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
>       if some parameter value is invalid
>     - [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)
>       If the repository to download from cannot be found. This may be because it doesn't exist,
>       or because it is set to `private` and you do not have access.
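
Example (an illustrative sketch; the repo id and discussion number are placeholders):

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.merge_pull_request(
...     repo_id="username/my-model",
...     discussion_num=2,
...     comment="Merging after review, thanks!",
... )
```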


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>model_info</name><anchor>huggingface_hub.HfApi.model_info</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L2479</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "timeout", "val": ": Optional[float] = None"}, {"name": "securityStatus", "val": ": Optional[bool] = None"}, {"name": "files_metadata", "val": ": bool = False"}, {"name": "expand", "val": ": Optional[list[ExpandModelProperty_T]] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **revision** (`str`, *optional*) --
  The revision of the model repository from which to get the
  information.
- **timeout** (`float`, *optional*) --
  Timeout (in seconds) for the request to the Hub.
- **securityStatus** (`bool`, *optional*) --
  Whether to retrieve the security status from the model
  repository as well. The security status will be returned in the `security_repo_status` field.
- **files_metadata** (`bool`, *optional*) --
  Whether or not to retrieve metadata for files in the repository
  (size, LFS metadata, etc). Defaults to `False`.
- **expand** (`list[ExpandModelProperty_T]`, *optional*) --
  List properties to return in the response. When used, only the properties in the list will be returned.
  This parameter cannot be used if `securityStatus` or `files_metadata` are passed.
  Possible values are `"author"`, `"baseModels"`, `"cardData"`, `"childrenModelCount"`, `"config"`, `"createdAt"`, `"disabled"`, `"downloads"`, `"downloadsAllTime"`, `"gated"`, `"gguf"`, `"inference"`, `"inferenceProviderMapping"`, `"lastModified"`, `"library_name"`, `"likes"`, `"mask_token"`, `"model-index"`, `"pipeline_tag"`, `"private"`, `"safetensors"`, `"sha"`, `"siblings"`, `"spaces"`, `"tags"`, `"transformersInfo"`, `"trendingScore"`, `"widgetData"`, `"usedStorage"`, and `"resourceGroup"`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[huggingface_hub.hf_api.ModelInfo](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.ModelInfo)</rettype><retdesc>The model repository information.</retdesc></docstring>

Get info on one specific model on huggingface.co.

The model can be private if you pass an acceptable token or are logged in.







> [!TIP]
> Raises the following errors:
>
>     - [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)
>       If the repository to download from cannot be found. This may be because it doesn't exist,
>       or because it is set to `private` and you do not have access.
>     - [RevisionNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError)
>       If the revision to download from cannot be found.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>move_repo</name><anchor>huggingface_hub.HfApi.move_repo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L3789</source><parameters>[{"name": "from_id", "val": ": str"}, {"name": "to_id", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[str, bool, None] = None"}]</parameters><paramsdesc>- **from_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`. Original repository identifier.
- **to_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`. Final repository identifier.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if uploading to a dataset or
  space, `None` or `"model"` if uploading to a model. Default is
  `None`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Move a repository from namespace1/repo_name1 to namespace2/repo_name2.

Note there are certain limitations. For more information about moving
repositories, please see
https://hf.co/docs/hub/repositories-settings#renaming-or-transferring-a-repo.



> [!TIP]
> Raises the following errors:
>
>     - [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)
>       If the repository to download from cannot be found. This may be because it doesn't exist,
>       or because it is set to `private` and you do not have access.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>paper_info</name><anchor>huggingface_hub.HfApi.paper_info</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L9685</source><parameters>[{"name": "id", "val": ": str"}]</parameters><paramsdesc>- **id** (`str`) --
  arXiv id of the paper.</paramsdesc><paramgroups>0</paramgroups><rettype>`PaperInfo`</rettype><retdesc>A `PaperInfo` object.</retdesc><raises>- `HfHubHTTPError` -- 
  HTTP 404 If the paper does not exist on the Hub.</raises><raisederrors>`HfHubHTTPError`</raisederrors></docstring>

Get information for a paper on the Hub.












</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>parse_safetensors_file_metadata</name><anchor>huggingface_hub.HfApi.parse_safetensors_file_metadata</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L5554</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "filename", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A user or an organization name and a repo name separated by a `/`.
- **filename** (`str`) --
  The name of the file in the repo.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if the file is in a dataset or space, `None` or `"model"` if in a
  model. Default is `None`.
- **revision** (`str`, *optional*) --
  The git revision to fetch the file from. Can be a branch name, a tag, or a commit hash. Defaults to the
  head of the `"main"` branch.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`SafetensorsFileMetadata`</rettype><retdesc>information related to a safetensors file.</retdesc><raises>- `NotASafetensorsRepoError` -- 
  If the repo is not a safetensors repo i.e. doesn't have either a
  `model.safetensors` or a `model.safetensors.index.json` file.
- `SafetensorsParsingError` -- 
  If a safetensors file header couldn't be parsed correctly.</raises><raisederrors>`NotASafetensorsRepoError` or `SafetensorsParsingError`</raisederrors></docstring>

Parse metadata from a safetensors file on the Hub.

To parse metadata from all safetensors files in a repo at once, use [get_safetensors_metadata()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.get_safetensors_metadata).

For more details regarding the safetensors format, check out https://huggingface.co/docs/safetensors/index#format.












</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>pause_inference_endpoint</name><anchor>huggingface_hub.HfApi.pause_inference_endpoint</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L7828</source><parameters>[{"name": "name", "val": ": str"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **name** (`str`) --
  The name of the Inference Endpoint to pause.
- **namespace** (`str`, *optional*) --
  The namespace in which the Inference Endpoint is located. Defaults to the current user.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[InferenceEndpoint](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint)</rettype><retdesc>information about the paused Inference Endpoint.</retdesc></docstring>
Pause an Inference Endpoint.

A paused Inference Endpoint will not be charged. It can be resumed at any time using [resume_inference_endpoint()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.resume_inference_endpoint).
This is different from scaling the Inference Endpoint to zero with [scale_to_zero_inference_endpoint()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.scale_to_zero_inference_endpoint), which
is automatically restarted when a request is made to it.

For convenience, you can also pause an Inference Endpoint using `InferenceEndpoint.pause()`.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>pause_space</name><anchor>huggingface_hub.HfApi.pause_space</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6973</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  ID of the Space to pause. Example: `"Salesforce/BLIP2"`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[SpaceRuntime](/docs/huggingface_hub/main/ko/package_reference/space_runtime#huggingface_hub.SpaceRuntime)</rettype><retdesc>Runtime information about your Space including `stage=PAUSED` and requested hardware.</retdesc><raises>- [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  If your Space is not found (error 404). Most probably wrong repo_id or your space is private but you
  are not authenticated.
- [HfHubHTTPError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.HfHubHTTPError) -- 
  403 Forbidden: only the owner of a Space can pause it. If you want to manage a Space that you don't
  own, either ask the owner by opening a Discussion or duplicate the Space.
- [BadRequestError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.BadRequestError) -- 
  If your Space is a static Space. Static Spaces are always running and never billed. If you want to hide
  a static Space, you can set it to private.</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) or [HfHubHTTPError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.HfHubHTTPError) or [BadRequestError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.BadRequestError)</raisederrors></docstring>
Pause your Space.

A paused Space stops executing until manually restarted by its owner. This is different from the sleeping
state that free Spaces enter after 48h of inactivity. Paused time is not billed to your account, no matter the
hardware you've selected. To restart your Space, use [restart_space()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.restart_space) or go to your Space settings page.

For more details, please visit [the docs](https://huggingface.co/docs/hub/spaces-gpus#pause).












</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>permanently_delete_lfs_files</name><anchor>huggingface_hub.HfApi.permanently_delete_lfs_files</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L3452</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "lfs_files", "val": ": Iterable[LFSFileInfo]"}, {"name": "rewrite_history", "val": ": bool = True"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The repository for which you are listing LFS files.
- **lfs_files** (`Iterable[LFSFileInfo]`) --
  An iterable of `LFSFileInfo` items to permanently delete from the repo. Use [list_lfs_files()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.list_lfs_files) to list
  all LFS files from a repo.
- **rewrite_history** (`bool`, *optional*, defaults to `True`) --
  Whether to rewrite repository history to remove file pointers referencing the deleted LFS files (recommended).
- **repo_type** (`str`, *optional*) --
  Type of repository. Set to `"dataset"` or `"space"` if listing from a dataset or space, `None` or
  `"model"` if listing from a model. Default is `None`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Permanently delete LFS files from a repo on the Hub.

> [!WARNING]
> This is a permanent action that will affect all commits referencing the deleted files and might corrupt your
> repository. This is a non-revertible operation. Use it only if you know what you are doing.



<ExampleCodeBlock anchor="huggingface_hub.HfApi.permanently_delete_lfs_files.example">

Example:
```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> lfs_files = api.list_lfs_files("username/my-cool-repo")

# Filter files to delete based on a combination of `filename`, `pushed_at`, `ref` or `size`.
# e.g. select only LFS files in the "checkpoints" folder
>>> lfs_files_to_delete = (lfs_file for lfs_file in lfs_files if lfs_file.filename.startswith("checkpoints/"))

# Permanently delete LFS files
>>> api.permanently_delete_lfs_files("username/my-cool-repo", lfs_files_to_delete)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>preupload_lfs_files</name><anchor>huggingface_hub.HfApi.preupload_lfs_files</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L4172</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "additions", "val": ": Iterable[CommitOperationAdd]"}, {"name": "token", "val": ": Union[str, bool, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "create_pr", "val": ": Optional[bool] = None"}, {"name": "num_threads", "val": ": int = 5"}, {"name": "free_memory", "val": ": bool = True"}, {"name": "gitignore_content", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The repository in which you will commit the files, for example: `"username/custom_transformers"`.

- **additions** (`Iterable` of [CommitOperationAdd](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.CommitOperationAdd)) --
  The list of files to upload. Warning: the objects in this list will be mutated to include information
  relative to the upload. Do not reuse the same objects for multiple commits.

- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.

- **repo_type** (`str`, *optional*) --
  The type of repository to upload to (e.g. `"model"`, the default, `"dataset"` or `"space"`).

- **revision** (`str`, *optional*) --
  The git revision to commit from. Defaults to the head of the `"main"` branch.

- **create_pr** (`bool`, *optional*) --
  Whether or not you plan to create a Pull Request with that commit. Defaults to `False`.

- **num_threads** (`int`, *optional*) --
  Number of concurrent threads for uploading files. Defaults to 5.
  Setting it to 2 means at most 2 files will be uploaded concurrently.

- **gitignore_content** (`str`, *optional*) --
  The content of the `.gitignore` file to know which files should be ignored. The order of priority
  is to first check if `gitignore_content` is passed, then check if the `.gitignore` file is present
  in the list of files to commit and finally default to the `.gitignore` file already hosted on the Hub
  (if any).</paramsdesc><paramgroups>0</paramgroups></docstring>
Pre-upload LFS files to S3 in preparation for a future commit.

This method is useful if you are generating the files to upload on-the-fly and you don't want to store them
in memory before uploading them all at once.

> [!WARNING]
> This is a power-user method. You shouldn't need to call it directly to make a normal commit.
> Use [create_commit()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_commit) directly instead.

> [!WARNING]
> Commit operations will be mutated during the process. In particular, the attached `path_or_fileobj` will be
> removed after the upload to save memory (and replaced by an empty `bytes` object). Do not reuse the same
> objects except to pass them to [create_commit()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_commit). If you don't want to remove the attached content from the
> commit operation object, pass `free_memory=False`.



<ExampleCodeBlock anchor="huggingface_hub.HfApi.preupload_lfs_files.example">

Example:
```py
>>> from huggingface_hub import CommitOperationAdd, preupload_lfs_files, create_commit, create_repo

>>> repo_id = create_repo("test_preupload").repo_id

# Generate and preupload LFS files one by one
>>> operations = [] # List of all `CommitOperationAdd` objects that will be generated
>>> for i in range(5):
...     content = ... # generate binary content
...     addition = CommitOperationAdd(path_in_repo=f"shard_{i}_of_5.bin", path_or_fileobj=content)
...     preupload_lfs_files(repo_id, additions=[addition]) # upload + free memory
...     operations.append(addition)

# Create commit
>>> create_commit(repo_id, operations=operations, commit_message="Commit all shards")
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>reject_access_request</name><anchor>huggingface_hub.HfApi.reject_access_request</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L8717</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "user", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "rejection_reason", "val": ": Optional[str]"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The id of the repo to reject the access request for.
- **user** (`str`) --
  The username of the user whose access request should be rejected.
- **repo_type** (`str`, *optional*) --
  The type of the repo to reject access request for. Must be one of `model`, `dataset` or `space`.
  Defaults to `model`.
- **rejection_reason** (`str`, *optional*) --
  Optional rejection reason that will be visible to the user (max 200 characters).
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><raises>- `HfHubHTTPError` -- 
  HTTP 400 if the repo is not gated.
- `HfHubHTTPError` -- 
  HTTP 403 if you only have read-only access to the repo. This can be the case if you don't have `write`
  or `admin` role in the organization the repo belongs to or if you passed a `read` token.
- `HfHubHTTPError` -- 
  HTTP 404 if the user does not exist on the Hub.
- `HfHubHTTPError` -- 
  HTTP 404 if the user access request cannot be found.
- `HfHubHTTPError` -- 
  HTTP 404 if the user access request is already in the rejected list.</raises><raisederrors>`HfHubHTTPError`</raisederrors></docstring>

Reject an access request from a user for a given gated repo.

A rejected request will go to the rejected list. The user cannot download any file of the repo. Rejected
requests can be accepted or cancelled at any time using [accept_access_request()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.accept_access_request) and [cancel_access_request()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.cancel_access_request).
A cancelled request will go back to the pending list while an accepted request will go to the accepted list.

For more info about gated repos, see https://huggingface.co/docs/hub/models-gated.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>rename_discussion</name><anchor>huggingface_hub.HfApi.rename_discussion</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6383</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "discussion_num", "val": ": int"}, {"name": "new_title", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **discussion_num** (`int`) --
  The number of the Discussion or Pull Request. Must be a strictly positive integer.
- **new_title** (`str`) --
  The new title for the discussion.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if uploading to a dataset or
  space, `None` or `"model"` if uploading to a model. Default is
  `None`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[DiscussionTitleChange](/docs/huggingface_hub/main/ko/package_reference/community#huggingface_hub.DiscussionTitleChange)</rettype><retdesc>the title change event</retdesc></docstring>
Renames a Discussion.







<ExampleCodeBlock anchor="huggingface_hub.HfApi.rename_discussion.example">

Examples:
```python
>>> new_title = "New title, fixing a typo"
>>> HfApi().rename_discussion(
...     repo_id="username/repo_name",
...     discussion_num=34,
...     new_title=new_title
... )
# DiscussionTitleChange(id='deadbeef0000000', type='title-change', ...)

```

</ExampleCodeBlock>

> [!TIP]
> Raises the following errors:
>
>     - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError)
>       if the HuggingFace API returned an error
>     - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
>       if some parameter value is invalid
>     - [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)
>       If the repository to download from cannot be found. This may be because it doesn't exist,
>       or because it is set to `private` and you do not have access.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>repo_exists</name><anchor>huggingface_hub.HfApi.repo_exists</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L2765</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[str, bool, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if getting repository info from a dataset or a space,
  `None` or `"model"` if getting repository info from a model. Default is `None`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><retdesc>True if the repository exists, False otherwise.</retdesc></docstring>

Checks if a repository exists on the Hugging Face Hub.





<ExampleCodeBlock anchor="huggingface_hub.HfApi.repo_exists.example">

Examples:
```py
>>> from huggingface_hub import repo_exists
>>> repo_exists("google/gemma-7b")
True
>>> repo_exists("google/not-a-repo")
False
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>repo_info</name><anchor>huggingface_hub.HfApi.repo_info</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L2694</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "timeout", "val": ": Optional[float] = None"}, {"name": "files_metadata", "val": ": bool = False"}, {"name": "expand", "val": ": Optional[Union[ExpandModelProperty_T, ExpandDatasetProperty_T, ExpandSpaceProperty_T]] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **revision** (`str`, *optional*) --
  The revision of the repository from which to get the
  information.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if getting repository info from a dataset or a space,
  `None` or `"model"` if getting repository info from a model. Default is `None`.
- **timeout** (`float`, *optional*) --
  Timeout in seconds for the request to the Hub.
- **expand** (`ExpandModelProperty_T` or `ExpandDatasetProperty_T` or `ExpandSpaceProperty_T`, *optional*) --
  List properties to return in the response. When used, only the properties in the list will be returned.
  This parameter cannot be used if `files_metadata` is passed.
  For an exhaustive list of available properties, check out [model_info()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.model_info), [dataset_info()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.dataset_info) or [space_info()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.space_info).
- **files_metadata** (`bool`, *optional*) --
  Whether or not to retrieve metadata for files in the repository
  (size, LFS metadata, etc). Defaults to `False`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`Union[SpaceInfo, DatasetInfo, ModelInfo]`</rettype><retdesc>The repository information, as a
[huggingface_hub.hf_api.DatasetInfo](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.DatasetInfo), [huggingface_hub.hf_api.ModelInfo](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.ModelInfo)
or [huggingface_hub.hf_api.SpaceInfo](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.SpaceInfo) object.</retdesc></docstring>

Get the info object for a given repo of a given type.







> [!TIP]
> Raises the following errors:
>
>     - [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)
>       If the repository to download from cannot be found. This may be because it doesn't exist,
>       or because it is set to `private` and you do not have access.
>     - [RevisionNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError)
>       If the revision to download from cannot be found.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>request_space_hardware</name><anchor>huggingface_hub.HfApi.request_space_hardware</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6875</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "hardware", "val": ": SpaceHardware"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "sleep_time", "val": ": Optional[int] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  ID of the repo to update. Example: `"bigcode/in-the-stack"`.
- **hardware** (`str` or [SpaceHardware](/docs/huggingface_hub/main/ko/package_reference/space_runtime#huggingface_hub.SpaceHardware)) --
  Hardware on which to run the Space. Example: `"t4-medium"`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **sleep_time** (`int`, *optional*) --
  Number of seconds of inactivity to wait before a Space is put to sleep. Set to `-1` if you don't want
  your Space to sleep (default behavior for upgraded hardware). For free hardware, you can't configure
  the sleep time (value is fixed to 48 hours of inactivity).
  See https://huggingface.co/docs/hub/spaces-gpus#sleep-time for more details.</paramsdesc><paramgroups>0</paramgroups><rettype>[SpaceRuntime](/docs/huggingface_hub/main/ko/package_reference/space_runtime#huggingface_hub.SpaceRuntime)</rettype><retdesc>Runtime information about a Space including Space stage and hardware.</retdesc></docstring>
Request new hardware for a Space.







> [!TIP]
> It is also possible to request hardware directly when creating the Space repo! See [create_repo()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_repo) for details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>request_space_storage</name><anchor>huggingface_hub.HfApi.request_space_storage</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L7176</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "storage", "val": ": SpaceStorage"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  ID of the Space to update. Example: `"open-llm-leaderboard/open_llm_leaderboard"`.
- **storage** (`str` or [SpaceStorage](/docs/huggingface_hub/main/ko/package_reference/space_runtime#huggingface_hub.SpaceStorage)) --
  Storage tier. Either 'small', 'medium', or 'large'.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[SpaceRuntime](/docs/huggingface_hub/main/ko/package_reference/space_runtime#huggingface_hub.SpaceRuntime)</rettype><retdesc>Runtime information about a Space including Space stage and hardware.</retdesc></docstring>
Request persistent storage for a Space.







> [!TIP]
> It is not possible to decrease persistent storage after it's granted. To do so, you must delete it
> via [delete_space_storage()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.delete_space_storage).


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>restart_space</name><anchor>huggingface_hub.HfApi.restart_space</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L7012</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "factory_reboot", "val": ": bool = False"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  ID of the Space to restart. Example: `"Salesforce/BLIP2"`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **factory_reboot** (`bool`, *optional*) --
  If `True`, the Space will be rebuilt from scratch without caching any requirements.</paramsdesc><paramgroups>0</paramgroups><rettype>[SpaceRuntime](/docs/huggingface_hub/main/ko/package_reference/space_runtime#huggingface_hub.SpaceRuntime)</rettype><retdesc>Runtime information about your Space.</retdesc><raises>- [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  If your Space is not found (error 404). Most probably a wrong repo_id, or your Space is private and you
  are not authenticated.
- [HfHubHTTPError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.HfHubHTTPError) -- 
  403 Forbidden: only the owner of a Space can restart it. If you want to restart a Space that you don't
  own, either ask the owner by opening a Discussion or duplicate the Space.
- [BadRequestError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.BadRequestError) -- 
  If your Space is a static Space. Static Spaces are always running and never billed. If you want to hide
  a static Space, you can set it to private.</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) or [HfHubHTTPError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.HfHubHTTPError) or [BadRequestError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.BadRequestError)</raisederrors></docstring>
Restart your Space.

This is the only way to programmatically restart a Space if you've put it on Pause (see [pause_space()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.pause_space)). You
must be the owner of the Space to restart it. If you are using upgraded hardware, your account will be
billed as soon as the Space is restarted. You can trigger a restart no matter the current state of a Space.

For more details, please visit [the docs](https://huggingface.co/docs/hub/spaces-gpus#pause).
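
A minimal usage sketch (the repo id below is a placeholder):

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.restart_space("my-username/my-space")
# Rebuild from scratch, without cached requirements
>>> api.restart_space("my-username/my-space", factory_reboot=True)
```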












</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>resume_inference_endpoint</name><anchor>huggingface_hub.HfApi.resume_inference_endpoint</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L7863</source><parameters>[{"name": "name", "val": ": str"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "running_ok", "val": ": bool = True"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **name** (`str`) --
  The name of the Inference Endpoint to resume.
- **namespace** (`str`, *optional*) --
  The namespace in which the Inference Endpoint is located. Defaults to the current user.
- **running_ok** (`bool`, *optional*) --
  If `True`, the method will not raise an error if the Inference Endpoint is already running. Defaults to
  `True`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[InferenceEndpoint](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint)</rettype><retdesc>information about the resumed Inference Endpoint.</retdesc></docstring>
Resume an Inference Endpoint.

For convenience, you can also resume an Inference Endpoint using [InferenceEndpoint.resume()](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.resume).
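
A minimal usage sketch (the endpoint name below is a placeholder):

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> endpoint = api.resume_inference_endpoint("my-endpoint-name")
>>> endpoint.wait()  # optional: block until the endpoint is fully running
```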








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>resume_scheduled_job</name><anchor>huggingface_hub.HfApi.resume_scheduled_job</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L10413</source><parameters>[{"name": "scheduled_job_id", "val": ": str"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **scheduled_job_id** (`str`) --
  ID of the scheduled Job.

- **namespace** (`str`, *optional*) --
  The namespace where the scheduled Job is. Defaults to the current user's namespace.

- **token** `(Union[bool, str, None]`, *optional*) --
  A valid user access token. If not provided, the locally saved token will be used, which is the
  recommended authentication method. Set to `False` to disable authentication.
  Refer to: https://huggingface.co/docs/huggingface_hub/quick-start#authentication.</paramsdesc><paramgroups>0</paramgroups></docstring>

Resume (unpause) a scheduled compute Job on Hugging Face infrastructure.
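
A minimal usage sketch (the scheduled Job id below is a placeholder):

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.resume_scheduled_job("my-scheduled-job-id")
```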




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>revision_exists</name><anchor>huggingface_hub.HfApi.revision_exists</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L2809</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "revision", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[str, bool, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **revision** (`str`) --
  The revision of the repository to check.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if getting repository info from a dataset or a space,
  `None` or `"model"` if getting repository info from a model. Default is `None`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><retdesc>True if the repository and the revision exist, False otherwise.</retdesc></docstring>

Checks if a specific revision exists on a repo on the Hugging Face Hub.





<ExampleCodeBlock anchor="huggingface_hub.HfApi.revision_exists.example">

Examples:
```py
>>> from huggingface_hub import revision_exists
>>> revision_exists("google/gemma-7b", "float16")
True
>>> revision_exists("google/gemma-7b", "not-a-revision")
False
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>run_as_future</name><anchor>huggingface_hub.HfApi.run_as_future</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L1695</source><parameters>[{"name": "fn", "val": ": Callable[..., R]"}, {"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **fn** (`Callable`) --
  The method to run in the background.
- **\*args, \*\*kwargs** --
  Arguments with which the method will be called.</paramsdesc><paramgroups>0</paramgroups><rettype>`Future`</rettype><retdesc>a [Future](https://docs.python.org/3/library/concurrent.futures.html#future-objects) instance to
get the result of the task.</retdesc></docstring>

Run a method in the background and return a Future instance.

The main goal is to run methods without blocking the main thread (e.g. to push data during a training).
Background jobs are queued to preserve order but are not run in parallel. If you need to speed up your scripts
by parallelizing many calls to the API, you must set up and use your own [ThreadPoolExecutor](https://docs.python.org/3/library/concurrent.futures.html#threadpoolexecutor).

Note: Most-used methods like [upload_file()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.upload_file), [upload_folder()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.upload_folder) and [create_commit()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_commit) have a `run_as_future: bool`
argument to directly call them in the background. This is equivalent to calling `api.run_as_future(...)` on them
but less verbose.
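
When ordering does not matter and you want true parallelism, the stdlib executor mentioned above is enough. A minimal sketch (the `upload` function is a stand-in for a real API call such as `api.upload_file(...)`):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def upload(i: int) -> int:
    # Stand-in for an independent API call (e.g. api.upload_file(...))
    return i * i

# Submit many independent calls at once instead of queuing them sequentially
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(upload, i) for i in range(8)]
    results = sorted(f.result() for f in as_completed(futures))

print(results)
```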







<ExampleCodeBlock anchor="huggingface_hub.HfApi.run_as_future.example">

Example:
```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> future = api.run_as_future(api.whoami) # instant
>>> future.done()
False
>>> future.result() # wait until complete and return result
(...)
>>> future.done()
True
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>run_job</name><anchor>huggingface_hub.HfApi.run_job</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L9769</source><parameters>[{"name": "image", "val": ": str"}, {"name": "command", "val": ": list[str]"}, {"name": "env", "val": ": Optional[dict[str, Any]] = None"}, {"name": "secrets", "val": ": Optional[dict[str, Any]] = None"}, {"name": "flavor", "val": ": Optional[SpaceHardware] = None"}, {"name": "timeout", "val": ": Optional[Union[int, float, str]] = None"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **image** (`str`) --
  The Docker image to use.
  Examples: `"ubuntu"`, `"python:3.12"`, `"pytorch/pytorch:2.6.0-cuda12.4-cudnn9-devel"`.
  Example with an image from a Space: `"hf.co/spaces/lhoestq/duckdb"`.

- **command** (`list[str]`) --
  The command to run. Example: `["echo", "hello"]`.

- **env** (`dict[str, Any]`, *optional*) --
  Defines the environment variables for the Job.

- **secrets** (`dict[str, Any]`, *optional*) --
  Defines the secret environment variables for the Job.

- **flavor** (`str`, *optional*) --
  Flavor for the hardware, as in Hugging Face Spaces. See [SpaceHardware](/docs/huggingface_hub/main/ko/package_reference/space_runtime#huggingface_hub.SpaceHardware) for possible values.
  Defaults to `"cpu-basic"`.

- **timeout** (`Union[int, float, str]`, *optional*) --
  Max duration for the Job: int/float with s (seconds, default), m (minutes), h (hours) or d (days).
  Example: `300` or `"5m"` for 5 minutes.

- **namespace** (`str`, *optional*) --
  The namespace where the Job will be created. Defaults to the current user's namespace.

- **token** `(Union[bool, str, None]`, *optional*) --
  A valid user access token. If not provided, the locally saved token will be used, which is the
  recommended authentication method. Set to `False` to disable authentication.
  Refer to: https://huggingface.co/docs/huggingface_hub/quick-start#authentication.</paramsdesc><paramgroups>0</paramgroups></docstring>

Run compute Jobs on Hugging Face infrastructure.



Example:
<ExampleCodeBlock anchor="huggingface_hub.HfApi.run_job.example">

Run your first Job:

```python
>>> from huggingface_hub import run_job
>>> run_job(image="python:3.12", command=["python", "-c", "print('Hello from HF compute!')"])
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.HfApi.run_job.example-2">

Run a GPU Job:

```python
>>> from huggingface_hub import run_job
>>> image = "pytorch/pytorch:2.6.0-cuda12.4-cudnn9-devel"
>>> command = ["python", "-c", "import torch; print(f'This code ran with the following GPU: {torch.cuda.get_device_name()}')"]
>>> run_job(image=image, command=command, flavor="a10g-small")
```

</ExampleCodeBlock>



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>run_uv_job</name><anchor>huggingface_hub.HfApi.run_uv_job</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L10055</source><parameters>[{"name": "script", "val": ": str"}, {"name": "script_args", "val": ": Optional[list[str]] = None"}, {"name": "dependencies", "val": ": Optional[list[str]] = None"}, {"name": "python", "val": ": Optional[str] = None"}, {"name": "image", "val": ": Optional[str] = None"}, {"name": "env", "val": ": Optional[dict[str, Any]] = None"}, {"name": "secrets", "val": ": Optional[dict[str, Any]] = None"}, {"name": "flavor", "val": ": Optional[SpaceHardware] = None"}, {"name": "timeout", "val": ": Optional[Union[int, float, str]] = None"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "_repo", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **script** (`str`) --
  Path or URL of the UV script, or a command.

- **script_args** (`list[str]`, *optional*) --
  Arguments to pass to the script or command.

- **dependencies** (`list[str]`, *optional*) --
  Dependencies to use to run the UV script.

- **python** (`str`, *optional*) --
  Use a specific Python version. Default is 3.12.

- **image** (`str`, *optional*, defaults to `"ghcr.io/astral-sh/uv:python3.12-bookworm"`) --
  Use a custom Docker image with `uv` installed.

- **env** (`dict[str, Any]`, *optional*) --
  Defines the environment variables for the Job.

- **secrets** (`dict[str, Any]`, *optional*) --
  Defines the secret environment variables for the Job.

- **flavor** (`str`, *optional*) --
  Flavor for the hardware, as in Hugging Face Spaces. See [SpaceHardware](/docs/huggingface_hub/main/ko/package_reference/space_runtime#huggingface_hub.SpaceHardware) for possible values.
  Defaults to `"cpu-basic"`.

- **timeout** (`Union[int, float, str]`, *optional*) --
  Max duration for the Job: int/float with s (seconds, default), m (minutes), h (hours) or d (days).
  Example: `300` or `"5m"` for 5 minutes.

- **namespace** (`str`, *optional*) --
  The namespace where the Job will be created. Defaults to the current user's namespace.

- **token** `(Union[bool, str, None]`, *optional*) --
  A valid user access token. If not provided, the locally saved token will be used, which is the
  recommended authentication method. Set to `False` to disable authentication.
  Refer to: https://huggingface.co/docs/huggingface_hub/quick-start#authentication.</paramsdesc><paramgroups>0</paramgroups></docstring>

Run a UV script Job on Hugging Face infrastructure.



Example:

<ExampleCodeBlock anchor="huggingface_hub.HfApi.run_uv_job.example">

Run a script from a URL:

```python
>>> from huggingface_hub import run_uv_job
>>> script = "https://raw.githubusercontent.com/huggingface/trl/refs/heads/main/trl/scripts/sft.py"
>>> script_args = ["--model_name_or_path", "Qwen/Qwen2-0.5B", "--dataset_name", "trl-lib/Capybara", "--push_to_hub"]
>>> run_uv_job(script, script_args=script_args, dependencies=["trl"], flavor="a10g-small")
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.HfApi.run_uv_job.example-2">

Run a local script:

```python
>>> from huggingface_hub import run_uv_job
>>> script = "my_sft.py"
>>> script_args = ["--model_name_or_path", "Qwen/Qwen2-0.5B", "--dataset_name", "trl-lib/Capybara", "--push_to_hub"]
>>> run_uv_job(script, script_args=script_args, dependencies=["trl"], flavor="a10g-small")
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.HfApi.run_uv_job.example-3">

Run a command:

```python
>>> from huggingface_hub import run_uv_job
>>> script = "lighteval"
>>> script_args = ["endpoint", "inference-providers", "model_name=openai/gpt-oss-20b,provider=auto", "lighteval|gsm8k|0|0"]
>>> run_uv_job(script, script_args=script_args, dependencies=["lighteval"], flavor="a10g-small")
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_to_zero_inference_endpoint</name><anchor>huggingface_hub.HfApi.scale_to_zero_inference_endpoint</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L7909</source><parameters>[{"name": "name", "val": ": str"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **name** (`str`) --
  The name of the Inference Endpoint to scale to zero.
- **namespace** (`str`, *optional*) --
  The namespace in which the Inference Endpoint is located. Defaults to the current user.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[InferenceEndpoint](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint)</rettype><retdesc>information about the scaled-to-zero Inference Endpoint.</retdesc></docstring>
Scale Inference Endpoint to zero.

An Inference Endpoint scaled to zero will not be charged. It will be resumed on the next request to it, with a
cold start delay. This is different from pausing the Inference Endpoint with [pause_inference_endpoint()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.pause_inference_endpoint), which
would require a manual resume with [resume_inference_endpoint()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.resume_inference_endpoint).

For convenience, you can also scale an Inference Endpoint to zero using [InferenceEndpoint.scale_to_zero()](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.scale_to_zero).
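
A minimal usage sketch (the endpoint name below is a placeholder):

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> endpoint = api.scale_to_zero_inference_endpoint("my-endpoint-name")
```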








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_space_sleep_time</name><anchor>huggingface_hub.HfApi.set_space_sleep_time</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L6925</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "sleep_time", "val": ": int"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  ID of the repo to update. Example: `"bigcode/in-the-stack"`.
- **sleep_time** (`int`, *optional*) --
  Number of seconds of inactivity to wait before a Space is put to sleep. Set to `-1` if you don't want
  your Space to pause (default behavior for upgraded hardware). For free hardware, you can't configure
  the sleep time (value is fixed to 48 hours of inactivity).
  See https://huggingface.co/docs/hub/spaces-gpus#sleep-time for more details.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[SpaceRuntime](/docs/huggingface_hub/main/ko/package_reference/space_runtime#huggingface_hub.SpaceRuntime)</rettype><retdesc>Runtime information about a Space including Space stage and hardware.</retdesc></docstring>
Set a custom sleep time for a Space running on upgraded hardware.

Your Space will go to sleep after X seconds of inactivity. You are not billed when your Space is in "sleep"
mode. If a new visitor lands on your Space, it will be woken up automatically. Only upgraded hardware can have a
configurable sleep time. To know more about the sleep stage, please refer to
https://huggingface.co/docs/hub/spaces-gpus#sleep-time.







> [!TIP]
> It is also possible to set a custom sleep time when requesting hardware with [request_space_hardware()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.request_space_hardware).
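
A minimal usage sketch (the repo id below is a placeholder):

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
# Sleep after one hour of inactivity
>>> api.set_space_sleep_time("my-username/my-space", sleep_time=3600)
# Never go to sleep (upgraded hardware only)
>>> api.set_space_sleep_time("my-username/my-space", sleep_time=-1)
```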


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>snapshot_download</name><anchor>huggingface_hub.HfApi.snapshot_download</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L5299</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "cache_dir", "val": ": Union[str, Path, None] = None"}, {"name": "local_dir", "val": ": Union[str, Path, None] = None"}, {"name": "etag_timeout", "val": ": float = 10"}, {"name": "force_download", "val": ": bool = False"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "local_files_only", "val": ": bool = False"}, {"name": "allow_patterns", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "ignore_patterns", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "max_workers", "val": ": int = 8"}, {"name": "tqdm_class", "val": ": Optional[type[base_tqdm]] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A user or an organization name and a repo name separated by a `/`.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if downloading from a dataset or space,
  `None` or `"model"` if downloading from a model. Default is `None`.
- **revision** (`str`, *optional*) --
  An optional Git revision id which can be a branch name, a tag, or a
  commit hash.
- **cache_dir** (`str`, `Path`, *optional*) --
  Path to the folder where cached files are stored.
- **local_dir** (`str` or `Path`, *optional*) --
  If provided, the downloaded files will be placed under this directory.
- **etag_timeout** (`float`, *optional*, defaults to `10`) --
  When fetching ETag, how many seconds to wait for the server to send
  data before giving up which is passed to `httpx.request`.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether the file should be downloaded even if it already exists in the local cache.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  If `True`, avoid downloading the file and return the path to the
  local cached file if it exists.
- **allow_patterns** (`list[str]` or `str`, *optional*) --
  If provided, only files matching at least one pattern are downloaded.
- **ignore_patterns** (`list[str]` or `str`, *optional*) --
  If provided, files matching any of the patterns are not downloaded.
- **max_workers** (`int`, *optional*) --
  Number of concurrent threads to download files (1 thread = 1 file download).
  Defaults to 8.
- **tqdm_class** (`tqdm`, *optional*) --
  If provided, overwrites the default behavior for the progress bar. Passed
  argument must inherit from `tqdm.auto.tqdm` or at least mimic its behavior.
  Note that the `tqdm_class` is not passed to each individual download.
  Defaults to the custom HF progress bar that can be disabled by setting
  `HF_HUB_DISABLE_PROGRESS_BARS` environment variable.</paramsdesc><paramgroups>0</paramgroups><rettype>`str`</rettype><retdesc>folder path of the repo snapshot.</retdesc><raises>- [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  If the repository to download from cannot be found. This may be because it doesn't exist,
  or because it is set to `private` and you do not have access.
- [RevisionNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError) -- 
  If the revision to download from cannot be found.
- [`EnvironmentError`](https://docs.python.org/3/library/exceptions.html#EnvironmentError) -- 
  If `token=True` and the token cannot be found.
- [`OSError`](https://docs.python.org/3/library/exceptions.html#OSError) -- if
  ETag cannot be determined.
- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  if some parameter value is invalid.</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) or [RevisionNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError) or ``EnvironmentError`` or ``OSError`` or ``ValueError``</raisederrors></docstring>
Download repo files.

Download a whole snapshot of a repo's files at the specified revision. This is useful when you want all files from
a repo, because you don't know which ones you will need a priori. All files are nested inside a folder in order
to keep their actual filename relative to that folder. You can also filter which files to download using
`allow_patterns` and `ignore_patterns`.
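
The allow/ignore filtering can be thought of as fnmatch-style glob matching. A simplified sketch of that behavior (not the library's exact implementation; the file names are illustrative):

```python
from fnmatch import fnmatch

def select_files(paths, allow_patterns=None, ignore_patterns=None):
    """Keep paths matching at least one allow pattern and no ignore pattern."""
    selected = []
    for path in paths:
        if allow_patterns is not None and not any(fnmatch(path, p) for p in allow_patterns):
            continue  # not covered by any allow pattern
        if ignore_patterns is not None and any(fnmatch(path, p) for p in ignore_patterns):
            continue  # explicitly excluded
        selected.append(path)
    return selected

files = ["README.md", "model.safetensors", "pytorch_model.bin", "logs/run.txt"]
print(select_files(files, allow_patterns=["*.md", "*.safetensors"]))
print(select_files(files, ignore_patterns=["*.bin", "logs/*"]))
```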

If `local_dir` is provided, the file structure from the repo will be replicated in this location. When using this
option, the `cache_dir` will not be used and a `.cache/huggingface/` folder will be created at the root of `local_dir`
to store some metadata related to the downloaded files. While this mechanism is not as robust as the main
cache-system, it's optimized for regularly pulling the latest version of a repository.

An alternative would be to clone the repo but this requires git and git-lfs to be installed and properly
configured. It is also not possible to filter which files to download when cloning a repository using git.
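
A minimal usage sketch (the repo id below is a placeholder):

```py
>>> from huggingface_hub import snapshot_download
# Download an entire repo snapshot
>>> snapshot_download(repo_id="my-username/my-model")
# Only download markdown files and safetensors weights
>>> snapshot_download(repo_id="my-username/my-model", allow_patterns=["*.md", "*.safetensors"])
```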












</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>space_info</name><anchor>huggingface_hub.HfApi.space_info</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L2624</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "timeout", "val": ": Optional[float] = None"}, {"name": "files_metadata", "val": ": bool = False"}, {"name": "expand", "val": ": Optional[list[ExpandSpaceProperty_T]] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated
  by a `/`.
- **revision** (`str`, *optional*) --
  The revision of the space repository from which to get the
  information.
- **timeout** (`float`, *optional*) --
  Whether to set a timeout for the request to the Hub.
- **files_metadata** (`bool`, *optional*) --
  Whether or not to retrieve metadata for files in the repository
  (size, LFS metadata, etc). Defaults to `False`.
- **expand** (`list[ExpandSpaceProperty_T]`, *optional*) --
  List properties to return in the response. When used, only the properties in the list will be returned.
  This parameter cannot be used if `full` is passed.
  Possible values are `"author"`, `"cardData"`, `"createdAt"`, `"datasets"`, `"disabled"`, `"lastModified"`, `"likes"`, `"models"`, `"private"`, `"runtime"`, `"sdk"`, `"siblings"`, `"sha"`, `"subdomain"`, `"tags"`, `"trendingScore"`, `"usedStorage"`, and `"resourceGroup"`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[SpaceInfo](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.SpaceInfo)</rettype><retdesc>The space repository information.</retdesc></docstring>

Get info on one specific Space on huggingface.co.

The Space can be private if you pass an acceptable token.
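
A minimal usage sketch (the repo id below is a placeholder; `expand` values come from the parameter list above):

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> info = api.space_info("my-username/my-space")
# Fetch only selected properties
>>> info = api.space_info("my-username/my-space", expand=["runtime", "likes"])
```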







> [!TIP]
> Raises the following errors:
>
> - [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)
>   If the repository to download from cannot be found. This may be because it doesn't exist,
>   or because it is set to `private` and you do not have access.
> - [RevisionNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError)
>   If the revision to download from cannot be found.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>super_squash_history</name><anchor>huggingface_hub.HfApi.super_squash_history</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L3318</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "branch", "val": ": Optional[str] = None"}, {"name": "commit_message", "val": ": Optional[str] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[str, bool, None] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated by a `/`.
- **branch** (`str`, *optional*) --
  The branch to squash. Defaults to the head of the `"main"` branch.
- **commit_message** (`str`, *optional*) --
  The commit message to use for the squashed commit.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if listing commits from a dataset or a Space, `None` or `"model"` if
  listing from a model. Default is `None`.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><raises>- [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo
  does not exist.
- [RevisionNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError) -- 
  If the branch to squash cannot be found.
- [BadRequestError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.BadRequestError) -- 
  If invalid reference for a branch. You cannot squash history on tags.</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) or [RevisionNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError) or [BadRequestError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.BadRequestError)</raisederrors></docstring>
Squash commit history on a branch for a repo on the Hub.

Squashing the repo history is useful when you know you'll make hundreds of commits and you don't want to
clutter the history. Squashing commits can only be performed from the head of a branch.

> [!WARNING]
> Once squashed, the commit history cannot be retrieved. This is a non-revertible operation.

> [!WARNING]
> Once the history of a branch has been squashed, it is not possible to merge it back into another branch since
> their history will have diverged.







<ExampleCodeBlock anchor="huggingface_hub.HfApi.super_squash_history.example">

Example:
```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()

# Create repo
>>> repo_id = api.create_repo("test-squash").repo_id

# Make a lot of commits.
>>> api.upload_file(repo_id=repo_id, path_in_repo="file.txt", path_or_fileobj=b"content")
>>> api.upload_file(repo_id=repo_id, path_in_repo="lfs.bin", path_or_fileobj=b"content")
>>> api.upload_file(repo_id=repo_id, path_in_repo="file.txt", path_or_fileobj=b"another_content")

# Squash history
>>> api.super_squash_history(repo_id=repo_id)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>suspend_scheduled_job</name><anchor>huggingface_hub.HfApi.suspend_scheduled_job</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L10384</source><parameters>[{"name": "scheduled_job_id", "val": ": str"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **scheduled_job_id** (`str`) --
  ID of the scheduled Job.

- **namespace** (`str`, *optional*) --
  The namespace where the scheduled Job is. Defaults to the current user's namespace.

- **token** (`Union[bool, str, None]`, *optional*) --
  A valid user access token. If not provided, the locally saved token will be used, which is the
  recommended authentication method. Set to `False` to disable authentication.
  Refer to: https://huggingface.co/docs/huggingface_hub/quick-start#authentication.</paramsdesc><paramgroups>0</paramgroups></docstring>

Suspend (pause) a scheduled compute Job on Hugging Face infrastructure.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unlike</name><anchor>huggingface_hub.HfApi.unlike</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L2313</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "token", "val": ": Union[bool, str, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The repository to unlike. Example: `"user/my-cool-model"`.

- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.

- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if unliking a dataset or space, `None` or
  `"model"` if unliking a model. Default is `None`.</paramsdesc><paramgroups>0</paramgroups><raises>- [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  If repository is not found (error 404): wrong repo_id/repo_type, private
  but not authenticated or repo does not exist.</raises><raisederrors>[RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)</raisederrors></docstring>

Unlike a given repo on the Hub (e.g. remove from favorite list).

To prevent spam usage, it is not possible to `like` a repository from a script.

See also [list_liked_repos()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.list_liked_repos).







<ExampleCodeBlock anchor="huggingface_hub.HfApi.unlike.example">

Example:
```python
>>> from huggingface_hub import list_liked_repos, unlike
>>> "gpt2" in list_liked_repos().models # we assume you have already liked gpt2
True
>>> unlike("gpt2")
>>> "gpt2" in list_liked_repos().models
False
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>update_collection_item</name><anchor>huggingface_hub.HfApi.update_collection_item</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L8309</source><parameters>[{"name": "collection_slug", "val": ": str"}, {"name": "item_object_id", "val": ": str"}, {"name": "note", "val": ": Optional[str] = None"}, {"name": "position", "val": ": Optional[int] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **collection_slug** (`str`) --
  Slug of the collection to update. Example: `"TheBloke/recent-models-64f9a55bb3115b4f513ec026"`.
- **item_object_id** (`str`) --
  ID of the item in the collection. This is not the id of the item on the Hub (repo_id or paper id).
  It must be retrieved from a [CollectionItem](/docs/huggingface_hub/main/ko/package_reference/collections#huggingface_hub.CollectionItem) object. Example: `collection.items[0].item_object_id`.
- **note** (`str`, *optional*) --
  A note to attach to the item in the collection. The maximum size for a note is 500 characters.
- **position** (`int`, *optional*) --
  New position of the item in the collection.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Update an item in a collection.



<ExampleCodeBlock anchor="huggingface_hub.HfApi.update_collection_item.example">

Example:

```py
>>> from huggingface_hub import get_collection, update_collection_item

# Get collection first
>>> collection = get_collection("TheBloke/recent-models-64f9a55bb3115b4f513ec026")

# Update item based on its ID (add note + update position)
>>> update_collection_item(
...     collection_slug="TheBloke/recent-models-64f9a55bb3115b4f513ec026",
...     item_object_id=collection.items[-1].item_object_id,
...     note="Newly updated model!",
...     position=0,
... )
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>update_collection_metadata</name><anchor>huggingface_hub.HfApi.update_collection_metadata</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L8121</source><parameters>[{"name": "collection_slug", "val": ": str"}, {"name": "title", "val": ": Optional[str] = None"}, {"name": "description", "val": ": Optional[str] = None"}, {"name": "position", "val": ": Optional[int] = None"}, {"name": "private", "val": ": Optional[bool] = None"}, {"name": "theme", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **collection_slug** (`str`) --
  Slug of the collection to update. Example: `"TheBloke/recent-models-64f9a55bb3115b4f513ec026"`.
- **title** (`str`) --
  Title of the collection to update.
- **description** (`str`, *optional*) --
  Description of the collection to update.
- **position** (`int`, *optional*) --
  New position of the collection in the list of collections of the user.
- **private** (`bool`, *optional*) --
  Whether the collection should be private or not.
- **theme** (`str`, *optional*) --
  Theme of the collection on the Hub.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Update metadata of a collection on the Hub.

All arguments are optional. Only provided metadata will be updated.



Returns: [Collection](/docs/huggingface_hub/main/ko/package_reference/collections#huggingface_hub.Collection)

<ExampleCodeBlock anchor="huggingface_hub.HfApi.update_collection_metadata.example">

Example:

```py
>>> from huggingface_hub import update_collection_metadata
>>> collection = update_collection_metadata(
...     collection_slug="username/iccv-2023-64f9a55bb3115b4f513ec026",
...     title="ICCV Oct. 2023",
...     description="Portfolio of models, datasets, papers and demos I presented at ICCV Oct. 2023",
...     private=False,
...     theme="pink",
... )
>>> collection.slug
"username/iccv-oct-2023-64f9a55bb3115b4f513ec026"
# ^collection slug got updated but not the trailing ID
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>update_inference_endpoint</name><anchor>huggingface_hub.HfApi.update_inference_endpoint</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L7663</source><parameters>[{"name": "name", "val": ": str"}, {"name": "accelerator", "val": ": Optional[str] = None"}, {"name": "instance_size", "val": ": Optional[str] = None"}, {"name": "instance_type", "val": ": Optional[str] = None"}, {"name": "min_replica", "val": ": Optional[int] = None"}, {"name": "max_replica", "val": ": Optional[int] = None"}, {"name": "scale_to_zero_timeout", "val": ": Optional[int] = None"}, {"name": "repository", "val": ": Optional[str] = None"}, {"name": "framework", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "task", "val": ": Optional[str] = None"}, {"name": "custom_image", "val": ": Optional[dict] = None"}, {"name": "env", "val": ": Optional[dict[str, str]] = None"}, {"name": "secrets", "val": ": Optional[dict[str, str]] = None"}, {"name": "domain", "val": ": Optional[str] = None"}, {"name": "path", "val": ": Optional[str] = None"}, {"name": "cache_http_responses", "val": ": Optional[bool] = None"}, {"name": "tags", "val": ": Optional[list[str]] = None"}, {"name": "namespace", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **name** (`str`) --
  The name of the Inference Endpoint to update.

- **accelerator** (`str`, *optional*) --
  The hardware accelerator to be used for inference (e.g. `"cpu"`).
- **instance_size** (`str`, *optional*) --
  The size or type of the instance to be used for hosting the model (e.g. `"x4"`).
- **instance_type** (`str`, *optional*) --
  The cloud instance type where the Inference Endpoint will be deployed (e.g. `"intel-icl"`).
- **min_replica** (`int`, *optional*) --
  The minimum number of replicas (instances) to keep running for the Inference Endpoint.
- **max_replica** (`int`, *optional*) --
  The maximum number of replicas (instances) to scale to for the Inference Endpoint.
- **scale_to_zero_timeout** (`int`, *optional*) --
  The duration in minutes before an inactive endpoint is scaled to zero.

- **repository** (`str`, *optional*) --
  The name of the model repository associated with the Inference Endpoint (e.g. `"gpt2"`).
- **framework** (`str`, *optional*) --
  The machine learning framework used for the model (e.g. `"custom"`).
- **revision** (`str`, *optional*) --
  The specific model revision to deploy on the Inference Endpoint (e.g. `"6c0e6080953db56375760c0471a8c5f2929baf11"`).
- **task** (`str`, *optional*) --
  The task on which to deploy the model (e.g. `"text-classification"`).
- **custom_image** (`dict`, *optional*) --
  A custom Docker image to use for the Inference Endpoint. This is useful if you want to deploy an
  Inference Endpoint running on the `text-generation-inference` (TGI) framework (see examples).
- **env** (`dict[str, str]`, *optional*) --
  Non-secret environment variables to inject in the container environment.
- **secrets** (`dict[str, str]`, *optional*) --
  Secret values to inject in the container environment.

- **domain** (`str`, *optional*) --
  The custom domain for the Inference Endpoint deployment. If set, the Inference Endpoint will be available at this domain (e.g. `"my-new-domain.cool-website.woof"`).
- **path** (`str`, *optional*) --
  The custom path to the deployed model, should start with a `/` (e.g. `"/models/google-bert/bert-base-uncased"`).

- **cache_http_responses** (`bool`, *optional*) --
  Whether to cache HTTP responses from the Inference Endpoint.
- **tags** (`list[str]`, *optional*) --
  A list of tags to associate with the Inference Endpoint.

- **namespace** (`str`, *optional*) --
  The namespace where the Inference Endpoint will be updated. Defaults to the current user's namespace.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[InferenceEndpoint](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint)</rettype><retdesc>information about the updated Inference Endpoint.</retdesc></docstring>
Update an Inference Endpoint.

This method allows updating the compute configuration, the deployed model, the route, or any combination of these.
All arguments are optional but at least one must be provided.

For convenience, you can also update an Inference Endpoint using [InferenceEndpoint.update()](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.update).








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>update_repo_settings</name><anchor>huggingface_hub.HfApi.update_repo_settings</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L3714</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "gated", "val": ": Optional[Literal['auto', 'manual', False]] = None"}, {"name": "private", "val": ": Optional[bool] = None"}, {"name": "token", "val": ": Union[str, bool, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  A namespace (user or an organization) and a repo name separated by a /.
- **gated** (`Literal["auto", "manual", False]`, *optional*) --
  The gated status for the repository. If set to `None` (default), the `gated` setting of the repository won't be updated.
  * "auto": The repository is gated, and access requests are automatically approved or denied based on predefined criteria.
  * "manual": The repository is gated, and access requests require manual approval.
  * False: The repository is not gated, and anyone can access it.
- **private** (`bool`, *optional*) --
  Whether the repository should be private.
- **token** (`Union[str, bool, None]`, *optional*) --
  A valid user access token (string). Defaults to the locally saved token,
  which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass False.
- **repo_type** (`str`, *optional*) --
  The type of the repository to update settings from (`"model"`, `"dataset"` or `"space"`).
  Defaults to `"model"`.</paramsdesc><paramgroups>0</paramgroups><raises>- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If gated is not one of "auto", "manual", or False.
- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If repo_type is not one of the values in constants.REPO_TYPES.
- [HfHubHTTPError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.HfHubHTTPError) -- 
  If the request to the Hugging Face Hub API fails.
- [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError) -- 
  If the repository to update cannot be found. This may be because it doesn't exist,
  or because it is set to `private` and you do not have access.</raises><raisederrors>``ValueError`` or [HfHubHTTPError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.HfHubHTTPError) or [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)</raisederrors></docstring>

Update the settings of a repository, including gated access and visibility.

To give more control over how repos are used, the Hub allows repo authors to enable
access requests for their repos, and also to set the visibility of the repo to private.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>update_webhook</name><anchor>huggingface_hub.HfApi.update_webhook</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L9087</source><parameters>[{"name": "webhook_id", "val": ": str"}, {"name": "url", "val": ": Optional[str] = None"}, {"name": "watched", "val": ": Optional[list[Union[dict, WebhookWatchedItem]]] = None"}, {"name": "domains", "val": ": Optional[list[constants.WEBHOOK_DOMAIN_T]] = None"}, {"name": "secret", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **webhook_id** (`str`) --
  The unique identifier of the webhook to be updated.
- **url** (`str`, optional) --
  The URL to which the payload will be sent.
- **watched** (`list[WebhookWatchedItem]`, optional) --
  List of items to watch. It can be users, orgs, models, datasets, or spaces.
  Refer to `WebhookWatchedItem` for more details. Watched items can also be provided as plain dictionaries.
- **domains** (`list[Literal["repo", "discussion"]]`, optional) --
  The domains to watch. This can include "repo", "discussion", or both.
- **secret** (`str`, optional) --
  A secret to sign the payload with, providing an additional layer of security.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved token, which is the recommended
  method for authentication (see https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>`WebhookInfo`</rettype><retdesc>Info about the updated webhook.</retdesc></docstring>
Update an existing webhook.







<ExampleCodeBlock anchor="huggingface_hub.HfApi.update_webhook.example">

Example:
```python
>>> from huggingface_hub import update_webhook
>>> updated_payload = update_webhook(
...     webhook_id="654bbbc16f2ec14d77f109cc",
...     url="https://new.webhook.site/a2176e82-5720-43ee-9e06-f91cb4c91548",
...     watched=[{"type": "user", "name": "julien-c"}, {"type": "org", "name": "HuggingFaceH4"}],
...     domains=["repo"],
...     secret="my-secret",
... )
>>> print(updated_payload)
WebhookInfo(
    id="654bbbc16f2ec14d77f109cc",
    job=None,
    url="https://new.webhook.site/a2176e82-5720-43ee-9e06-f91cb4c91548",
    watched=[WebhookWatchedItem(type="user", name="julien-c"), WebhookWatchedItem(type="org", name="HuggingFaceH4")],
    domains=["repo"],
    secret="my-secret",
    disabled=False,
)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>upload_file</name><anchor>huggingface_hub.HfApi.upload_file</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L4360</source><parameters>[{"name": "path_or_fileobj", "val": ": Union[str, Path, bytes, BinaryIO]"}, {"name": "path_in_repo", "val": ": str"}, {"name": "repo_id", "val": ": str"}, {"name": "token", "val": ": Union[str, bool, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "commit_message", "val": ": Optional[str] = None"}, {"name": "commit_description", "val": ": Optional[str] = None"}, {"name": "create_pr", "val": ": Optional[bool] = None"}, {"name": "parent_commit", "val": ": Optional[str] = None"}, {"name": "run_as_future", "val": ": bool = False"}]</parameters><paramsdesc>- **path_or_fileobj** (`str`, `Path`, `bytes`, or `IO`) --
  Path to a file on the local machine or binary data stream /
  fileobj / buffer.
- **path_in_repo** (`str`) --
  Relative filepath in the repo, for example:
  `"checkpoints/1fec34a/weights.bin"`
- **repo_id** (`str`) --
  The repository to which the file will be uploaded, for example:
  `"username/custom_transformers"`
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if uploading to a dataset or
  space, `None` or `"model"` if uploading to a model. Default is
  `None`.
- **revision** (`str`, *optional*) --
  The git revision to commit from. Defaults to the head of the `"main"` branch.
- **commit_message** (`str`, *optional*) --
  The summary / title / first line of the generated commit
- **commit_description** (`str` *optional*) --
  The description of the generated commit
- **create_pr** (`boolean`, *optional*) --
  Whether or not to create a Pull Request with that commit. Defaults to `False`.
  If `revision` is not set, PR is opened against the `"main"` branch. If
  `revision` is set and is a branch, PR is opened against this branch. If
  `revision` is set and is not a branch name (example: a commit oid), a
  `RevisionNotFoundError` is returned by the server.
- **parent_commit** (`str`, *optional*) --
  The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported.
  If specified and `create_pr` is `False`, the commit will fail if `revision` does not point to `parent_commit`.
  If specified and `create_pr` is `True`, the pull request will be created from `parent_commit`.
  Specifying `parent_commit` ensures the repo has not changed before committing the changes, and can be
  especially useful if the repo is updated / committed to concurrently.
- **run_as_future** (`bool`, *optional*) --
  Whether or not to run this method in the background. Background jobs are run sequentially without
  blocking the main thread. Passing `run_as_future=True` will return a [Future](https://docs.python.org/3/library/concurrent.futures.html#future-objects)
  object. Defaults to `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[CommitInfo](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.CommitInfo) or `Future`</rettype><retdesc>Instance of [CommitInfo](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.CommitInfo) containing information about the newly created commit (commit hash, commit
url, pr url, commit message,...). If `run_as_future=True` is passed, returns a Future object which will
contain the result when executed.</retdesc></docstring>

Upload a local file (up to 50 GB) to the given repo. The upload is done
through an HTTP POST request, and doesn't require git or git-lfs to be
installed.







> [!TIP]
> Raises the following errors:
>
> - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError)
>   if the HuggingFace API returned an error
> - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
>   if some parameter value is invalid
> - [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)
>   if the repository to upload to cannot be found. This may be because it doesn't exist,
>   or because it is set to `private` and you do not have access.
> - [RevisionNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError)
>   if the revision to commit to cannot be found.

> [!WARNING]
> `upload_file` assumes that the repo already exists on the Hub. If you get a
> Client error 404, please make sure you are authenticated and that `repo_id` and
> `repo_type` are set correctly. If repo does not exist, create it first using
> [create_repo()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_repo).

<ExampleCodeBlock anchor="huggingface_hub.HfApi.upload_file.example">

Example:

```python
>>> from huggingface_hub import upload_file

>>> with open("./local/filepath", "rb") as fobj:
...     upload_file(
...         path_or_fileobj=fobj,
...         path_in_repo="remote/file/path.h5",
...         repo_id="username/my-dataset",
...         repo_type="dataset",
...         token="my_token",
...     )

>>> upload_file(
...     path_or_fileobj=".\\local\\file\\path",
...     path_in_repo="remote/file/path.h5",
...     repo_id="username/my-model",
...     token="my_token",
... )

>>> upload_file(
...     path_or_fileobj=".\\local\\file\\path",
...     path_in_repo="remote/file/path.h5",
...     repo_id="username/my-model",
...     token="my_token",
...     create_pr=True,
... )
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>upload_folder</name><anchor>huggingface_hub.HfApi.upload_folder</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L4542</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "folder_path", "val": ": Union[str, Path]"}, {"name": "path_in_repo", "val": ": Optional[str] = None"}, {"name": "commit_message", "val": ": Optional[str] = None"}, {"name": "commit_description", "val": ": Optional[str] = None"}, {"name": "token", "val": ": Union[str, bool, None] = None"}, {"name": "repo_type", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "create_pr", "val": ": Optional[bool] = None"}, {"name": "parent_commit", "val": ": Optional[str] = None"}, {"name": "allow_patterns", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "ignore_patterns", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "delete_patterns", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "run_as_future", "val": ": bool = False"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The repository to which the file will be uploaded, for example:
  `"username/custom_transformers"`
- **folder_path** (`str` or `Path`) --
  Path to the folder to upload on the local file system
- **path_in_repo** (`str`, *optional*) --
  Relative path of the directory in the repo, for example:
  `"checkpoints/1fec34a/results"`. Will default to the root folder of the repository.
- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if uploading to a dataset or
  space, `None` or `"model"` if uploading to a model. Default is
  `None`.
- **revision** (`str`, *optional*) --
  The git revision to commit from. Defaults to the head of the `"main"` branch.
- **commit_message** (`str`, *optional*) --
  The summary / title / first line of the generated commit. Defaults to:
  `f"Upload {path_in_repo} with huggingface_hub"`
- **commit_description** (`str` *optional*) --
  The description of the generated commit
- **create_pr** (`boolean`, *optional*) --
  Whether or not to create a Pull Request with that commit. Defaults to `False`. If `revision` is not
  set, PR is opened against the `"main"` branch. If `revision` is set and is a branch, PR is opened
  against this branch. If `revision` is set and is not a branch name (example: a commit oid), a
  `RevisionNotFoundError` is returned by the server.
- **parent_commit** (`str`, *optional*) --
  The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported.
  If specified and `create_pr` is `False`, the commit will fail if `revision` does not point to `parent_commit`.
  If specified and `create_pr` is `True`, the pull request will be created from `parent_commit`.
  Specifying `parent_commit` ensures the repo has not changed before committing the changes, and can be
  especially useful if the repo is updated / committed to concurrently.
- **allow_patterns** (`list[str]` or `str`, *optional*) --
  If provided, only files matching at least one pattern are uploaded.
- **ignore_patterns** (`list[str]` or `str`, *optional*) --
  If provided, files matching any of the patterns are not uploaded.
- **delete_patterns** (`list[str]` or `str`, *optional*) --
  If provided, remote files matching any of the patterns will be deleted from the repo while committing
  new files. This is useful if you don't know which files have already been uploaded.
  Note: to avoid discrepancies the `.gitattributes` file is not deleted even if it matches the pattern.
- **run_as_future** (`bool`, *optional*) --
  Whether or not to run this method in the background. Background jobs are run sequentially without
  blocking the main thread. Passing `run_as_future=True` will return a [Future](https://docs.python.org/3/library/concurrent.futures.html#future-objects)
  object. Defaults to `False`.</paramsdesc><paramgroups>0</paramgroups><rettype>[CommitInfo](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.CommitInfo) or `Future`</rettype><retdesc>Instance of [CommitInfo](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.CommitInfo) containing information about the newly created commit (commit hash, commit
url, pr url, commit message,...). If `run_as_future=True` is passed, returns a Future object which will
contain the result when executed.</retdesc></docstring>

Upload a local folder to the given repo. The upload is done through HTTP requests, and doesn't require git or
git-lfs to be installed.

The structure of the folder will be preserved. Files with the same name already present in the repository will
be overwritten. Others will be left untouched.

Use the `allow_patterns` and `ignore_patterns` arguments to specify which files to upload. These parameters
accept either a single pattern or a list of patterns. Patterns are Standard Wildcards (globbing patterns) as
documented [here](https://tldp.org/LDP/GNU-Linux-Tools-Summary/html/x11655.htm). If both `allow_patterns` and
`ignore_patterns` are provided, both constraints apply. By default, all files from the folder are uploaded.

Use the `delete_patterns` argument to specify remote files you want to delete. Input type is the same as for
`allow_patterns` (see above). If `path_in_repo` is also provided, the patterns are matched against paths
relative to this folder. For example, `upload_folder(..., path_in_repo="experiment", delete_patterns="logs/*")`
will delete any remote file under `./experiment/logs/`. Note that the `.gitattributes` file will not be deleted
even if it matches the patterns.

Any `.git/` folder present in any subdirectory will be ignored. However, please be aware that the `.gitignore`
file is not taken into account.

Uses `HfApi.create_commit` under the hood.
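When `run_as_future=True` is passed, the method returns a standard `concurrent.futures.Future`, so the usual Future API applies. A local sketch of the pattern, using a plain executor and a stand-in function instead of a real upload so it runs offline:

```python
from concurrent.futures import ThreadPoolExecutor

def fake_upload(folder_path):
    # Stand-in for upload_folder(..., run_as_future=True); a real call
    # would return a Future from HfApi's internal background executor.
    return f"commit for {folder_path}"

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(fake_upload, "local/checkpoints")
    # ... the main thread keeps working here ...
    result = future.result()  # blocks until the background job finishes
    print(result)
```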







> [!TIP]
> Raises the following errors:
>
>     - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError)
>     if the HuggingFace API returned an error
>     - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
>     if some parameter value is invalid

> [!WARNING]
> `upload_folder` assumes that the repo already exists on the Hub. If you get a 404 Client Error, please make
> sure you are authenticated and that `repo_id` and `repo_type` are set correctly. If the repo does not exist, create
> it first using [create_repo()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_repo).

> [!TIP]
> When dealing with a large folder (thousands of files or hundreds of GB), we recommend using [upload_large_folder()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.upload_large_folder) instead.

<ExampleCodeBlock anchor="huggingface_hub.HfApi.upload_folder.example">

Example:

```python
# Upload checkpoints folder except the log files
>>> upload_folder(
...     folder_path="local/checkpoints",
...     path_in_repo="remote/experiment/checkpoints",
...     repo_id="username/my-dataset",
...     repo_type="dataset",
...     token="my_token",
...     ignore_patterns="**/logs/*.txt",
... )

# Upload checkpoints folder including logs while deleting existing logs from the repo
# Useful if you don't know exactly which log files have already been pushed
>>> upload_folder(
...     folder_path="local/checkpoints",
...     path_in_repo="remote/experiment/checkpoints",
...     repo_id="username/my-dataset",
...     repo_type="dataset",
...     token="my_token",
...     delete_patterns="**/logs/*.txt",
... )

# Upload checkpoints folder while creating a PR
>>> upload_folder(
...     folder_path="local/checkpoints",
...     path_in_repo="remote/experiment/checkpoints",
...     repo_id="username/my-dataset",
...     repo_type="dataset",
...     token="my_token",
...     create_pr=True,
... )
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>upload_large_folder</name><anchor>huggingface_hub.HfApi.upload_large_folder</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L4975</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "folder_path", "val": ": Union[str, Path]"}, {"name": "repo_type", "val": ": str"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "private", "val": ": Optional[bool] = None"}, {"name": "allow_patterns", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "ignore_patterns", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "num_workers", "val": ": Optional[int] = None"}, {"name": "print_report", "val": ": bool = True"}, {"name": "print_report_every", "val": ": int = 60"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The repository to which the folder will be uploaded.
  E.g. `"HuggingFaceTB/smollm-corpus"`.
- **folder_path** (`str` or `Path`) --
  Path to the folder to upload on the local file system.
- **repo_type** (`str`) --
  Type of the repository. Must be one of `"model"`, `"dataset"` or `"space"`.
  Unlike in all other `HfApi` methods, `repo_type` is explicitly required here. This is to avoid
  any mistake when uploading a large folder to the Hub, which could otherwise force a complete
  re-upload.
- **revision** (`str`, `optional`) --
  The branch to commit to. If not provided, the `main` branch will be used.
- **private** (`bool`, `optional`) --
  Whether the repository should be private.
  If `None` (default), the repo will be public unless the organization's default is private.
- **allow_patterns** (`list[str]` or `str`, *optional*) --
  If provided, only files matching at least one pattern are uploaded.
- **ignore_patterns** (`list[str]` or `str`, *optional*) --
  If provided, files matching any of the patterns are not uploaded.
- **num_workers** (`int`, *optional*) --
  Number of workers to start. Defaults to `os.cpu_count() - 2` (minimum 2).
  A higher number of workers may speed up the process if your machine allows it. However, on machines with a
  slower connection, it is recommended to keep the number of workers low to ensure better resumability.
  Indeed, partially uploaded files will have to be completely re-uploaded if the process is interrupted.
- **print_report** (`bool`, *optional*) --
  Whether to print a report of the upload progress. Defaults to `True`.
  The report is printed to `sys.stdout` every `print_report_every` seconds (60 by default) and overwrites the previous report.
- **print_report_every** (`int`, *optional*) --
  Frequency at which the report is printed. Defaults to 60 seconds.</paramsdesc><paramgroups>0</paramgroups></docstring>
Upload a large folder to the Hub in the most resilient way possible.

Several workers are started to upload files in an optimized way. Before being committed to a repo, files must be
hashed and, if they are LFS files, pre-uploaded. Workers perform these tasks for each file in the folder.
At each step, some metadata information about the upload process is saved in the folder under `.cache/.huggingface/`
to be able to resume the process if interrupted. The whole process might result in several commits.
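
The default worker count documented for `num_workers` can be computed as follows. This is a sketch of the documented rule (`os.cpu_count() - 2`, with a floor of 2), not the library's exact code:

```python
import os

def default_num_workers():
    # Documented default: os.cpu_count() - 2, never fewer than 2 workers.
    return max((os.cpu_count() or 2) - 2, 2)

print(default_num_workers())
```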



> [!TIP]
> A few things to keep in mind:
>     - Repository limits still apply: https://huggingface.co/docs/hub/repositories-recommendations
>     - Do not start several processes in parallel.
>     - You can interrupt and resume the process at any time.
>     - Do not upload the same folder to several repositories. If you need to do so, you must delete the local `.cache/.huggingface/` folder first.

> [!WARNING]
> While much more robust for uploading large folders, `upload_large_folder` is more limited than [upload_folder()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.upload_folder) feature-wise. In practice:
>     - you cannot set a custom `path_in_repo`. If you want to upload to a subfolder, you need to set the proper structure locally.
>     - you cannot set a custom `commit_message` and `commit_description` since multiple commits are created.
>     - you cannot delete from the repo while uploading. Please make a separate commit first.
>     - you cannot create a PR directly. Please create a PR first (from the UI or using [create_pull_request()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_pull_request)) and then commit to it by passing `revision`.

**Technical details:**

The `upload_large_folder` process is as follows:
1. (Check parameters and setup.)
2. Create repo if missing.
3. List local files to upload.
4. Run validation checks and display warnings if repository limits might be exceeded:
   - Warns if the total number of files exceeds 100k (recommended limit).
   - Warns if any folder contains more than 10k files (recommended limit).
   - Warns about files larger than 20GB (recommended) or 50GB (hard limit).
5. Start workers. Workers can perform the following tasks:
   - Hash a file.
   - Get upload mode (regular or LFS) for a list of files.
   - Pre-upload an LFS file.
   - Commit a bunch of files.
Once a worker finishes a task, it will move on to the next task based on the priority list (see below) until
all files are uploaded and committed.
6. While workers are up, regularly print a report to sys.stdout.

Order of priority:
1. Commit if more than 5 minutes since last commit attempt (and at least 1 file).
2. Commit if at least 150 files are ready to commit.
3. Get upload mode if at least 10 files have been hashed.
4. Pre-upload LFS file if at least 1 file and no worker is pre-uploading.
5. Hash file if at least 1 file and no worker is hashing.
6. Get upload mode if at least 1 file and no worker is getting upload mode.
7. Pre-upload LFS file if at least 1 file.
8. Hash file if at least 1 file to hash.
9. Get upload mode if at least 1 file to get upload mode.
10. Commit if at least 1 file to commit and at least 1 min since last commit attempt.
11. Commit if at least 1 file to commit and all other queues are empty.

Special rules:
- Only one worker can commit at a time.
- If no tasks are available, the worker waits for 10 seconds before checking again.
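
The priority list and special rules above amount to a simple dispatch function. The sketch below keeps only queue counts and the last commit time; the real workers also track in-progress state, so treat this as an illustration of the documented priorities rather than the actual scheduler:

```python
import time

def next_task(queues, last_commit_ts, busy, now=None):
    """Pick the next task following the documented priority list.

    queues: counts of files waiting per stage ("commit", "hash",
            "get_upload_mode", "preupload_lfs")
    busy:   set of task names another worker is currently running
    """
    now = time.time() if now is None else now
    since_commit = now - last_commit_ts
    if queues["commit"] >= 1 and since_commit > 5 * 60:
        return "commit"                      # rule 1: >5 min since last commit
    if queues["commit"] >= 150:
        return "commit"                      # rule 2: enough files ready
    if queues["get_upload_mode"] >= 10:
        return "get_upload_mode"             # rule 3: batch of hashed files
    if queues["preupload_lfs"] >= 1 and "preupload_lfs" not in busy:
        return "preupload_lfs"               # rule 4
    if queues["hash"] >= 1 and "hash" not in busy:
        return "hash"                        # rule 5
    if queues["get_upload_mode"] >= 1 and "get_upload_mode" not in busy:
        return "get_upload_mode"             # rule 6
    if queues["preupload_lfs"] >= 1:
        return "preupload_lfs"               # rule 7
    if queues["hash"] >= 1:
        return "hash"                        # rule 8
    if queues["get_upload_mode"] >= 1:
        return "get_upload_mode"             # rule 9
    if queues["commit"] >= 1 and since_commit > 60:
        return "commit"                      # rule 10
    if queues["commit"] >= 1 and all(
        queues[k] == 0 for k in ("hash", "get_upload_mode", "preupload_lfs")
    ):
        return "commit"                      # rule 11: all other queues empty
    return None                              # nothing to do: wait 10s and retry
```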


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>whoami</name><anchor>huggingface_hub.HfApi.whoami</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L1735</source><parameters>[{"name": "token", "val": ": Union[bool, str, None] = None"}]</parameters><paramsdesc>- **token** (`bool` or `str`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Call the HF API to retrieve information about the currently authenticated user ("whoami").




</div></div>

## API Dataclasses[[api-dataclasses]]

### AccessRequest[[huggingface_hub.hf_api.AccessRequest]][[huggingface_hub.hf_api.AccessRequest]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.hf_api.AccessRequest</name><anchor>huggingface_hub.hf_api.AccessRequest</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L418</source><parameters>[{"name": "username", "val": ": str"}, {"name": "fullname", "val": ": str"}, {"name": "email", "val": ": Optional[str]"}, {"name": "timestamp", "val": ": datetime"}, {"name": "status", "val": ": Literal['pending', 'accepted', 'rejected']"}, {"name": "fields", "val": ": Optional[dict[str, Any]] = None"}]</parameters><paramsdesc>- **username** (`str`) --
  Username of the user who requested access.
- **fullname** (`str`) --
  Fullname of the user who requested access.
- **email** (`Optional[str]`) --
  Email of the user who requested access.
  Can only be `None` in the /accepted list if the user was granted access manually.
- **timestamp** (`datetime`) --
  Timestamp of the request.
- **status** (`Literal["pending", "accepted", "rejected"]`) --
  Status of the request. Can be one of `["pending", "accepted", "rejected"]`.
- **fields** (`dict[str, Any]`, *optional*) --
  Additional fields filled by the user in the gate form.</paramsdesc><paramgroups>0</paramgroups></docstring>
Data structure containing information about a user access request.




</div>

### CommitInfo[[huggingface_hub.CommitInfo]][[huggingface_hub.CommitInfo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.CommitInfo</name><anchor>huggingface_hub.CommitInfo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L347</source><parameters>[{"name": "*args", "val": ""}, {"name": "commit_url", "val": ": str"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **commit_url** (`str`) --
  Url where to find the commit.

- **commit_message** (`str`) --
  The summary (first line) of the commit that has been created.

- **commit_description** (`str`) --
  Description of the commit that has been created. Can be empty.

- **oid** (`str`) --
  Commit hash id. Example: `"91c54ad1727ee830252e457677f467be0bfd8a57"`.

- **pr_url** (`str`, *optional*) --
  Url to the PR that has been created, if any. Populated when `create_pr=True`
  is passed.

- **pr_revision** (`str`, *optional*) --
  Revision of the PR that has been created, if any. Populated when
  `create_pr=True` is passed. Example: `"refs/pr/1"`.

- **pr_num** (`int`, *optional*) --
  Number of the PR discussion that has been created, if any. Populated when
  `create_pr=True` is passed. Can be passed as `discussion_num` in
  [get_discussion_details()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.get_discussion_details). Example: `1`.

- **repo_url** (`RepoUrl`) --
  Repo URL of the commit containing info like repo_id, repo_type, etc.</paramsdesc><paramgroups>0</paramgroups></docstring>
Data structure containing information about a newly created commit.

Returned by any method that creates a commit on the Hub: [create_commit()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_commit), [upload_file()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.upload_file), [upload_folder()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.upload_folder),
[delete_file()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.delete_file), [delete_folder()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.delete_folder). It inherits from `str` for backward compatibility but using methods specific
to `str` is deprecated.




</div>

### DatasetInfo[[huggingface_hub.hf_api.DatasetInfo]][[huggingface_hub.DatasetInfo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.DatasetInfo</name><anchor>huggingface_hub.DatasetInfo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L896</source><parameters>[{"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **id** (`str`) --
  ID of dataset.
- **author** (`str`) --
  Author of the dataset.
- **sha** (`str`) --
  Repo SHA at this particular revision.
- **created_at** (`datetime`, *optional*) --
  Date of creation of the repo on the Hub. Note that the lowest value is `2022-03-02T23:29:04.000Z`,
  corresponding to the date when we began to store creation dates.
- **last_modified** (`datetime`, *optional*) --
  Date of last commit to the repo.
- **private** (`bool`) --
  Is the repo private.
- **disabled** (`bool`, *optional*) --
  Is the repo disabled.
- **gated** (`Literal["auto", "manual", False]`, *optional*) --
  Is the repo gated.
  If so, whether there is manual or automatic approval.
- **downloads** (`int`) --
  Number of downloads of the dataset over the last 30 days.
- **downloads_all_time** (`int`) --
  Cumulative number of downloads of the dataset since its creation.
- **likes** (`int`) --
  Number of likes of the dataset.
- **tags** (`list[str]`) --
  List of tags of the dataset.
- **card_data** (`DatasetCardData`, *optional*) --
  Dataset Card Metadata as a [huggingface_hub.repocard_data.DatasetCardData](/docs/huggingface_hub/main/ko/package_reference/cards#huggingface_hub.DatasetCardData) object.
- **siblings** (`list[RepoSibling]`) --
  List of [huggingface_hub.hf_api.RepoSibling](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.hf_api.RepoSibling) objects that constitute the dataset.
- **paperswithcode_id** (`str`, *optional*) --
  Papers with code ID of the dataset.
- **trending_score** (`int`, *optional*) --
  Trending score of the dataset.</paramsdesc><paramgroups>0</paramgroups></docstring>

Contains information about a dataset on the Hub. This object is returned by [dataset_info()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.dataset_info) and [list_datasets()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.list_datasets).

> [!TIP]
> Most attributes of this class are optional. This is because the data returned by the Hub depends on the query made.
> In general, the more specific the query, the more information is returned. On the contrary, when listing datasets
> using [list_datasets()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.list_datasets) only a subset of the attributes are returned.




</div>

### GitRefInfo[[huggingface_hub.GitRefInfo]][[huggingface_hub.GitRefInfo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.GitRefInfo</name><anchor>huggingface_hub.GitRefInfo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L1244</source><parameters>[{"name": "name", "val": ": str"}, {"name": "ref", "val": ": str"}, {"name": "target_commit", "val": ": str"}]</parameters><paramsdesc>- **name** (`str`) --
  Name of the reference (e.g. tag name or branch name).
- **ref** (`str`) --
  Full git ref on the Hub (e.g. `"refs/heads/main"` or `"refs/tags/v1.0"`).
- **target_commit** (`str`) --
  OID of the target commit for the ref (e.g. `"e7da7f221d5bf496a48136c0cd264e630fe9fcc8"`)</paramsdesc><paramgroups>0</paramgroups></docstring>

Contains information about a git reference for a repo on the Hub.




</div>

### GitCommitInfo[[huggingface_hub.GitCommitInfo]][[huggingface_hub.GitCommitInfo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.GitCommitInfo</name><anchor>huggingface_hub.GitCommitInfo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L1289</source><parameters>[{"name": "commit_id", "val": ": str"}, {"name": "authors", "val": ": list[str]"}, {"name": "created_at", "val": ": datetime"}, {"name": "title", "val": ": str"}, {"name": "message", "val": ": str"}, {"name": "formatted_title", "val": ": Optional[str]"}, {"name": "formatted_message", "val": ": Optional[str]"}]</parameters><paramsdesc>- **commit_id** (`str`) --
  OID of the commit (e.g. `"e7da7f221d5bf496a48136c0cd264e630fe9fcc8"`)
- **authors** (`list[str]`) --
  List of authors of the commit.
- **created_at** (`datetime`) --
  Datetime when the commit was created.
- **title** (`str`) --
  Title of the commit. This is a free-text value entered by the authors.
- **message** (`str`) --
  Description of the commit. This is a free-text value entered by the authors.
- **formatted_title** (`str`) --
  Title of the commit formatted as HTML. Only returned if `formatted=True` is set.
- **formatted_message** (`str`) --
  Description of the commit formatted as HTML. Only returned if `formatted=True` is set.</paramsdesc><paramgroups>0</paramgroups></docstring>

Contains information about a git commit for a repo on the Hub. Check out [list_repo_commits()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.list_repo_commits) for more details.




</div>

### GitRefs[[huggingface_hub.GitRefs]][[huggingface_hub.GitRefs]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.GitRefs</name><anchor>huggingface_hub.GitRefs</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L1263</source><parameters>[{"name": "branches", "val": ": list[GitRefInfo]"}, {"name": "converts", "val": ": list[GitRefInfo]"}, {"name": "tags", "val": ": list[GitRefInfo]"}, {"name": "pull_requests", "val": ": Optional[list[GitRefInfo]] = None"}]</parameters><paramsdesc>- **branches** (`list[GitRefInfo]`) --
  A list of [GitRefInfo](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.GitRefInfo) containing information about branches on the repo.
- **converts** (`list[GitRefInfo]`) --
  A list of [GitRefInfo](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.GitRefInfo) containing information about "convert" refs on the repo.
  Converts are refs used (internally) to push preprocessed data in Dataset repos.
- **tags** (`list[GitRefInfo]`) --
  A list of [GitRefInfo](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.GitRefInfo) containing information about tags on the repo.
- **pull_requests** (`list[GitRefInfo]`, *optional*) --
  A list of [GitRefInfo](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.GitRefInfo) containing information about pull requests on the repo.
  Only returned if `include_prs=True` is set.</paramsdesc><paramgroups>0</paramgroups></docstring>

Contains information about all git references for a repo on the Hub.

Object is returned by [list_repo_refs()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.list_repo_refs).




</div>

### ModelInfo[[huggingface_hub.hf_api.ModelInfo]][[huggingface_hub.ModelInfo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ModelInfo</name><anchor>huggingface_hub.ModelInfo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L702</source><parameters>[{"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **id** (`str`) --
  ID of model.
- **author** (`str`, *optional*) --
  Author of the model.
- **sha** (`str`, *optional*) --
  Repo SHA at this particular revision.
- **created_at** (`datetime`, *optional*) --
  Date of creation of the repo on the Hub. Note that the lowest value is `2022-03-02T23:29:04.000Z`,
  corresponding to the date when we began to store creation dates.
- **last_modified** (`datetime`, *optional*) --
  Date of last commit to the repo.
- **private** (`bool`) --
  Is the repo private.
- **disabled** (`bool`, *optional*) --
  Is the repo disabled.
- **downloads** (`int`) --
  Number of downloads of the model over the last 30 days.
- **downloads_all_time** (`int`) --
  Cumulative number of downloads of the model since its creation.
- **gated** (`Literal["auto", "manual", False]`, *optional*) --
  Is the repo gated.
  If so, whether there is manual or automatic approval.
- **gguf** (`dict`, *optional*) --
  GGUF information of the model.
- **inference** (`Literal["warm"]`, *optional*) --
  Status of the model on Inference Providers. Warm if the model is served by at least one provider.
- **inference_provider_mapping** (`list[InferenceProviderMapping]`, *optional*) --
  A list of `InferenceProviderMapping` ordered after the user's provider order.
- **likes** (`int`) --
  Number of likes of the model.
- **library_name** (`str`, *optional*) --
  Library associated with the model.
- **tags** (`list[str]`) --
  List of tags of the model. Compared to `card_data.tags`, contains extra tags computed by the Hub
  (e.g. supported libraries, model's arXiv).
- **pipeline_tag** (`str`, *optional*) --
  Pipeline tag associated with the model.
- **mask_token** (`str`, *optional*) --
  Mask token used by the model.
- **widget_data** (`Any`, *optional*) --
  Widget data associated with the model.
- **model_index** (`dict`, *optional*) --
  Model index for evaluation.
- **config** (`dict`, *optional*) --
  Model configuration.
- **transformers_info** (`TransformersInfo`, *optional*) --
  Transformers-specific info (auto class, processor, etc.) associated with the model.
- **trending_score** (`int`, *optional*) --
  Trending score of the model.
- **card_data** (`ModelCardData`, *optional*) --
  Model Card Metadata as a [huggingface_hub.repocard_data.ModelCardData](/docs/huggingface_hub/main/ko/package_reference/cards#huggingface_hub.ModelCardData) object.
- **siblings** (`list[RepoSibling]`) --
  List of [huggingface_hub.hf_api.RepoSibling](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.hf_api.RepoSibling) objects that constitute the model.
- **spaces** (`list[str]`, *optional*) --
  List of spaces using the model.
- **safetensors** (`SafeTensorsInfo`, *optional*) --
  Model's safetensors information.
- **security_repo_status** (`dict`, *optional*) --
  Model's security scan status.</paramsdesc><paramgroups>0</paramgroups></docstring>

Contains information about a model on the Hub. This object is returned by [model_info()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.model_info) and [list_models()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.list_models).

> [!TIP]
> Most attributes of this class are optional. This is because the data returned by the Hub depends on the query made.
> In general, the more specific the query, the more information is returned. On the contrary, when listing models
> using [list_models()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.list_models) only a subset of the attributes are returned.




</div>

### RepoSibling[[huggingface_hub.hf_api.RepoSibling]][[huggingface_hub.hf_api.RepoSibling]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.hf_api.RepoSibling</name><anchor>huggingface_hub.hf_api.RepoSibling</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L556</source><parameters>[{"name": "rfilename", "val": ": str"}, {"name": "size", "val": ": Optional[int] = None"}, {"name": "blob_id", "val": ": Optional[str] = None"}, {"name": "lfs", "val": ": Optional[BlobLfsInfo] = None"}]</parameters><paramsdesc>- **rfilename** (str) --
  File name, relative to the repo root.
- **size** (`int`, *optional*) --
  The file's size, in bytes. This attribute is defined when `files_metadata` argument of [repo_info()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.repo_info) is set
  to `True`. It's `None` otherwise.
- **blob_id** (`str`, *optional*) --
  The file's git OID. This attribute is defined when `files_metadata` argument of [repo_info()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.repo_info) is set to
  `True`. It's `None` otherwise.
- **lfs** (`BlobLfsInfo`, *optional*) --
  The file's LFS metadata. This attribute is defined when the `files_metadata` argument of [repo_info()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.repo_info) is set to
  `True` and the file is stored with Git LFS. It's `None` otherwise.

Contains basic information about a repo file inside a repo on the Hub.

> [!TIP]
> All attributes of this class are optional except `rfilename`. This is because only the file names are returned when
> listing repositories on the Hub (with [list_models()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.list_models), [list_datasets()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.list_datasets) or [list_spaces()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.list_spaces)). If you need more
> information like file size, blob id or lfs details, you must request them specifically from one repo at a time
> (using [model_info()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.model_info), [dataset_info()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.dataset_info) or [space_info()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.space_info)) as it adds more constraints on the backend server to
> retrieve these.




</div>

### RepoFile[[huggingface_hub.hf_api.RepoFile]][[huggingface_hub.hf_api.RepoFile]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.hf_api.RepoFile</name><anchor>huggingface_hub.hf_api.RepoFile</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L588</source><parameters>[{"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **path** (str) --
  File path relative to the repo root.
- **size** (`int`) --
  The file's size, in bytes.
- **blob_id** (`str`) --
  The file's git OID.
- **lfs** (`BlobLfsInfo`) --
  The file's LFS metadata.
- **last_commit** (`LastCommitInfo`, *optional*) --
  The file's last commit metadata. Only defined if [list_repo_tree()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.list_repo_tree) and [get_paths_info()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.get_paths_info)
  are called with `expand=True`.
- **security** (`BlobSecurityInfo`, *optional*) --
  The file's security scan metadata. Only defined if [list_repo_tree()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.list_repo_tree) and [get_paths_info()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.get_paths_info)
  are called with `expand=True`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Contains information about a file on the Hub.




</div>

### RepoUrl[[huggingface_hub.RepoUrl]][[huggingface_hub.RepoUrl]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.RepoUrl</name><anchor>huggingface_hub.RepoUrl</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L494</source><parameters>[{"name": "url", "val": ": Any"}, {"name": "endpoint", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **url** (`Any`) --
  String value of the repo url.
- **endpoint** (`str`, *optional*) --
  Endpoint of the Hub. Defaults to <https://huggingface.co>.</paramsdesc><paramgroups>0</paramgroups><raises>- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If URL cannot be parsed.
- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If `repo_type` is unknown.</raises><raisederrors>``ValueError``</raisederrors></docstring>
Subclass of `str` describing a repo URL on the Hub.

`RepoUrl` is returned by `HfApi.create_repo`. It inherits from `str` for backward
compatibility. At initialization, the URL is parsed to populate properties:
- endpoint (`str`)
- namespace (`Optional[str]`)
- repo_name (`str`)
- repo_id (`str`)
- repo_type (`Literal["model", "dataset", "space"]`)
- url (`str`)



<ExampleCodeBlock anchor="huggingface_hub.RepoUrl.example">

Example:
```py
>>> RepoUrl('https://huggingface.co/gpt2')
RepoUrl('https://huggingface.co/gpt2', endpoint='https://huggingface.co', repo_type='model', repo_id='gpt2')

>>> RepoUrl('https://hub-ci.huggingface.co/datasets/dummy_user/dummy_dataset', endpoint='https://hub-ci.huggingface.co')
RepoUrl('https://hub-ci.huggingface.co/datasets/dummy_user/dummy_dataset', endpoint='https://hub-ci.huggingface.co', repo_type='dataset', repo_id='dummy_user/dummy_dataset')

>>> RepoUrl('hf://datasets/my-user/my-dataset')
RepoUrl('hf://datasets/my-user/my-dataset', endpoint='https://huggingface.co', repo_type='dataset', repo_id='my-user/my-dataset')

>>> HfApi.create_repo("dummy_model")
RepoUrl('https://huggingface.co/Wauplin/dummy_model', endpoint='https://huggingface.co', repo_type='model', repo_id='Wauplin/dummy_model')
```

</ExampleCodeBlock>






</div>

### SafetensorsRepoMetadata[[huggingface_hub.utils.SafetensorsRepoMetadata]][[huggingface_hub.utils.SafetensorsRepoMetadata]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.utils.SafetensorsRepoMetadata</name><anchor>huggingface_hub.utils.SafetensorsRepoMetadata</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_safetensors.py#L74</source><parameters>[{"name": "metadata", "val": ": typing.Optional[dict]"}, {"name": "sharded", "val": ": bool"}, {"name": "weight_map", "val": ": dict"}, {"name": "files_metadata", "val": ": dict"}]</parameters><paramsdesc>- **metadata** (`dict`, *optional*) --
  The metadata contained in the 'model.safetensors.index.json' file, if it exists. Only populated for sharded
  models.
- **sharded** (`bool`) --
  Whether the repo contains a sharded model or not.
- **weight_map** (`dict[str, str]`) --
  A map of all weights. Keys are tensor names and values are filenames of the files containing the tensors.
- **files_metadata** (`dict[str, SafetensorsFileMetadata]`) --
  A map of all files metadata. Keys are filenames and values are the metadata of the corresponding file, as
  a `SafetensorsFileMetadata` object.
- **parameter_count** (`dict[str, int]`) --
  A map of the number of parameters per data type. Keys are data types and values are the number of parameters
  of that data type.</paramsdesc><paramgroups>0</paramgroups></docstring>
Metadata for a Safetensors repo.

A repo is considered to be a Safetensors repo if it contains either a 'model.safetensors' weight file (non-sharded
model) or a 'model.safetensors.index.json' index file (sharded model) at its root.

This class is returned by [get_safetensors_metadata()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.get_safetensors_metadata).

For more details regarding the safetensors format, check out https://huggingface.co/docs/safetensors/index#format.
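For illustration, the repo-level fields can be sketched from a hypothetical `model.safetensors.index.json` payload (the tensor and shard filenames below are invented; this is not the library's actual parser):

```python
import json

# Hypothetical content of a 'model.safetensors.index.json' file for a
# sharded model (tensor and shard filenames invented for illustration).
index_payload = """
{
  "metadata": {"total_size": 24},
  "weight_map": {
    "embed.weight": "model-00001-of-00002.safetensors",
    "lm_head.weight": "model-00002-of-00002.safetensors"
  }
}
"""
index = json.loads(index_payload)

metadata = index["metadata"]      # populated only for sharded models
weight_map = index["weight_map"]  # tensor name -> filename holding that tensor
shard_files = sorted(set(weight_map.values()))
print(shard_files)  # ['model-00001-of-00002.safetensors', 'model-00002-of-00002.safetensors']
```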




</div>

### SafetensorsFileMetadata[[huggingface_hub.utils.SafetensorsFileMetadata]][[huggingface_hub.utils.SafetensorsFileMetadata]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.utils.SafetensorsFileMetadata</name><anchor>huggingface_hub.utils.SafetensorsFileMetadata</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_safetensors.py#L44</source><parameters>[{"name": "metadata", "val": ": dict"}, {"name": "tensors", "val": ": dict"}]</parameters><paramsdesc>- **metadata** (`dict`) --
  The metadata contained in the file.
- **tensors** (`dict[str, TensorInfo]`) --
  A map of all tensors. Keys are tensor names and values are information about the corresponding tensor, as a
  `TensorInfo` object.
- **parameter_count** (`dict[str, int]`) --
  A map of the number of parameters per data type. Keys are data types and values are the number of parameters
  of that data type.</paramsdesc><paramgroups>0</paramgroups></docstring>
Metadata for a Safetensors file hosted on the Hub.

This class is returned by [parse_safetensors_file_metadata()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.parse_safetensors_file_metadata).

For more details regarding the safetensors format, check out https://huggingface.co/docs/safetensors/index#format.




</div>

### SpaceInfo[[huggingface_hub.hf_api.SpaceInfo]][[huggingface_hub.SpaceInfo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.SpaceInfo</name><anchor>huggingface_hub.SpaceInfo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L1010</source><parameters>[{"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **id** (`str`) --
  ID of the Space.
- **author** (`str`, *optional*) --
  Author of the Space.
- **sha** (`str`, *optional*) --
  Repo SHA at this particular revision.
- **created_at** (`datetime`, *optional*) --
  Date of creation of the repo on the Hub. Note that the lowest value is `2022-03-02T23:29:04.000Z`,
  corresponding to the date when we began to store creation dates.
- **last_modified** (`datetime`, *optional*) --
  Date of last commit to the repo.
- **private** (`bool`) --
  Is the repo private.
- **gated** (`Literal["auto", "manual", False]`, *optional*) --
  Is the repo gated.
  If so, whether there is manual or automatic approval.
- **disabled** (`bool`, *optional*) --
  Is the Space disabled.
- **host** (`str`, *optional*) --
  Host URL of the Space.
- **subdomain** (`str`, *optional*) --
  Subdomain of the Space.
- **likes** (`int`) --
  Number of likes of the Space.
- **tags** (`list[str]`) --
  List of tags of the Space.
- **siblings** (`list[RepoSibling]`) --
  List of [huggingface_hub.hf_api.RepoSibling](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.hf_api.RepoSibling) objects that constitute the Space.
- **card_data** (`SpaceCardData`, *optional*) --
  Space Card Metadata  as a [huggingface_hub.repocard_data.SpaceCardData](/docs/huggingface_hub/main/ko/package_reference/cards#huggingface_hub.SpaceCardData) object.
- **runtime** (`SpaceRuntime`, *optional*) --
  Space runtime information as a [huggingface_hub.hf_api.SpaceRuntime](/docs/huggingface_hub/main/ko/package_reference/space_runtime#huggingface_hub.SpaceRuntime) object.
- **sdk** (`str`, *optional*) --
  SDK used by the Space.
- **models** (`list[str]`, *optional*) --
  List of models used by the Space.
- **datasets** (`list[str]`, *optional*) --
  List of datasets used by the Space.
- **trending_score** (`int`, *optional*) --
  Trending score of the Space.</paramsdesc><paramgroups>0</paramgroups></docstring>

Contains information about a Space on the Hub. This object is returned by [space_info()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.space_info) and [list_spaces()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.list_spaces).

> [!TIP]
> Most attributes of this class are optional. This is because the data returned by the Hub depends on the query made.
> In general, the more specific the query, the more information is returned. On the contrary, when listing spaces
> using [list_spaces()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.list_spaces) only a subset of the attributes are returned.




</div>

### TensorInfo[[huggingface_hub.utils.TensorInfo]][[huggingface_hub.utils.TensorInfo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.utils.TensorInfo</name><anchor>huggingface_hub.utils.TensorInfo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_safetensors.py#L14</source><parameters>[{"name": "dtype", "val": ": typing.Literal['F64', 'F32', 'F16', 'BF16', 'I64', 'I32', 'I16', 'I8', 'U8', 'BOOL']"}, {"name": "shape", "val": ": list"}, {"name": "data_offsets", "val": ": tuple"}]</parameters><paramsdesc>- **dtype** (`str`) --
  The data type of the tensor ("F64", "F32", "F16", "BF16", "I64", "I32", "I16", "I8", "U8", "BOOL").
- **shape** (`list[int]`) --
  The shape of the tensor.
- **data_offsets** (`tuple[int, int]`) --
  The offsets of the data in the file as a tuple `(BEGIN, END)`.
- **parameter_count** (`int`) --
  The number of parameters in the tensor.</paramsdesc><paramgroups>0</paramgroups></docstring>
Information about a tensor.

For more details regarding the safetensors format, check out https://huggingface.co/docs/safetensors/index#format.
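For illustration, here is a stdlib-only sketch of how such tensor information is laid out in a safetensors header: an 8-byte little-endian header length, a JSON header, then the raw tensor bytes (a fake single-tensor payload built in memory, not the library's actual parser):

```python
import json
import struct

# Build a minimal safetensors-style payload in memory.
header = {
    "weight": {"dtype": "F32", "shape": [2, 3], "data_offsets": [0, 24]},
}
header_bytes = json.dumps(header).encode("utf-8")
payload = struct.pack("<Q", len(header_bytes)) + header_bytes + b"\x00" * 24

# Parse it back, as a metadata reader would.
(n,) = struct.unpack("<Q", payload[:8])
parsed = json.loads(payload[8 : 8 + n])
info = parsed["weight"]

# parameter_count is the product of the shape dimensions.
parameter_count = 1
for dim in info["shape"]:
    parameter_count *= dim
print(info["dtype"], parameter_count)  # F32 6
```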




</div>

### User[[huggingface_hub.User]][[huggingface_hub.User]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.User</name><anchor>huggingface_hub.User</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L1409</source><parameters>[{"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **username** (`str`) --
  Name of the user on the Hub (unique).
- **fullname** (`str`) --
  User's full name.
- **avatar_url** (`str`) --
  URL of the user's avatar.
- **details** (`str`, *optional*) --
  User's details.
- **is_following** (`bool`, *optional*) --
  Whether the authenticated user is following this user.
- **is_pro** (`bool`, *optional*) --
  Whether the user is a pro user.
- **num_models** (`int`, *optional*) --
  Number of models created by the user.
- **num_datasets** (`int`, *optional*) --
  Number of datasets created by the user.
- **num_spaces** (`int`, *optional*) --
  Number of spaces created by the user.
- **num_discussions** (`int`, *optional*) --
  Number of discussions initiated by the user.
- **num_papers** (`int`, *optional*) --
  Number of papers authored by the user.
- **num_upvotes** (`int`, *optional*) --
  Number of upvotes received by the user.
- **num_likes** (`int`, *optional*) --
  Number of likes given by the user.
- **num_following** (`int`, *optional*) --
  Number of users this user is following.
- **num_followers** (`int`, *optional*) --
  Number of users following this user.
- **orgs** (list of `Organization`) --
  List of organizations the user is part of.</paramsdesc><paramgroups>0</paramgroups></docstring>

Contains information about a user on the Hub.




</div>

### UserLikes[[huggingface_hub.UserLikes]][[huggingface_hub.UserLikes]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.UserLikes</name><anchor>huggingface_hub.UserLikes</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L1322</source><parameters>[{"name": "user", "val": ": str"}, {"name": "total", "val": ": int"}, {"name": "datasets", "val": ": list[str]"}, {"name": "models", "val": ": list[str]"}, {"name": "spaces", "val": ": list[str]"}]</parameters><paramsdesc>- **user** (`str`) --
  Name of the user for which we fetched the likes.
- **total** (`int`) --
  Total number of likes.
- **datasets** (`list[str]`) --
  List of datasets liked by the user (as repo_ids).
- **models** (`list[str]`) --
  List of models liked by the user (as repo_ids).
- **spaces** (`list[str]`) --
  List of spaces liked by the user (as repo_ids).</paramsdesc><paramgroups>0</paramgroups></docstring>

Contains information about a user's likes on the Hub.




</div>

## CommitOperation[[huggingface_hub.CommitOperationAdd]][[huggingface_hub.CommitOperationAdd]]

The supported values for `CommitOperation()` are the following:

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.CommitOperationAdd</name><anchor>huggingface_hub.CommitOperationAdd</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_commit_api.py#L125</source><parameters>[{"name": "path_in_repo", "val": ": str"}, {"name": "path_or_fileobj", "val": ": typing.Union[str, pathlib.Path, bytes, typing.BinaryIO]"}]</parameters><paramsdesc>- **path_in_repo** (`str`) --
  Relative filepath in the repo, for example: `"checkpoints/1fec34a/weights.bin"`
- **path_or_fileobj** (`str`, `Path`, `bytes`, or `BinaryIO`) --
  Either:
  - a path to a local file (as `str` or `pathlib.Path`) to upload
  - a buffer of bytes (`bytes`) holding the content of the file to upload
  - a "file object" (subclass of `io.BufferedIOBase`), typically obtained
    with `open(path, "rb")`. It must support `seek()` and `tell()` methods.</paramsdesc><paramgroups>0</paramgroups><raises>- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If `path_or_fileobj` is not one of `str`, `Path`, `bytes` or `io.BufferedIOBase`.
- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If `path_or_fileobj` is a `str` or `Path` but not a path to an existing file.
- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If `path_or_fileobj` is a `io.BufferedIOBase` but it doesn't support both
  `seek()` and `tell()`.</raises><raisederrors>``ValueError``</raisederrors></docstring>

Data structure holding necessary info to upload a file to a repository on the Hub.









<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>as_file</name><anchor>huggingface_hub.CommitOperationAdd.as_file</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_commit_api.py#L207</source><parameters>[{"name": "with_tqdm", "val": ": bool = False"}]</parameters><paramsdesc>- **with_tqdm** (`bool`, *optional*, defaults to `False`) --
  If True, iterating over the file object will display a progress bar. Only
  works if the file-like object is a path to a file. Pure bytes and buffers
  are not supported.</paramsdesc><paramgroups>0</paramgroups></docstring>

A context manager that yields a file-like object for reading the underlying
data behind `path_or_fileobj`.



<ExampleCodeBlock anchor="huggingface_hub.CommitOperationAdd.as_file.example">

Example:

```python
>>> operation = CommitOperationAdd(
...     path_in_repo="remote/dir/weights.h5",
...     path_or_fileobj="./local/weights.h5",
... )

>>> with operation.as_file() as file:
...     content = file.read()

>>> with operation.as_file(with_tqdm=True) as file:
...     while True:
...         data = file.read(1024)
...         if not data:
...              break
config.json: 100%|█████████████████████████| 8.19k/8.19k [00:02<00:00, 3.72kB/s]

>>> with operation.as_file(with_tqdm=True) as file:
...     httpx.put(..., data=file)
config.json: 100%|█████████████████████████| 8.19k/8.19k [00:02<00:00, 3.72kB/s]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>b64content</name><anchor>huggingface_hub.CommitOperationAdd.b64content</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_commit_api.py#L257</source><parameters>[]</parameters></docstring>

The base64-encoded content of `path_or_fileobj`

Returns: `bytes`
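A stdlib sketch of the equivalent computation for a path-based operation (illustrative only, not the property's actual implementation):

```python
import base64
import tempfile
from pathlib import Path

# Conceptually, b64content for a path-based operation amounts to:
# read the raw bytes of the file, then base64-encode them.
with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "config.json"
    path.write_bytes(b'{"hidden_size": 768}')
    b64content = base64.b64encode(path.read_bytes())

print(b64content.decode("ascii"))
```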


</div></div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.CommitOperationDelete</name><anchor>huggingface_hub.CommitOperationDelete</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_commit_api.py#L58</source><parameters>[{"name": "path_in_repo", "val": ": str"}, {"name": "is_folder", "val": ": typing.Union[bool, typing.Literal['auto']] = 'auto'"}]</parameters><paramsdesc>- **path_in_repo** (`str`) --
  Relative filepath in the repo, for example: `"checkpoints/1fec34a/weights.bin"`
  for a file or `"checkpoints/1fec34a/"` for a folder.
- **is_folder** (`bool` or `Literal["auto"]`, *optional*) --
  Whether the Delete Operation applies to a folder or not. If "auto", the path
  type (file or folder) is guessed automatically by looking if path ends with
  a "/" (folder) or not (file). To explicitly set the path type, you can set
  `is_folder=True` or `is_folder=False`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Data structure holding necessary info to delete a file or a folder from a repository
on the Hub.
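The `"auto"` rule above is easy to sketch (an illustration of the documented behavior with a hypothetical helper name, not the library's code):

```python
from typing import Literal, Union

def resolve_is_folder(path_in_repo: str,
                      is_folder: Union[bool, Literal["auto"]] = "auto") -> bool:
    """Guess whether a delete targets a folder: with "auto", a trailing
    slash means folder, anything else means file."""
    if is_folder == "auto":
        return path_in_repo.endswith("/")
    return bool(is_folder)

print(resolve_is_folder("checkpoints/1fec34a/weights.bin"))  # False
print(resolve_is_folder("checkpoints/1fec34a/"))             # True
print(resolve_is_folder("logs", is_folder=True))             # True
```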




</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.CommitOperationCopy</name><anchor>huggingface_hub.CommitOperationCopy</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_commit_api.py#L89</source><parameters>[{"name": "src_path_in_repo", "val": ": str"}, {"name": "path_in_repo", "val": ": str"}, {"name": "src_revision", "val": ": typing.Optional[str] = None"}, {"name": "_src_oid", "val": ": typing.Optional[str] = None"}, {"name": "_dest_oid", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **src_path_in_repo** (`str`) --
  Relative filepath in the repo of the file to be copied, e.g. `"checkpoints/1fec34a/weights.bin"`.
- **path_in_repo** (`str`) --
  Relative filepath in the repo where to copy the file, e.g. `"checkpoints/1fec34a/weights_copy.bin"`.
- **src_revision** (`str`, *optional*) --
  The git revision of the file to be copied. Can be any valid git revision.
  Defaults to the target commit revision.</paramsdesc><paramgroups>0</paramgroups></docstring>

Data structure holding necessary info to copy a file in a repository on the Hub.

Limitations:
- Only LFS files can be copied. To copy a regular file, you need to download it locally and re-upload it.
- Cross-repository copies are not supported.

Note: you can combine a [CommitOperationCopy](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.CommitOperationCopy) and a [CommitOperationDelete](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.CommitOperationDelete) to rename an LFS file on the Hub.




</div>

## CommitScheduler[[huggingface_hub.CommitScheduler]][[huggingface_hub.CommitScheduler]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.CommitScheduler</name><anchor>huggingface_hub.CommitScheduler</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_commit_scheduler.py#L29</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "folder_path", "val": ": typing.Union[str, pathlib.Path]"}, {"name": "every", "val": ": typing.Union[int, float] = 5"}, {"name": "path_in_repo", "val": ": typing.Optional[str] = None"}, {"name": "repo_type", "val": ": typing.Optional[str] = None"}, {"name": "revision", "val": ": typing.Optional[str] = None"}, {"name": "private", "val": ": typing.Optional[bool] = None"}, {"name": "token", "val": ": typing.Optional[str] = None"}, {"name": "allow_patterns", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "ignore_patterns", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "squash_history", "val": ": bool = False"}, {"name": "hf_api", "val": ": typing.Optional[ForwardRef('HfApi')] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The id of the repo to commit to.
- **folder_path** (`str` or `Path`) --
  Path to the local folder to upload regularly.
- **every** (`int` or `float`, *optional*) --
  The number of minutes between each commit. Defaults to 5 minutes.
- **path_in_repo** (`str`, *optional*) --
  Relative path of the directory in the repo, for example: `"checkpoints/"`. Defaults to the root folder
  of the repository.
- **repo_type** (`str`, *optional*) --
  The type of the repo to commit to. Defaults to `model`.
- **revision** (`str`, *optional*) --
  The revision of the repo to commit to. Defaults to `main`.
- **private** (`bool`, *optional*) --
  Whether to make the repo private. If `None` (default), the repo will be public unless the organization's default is private. This value is ignored if the repo already exists.
- **token** (`str`, *optional*) --
  The token to use to commit to the repo. Defaults to the token saved on the machine.
- **allow_patterns** (`list[str]` or `str`, *optional*) --
  If provided, only files matching at least one pattern are uploaded.
- **ignore_patterns** (`list[str]` or `str`, *optional*) --
  If provided, files matching any of the patterns are not uploaded.
- **squash_history** (`bool`, *optional*) --
  Whether to squash the history of the repo after each commit. Defaults to `False`. Squashing commits is
  useful to avoid degraded performances on the repo when it grows too large.
- **hf_api** (`HfApi`, *optional*) --
  The [HfApi](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi) client to use to commit to the Hub. Can be set with custom settings (user agent, token,...).</paramsdesc><paramgroups>0</paramgroups></docstring>

Scheduler to upload a local folder to the Hub at regular intervals (e.g. push to hub every 5 minutes).

The recommended way to use the scheduler is to use it as a context manager. This ensures that the scheduler is
properly stopped and the last commit is triggered when the script ends. The scheduler can also be stopped manually
with the `stop` method. Check out the [upload guide](https://huggingface.co/docs/huggingface_hub/guides/upload#scheduled-uploads)
to learn more about how to use it.
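The mechanism can be sketched with a daemon thread that pushes every `every` minutes and flushes one last time on exit (a simplified, hypothetical `MiniScheduler`, not the real implementation):

```python
import threading

class MiniScheduler:
    """Simplified sketch of the scheduler loop: call `push` every `every`
    minutes in a background thread until stopped."""

    def __init__(self, push, every=5):
        self.push = push
        self.every = every
        self._stop_event = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        # Event.wait doubles as an interruptible sleep between pushes.
        while not self._stop_event.wait(self.every * 60):
            self.push()

    def stop(self):
        # Like CommitScheduler.stop(): a stopped scheduler cannot be restarted.
        self._stop_event.set()
        self._thread.join()

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        # Leaving the context stops the loop and triggers one final push,
        # mirroring the recommended context-manager usage.
        self.stop()
        self.push()

# Usage: collect pushes for a fraction of a second with a tiny interval.
calls = []
with MiniScheduler(push=lambda: calls.append("push"), every=0.0005):
    threading.Event().wait(0.1)
```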



<ExampleCodeBlock anchor="huggingface_hub.CommitScheduler.example">

Example:
```py
>>> from pathlib import Path
>>> from huggingface_hub import CommitScheduler

# Scheduler uploads every 10 minutes
>>> csv_path = Path("watched_folder/data.csv")
>>> CommitScheduler(repo_id="test_scheduler", repo_type="dataset", folder_path=csv_path.parent, every=10)

>>> with csv_path.open("a") as f:
...     f.write("first line")

# Some time later (...)
>>> with csv_path.open("a") as f:
...     f.write("second line")
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.CommitScheduler.example-2">

Example using a context manager:
```py
>>> from pathlib import Path
>>> from huggingface_hub import CommitScheduler

>>> with CommitScheduler(repo_id="test_scheduler", repo_type="dataset", folder_path="watched_folder", every=10) as scheduler:
...     csv_path = Path("watched_folder/data.csv")
...     with csv_path.open("a") as f:
...         f.write("first line")
...     (...)
...     with csv_path.open("a") as f:
...         f.write("second line")

# Scheduler is now stopped and the last commit has been triggered
```

</ExampleCodeBlock>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>push_to_hub</name><anchor>huggingface_hub.CommitScheduler.push_to_hub</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_commit_scheduler.py#L204</source><parameters>[]</parameters></docstring>

Push folder to the Hub and return the commit info.

> [!WARNING]
> This method is not meant to be called directly. It is run in the background by the scheduler, respecting a
> queue mechanism to avoid concurrent commits. Making a direct call to the method might lead to concurrency
> issues.

The default behavior of `push_to_hub` is to assume an append-only folder. It lists all files in the folder and
uploads only changed files. If no changes are found, the method returns without committing anything. If you want
to change this behavior, you can inherit from [CommitScheduler](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.CommitScheduler) and override this method. This can be useful
for example to compress data together in a single file before committing. For more details and examples, check
out our [integration guide](https://huggingface.co/docs/huggingface_hub/main/en/guides/upload#scheduled-uploads).


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>stop</name><anchor>huggingface_hub.CommitScheduler.stop</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_commit_scheduler.py#L157</source><parameters>[]</parameters></docstring>
Stop the scheduler.

A stopped scheduler cannot be restarted. Mostly for test purposes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>trigger</name><anchor>huggingface_hub.CommitScheduler.trigger</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_commit_scheduler.py#L181</source><parameters>[]</parameters></docstring>
Trigger a `push_to_hub` and return a future.

This method is automatically called every `every` minutes. You can also call it manually to trigger a commit
immediately, without waiting for the next scheduled commit.


</div></div>

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/package_reference/hf_api.md" />

### Managing collections[[managing-collections]]
https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/collections.md

# Managing collections[[managing-collections]]

Check out the [HfApi](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi) page for a detailed explanation of the methods for managing collections on the Hub.

- Get a collection's content: [get_collection()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.get_collection)
- Create a new collection: [create_collection()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_collection)
- Update a collection: [update_collection_metadata()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.update_collection_metadata)
- Delete a collection: [delete_collection()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.delete_collection)
- Add an item to a collection: [add_collection_item()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.add_collection_item)
- Update an item in a collection: [update_collection_item()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.update_collection_item)
- Remove an item from a collection: [delete_collection_item()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.delete_collection_item)


### Collection[[huggingface_hub.Collection]][[huggingface_hub.Collection]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.Collection</name><anchor>huggingface_hub.Collection</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L1181</source><parameters>[{"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **slug** (`str`) --
  Slug of the collection. E.g. `"TheBloke/recent-models-64f9a55bb3115b4f513ec026"`.
- **title** (`str`) --
  Title of the collection. E.g. `"Recent models"`.
- **owner** (`str`) --
  Owner of the collection. E.g. `"TheBloke"`.
- **items** (`list[CollectionItem]`) --
  List of items in the collection.
- **last_updated** (`datetime`) --
  Date of the last update of the collection.
- **position** (`int`) --
  Position of the collection in the list of collections of the owner.
- **private** (`bool`) --
  Whether the collection is private or not.
- **theme** (`str`) --
  Theme of the collection. E.g. `"green"`.
- **upvotes** (`int`) --
  Number of upvotes of the collection.
- **description** (`str`, *optional*) --
  Description of the collection, as plain text.
- **url** (`str`) --
  (property) URL of the collection on the Hub.</paramsdesc><paramgroups>0</paramgroups></docstring>

Contains information about a Collection on the Hub.




</div>

### CollectionItem[[huggingface_hub.CollectionItem]][[huggingface_hub.CollectionItem]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.CollectionItem</name><anchor>huggingface_hub.CollectionItem</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L1136</source><parameters>[{"name": "_id", "val": ": str"}, {"name": "id", "val": ": str"}, {"name": "type", "val": ": CollectionItemType_T"}, {"name": "position", "val": ": int"}, {"name": "note", "val": ": Optional[dict] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **item_object_id** (`str`) --
  Unique ID of the item in the collection.
- **item_id** (`str`) --
  ID of the underlying object on the Hub. Can be either a repo_id, a paper id or a collection slug.
  e.g. `"jbilcke-hf/ai-comic-factory"`, `"2307.09288"`, `"celinah/cerebras-function-calling-682607169c35fbfa98b30b9a"`.
- **item_type** (`str`) --
  Type of the underlying object. Can be one of `"model"`, `"dataset"`, `"space"`, `"paper"` or `"collection"`.
- **position** (`int`) --
  Position of the item in the collection.
- **note** (`str`, *optional*) --
  Note associated with the item, as plain text.</paramsdesc><paramgroups>0</paramgroups></docstring>

Contains information about an item of a Collection (model, dataset, Space, paper or collection).




</div>

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/package_reference/collections.md" />

### Cache-system reference[[cache-system-reference]]
https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/cache.md

# Cache-system reference[[cache-system-reference]]

Since version 0.8.0, the cache system has evolved into a central cache system shared across all libraries that depend on the Hub. Read the [cache-system guide](../guides/manage-cache) for a detailed presentation of caching at Hugging Face.

## Helpers[[helpers]]

### try_to_load_from_cache[[huggingface_hub.try_to_load_from_cache]][[huggingface_hub.try_to_load_from_cache]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.try_to_load_from_cache</name><anchor>huggingface_hub.try_to_load_from_cache</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/file_download.py#L1386</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "filename", "val": ": str"}, {"name": "cache_dir", "val": ": typing.Union[str, pathlib.Path, NoneType] = None"}, {"name": "revision", "val": ": typing.Optional[str] = None"}, {"name": "repo_type", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **cache_dir** (`str` or `os.PathLike`) --
  The folder where the cached files lie.
- **repo_id** (`str`) --
  The ID of the repo on huggingface.co.
- **filename** (`str`) --
  The filename to look for inside `repo_id`.
- **revision** (`str`, *optional*) --
  The specific model version to use. Will default to `"main"` if it's not provided and no `commit_hash` is
  provided either.
- **repo_type** (`str`, *optional*) --
  The type of the repository. Will default to `"model"`.</paramsdesc><paramgroups>0</paramgroups><rettype>`Optional[str]` or `_CACHED_NO_EXIST`</rettype><retdesc>Will return `None` if the file was not cached. Otherwise:
- The exact path to the cached file if it's found in the cache
- A special value `_CACHED_NO_EXIST` if the file does not exist at the given commit hash and this fact was
  cached.</retdesc></docstring>

Explores the cache to return the latest cached file for a given revision if found.

This function will not raise any exception if the file is not cached.







<ExampleCodeBlock anchor="huggingface_hub.try_to_load_from_cache.example">

Example:

```python
from huggingface_hub import try_to_load_from_cache, _CACHED_NO_EXIST

filepath = try_to_load_from_cache(repo_id="gpt2", filename="config.json")  # illustrative repo_id/filename
if isinstance(filepath, str):
    # file exists and is cached
    ...
elif filepath is _CACHED_NO_EXIST:
    # non-existence of file is cached
    ...
else:
    # file is not cached
    ...
```

</ExampleCodeBlock>


</div>

### cached_assets_path[[huggingface_hub.cached_assets_path]][[huggingface_hub.cached_assets_path]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.cached_assets_path</name><anchor>huggingface_hub.cached_assets_path</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_cache_assets.py#L21</source><parameters>[{"name": "library_name", "val": ": str"}, {"name": "namespace", "val": ": str = 'default'"}, {"name": "subfolder", "val": ": str = 'default'"}, {"name": "assets_dir", "val": ": typing.Union[str, pathlib.Path, NoneType] = None"}]</parameters><paramsdesc>- **library_name** (`str`) --
  Name of the library that will manage the cache folder. Example: `"dataset"`.
- **namespace** (`str`, *optional*, defaults to "default") --
  Namespace to which the data belongs. Example: `"SQuAD"`.
- **subfolder** (`str`, *optional*, defaults to "default") --
  Subfolder in which the data will be stored. Example: `extracted`.
- **assets_dir** (`str`, `Path`, *optional*) --
  Path to the folder where assets are cached. This must not be the same folder
  where Hub files are cached. Defaults to `HF_HOME / "assets"` if not provided.
  Can also be set with `HF_ASSETS_CACHE` environment variable.</paramsdesc><paramgroups>0</paramgroups><retdesc>Path to the cache folder (`Path`).</retdesc></docstring>
Return a folder path to cache arbitrary files.

`huggingface_hub` provides a canonical folder path to store assets. This is the
recommended way to integrate caching in a downstream library, as it will benefit
from the built-in tools to scan and delete the cache properly.

The distinction is made between files cached from the Hub and assets. Files from the
Hub are cached in a git-aware manner and entirely managed by `huggingface_hub`. See
[related documentation](https://huggingface.co/docs/huggingface_hub/how-to-cache).
All other files that a downstream library caches are considered to be "assets"
(files downloaded from external sources, extracted from a .tar archive, preprocessed
for training,...).

Once the folder path is generated, it is guaranteed to exist and to be a directory.
The path is based on 3 levels of depth: the library name, a namespace and a
subfolder. Those 3 levels grant flexibility while allowing `huggingface_hub` to
expect folders when scanning/deleting parts of the assets cache. Within a library,
it is expected that all namespaces share the same subset of subfolder names, but this
is not a mandatory rule. The downstream library then has full control over which file
structure to adopt within its cache. Namespace and subfolder are optional (they
default to a `"default/"` subfolder), but the library name is mandatory as we want every
downstream library to manage its own cache.
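As a sketch of the flow above (the `"mylib"` library name and the file written below are hypothetical), a downstream library requests its folder once and then writes freely inside it:

```python
import tempfile

from huggingface_hub import cached_assets_path

# Use a throwaway assets_dir so this sketch does not touch the real cache.
assets_dir = tempfile.mkdtemp()

# The returned folder is created if needed and guaranteed to be a directory.
path = cached_assets_path(
    library_name="mylib",     # hypothetical library name (mandatory)
    namespace="my-dataset",   # optional, defaults to "default"
    subfolder="extracted",    # optional, defaults to "default"
    assets_dir=assets_dir,
)

# The downstream library has full control over the structure below this point.
(path / "data.txt").write_text("hello")
print(path)  # <assets_dir>/mylib/my-dataset/extracted
```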

<ExampleCodeBlock anchor="huggingface_hub.cached_assets_path.example">

Expected tree:
```text
    assets/
    ├── datasets/
    │   ├── SQuAD/
    │   │   ├── downloaded/
    │   │   ├── extracted/
    │   │   └── processed/
    │   └── Helsinki-NLP--tatoeba_mt/
    │       ├── downloaded/
    │       ├── extracted/
    │       └── processed/
    └── transformers/
        ├── default/
        │   └── something/
        └── bert-base-cased/
            ├── default/
            └── training/
    hub/
    └── models--julien-c--EsperBERTo-small/
        ├── blobs/
        │   ├── (...)
        │   └── (...)
        ├── refs/
        │   └── (...)
        └── snapshots/
            ├── 2439f60ef33a0d46d85da5001d52aeda5b00ce9f/
            │   └── (...)
            └── bbc77c8132af1cc5cf678da3f1ddf2de43606d48/
                └── (...)
```

</ExampleCodeBlock>






<ExampleCodeBlock anchor="huggingface_hub.cached_assets_path.example-2">

Example:
```py
>>> from huggingface_hub import cached_assets_path

>>> cached_assets_path(library_name="datasets", namespace="SQuAD", subfolder="download")
PosixPath('/home/wauplin/.cache/huggingface/extra/datasets/SQuAD/download')

>>> cached_assets_path(library_name="datasets", namespace="SQuAD", subfolder="extracted")
PosixPath('/home/wauplin/.cache/huggingface/extra/datasets/SQuAD/extracted')

>>> cached_assets_path(library_name="datasets", namespace="Helsinki-NLP/tatoeba_mt")
PosixPath('/home/wauplin/.cache/huggingface/extra/datasets/Helsinki-NLP--tatoeba_mt/default')

>>> cached_assets_path(library_name="datasets", assets_dir="/tmp/tmp123456")
PosixPath('/tmp/tmp123456/datasets/default/default')
```

</ExampleCodeBlock>


</div>

### scan_cache_dir[[huggingface_hub.scan_cache_dir]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.scan_cache_dir</name><anchor>huggingface_hub.scan_cache_dir</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_cache_manager.py#L561</source><parameters>[{"name": "cache_dir", "val": ": typing.Union[str, pathlib.Path, NoneType] = None"}]</parameters><paramsdesc>- **cache_dir** (`str` or `Path`, `optional`) --
  Cache directory to scan. Defaults to the default HF cache directory.</paramsdesc><paramgroups>0</paramgroups></docstring>
Scan the entire HF cache-system and return a [~HFCacheInfo](/docs/huggingface_hub/main/ko/package_reference/cache#huggingface_hub.HFCacheInfo) structure.

Use `scan_cache_dir` in order to programmatically scan your cache-system. The cache
will be scanned repo by repo. If a repo is corrupted, a [~CorruptedCacheException](/docs/huggingface_hub/main/ko/package_reference/cache#huggingface_hub.CorruptedCacheException)
will be thrown internally but captured and returned in the [~HFCacheInfo](/docs/huggingface_hub/main/ko/package_reference/cache#huggingface_hub.HFCacheInfo)
structure. Only valid repos get a proper report.

<ExampleCodeBlock anchor="huggingface_hub.scan_cache_dir.example">

```py
>>> from huggingface_hub import scan_cache_dir

>>> hf_cache_info = scan_cache_dir()
HFCacheInfo(
    size_on_disk=3398085269,
    repos=frozenset({
        CachedRepoInfo(
            repo_id='t5-small',
            repo_type='model',
            repo_path=PosixPath(...),
            size_on_disk=970726914,
            nb_files=11,
            revisions=frozenset({
                CachedRevisionInfo(
                    commit_hash='d78aea13fa7ecd06c29e3e46195d6341255065d5',
                    size_on_disk=970726339,
                    snapshot_path=PosixPath(...),
                    files=frozenset({
                        CachedFileInfo(
                            file_name='config.json',
                            size_on_disk=1197,
                            file_path=PosixPath(...),
                            blob_path=PosixPath(...),
                        ),
                        CachedFileInfo(...),
                        ...
                    }),
                ),
                CachedRevisionInfo(...),
                ...
            }),
        ),
        CachedRepoInfo(...),
        ...
    }),
    warnings=[
        CorruptedCacheException("Snapshots dir doesn't exist in cached repo: ..."),
        CorruptedCacheException(...),
        ...
    ],
)
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.scan_cache_dir.example-2">

You can also print a detailed report directly from the `hf` command line using:
```text
> hf cache ls
ID                          SIZE     LAST_ACCESSED LAST_MODIFIED REFS
--------------------------- -------- ------------- ------------- -----------
dataset/nyu-mll/glue          157.4M 2 days ago    2 days ago    main script
model/LiquidAI/LFM2-VL-1.6B     3.2G 4 days ago    4 days ago    main
model/microsoft/UserLM-8b      32.1G 4 days ago    4 days ago    main

Done in 0.0s. Scanned 6 repo(s) for a total of 3.4G.
Got 1 warning(s) while scanning. Use -vvv to print details.
```

</ExampleCodeBlock>



> [!WARNING]
> Raises:
>
>     `CacheNotFound`
>       If the cache directory does not exist.
>
>     [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
>       If the cache directory is a file, instead of a directory.

Returns: a [~HFCacheInfo](/docs/huggingface_hub/main/ko/package_reference/cache#huggingface_hub.HFCacheInfo) object.
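To make the returned structure concrete, here is a self-contained sketch that builds a minimal, well-formed cache layout in a temporary directory (the repo and file names are made up) and scans it:

```python
import tempfile
from pathlib import Path

from huggingface_hub import scan_cache_dir

# Build a minimal cache layout: one model repo, one revision, one blob.
cache_dir = Path(tempfile.mkdtemp())
commit = "0123456789abcdef0123456789abcdef01234567"  # fake commit hash
repo = cache_dir / "models--my-org--my-model"
(repo / "blobs").mkdir(parents=True)
(repo / "refs").mkdir()
(repo / "snapshots" / commit).mkdir(parents=True)
(repo / "refs" / "main").write_text(commit)  # "main" points to the revision

blob = repo / "blobs" / "0aa1"  # blobs are named after their hash (shortened here)
blob.write_text('{"hidden_size": 768}')
(repo / "snapshots" / commit / "config.json").symlink_to(blob)  # snapshot symlinks to blob

cache_info = scan_cache_dir(cache_dir=cache_dir)
repo_info = next(iter(cache_info.repos))
print(repo_info.repo_id, repo_info.repo_type, repo_info.nb_files)
```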


</div>

## Data structures[[data-structures]]

All structures are built and returned by [scan_cache_dir()](/docs/huggingface_hub/main/ko/package_reference/cache#huggingface_hub.scan_cache_dir) and are immutable.

### HFCacheInfo[[huggingface_hub.HFCacheInfo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.HFCacheInfo</name><anchor>huggingface_hub.HFCacheInfo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_cache_manager.py#L331</source><parameters>[{"name": "size_on_disk", "val": ": int"}, {"name": "repos", "val": ": frozenset"}, {"name": "warnings", "val": ": list"}]</parameters><paramsdesc>- **size_on_disk** (`int`) --
  Sum of all valid repo sizes in the cache-system.
- **repos** (`frozenset[CachedRepoInfo]`) --
  Set of [~CachedRepoInfo](/docs/huggingface_hub/main/ko/package_reference/cache#huggingface_hub.CachedRepoInfo) describing all valid cached repos found on the
  cache-system while scanning.
- **warnings** (`list[CorruptedCacheException]`) --
  List of [~CorruptedCacheException](/docs/huggingface_hub/main/ko/package_reference/cache#huggingface_hub.CorruptedCacheException) that occurred while scanning the cache.
  Those exceptions are captured so that the scan can continue. Corrupted repos
  are skipped from the scan.</paramsdesc><paramgroups>0</paramgroups></docstring>
Frozen data structure holding information about the entire cache-system.

This data structure is returned by [scan_cache_dir()](/docs/huggingface_hub/main/ko/package_reference/cache#huggingface_hub.scan_cache_dir) and is immutable.



> [!WARNING]
> Here `size_on_disk` is equal to the sum of all repo sizes (only blobs). However if
> some cached repos are corrupted, their sizes are not taken into account.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_revisions</name><anchor>huggingface_hub.HFCacheInfo.delete_revisions</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_cache_manager.py#L366</source><parameters>[{"name": "*revisions", "val": ": str"}]</parameters></docstring>
Prepare the strategy to delete one or more revisions cached locally.

Input revisions can be any revision hash. If a revision hash is not found in the
local cache, a warning is thrown but no error is raised. Revisions can be from
different cached repos since hashes are unique across repos.

<ExampleCodeBlock anchor="huggingface_hub.HFCacheInfo.delete_revisions.example">

Examples:
```py
>>> from huggingface_hub import scan_cache_dir
>>> cache_info = scan_cache_dir()
>>> delete_strategy = cache_info.delete_revisions(
...     "81fd1d6e7847c99f5862c9fb81387956d99ec7aa"
... )
>>> print(f"Will free {delete_strategy.expected_freed_size_str}.")
Will free 7.9K.
>>> delete_strategy.execute()
Cache deletion done. Saved 7.9K.
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.HFCacheInfo.delete_revisions.example-2">

```py
>>> from huggingface_hub import scan_cache_dir
>>> scan_cache_dir().delete_revisions(
...     "81fd1d6e7847c99f5862c9fb81387956d99ec7aa",
...     "e2983b237dccf3ab4937c97fa717319a9ca1a96d",
...     "6c0e6080953db56375760c0471a8c5f2929baf11",
... ).execute()
Cache deletion done. Saved 8.6G.
```

</ExampleCodeBlock>

> [!WARNING]
> `delete_revisions` returns a [DeleteCacheStrategy](/docs/huggingface_hub/main/ko/package_reference/cache#huggingface_hub.DeleteCacheStrategy) object that needs to
> be executed. The [DeleteCacheStrategy](/docs/huggingface_hub/main/ko/package_reference/cache#huggingface_hub.DeleteCacheStrategy) is not meant to be modified but
> allows having a dry run before actually executing the deletion.
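Since `execute()` is destructive, the dry-run-then-execute pattern above is worth spelling out. A runnable sketch using a made-up repo in a throwaway cache directory:

```python
import tempfile
from pathlib import Path

from huggingface_hub import scan_cache_dir

# Throwaway cache with one detached revision (all names are made up).
cache_dir = Path(tempfile.mkdtemp())
commit = "aaaabbbbccccddddeeeeffff0000111122223333"
repo = cache_dir / "models--demo--tiny"
(repo / "blobs").mkdir(parents=True)
(repo / "snapshots" / commit).mkdir(parents=True)
blob = repo / "blobs" / "b0"
blob.write_bytes(b"x" * 1024)
(repo / "snapshots" / commit / "weights.bin").symlink_to(blob)

cache_info = scan_cache_dir(cache_dir=cache_dir)
strategy = cache_info.delete_revisions(commit)  # nothing is deleted yet
print("Would free", strategy.expected_freed_size_str)  # dry run
strategy.execute()  # the deletion happens here
```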


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>export_as_table</name><anchor>huggingface_hub.HFCacheInfo.export_as_table</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_cache_manager.py#L466</source><parameters>[{"name": "verbosity", "val": ": int = 0"}]</parameters><paramsdesc>- **verbosity** (`int`, *optional*) --
  The verbosity level. Defaults to 0.</paramsdesc><paramgroups>0</paramgroups><rettype>`str`</rettype><retdesc>The table as a string.</retdesc></docstring>
Generate a table from the [HFCacheInfo](/docs/huggingface_hub/main/ko/package_reference/cache#huggingface_hub.HFCacheInfo) object.

Pass `verbosity=0` to get a table with a single row per repo, with columns
"repo_id", "repo_type", "size_on_disk", "nb_files", "last_accessed", "last_modified", "refs", "local_path".

Pass `verbosity=1` to get a table with a row per repo and revision (thus multiple rows can appear for a single repo), with columns
"repo_id", "repo_type", "revision", "size_on_disk", "nb_files", "last_modified", "refs", "local_path".

<ExampleCodeBlock anchor="huggingface_hub.HFCacheInfo.export_as_table.example">

Example:
```py
>>> from huggingface_hub.utils import scan_cache_dir

>>> hf_cache_info = scan_cache_dir()
HFCacheInfo(...)

>>> print(hf_cache_info.export_as_table())
REPO ID                                             REPO TYPE SIZE ON DISK NB FILES LAST_ACCESSED LAST_MODIFIED REFS LOCAL PATH
--------------------------------------------------- --------- ------------ -------- ------------- ------------- ---- --------------------------------------------------------------------------------------------------
roberta-base                                        model             2.7M        5 1 day ago     1 week ago    main ~/.cache/huggingface/hub/models--roberta-base
suno/bark                                           model             8.8K        1 1 week ago    1 week ago    main ~/.cache/huggingface/hub/models--suno--bark
t5-base                                             model           893.8M        4 4 days ago    7 months ago  main ~/.cache/huggingface/hub/models--t5-base
t5-large                                            model             3.0G        4 5 weeks ago   5 months ago  main ~/.cache/huggingface/hub/models--t5-large

>>> print(hf_cache_info.export_as_table(verbosity=1))
REPO ID                                             REPO TYPE REVISION                                 SIZE ON DISK NB FILES LAST_MODIFIED REFS LOCAL PATH
--------------------------------------------------- --------- ---------------------------------------- ------------ -------- ------------- ---- -----------------------------------------------------------------------------------------------------------------------------------------------------
roberta-base                                        model     e2da8e2f811d1448a5b465c236feacd80ffbac7b         2.7M        5 1 week ago    main ~/.cache/huggingface/hub/models--roberta-base/snapshots/e2da8e2f811d1448a5b465c236feacd80ffbac7b
suno/bark                                           model     70a8a7d34168586dc5d028fa9666aceade177992         8.8K        1 1 week ago    main ~/.cache/huggingface/hub/models--suno--bark/snapshots/70a8a7d34168586dc5d028fa9666aceade177992
t5-base                                             model     a9723ea7f1b39c1eae772870f3b547bf6ef7e6c1       893.8M        4 7 months ago  main ~/.cache/huggingface/hub/models--t5-base/snapshots/a9723ea7f1b39c1eae772870f3b547bf6ef7e6c1
t5-large                                            model     150ebc2c4b72291e770f58e6057481c8d2ed331a         3.0G        4 5 months ago  main ~/.cache/huggingface/hub/models--t5-large/snapshots/150ebc2c4b72291e770f58e6057481c8d2ed331a
```

</ExampleCodeBlock>








</div></div>

### CachedRepoInfo[[huggingface_hub.CachedRepoInfo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.CachedRepoInfo</name><anchor>huggingface_hub.CachedRepoInfo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_cache_manager.py#L176</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "repo_type", "val": ": typing.Literal['model', 'dataset', 'space']"}, {"name": "repo_path", "val": ": Path"}, {"name": "size_on_disk", "val": ": int"}, {"name": "nb_files", "val": ": int"}, {"name": "revisions", "val": ": frozenset"}, {"name": "last_accessed", "val": ": float"}, {"name": "last_modified", "val": ": float"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  Repo id of the repo on the Hub. Example: `"google/fleurs"`.
- **repo_type** (`Literal["dataset", "model", "space"]`) --
  Type of the cached repo.
- **repo_path** (`Path`) --
  Local path to the cached repo.
- **size_on_disk** (`int`) --
  Sum of the blob file sizes in the cached repo.
- **nb_files** (`int`) --
  Total number of blob files in the cached repo.
- **revisions** (`frozenset[CachedRevisionInfo]`) --
  Set of [~CachedRevisionInfo](/docs/huggingface_hub/main/ko/package_reference/cache#huggingface_hub.CachedRevisionInfo) describing all revisions cached in the repo.
- **last_accessed** (`float`) --
  Timestamp of the last time a blob file of the repo has been accessed.
- **last_modified** (`float`) --
  Timestamp of the last time a blob file of the repo has been modified/created.</paramsdesc><paramgroups>0</paramgroups></docstring>
Frozen data structure holding information about a cached repository.



> [!WARNING]
> `size_on_disk` is not necessarily the sum of all revisions sizes because of
> duplicated files. Besides, only blobs are taken into account, not the (negligible)
> size of folders and symlinks.

> [!WARNING]
> `last_accessed` and `last_modified` reliability can depend on the OS you are using.
> See [python documentation](https://docs.python.org/3/library/os.html#os.stat_result)
> for more details.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>size_on_disk_str</name><anchor>huggingface_hub.CachedRepoInfo.size_on_disk_str</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_cache_manager.py#L238</source><parameters>[]</parameters></docstring>

(property) Sum of the blob file sizes as a human-readable string.

Example: "42.2K".


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>refs</name><anchor>huggingface_hub.CachedRepoInfo.refs</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_cache_manager.py#L252</source><parameters>[]</parameters></docstring>

(property) Mapping between `refs` and revision data structures.


</div></div>

### CachedRevisionInfo[[huggingface_hub.CachedRevisionInfo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.CachedRevisionInfo</name><anchor>huggingface_hub.CachedRevisionInfo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_cache_manager.py#L105</source><parameters>[{"name": "commit_hash", "val": ": str"}, {"name": "snapshot_path", "val": ": Path"}, {"name": "size_on_disk", "val": ": int"}, {"name": "files", "val": ": frozenset"}, {"name": "refs", "val": ": frozenset"}, {"name": "last_modified", "val": ": float"}]</parameters><paramsdesc>- **commit_hash** (`str`) --
  Hash of the revision (unique).
  Example: `"9338f7b671827df886678df2bdd7cc7b4f36dffd"`.
- **snapshot_path** (`Path`) --
  Path to the revision directory in the `snapshots` folder. It contains the
  exact tree structure as the repo on the Hub.
- **files** (`frozenset[CachedFileInfo]`) --
  Set of [~CachedFileInfo](/docs/huggingface_hub/main/ko/package_reference/cache#huggingface_hub.CachedFileInfo) describing all files contained in the snapshot.
- **refs** (`frozenset[str]`) --
  Set of `refs` pointing to this revision. If the revision has no `refs`, it
  is considered detached.
  Example: `{"main", "2.4.0"}` or `{"refs/pr/1"}`.
- **size_on_disk** (`int`) --
  Sum of the blob file sizes that are symlink-ed by the revision.
- **last_modified** (`float`) --
  Timestamp of the last time the revision has been created/modified.</paramsdesc><paramgroups>0</paramgroups></docstring>
Frozen data structure holding information about a revision.

A revision corresponds to a folder in the `snapshots` folder and is populated with
the exact tree structure as the repo on the Hub but contains only symlinks. A
revision can either be referenced by one or more `refs` or be "detached" (no refs).



> [!WARNING]
> `last_accessed` cannot be determined correctly on a single revision as blob files
> are shared across revisions.

> [!WARNING]
> `size_on_disk` is not necessarily the sum of all file sizes because of possible
> duplicated files. Besides, only blobs are taken into account, not the (negligible)
> size of folders and symlinks.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>size_on_disk_str</name><anchor>huggingface_hub.CachedRevisionInfo.size_on_disk_str</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_cache_manager.py#L158</source><parameters>[]</parameters></docstring>

(property) Sum of the blob file sizes as a human-readable string.

Example: "42.2K".


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>nb_files</name><anchor>huggingface_hub.CachedRevisionInfo.nb_files</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_cache_manager.py#L167</source><parameters>[]</parameters></docstring>

(property) Total number of files in the revision.


</div></div>

### CachedFileInfo[[huggingface_hub.CachedFileInfo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.CachedFileInfo</name><anchor>huggingface_hub.CachedFileInfo</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_cache_manager.py#L41</source><parameters>[{"name": "file_name", "val": ": str"}, {"name": "file_path", "val": ": Path"}, {"name": "blob_path", "val": ": Path"}, {"name": "size_on_disk", "val": ": int"}, {"name": "blob_last_accessed", "val": ": float"}, {"name": "blob_last_modified", "val": ": float"}]</parameters><paramsdesc>- **file_name** (`str`) --
  Name of the file. Example: `config.json`.
- **file_path** (`Path`) --
  Path of the file in the `snapshots` directory. The file path is a symlink
  referring to a blob in the `blobs` folder.
- **blob_path** (`Path`) --
  Path of the blob file. This is equivalent to `file_path.resolve()`.
- **size_on_disk** (`int`) --
  Size of the blob file in bytes.
- **blob_last_accessed** (`float`) --
  Timestamp of the last time the blob file has been accessed (from any
  revision).
- **blob_last_modified** (`float`) --
  Timestamp of the last time the blob file has been modified/created.</paramsdesc><paramgroups>0</paramgroups></docstring>
Frozen data structure holding information about a single cached file.



> [!WARNING]
> `blob_last_accessed` and `blob_last_modified` reliability can depend on the OS you
> are using. See [python documentation](https://docs.python.org/3/library/os.html#os.stat_result)
> for more details.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>size_on_disk_str</name><anchor>huggingface_hub.CachedFileInfo.size_on_disk_str</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_cache_manager.py#L94</source><parameters>[]</parameters></docstring>

(property) Size of the blob file as a human-readable string.

Example: "42.2K".


</div></div>

### DeleteCacheStrategy[[huggingface_hub.DeleteCacheStrategy]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.DeleteCacheStrategy</name><anchor>huggingface_hub.DeleteCacheStrategy</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_cache_manager.py#L261</source><parameters>[{"name": "expected_freed_size", "val": ": int"}, {"name": "blobs", "val": ": frozenset"}, {"name": "refs", "val": ": frozenset"}, {"name": "repos", "val": ": frozenset"}, {"name": "snapshots", "val": ": frozenset"}]</parameters><paramsdesc>- **expected_freed_size** (`float`) --
  Expected freed size once strategy is executed.
- **blobs** (`frozenset[Path]`) --
  Set of blob file paths to be deleted.
- **refs** (`frozenset[Path]`) --
  Set of reference file paths to be deleted.
- **repos** (`frozenset[Path]`) --
  Set of entire repo paths to be deleted.
- **snapshots** (`frozenset[Path]`) --
  Set of snapshots to be deleted (directory of symlinks).</paramsdesc><paramgroups>0</paramgroups></docstring>
Frozen data structure holding the strategy to delete cached revisions.

This object is not meant to be instantiated programmatically but to be returned by
[delete_revisions()](/docs/huggingface_hub/main/ko/package_reference/cache#huggingface_hub.HFCacheInfo.delete_revisions). See documentation for usage example.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>expected_freed_size_str</name><anchor>huggingface_hub.DeleteCacheStrategy.expected_freed_size_str</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_cache_manager.py#L286</source><parameters>[]</parameters></docstring>

(property) Expected size that will be freed as a human-readable string.

Example: "42.2K".


</div></div>

## Exceptions[[exceptions]]

### CorruptedCacheException[[huggingface_hub.CorruptedCacheException]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.CorruptedCacheException</name><anchor>huggingface_hub.CorruptedCacheException</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L22</source><parameters>""</parameters></docstring>
Exception for any unexpected structure in the Huggingface cache-system.

</div>

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/package_reference/cache.md" />

### Repository Cards[[repository-cards]]
https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/cards.md

# Repository Cards[[repository-cards]]

The huggingface_hub library provides a Python interface to create, share, and update Model/Dataset Cards.
Visit the [dedicated documentation page](https://huggingface.co/docs/hub/models-cards) for a deeper look at what Model Cards on the Hub are and how they work under the hood. You can also check out the [model cards guide](../how-to-model-cards) to get a feel for how you can use these utilities in your own projects.

## Repo Card[[huggingface_hub.RepoCard]]

The `RepoCard` object is the parent class of [ModelCard](/docs/huggingface_hub/main/ko/package_reference/cards#huggingface_hub.ModelCard), [DatasetCard](/docs/huggingface_hub/main/ko/package_reference/cards#huggingface_hub.DatasetCard), and `SpaceCard`.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.RepoCard</name><anchor>huggingface_hub.RepoCard</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard.py#L37</source><parameters>[{"name": "content", "val": ": str"}, {"name": "ignore_metadata_errors", "val": ": bool = False"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__init__</name><anchor>huggingface_hub.RepoCard.__init__</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard.py#L42</source><parameters>[{"name": "content", "val": ": str"}, {"name": "ignore_metadata_errors", "val": ": bool = False"}]</parameters><paramsdesc>- **content** (`str`) -- The content of the Markdown file.</paramsdesc><paramgroups>0</paramgroups></docstring>
Initialize a RepoCard from string content. The content should be a
Markdown file with a YAML block at the beginning and a Markdown body.



<ExampleCodeBlock anchor="huggingface_hub.RepoCard.__init__.example">

Example:
```python
>>> from huggingface_hub.repocard import RepoCard
>>> text = '''
... ---
... language: en
... license: mit
... ---
...
... # My repo
... '''
>>> card = RepoCard(text)
>>> card.data.to_dict()
{'language': 'en', 'license': 'mit'}
>>> card.text
'\n# My repo\n'

```

</ExampleCodeBlock>
> [!TIP]
> Raises the following error:
>
>     - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
>       when the content of the repo card metadata is not a dictionary.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_template</name><anchor>huggingface_hub.RepoCard.from_template</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard.py#L289</source><parameters>[{"name": "card_data", "val": ": CardData"}, {"name": "template_path", "val": ": typing.Optional[str] = None"}, {"name": "template_str", "val": ": typing.Optional[str] = None"}, {"name": "**template_kwargs", "val": ""}]</parameters><paramsdesc>- **card_data** (`huggingface_hub.CardData`) --
  A huggingface_hub.CardData instance containing the metadata you want to include in the YAML
  header of the repo card on the Hugging Face Hub.
- **template_path** (`str`, *optional*) --
  A path to a markdown file with optional Jinja template variables that can be filled
  in with `template_kwargs`. Defaults to the default template.</paramsdesc><paramgroups>0</paramgroups><rettype>[huggingface_hub.repocard.RepoCard](/docs/huggingface_hub/main/ko/package_reference/cards#huggingface_hub.RepoCard)</rettype><retdesc>A RepoCard instance with the specified card data and content from the
template.</retdesc></docstring>
Initialize a RepoCard from a template. By default, it uses the default template.

Templates are Jinja2 templates that can be customized by passing keyword arguments.
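As a sketch using a custom `template_str` (the tiny template and its `title` variable are made up for illustration; rendering requires the `jinja2` package):

```python
from huggingface_hub import CardData
from huggingface_hub.repocard import RepoCard

# A minimal hypothetical Jinja template: YAML header plus one templated variable.
template = """---
{{ card_data }}
---

# {{ title }}
"""

card = RepoCard.from_template(
    card_data=CardData(language="en", license="mit"),
    template_str=template,
    title="My repo",  # hypothetical template variable filled via template_kwargs
)
print(card.data.to_dict())
```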








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load</name><anchor>huggingface_hub.RepoCard.load</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard.py#L135</source><parameters>[{"name": "repo_id_or_path", "val": ": typing.Union[str, pathlib.Path]"}, {"name": "repo_type", "val": ": typing.Optional[str] = None"}, {"name": "token", "val": ": typing.Optional[str] = None"}, {"name": "ignore_metadata_errors", "val": ": bool = False"}]</parameters><paramsdesc>- **repo_id_or_path** (`Union[str, Path]`) --
  The repo ID associated with a Hugging Face Hub repo or a local filepath.
- **repo_type** (`str`, *optional*) --
  The type of Hugging Face repo to push to. Defaults to None, which will use "model". Other options
  are "dataset" and "space". Not used when loading from a local filepath. If this is called from a child
  class, the default value will be the child class's `repo_type`.
- **token** (`str`, *optional*) --
  Authentication token, obtained with `huggingface_hub.HfApi.login` method. Will default to the stored token.
- **ignore_metadata_errors** (`bool`) --
  If True, errors while parsing the metadata section will be ignored. Some information might be lost during
  the process. Use it at your own risk.</paramsdesc><paramgroups>0</paramgroups><rettype>[huggingface_hub.repocard.RepoCard](/docs/huggingface_hub/main/ko/package_reference/cards#huggingface_hub.RepoCard)</rettype><retdesc>The RepoCard (or subclass) initialized from the repo's
README.md file or filepath.</retdesc></docstring>
Initialize a RepoCard from a Hugging Face Hub repo's README.md or a local filepath.







<ExampleCodeBlock anchor="huggingface_hub.RepoCard.load.example">

Example:
```python
>>> from huggingface_hub.repocard import RepoCard
>>> card = RepoCard.load("nateraw/food")
>>> assert card.data.tags == ["generated_from_trainer", "image-classification", "pytorch"]

```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>push_to_hub</name><anchor>huggingface_hub.RepoCard.push_to_hub</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard.py#L226</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "token", "val": ": typing.Optional[str] = None"}, {"name": "repo_type", "val": ": typing.Optional[str] = None"}, {"name": "commit_message", "val": ": typing.Optional[str] = None"}, {"name": "commit_description", "val": ": typing.Optional[str] = None"}, {"name": "revision", "val": ": typing.Optional[str] = None"}, {"name": "create_pr", "val": ": typing.Optional[bool] = None"}, {"name": "parent_commit", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The repo ID of the Hugging Face Hub repo to push to. Example: "nateraw/food".
- **token** (`str`, *optional*) --
  Authentication token, obtained with the `huggingface_hub.HfApi.login` method. Will default to
  the stored token.
- **repo_type** (`str`, *optional*, defaults to "model") --
  The type of Hugging Face repo to push to. Options are "model", "dataset", and "space". If this
  function is called by a child class, it will default to the child class's `repo_type`.
- **commit_message** (`str`, *optional*) --
  The summary / title / first line of the generated commit.
- **commit_description** (`str`, *optional*) --
  The description of the generated commit.
- **revision** (`str`, *optional*) --
  The git revision to commit from. Defaults to the head of the `"main"` branch.
- **create_pr** (`bool`, *optional*) --
  Whether or not to create a Pull Request with this commit. Defaults to `False`.
- **parent_commit** (`str`, *optional*) --
  The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported.
  If specified and `create_pr` is `False`, the commit will fail if `revision` does not point to `parent_commit`.
  If specified and `create_pr` is `True`, the pull request will be created from `parent_commit`.
  Specifying `parent_commit` ensures the repo has not changed before committing the changes, and can be
  especially useful if the repo is updated / committed to concurrently.</paramsdesc><paramgroups>0</paramgroups><rettype>`str`</rettype><retdesc>URL of the commit which updated the card metadata.</retdesc></docstring>
Push a RepoCard to a Hugging Face Hub repo.
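A minimal sketch of a typical flow (the push call itself is commented out because it requires a write-enabled token, and `"your-username/your-model"` is a placeholder repo ID, not a real repository):

```python
from huggingface_hub import RepoCard

# Build a card locally from raw README content (YAML header + body).
card = RepoCard("---\nlanguage: en\nlicense: mit\n---\n# My model")

# Push to the Hub as a Pull Request instead of committing to "main".
# Requires authentication; "your-username/your-model" is a placeholder.
# url = card.push_to_hub("your-username/your-model", create_pr=True)
```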








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save</name><anchor>huggingface_hub.RepoCard.save</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard.py#L115</source><parameters>[{"name": "filepath", "val": ": typing.Union[pathlib.Path, str]"}]</parameters><paramsdesc>- **filepath** (`Union[Path, str]`) -- Filepath to the markdown file to save.</paramsdesc><paramgroups>0</paramgroups></docstring>
Save a RepoCard to a file.



<ExampleCodeBlock anchor="huggingface_hub.RepoCard.save.example">

Example:
```python
>>> from huggingface_hub.repocard import RepoCard
>>> card = RepoCard("---\nlanguage: en\n---\n# This is a test repo card")
>>> card.save("/tmp/test.md")

```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>validate</name><anchor>huggingface_hub.RepoCard.validate</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard.py#L189</source><parameters>[{"name": "repo_type", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **repo_type** (`str`, *optional*, defaults to "model") --
  The type of Hugging Face repo to push to. Options are "model", "dataset", and "space".
  If this function is called from a child class, the default will be the child class's `repo_type`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Validates the card against Hugging Face Hub's card validation logic.
Using this function requires access to the internet, so it is only called
internally by [huggingface_hub.repocard.RepoCard.push_to_hub()](/docs/huggingface_hub/main/ko/package_reference/cards#huggingface_hub.RepoCard.push_to_hub).



> [!TIP]
> Raises the following errors:
>
>     - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
>       if the card fails validation checks.
>     - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError)
>       if the request to the Hub API fails for any other reason.


</div></div>

## 카드 데이터[[huggingface_hub.CardData]]

[CardData](/docs/huggingface_hub/main/ko/package_reference/cards#huggingface_hub.CardData) 객체는 [ModelCardData](/docs/huggingface_hub/main/ko/package_reference/cards#huggingface_hub.ModelCardData)와 [DatasetCardData](/docs/huggingface_hub/main/ko/package_reference/cards#huggingface_hub.DatasetCardData)의 상위 클래스입니다.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.CardData</name><anchor>huggingface_hub.CardData</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard_data.py#L165</source><parameters>[{"name": "ignore_metadata_errors", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
Structure containing metadata from a RepoCard.

[CardData](/docs/huggingface_hub/main/ko/package_reference/cards#huggingface_hub.CardData) is the parent class of [ModelCardData](/docs/huggingface_hub/main/ko/package_reference/cards#huggingface_hub.ModelCardData) and [DatasetCardData](/docs/huggingface_hub/main/ko/package_reference/cards#huggingface_hub.DatasetCardData).

Metadata can be exported as a dictionary or YAML. Export can be customized to alter the representation of the data
(for example, to flatten evaluation results). `CardData` behaves like a dictionary (values can be retrieved, popped, and set) but does not
inherit from `dict`, which is what allows this export step to be customized.
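The dictionary-like accessors can be sketched as follows, using `ModelCardData` (a `CardData` subclass) with illustrative field values:

```python
from huggingface_hub import ModelCardData

data = ModelCardData(language="en", license="mit", tags=["resnet"])

assert data.get("license") == "mit"          # dict-style read
assert data.get("missing", "n/a") == "n/a"   # default for absent keys
data.pop("tags")                             # remove a key
assert "tags" not in data.to_dict()          # the export reflects the change
```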



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get</name><anchor>huggingface_hub.CardData.get</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard_data.py#L222</source><parameters>[{"name": "key", "val": ": str"}, {"name": "default", "val": ": typing.Any = None"}]</parameters></docstring>
Get value for a given metadata key.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>pop</name><anchor>huggingface_hub.CardData.pop</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard_data.py#L227</source><parameters>[{"name": "key", "val": ": str"}, {"name": "default", "val": ": typing.Any = None"}]</parameters></docstring>
Pop value for a given metadata key.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>to_dict</name><anchor>huggingface_hub.CardData.to_dict</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard_data.py#L178</source><parameters>[]</parameters><rettype>`dict`</rettype><retdesc>CardData represented as a dictionary ready to be dumped to a YAML
block for inclusion in a README.md file.</retdesc></docstring>
Converts CardData to a dict.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>to_yaml</name><anchor>huggingface_hub.CardData.to_yaml</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard_data.py#L198</source><parameters>[{"name": "line_break", "val": " = None"}, {"name": "original_order", "val": ": typing.Optional[list[str]] = None"}]</parameters><paramsdesc>- **line_break** (`str`, *optional*) --
  The line break to use when dumping to yaml.</paramsdesc><paramgroups>0</paramgroups><rettype>`str`</rettype><retdesc>CardData represented as a YAML block.</retdesc></docstring>
Dumps CardData to a YAML block for inclusion in a README.md file.
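For instance, a short sketch of the intended round-trip (field values are illustrative; exact YAML formatting may vary with the PyYAML version):

```python
from huggingface_hub import ModelCardData

data = ModelCardData(language="en", license="mit")
yaml_block = data.to_yaml()

# The block is ready to be placed between "---" markers in a README.md:
readme = f"---\n{yaml_block}\n---\n# My model"
```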








</div></div>

## 모델 카드[[model-cards]]

### ModelCard[[huggingface_hub.ModelCard]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ModelCard</name><anchor>huggingface_hub.ModelCard</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard.py#L333</source><parameters>[{"name": "content", "val": ": str"}, {"name": "ignore_metadata_errors", "val": ": bool = False"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_template</name><anchor>huggingface_hub.ModelCard.from_template</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard.py#L338</source><parameters>[{"name": "card_data", "val": ": ModelCardData"}, {"name": "template_path", "val": ": typing.Optional[str] = None"}, {"name": "template_str", "val": ": typing.Optional[str] = None"}, {"name": "**template_kwargs", "val": ""}]</parameters><paramsdesc>- **card_data** (`huggingface_hub.ModelCardData`) --
  A huggingface_hub.ModelCardData instance containing the metadata you want to include in the YAML
  header of the model card on the Hugging Face Hub.
- **template_path** (`str`, *optional*) --
  A path to a markdown file with optional Jinja template variables that can be filled
  in with `template_kwargs`. Defaults to the default template.</paramsdesc><paramgroups>0</paramgroups><rettype>[huggingface_hub.ModelCard](/docs/huggingface_hub/main/ko/package_reference/cards#huggingface_hub.ModelCard)</rettype><retdesc>A ModelCard instance with the specified card data and content from the
template.</retdesc></docstring>
Initialize a ModelCard from a template. By default, it uses the default template, which can be found here:
https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md

Templates are Jinja2 templates that can be customized by passing keyword arguments.







<ExampleCodeBlock anchor="huggingface_hub.ModelCard.from_template.example">

Example:
```python
>>> from huggingface_hub import ModelCard, ModelCardData, EvalResult

>>> # Using the Default Template
>>> card_data = ModelCardData(
...     language='en',
...     license='mit',
...     library_name='timm',
...     tags=['image-classification', 'resnet'],
...     datasets=['beans'],
...     metrics=['accuracy'],
... )
>>> card = ModelCard.from_template(
...     card_data,
...     model_description='This model does x + y...'
... )

>>> # Including Evaluation Results
>>> card_data = ModelCardData(
...     language='en',
...     tags=['image-classification', 'resnet'],
...     eval_results=[
...         EvalResult(
...             task_type='image-classification',
...             dataset_type='beans',
...             dataset_name='Beans',
...             metric_type='accuracy',
...             metric_value=0.9,
...         ),
...     ],
...     model_name='my-cool-model',
... )
>>> card = ModelCard.from_template(card_data)

>>> # Using a Custom Template
>>> card_data = ModelCardData(
...     language='en',
...     tags=['image-classification', 'resnet']
... )
>>> card = ModelCard.from_template(
...     card_data=card_data,
...     template_path='./src/huggingface_hub/templates/modelcard_template.md',
...     custom_template_var='custom value',  # will be replaced in template if it exists
... )

```

</ExampleCodeBlock>


</div></div>

### ModelCardData[[huggingface_hub.ModelCardData]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ModelCardData</name><anchor>huggingface_hub.ModelCardData</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard_data.py#L265</source><parameters>[{"name": "base_model", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "datasets", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "eval_results", "val": ": typing.Optional[list[huggingface_hub.repocard_data.EvalResult]] = None"}, {"name": "language", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "library_name", "val": ": typing.Optional[str] = None"}, {"name": "license", "val": ": typing.Optional[str] = None"}, {"name": "license_name", "val": ": typing.Optional[str] = None"}, {"name": "license_link", "val": ": typing.Optional[str] = None"}, {"name": "metrics", "val": ": typing.Optional[list[str]] = None"}, {"name": "model_name", "val": ": typing.Optional[str] = None"}, {"name": "pipeline_tag", "val": ": typing.Optional[str] = None"}, {"name": "tags", "val": ": typing.Optional[list[str]] = None"}, {"name": "ignore_metadata_errors", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **base_model** (`str` or `list[str]`, *optional*) --
  The identifier of the base model from which the model derives. This is applicable for example if your model is a
  fine-tune or adapter of an existing model. The value must be the ID of a model on the Hub (or a list of IDs
  if your model derives from multiple models). Defaults to None.
- **datasets** (`Union[str, list[str]]`, *optional*) --
  Dataset or list of datasets that were used to train this model. Should be a dataset ID
  found on https://hf.co/datasets. Defaults to None.
- **eval_results** (`Union[list[EvalResult], EvalResult]`, *optional*) --
  List of `huggingface_hub.EvalResult` that define evaluation results of the model. If provided,
  `model_name` is used as the name on PapersWithCode's leaderboards. Defaults to `None`.
- **language** (`Union[str, list[str]]`, *optional*) --
  Language of model's training data or metadata. It must be an ISO 639-1, 639-2 or
  639-3 code (two/three letters), or a special value like "code", "multilingual". Defaults to `None`.
- **library_name** (`str`, *optional*) --
  Name of library used by this model. Example: keras or any library from
  https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/model-libraries.ts.
  Defaults to None.
- **license** (`str`, *optional*) --
  License of this model. Example: apache-2.0 or any license from
  https://huggingface.co/docs/hub/repositories-licenses. Defaults to None.
- **license_name** (`str`, *optional*) --
  Name of the license of this model. Defaults to None. To be used in conjunction with `license_link`.
  Common licenses (Apache-2.0, MIT, CC-BY-SA-4.0) do not need a name. In that case, use `license` instead.
- **license_link** (`str`, *optional*) --
  Link to the license of this model. Defaults to None. To be used in conjunction with `license_name`.
  Common licenses (Apache-2.0, MIT, CC-BY-SA-4.0) do not need a link. In that case, use `license` instead.
- **metrics** (`list[str]`, *optional*) --
  List of metrics used to evaluate this model. Should be a metric name that can be found
  at https://hf.co/metrics. Example: 'accuracy'. Defaults to None.
- **model_name** (`str`, *optional*) --
  A name for this model. It is used along with
  `eval_results` to construct the `model-index` within the card's metadata. The name
  you supply here is what will be used on PapersWithCode's leaderboards. If None is provided
  then the repo name is used as a default. Defaults to None.
- **pipeline_tag** (`str`, *optional*) --
  The pipeline tag associated with the model. Example: "text-classification".
- **tags** (`list[str]`, *optional*) --
  List of tags to add to your model that can be used when filtering on the Hugging
  Face Hub. Defaults to None.
- **ignore_metadata_errors** (`bool`) --
  If True, errors while parsing the metadata section will be ignored. Some information might be lost during
  the process. Use it at your own risk.
- **kwargs** (`dict`, *optional*) --
  Additional metadata that will be added to the model card. Defaults to None.</paramsdesc><paramgroups>0</paramgroups></docstring>
Model Card Metadata that is used by Hugging Face Hub when included at the top of your README.md



<ExampleCodeBlock anchor="huggingface_hub.ModelCardData.example">

Example:
```python
>>> from huggingface_hub import ModelCardData
>>> card_data = ModelCardData(
...     language="en",
...     license="mit",
...     library_name="timm",
...     tags=['image-classification', 'resnet'],
... )
>>> card_data.to_dict()
{'language': 'en', 'license': 'mit', 'library_name': 'timm', 'tags': ['image-classification', 'resnet']}

```

</ExampleCodeBlock>


</div>

## 데이터 세트 카드[[dataset-cards]]

ML 커뮤니티에서는 데이터 세트 카드를 데이터 카드라고도 합니다.

### DatasetCard[[huggingface_hub.DatasetCard]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.DatasetCard</name><anchor>huggingface_hub.DatasetCard</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard.py#L414</source><parameters>[{"name": "content", "val": ": str"}, {"name": "ignore_metadata_errors", "val": ": bool = False"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_template</name><anchor>huggingface_hub.DatasetCard.from_template</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard.py#L419</source><parameters>[{"name": "card_data", "val": ": DatasetCardData"}, {"name": "template_path", "val": ": typing.Optional[str] = None"}, {"name": "template_str", "val": ": typing.Optional[str] = None"}, {"name": "**template_kwargs", "val": ""}]</parameters><paramsdesc>- **card_data** (`huggingface_hub.DatasetCardData`) --
  A huggingface_hub.DatasetCardData instance containing the metadata you want to include in the YAML
  header of the dataset card on the Hugging Face Hub.
- **template_path** (`str`, *optional*) --
  A path to a markdown file with optional Jinja template variables that can be filled
  in with `template_kwargs`. Defaults to the default template.</paramsdesc><paramgroups>0</paramgroups><rettype>[huggingface_hub.DatasetCard](/docs/huggingface_hub/main/ko/package_reference/cards#huggingface_hub.DatasetCard)</rettype><retdesc>A DatasetCard instance with the specified card data and content from the
template.</retdesc></docstring>
Initialize a DatasetCard from a template. By default, it uses the default template, which can be found here:
https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md

Templates are Jinja2 templates that can be customized by passing keyword arguments.







<ExampleCodeBlock anchor="huggingface_hub.DatasetCard.from_template.example">

Example:
```python
>>> from huggingface_hub import DatasetCard, DatasetCardData

>>> # Using the Default Template
>>> card_data = DatasetCardData(
...     language='en',
...     license='mit',
...     annotations_creators='crowdsourced',
...     task_categories=['text-classification'],
...     task_ids=['sentiment-classification', 'text-scoring'],
...     multilinguality='monolingual',
...     pretty_name='My Text Classification Dataset',
... )
>>> card = DatasetCard.from_template(
...     card_data,
...     pretty_name=card_data.pretty_name,
... )

>>> # Using a Custom Template
>>> card_data = DatasetCardData(
...     language='en',
...     license='mit',
... )
>>> card = DatasetCard.from_template(
...     card_data=card_data,
...     template_path='./src/huggingface_hub/templates/datasetcard_template.md',
...     custom_template_var='custom value',  # will be replaced in template if it exists
... )

```

</ExampleCodeBlock>


</div></div>

### DatasetCardData[[huggingface_hub.DatasetCardData]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.DatasetCardData</name><anchor>huggingface_hub.DatasetCardData</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard_data.py#L394</source><parameters>[{"name": "language", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "license", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "annotations_creators", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "language_creators", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "multilinguality", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "size_categories", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "source_datasets", "val": ": typing.Optional[list[str]] = None"}, {"name": "task_categories", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "task_ids", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "paperswithcode_id", "val": ": typing.Optional[str] = None"}, {"name": "pretty_name", "val": ": typing.Optional[str] = None"}, {"name": "train_eval_index", "val": ": typing.Optional[dict] = None"}, {"name": "config_names", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "ignore_metadata_errors", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **language** (`list[str]`, *optional*) --
  Language of dataset's data or metadata. It must be an ISO 639-1, 639-2 or
  639-3 code (two/three letters), or a special value like "code", "multilingual".
- **license** (`Union[str, list[str]]`, *optional*) --
  License(s) of this dataset. Example: apache-2.0 or any license from
  https://huggingface.co/docs/hub/repositories-licenses.
- **annotations_creators** (`Union[str, list[str]]`, *optional*) --
  How the annotations for the dataset were created.
  Options are: 'found', 'crowdsourced', 'expert-generated', 'machine-generated', 'no-annotation', 'other'.
- **language_creators** (`Union[str, list[str]]`, *optional*) --
  How the text-based data in the dataset was created.
  Options are: 'found', 'crowdsourced', 'expert-generated', 'machine-generated', 'other'
- **multilinguality** (`Union[str, list[str]]`, *optional*) --
  Whether the dataset is multilingual.
  Options are: 'monolingual', 'multilingual', 'translation', 'other'.
- **size_categories** (`Union[str, list[str]]`, *optional*) --
  The number of examples in the dataset. Options are: 'n<1K', '1K<n<10K', '10K<n<100K',
  '100K<n<1M', '1M<n<10M', '10M<n<100M', '100M<n<1B', '1B<n<10B', '10B<n<100B', '100B<n<1T', 'n>1T', and 'other'.
- **source_datasets** (`list[str]`, *optional*) --
  Indicates whether the dataset is an original dataset or extended from another existing dataset.
  Options are: 'original' and 'extended'.
- **task_categories** (`Union[str, list[str]]`, *optional*) --
  What categories of task does the dataset support?
- **task_ids** (`Union[str, list[str]]`, *optional*) --
  What specific tasks does the dataset support?
- **paperswithcode_id** (`str`, *optional*) --
  ID of the dataset on PapersWithCode.
- **pretty_name** (`str`, *optional*) --
  A more human-readable name for the dataset. (ex. "Cats vs. Dogs")
- **train_eval_index** (`dict`, *optional*) --
  A dictionary that describes the necessary spec for doing evaluation on the Hub.
  If not provided, it will be gathered from the 'train-eval-index' key of the kwargs.
- **config_names** (`Union[str, list[str]]`, *optional*) --
  A list of the available dataset configs for the dataset.</paramsdesc><paramgroups>0</paramgroups></docstring>
Dataset Card Metadata that is used by Hugging Face Hub when included at the top of your README.md
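A short sketch in the same spirit as the `ModelCardData` example above (field values are illustrative):

```python
from huggingface_hub import DatasetCardData

card_data = DatasetCardData(
    language="en",
    license="mit",
    task_categories=["text-classification"],
    pretty_name="My Text Classification Dataset",
)

# Export as a dict, ready to be dumped into the README's YAML header.
metadata = card_data.to_dict()
```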




</div>

## 공간 카드[[space-cards]]

### SpaceCard[[huggingface_hub.SpaceCard]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.SpaceCard</name><anchor>huggingface_hub.SpaceCard</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard.py#L479</source><parameters>[{"name": "content", "val": ": str"}, {"name": "ignore_metadata_errors", "val": ": bool = False"}]</parameters></docstring>


</div>

### SpaceCardData[[huggingface_hub.SpaceCardData]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.SpaceCardData</name><anchor>huggingface_hub.SpaceCardData</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard_data.py#L474</source><parameters>[{"name": "title", "val": ": typing.Optional[str] = None"}, {"name": "sdk", "val": ": typing.Optional[str] = None"}, {"name": "sdk_version", "val": ": typing.Optional[str] = None"}, {"name": "python_version", "val": ": typing.Optional[str] = None"}, {"name": "app_file", "val": ": typing.Optional[str] = None"}, {"name": "app_port", "val": ": typing.Optional[int] = None"}, {"name": "license", "val": ": typing.Optional[str] = None"}, {"name": "duplicated_from", "val": ": typing.Optional[str] = None"}, {"name": "models", "val": ": typing.Optional[list[str]] = None"}, {"name": "datasets", "val": ": typing.Optional[list[str]] = None"}, {"name": "tags", "val": ": typing.Optional[list[str]] = None"}, {"name": "ignore_metadata_errors", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **title** (`str`, *optional*) --
  Title of the Space.
- **sdk** (`str`, *optional*) --
  SDK of the Space (one of `gradio`, `streamlit`, `docker`, or `static`).
- **sdk_version** (`str`, *optional*) --
  Version of the used SDK (if Gradio/Streamlit sdk).
- **python_version** (`str`, *optional*) --
  Python version used in the Space (if Gradio/Streamlit sdk).
- **app_file** (`str`, *optional*) --
  Path to your main application file (which contains either gradio or streamlit Python code, or static html code).
  Path is relative to the root of the repository.
- **app_port** (`int`, *optional*) --
  Port on which your application is running. Used only if sdk is `docker`.
- **license** (`str`, *optional*) --
  License of this Space. Example: apache-2.0 or any license from
  https://huggingface.co/docs/hub/repositories-licenses.
- **duplicated_from** (`str`, *optional*) --
  ID of the original Space if this is a duplicated Space.
- **models** (`list[str]`, *optional*) --
  List of models related to this Space. Should be a model ID found on https://hf.co/models.
- **datasets** (`list[str]`, *optional*) --
  List of datasets related to this Space. Should be a dataset ID found on https://hf.co/datasets.
- **tags** (`list[str]`, *optional*) --
  List of tags to add to your Space that can be used when filtering on the Hub.
- **ignore_metadata_errors** (`bool`) --
  If True, errors while parsing the metadata section will be ignored. Some information might be lost during
  the process. Use it at your own risk.
- **kwargs** (`dict`, *optional*) --
  Additional metadata that will be added to the space card.</paramsdesc><paramgroups>0</paramgroups></docstring>
Space Card Metadata that is used by Hugging Face Hub when included at the top of your README.md

To get an exhaustive reference of Spaces configuration, please visit https://huggingface.co/docs/hub/spaces-config-reference#spaces-configuration-reference.



<ExampleCodeBlock anchor="huggingface_hub.SpaceCardData.example">

Example:
```python
>>> from huggingface_hub import SpaceCardData
>>> card_data = SpaceCardData(
...     title="Dreambooth Training",
...     license="mit",
...     sdk="gradio",
...     duplicated_from="multimodalart/dreambooth-training"
... )
>>> card_data.to_dict()
{'title': 'Dreambooth Training', 'sdk': 'gradio', 'license': 'mit', 'duplicated_from': 'multimodalart/dreambooth-training'}
```

</ExampleCodeBlock>


</div>

## 유틸리티[[utilities]]

### EvalResult[[huggingface_hub.EvalResult]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.EvalResult</name><anchor>huggingface_hub.EvalResult</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard_data.py#L13</source><parameters>[{"name": "task_type", "val": ": str"}, {"name": "dataset_type", "val": ": str"}, {"name": "dataset_name", "val": ": str"}, {"name": "metric_type", "val": ": str"}, {"name": "metric_value", "val": ": typing.Any"}, {"name": "task_name", "val": ": typing.Optional[str] = None"}, {"name": "dataset_config", "val": ": typing.Optional[str] = None"}, {"name": "dataset_split", "val": ": typing.Optional[str] = None"}, {"name": "dataset_revision", "val": ": typing.Optional[str] = None"}, {"name": "dataset_args", "val": ": typing.Optional[dict[str, typing.Any]] = None"}, {"name": "metric_name", "val": ": typing.Optional[str] = None"}, {"name": "metric_config", "val": ": typing.Optional[str] = None"}, {"name": "metric_args", "val": ": typing.Optional[dict[str, typing.Any]] = None"}, {"name": "verified", "val": ": typing.Optional[bool] = None"}, {"name": "verify_token", "val": ": typing.Optional[str] = None"}, {"name": "source_name", "val": ": typing.Optional[str] = None"}, {"name": "source_url", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **task_type** (`str`) --
  The task identifier. Example: "image-classification".
- **dataset_type** (`str`) --
  The dataset identifier. Example: "common_voice". Use dataset id from https://hf.co/datasets.
- **dataset_name** (`str`) --
  A pretty name for the dataset. Example: "Common Voice (French)".
- **metric_type** (`str`) --
  The metric identifier. Example: "wer". Use metric id from https://hf.co/metrics.
- **metric_value** (`Any`) --
  The metric value. Example: 0.9 or "20.0 ± 1.2".
- **task_name** (`str`, *optional*) --
  A pretty name for the task. Example: "Speech Recognition".
- **dataset_config** (`str`, *optional*) --
  The name of the dataset configuration used in `load_dataset()`.
  Example: fr in `load_dataset("common_voice", "fr")`. See the `datasets` docs for more info:
  https://hf.co/docs/datasets/package_reference/loading_methods#datasets.load_dataset.name
- **dataset_split** (`str`, *optional*) --
  The split used in `load_dataset()`. Example: "test".
- **dataset_revision** (`str`, *optional*) --
  The revision (i.e. the Git SHA) of the dataset used in `load_dataset()`.
  Example: 5503434ddd753f426f4b38109466949a1217c2bb
- **dataset_args** (`dict[str, Any]`, *optional*) --
  The arguments passed during `Metric.compute()`. Example for `bleu`: `{"max_order": 4}`
- **metric_name** (`str`, *optional*) --
  A pretty name for the metric. Example: "Test WER".
- **metric_config** (`str`, *optional*) --
  The name of the metric configuration used in `load_metric()`.
  Example: bleurt-large-512 in `load_metric("bleurt", "bleurt-large-512")`.
  See the `datasets` docs for more info: https://huggingface.co/docs/datasets/v2.1.0/en/loading#load-configurations
- **metric_args** (`dict[str, Any]`, *optional*) --
  The arguments passed during `Metric.compute()`. Example for `bleu`: `{"max_order": 4}`
- **verified** (`bool`, *optional*) --
  Indicates whether the metrics originate from Hugging Face's [evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator) or not. Automatically computed by Hugging Face, do not set.
- **verify_token** (`str`, *optional*) --
  A JSON Web Token that is used to verify whether the metrics originate from Hugging Face's [evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator) or not.
- **source_name** (`str`, *optional*) --
  The name of the source of the evaluation result. Example: "Open LLM Leaderboard".
- **source_url** (`str`, *optional*) --
  The URL of the source of the evaluation result. Example: "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard".</paramsdesc><paramgroups>0</paramgroups></docstring>

Flattened representation of individual evaluation results found in model-index of Model Cards.

For more information on the model-index spec, see https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1.
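A minimal sketch of how an `EvalResult` feeds into a model card's `model-index` (the model and dataset names are illustrative):

```python
from huggingface_hub import EvalResult, ModelCardData

result = EvalResult(
    task_type="image-classification",
    dataset_type="beans",
    dataset_name="Beans",
    metric_type="accuracy",
    metric_value=0.9,
)

# `model_name` is needed so the model-index can be constructed.
data = ModelCardData(model_name="my-cool-model", eval_results=[result])
assert "model-index" in data.to_dict()
```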





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>is_equal_except_value</name><anchor>huggingface_hub.EvalResult.is_equal_except_value</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard_data.py#L145</source><parameters>[{"name": "other", "val": ": EvalResult"}]</parameters></docstring>

Return True if `self` and `other` describe exactly the same metric but with a
different value.
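The intended semantics can be sketched with a stripped-down stdlib dataclass (an illustration with assumed field names, not the library code): two results describe the same metric when every field except the metric value matches.

```python
# Illustrative sketch: equality on all fields except the metric value.
from dataclasses import asdict, dataclass


@dataclass
class MiniEvalResult:
    task_type: str
    dataset_type: str
    metric_type: str
    metric_value: float

    def is_equal_except_value(self, other: "MiniEvalResult") -> bool:
        a, b = asdict(self), asdict(other)
        # Ignore the value itself; compare everything that identifies the metric.
        a.pop("metric_value")
        b.pop("metric_value")
        return a == b


r1 = MiniEvalResult("image-classification", "beans", "accuracy", 0.90)
r2 = MiniEvalResult("image-classification", "beans", "accuracy", 0.92)
```

Here `r1.is_equal_except_value(r2)` is `True`: same task, dataset and metric type, different value.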


</div></div>

### model_index_to_eval_results[[huggingface_hub.repocard_data.model_index_to_eval_results]][[huggingface_hub.repocard_data.model_index_to_eval_results]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.repocard_data.model_index_to_eval_results</name><anchor>huggingface_hub.repocard_data.model_index_to_eval_results</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard_data.py#L555</source><parameters>[{"name": "model_index", "val": ": list"}]</parameters><paramsdesc>- **model_index** (`list[dict[str, Any]]`) --
  A model index data structure, likely coming from a README.md file on the
  Hugging Face Hub.</paramsdesc><paramgroups>0</paramgroups><rettype>model_name (`str`)</rettype><retdesc>The name of the model as found in the model index. This is used as the
identifier for the model on leaderboards like PapersWithCode.
eval_results (`list[EvalResult]`):
A list of `huggingface_hub.EvalResult` objects containing the metrics
reported in the provided model_index.</retdesc></docstring>
Takes in a model index and returns the model name and a list of `huggingface_hub.EvalResult` objects.

A detailed spec of the model index can be found here:
https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1







<ExampleCodeBlock anchor="huggingface_hub.repocard_data.model_index_to_eval_results.example">

Example:
```python
>>> from huggingface_hub.repocard_data import model_index_to_eval_results
>>> # Define a minimal model index
>>> model_index = [
...     {
...         "name": "my-cool-model",
...         "results": [
...             {
...                 "task": {
...                     "type": "image-classification"
...                 },
...                 "dataset": {
...                     "type": "beans",
...                     "name": "Beans"
...                 },
...                 "metrics": [
...                     {
...                         "type": "accuracy",
...                         "value": 0.9
...                     }
...                 ]
...             }
...         ]
...     }
... ]
>>> model_name, eval_results = model_index_to_eval_results(model_index)
>>> model_name
'my-cool-model'
>>> eval_results[0].task_type
'image-classification'
>>> eval_results[0].metric_type
'accuracy'

```

</ExampleCodeBlock>


</div>

### eval_results_to_model_index[[huggingface_hub.repocard_data.eval_results_to_model_index]][[huggingface_hub.repocard_data.eval_results_to_model_index]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.repocard_data.eval_results_to_model_index</name><anchor>huggingface_hub.repocard_data.eval_results_to_model_index</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard_data.py#L671</source><parameters>[{"name": "model_name", "val": ": str"}, {"name": "eval_results", "val": ": list"}]</parameters><paramsdesc>- **model_name** (`str`) --
  Name of the model (ex. "my-cool-model"). This is used as the identifier
  for the model on leaderboards like PapersWithCode.
- **eval_results** (`list[EvalResult]`) --
  List of `huggingface_hub.EvalResult` objects containing the metrics to be
  reported in the model-index.</paramsdesc><paramgroups>0</paramgroups><rettype>model_index (`list[dict[str, Any]]`)</rettype><retdesc>The eval_results converted to a model-index.</retdesc></docstring>
Takes in a given model name and a list of `huggingface_hub.EvalResult` objects and returns a
valid model-index compatible with the format expected by the
Hugging Face Hub.







<ExampleCodeBlock anchor="huggingface_hub.repocard_data.eval_results_to_model_index.example">

Example:
```python
>>> from huggingface_hub.repocard_data import eval_results_to_model_index, EvalResult
>>> # Define minimal eval_results
>>> eval_results = [
...     EvalResult(
...         task_type="image-classification",  # Required
...         dataset_type="beans",  # Required
...         dataset_name="Beans",  # Required
...         metric_type="accuracy",  # Required
...         metric_value=0.9,  # Required
...     )
... ]
>>> eval_results_to_model_index("my-cool-model", eval_results)
[{'name': 'my-cool-model', 'results': [{'task': {'type': 'image-classification'}, 'dataset': {'name': 'Beans', 'type': 'beans'}, 'metrics': [{'type': 'accuracy', 'value': 0.9}]}]}]

```

</ExampleCodeBlock>


</div>

### metadata_eval_result[[huggingface_hub.metadata_eval_result]][[huggingface_hub.metadata_eval_result]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.metadata_eval_result</name><anchor>huggingface_hub.metadata_eval_result</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard.py#L551</source><parameters>[{"name": "model_pretty_name", "val": ": str"}, {"name": "task_pretty_name", "val": ": str"}, {"name": "task_id", "val": ": str"}, {"name": "metrics_pretty_name", "val": ": str"}, {"name": "metrics_id", "val": ": str"}, {"name": "metrics_value", "val": ": typing.Any"}, {"name": "dataset_pretty_name", "val": ": str"}, {"name": "dataset_id", "val": ": str"}, {"name": "metrics_config", "val": ": typing.Optional[str] = None"}, {"name": "metrics_verified", "val": ": bool = False"}, {"name": "dataset_config", "val": ": typing.Optional[str] = None"}, {"name": "dataset_split", "val": ": typing.Optional[str] = None"}, {"name": "dataset_revision", "val": ": typing.Optional[str] = None"}, {"name": "metrics_verification_token", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **model_pretty_name** (`str`) --
  The name of the model in natural language.
- **task_pretty_name** (`str`) --
  The name of a task in natural language.
- **task_id** (`str`) --
  Example: automatic-speech-recognition. A task id.
- **metrics_pretty_name** (`str`) --
  A name for the metric in natural language. Example: Test WER.
- **metrics_id** (`str`) --
  Example: wer. A metric id from https://hf.co/metrics.
- **metrics_value** (`Any`) --
  The value from the metric. Example: 20.0 or "20.0 ± 1.2".
- **dataset_pretty_name** (`str`) --
  The name of the dataset in natural language.
- **dataset_id** (`str`) --
  Example: common_voice. A dataset id from https://hf.co/datasets.
- **metrics_config** (`str`, *optional*) --
  The name of the metric configuration used in `load_metric()`.
  Example: `bleurt-large-512` in `load_metric("bleurt", "bleurt-large-512")`.
- **metrics_verified** (`bool`, *optional*, defaults to `False`) --
  Indicates whether the metrics originate from Hugging Face's [evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator) or not. Automatically computed by Hugging Face, do not set.
- **dataset_config** (`str`, *optional*) --
  Example: fr. The name of the dataset configuration used in `load_dataset()`.
- **dataset_split** (`str`, *optional*) --
  Example: test. The name of the dataset split used in `load_dataset()`.
- **dataset_revision** (`str`, *optional*) --
  Example: 5503434ddd753f426f4b38109466949a1217c2bb. The name of the dataset revision
  used in `load_dataset()`.
- **metrics_verification_token** (`str`, *optional*) --
  A JSON Web Token that is used to verify whether the metrics originate from Hugging Face's [evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator) or not.</paramsdesc><paramgroups>0</paramgroups><rettype>`dict`</rettype><retdesc>a metadata dict with the result from a model evaluated on a dataset.</retdesc></docstring>

Creates a metadata dict with the result from a model evaluated on a dataset.







<ExampleCodeBlock anchor="huggingface_hub.metadata_eval_result.example">

Example:
```python
>>> from huggingface_hub import metadata_eval_result
>>> results = metadata_eval_result(
...         model_pretty_name="RoBERTa fine-tuned on ReactionGIF",
...         task_pretty_name="Text Classification",
...         task_id="text-classification",
...         metrics_pretty_name="Accuracy",
...         metrics_id="accuracy",
...         metrics_value=0.2662102282047272,
...         dataset_pretty_name="ReactionJPEG",
...         dataset_id="julien-c/reactionjpeg",
...         dataset_config="default",
...         dataset_split="test",
... )
>>> results == {
...     'model-index': [
...         {
...             'name': 'RoBERTa fine-tuned on ReactionGIF',
...             'results': [
...                 {
...                     'task': {
...                         'type': 'text-classification',
...                         'name': 'Text Classification'
...                     },
...                     'dataset': {
...                         'name': 'ReactionJPEG',
...                         'type': 'julien-c/reactionjpeg',
...                         'config': 'default',
...                         'split': 'test'
...                     },
...                     'metrics': [
...                         {
...                             'type': 'accuracy',
...                             'value': 0.2662102282047272,
...                             'name': 'Accuracy',
...                             'verified': False
...                         }
...                     ]
...                 }
...             ]
...         }
...     ]
... }
True

```

</ExampleCodeBlock>


</div>

### metadata_update[[huggingface_hub.metadata_update]][[huggingface_hub.metadata_update]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.metadata_update</name><anchor>huggingface_hub.metadata_update</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/repocard.py#L679</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "metadata", "val": ": dict"}, {"name": "repo_type", "val": ": typing.Optional[str] = None"}, {"name": "overwrite", "val": ": bool = False"}, {"name": "token", "val": ": typing.Optional[str] = None"}, {"name": "commit_message", "val": ": typing.Optional[str] = None"}, {"name": "commit_description", "val": ": typing.Optional[str] = None"}, {"name": "revision", "val": ": typing.Optional[str] = None"}, {"name": "create_pr", "val": ": bool = False"}, {"name": "parent_commit", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The name of the repository.
- **metadata** (`dict`) --
  A dictionary containing the metadata to be updated.
- **repo_type** (`str`, *optional*) --
  Set to `"dataset"` or `"space"` if updating to a dataset or space,
  `None` or `"model"` if updating to a model. Default is `None`.
- **overwrite** (`bool`, *optional*, defaults to `False`) --
  If set to `True` an existing field can be overwritten, otherwise
  attempting to overwrite an existing field will cause an error.
- **token** (`str`, *optional*) --
  The Hugging Face authentication token.
- **commit_message** (`str`, *optional*) --
  The summary / title / first line of the generated commit. Defaults to
  `"Update metadata with huggingface_hub"`.
- **commit_description** (`str`, *optional*) --
  The description of the generated commit.
- **revision** (`str`, *optional*) --
  The git revision to commit from. Defaults to the head of the
  `"main"` branch.
- **create_pr** (`bool`, *optional*) --
  Whether or not to create a Pull Request from `revision` with that commit.
  Defaults to `False`.
- **parent_commit** (`str`, *optional*) --
  The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported.
  If specified and `create_pr` is `False`, the commit will fail if `revision` does not point to `parent_commit`.
  If specified and `create_pr` is `True`, the pull request will be created from `parent_commit`.
  Specifying `parent_commit` ensures the repo has not changed before committing the changes, and can be
  especially useful if the repo is updated / committed to concurrently.</paramsdesc><paramgroups>0</paramgroups><rettype>`str`</rettype><retdesc>URL of the commit which updated the card metadata.</retdesc></docstring>

Updates the metadata in the README.md of a repository on the Hugging Face Hub.
If the README.md file doesn't exist yet, a new one is created with the metadata and the
default ModelCard or DatasetCard template. For a `space` repo, an error is thrown,
as a Space cannot exist without a `README.md` file.







<ExampleCodeBlock anchor="huggingface_hub.metadata_update.example">

Example:
```python
>>> from huggingface_hub import metadata_update
>>> metadata = {'model-index': [{'name': 'RoBERTa fine-tuned on ReactionGIF',
...             'results': [{'dataset': {'name': 'ReactionGIF',
...                                      'type': 'julien-c/reactiongif'},
...                           'metrics': [{'name': 'Recall',
...                                        'type': 'recall',
...                                        'value': 0.7762102282047272}],
...                          'task': {'name': 'Text Classification',
...                                   'type': 'text-classification'}}]}]}
>>> url = metadata_update("hf-internal-testing/reactiongif-roberta-card", metadata)

```

</ExampleCodeBlock>


</div>

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/package_reference/cards.md" />

### Downloading files[[downloading-files]]
https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/file_download.md

# Downloading files[[downloading-files]]

## Download a single file[[download-a-single-file]]

### hf_hub_download[[huggingface_hub.hf_hub_download]]

[[autodoc]]huggingface_hub.hf_hub_download

### hf_hub_url[[huggingface_hub.hf_hub_url]]

[[autodoc]]huggingface_hub.hf_hub_url

## Download a snapshot of a repository[[huggingface_hub.snapshot_download]]

[[autodoc]]huggingface_hub.snapshot_download

## Get metadata about a file[[get-metadata-about-a-file]]

### get_hf_file_metadata[[huggingface_hub.get_hf_file_metadata]]

[[autodoc]]huggingface_hub.get_hf_file_metadata

### HfFileMetadata[[huggingface_hub.HfFileMetadata]]

[[autodoc]]huggingface_hub.HfFileMetadata

## Caching[[caching]]

The methods listed above are designed to work with a caching system that avoids re-downloading files. As of v0.8.0, the caching system has evolved into a central cache system shared across the libraries that depend on the Hub.

For a detailed explanation of caching at Hugging Face, refer to the [cache system guide](../guides/manage-cache).
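As a rough illustration of how the shared cache avoids re-downloads, files are stored once per repo and revision under a common hub cache directory. The helper below is a simplified sketch of the folder-naming convention (see the cache guide for the authoritative layout), not a library API:

```python
# Illustrative sketch of the shared cache layout: a repo id like "user/model"
# maps to a "models--user--model" folder, with one snapshot folder per revision.
from pathlib import Path


def cached_snapshot_path(cache_dir: str, repo_id: str, revision: str, filename: str) -> Path:
    folder = "models--" + repo_id.replace("/", "--")
    return Path(cache_dir) / folder / "snapshots" / revision / filename


p = cached_snapshot_path(
    "~/.cache/huggingface/hub", "bert-base-uncased", "abc123", "config.json"
)
```

Because the path depends only on the repo, revision and filename, any library resolving the same file lands on the same cached copy.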


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/package_reference/file_download.md" />

### Overview[[overview]]
https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/overview.md

# Overview[[overview]]

This section contains detailed and technical descriptions of `huggingface_hub` classes and methods.


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/package_reference/overview.md" />

### Interacting with Discussions and Pull Requests[[interacting-with-discussions-and-pull-requests]]
https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/community.md

# Interacting with Discussions and Pull Requests[[interacting-with-discussions-and-pull-requests]]

For a reference on how to interact with Discussions and Pull Requests on the Hub, check out the [HfApi](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi) documentation page.

- [get_repo_discussions()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.get_repo_discussions)
- [get_discussion_details()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.get_discussion_details)
- [create_discussion()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_discussion)
- [create_pull_request()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_pull_request)
- [rename_discussion()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.rename_discussion)
- [comment_discussion()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.comment_discussion)
- [edit_discussion_comment()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.edit_discussion_comment)
- [change_discussion_status()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.change_discussion_status)
- [merge_pull_request()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.merge_pull_request)

## Data structures[[huggingface_hub.Discussion]][[huggingface_hub.Discussion]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.Discussion</name><anchor>huggingface_hub.Discussion</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/community.py#L20</source><parameters>[{"name": "title", "val": ": str"}, {"name": "status", "val": ": typing.Literal['open', 'closed', 'merged', 'draft']"}, {"name": "num", "val": ": int"}, {"name": "repo_id", "val": ": str"}, {"name": "repo_type", "val": ": str"}, {"name": "author", "val": ": str"}, {"name": "is_pull_request", "val": ": bool"}, {"name": "created_at", "val": ": datetime"}, {"name": "endpoint", "val": ": str"}]</parameters><paramsdesc>- **title** (`str`) --
  The title of the Discussion / Pull Request
- **status** (`str`) --
  The status of the Discussion / Pull Request.
  It must be one of:
  * `"open"`
  * `"closed"`
  * `"merged"` (only for Pull Requests)
  * `"draft"` (only for Pull Requests)
- **num** (`int`) --
  The number of the Discussion / Pull Request.
- **repo_id** (`str`) --
  The id (`"{namespace}/{repo_name}"`) of the repo on which
  the Discussion / Pull Request was opened.
- **repo_type** (`str`) --
  The type of the repo on which the Discussion / Pull Request was opened.
  Possible values are: `"model"`, `"dataset"`, `"space"`.
- **author** (`str`) --
  The username of the Discussion / Pull Request author.
  Can be `"deleted"` if the user has been deleted since.
- **is_pull_request** (`bool`) --
  Whether or not this is a Pull Request.
- **created_at** (`datetime`) --
  The `datetime` of creation of the Discussion / Pull Request.
- **endpoint** (`str`) --
  Endpoint of the Hub. Default is https://huggingface.co.
- **git_reference** (`str`, *optional*) --
  (property) Git reference to which changes can be pushed if this is a Pull Request, `None` otherwise.
- **url** (`str`) --
  (property) URL of the discussion on the Hub.</paramsdesc><paramgroups>0</paramgroups></docstring>

A Discussion or Pull Request on the Hub.

This dataclass is not intended to be instantiated directly.




</div>
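A stripped-down stdlib sketch of the data shape can help when reasoning about these objects. The URL format below is an assumption for illustration only (model repos at the root, other repo types under a `datasets/` or `spaces/` prefix); the real class is returned by `HfApi` methods and is not meant to be instantiated directly:

```python
# Hypothetical mini version of the Discussion data shape (illustration only).
from dataclasses import dataclass


@dataclass
class MiniDiscussion:
    title: str
    status: str  # "open" | "closed" | "merged" | "draft"
    num: int
    repo_id: str  # "{namespace}/{repo_name}"
    repo_type: str  # "model" | "dataset" | "space"
    is_pull_request: bool
    endpoint: str = "https://huggingface.co"

    @property
    def url(self) -> str:
        # Assumed layout: model repos live at the root, others under a prefix.
        prefix = "" if self.repo_type == "model" else f"{self.repo_type}s/"
        return f"{self.endpoint}/{prefix}{self.repo_id}/discussions/{self.num}"


d = MiniDiscussion("Add model card", "open", 2, "user/repo", "model", False)
```

With these assumptions, `d.url` resolves to `https://huggingface.co/user/repo/discussions/2`.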

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.DiscussionWithDetails</name><anchor>huggingface_hub.DiscussionWithDetails</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/community.py#L88</source><parameters>[{"name": "title", "val": ": str"}, {"name": "status", "val": ": typing.Literal['open', 'closed', 'merged', 'draft']"}, {"name": "num", "val": ": int"}, {"name": "repo_id", "val": ": str"}, {"name": "repo_type", "val": ": str"}, {"name": "author", "val": ": str"}, {"name": "is_pull_request", "val": ": bool"}, {"name": "created_at", "val": ": datetime"}, {"name": "endpoint", "val": ": str"}, {"name": "events", "val": ": list"}, {"name": "conflicting_files", "val": ": typing.Union[list[str], bool, NoneType]"}, {"name": "target_branch", "val": ": typing.Optional[str]"}, {"name": "merge_commit_oid", "val": ": typing.Optional[str]"}, {"name": "diff", "val": ": typing.Optional[str]"}]</parameters><paramsdesc>- **title** (`str`) --
  The title of the Discussion / Pull Request
- **status** (`str`) --
  The status of the Discussion / Pull Request.
  It can be one of:
  * `"open"`
  * `"closed"`
  * `"merged"` (only for Pull Requests)
  * `"draft"` (only for Pull Requests)
- **num** (`int`) --
  The number of the Discussion / Pull Request.
- **repo_id** (`str`) --
  The id (`"{namespace}/{repo_name}"`) of the repo on which
  the Discussion / Pull Request was opened.
- **repo_type** (`str`) --
  The type of the repo on which the Discussion / Pull Request was opened.
  Possible values are: `"model"`, `"dataset"`, `"space"`.
- **author** (`str`) --
  The username of the Discussion / Pull Request author.
  Can be `"deleted"` if the user has been deleted since.
- **is_pull_request** (`bool`) --
  Whether or not this is a Pull Request.
- **created_at** (`datetime`) --
  The `datetime` of creation of the Discussion / Pull Request.
- **events** (`list` of [DiscussionEvent](/docs/huggingface_hub/main/ko/package_reference/community#huggingface_hub.DiscussionEvent)) --
  The list of `DiscussionEvents` in this Discussion or Pull Request.
- **conflicting_files** (`Union[list[str], bool, None]`, *optional*) --
  A list of conflicting files if this is a Pull Request.
  `None` if `self.is_pull_request` is `False`.
  `True` if there are conflicting files but the list can't be retrieved.
- **target_branch** (`str`, *optional*) --
  The branch into which changes are to be merged if this is a
  Pull Request. `None` if `self.is_pull_request` is `False`.
- **merge_commit_oid** (`str`, *optional*) --
  If this is a merged Pull Request, this is set to the OID / SHA of
  the merge commit, `None` otherwise.
- **diff** (`str`, *optional*) --
  The git diff if this is a Pull Request, `None` otherwise.
- **endpoint** (`str`) --
  Endpoint of the Hub. Default is https://huggingface.co.
- **git_reference** (`str`, *optional*) --
  (property) Git reference to which changes can be pushed if this is a Pull Request, `None` otherwise.
- **url** (`str`) --
  (property) URL of the discussion on the Hub.</paramsdesc><paramgroups>0</paramgroups></docstring>

Subclass of [Discussion](/docs/huggingface_hub/main/ko/package_reference/community#huggingface_hub.Discussion).




</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.DiscussionEvent</name><anchor>huggingface_hub.DiscussionEvent</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/community.py#L155</source><parameters>[{"name": "id", "val": ": str"}, {"name": "type", "val": ": str"}, {"name": "created_at", "val": ": datetime"}, {"name": "author", "val": ": str"}, {"name": "_event", "val": ": dict"}]</parameters><paramsdesc>- **id** (`str`) --
  The ID of the event. A hexadecimal string.
- **type** (`str`) --
  The type of the event.
- **created_at** (`datetime`) --
  A [`datetime`](https://docs.python.org/3/library/datetime.html?highlight=datetime#datetime.datetime)
  object holding the creation timestamp for the event.
- **author** (`str`) --
  The username of the Discussion / Pull Request author.
  Can be `"deleted"` if the user has been deleted since.</paramsdesc><paramgroups>0</paramgroups></docstring>

An event in a Discussion or Pull Request.

Use one of its concrete subclasses:
* [DiscussionComment](/docs/huggingface_hub/main/ko/package_reference/community#huggingface_hub.DiscussionComment)
* [DiscussionStatusChange](/docs/huggingface_hub/main/ko/package_reference/community#huggingface_hub.DiscussionStatusChange)
* [DiscussionCommit](/docs/huggingface_hub/main/ko/package_reference/community#huggingface_hub.DiscussionCommit)
* [DiscussionTitleChange](/docs/huggingface_hub/main/ko/package_reference/community#huggingface_hub.DiscussionTitleChange)
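In practice, one often dispatches on an event's `type` field to handle each concrete kind. The sketch below illustrates that pattern with stdlib code; the type strings and field names used here are assumptions for illustration, not the library's guaranteed wire format:

```python
# Hedged sketch: dispatch a raw event dict to a handler by its "type" field.
# The type strings ("comment", "status-change", ...) are illustrative assumptions.
def describe_event(event: dict) -> str:
    handlers = {
        "comment": lambda e: f"comment by {e['author']}",
        "status-change": lambda e: f"status changed to {e['new_status']}",
        "commit": lambda e: f"commit {e['oid'][:7]}",
        "title-change": lambda e: f"renamed to {e['new_title']}",
    }
    handler = handlers.get(event["type"])
    return handler(event) if handler else f"unknown event {event['type']!r}"


msg = describe_event({"type": "status-change", "new_status": "merged"})
```

Unknown types fall through to a generic message rather than raising, which keeps the dispatcher forward-compatible with new event kinds.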




</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.DiscussionComment</name><anchor>huggingface_hub.DiscussionComment</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/community.py#L188</source><parameters>[{"name": "id", "val": ": str"}, {"name": "type", "val": ": str"}, {"name": "created_at", "val": ": datetime"}, {"name": "author", "val": ": str"}, {"name": "_event", "val": ": dict"}, {"name": "content", "val": ": str"}, {"name": "edited", "val": ": bool"}, {"name": "hidden", "val": ": bool"}]</parameters><paramsdesc>- **id** (`str`) --
  The ID of the event. A hexadecimal string.
- **type** (`str`) --
  The type of the event.
- **created_at** (`datetime`) --
  A [`datetime`](https://docs.python.org/3/library/datetime.html?highlight=datetime#datetime.datetime)
  object holding the creation timestamp for the event.
- **author** (`str`) --
  The username of the Discussion / Pull Request author.
  Can be `"deleted"` if the user has been deleted since.
- **content** (`str`) --
  The raw markdown content of the comment. Mentions, links and images are not rendered.
- **edited** (`bool`) --
  Whether or not this comment has been edited.
- **hidden** (`bool`) --
  Whether or not this comment has been hidden.</paramsdesc><paramgroups>0</paramgroups></docstring>
A comment in a Discussion / Pull Request.

Subclass of [DiscussionEvent](/docs/huggingface_hub/main/ko/package_reference/community#huggingface_hub.DiscussionEvent).





</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.DiscussionStatusChange</name><anchor>huggingface_hub.DiscussionStatusChange</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/community.py#L243</source><parameters>[{"name": "id", "val": ": str"}, {"name": "type", "val": ": str"}, {"name": "created_at", "val": ": datetime"}, {"name": "author", "val": ": str"}, {"name": "_event", "val": ": dict"}, {"name": "new_status", "val": ": str"}]</parameters><paramsdesc>- **id** (`str`) --
  The ID of the event. A hexadecimal string.
- **type** (`str`) --
  The type of the event.
- **created_at** (`datetime`) --
  A [`datetime`](https://docs.python.org/3/library/datetime.html?highlight=datetime#datetime.datetime)
  object holding the creation timestamp for the event.
- **author** (`str`) --
  The username of the Discussion / Pull Request author.
  Can be `"deleted"` if the user has been deleted since.
- **new_status** (`str`) --
  The status of the Discussion / Pull Request after the change.
  It can be one of:
  * `"open"`
  * `"closed"`
  * `"merged"` (only for Pull Requests)
A change of status in a Discussion / Pull Request.

Subclass of [DiscussionEvent](/docs/huggingface_hub/main/ko/package_reference/community#huggingface_hub.DiscussionEvent).




</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.DiscussionCommit</name><anchor>huggingface_hub.DiscussionCommit</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/community.py#L271</source><parameters>[{"name": "id", "val": ": str"}, {"name": "type", "val": ": str"}, {"name": "created_at", "val": ": datetime"}, {"name": "author", "val": ": str"}, {"name": "_event", "val": ": dict"}, {"name": "summary", "val": ": str"}, {"name": "oid", "val": ": str"}]</parameters><paramsdesc>- **id** (`str`) --
  The ID of the event. A hexadecimal string.
- **type** (`str`) --
  The type of the event.
- **created_at** (`datetime`) --
  A [`datetime`](https://docs.python.org/3/library/datetime.html?highlight=datetime#datetime.datetime)
  object holding the creation timestamp for the event.
- **author** (`str`) --
  The username of the Discussion / Pull Request author.
  Can be `"deleted"` if the user has been deleted since.
- **summary** (`str`) --
  The summary of the commit.
- **oid** (`str`) --
  The OID / SHA of the commit, as a hexadecimal string.</paramsdesc><paramgroups>0</paramgroups></docstring>
A commit in a Pull Request.

Subclass of [DiscussionEvent](/docs/huggingface_hub/main/ko/package_reference/community#huggingface_hub.DiscussionEvent).




</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.DiscussionTitleChange</name><anchor>huggingface_hub.DiscussionTitleChange</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/community.py#L298</source><parameters>[{"name": "id", "val": ": str"}, {"name": "type", "val": ": str"}, {"name": "created_at", "val": ": datetime"}, {"name": "author", "val": ": str"}, {"name": "_event", "val": ": dict"}, {"name": "old_title", "val": ": str"}, {"name": "new_title", "val": ": str"}]</parameters><paramsdesc>- **id** (`str`) --
  The ID of the event. A hexadecimal string.
- **type** (`str`) --
  The type of the event.
- **created_at** (`datetime`) --
  A [`datetime`](https://docs.python.org/3/library/datetime.html?highlight=datetime#datetime.datetime)
  object holding the creation timestamp for the event.
- **author** (`str`) --
  The username of the Discussion / Pull Request author.
  Can be `"deleted"` if the user has been deleted since.
- **old_title** (`str`) --
  The previous title for the Discussion / Pull Request.
- **new_title** (`str`) --
  The new title.</paramsdesc><paramgroups>0</paramgroups></docstring>
A rename event in a Discussion / Pull Request.

Subclass of [DiscussionEvent](/docs/huggingface_hub/main/ko/package_reference/community#huggingface_hub.DiscussionEvent).




</div>

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/package_reference/community.md" />

### Filesystem API[[filesystem-api]]
https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/hf_file_system.md

# Filesystem API[[filesystem-api]]

The [HfFileSystem](/docs/huggingface_hub/main/ko/package_reference/hf_file_system#huggingface_hub.HfFileSystem) class provides a pythonic file interface to the Hugging Face Hub based on [`fsspec`](https://filesystem-spec.readthedocs.io/en/latest/).

## HfFileSystem[[huggingface_hub.HfFileSystem]]

[HfFileSystem](/docs/huggingface_hub/main/ko/package_reference/hf_file_system#huggingface_hub.HfFileSystem) is based on [`fsspec`](https://filesystem-spec.readthedocs.io/en/latest/), so it is compatible with most of the APIs it offers. For more details, check out [the guide](../guides/hf_file_system) and fsspec's [API reference](https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.spec.AbstractFileSystem).

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.HfFileSystem</name><anchor>huggingface_hub.HfFileSystem</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_file_system.py#L59</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **token** (`str` or `bool`, *optional*) --
  A valid user access token (string). Defaults to the locally saved
  token, which is the recommended method for authentication (see
  https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
  To disable authentication, pass `False`.
- **endpoint** (`str`, *optional*) --
  Endpoint of the Hub. Defaults to <https://huggingface.co>.</paramsdesc><paramgroups>0</paramgroups></docstring>

Access a remote Hugging Face Hub repository as if it were a local file system.

> [!WARNING]
> [HfFileSystem](/docs/huggingface_hub/main/ko/package_reference/hf_file_system#huggingface_hub.HfFileSystem) provides fsspec compatibility, which is useful for libraries that require it (e.g., reading
>     Hugging Face datasets directly with `pandas`). However, it introduces additional overhead due to this compatibility
>     layer. For better performance and reliability, it's recommended to use `HfApi` methods when possible.



<ExampleCodeBlock anchor="huggingface_hub.HfFileSystem.example">

Usage:

```python
>>> from huggingface_hub import HfFileSystem

>>> fs = HfFileSystem()

>>> # List files
>>> fs.glob("my-username/my-model/*.bin")
['my-username/my-model/pytorch_model.bin']
>>> fs.ls("datasets/my-username/my-dataset", detail=False)
['datasets/my-username/my-dataset/.gitattributes', 'datasets/my-username/my-dataset/README.md', 'datasets/my-username/my-dataset/data.json']

>>> # Read/write files
>>> with fs.open("my-username/my-model/pytorch_model.bin") as f:
...     data = f.read()
>>> with fs.open("my-username/my-model/pytorch_model.bin", "wb") as f:
...     f.write(data)
```

</ExampleCodeBlock>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__init__</name><anchor>huggingface_hub.HfFileSystem.__init__</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_file_system.py#L100</source><parameters>[{"name": "*args", "val": ""}, {"name": "endpoint", "val": ": typing.Optional[str] = None"}, {"name": "token", "val": ": typing.Union[bool, str, NoneType] = None"}, {"name": "block_size", "val": ": typing.Optional[int] = None"}, {"name": "**storage_options", "val": ""}]</parameters></docstring>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>resolve_path</name><anchor>huggingface_hub.HfFileSystem.resolve_path</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_file_system.py#L141</source><parameters>[{"name": "path", "val": ": str"}, {"name": "revision", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **path** (`str`) --
  Path to resolve.
- **revision** (`str`, *optional*) --
  The revision of the repo to resolve. Defaults to the revision specified in the path.</paramsdesc><paramgroups>0</paramgroups><rettype>`HfFileSystemResolvedPath`</rettype><retdesc>Resolved path information containing `repo_type`, `repo_id`, `revision` and `path_in_repo`.</retdesc><raises>- ``ValueError`` -- 
  If path contains conflicting revision information.
- ``NotImplementedError`` -- 
  If trying to list repositories.</raises><raisederrors>``ValueError`` or ``NotImplementedError``</raisederrors></docstring>

Resolve a Hugging Face file system path into its components.
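As a rough illustration of what `resolve_path` does, here is a simplified, self-contained sketch of the path convention. This is not the real implementation: the actual method also validates the repo against the Hub and handles URL-quoted revisions, and `resolve_path_sketch`/`ResolvedPath` are hypothetical names used for the example.

```python
# Simplified sketch of the path convention used by HfFileSystem paths.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ResolvedPath:
    """Mirrors the fields documented for `HfFileSystemResolvedPath`."""
    repo_type: str
    repo_id: str
    revision: str
    path_in_repo: str

def resolve_path_sketch(path: str, revision: Optional[str] = None) -> ResolvedPath:
    # An optional "datasets/" or "spaces/" prefix selects the repo type.
    repo_type = "model"
    for prefix, rtype in (("datasets/", "dataset"), ("spaces/", "space")):
        if path.startswith(prefix):
            repo_type, path = rtype, path[len(prefix):]
            break
    # "namespace/repo@revision/path/in/repo"
    namespace, _, rest = path.partition("/")
    repo_name, _, path_in_repo = rest.partition("/")
    if "@" in repo_name:
        repo_name, _, path_revision = repo_name.partition("@")
        if revision is not None and revision != path_revision:
            raise ValueError("Revision specified in path and in `revision` argument conflict.")
        revision = path_revision
    return ResolvedPath(repo_type, f"{namespace}/{repo_name}", revision or "main", path_in_repo)

resolved = resolve_path_sketch("datasets/my-username/my-dataset@main/data.json")
# ResolvedPath(repo_type='dataset', repo_id='my-username/my-dataset',
#              revision='main', path_in_repo='data.json')
```

Note that the real method additionally raises `NotImplementedError` when asked to list repositories themselves, which this sketch does not model.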












</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>ls</name><anchor>huggingface_hub.HfFileSystem.ls</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_file_system.py#L341</source><parameters>[{"name": "path", "val": ": str"}, {"name": "detail", "val": ": bool = True"}, {"name": "refresh", "val": ": bool = False"}, {"name": "revision", "val": ": typing.Optional[str] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **path** (`str`) --
  Path to the directory.
- **detail** (`bool`, *optional*) --
  If True, returns a list of dictionaries containing file information. If False,
  returns a list of file paths. Defaults to True.
- **refresh** (`bool`, *optional*) --
  If True, bypass the cache and fetch the latest data. Defaults to False.
- **revision** (`str`, *optional*) --
  The git revision to list from.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[Union[str, dict[str, Any]]]`</rettype><retdesc>List of file paths (if detail=False) or list of file information
dictionaries (if detail=True).</retdesc></docstring>

List the contents of a directory.

For more details, refer to [fsspec documentation](https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.spec.AbstractFileSystem.ls).

> [!WARNING]
> When possible, use `HfApi.list_repo_tree()` for better performance.








</div></div>

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/package_reference/hf_file_system.md" />

### Inference Endpoints [[inference-endpoints]]
https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/inference_endpoints.md

# Inference Endpoints [[inference-endpoints]]

Inference Endpoints, managed by Hugging Face, let you deploy models easily and securely. They are built on top of models hosted on the [Hub](https://huggingface.co/models). This page is a reference for the integration between `huggingface_hub` and Inference Endpoints; for more detailed information, check out the [official documentation](https://huggingface.co/docs/inference-endpoints/index).

> [!TIP]
> To learn how to manage Inference Endpoints programmatically with `huggingface_hub`, check out the [dedicated guide](../guides/inference_endpoints).

Inference Endpoints can easily be accessed through an API. The endpoints are documented with [Swagger](https://api.endpoints.huggingface.cloud/), and the [InferenceEndpoint](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint) class is a simple wrapper built on top of this API.

## Methods [[methods]]

The following Inference Endpoint features are implemented in [HfApi](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi):

- [get_inference_endpoint()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.get_inference_endpoint) and [list_inference_endpoints()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.list_inference_endpoints) to get information about your Inference Endpoints.
- [create_inference_endpoint()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_inference_endpoint), [update_inference_endpoint()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.update_inference_endpoint) and [delete_inference_endpoint()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.delete_inference_endpoint) to deploy and manage Inference Endpoints.
- [pause_inference_endpoint()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.pause_inference_endpoint) and [resume_inference_endpoint()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.resume_inference_endpoint) to pause or resume an Inference Endpoint.
- [scale_to_zero_inference_endpoint()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.scale_to_zero_inference_endpoint) to scale an Inference Endpoint to zero replicas.

## InferenceEndpoint[[huggingface_hub.InferenceEndpoint]]

The base data class is [InferenceEndpoint](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint). It contains information about a deployed `InferenceEndpoint`, including its configuration and current state. Once deployed, you can run inference on the Endpoint with [InferenceEndpoint.client](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.client) and [InferenceEndpoint.async_client](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.async_client), which return an [InferenceClient](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient) and an [AsyncInferenceClient](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.AsyncInferenceClient) object respectively.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.InferenceEndpoint</name><anchor>huggingface_hub.InferenceEndpoint</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_inference_endpoints.py#L38</source><parameters>[{"name": "namespace", "val": ": str"}, {"name": "raw", "val": ": dict"}, {"name": "_token", "val": ": typing.Union[str, bool, NoneType]"}, {"name": "_api", "val": ": HfApi"}]</parameters><paramsdesc>- **name** (`str`) --
  The unique name of the Inference Endpoint.
- **namespace** (`str`) --
  The namespace where the Inference Endpoint is located.
- **repository** (`str`) --
  The name of the model repository deployed on this Inference Endpoint.
- **status** ([InferenceEndpointStatus](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpointStatus)) --
  The current status of the Inference Endpoint.
- **url** (`str`, *optional*) --
  The URL of the Inference Endpoint, if available. Only a deployed Inference Endpoint will have a URL.
- **framework** (`str`) --
  The machine learning framework used for the model.
- **revision** (`str`) --
  The specific model revision deployed on the Inference Endpoint.
- **task** (`str`) --
  The task associated with the deployed model.
- **created_at** (`datetime.datetime`) --
  The timestamp when the Inference Endpoint was created.
- **updated_at** (`datetime.datetime`) --
  The timestamp of the last update of the Inference Endpoint.
- **type** ([InferenceEndpointType](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpointType)) --
  The type of the Inference Endpoint (public, protected, private).
- **raw** (`dict`) --
  The raw dictionary data returned from the API.
- **token** (`str` or `bool`, *optional*) --
  Authentication token for the Inference Endpoint, if set when requesting the API. Will default to the
  locally saved token if not provided. Pass `token=False` if you don't want to send your token to the server.</paramsdesc><paramgroups>0</paramgroups></docstring>

Contains information about a deployed Inference Endpoint.



<ExampleCodeBlock anchor="huggingface_hub.InferenceEndpoint.example">

Example:
```python
>>> from huggingface_hub import get_inference_endpoint
>>> endpoint = get_inference_endpoint("my-text-to-image")
>>> endpoint
InferenceEndpoint(name='my-text-to-image', ...)

# Get status
>>> endpoint.status
'running'
>>> endpoint.url
'https://my-text-to-image.region.vendor.endpoints.huggingface.cloud'

# Run inference
>>> endpoint.client.text_to_image(...)

# Pause endpoint to save $$$
>>> endpoint.pause()

# ...
# Resume and wait for deployment
>>> endpoint.resume()
>>> endpoint.wait()
>>> endpoint.client.text_to_image(...)
```

</ExampleCodeBlock>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_raw</name><anchor>huggingface_hub.InferenceEndpoint.from_raw</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_inference_endpoints.py#L121</source><parameters>[{"name": "raw", "val": ": dict"}, {"name": "namespace", "val": ": str"}, {"name": "token", "val": ": typing.Union[str, bool, NoneType] = None"}, {"name": "api", "val": ": typing.Optional[ForwardRef('HfApi')] = None"}]</parameters></docstring>
Initialize object from raw dictionary.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>client</name><anchor>huggingface_hub.InferenceEndpoint.client</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_inference_endpoints.py#L140</source><parameters>[]</parameters><rettype>[InferenceClient](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient)</rettype><retdesc>an inference client pointing to the deployed endpoint.</retdesc><raises>- [InferenceEndpointError](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpointError) -- If the Inference Endpoint is not yet deployed.</raises><raisederrors>[InferenceEndpointError](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpointError)</raisederrors></docstring>
Returns a client to make predictions on this Inference Endpoint.










</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>async_client</name><anchor>huggingface_hub.InferenceEndpoint.async_client</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_inference_endpoints.py#L162</source><parameters>[]</parameters><rettype>[AsyncInferenceClient](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.AsyncInferenceClient)</rettype><retdesc>an asyncio-compatible inference client pointing to the deployed endpoint.</retdesc><raises>- [InferenceEndpointError](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpointError) -- If the Inference Endpoint is not yet deployed.</raises><raisederrors>[InferenceEndpointError](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpointError)</raisederrors></docstring>
Returns a client to make predictions on this Inference Endpoint.










</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete</name><anchor>huggingface_hub.InferenceEndpoint.delete</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_inference_endpoints.py#L385</source><parameters>[]</parameters></docstring>
Delete the Inference Endpoint.

This operation is not reversible. If you don't want to be charged for an Inference Endpoint, it is preferable
to pause it with [InferenceEndpoint.pause()](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.pause) or scale it to zero with [InferenceEndpoint.scale_to_zero()](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.scale_to_zero).

This is an alias for [HfApi.delete_inference_endpoint()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.delete_inference_endpoint).


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fetch</name><anchor>huggingface_hub.InferenceEndpoint.fetch</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_inference_endpoints.py#L237</source><parameters>[]</parameters><rettype>[InferenceEndpoint](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint)</rettype><retdesc>the same Inference Endpoint, mutated in place with the latest data.</retdesc></docstring>
Fetch latest information about the Inference Endpoint.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>pause</name><anchor>huggingface_hub.InferenceEndpoint.pause</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_inference_endpoints.py#L328</source><parameters>[]</parameters><rettype>[InferenceEndpoint](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint)</rettype><retdesc>the same Inference Endpoint, mutated in place with the latest data.</retdesc></docstring>
Pause the Inference Endpoint.

A paused Inference Endpoint will not be charged. It can be resumed at any time using [InferenceEndpoint.resume()](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.resume).
This is different from scaling the Inference Endpoint to zero with [InferenceEndpoint.scale_to_zero()](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.scale_to_zero), in which
case the Endpoint is automatically restarted when a request is made to it.

This is an alias for [HfApi.pause_inference_endpoint()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.pause_inference_endpoint). The current object is mutated in place with the
latest data from the server.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>resume</name><anchor>huggingface_hub.InferenceEndpoint.resume</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_inference_endpoints.py#L346</source><parameters>[{"name": "running_ok", "val": ": bool = True"}]</parameters><paramsdesc>- **running_ok** (`bool`, *optional*) --
  If `True`, the method will not raise an error if the Inference Endpoint is already running. Defaults to
  `True`.</paramsdesc><paramgroups>0</paramgroups><rettype>[InferenceEndpoint](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint)</rettype><retdesc>the same Inference Endpoint, mutated in place with the latest data.</retdesc></docstring>
Resume the Inference Endpoint.

This is an alias for [HfApi.resume_inference_endpoint()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.resume_inference_endpoint). The current object is mutated in place with the
latest data from the server.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_to_zero</name><anchor>huggingface_hub.InferenceEndpoint.scale_to_zero</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_inference_endpoints.py#L367</source><parameters>[]</parameters><rettype>[InferenceEndpoint](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint)</rettype><retdesc>the same Inference Endpoint, mutated in place with the latest data.</retdesc></docstring>
Scale Inference Endpoint to zero.

An Inference Endpoint scaled to zero will not be charged. It will be resumed on the next request to it, with a
cold start delay. This is different from pausing the Inference Endpoint with [InferenceEndpoint.pause()](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.pause), which
would require a manual resume with [InferenceEndpoint.resume()](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.resume).

This is an alias for [HfApi.scale_to_zero_inference_endpoint()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.scale_to_zero_inference_endpoint). The current object is mutated in place with the
latest data from the server.
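The pause / scale-to-zero distinction above can be summed up in a toy state machine. This is a hypothetical sketch for illustration only, not part of `huggingface_hub`: paused endpoints need an explicit `resume()`, while endpoints scaled to zero restart on the next request after a cold start.

```python
# Toy model of the billing/restart semantics described above.
class ToyEndpoint:
    def __init__(self) -> None:
        self.state = "running"

    def pause(self) -> None:
        self.state = "paused"        # not billed; manual resume required

    def scale_to_zero(self) -> None:
        self.state = "scaledToZero"  # not billed; auto-restarts on request

    def resume(self) -> None:
        self.state = "running"

    def handle_request(self) -> str:
        if self.state == "scaledToZero":
            self.state = "running"   # cold start, then serve
            return "served (after cold start)"
        if self.state == "paused":
            return "error: endpoint is paused, call resume() first"
        return "served"

endpoint = ToyEndpoint()
endpoint.scale_to_zero()
print(endpoint.handle_request())  # served (after cold start)
endpoint.pause()
print(endpoint.handle_request())  # error: endpoint is paused, call resume() first
```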






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>update</name><anchor>huggingface_hub.InferenceEndpoint.update</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_inference_endpoints.py#L248</source><parameters>[{"name": "accelerator", "val": ": typing.Optional[str] = None"}, {"name": "instance_size", "val": ": typing.Optional[str] = None"}, {"name": "instance_type", "val": ": typing.Optional[str] = None"}, {"name": "min_replica", "val": ": typing.Optional[int] = None"}, {"name": "max_replica", "val": ": typing.Optional[int] = None"}, {"name": "scale_to_zero_timeout", "val": ": typing.Optional[int] = None"}, {"name": "repository", "val": ": typing.Optional[str] = None"}, {"name": "framework", "val": ": typing.Optional[str] = None"}, {"name": "revision", "val": ": typing.Optional[str] = None"}, {"name": "task", "val": ": typing.Optional[str] = None"}, {"name": "custom_image", "val": ": typing.Optional[dict] = None"}, {"name": "secrets", "val": ": typing.Optional[dict[str, str]] = None"}]</parameters><paramsdesc>- **accelerator** (`str`, *optional*) --
  The hardware accelerator to be used for inference (e.g. `"cpu"`).
- **instance_size** (`str`, *optional*) --
  The size or type of the instance to be used for hosting the model (e.g. `"x4"`).
- **instance_type** (`str`, *optional*) --
  The cloud instance type where the Inference Endpoint will be deployed (e.g. `"intel-icl"`).
- **min_replica** (`int`, *optional*) --
  The minimum number of replicas (instances) to keep running for the Inference Endpoint.
- **max_replica** (`int`, *optional*) --
  The maximum number of replicas (instances) to scale to for the Inference Endpoint.
- **scale_to_zero_timeout** (`int`, *optional*) --
  The duration in minutes before an inactive endpoint is scaled to zero.

- **repository** (`str`, *optional*) --
  The name of the model repository associated with the Inference Endpoint (e.g. `"gpt2"`).
- **framework** (`str`, *optional*) --
  The machine learning framework used for the model (e.g. `"custom"`).
- **revision** (`str`, *optional*) --
  The specific model revision to deploy on the Inference Endpoint (e.g. `"6c0e6080953db56375760c0471a8c5f2929baf11"`).
- **task** (`str`, *optional*) --
  The task on which to deploy the model (e.g. `"text-classification"`).
- **custom_image** (`dict`, *optional*) --
  A custom Docker image to use for the Inference Endpoint. This is useful if you want to deploy an
  Inference Endpoint running on the `text-generation-inference` (TGI) framework (see examples).
- **secrets** (`dict[str, str]`, *optional*) --
  Secret values to inject in the container environment.</paramsdesc><paramgroups>0</paramgroups><rettype>[InferenceEndpoint](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint)</rettype><retdesc>the same Inference Endpoint, mutated in place with the latest data.</retdesc></docstring>
Update the Inference Endpoint.

This method allows the update of either the compute configuration, the deployed model, or both. All arguments are
optional but at least one must be provided.

This is an alias for [HfApi.update_inference_endpoint()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.update_inference_endpoint). The current object is mutated in place with the
latest data from the server.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>wait</name><anchor>huggingface_hub.InferenceEndpoint.wait</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_inference_endpoints.py#L184</source><parameters>[{"name": "timeout", "val": ": typing.Optional[int] = None"}, {"name": "refresh_every", "val": ": int = 5"}]</parameters><paramsdesc>- **timeout** (`int`, *optional*) --
  The maximum time to wait for the Inference Endpoint to be deployed, in seconds. If `None`, will wait
  indefinitely.
- **refresh_every** (`int`, *optional*) --
  The time to wait between each fetch of the Inference Endpoint status, in seconds. Defaults to 5s.</paramsdesc><paramgroups>0</paramgroups><rettype>[InferenceEndpoint](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint)</rettype><retdesc>the same Inference Endpoint, mutated in place with the latest data.</retdesc><raises>- [InferenceEndpointError](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpointError) -- 
  If the Inference Endpoint ended up in a failed state.
- `InferenceEndpointTimeoutError` -- 
  If the Inference Endpoint is not deployed after `timeout` seconds.</raises><raisederrors>[InferenceEndpointError](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpointError) or `InferenceEndpointTimeoutError`</raisederrors></docstring>
Wait for the Inference Endpoint to be deployed.

Information from the server is fetched every `refresh_every` seconds (5s by default). If the Inference Endpoint is not deployed after `timeout`
seconds, an `InferenceEndpointTimeoutError` will be raised. The [InferenceEndpoint](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint) will be mutated in place with the latest
data.
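The waiting behavior amounts to a simple polling loop. A minimal sketch of that logic follows; `wait_for_deployment`, `EndpointTimeoutError` and the status strings are assumptions for illustration, not the library's internals.

```python
import time

class EndpointTimeoutError(Exception):
    """Stand-in for `InferenceEndpointTimeoutError`."""

def wait_for_deployment(fetch_status, timeout=None, refresh_every=5):
    # Poll `fetch_status()` until it reports "running", fail on "failed",
    # and give up after `timeout` seconds (None means wait indefinitely).
    start = time.time()
    while True:
        status = fetch_status()
        if status == "running":
            return status
        if status == "failed":
            raise RuntimeError("Endpoint ended up in a failed state.")
        if timeout is not None and time.time() - start > timeout:
            raise EndpointTimeoutError("Endpoint not deployed after timeout.")
        time.sleep(refresh_every)

# Simulate a deployment that becomes ready on the third poll:
statuses = iter(["pending", "initializing", "running"])
assert wait_for_deployment(lambda: next(statuses), refresh_every=0) == "running"
```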












</div></div>

## InferenceEndpointStatus[[huggingface_hub.InferenceEndpointStatus]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.InferenceEndpointStatus</name><anchor>huggingface_hub.InferenceEndpointStatus</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_inference_endpoints.py#L20</source><parameters>[{"name": "value", "val": ""}, {"name": "names", "val": " = None"}, {"name": "module", "val": " = None"}, {"name": "qualname", "val": " = None"}, {"name": "type", "val": " = None"}, {"name": "start", "val": " = 1"}]</parameters></docstring>
An enumeration.

</div>

## InferenceEndpointType[[huggingface_hub.InferenceEndpointType]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.InferenceEndpointType</name><anchor>huggingface_hub.InferenceEndpointType</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_inference_endpoints.py#L31</source><parameters>[{"name": "value", "val": ""}, {"name": "names", "val": " = None"}, {"name": "module", "val": " = None"}, {"name": "qualname", "val": " = None"}, {"name": "type", "val": " = None"}, {"name": "start", "val": " = 1"}]</parameters></docstring>
An enumeration.

</div>

## InferenceEndpointError[[huggingface_hub.InferenceEndpointError]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.InferenceEndpointError</name><anchor>huggingface_hub.InferenceEndpointError</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L98</source><parameters>""</parameters></docstring>
Generic exception when dealing with Inference Endpoints.

</div>

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/package_reference/inference_endpoints.md" />

### Managing your Space runtime[[managing-your-space-runtime]]
https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/space_runtime.md

# Managing your Space runtime[[managing-your-space-runtime]]

Check the [HfApi](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi) page for a detailed description of the methods that manage a Space on the Hub.

- Duplicate a Space: [duplicate_space()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.duplicate_space)
- Fetch the current runtime: [get_space_runtime()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.get_space_runtime)
- Manage secrets: [add_space_secret()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.add_space_secret) and [delete_space_secret()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.delete_space_secret)
- Manage hardware: [request_space_hardware()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.request_space_hardware)
- Manage state: [pause_space()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.pause_space), [restart_space()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.restart_space), [set_space_sleep_time()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.set_space_sleep_time)

## Data structures[[data-structures]]

### SpaceRuntime[[huggingface_hub.SpaceRuntime]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.SpaceRuntime</name><anchor>huggingface_hub.SpaceRuntime</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_space_api.py#L103</source><parameters>[{"name": "data", "val": ": dict"}]</parameters><paramsdesc>- **stage** (`str`) --
  Current stage of the space. Example: RUNNING.
- **hardware** (`str` or `None`) --
  Current hardware of the space. Example: "cpu-basic". Can be `None` if Space
  is `BUILDING` for the first time.
- **requested_hardware** (`str` or `None`) --
  Requested hardware. Can be different than `hardware` especially if the request
  has just been made. Example: "t4-medium". Can be `None` if no hardware has
  been requested yet.
- **sleep_time** (`int` or `None`) --
  Number of seconds the Space will be kept alive after the last request. By default (if value is `None`), the
  Space will never go to sleep if it's running on an upgraded hardware, while it will go to sleep after 48
  hours on a free 'cpu-basic' hardware. For more details, see https://huggingface.co/docs/hub/spaces-gpus#sleep-time.
- **raw** (`dict`) --
  Raw response from the server. Contains more information about the Space
  runtime like number of replicas, number of cpu, memory size,...</paramsdesc><paramgroups>0</paramgroups></docstring>

Contains information about the current runtime of a Space.




</div>

### SpaceHardware[[huggingface_hub.SpaceHardware]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.SpaceHardware</name><anchor>huggingface_hub.SpaceHardware</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_space_api.py#L48</source><parameters>[{"name": "value", "val": ""}, {"name": "names", "val": " = None"}, {"name": "module", "val": " = None"}, {"name": "qualname", "val": " = None"}, {"name": "type", "val": " = None"}, {"name": "start", "val": " = 1"}]</parameters></docstring>

Enumeration of hardwares available to run your Space on the Hub.

<ExampleCodeBlock anchor="huggingface_hub.SpaceHardware.example">

Value can be compared to a string:
```py
assert SpaceHardware.CPU_BASIC == "cpu-basic"
```

</ExampleCodeBlock>
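The string comparison above works because these enums mix `str` into `Enum`. A minimal sketch of the pattern, assuming that design (member names other than `CPU_BASIC` are illustrative):

```python
from enum import Enum

class SpaceHardwareSketch(str, Enum):
    # Mixing in `str` makes each member compare equal to its string value.
    CPU_BASIC = "cpu-basic"
    T4_SMALL = "t4-small"

assert SpaceHardwareSketch.CPU_BASIC == "cpu-basic"
assert "t4" in SpaceHardwareSketch.T4_SMALL  # members behave like plain strings
```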

Taken from https://github.com/huggingface-internal/moon-landing/blob/main/server/repo_types/SpaceHardwareFlavor.ts (private url).


</div>

### SpaceStage[[huggingface_hub.SpaceStage]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.SpaceStage</name><anchor>huggingface_hub.SpaceStage</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_space_api.py#L23</source><parameters>[{"name": "value", "val": ""}, {"name": "names", "val": " = None"}, {"name": "module", "val": " = None"}, {"name": "qualname", "val": " = None"}, {"name": "type", "val": " = None"}, {"name": "start", "val": " = 1"}]</parameters></docstring>

Enumeration of possible stage of a Space on the Hub.

<ExampleCodeBlock anchor="huggingface_hub.SpaceStage.example">

Value can be compared to a string:
```py
assert SpaceStage.BUILDING == "BUILDING"
```

</ExampleCodeBlock>

Taken from https://github.com/huggingface/moon-landing/blob/main/server/repo_types/SpaceInfo.ts#L61 (private url).


</div>

### SpaceStorage[[huggingface_hub.SpaceStorage]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.SpaceStorage</name><anchor>huggingface_hub.SpaceStorage</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_space_api.py#L85</source><parameters>[{"name": "value", "val": ""}, {"name": "names", "val": " = None"}, {"name": "module", "val": " = None"}, {"name": "qualname", "val": " = None"}, {"name": "type", "val": " = None"}, {"name": "start", "val": " = 1"}]</parameters></docstring>

Enumeration of persistent storage available for your Space on the Hub.

<ExampleCodeBlock anchor="huggingface_hub.SpaceStorage.example">

Value can be compared to a string:
```py
assert SpaceStorage.SMALL == "small"
```

</ExampleCodeBlock>

Taken from https://github.com/huggingface/moon-landing/blob/main/server/repo_types/SpaceHardwareFlavor.ts#L24 (private url).


</div>

### SpaceVariable[[huggingface_hub.SpaceVariable]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.SpaceVariable</name><anchor>huggingface_hub.SpaceVariable</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_space_api.py#L143</source><parameters>[{"name": "key", "val": ": str"}, {"name": "values", "val": ": dict"}]</parameters><paramsdesc>- **key** (`str`) --
  Variable key. Example: `"MODEL_REPO_ID"`
- **value** (`str`) --
  Variable value. Example: `"the_model_repo_id"`.
- **description** (`str` or None) --
  Description of the variable. Example: `"Model Repo ID of the implemented model"`.
- **updatedAt** (`datetime` or None) --
  Datetime of the last update of the variable (if the variable has been updated at least once).</paramsdesc><paramgroups>0</paramgroups></docstring>

Contains information about the current variables of a Space.




</div>
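To make the field layout concrete, here is a hedged, stdlib-only sketch of how a `SpaceVariable`-like record could be built from a raw `(key, values)` payload. The `Variable` dataclass and `parse_variable` helper are hypothetical illustrations following the parameter list above, not the library's actual implementation:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical mirror of the documented fields: key, value, description, updatedAt.
@dataclass
class Variable:
    key: str
    value: str
    description: Optional[str]
    updated_at: Optional[datetime]

def parse_variable(key: str, values: dict) -> Variable:
    # `updatedAt` is only present if the variable was updated at least once;
    # assume an ISO-8601 timestamp when it is.
    raw = values.get("updatedAt")
    return Variable(
        key=key,
        value=values["value"],
        description=values.get("description"),
        updated_at=datetime.fromisoformat(raw) if raw else None,
    )

var = parse_variable("MODEL_REPO_ID", {"value": "the_model_repo_id"})
assert var.value == "the_model_repo_id"
assert var.updated_at is None
```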

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/package_reference/space_runtime.md" />

### Inference Types[[inference-types]]
https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/inference_types.md

# Inference Types[[inference-types]]

This page lists the types (e.g. dataclasses) available for each task supported on the Hugging Face Hub.
Each task is specified using a JSON schema, and the types are generated from these schemas, with some customization where Python requirements demand it.

To check the JSON schema of each task, see [@huggingface.js/tasks](https://github.com/huggingface/huggingface.js/tree/main/packages/tasks/src/tasks).

This part of the library is still under development and will be improved in future releases.
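The schema-to-dataclass correspondence can be sketched with the stdlib alone. The `OutputElement` class below is an illustrative stand-in whose shape mirrors `huggingface_hub.AudioClassificationOutputElement` (documented further down); the response payload is invented for the example:

```python
from dataclasses import dataclass

# Each task's JSON schema becomes a plain Python dataclass whose fields
# match the schema's properties (here: label and score).
@dataclass
class OutputElement:
    label: str
    score: float

# A typical classification response is a JSON array of such objects,
# which deserializes naturally into a list of dataclass instances:
response = [{"label": "dog", "score": 0.92}, {"label": "cat", "score": 0.05}]
elements = [OutputElement(**item) for item in response]
assert elements[0].label == "dog"
assert elements[0].score > elements[1].score
```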



## audio_classification[[huggingface_hub.AudioClassificationInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.AudioClassificationInput</name><anchor>huggingface_hub.AudioClassificationInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/audio_classification.py#L25</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.audio_classification.AudioClassificationParameters] = None"}]</parameters></docstring>
Inputs for Audio Classification inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.AudioClassificationOutputElement</name><anchor>huggingface_hub.AudioClassificationOutputElement</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/audio_classification.py#L37</source><parameters>[{"name": "label", "val": ": str"}, {"name": "score", "val": ": float"}]</parameters></docstring>
Outputs for Audio Classification inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.AudioClassificationParameters</name><anchor>huggingface_hub.AudioClassificationParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/audio_classification.py#L15</source><parameters>[{"name": "function_to_apply", "val": ": typing.Optional[ForwardRef('AudioClassificationOutputTransform')] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}]</parameters></docstring>
Additional inference parameters for Audio Classification

</div>

## audio_to_audio[[huggingface_hub.AudioToAudioInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.AudioToAudioInput</name><anchor>huggingface_hub.AudioToAudioInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/audio_to_audio.py#L12</source><parameters>[{"name": "inputs", "val": ": typing.Any"}]</parameters></docstring>
Inputs for Audio to Audio inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.AudioToAudioOutputElement</name><anchor>huggingface_hub.AudioToAudioOutputElement</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/audio_to_audio.py#L20</source><parameters>[{"name": "blob", "val": ": typing.Any"}, {"name": "content_type", "val": ": str"}, {"name": "label", "val": ": str"}]</parameters></docstring>
Outputs of inference for the Audio To Audio task:
a generated audio file with its label.


</div>

## automatic_speech_recognition[[huggingface_hub.AutomaticSpeechRecognitionGenerationParameters]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.AutomaticSpeechRecognitionGenerationParameters</name><anchor>huggingface_hub.AutomaticSpeechRecognitionGenerationParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/automatic_speech_recognition.py#L15</source><parameters>[{"name": "do_sample", "val": ": typing.Optional[bool] = None"}, {"name": "early_stopping", "val": ": typing.Union[bool, ForwardRef('AutomaticSpeechRecognitionEarlyStoppingEnum'), NoneType] = None"}, {"name": "epsilon_cutoff", "val": ": typing.Optional[float] = None"}, {"name": "eta_cutoff", "val": ": typing.Optional[float] = None"}, {"name": "max_length", "val": ": typing.Optional[int] = None"}, {"name": "max_new_tokens", "val": ": typing.Optional[int] = None"}, {"name": "min_length", "val": ": typing.Optional[int] = None"}, {"name": "min_new_tokens", "val": ": typing.Optional[int] = None"}, {"name": "num_beam_groups", "val": ": typing.Optional[int] = None"}, {"name": "num_beams", "val": ": typing.Optional[int] = None"}, {"name": "penalty_alpha", "val": ": typing.Optional[float] = None"}, {"name": "temperature", "val": ": typing.Optional[float] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}, {"name": "top_p", "val": ": typing.Optional[float] = None"}, {"name": "typical_p", "val": ": typing.Optional[float] = None"}, {"name": "use_cache", "val": ": typing.Optional[bool] = None"}]</parameters></docstring>
Parametrization of the text generation process

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.AutomaticSpeechRecognitionInput</name><anchor>huggingface_hub.AutomaticSpeechRecognitionInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/automatic_speech_recognition.py#L85</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.automatic_speech_recognition.AutomaticSpeechRecognitionParameters] = None"}]</parameters></docstring>
Inputs for Automatic Speech Recognition inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.AutomaticSpeechRecognitionOutput</name><anchor>huggingface_hub.AutomaticSpeechRecognitionOutput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/automatic_speech_recognition.py#L105</source><parameters>[{"name": "text", "val": ": str"}, {"name": "chunks", "val": ": typing.Optional[list[huggingface_hub.inference._generated.types.automatic_speech_recognition.AutomaticSpeechRecognitionOutputChunk]] = None"}]</parameters></docstring>
Outputs of inference for the Automatic Speech Recognition task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.AutomaticSpeechRecognitionOutputChunk</name><anchor>huggingface_hub.AutomaticSpeechRecognitionOutputChunk</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/automatic_speech_recognition.py#L97</source><parameters>[{"name": "text", "val": ": str"}, {"name": "timestamp", "val": ": list"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.AutomaticSpeechRecognitionParameters</name><anchor>huggingface_hub.AutomaticSpeechRecognitionParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/automatic_speech_recognition.py#L75</source><parameters>[{"name": "generation_parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.automatic_speech_recognition.AutomaticSpeechRecognitionGenerationParameters] = None"}, {"name": "return_timestamps", "val": ": typing.Optional[bool] = None"}]</parameters></docstring>
Additional inference parameters for Automatic Speech Recognition

</div>
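To illustrate the output shape documented above (a full transcript plus optional per-chunk timestamps), here is a hedged stdlib-only sketch. The `Chunk`/`ASROutput` dataclasses mirror `AutomaticSpeechRecognitionOutputChunk` and `AutomaticSpeechRecognitionOutput`; the payload values are invented for the example:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Chunk:
    text: str
    timestamp: list  # assumed [start, end] in seconds, per chunk

@dataclass
class ASROutput:
    text: str
    chunks: Optional[list] = None  # only populated with return_timestamps=True

# Illustrative payload for a recognizer run with timestamps enabled:
raw = {
    "text": "hello world",
    "chunks": [
        {"text": "hello", "timestamp": [0.0, 0.5]},
        {"text": "world", "timestamp": [0.5, 1.0]},
    ],
}
out = ASROutput(
    text=raw["text"],
    chunks=[Chunk(**c) for c in raw.get("chunks") or []],
)
assert out.text == "hello world"
assert out.chunks[1].timestamp == [0.5, 1.0]
```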

## chat_completion[[huggingface_hub.ChatCompletionInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionInput</name><anchor>huggingface_hub.ChatCompletionInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L125</source><parameters>[{"name": "messages", "val": ": list"}, {"name": "frequency_penalty", "val": ": typing.Optional[float] = None"}, {"name": "logit_bias", "val": ": typing.Optional[list[float]] = None"}, {"name": "logprobs", "val": ": typing.Optional[bool] = None"}, {"name": "max_tokens", "val": ": typing.Optional[int] = None"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "n", "val": ": typing.Optional[int] = None"}, {"name": "presence_penalty", "val": ": typing.Optional[float] = None"}, {"name": "response_format", "val": ": typing.Union[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputResponseFormatText, huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputResponseFormatJSONSchema, huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputResponseFormatJSONObject, NoneType] = None"}, {"name": "seed", "val": ": typing.Optional[int] = None"}, {"name": "stop", "val": ": typing.Optional[list[str]] = None"}, {"name": "stream", "val": ": typing.Optional[bool] = None"}, {"name": "stream_options", "val": ": typing.Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputStreamOptions] = None"}, {"name": "temperature", "val": ": typing.Optional[float] = None"}, {"name": "tool_choice", "val": ": typing.Union[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputToolChoiceClass, ForwardRef('ChatCompletionInputToolChoiceEnum'), NoneType] = None"}, {"name": "tool_prompt", "val": ": typing.Optional[str] = None"}, {"name": "tools", "val": ": typing.Optional[list[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputTool]] = None"}, {"name": "top_logprobs", 
"val": ": typing.Optional[int] = None"}, {"name": "top_p", "val": ": typing.Optional[float] = None"}]</parameters></docstring>
Chat Completion Input.
Auto-generated from TGI specs.
For more details, check out
https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionInputFunctionDefinition</name><anchor>huggingface_hub.ChatCompletionInputFunctionDefinition</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L27</source><parameters>[{"name": "name", "val": ": str"}, {"name": "parameters", "val": ": typing.Any"}, {"name": "description", "val": ": typing.Optional[str] = None"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionInputFunctionName</name><anchor>huggingface_hub.ChatCompletionInputFunctionName</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L106</source><parameters>[{"name": "name", "val": ": str"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionInputJSONSchema</name><anchor>huggingface_hub.ChatCompletionInputJSONSchema</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L49</source><parameters>[{"name": "name", "val": ": str"}, {"name": "description", "val": ": typing.Optional[str] = None"}, {"name": "schema", "val": ": typing.Optional[dict[str, object]] = None"}, {"name": "strict", "val": ": typing.Optional[bool] = None"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionInputMessage</name><anchor>huggingface_hub.ChatCompletionInputMessage</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L41</source><parameters>[{"name": "role", "val": ": str"}, {"name": "content", "val": ": typing.Union[list[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputMessageChunk], str, NoneType] = None"}, {"name": "name", "val": ": typing.Optional[str] = None"}, {"name": "tool_calls", "val": ": typing.Optional[list[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputToolCall]] = None"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionInputMessageChunk</name><anchor>huggingface_hub.ChatCompletionInputMessageChunk</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L20</source><parameters>[{"name": "type", "val": ": ChatCompletionInputMessageChunkType"}, {"name": "image_url", "val": ": typing.Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputURL] = None"}, {"name": "text", "val": ": typing.Optional[str] = None"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionInputResponseFormatJSONObject</name><anchor>huggingface_hub.ChatCompletionInputResponseFormatJSONObject</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L84</source><parameters>[{"name": "type", "val": ": typing.Literal['json_object']"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionInputResponseFormatJSONSchema</name><anchor>huggingface_hub.ChatCompletionInputResponseFormatJSONSchema</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L78</source><parameters>[{"name": "type", "val": ": typing.Literal['json_schema']"}, {"name": "json_schema", "val": ": ChatCompletionInputJSONSchema"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionInputResponseFormatText</name><anchor>huggingface_hub.ChatCompletionInputResponseFormatText</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L73</source><parameters>[{"name": "type", "val": ": typing.Literal['text']"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionInputStreamOptions</name><anchor>huggingface_hub.ChatCompletionInputStreamOptions</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L96</source><parameters>[{"name": "include_usage", "val": ": typing.Optional[bool] = None"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionInputTool</name><anchor>huggingface_hub.ChatCompletionInputTool</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L119</source><parameters>[{"name": "function", "val": ": ChatCompletionInputFunctionDefinition"}, {"name": "type", "val": ": str"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionInputToolCall</name><anchor>huggingface_hub.ChatCompletionInputToolCall</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L34</source><parameters>[{"name": "function", "val": ": ChatCompletionInputFunctionDefinition"}, {"name": "id", "val": ": str"}, {"name": "type", "val": ": str"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionInputToolChoiceClass</name><anchor>huggingface_hub.ChatCompletionInputToolChoiceClass</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L111</source><parameters>[{"name": "function", "val": ": ChatCompletionInputFunctionName"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionInputURL</name><anchor>huggingface_hub.ChatCompletionInputURL</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L12</source><parameters>[{"name": "url", "val": ": str"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionOutput</name><anchor>huggingface_hub.ChatCompletionOutput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L263</source><parameters>[{"name": "choices", "val": ": list"}, {"name": "created", "val": ": int"}, {"name": "id", "val": ": str"}, {"name": "model", "val": ": str"}, {"name": "system_fingerprint", "val": ": str"}, {"name": "usage", "val": ": ChatCompletionOutputUsage"}]</parameters></docstring>
Chat Completion Output.
Auto-generated from TGI specs.
For more details, check out
https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionOutputComplete</name><anchor>huggingface_hub.ChatCompletionOutputComplete</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L248</source><parameters>[{"name": "finish_reason", "val": ": str"}, {"name": "index", "val": ": int"}, {"name": "message", "val": ": ChatCompletionOutputMessage"}, {"name": "logprobs", "val": ": typing.Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionOutputLogprobs] = None"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionOutputFunctionDefinition</name><anchor>huggingface_hub.ChatCompletionOutputFunctionDefinition</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L225</source><parameters>[{"name": "arguments", "val": ": str"}, {"name": "name", "val": ": str"}, {"name": "description", "val": ": typing.Optional[str] = None"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionOutputLogprob</name><anchor>huggingface_hub.ChatCompletionOutputLogprob</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L213</source><parameters>[{"name": "logprob", "val": ": float"}, {"name": "token", "val": ": str"}, {"name": "top_logprobs", "val": ": list"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionOutputLogprobs</name><anchor>huggingface_hub.ChatCompletionOutputLogprobs</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L220</source><parameters>[{"name": "content", "val": ": list"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionOutputMessage</name><anchor>huggingface_hub.ChatCompletionOutputMessage</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L239</source><parameters>[{"name": "role", "val": ": str"}, {"name": "content", "val": ": typing.Optional[str] = None"}, {"name": "reasoning", "val": ": typing.Optional[str] = None"}, {"name": "tool_call_id", "val": ": typing.Optional[str] = None"}, {"name": "tool_calls", "val": ": typing.Optional[list[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionOutputToolCall]] = None"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionOutputToolCall</name><anchor>huggingface_hub.ChatCompletionOutputToolCall</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L232</source><parameters>[{"name": "function", "val": ": ChatCompletionOutputFunctionDefinition"}, {"name": "id", "val": ": str"}, {"name": "type", "val": ": str"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionOutputTopLogprob</name><anchor>huggingface_hub.ChatCompletionOutputTopLogprob</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L207</source><parameters>[{"name": "logprob", "val": ": float"}, {"name": "token", "val": ": str"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionOutputUsage</name><anchor>huggingface_hub.ChatCompletionOutputUsage</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L256</source><parameters>[{"name": "completion_tokens", "val": ": int"}, {"name": "prompt_tokens", "val": ": int"}, {"name": "total_tokens", "val": ": int"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionStreamOutput</name><anchor>huggingface_hub.ChatCompletionStreamOutput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L335</source><parameters>[{"name": "choices", "val": ": list"}, {"name": "created", "val": ": int"}, {"name": "id", "val": ": str"}, {"name": "model", "val": ": str"}, {"name": "system_fingerprint", "val": ": str"}, {"name": "usage", "val": ": typing.Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionStreamOutputUsage] = None"}]</parameters></docstring>
Chat Completion Stream Output.
Auto-generated from TGI specs.
For more details, check out
https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionStreamOutputChoice</name><anchor>huggingface_hub.ChatCompletionStreamOutputChoice</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L320</source><parameters>[{"name": "delta", "val": ": ChatCompletionStreamOutputDelta"}, {"name": "index", "val": ": int"}, {"name": "finish_reason", "val": ": typing.Optional[str] = None"}, {"name": "logprobs", "val": ": typing.Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionStreamOutputLogprobs] = None"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionStreamOutputDelta</name><anchor>huggingface_hub.ChatCompletionStreamOutputDelta</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L293</source><parameters>[{"name": "role", "val": ": str"}, {"name": "content", "val": ": typing.Optional[str] = None"}, {"name": "reasoning", "val": ": typing.Optional[str] = None"}, {"name": "tool_call_id", "val": ": typing.Optional[str] = None"}, {"name": "tool_calls", "val": ": typing.Optional[list[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionStreamOutputDeltaToolCall]] = None"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionStreamOutputDeltaToolCall</name><anchor>huggingface_hub.ChatCompletionStreamOutputDeltaToolCall</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L285</source><parameters>[{"name": "function", "val": ": ChatCompletionStreamOutputFunction"}, {"name": "id", "val": ": str"}, {"name": "index", "val": ": int"}, {"name": "type", "val": ": str"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionStreamOutputFunction</name><anchor>huggingface_hub.ChatCompletionStreamOutputFunction</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L279</source><parameters>[{"name": "arguments", "val": ": str"}, {"name": "name", "val": ": typing.Optional[str] = None"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionStreamOutputLogprob</name><anchor>huggingface_hub.ChatCompletionStreamOutputLogprob</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L308</source><parameters>[{"name": "logprob", "val": ": float"}, {"name": "token", "val": ": str"}, {"name": "top_logprobs", "val": ": list"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionStreamOutputLogprobs</name><anchor>huggingface_hub.ChatCompletionStreamOutputLogprobs</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L315</source><parameters>[{"name": "content", "val": ": list"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionStreamOutputTopLogprob</name><anchor>huggingface_hub.ChatCompletionStreamOutputTopLogprob</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L302</source><parameters>[{"name": "logprob", "val": ": float"}, {"name": "token", "val": ": str"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ChatCompletionStreamOutputUsage</name><anchor>huggingface_hub.ChatCompletionStreamOutputUsage</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/chat_completion.py#L328</source><parameters>[{"name": "completion_tokens", "val": ": int"}, {"name": "prompt_tokens", "val": ": int"}, {"name": "total_tokens", "val": ": int"}]</parameters></docstring>


</div>
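Putting the chat-completion types above together, the following is a hedged sketch of the JSON shapes that `ChatCompletionInput` and `ChatCompletionOutput` model: a request carrying a `messages` list plus optional sampling parameters, and a non-streaming response carrying `choices` and token `usage`. Keys follow the dataclass fields listed above; the concrete values are invented for the example:

```python
# Request payload shape modeled by ChatCompletionInput (illustrative values).
request = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say hi."},
    ],
    "max_tokens": 32,
    "temperature": 0.7,
    "stream": False,  # with True, the response shape is ChatCompletionStreamOutput
}

# Non-streaming response shape modeled by ChatCompletionOutput: each choice
# holds an index, a finish_reason, and an assistant message; usage counts tokens.
response = {
    "choices": [
        {
            "index": 0,
            "finish_reason": "stop",
            "message": {"role": "assistant", "content": "Hi!"},
        }
    ],
    "usage": {"prompt_tokens": 14, "completion_tokens": 3, "total_tokens": 17},
}

assert response["choices"][0]["message"]["role"] == "assistant"
assert response["usage"]["total_tokens"] == 17
```

With `stream=True`, the same conversation instead yields a sequence of `ChatCompletionStreamOutput` events whose `delta` fields carry incremental message content.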

## depth_estimation[[huggingface_hub.DepthEstimationInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.DepthEstimationInput</name><anchor>huggingface_hub.DepthEstimationInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/depth_estimation.py#L12</source><parameters>[{"name": "inputs", "val": ": typing.Any"}, {"name": "parameters", "val": ": typing.Optional[dict[str, typing.Any]] = None"}]</parameters></docstring>
Inputs for Depth Estimation inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.DepthEstimationOutput</name><anchor>huggingface_hub.DepthEstimationOutput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/depth_estimation.py#L22</source><parameters>[{"name": "depth", "val": ": typing.Any"}, {"name": "predicted_depth", "val": ": typing.Any"}]</parameters></docstring>
Outputs of inference for the Depth Estimation task

</div>

## document_question_answering[[huggingface_hub.DocumentQuestionAnsweringInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.DocumentQuestionAnsweringInput</name><anchor>huggingface_hub.DocumentQuestionAnsweringInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/document_question_answering.py#L56</source><parameters>[{"name": "inputs", "val": ": DocumentQuestionAnsweringInputData"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.document_question_answering.DocumentQuestionAnsweringParameters] = None"}]</parameters></docstring>
Inputs for Document Question Answering inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.DocumentQuestionAnsweringInputData</name><anchor>huggingface_hub.DocumentQuestionAnsweringInputData</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/document_question_answering.py#L12</source><parameters>[{"name": "image", "val": ": typing.Any"}, {"name": "question", "val": ": str"}]</parameters></docstring>
One (document, question) pair to answer

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.DocumentQuestionAnsweringOutputElement</name><anchor>huggingface_hub.DocumentQuestionAnsweringOutputElement</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/document_question_answering.py#L66</source><parameters>[{"name": "answer", "val": ": str"}, {"name": "end", "val": ": int"}, {"name": "score", "val": ": float"}, {"name": "start", "val": ": int"}]</parameters></docstring>
Outputs of inference for the Document Question Answering task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.DocumentQuestionAnsweringParameters</name><anchor>huggingface_hub.DocumentQuestionAnsweringParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/document_question_answering.py#L22</source><parameters>[{"name": "doc_stride", "val": ": typing.Optional[int] = None"}, {"name": "handle_impossible_answer", "val": ": typing.Optional[bool] = None"}, {"name": "lang", "val": ": typing.Optional[str] = None"}, {"name": "max_answer_len", "val": ": typing.Optional[int] = None"}, {"name": "max_question_len", "val": ": typing.Optional[int] = None"}, {"name": "max_seq_len", "val": ": typing.Optional[int] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}, {"name": "word_boxes", "val": ": typing.Optional[list[typing.Union[list[float], str]]] = None"}]</parameters></docstring>
Additional inference parameters for Document Question Answering

</div>
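The nesting above, an input wrapper holding a (document, question) pair plus optional parameters, can be sketched with plain dicts. The file name, question, and answer values are made up for illustration:

```python
# Mirrors DocumentQuestionAnsweringInput / ...InputData / ...Parameters.
request = {
    "inputs": {
        "image": "invoice.png",            # typed Any (image reference or bytes)
        "question": "What is the total?",
    },
    "parameters": {"top_k": 1, "max_answer_len": 16},  # both Optional[int]
}

# One DocumentQuestionAnsweringOutputElement: answer text plus word indices and score.
element = {"answer": "$12.00", "start": 7, "end": 8, "score": 0.98}
```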

## feature_extraction[[huggingface_hub.FeatureExtractionInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.FeatureExtractionInput</name><anchor>huggingface_hub.FeatureExtractionInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/feature_extraction.py#L15</source><parameters>[{"name": "inputs", "val": ": typing.Union[list[str], str]"}, {"name": "normalize", "val": ": typing.Optional[bool] = None"}, {"name": "prompt_name", "val": ": typing.Optional[str] = None"}, {"name": "truncate", "val": ": typing.Optional[bool] = None"}, {"name": "truncation_direction", "val": ": typing.Optional[ForwardRef('FeatureExtractionInputTruncationDirection')] = None"}]</parameters></docstring>
Feature Extraction Input.
Auto-generated from TEI specs.
For more details, check out
https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tei-import.ts.


</div>
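`inputs` accepts either a single string or a batch, and the truncation fields mirror the TEI request options. A sketch of both forms (the `"Right"` value for `truncation_direction` is an assumption inferred from the `FeatureExtractionInputTruncationDirection` forward reference):

```python
# Single-text request shaped like FeatureExtractionInput.
single = {"inputs": "Hello world", "normalize": True}

# Batched request: `inputs` is Union[list[str], str].
batch = {
    "inputs": ["first sentence", "second sentence"],
    "truncate": True,
    "truncation_direction": "Right",  # assumed enum value
}
```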

## fill_mask[[huggingface_hub.FillMaskInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.FillMaskInput</name><anchor>huggingface_hub.FillMaskInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/fill_mask.py#L26</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.fill_mask.FillMaskParameters] = None"}]</parameters></docstring>
Inputs for Fill Mask inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.FillMaskOutputElement</name><anchor>huggingface_hub.FillMaskOutputElement</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/fill_mask.py#L36</source><parameters>[{"name": "score", "val": ": float"}, {"name": "sequence", "val": ": str"}, {"name": "token", "val": ": int"}, {"name": "token_str", "val": ": typing.Any"}, {"name": "fill_mask_output_token_str", "val": ": typing.Optional[str] = None"}]</parameters></docstring>
Outputs of inference for the Fill Mask task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.FillMaskParameters</name><anchor>huggingface_hub.FillMaskParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/fill_mask.py#L12</source><parameters>[{"name": "targets", "val": ": typing.Optional[list[str]] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}]</parameters></docstring>
Additional inference parameters for Fill Mask

</div>
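A plain-dict sketch of one fill-mask round trip. The `[MASK]` placeholder and the token id are illustrative; the actual mask token depends on the model's tokenizer:

```python
request = {
    "inputs": "Paris is the [MASK] of France.",
    "parameters": {"targets": None, "top_k": 2},  # FillMaskParameters fields
}

# One FillMaskOutputElement: the filled sequence plus token info and score.
element = {
    "score": 0.97,
    "sequence": "Paris is the capital of France.",
    "token": 3007,          # vocabulary id (illustrative)
    "token_str": "capital",
}
```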

## image_classification[[huggingface_hub.ImageClassificationInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageClassificationInput</name><anchor>huggingface_hub.ImageClassificationInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_classification.py#L25</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.image_classification.ImageClassificationParameters] = None"}]</parameters></docstring>
Inputs for Image Classification inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageClassificationOutputElement</name><anchor>huggingface_hub.ImageClassificationOutputElement</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_classification.py#L37</source><parameters>[{"name": "label", "val": ": str"}, {"name": "score", "val": ": float"}]</parameters></docstring>
Outputs of inference for the Image Classification task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageClassificationParameters</name><anchor>huggingface_hub.ImageClassificationParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_classification.py#L15</source><parameters>[{"name": "function_to_apply", "val": ": typing.Optional[ForwardRef('ImageClassificationOutputTransform')] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}]</parameters></docstring>
Additional inference parameters for Image Classification

</div>
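Classification takes an image string (URL or base64) and returns a list of label/score elements. A sketch; the `"softmax"` value for `function_to_apply` is an assumption based on the transformers pipeline option of the same name:

```python
request = {
    "inputs": "https://example.com/cat.jpg",  # inputs: str
    "parameters": {"function_to_apply": "softmax", "top_k": 3},
}

# A list of ImageClassificationOutputElement dicts, one per predicted label.
predictions = [
    {"label": "tabby cat", "score": 0.91},
    {"label": "tiger cat", "score": 0.06},
]
```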

## image_segmentation[[huggingface_hub.ImageSegmentationInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageSegmentationInput</name><anchor>huggingface_hub.ImageSegmentationInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_segmentation.py#L29</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.image_segmentation.ImageSegmentationParameters] = None"}]</parameters></docstring>
Inputs for Image Segmentation inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageSegmentationOutputElement</name><anchor>huggingface_hub.ImageSegmentationOutputElement</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_segmentation.py#L41</source><parameters>[{"name": "label", "val": ": str"}, {"name": "mask", "val": ": str"}, {"name": "score", "val": ": typing.Optional[float] = None"}]</parameters></docstring>
Outputs of inference for the Image Segmentation task: a predicted mask / segment


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageSegmentationParameters</name><anchor>huggingface_hub.ImageSegmentationParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_segmentation.py#L15</source><parameters>[{"name": "mask_threshold", "val": ": typing.Optional[float] = None"}, {"name": "overlap_mask_area_threshold", "val": ": typing.Optional[float] = None"}, {"name": "subtask", "val": ": typing.Optional[ForwardRef('ImageSegmentationSubtask')] = None"}, {"name": "threshold", "val": ": typing.Optional[float] = None"}]</parameters></docstring>
Additional inference parameters for Image Segmentation

</div>

## image_to_image[[huggingface_hub.ImageToImageInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageToImageInput</name><anchor>huggingface_hub.ImageToImageInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_to_image.py#L44</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.image_to_image.ImageToImageParameters] = None"}]</parameters></docstring>
Inputs for Image To Image inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageToImageOutput</name><anchor>huggingface_hub.ImageToImageOutput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_to_image.py#L56</source><parameters>[{"name": "image", "val": ": typing.Any"}]</parameters></docstring>
Outputs of inference for the Image To Image task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageToImageParameters</name><anchor>huggingface_hub.ImageToImageParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_to_image.py#L22</source><parameters>[{"name": "guidance_scale", "val": ": typing.Optional[float] = None"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "prompt", "val": ": typing.Optional[str] = None"}, {"name": "target_size", "val": ": typing.Optional[huggingface_hub.inference._generated.types.image_to_image.ImageToImageTargetSize] = None"}]</parameters></docstring>
Additional inference parameters for Image To Image

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageToImageTargetSize</name><anchor>huggingface_hub.ImageToImageTargetSize</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_to_image.py#L12</source><parameters>[{"name": "height", "val": ": int"}, {"name": "width", "val": ": int"}]</parameters></docstring>
The size in pixels of the output image. This parameter is only supported by some
providers and for specific models. It will be ignored when unsupported.


</div>
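`ImageToImageParameters` nests an `ImageToImageTargetSize` under `target_size`. A plain-dict sketch of the shape (file name and values are illustrative, and as noted above `target_size` is ignored by providers that do not support it):

```python
request = {
    "inputs": "sketch.png",  # inputs: str (image reference)
    "parameters": {
        "prompt": "a watercolor painting",
        "guidance_scale": 7.5,
        "num_inference_steps": 30,
        "target_size": {"height": 512, "width": 512},  # ImageToImageTargetSize
    },
}
```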

## image_to_text[[huggingface_hub.ImageToTextGenerationParameters]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageToTextGenerationParameters</name><anchor>huggingface_hub.ImageToTextGenerationParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_to_text.py#L15</source><parameters>[{"name": "do_sample", "val": ": typing.Optional[bool] = None"}, {"name": "early_stopping", "val": ": typing.Union[bool, ForwardRef('ImageToTextEarlyStoppingEnum'), NoneType] = None"}, {"name": "epsilon_cutoff", "val": ": typing.Optional[float] = None"}, {"name": "eta_cutoff", "val": ": typing.Optional[float] = None"}, {"name": "max_length", "val": ": typing.Optional[int] = None"}, {"name": "max_new_tokens", "val": ": typing.Optional[int] = None"}, {"name": "min_length", "val": ": typing.Optional[int] = None"}, {"name": "min_new_tokens", "val": ": typing.Optional[int] = None"}, {"name": "num_beam_groups", "val": ": typing.Optional[int] = None"}, {"name": "num_beams", "val": ": typing.Optional[int] = None"}, {"name": "penalty_alpha", "val": ": typing.Optional[float] = None"}, {"name": "temperature", "val": ": typing.Optional[float] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}, {"name": "top_p", "val": ": typing.Optional[float] = None"}, {"name": "typical_p", "val": ": typing.Optional[float] = None"}, {"name": "use_cache", "val": ": typing.Optional[bool] = None"}]</parameters></docstring>
Parametrization of the text generation process

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageToTextInput</name><anchor>huggingface_hub.ImageToTextInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_to_text.py#L85</source><parameters>[{"name": "inputs", "val": ": typing.Any"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.image_to_text.ImageToTextParameters] = None"}]</parameters></docstring>
Inputs for Image To Text inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageToTextOutput</name><anchor>huggingface_hub.ImageToTextOutput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_to_text.py#L95</source><parameters>[{"name": "generated_text", "val": ": typing.Any"}, {"name": "image_to_text_output_generated_text", "val": ": typing.Optional[str] = None"}]</parameters></docstring>
Outputs of inference for the Image To Text task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageToTextParameters</name><anchor>huggingface_hub.ImageToTextParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_to_text.py#L75</source><parameters>[{"name": "generation_parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.image_to_text.ImageToTextGenerationParameters] = None"}, {"name": "max_new_tokens", "val": ": typing.Optional[int] = None"}]</parameters></docstring>
Additional inference parameters for Image To Text

</div>
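Image-to-text has two nesting levels: `ImageToTextParameters` holds a `generation_parameters` object of type `ImageToTextGenerationParameters`. A sketch with a small subset of the generation fields (values illustrative):

```python
request = {
    "inputs": "photo.jpg",  # typed Any
    "parameters": {         # ImageToTextParameters
        "max_new_tokens": 30,
        "generation_parameters": {  # ImageToTextGenerationParameters (subset)
            "do_sample": False,
            "num_beams": 4,
            "temperature": 1.0,
        },
    },
}

# ImageToTextOutput carries the caption in generated_text.
output = {"generated_text": "a dog playing in the snow"}
```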

## image_to_video[[huggingface_hub.ImageToVideoInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageToVideoInput</name><anchor>huggingface_hub.ImageToVideoInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_to_video.py#L44</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.image_to_video.ImageToVideoParameters] = None"}]</parameters></docstring>
Inputs for Image To Video inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageToVideoOutput</name><anchor>huggingface_hub.ImageToVideoOutput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_to_video.py#L56</source><parameters>[{"name": "video", "val": ": typing.Any"}]</parameters></docstring>
Outputs of inference for the Image To Video task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageToVideoParameters</name><anchor>huggingface_hub.ImageToVideoParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_to_video.py#L20</source><parameters>[{"name": "guidance_scale", "val": ": typing.Optional[float] = None"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "num_frames", "val": ": typing.Optional[float] = None"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "prompt", "val": ": typing.Optional[str] = None"}, {"name": "seed", "val": ": typing.Optional[int] = None"}, {"name": "target_size", "val": ": typing.Optional[huggingface_hub.inference._generated.types.image_to_video.ImageToVideoTargetSize] = None"}]</parameters></docstring>
Additional inference parameters for Image To Video

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ImageToVideoTargetSize</name><anchor>huggingface_hub.ImageToVideoTargetSize</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/image_to_video.py#L12</source><parameters>[{"name": "height", "val": ": int"}, {"name": "width", "val": ": int"}]</parameters></docstring>
The size in pixels of the output video frames.

</div>

## object_detection[[huggingface_hub.ObjectDetectionBoundingBox]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ObjectDetectionBoundingBox</name><anchor>huggingface_hub.ObjectDetectionBoundingBox</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/object_detection.py#L32</source><parameters>[{"name": "xmax", "val": ": int"}, {"name": "xmin", "val": ": int"}, {"name": "ymax", "val": ": int"}, {"name": "ymin", "val": ": int"}]</parameters></docstring>
The predicted bounding box. Coordinates are relative to the top left corner of the input
image.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ObjectDetectionInput</name><anchor>huggingface_hub.ObjectDetectionInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/object_detection.py#L20</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.object_detection.ObjectDetectionParameters] = None"}]</parameters></docstring>
Inputs for Object Detection inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ObjectDetectionOutputElement</name><anchor>huggingface_hub.ObjectDetectionOutputElement</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/object_detection.py#L48</source><parameters>[{"name": "box", "val": ": ObjectDetectionBoundingBox"}, {"name": "label", "val": ": str"}, {"name": "score", "val": ": float"}]</parameters></docstring>
Outputs of inference for the Object Detection task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ObjectDetectionParameters</name><anchor>huggingface_hub.ObjectDetectionParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/object_detection.py#L12</source><parameters>[{"name": "threshold", "val": ": typing.Optional[float] = None"}]</parameters></docstring>
Additional inference parameters for Object Detection

</div>
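A detection request and one output element, sketched as plain dicts. Per the `ObjectDetectionBoundingBox` docstring, box coordinates are relative to the top-left corner of the input image; all values below are illustrative:

```python
request = {
    "inputs": "street.jpg",
    "parameters": {"threshold": 0.5},  # ObjectDetectionParameters
}

# One ObjectDetectionOutputElement with its nested ObjectDetectionBoundingBox.
detection = {
    "label": "car",
    "score": 0.93,
    "box": {"xmin": 12, "ymin": 40, "xmax": 180, "ymax": 150},
}
```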

## question_answering[[huggingface_hub.QuestionAnsweringInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.QuestionAnsweringInput</name><anchor>huggingface_hub.QuestionAnsweringInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/question_answering.py#L54</source><parameters>[{"name": "inputs", "val": ": QuestionAnsweringInputData"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.question_answering.QuestionAnsweringParameters] = None"}]</parameters></docstring>
Inputs for Question Answering inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.QuestionAnsweringInputData</name><anchor>huggingface_hub.QuestionAnsweringInputData</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/question_answering.py#L12</source><parameters>[{"name": "context", "val": ": str"}, {"name": "question", "val": ": str"}]</parameters></docstring>
One (context, question) pair to answer

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.QuestionAnsweringOutputElement</name><anchor>huggingface_hub.QuestionAnsweringOutputElement</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/question_answering.py#L64</source><parameters>[{"name": "answer", "val": ": str"}, {"name": "end", "val": ": int"}, {"name": "score", "val": ": float"}, {"name": "start", "val": ": int"}]</parameters></docstring>
Outputs of inference for the Question Answering task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.QuestionAnsweringParameters</name><anchor>huggingface_hub.QuestionAnsweringParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/question_answering.py#L22</source><parameters>[{"name": "align_to_words", "val": ": typing.Optional[bool] = None"}, {"name": "doc_stride", "val": ": typing.Optional[int] = None"}, {"name": "handle_impossible_answer", "val": ": typing.Optional[bool] = None"}, {"name": "max_answer_len", "val": ": typing.Optional[int] = None"}, {"name": "max_question_len", "val": ": typing.Optional[int] = None"}, {"name": "max_seq_len", "val": ": typing.Optional[int] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}]</parameters></docstring>
Additional inference parameters for Question Answering

</div>
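A plain-dict sketch of extractive QA over these types: the input wrapper holds a (context, question) pair, and the output element gives the answer with character offsets into the context (example values are made up):

```python
request = {
    "inputs": {  # QuestionAnsweringInputData
        "question": "Where do I live?",
        "context": "My name is Clara and I live in Berkeley.",
    },
    "parameters": {"top_k": 1, "handle_impossible_answer": False},
}

# QuestionAnsweringOutputElement: `start`/`end` index into the context string.
element = {"answer": "Berkeley", "start": 31, "end": 39, "score": 0.97}
```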

## sentence_similarity[[huggingface_hub.SentenceSimilarityInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.SentenceSimilarityInput</name><anchor>huggingface_hub.SentenceSimilarityInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/sentence_similarity.py#L22</source><parameters>[{"name": "inputs", "val": ": SentenceSimilarityInputData"}, {"name": "parameters", "val": ": typing.Optional[dict[str, typing.Any]] = None"}]</parameters></docstring>
Inputs for Sentence Similarity inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.SentenceSimilarityInputData</name><anchor>huggingface_hub.SentenceSimilarityInputData</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/sentence_similarity.py#L12</source><parameters>[{"name": "sentences", "val": ": list"}, {"name": "source_sentence", "val": ": str"}]</parameters></docstring>
One source sentence paired with the list of sentences to score it against


</div>
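A sketch of the input shape as a plain dict. The response type is not defined in this module; the task typically returns one similarity score per entry in `sentences`, but that is an assumption here:

```python
request = {
    "inputs": {  # SentenceSimilarityInputData
        "source_sentence": "A cat sits on the mat.",
        "sentences": [
            "A kitten rests on a rug.",
            "The stock market fell today.",
        ],
    },
    "parameters": None,  # Optional[dict[str, Any]]
}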

## summarization[[huggingface_hub.SummarizationInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.SummarizationInput</name><anchor>huggingface_hub.SummarizationInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/summarization.py#L27</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.summarization.SummarizationParameters] = None"}]</parameters></docstring>
Inputs for Summarization inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.SummarizationOutput</name><anchor>huggingface_hub.SummarizationOutput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/summarization.py#L37</source><parameters>[{"name": "summary_text", "val": ": str"}]</parameters></docstring>
Outputs of inference for the Summarization task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.SummarizationParameters</name><anchor>huggingface_hub.SummarizationParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/summarization.py#L15</source><parameters>[{"name": "clean_up_tokenization_spaces", "val": ": typing.Optional[bool] = None"}, {"name": "generate_parameters", "val": ": typing.Optional[dict[str, typing.Any]] = None"}, {"name": "truncation", "val": ": typing.Optional[ForwardRef('SummarizationTruncationStrategy')] = None"}]</parameters></docstring>
Additional inference parameters for Summarization

</div>
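A plain-dict sketch of a summarization round trip. The `"do_not_truncate"` value for `truncation` is an assumption inferred from the `SummarizationTruncationStrategy` forward reference and the transformers option of the same name:

```python
request = {
    "inputs": "The tower is 324 metres tall, about the same height "
              "as an 81-storey building, and is the tallest structure in Paris.",
    "parameters": {  # SummarizationParameters
        "clean_up_tokenization_spaces": True,
        "truncation": "do_not_truncate",  # assumed enum value
    },
}

# SummarizationOutput has a single required field.
output = {"summary_text": "The tower is the tallest structure in Paris."}
```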

## table_question_answering[[huggingface_hub.TableQuestionAnsweringInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TableQuestionAnsweringInput</name><anchor>huggingface_hub.TableQuestionAnsweringInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/table_question_answering.py#L40</source><parameters>[{"name": "inputs", "val": ": TableQuestionAnsweringInputData"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.table_question_answering.TableQuestionAnsweringParameters] = None"}]</parameters></docstring>
Inputs for Table Question Answering inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TableQuestionAnsweringInputData</name><anchor>huggingface_hub.TableQuestionAnsweringInputData</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/table_question_answering.py#L12</source><parameters>[{"name": "question", "val": ": str"}, {"name": "table", "val": ": dict"}]</parameters></docstring>
One (table, question) pair to answer

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TableQuestionAnsweringOutputElement</name><anchor>huggingface_hub.TableQuestionAnsweringOutputElement</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/table_question_answering.py#L50</source><parameters>[{"name": "answer", "val": ": str"}, {"name": "cells", "val": ": list"}, {"name": "coordinates", "val": ": list"}, {"name": "aggregator", "val": ": typing.Optional[str] = None"}]</parameters></docstring>
Outputs of inference for the Table Question Answering task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TableQuestionAnsweringParameters</name><anchor>huggingface_hub.TableQuestionAnsweringParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/table_question_answering.py#L25</source><parameters>[{"name": "padding", "val": ": typing.Optional[ForwardRef('Padding')] = None"}, {"name": "sequential", "val": ": typing.Optional[bool] = None"}, {"name": "truncation", "val": ": typing.Optional[bool] = None"}]</parameters></docstring>
Additional inference parameters for Table Question Answering

</div>
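`table` is only typed `dict` above; a column-name-to-list-of-cell-strings layout is the usual convention and is assumed in this sketch, as are the `coordinates` ordering and the `"NONE"` aggregator value:

```python
request = {
    "inputs": {  # TableQuestionAnsweringInputData
        "question": "How many employees does the company in Paris have?",
        "table": {  # assumed layout: column name -> cell strings
            "City": ["Paris", "Lyon"],
            "Employees": ["120", "45"],
        },
    },
    "parameters": {"sequential": False, "truncation": False},
}

# TableQuestionAnsweringOutputElement: the answer plus the cells it came from.
element = {
    "answer": "120",
    "cells": ["120"],
    "coordinates": [[0, 1]],  # assumed [row, column] pairs
    "aggregator": "NONE",     # Optional[str]; value illustrative
}
```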

## text2text_generation[[huggingface_hub.Text2TextGenerationInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.Text2TextGenerationInput</name><anchor>huggingface_hub.Text2TextGenerationInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text2text_generation.py#L27</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.text2text_generation.Text2TextGenerationParameters] = None"}]</parameters></docstring>
Inputs for Text2text Generation inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.Text2TextGenerationOutput</name><anchor>huggingface_hub.Text2TextGenerationOutput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text2text_generation.py#L37</source><parameters>[{"name": "generated_text", "val": ": typing.Any"}, {"name": "text2_text_generation_output_generated_text", "val": ": typing.Optional[str] = None"}]</parameters></docstring>
Outputs of inference for the Text2text Generation task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.Text2TextGenerationParameters</name><anchor>huggingface_hub.Text2TextGenerationParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text2text_generation.py#L15</source><parameters>[{"name": "clean_up_tokenization_spaces", "val": ": typing.Optional[bool] = None"}, {"name": "generate_parameters", "val": ": typing.Optional[dict[str, typing.Any]] = None"}, {"name": "truncation", "val": ": typing.Optional[ForwardRef('Text2TextGenerationTruncationStrategy')] = None"}]</parameters></docstring>
Additional inference parameters for Text2text Generation

</div>
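The classes above mirror the JSON body of a text2text-generation request. The sketch below is illustrative and not from the source: field names follow the `Text2TextGenerationInput` and `Text2TextGenerationParameters` signatures above, while the prompt text and the `"longest_first"` truncation value are assumptions.

```python
import json

# Hypothetical request body shaped like Text2TextGenerationInput:
# `inputs` is required; every key under `parameters` is optional.
payload = {
    "inputs": "translate English to German: The house is wonderful.",
    "parameters": {
        "clean_up_tokenization_spaces": True,  # tidy spacing around decoded tokens
        "truncation": "longest_first",         # assumed Text2TextGenerationTruncationStrategy value
    },
}
print(json.dumps(payload, indent=2))
```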

## text_classification[[huggingface_hub.TextClassificationInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextClassificationInput</name><anchor>huggingface_hub.TextClassificationInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_classification.py#L25</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.text_classification.TextClassificationParameters] = None"}]</parameters></docstring>
Inputs for Text Classification inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextClassificationOutputElement</name><anchor>huggingface_hub.TextClassificationOutputElement</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_classification.py#L35</source><parameters>[{"name": "label", "val": ": str"}, {"name": "score", "val": ": float"}]</parameters></docstring>
Outputs of inference for the Text Classification task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextClassificationParameters</name><anchor>huggingface_hub.TextClassificationParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_classification.py#L15</source><parameters>[{"name": "function_to_apply", "val": ": typing.Optional[ForwardRef('TextClassificationOutputTransform')] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}]</parameters></docstring>
Additional inference parameters for Text Classification

</div>
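A text-classification response is a list of `TextClassificationOutputElement`-shaped records, each carrying a `label` and a `score`. The response values below are made up for illustration; only the field names come from the signature above.

```python
# Hypothetical response: one dict per candidate label.
response = [
    {"label": "POSITIVE", "score": 0.998},
    {"label": "NEGATIVE", "score": 0.002},
]

# Pick the highest-scoring label.
best = max(response, key=lambda el: el["score"])
print(best["label"])  # → POSITIVE
```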

## text_generation[[huggingface_hub.TextGenerationInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextGenerationInput</name><anchor>huggingface_hub.TextGenerationInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_generation.py#L76</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.text_generation.TextGenerationInputGenerateParameters] = None"}, {"name": "stream", "val": ": typing.Optional[bool] = None"}]</parameters></docstring>
Text Generation Input.
Auto-generated from TGI specs.
For more details, check out
https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextGenerationInputGenerateParameters</name><anchor>huggingface_hub.TextGenerationInputGenerateParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_generation.py#L25</source><parameters>[{"name": "adapter_id", "val": ": typing.Optional[str] = None"}, {"name": "best_of", "val": ": typing.Optional[int] = None"}, {"name": "decoder_input_details", "val": ": typing.Optional[bool] = None"}, {"name": "details", "val": ": typing.Optional[bool] = None"}, {"name": "do_sample", "val": ": typing.Optional[bool] = None"}, {"name": "frequency_penalty", "val": ": typing.Optional[float] = None"}, {"name": "grammar", "val": ": typing.Optional[huggingface_hub.inference._generated.types.text_generation.TextGenerationInputGrammarType] = None"}, {"name": "max_new_tokens", "val": ": typing.Optional[int] = None"}, {"name": "repetition_penalty", "val": ": typing.Optional[float] = None"}, {"name": "return_full_text", "val": ": typing.Optional[bool] = None"}, {"name": "seed", "val": ": typing.Optional[int] = None"}, {"name": "stop", "val": ": typing.Optional[list[str]] = None"}, {"name": "temperature", "val": ": typing.Optional[float] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}, {"name": "top_n_tokens", "val": ": typing.Optional[int] = None"}, {"name": "top_p", "val": ": typing.Optional[float] = None"}, {"name": "truncate", "val": ": typing.Optional[int] = None"}, {"name": "typical_p", "val": ": typing.Optional[float] = None"}, {"name": "watermark", "val": ": typing.Optional[bool] = None"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextGenerationInputGrammarType</name><anchor>huggingface_hub.TextGenerationInputGrammarType</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_generation.py#L15</source><parameters>[{"name": "type", "val": ": TypeEnum"}, {"name": "value", "val": ": typing.Any"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextGenerationOutput</name><anchor>huggingface_hub.TextGenerationOutput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_generation.py#L129</source><parameters>[{"name": "generated_text", "val": ": str"}, {"name": "details", "val": ": typing.Optional[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputDetails] = None"}]</parameters></docstring>
Text Generation Output.
Auto-generated from TGI specs.
For more details, check out
https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextGenerationOutputBestOfSequence</name><anchor>huggingface_hub.TextGenerationOutputBestOfSequence</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_generation.py#L107</source><parameters>[{"name": "finish_reason", "val": ": TextGenerationOutputFinishReason"}, {"name": "generated_text", "val": ": str"}, {"name": "generated_tokens", "val": ": int"}, {"name": "prefill", "val": ": list"}, {"name": "tokens", "val": ": list"}, {"name": "seed", "val": ": typing.Optional[int] = None"}, {"name": "top_tokens", "val": ": typing.Optional[list[list[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputToken]]] = None"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextGenerationOutputDetails</name><anchor>huggingface_hub.TextGenerationOutputDetails</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_generation.py#L118</source><parameters>[{"name": "finish_reason", "val": ": TextGenerationOutputFinishReason"}, {"name": "generated_tokens", "val": ": int"}, {"name": "prefill", "val": ": list"}, {"name": "tokens", "val": ": list"}, {"name": "best_of_sequences", "val": ": typing.Optional[list[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputBestOfSequence]] = None"}, {"name": "seed", "val": ": typing.Optional[int] = None"}, {"name": "top_tokens", "val": ": typing.Optional[list[list[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputToken]]] = None"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextGenerationOutputPrefillToken</name><anchor>huggingface_hub.TextGenerationOutputPrefillToken</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_generation.py#L92</source><parameters>[{"name": "id", "val": ": int"}, {"name": "logprob", "val": ": float"}, {"name": "text", "val": ": str"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextGenerationOutputToken</name><anchor>huggingface_hub.TextGenerationOutputToken</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_generation.py#L99</source><parameters>[{"name": "id", "val": ": int"}, {"name": "logprob", "val": ": float"}, {"name": "special", "val": ": bool"}, {"name": "text", "val": ": str"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextGenerationStreamOutput</name><anchor>huggingface_hub.TextGenerationStreamOutput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_generation.py#L157</source><parameters>[{"name": "index", "val": ": int"}, {"name": "token", "val": ": TextGenerationStreamOutputToken"}, {"name": "details", "val": ": typing.Optional[huggingface_hub.inference._generated.types.text_generation.TextGenerationStreamOutputStreamDetails] = None"}, {"name": "generated_text", "val": ": typing.Optional[str] = None"}, {"name": "top_tokens", "val": ": typing.Optional[list[huggingface_hub.inference._generated.types.text_generation.TextGenerationStreamOutputToken]] = None"}]</parameters></docstring>
Text Generation Stream Output.
Auto-generated from TGI specs.
For more details, check out
https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextGenerationStreamOutputStreamDetails</name><anchor>huggingface_hub.TextGenerationStreamOutputStreamDetails</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_generation.py#L141</source><parameters>[{"name": "finish_reason", "val": ": TextGenerationOutputFinishReason"}, {"name": "generated_tokens", "val": ": int"}, {"name": "input_length", "val": ": int"}, {"name": "seed", "val": ": typing.Optional[int] = None"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextGenerationStreamOutputToken</name><anchor>huggingface_hub.TextGenerationStreamOutputToken</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_generation.py#L149</source><parameters>[{"name": "id", "val": ": int"}, {"name": "logprob", "val": ": float"}, {"name": "special", "val": ": bool"}, {"name": "text", "val": ": str"}]</parameters></docstring>


</div>
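Putting the pieces together, a `TextGenerationInput` request nests a `TextGenerationInputGenerateParameters` dict under `parameters` and toggles streaming with the top-level `stream` flag. The sketch below is illustrative (values are made up); field names are taken from the signatures above.

```python
import json

# Illustrative TextGenerationInput-shaped request body.
payload = {
    "inputs": "Once upon a time",
    "stream": False,                 # set True to receive TextGenerationStreamOutput chunks
    "parameters": {
        "max_new_tokens": 64,
        "temperature": 0.7,
        "do_sample": True,
        "stop": ["\n\n"],            # stop sequences, a list of strings
        "return_full_text": False,   # return only the completion, not prompt + completion
    },
}
print(json.dumps(payload))
```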

## text_to_audio[[huggingface_hub.TextToAudioGenerationParameters]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextToAudioGenerationParameters</name><anchor>huggingface_hub.TextToAudioGenerationParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_to_audio.py#L15</source><parameters>[{"name": "do_sample", "val": ": typing.Optional[bool] = None"}, {"name": "early_stopping", "val": ": typing.Union[bool, ForwardRef('TextToAudioEarlyStoppingEnum'), NoneType] = None"}, {"name": "epsilon_cutoff", "val": ": typing.Optional[float] = None"}, {"name": "eta_cutoff", "val": ": typing.Optional[float] = None"}, {"name": "max_length", "val": ": typing.Optional[int] = None"}, {"name": "max_new_tokens", "val": ": typing.Optional[int] = None"}, {"name": "min_length", "val": ": typing.Optional[int] = None"}, {"name": "min_new_tokens", "val": ": typing.Optional[int] = None"}, {"name": "num_beam_groups", "val": ": typing.Optional[int] = None"}, {"name": "num_beams", "val": ": typing.Optional[int] = None"}, {"name": "penalty_alpha", "val": ": typing.Optional[float] = None"}, {"name": "temperature", "val": ": typing.Optional[float] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}, {"name": "top_p", "val": ": typing.Optional[float] = None"}, {"name": "typical_p", "val": ": typing.Optional[float] = None"}, {"name": "use_cache", "val": ": typing.Optional[bool] = None"}]</parameters></docstring>
Parametrization of the text generation process

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextToAudioInput</name><anchor>huggingface_hub.TextToAudioInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_to_audio.py#L83</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.text_to_audio.TextToAudioParameters] = None"}]</parameters></docstring>
Inputs for Text To Audio inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextToAudioOutput</name><anchor>huggingface_hub.TextToAudioOutput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_to_audio.py#L93</source><parameters>[{"name": "audio", "val": ": typing.Any"}, {"name": "sampling_rate", "val": ": float"}]</parameters></docstring>
Outputs of inference for the Text To Audio task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextToAudioParameters</name><anchor>huggingface_hub.TextToAudioParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_to_audio.py#L75</source><parameters>[{"name": "generation_parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.text_to_audio.TextToAudioGenerationParameters] = None"}]</parameters></docstring>
Additional inference parameters for Text To Audio

</div>
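Note the extra nesting for this task: `TextToAudioInput.parameters` wraps a `TextToAudioParameters` object whose single field, `generation_parameters`, holds the `TextToAudioGenerationParameters` shown above. A minimal, illustrative sketch (values are assumptions):

```python
# Hypothetical TextToAudioInput-shaped body; note generation settings sit
# one level deeper than in most other tasks, under "generation_parameters".
payload = {
    "inputs": "Hello world",
    "parameters": {
        "generation_parameters": {
            "do_sample": True,
            "max_new_tokens": 256,
            "temperature": 1.0,
        }
    },
}
```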

## text_to_image[[huggingface_hub.TextToImageInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextToImageInput</name><anchor>huggingface_hub.TextToImageInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_to_image.py#L36</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.text_to_image.TextToImageParameters] = None"}]</parameters></docstring>
Inputs for Text To Image inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextToImageOutput</name><anchor>huggingface_hub.TextToImageOutput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_to_image.py#L46</source><parameters>[{"name": "image", "val": ": typing.Any"}]</parameters></docstring>
Outputs of inference for the Text To Image task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextToImageParameters</name><anchor>huggingface_hub.TextToImageParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_to_image.py#L12</source><parameters>[{"name": "guidance_scale", "val": ": typing.Optional[float] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "scheduler", "val": ": typing.Optional[str] = None"}, {"name": "seed", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}]</parameters></docstring>
Additional inference parameters for Text To Image

</div>
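A `TextToImageInput` request pairs a prompt with the optional `TextToImageParameters` above. The prompt and parameter values below are illustrative assumptions; only the keys come from the signature.

```python
# Illustrative TextToImageInput-shaped body.
payload = {
    "inputs": "an astronaut riding a horse on the moon",
    "parameters": {
        "guidance_scale": 7.5,                    # how strongly to follow the prompt
        "height": 512,
        "width": 512,
        "negative_prompt": "blurry, low quality", # a single string for this task
        "num_inference_steps": 30,
        "seed": 42,                               # fix the seed for reproducible outputs
    },
}
```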

## text_to_speech[[huggingface_hub.TextToSpeechGenerationParameters]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextToSpeechGenerationParameters</name><anchor>huggingface_hub.TextToSpeechGenerationParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_to_speech.py#L15</source><parameters>[{"name": "do_sample", "val": ": typing.Optional[bool] = None"}, {"name": "early_stopping", "val": ": typing.Union[bool, ForwardRef('TextToSpeechEarlyStoppingEnum'), NoneType] = None"}, {"name": "epsilon_cutoff", "val": ": typing.Optional[float] = None"}, {"name": "eta_cutoff", "val": ": typing.Optional[float] = None"}, {"name": "max_length", "val": ": typing.Optional[int] = None"}, {"name": "max_new_tokens", "val": ": typing.Optional[int] = None"}, {"name": "min_length", "val": ": typing.Optional[int] = None"}, {"name": "min_new_tokens", "val": ": typing.Optional[int] = None"}, {"name": "num_beam_groups", "val": ": typing.Optional[int] = None"}, {"name": "num_beams", "val": ": typing.Optional[int] = None"}, {"name": "penalty_alpha", "val": ": typing.Optional[float] = None"}, {"name": "temperature", "val": ": typing.Optional[float] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}, {"name": "top_p", "val": ": typing.Optional[float] = None"}, {"name": "typical_p", "val": ": typing.Optional[float] = None"}, {"name": "use_cache", "val": ": typing.Optional[bool] = None"}]</parameters></docstring>
Parametrization of the text generation process

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextToSpeechInput</name><anchor>huggingface_hub.TextToSpeechInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_to_speech.py#L83</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.text_to_speech.TextToSpeechParameters] = None"}]</parameters></docstring>
Inputs for Text To Speech inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextToSpeechOutput</name><anchor>huggingface_hub.TextToSpeechOutput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_to_speech.py#L93</source><parameters>[{"name": "audio", "val": ": typing.Any"}, {"name": "sampling_rate", "val": ": typing.Optional[float] = None"}]</parameters></docstring>
Outputs of inference for the Text To Speech task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextToSpeechParameters</name><anchor>huggingface_hub.TextToSpeechParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_to_speech.py#L75</source><parameters>[{"name": "generation_parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.text_to_speech.TextToSpeechGenerationParameters] = None"}]</parameters></docstring>
Additional inference parameters for Text To Speech

</div>
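On the output side, `TextToSpeechOutput.audio` is typed `Any` (the concrete encoding, e.g. raw bytes or base64, is backend-dependent) and `sampling_rate` is optional. A defensive parsing sketch under those assumptions, with made-up values:

```python
# Hypothetical TextToSpeechOutput-shaped dict; `audio` content and encoding
# vary by backend, and `sampling_rate` may be None or missing entirely.
output = {"audio": b"\x00\x01", "sampling_rate": 16000.0}

rate = output.get("sampling_rate") or 16000.0  # fall back when the field is absent
```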

## text_to_video[[huggingface_hub.TextToVideoInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextToVideoInput</name><anchor>huggingface_hub.TextToVideoInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_to_video.py#L32</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.text_to_video.TextToVideoParameters] = None"}]</parameters></docstring>
Inputs for Text To Video inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextToVideoOutput</name><anchor>huggingface_hub.TextToVideoOutput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_to_video.py#L42</source><parameters>[{"name": "video", "val": ": typing.Any"}]</parameters></docstring>
Outputs of inference for the Text To Video task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TextToVideoParameters</name><anchor>huggingface_hub.TextToVideoParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/text_to_video.py#L12</source><parameters>[{"name": "guidance_scale", "val": ": typing.Optional[float] = None"}, {"name": "negative_prompt", "val": ": typing.Optional[list[str]] = None"}, {"name": "num_frames", "val": ": typing.Optional[float] = None"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "seed", "val": ": typing.Optional[int] = None"}]</parameters></docstring>
Additional inference parameters for Text To Video

</div>
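A `TextToVideoInput` request looks much like text-to-image, with one notable difference visible in the signature above: `negative_prompt` is a *list* of strings here, not a single string. The values below are illustrative assumptions.

```python
# Illustrative TextToVideoInput-shaped body.
payload = {
    "inputs": "a timelapse of clouds over mountains",
    "parameters": {
        "num_frames": 16,
        "num_inference_steps": 25,
        "guidance_scale": 7.0,
        "negative_prompt": ["low quality", "watermark"],  # list[str] for this task
        "seed": 7,
    },
}
```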

## token_classification[[huggingface_hub.TokenClassificationInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TokenClassificationInput</name><anchor>huggingface_hub.TokenClassificationInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/token_classification.py#L27</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.token_classification.TokenClassificationParameters] = None"}]</parameters></docstring>
Inputs for Token Classification inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TokenClassificationOutputElement</name><anchor>huggingface_hub.TokenClassificationOutputElement</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/token_classification.py#L37</source><parameters>[{"name": "end", "val": ": int"}, {"name": "score", "val": ": float"}, {"name": "start", "val": ": int"}, {"name": "word", "val": ": str"}, {"name": "entity", "val": ": typing.Optional[str] = None"}, {"name": "entity_group", "val": ": typing.Optional[str] = None"}]</parameters></docstring>
Outputs of inference for the Token Classification task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TokenClassificationParameters</name><anchor>huggingface_hub.TokenClassificationParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/token_classification.py#L15</source><parameters>[{"name": "aggregation_strategy", "val": ": typing.Optional[ForwardRef('TokenClassificationAggregationStrategy')] = None"}, {"name": "ignore_labels", "val": ": typing.Optional[list[str]] = None"}, {"name": "stride", "val": ": typing.Optional[int] = None"}]</parameters></docstring>
Additional inference parameters for Token Classification

</div>
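Each `TokenClassificationOutputElement` carries character offsets (`start`, `end`) that index into the original input string, alongside the matched `word`. The response below is fabricated to illustrate that relationship; only the field names come from the signature above.

```python
text = "My name is Sarah and I live in London"

# Hypothetical response: TokenClassificationOutputElement-shaped dicts.
response = [
    {"entity_group": "PER", "word": "Sarah", "start": 11, "end": 16, "score": 0.99},
    {"entity_group": "LOC", "word": "London", "start": 31, "end": 37, "score": 0.98},
]

# The offsets slice the original input back out of the text.
for ent in response:
    assert text[ent["start"]:ent["end"]] == ent["word"]
```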

## translation[[huggingface_hub.TranslationInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TranslationInput</name><anchor>huggingface_hub.TranslationInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/translation.py#L35</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.translation.TranslationParameters] = None"}]</parameters></docstring>
Inputs for Translation inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TranslationOutput</name><anchor>huggingface_hub.TranslationOutput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/translation.py#L45</source><parameters>[{"name": "translation_text", "val": ": str"}]</parameters></docstring>
Outputs of inference for the Translation task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.TranslationParameters</name><anchor>huggingface_hub.TranslationParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/translation.py#L15</source><parameters>[{"name": "clean_up_tokenization_spaces", "val": ": typing.Optional[bool] = None"}, {"name": "generate_parameters", "val": ": typing.Optional[dict[str, typing.Any]] = None"}, {"name": "src_lang", "val": ": typing.Optional[str] = None"}, {"name": "tgt_lang", "val": ": typing.Optional[str] = None"}, {"name": "truncation", "val": ": typing.Optional[ForwardRef('TranslationTruncationStrategy')] = None"}]</parameters></docstring>
Additional inference parameters for Translation

</div>
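For multilingual checkpoints, `TranslationParameters` lets you pin the source and target languages. The sketch below is illustrative: the mBART-style language codes are an assumption, since the accepted codes are model-specific.

```python
# Illustrative TranslationInput-shaped body.
payload = {
    "inputs": "Hello, how are you?",
    "parameters": {
        "src_lang": "en_XX",  # language codes depend on the model (mBART-style shown)
        "tgt_lang": "fr_XX",
        "clean_up_tokenization_spaces": True,
    },
}
```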

## video_classification[[huggingface_hub.VideoClassificationInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.VideoClassificationInput</name><anchor>huggingface_hub.VideoClassificationInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/video_classification.py#L29</source><parameters>[{"name": "inputs", "val": ": typing.Any"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.video_classification.VideoClassificationParameters] = None"}]</parameters></docstring>
Inputs for Video Classification inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.VideoClassificationOutputElement</name><anchor>huggingface_hub.VideoClassificationOutputElement</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/video_classification.py#L39</source><parameters>[{"name": "label", "val": ": str"}, {"name": "score", "val": ": float"}]</parameters></docstring>
Outputs of inference for the Video Classification task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.VideoClassificationParameters</name><anchor>huggingface_hub.VideoClassificationParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/video_classification.py#L15</source><parameters>[{"name": "frame_sampling_rate", "val": ": typing.Optional[int] = None"}, {"name": "function_to_apply", "val": ": typing.Optional[ForwardRef('VideoClassificationOutputTransform')] = None"}, {"name": "num_frames", "val": ": typing.Optional[int] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}]</parameters></docstring>
Additional inference parameters for Video Classification

</div>
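For video classification, `inputs` is typed `Any` (raw bytes or a reference to the video, depending on the backend), and the parameters control how frames are sampled. The URL and values below are placeholders, not from the source.

```python
# Illustrative VideoClassificationInput-shaped body.
payload = {
    "inputs": "https://example.com/clip.mp4",  # `inputs` is typed Any; format is backend-dependent
    "parameters": {
        "frame_sampling_rate": 4,  # take every 4th frame
        "num_frames": 16,          # total frames fed to the model
        "top_k": 3,                # return the 3 highest-scoring labels
    },
}
```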

## visual_question_answering[[huggingface_hub.VisualQuestionAnsweringInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.VisualQuestionAnsweringInput</name><anchor>huggingface_hub.VisualQuestionAnsweringInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/visual_question_answering.py#L33</source><parameters>[{"name": "inputs", "val": ": VisualQuestionAnsweringInputData"}, {"name": "parameters", "val": ": typing.Optional[huggingface_hub.inference._generated.types.visual_question_answering.VisualQuestionAnsweringParameters] = None"}]</parameters></docstring>
Inputs for Visual Question Answering inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.VisualQuestionAnsweringInputData</name><anchor>huggingface_hub.VisualQuestionAnsweringInputData</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/visual_question_answering.py#L12</source><parameters>[{"name": "image", "val": ": typing.Any"}, {"name": "question", "val": ": str"}]</parameters></docstring>
One (image, question) pair to answer

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.VisualQuestionAnsweringOutputElement</name><anchor>huggingface_hub.VisualQuestionAnsweringOutputElement</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/visual_question_answering.py#L43</source><parameters>[{"name": "score", "val": ": float"}, {"name": "answer", "val": ": typing.Optional[str] = None"}]</parameters></docstring>
Outputs of inference for the Visual Question Answering task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.VisualQuestionAnsweringParameters</name><anchor>huggingface_hub.VisualQuestionAnsweringParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/visual_question_answering.py#L22</source><parameters>[{"name": "top_k", "val": ": typing.Optional[int] = None"}]</parameters></docstring>
Additional inference parameters for Visual Question Answering

</div>
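Unlike most tasks, `VisualQuestionAnsweringInput.inputs` is itself a structured object: a `VisualQuestionAnsweringInputData` pair of `image` and `question`. The sketch below is illustrative; the image placeholder string stands in for whatever encoding (`image` is typed `Any`) the backend expects.

```python
# Illustrative VisualQuestionAnsweringInput-shaped body with nested inputs.
payload = {
    "inputs": {
        "image": "<base64-encoded image bytes>",  # placeholder; `image` is typed Any
        "question": "How many cats are in the picture?",
    },
    "parameters": {"top_k": 3},
}
```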

## zero_shot_classification[[huggingface_hub.ZeroShotClassificationInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ZeroShotClassificationInput</name><anchor>huggingface_hub.ZeroShotClassificationInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/zero_shot_classification.py#L29</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": ZeroShotClassificationParameters"}]</parameters></docstring>
Inputs for Zero Shot Classification inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ZeroShotClassificationOutputElement</name><anchor>huggingface_hub.ZeroShotClassificationOutputElement</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/zero_shot_classification.py#L39</source><parameters>[{"name": "label", "val": ": str"}, {"name": "score", "val": ": float"}]</parameters></docstring>
Outputs of inference for the Zero Shot Classification task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ZeroShotClassificationParameters</name><anchor>huggingface_hub.ZeroShotClassificationParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/zero_shot_classification.py#L12</source><parameters>[{"name": "candidate_labels", "val": ": list"}, {"name": "hypothesis_template", "val": ": typing.Optional[str] = None"}, {"name": "multi_label", "val": ": typing.Optional[bool] = None"}]</parameters></docstring>
Additional inference parameters for Zero Shot Classification

</div>

## zero_shot_image_classification[[huggingface_hub.ZeroShotImageClassificationInput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ZeroShotImageClassificationInput</name><anchor>huggingface_hub.ZeroShotImageClassificationInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/zero_shot_image_classification.py#L24</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": ZeroShotImageClassificationParameters"}]</parameters></docstring>
Inputs for Zero Shot Image Classification inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ZeroShotImageClassificationOutputElement</name><anchor>huggingface_hub.ZeroShotImageClassificationOutputElement</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/zero_shot_image_classification.py#L34</source><parameters>[{"name": "label", "val": ": str"}, {"name": "score", "val": ": float"}]</parameters></docstring>
Outputs of inference for the Zero Shot Image Classification task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ZeroShotImageClassificationParameters</name><anchor>huggingface_hub.ZeroShotImageClassificationParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/zero_shot_image_classification.py#L12</source><parameters>[{"name": "candidate_labels", "val": ": list"}, {"name": "hypothesis_template", "val": ": typing.Optional[str] = None"}]</parameters></docstring>
Additional inference parameters for Zero Shot Image Classification

</div>

## zero_shot_object_detection[[huggingface_hub.ZeroShotObjectDetectionBoundingBox]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ZeroShotObjectDetectionBoundingBox</name><anchor>huggingface_hub.ZeroShotObjectDetectionBoundingBox</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/zero_shot_object_detection.py#L28</source><parameters>[{"name": "xmax", "val": ": int"}, {"name": "xmin", "val": ": int"}, {"name": "ymax", "val": ": int"}, {"name": "ymin", "val": ": int"}]</parameters></docstring>
The predicted bounding box. Coordinates are relative to the top left corner of the input
image.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ZeroShotObjectDetectionInput</name><anchor>huggingface_hub.ZeroShotObjectDetectionInput</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/zero_shot_object_detection.py#L18</source><parameters>[{"name": "inputs", "val": ": str"}, {"name": "parameters", "val": ": ZeroShotObjectDetectionParameters"}]</parameters></docstring>
Inputs for Zero Shot Object Detection inference

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ZeroShotObjectDetectionOutputElement</name><anchor>huggingface_hub.ZeroShotObjectDetectionOutputElement</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/zero_shot_object_detection.py#L40</source><parameters>[{"name": "box", "val": ": ZeroShotObjectDetectionBoundingBox"}, {"name": "label", "val": ": str"}, {"name": "score", "val": ": float"}]</parameters></docstring>
Outputs of inference for the Zero Shot Object Detection task

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ZeroShotObjectDetectionParameters</name><anchor>huggingface_hub.ZeroShotObjectDetectionParameters</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/types/zero_shot_object_detection.py#L10</source><parameters>[{"name": "candidate_labels", "val": ": list"}]</parameters></docstring>
Additional inference parameters for Zero Shot Object Detection

</div>

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/package_reference/inference_types.md" />

### Utilities[[utilities]]
https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/utilities.md

# Utilities[[utilities]]

## Configure logging[[huggingface_hub.utils.logging.get_verbosity]]

The `huggingface_hub` package exposes a `logging` utility to control the logging level of the package itself.
You can import it as such:

```py
from huggingface_hub import logging
```

Then, you may set the verbosity to update the amount of logs you'll see:

```python
from huggingface_hub import logging

logging.set_verbosity_error()
logging.set_verbosity_warning()
logging.set_verbosity_info()
logging.set_verbosity_debug()

logging.set_verbosity(...)
```

The levels should be understood as follows:

- `error`: only show critical logs about usage which may result in an error or unexpected behavior.
- `warning`: show logs that aren't critical but usage may result in unintended behavior. Additionally, important informative logs may be shown.
- `info`: show most logs, including some verbose logging regarding what is happening under the hood. If something is behaving in an unexpected manner, we suggest switching the verbosity level to `info` in order to get more information.
- `debug`: show all logs, including some internal logs which may be used to track exactly what's happening under the hood.
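For instance, a minimal sketch of pairing `set_verbosity_*()` with `get_verbosity()` to inspect the level currently in effect:

```python
from huggingface_hub import logging

# Switch to debug verbosity, then read the current level back.
logging.set_verbosity_debug()
assert logging.get_verbosity() == logging.DEBUG

# Restore the default level (warnings and errors only).
logging.set_verbosity_warning()
assert logging.get_verbosity() == logging.WARNING
```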

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.utils.logging.get_verbosity</name><anchor>huggingface_hub.utils.logging.get_verbosity</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/logging.py#L105</source><parameters>[]</parameters><retdesc>Logging level, e.g., `huggingface_hub.logging.DEBUG` and
`huggingface_hub.logging.INFO`.</retdesc></docstring>
Return the current level for the HuggingFace Hub's root logger.



> [!TIP]
> HuggingFace Hub has the following logging levels:
>
> - `huggingface_hub.logging.CRITICAL`, `huggingface_hub.logging.FATAL`
> - `huggingface_hub.logging.ERROR`
> - `huggingface_hub.logging.WARNING`, `huggingface_hub.logging.WARN`
> - `huggingface_hub.logging.INFO`
> - `huggingface_hub.logging.DEBUG`


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.utils.logging.set_verbosity</name><anchor>huggingface_hub.utils.logging.set_verbosity</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/logging.py#L124</source><parameters>[{"name": "verbosity", "val": ": int"}]</parameters><paramsdesc>- **verbosity** (`int`) --
  Logging level, e.g., `huggingface_hub.logging.DEBUG` and
  `huggingface_hub.logging.INFO`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the level for the HuggingFace Hub's root logger.




</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.utils.logging.set_verbosity_info</name><anchor>huggingface_hub.utils.logging.set_verbosity_info</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/logging.py#L136</source><parameters>[]</parameters></docstring>

Sets the verbosity to `logging.INFO`.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.utils.logging.set_verbosity_debug</name><anchor>huggingface_hub.utils.logging.set_verbosity_debug</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/logging.py#L150</source><parameters>[]</parameters></docstring>

Sets the verbosity to `logging.DEBUG`.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.utils.logging.set_verbosity_warning</name><anchor>huggingface_hub.utils.logging.set_verbosity_warning</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/logging.py#L143</source><parameters>[]</parameters></docstring>

Sets the verbosity to `logging.WARNING`.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.utils.logging.set_verbosity_error</name><anchor>huggingface_hub.utils.logging.set_verbosity_error</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/logging.py#L157</source><parameters>[]</parameters></docstring>

Sets the verbosity to `logging.ERROR`.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.utils.logging.disable_propagation</name><anchor>huggingface_hub.utils.logging.disable_propagation</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/logging.py#L164</source><parameters>[]</parameters></docstring>

Disable propagation of the library log outputs. Note that log propagation is
disabled by default.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.utils.logging.enable_propagation</name><anchor>huggingface_hub.utils.logging.enable_propagation</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/logging.py#L172</source><parameters>[]</parameters></docstring>

Enable propagation of the library log outputs. Please disable the
HuggingFace Hub's default handler to prevent double logging if the root
logger has been configured.


</div>
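
As a sketch of how propagation interacts with your own logging setup (assuming the library's root logger is named `huggingface_hub`):

```python
import logging as pylogging  # standard library logging

from huggingface_hub import logging

# Configure your application's root logger, then let huggingface_hub
# records propagate up to it.
pylogging.basicConfig(format="%(name)s - %(levelname)s - %(message)s")
logging.enable_propagation()

hub_logger = pylogging.getLogger("huggingface_hub")
assert hub_logger.propagate  # records now reach the root logger's handlers
# Note: without disabling the library's default handler, messages may be
# emitted twice (once per handler).
```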

### Repo-specific helper methods[[huggingface_hub.utils.logging.get_logger]]

The methods exposed below are relevant when modifying modules from the `huggingface_hub` library itself. Using these shouldn't be necessary if you use `huggingface_hub` without modifying those modules.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.utils.logging.get_logger</name><anchor>huggingface_hub.utils.logging.get_logger</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/logging.py#L80</source><parameters>[{"name": "name", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **name** (`str`, *optional*) --
  The name of the logger to get, usually the filename</paramsdesc><paramgroups>0</paramgroups></docstring>

Returns a logger with the specified name. This function is not supposed
to be directly accessed by library users.



<ExampleCodeBlock anchor="huggingface_hub.utils.logging.get_logger.example">

Example:

```python
>>> from huggingface_hub import logging

>>> logger = logging.get_logger(__file__)
>>> logger.setLevel(logging.INFO)
```

</ExampleCodeBlock>


</div>

## Configure progress bars[[configure-progress-bars]]

Progress bars are a useful tool to display information to the user while a long-running task is being executed (e.g. when downloading or uploading files). `huggingface_hub` exposes a `tqdm` wrapper to display progress bars in a consistent way across the library.

By default, progress bars are enabled. You can disable them globally by setting the `HF_HUB_DISABLE_PROGRESS_BARS` environment variable. You can also enable/disable them individually using [enable_progress_bars()](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.utils.enable_progress_bars) and [disable_progress_bars()](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.utils.disable_progress_bars). If set, the environment variable takes priority over the helpers.


```py
>>> from huggingface_hub import snapshot_download
>>> from huggingface_hub.utils import are_progress_bars_disabled, disable_progress_bars, enable_progress_bars

>>> # Disable progress bars globally
>>> disable_progress_bars()

>>> # Progress bar will not be shown!
>>> snapshot_download("gpt2")

>>> are_progress_bars_disabled()
True

>>> # Re-enable progress bars
>>> enable_progress_bars()
```
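
Progress bars can also be toggled per group, as the `name` parameter in the helpers below suggests. A minimal sketch, using an arbitrary group name (`foo.bar`) for illustration; the environment variable, if set, still takes precedence:

```python
from huggingface_hub.utils import (
    are_progress_bars_disabled,
    disable_progress_bars,
    enable_progress_bars,
)

enable_progress_bars()            # start from a known global state: enabled
disable_progress_bars("foo.bar")  # disable only the "foo.bar" group

assert are_progress_bars_disabled("foo.bar")
assert not are_progress_bars_disabled()  # global setting is still enabled

enable_progress_bars("foo.bar")   # restore the group
```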

### are_progress_bars_disabled[[huggingface_hub.utils.are_progress_bars_disabled]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.utils.are_progress_bars_disabled</name><anchor>huggingface_hub.utils.are_progress_bars_disabled</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/tqdm.py#L172</source><parameters>[{"name": "name", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **name** (`str`, *optional*) --
  The group name to check; if None, checks the global setting.</paramsdesc><paramgroups>0</paramgroups><rettype>`bool`</rettype><retdesc>True if progress bars are disabled, False otherwise.</retdesc></docstring>

Check if progress bars are disabled globally or for a specific group.

This function returns whether progress bars are disabled for a given group or globally.
It checks the `HF_HUB_DISABLE_PROGRESS_BARS` environment variable first, then the programmatic
settings.








</div>

### disable_progress_bars[[huggingface_hub.utils.disable_progress_bars]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.utils.disable_progress_bars</name><anchor>huggingface_hub.utils.disable_progress_bars</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/tqdm.py#L108</source><parameters>[{"name": "name", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **name** (`str`, *optional*) --
  The name of the group for which to disable the progress bars. If None,
  progress bars are disabled globally.</paramsdesc><paramgroups>0</paramgroups><raises>- ``Warning`` -- If the environment variable precludes changes.</raises><raisederrors>``Warning``</raisederrors></docstring>

Disable progress bars either globally or for a specified group.

This function updates the state of progress bars based on a group name.
If no group name is provided, all progress bars are disabled. The operation
respects the `HF_HUB_DISABLE_PROGRESS_BARS` environment variable's setting.








</div>

### enable_progress_bars[[huggingface_hub.utils.enable_progress_bars]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.utils.enable_progress_bars</name><anchor>huggingface_hub.utils.enable_progress_bars</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/tqdm.py#L140</source><parameters>[{"name": "name", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **name** (`str`, *optional*) --
  The name of the group for which to enable the progress bars. If None,
  progress bars are enabled globally.</paramsdesc><paramgroups>0</paramgroups><raises>- ``Warning`` -- If the environment variable precludes changes.</raises><raisederrors>``Warning``</raisederrors></docstring>

Enable progress bars either globally or for a specified group.

This function sets the progress bars to enabled for the specified group or globally
if no group is specified. The operation is subject to the `HF_HUB_DISABLE_PROGRESS_BARS`
environment setting.








</div>

## Handle HTTP errors[[handle-http-errors]]

`huggingface_hub` defines its own HTTP errors to refine the `HTTPError` raised by `requests` with additional information sent back by the server.

### Raise for status[[huggingface_hub.utils.hf_raise_for_status]][[huggingface_hub.hf_raise_for_status]]

[hf_raise_for_status()](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.hf_raise_for_status) is meant to be the central method to "raise for status" on any request made to the Hub. It wraps the base `requests.raise_for_status` to provide additional information. Any `HTTPError` thrown is converted into an `HfHubHTTPError`.

```py
import requests
from huggingface_hub.utils import hf_raise_for_status, HfHubHTTPError

response = requests.post(...)
try:
    hf_raise_for_status(response)
except HfHubHTTPError as e:
    print(str(e)) # formatted message
    e.request_id, e.server_message # details returned by server

    # Complete the error message with additional information once it's raised
    e.append_to_message("\n`create_commit` expects the repository to exist.")
    raise
```

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.hf_raise_for_status</name><anchor>huggingface_hub.hf_raise_for_status</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_http.py#L516</source><parameters>[{"name": "response", "val": ": Response"}, {"name": "endpoint_name", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **response** (`Response`) --
  Response from the server.
- **endpoint_name** (`str`, *optional*) --
  Name of the endpoint that has been called. If provided, the error message will be more complete.</paramsdesc><paramgroups>0</paramgroups></docstring>

Internal version of `response.raise_for_status()` that will refine a potential HTTPError.
Raised exception will be an instance of [HfHubHTTPError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.HfHubHTTPError).

This helper is meant to be the unique method to raise_for_status when making a call to the Hugging Face Hub.



> [!WARNING]
> Raises when the request has failed:
>
>     - [RepositoryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RepositoryNotFoundError)
>         If the repository to download from cannot be found. This may be because it
>         doesn't exist, because `repo_type` is not set correctly, or because the repo
>         is `private` and you do not have access.
>     - [GatedRepoError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.GatedRepoError)
>         If the repository exists but is gated and the user is not on the authorized
>         list.
>     - [RevisionNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.RevisionNotFoundError)
>         If the repository exists but the revision couldn't be found.
>     - [EntryNotFoundError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.EntryNotFoundError)
>         If the repository exists but the entry (e.g. the requested file) couldn't be
>         found.
>     - [BadRequestError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.BadRequestError)
>         If request failed with a HTTP 400 BadRequest error.
>     - [HfHubHTTPError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.HfHubHTTPError)
>         If request failed for a reason not listed above.


</div>

### HTTP errors[[http-errors]]

Here is a list of the HTTP errors thrown in `huggingface_hub`.

#### HfHubHTTPError[[huggingface_hub.errors.HfHubHTTPError]]

`HfHubHTTPError` is the parent class of any HF Hub HTTP error. It takes care of parsing the server response and formatting the error message to provide as much information as possible to the user.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.errors.HfHubHTTPError</name><anchor>huggingface_hub.errors.HfHubHTTPError</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L40</source><parameters>[{"name": "message", "val": ": str"}, {"name": "response", "val": ": Response"}, {"name": "server_message", "val": ": typing.Optional[str] = None"}]</parameters></docstring>

HTTPError to inherit from for any custom HTTP Error raised in HF Hub.

Any HTTPError is converted at least into a `HfHubHTTPError`. If some information is
sent back by the server, it will be added to the error message.

Added details:
- Request ID from the "X-Request-Id" header if it exists, falling back to the "X-Amzn-Trace-Id" header if present.
- Server error message from the "X-Error-Message" header.
- Server error message if one can be found in the response body.

<ExampleCodeBlock anchor="huggingface_hub.errors.HfHubHTTPError.example">

Example:
```py
    import httpx
    from huggingface_hub.utils import get_session, hf_raise_for_status, HfHubHTTPError

    response = get_session().post(...)
    try:
        hf_raise_for_status(response)
    except HfHubHTTPError as e:
        print(str(e)) # formatted message
        e.request_id, e.server_message # details returned by server

        # Complete the error message with additional information once it's raised
        e.append_to_message("\n`create_commit` expects the repository to exist.")
        raise
```

</ExampleCodeBlock>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>append_to_message</name><anchor>huggingface_hub.errors.HfHubHTTPError.append_to_message</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L83</source><parameters>[{"name": "additional_message", "val": ": str"}]</parameters></docstring>
Append additional information to the `HfHubHTTPError` initial message.

</div></div>

#### RepositoryNotFoundError[[huggingface_hub.errors.RepositoryNotFoundError]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.errors.RepositoryNotFoundError</name><anchor>huggingface_hub.errors.RepositoryNotFoundError</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L177</source><parameters>[{"name": "message", "val": ": str"}, {"name": "response", "val": ": Response"}, {"name": "server_message", "val": ": typing.Optional[str] = None"}]</parameters></docstring>

Raised when trying to access a hf.co URL with an invalid repository name, or
with a private repo name the user does not have access to.

<ExampleCodeBlock anchor="huggingface_hub.errors.RepositoryNotFoundError.example">

Example:

```py
>>> from huggingface_hub import model_info
>>> model_info("<non_existent_repository>")
(...)
huggingface_hub.errors.RepositoryNotFoundError: 401 Client Error. (Request ID: PvMw_VjBMjVdMz53WKIzP)

Repository Not Found for url: https://huggingface.co/api/models/%3Cnon_existent_repository%3E.
Please make sure you specified the correct `repo_id` and `repo_type`.
If the repo is private, make sure you are authenticated.
Invalid username or password.
```

</ExampleCodeBlock>


</div>

#### GatedRepoError[[huggingface_hub.errors.GatedRepoError]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.errors.GatedRepoError</name><anchor>huggingface_hub.errors.GatedRepoError</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L198</source><parameters>[{"name": "message", "val": ": str"}, {"name": "response", "val": ": Response"}, {"name": "server_message", "val": ": typing.Optional[str] = None"}]</parameters></docstring>

Raised when trying to access a gated repository for which the user is not on the
authorized list.

Note: derives from `RepositoryNotFoundError` to ensure backward compatibility.

<ExampleCodeBlock anchor="huggingface_hub.errors.GatedRepoError.example">

Example:

```py
>>> from huggingface_hub import model_info
>>> model_info("<gated_repository>")
(...)
huggingface_hub.errors.GatedRepoError: 403 Client Error. (Request ID: ViT1Bf7O_026LGSQuVqfa)

Cannot access gated repo for url https://huggingface.co/api/models/ardent-figment/gated-model.
Access to model ardent-figment/gated-model is restricted and you are not in the authorized list.
Visit https://huggingface.co/ardent-figment/gated-model to ask for access.
```

</ExampleCodeBlock>


</div>

#### RevisionNotFoundError[[huggingface_hub.errors.RevisionNotFoundError]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.errors.RevisionNotFoundError</name><anchor>huggingface_hub.errors.RevisionNotFoundError</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L241</source><parameters>[{"name": "message", "val": ": str"}, {"name": "response", "val": ": Response"}, {"name": "server_message", "val": ": typing.Optional[str] = None"}]</parameters></docstring>

Raised when trying to access a hf.co URL with a valid repository but an invalid
revision.

<ExampleCodeBlock anchor="huggingface_hub.errors.RevisionNotFoundError.example">

Example:

```py
>>> from huggingface_hub import hf_hub_download
>>> hf_hub_download('bert-base-cased', 'config.json', revision='<non-existent-revision>')
(...)
huggingface_hub.errors.RevisionNotFoundError: 404 Client Error. (Request ID: Mwhe_c3Kt650GcdKEFomX)

Revision Not Found for url: https://huggingface.co/bert-base-cased/resolve/%3Cnon-existent-revision%3E/config.json.
```

</ExampleCodeBlock>


</div>

#### BadRequestError[[huggingface_hub.errors.BadRequestError]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.errors.BadRequestError</name><anchor>huggingface_hub.errors.BadRequestError</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L316</source><parameters>[{"name": "message", "val": ": str"}, {"name": "response", "val": ": Response"}, {"name": "server_message", "val": ": typing.Optional[str] = None"}]</parameters></docstring>

Raised by `hf_raise_for_status` when the server returns a HTTP 400 error.

<ExampleCodeBlock anchor="huggingface_hub.errors.BadRequestError.example">

Example:

```py
>>> resp = httpx.post("hf.co/api/check", ...)
>>> hf_raise_for_status(resp, endpoint_name="check")
huggingface_hub.errors.BadRequestError: Bad request for check endpoint: {details} (Request ID: XXX)
```

</ExampleCodeBlock>


</div>

#### EntryNotFoundError[[huggingface_hub.errors.EntryNotFoundError]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.errors.EntryNotFoundError</name><anchor>huggingface_hub.errors.EntryNotFoundError</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L260</source><parameters>""</parameters></docstring>

Raised when entry not found, either locally or remotely.

<ExampleCodeBlock anchor="huggingface_hub.errors.EntryNotFoundError.example">

Example:

```py
>>> from huggingface_hub import hf_hub_download
>>> hf_hub_download('bert-base-cased', '<non-existent-file>')
(...)
huggingface_hub.errors.RemoteEntryNotFoundError (...)
>>> hf_hub_download('bert-base-cased', '<non-existent-file>', local_files_only=True)
(...)
huggingface_hub.errors.LocalEntryNotFoundError (...)
```

</ExampleCodeBlock>


</div>

#### RemoteEntryNotFoundError[[huggingface_hub.errors.RemoteEntryNotFoundError]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.errors.RemoteEntryNotFoundError</name><anchor>huggingface_hub.errors.RemoteEntryNotFoundError</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L278</source><parameters>[{"name": "message", "val": ": str"}, {"name": "response", "val": ": Response"}, {"name": "server_message", "val": ": typing.Optional[str] = None"}]</parameters></docstring>

Raised when trying to access a hf.co URL with a valid repository and revision
but an invalid filename.

<ExampleCodeBlock anchor="huggingface_hub.errors.RemoteEntryNotFoundError.example">

Example:

```py
>>> from huggingface_hub import hf_hub_download
>>> hf_hub_download('bert-base-cased', '<non-existent-file>')
(...)
huggingface_hub.errors.EntryNotFoundError: 404 Client Error. (Request ID: 53pNl6M0MxsnG5Sw8JA6x)

Entry Not Found for url: https://huggingface.co/bert-base-cased/resolve/main/%3Cnon-existent-file%3E.
```

</ExampleCodeBlock>


</div>

#### LocalEntryNotFoundError[[huggingface_hub.errors.LocalEntryNotFoundError]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.errors.LocalEntryNotFoundError</name><anchor>huggingface_hub.errors.LocalEntryNotFoundError</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L296</source><parameters>[{"name": "message", "val": ": str"}]</parameters></docstring>

Raised when trying to access a file or snapshot that is not on the disk when network is
disabled or unavailable (connection issue). The entry may exist on the Hub.

<ExampleCodeBlock anchor="huggingface_hub.errors.LocalEntryNotFoundError.example">

Example:

```py
>>> from huggingface_hub import hf_hub_download
>>> hf_hub_download('bert-base-cased', '<non-cached-file>',  local_files_only=True)
(...)
huggingface_hub.errors.LocalEntryNotFoundError: Cannot find the requested files in the disk cache and outgoing traffic has been disabled. To enable hf.co look-ups and downloads online, set 'local_files_only' to False.
```

</ExampleCodeBlock>


</div>

#### OfflineModeIsEnabled[[huggingface_hub.errors.OfflineModeIsEnabled]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.errors.OfflineModeIsEnabled</name><anchor>huggingface_hub.errors.OfflineModeIsEnabled</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L36</source><parameters>""</parameters></docstring>
Raised when a request is made but `HF_HUB_OFFLINE=1` is set as environment variable.

</div>

## Telemetry[[huggingface_hub.utils.send_telemetry]]

`huggingface_hub`에는 원격 측정 데이터를 보내는 도우미가 포함되어 있습니다. 이 정보는 문제를 디버깅하고 새로운 기능을 우선적으로 처리하는 데 도움이 됩니다. 사용자는 `HF_HUB_DISABLE_TELEMETRY=1` 환경 변수를 설정하여 언제든지 원격 측정 수집을 비활성화할 수 있습니다. 또한 오프라인 모드에서도 (즉, `HF_HUB_OFFLINE=1`로 설정된 경우) 원격 측정이 비활성화됩니다.
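예를 들어 다음은 라이브러리를 가져오기 전에 환경 변수를 설정하여 원격 측정을 비활성화하는 간단한 스케치입니다:

```python
import os

# huggingface_hub를 가져오기 전에 환경 변수를 설정하면
# 해당 프로세스에서 원격 측정 전송이 비활성화됩니다.
os.environ["HF_HUB_DISABLE_TELEMETRY"] = "1"

# 오프라인 모드를 설정해도 원격 측정이 비활성화됩니다.
# os.environ["HF_HUB_OFFLINE"] = "1"
```

셸에서 `export HF_HUB_DISABLE_TELEMETRY=1`과 같이 설정해도 동일한 효과가 있습니다.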

서드 파티 라이브러리의 유지 관리자라면 `send_telemetry`를 호출하는 것만으로 원격 측정 데이터를 보낼 수 있습니다. 사용자에게 미치는 영향을 최소화하기 위해 데이터는 별도의 스레드에서 전송됩니다.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.utils.send_telemetry</name><anchor>huggingface_hub.utils.send_telemetry</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_telemetry.py#L20</source><parameters>[{"name": "topic", "val": ": str"}, {"name": "library_name", "val": ": typing.Optional[str] = None"}, {"name": "library_version", "val": ": typing.Optional[str] = None"}, {"name": "user_agent", "val": ": typing.Union[dict, str, NoneType] = None"}]</parameters><paramsdesc>- **topic** (`str`) --
  Name of the topic that is monitored. The topic is directly used to build the URL. If you want to monitor
  subtopics, just use "/" separation. Examples: "gradio", "transformers/examples",...
- **library_name** (`str`, *optional*) --
  The name of the library that is making the HTTP request. Will be added to the user-agent header.
- **library_version** (`str`, *optional*) --
  The version of the library that is making the HTTP request. Will be added to the user-agent header.
- **user_agent** (`str`, `dict`, *optional*) --
  The user agent info in the form of a dictionary or a single string. It will be completed with information about the installed packages.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sends telemetry that helps track usage of different HF libraries.

This usage data helps us debug issues and prioritize new features. However, we understand that not everyone wants
to share additional information, and we respect your privacy. You can disable telemetry collection by setting the
`HF_HUB_DISABLE_TELEMETRY=1` as environment variable. Telemetry is also disabled in offline mode (i.e. when setting
`HF_HUB_OFFLINE=1`).

Telemetry collection is run in a separate thread to minimize impact for the user.



<ExampleCodeBlock anchor="huggingface_hub.utils.send_telemetry.example">

Example:
```py
>>> from huggingface_hub.utils import send_telemetry

# Send telemetry without library information
>>> send_telemetry("ping")

# Send telemetry to subtopic with library information
>>> send_telemetry("gradio/local_link", library_name="gradio", library_version="3.22.1")

# Send telemetry with additional data
>>> send_telemetry(
...     topic="examples",
...     library_name="transformers",
...     library_version="4.26.0",
...     user_agent={"pipeline": "text_classification", "framework": "flax"},
... )
```

</ExampleCodeBlock>


</div>

## 검증기[[validators]]

`huggingface_hub`에는 메소드 인수의 유효성을 자동으로 검사하는 사용자 정의 검증기가 포함되어 있습니다. 이 유효성 검사는 타입 힌트 검증에 관한 [Pydantic](https://pydantic-docs.helpmanual.io/)의 방식에서 영감을 받아 구현되었지만, 기능은 더 제한적입니다.

### 일반 데코레이터[[generic-decorator]]

[validate_hf_hub_args()](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.utils.validate_hf_hub_args)는 `huggingface_hub`의 네이밍 규칙을 따르는 인수를 받는 메소드를 감싸는 일반 데코레이터입니다. 검증기가 구현되어 있는 모든 인수에 대해 기본적으로 유효성 검사가 수행됩니다.

입력이 유효하지 않은 경우 [HFValidationError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.HFValidationError)가 발생합니다. 유효하지 않은 값이 처음 발견되는 즉시 오류가 발생하며 유효성 검사가 중단됩니다.

사용법:

```py
>>> from huggingface_hub.utils import validate_hf_hub_args

>>> @validate_hf_hub_args
... def my_cool_method(repo_id: str):
...     print(repo_id)

>>> my_cool_method(repo_id="valid_repo_id")
valid_repo_id

>>> my_cool_method("other..repo..id")
huggingface_hub.utils._validators.HFValidationError: Cannot have -- or .. in repo_id: 'other..repo..id'.

>>> my_cool_method(repo_id="other..repo..id")
huggingface_hub.utils._validators.HFValidationError: Cannot have -- or .. in repo_id: 'other..repo..id'.
```

#### validate_hf_hub_args[[huggingface_hub.utils.validate_hf_hub_args]][[huggingface_hub.utils.validate_hf_hub_args]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.utils.validate_hf_hub_args</name><anchor>huggingface_hub.utils.validate_hf_hub_args</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_validators.py#L42</source><parameters>[{"name": "fn", "val": ": ~CallableT"}]</parameters><raises>- [HFValidationError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.HFValidationError) -- 
  If an input is not valid.</raises><raisederrors>[HFValidationError](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.HFValidationError)</raisederrors></docstring>
Validate values received as argument for any public method of `huggingface_hub`.

The goal of this decorator is to harmonize validation of arguments reused
everywhere. By default, all defined validators are tested.

Validators:
- [validate_repo_id()](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.utils.validate_repo_id): `repo_id` must be `"repo_name"`
  or `"namespace/repo_name"`. Namespace is a username or an organization.
- `~utils.smoothly_deprecate_legacy_arguments`: Ignore `proxies` when downloading files (should be set globally).

<ExampleCodeBlock anchor="huggingface_hub.utils.validate_hf_hub_args.example">

Example:
```py
>>> from huggingface_hub.utils import validate_hf_hub_args

>>> @validate_hf_hub_args
... def my_cool_method(repo_id: str):
...     print(repo_id)

>>> my_cool_method(repo_id="valid_repo_id")
valid_repo_id

>>> my_cool_method("other..repo..id")
huggingface_hub.utils._validators.HFValidationError: Cannot have -- or .. in repo_id: 'other..repo..id'.

>>> my_cool_method(repo_id="other..repo..id")
huggingface_hub.utils._validators.HFValidationError: Cannot have -- or .. in repo_id: 'other..repo..id'.
```

</ExampleCodeBlock>






</div>

#### HFValidationError[[huggingface_hub.utils.HFValidationError]][[huggingface_hub.errors.HFValidationError]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.errors.HFValidationError</name><anchor>huggingface_hub.errors.HFValidationError</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L153</source><parameters>""</parameters></docstring>
Generic exception thrown by `huggingface_hub` validators.

Inherits from [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError).


</div>

### Argument validators[[argument-validators]]

검증기는 개별적으로도 사용할 수 있습니다. 다음은 검증할 수 있는 모든 인수 목록입니다.

#### repo_id[[huggingface_hub.utils.validate_repo_id]][[huggingface_hub.utils.validate_repo_id]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.utils.validate_repo_id</name><anchor>huggingface_hub.utils.validate_repo_id</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/_validators.py#L94</source><parameters>[{"name": "repo_id", "val": ": str"}]</parameters></docstring>
Validate `repo_id` is valid.

This is not meant to replace the proper validation made on the Hub but rather to
avoid local inconsistencies whenever possible (example: passing `repo_type` in the
`repo_id` is forbidden).

Rules:
- Between 1 and 96 characters.
- Either "repo_name" or "namespace/repo_name"
- [a-zA-Z0-9] or "-", "_", "."
- "--" and ".." are forbidden

Valid: `"foo"`, `"foo/bar"`, `"123"`, `"Foo-BAR_foo.bar123"`

Not valid: `"datasets/foo/bar"`, `".repo_id"`, `"foo--bar"`, `"foo.git"`
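As a rough illustration, the rules above can be approximated with a standard-library sketch. This is only an approximation for explanatory purposes; the actual `validate_repo_id` implementation in `huggingface_hub` is authoritative:

```python
import re

# Rough approximation of the repo_id rules above; not the library's
# actual validator. Allowed characters: [a-zA-Z0-9], "-", "_", ".",
# with at most one "/" separating namespace and repo name.
_REPO_ID_RE = re.compile(r"[A-Za-z0-9][A-Za-z0-9._-]*(/[A-Za-z0-9][A-Za-z0-9._-]*)?")

def looks_like_valid_repo_id(repo_id: str) -> bool:
    if not 1 <= len(repo_id) <= 96:
        return False
    if "--" in repo_id or ".." in repo_id:  # forbidden sequences
        return False
    if repo_id.endswith(".git"):
        return False
    return _REPO_ID_RE.fullmatch(repo_id) is not None
```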

<ExampleCodeBlock anchor="huggingface_hub.utils.validate_repo_id.example">

Example:
```py
>>> from huggingface_hub.utils import validate_repo_id
>>> validate_repo_id(repo_id="valid_repo_id")
>>> validate_repo_id(repo_id="other..repo..id")
huggingface_hub.utils._validators.HFValidationError: Cannot have -- or .. in repo_id: 'other..repo..id'.
```

</ExampleCodeBlock>

Discussed in https://github.com/huggingface/huggingface_hub/issues/1008.
In moon-landing (internal repository):
- https://github.com/huggingface/moon-landing/blob/main/server/lib/Names.ts#L27
- https://github.com/huggingface/moon-landing/blob/main/server/views/components/NewRepoForm/NewRepoForm.svelte#L138


</div>

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/package_reference/utilities.md" />

### 추론[[inference]]
https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/inference_client.md

# 추론[[inference]]

추론은 학습된 모델을 사용하여 새로운 데이터에 대한 예측을 수행하는 과정입니다. 이 과정은 계산량이 많을 수 있기 때문에 전용 서버에서 실행하는 것이 매력적인 선택지가 될 수 있습니다. `huggingface_hub` 라이브러리는 호스팅된 모델에 대해 추론을 실행하는 간단한 방법을 제공합니다. 연결할 수 있는 서비스는 여러 가지가 있습니다:

- [추론 API](https://huggingface.co/docs/api-inference/index): Hugging Face의 인프라에서 가속화된 추론을 무료로 실행할 수 있는 서비스입니다. 빠르게 시작할 수 있으며, 다양한 모델을 테스트하고 AI 제품을 프로토타이핑하는 데에도 유용합니다.
- [추론 엔드포인트](https://huggingface.co/inference-endpoints): 모델을 쉽게 프로덕션 환경에 배포할 수 있는 제품입니다. 추론은 여러분이 선택한 클라우드 제공업체의 전용 완전 관리형 인프라에서 Hugging Face가 실행합니다.

이러한 서비스는 [InferenceClient](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient) 객체를 사용하여 호출할 수 있습니다. 자세한 사용 방법에 대해서는 [이 가이드](../guides/inference)를 참조해주세요.

## 추론 클라이언트[[huggingface_hub.InferenceClient]][[huggingface_hub.InferenceClient]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.InferenceClient</name><anchor>huggingface_hub.InferenceClient</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L123</source><parameters>[{"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "provider", "val": ": typing.Union[typing.Literal['black-forest-labs', 'cerebras', 'clarifai', 'cohere', 'fal-ai', 'featherless-ai', 'fireworks-ai', 'groq', 'hf-inference', 'hyperbolic', 'nebius', 'novita', 'nscale', 'openai', 'publicai', 'replicate', 'sambanova', 'scaleway', 'together', 'wavespeed', 'zai-org'], typing.Literal['auto'], NoneType] = None"}, {"name": "token", "val": ": typing.Optional[str] = None"}, {"name": "timeout", "val": ": typing.Optional[float] = None"}, {"name": "headers", "val": ": typing.Optional[dict[str, str]] = None"}, {"name": "cookies", "val": ": typing.Optional[dict[str, str]] = None"}, {"name": "bill_to", "val": ": typing.Optional[str] = None"}, {"name": "base_url", "val": ": typing.Optional[str] = None"}, {"name": "api_key", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **model** (`str`, `optional`) --
  The model to run inference with. Can be a model id hosted on the Hugging Face Hub, e.g. `meta-llama/Meta-Llama-3-8B-Instruct`
  or a URL to a deployed Inference Endpoint. Defaults to None, in which case a recommended model is
  automatically selected for the task.
  Note: for better compatibility with OpenAI's client, `model` has been aliased as `base_url`. Those 2
  arguments are mutually exclusive. If a URL is passed as `model` or `base_url` for chat completion, the `(/v1)/chat/completions` suffix path will be appended to the URL.
- **provider** (`str`, *optional*) --
  Name of the provider to use for inference. Can be `"black-forest-labs"`, `"cerebras"`, `"clarifai"`, `"cohere"`, `"fal-ai"`, `"featherless-ai"`, `"fireworks-ai"`, `"groq"`, `"hf-inference"`, `"hyperbolic"`, `"nebius"`, `"novita"`, `"nscale"`, `"openai"`, `"publicai"`, `"replicate"`, `"sambanova"`, `"scaleway"`, `"together"`, `"wavespeed"` or `"zai-org"`.
  Defaults to "auto" i.e. the first of the providers available for the model, sorted by the user's order in https://hf.co/settings/inference-providers.
  If model is a URL or `base_url` is passed, then `provider` is not used.
- **token** (`str`, *optional*) --
  Hugging Face token. Will default to the locally saved token if not provided.
  Note: for better compatibility with OpenAI's client, `token` has been aliased as `api_key`. Those 2
  arguments are mutually exclusive and have the exact same behavior.
- **timeout** (`float`, `optional`) --
  The maximum number of seconds to wait for a response from the server. Defaults to None, meaning it will loop until the server is available.
- **headers** (`dict[str, str]`, `optional`) --
  Additional headers to send to the server. By default only the authorization and user-agent headers are sent.
  Values in this dictionary will override the default values.
- **bill_to** (`str`, `optional`) --
  The billing account to use for the requests. By default the requests are billed on the user's account.
  Requests can only be billed to an organization the user is a member of, and which has subscribed to Enterprise Hub.
- **cookies** (`dict[str, str]`, `optional`) --
  Additional cookies to send to the server.
- **base_url** (`str`, `optional`) --
  Base URL to run inference. This is a duplicated argument from `model` to make [InferenceClient](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient)
  follow the same pattern as `openai.OpenAI` client. Cannot be used if `model` is set. Defaults to None.
- **api_key** (`str`, `optional`) --
  Token to use for authentication. This is a duplicated argument from `token` to make [InferenceClient](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient)
  follow the same pattern as `openai.OpenAI` client. Cannot be used if `token` is set. Defaults to None.</paramsdesc><paramgroups>0</paramgroups></docstring>

Initialize a new Inference Client.

[InferenceClient](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient) aims to provide a unified experience to perform inference. The client can be used
seamlessly with either the (free) Inference API, self-hosted Inference Endpoints, or third-party Inference Providers.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>audio_classification</name><anchor>huggingface_hub.InferenceClient.audio_classification</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L300</source><parameters>[{"name": "audio", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}, {"name": "function_to_apply", "val": ": typing.Optional[ForwardRef('AudioClassificationOutputTransform')] = None"}]</parameters><paramsdesc>- **audio** (Union[str, Path, bytes, BinaryIO]) --
  The audio content to classify. It can be raw audio bytes, a local audio file, or a URL pointing to an
  audio file.
- **model** (`str`, *optional*) --
  The model to use for audio classification. Can be a model ID hosted on the Hugging Face Hub
  or a URL to a deployed Inference Endpoint. If not provided, the default recommended model for
  audio classification will be used.
- **top_k** (`int`, *optional*) --
  When specified, limits the output to the top K most probable classes.
- **function_to_apply** (`"AudioClassificationOutputTransform"`, *optional*) --
  The function to apply to the model outputs in order to retrieve the scores.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[AudioClassificationOutputElement]`</rettype><retdesc>List of [AudioClassificationOutputElement](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.AudioClassificationOutputElement) items containing the predicted labels and their confidence.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Perform audio classification on the provided audio content.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.audio_classification.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.audio_classification("audio.flac")
[
    AudioClassificationOutputElement(score=0.4976358711719513, label='hap'),
    AudioClassificationOutputElement(score=0.3677836060523987, label='neu'),
    ...
]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>audio_to_audio</name><anchor>huggingface_hub.InferenceClient.audio_to_audio</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L357</source><parameters>[{"name": "audio", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **audio** (Union[str, Path, bytes, BinaryIO]) --
  The audio content for the model. It can be raw audio bytes, a local audio file, or a URL pointing to an
  audio file.
- **model** (`str`, *optional*) --
  The model can be any model which takes an audio file and returns another audio file. Can be a model ID hosted on the Hugging Face Hub
  or a URL to a deployed Inference Endpoint. If not provided, the default recommended model for
  audio_to_audio will be used.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[AudioToAudioOutputElement]`</rettype><retdesc>A list of [AudioToAudioOutputElement](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.AudioToAudioOutputElement) items containing audios label, content-type, and audio content in blob.</retdesc><raises>- ``InferenceTimeoutError`` -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>``InferenceTimeoutError`` or `HfHubHTTPError`</raisederrors></docstring>

Performs multiple audio-to-audio tasks depending on the model (e.g. speech enhancement, source separation).











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.audio_to_audio.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> audio_output = client.audio_to_audio("audio.flac")
>>> for i, item in enumerate(audio_output):
...     with open(f"output_{i}.flac", "wb") as f:
...         f.write(item.blob)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>automatic_speech_recognition</name><anchor>huggingface_hub.InferenceClient.automatic_speech_recognition</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L409</source><parameters>[{"name": "audio", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "extra_body", "val": ": typing.Optional[dict] = None"}]</parameters><paramsdesc>- **audio** (Union[str, Path, bytes, BinaryIO]) --
  The content to transcribe. It can be raw audio bytes, local audio file, or a URL to an audio file.
- **model** (`str`, *optional*) --
  The model to use for ASR. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. If not provided, the default recommended model for ASR will be used.
- **extra_body** (`dict`, *optional*) --
  Additional provider-specific parameters to pass to the model. Refer to the provider's documentation
  for supported parameters.</paramsdesc><paramgroups>0</paramgroups><rettype>[AutomaticSpeechRecognitionOutput](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.AutomaticSpeechRecognitionOutput)</rettype><retdesc>An item containing the transcribed text and optionally the timestamp chunks.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Perform automatic speech recognition (ASR or audio-to-text) on the given audio content.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.automatic_speech_recognition.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.automatic_speech_recognition("hello_world.flac").text
"hello world"
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>chat_completion</name><anchor>huggingface_hub.InferenceClient.chat_completion</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L535</source><parameters>[{"name": "messages", "val": ": list"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "stream", "val": ": bool = False"}, {"name": "frequency_penalty", "val": ": typing.Optional[float] = None"}, {"name": "logit_bias", "val": ": typing.Optional[list[float]] = None"}, {"name": "logprobs", "val": ": typing.Optional[bool] = None"}, {"name": "max_tokens", "val": ": typing.Optional[int] = None"}, {"name": "n", "val": ": typing.Optional[int] = None"}, {"name": "presence_penalty", "val": ": typing.Optional[float] = None"}, {"name": "response_format", "val": ": typing.Union[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputResponseFormatText, huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputResponseFormatJSONSchema, huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputResponseFormatJSONObject, NoneType] = None"}, {"name": "seed", "val": ": typing.Optional[int] = None"}, {"name": "stop", "val": ": typing.Optional[list[str]] = None"}, {"name": "stream_options", "val": ": typing.Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputStreamOptions] = None"}, {"name": "temperature", "val": ": typing.Optional[float] = None"}, {"name": "tool_choice", "val": ": typing.Union[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputToolChoiceClass, ForwardRef('ChatCompletionInputToolChoiceEnum'), NoneType] = None"}, {"name": "tool_prompt", "val": ": typing.Optional[str] = None"}, {"name": "tools", "val": ": typing.Optional[list[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputTool]] = None"}, {"name": "top_logprobs", "val": ": typing.Optional[int] = None"}, {"name": 
"top_p", "val": ": typing.Optional[float] = None"}, {"name": "extra_body", "val": ": typing.Optional[dict] = None"}]</parameters><paramsdesc>- **messages** (List of [ChatCompletionInputMessage](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.ChatCompletionInputMessage)) --
  Conversation history consisting of roles and content pairs.
- **model** (`str`, *optional*) --
  The model to use for chat-completion. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. If not provided, the default recommended model for chat-based text-generation will be used.
  See https://huggingface.co/tasks/text-generation for more details.
  If `model` is a model ID, it is passed to the server as the `model` parameter. If you want to define a
  custom URL while setting `model` in the request payload, you must set `base_url` when initializing [InferenceClient](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient).
- **frequency_penalty** (`float`, *optional*) --
  Penalizes new tokens based on their existing frequency
  in the text so far. Range: [-2.0, 2.0]. Defaults to 0.0.
- **logit_bias** (`list[float]`, *optional*) --
  Adjusts the likelihood of specific tokens appearing in the generated output.
- **logprobs** (`bool`, *optional*) --
  Whether to return log probabilities of the output tokens or not. If true, returns the log
  probabilities of each output token returned in the content of message.
- **max_tokens** (`int`, *optional*) --
  Maximum number of tokens allowed in the response. Defaults to 100.
- **n** (`int`, *optional*) --
  The number of completions to generate for each prompt.
- **presence_penalty** (`float`, *optional*) --
  Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the
  text so far, increasing the model's likelihood to talk about new topics.
- **response_format** (`ChatCompletionInputGrammarType()`, *optional*) --
  Grammar constraints. Can be either a JSONSchema or a regex.
- **seed** (Optional`int`, *optional*) --
  Seed for reproducible control flow. Defaults to None.
- **stop** (`list[str]`, *optional*) --
  Up to four strings which trigger the end of the response.
  Defaults to None.
- **stream** (`bool`, *optional*) --
  Enable realtime streaming of responses. Defaults to False.
- **stream_options** ([ChatCompletionInputStreamOptions](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.ChatCompletionInputStreamOptions), *optional*) --
  Options for streaming completions.
- **temperature** (`float`, *optional*) --
  Controls randomness of the generations. Lower values ensure
  less random completions. Range: [0, 2]. Defaults to 1.0.
- **top_logprobs** (`int`, *optional*) --
  An integer between 0 and 5 specifying the number of most likely tokens to return at each token
  position, each with an associated log probability. logprobs must be set to true if this parameter is
  used.
- **top_p** (`float`, *optional*) --
  Fraction of the most likely next words to sample from.
  Must be between 0 and 1. Defaults to 1.0.
- **tool_choice** ([ChatCompletionInputToolChoiceClass](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.ChatCompletionInputToolChoiceClass) or `ChatCompletionInputToolChoiceEnum()`, *optional*) --
  The tool to use for the completion. Defaults to "auto".
- **tool_prompt** (`str`, *optional*) --
  A prompt to be appended before the tools.
- **tools** (List of [ChatCompletionInputTool](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.ChatCompletionInputTool), *optional*) --
  A list of tools the model may call. Currently, only functions are supported as a tool. Use this to
  provide a list of functions the model may generate JSON inputs for.
- **extra_body** (`dict`, *optional*) --
  Additional provider-specific parameters to pass to the model. Refer to the provider's documentation
  for supported parameters.</paramsdesc><paramgroups>0</paramgroups><rettype>[ChatCompletionOutput](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.ChatCompletionOutput) or Iterable of [ChatCompletionStreamOutput](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.ChatCompletionStreamOutput)</rettype><retdesc>Generated text returned from the server:
- if `stream=False`, the generated text is returned as a [ChatCompletionOutput](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.ChatCompletionOutput) (default).
- if `stream=True`, the generated text is returned token by token as a sequence of [ChatCompletionStreamOutput](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.ChatCompletionStreamOutput).</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

A method for completing conversations using a specified language model.

> [!TIP]
> The `client.chat_completion` method is aliased as `client.chat.completions.create` for compatibility with OpenAI's client.
> Inputs and outputs are strictly the same and using either syntax will yield the same results.
> Check out the [Inference guide](https://huggingface.co/docs/huggingface_hub/guides/inference#openai-compatibility)
> for more details about OpenAI's compatibility.

> [!TIP]
> You can pass provider-specific parameters to the model by using the `extra_body` argument.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.chat_completion.example">

Example:

```py
>>> from huggingface_hub import InferenceClient
>>> messages = [{"role": "user", "content": "What is the capital of France?"}]
>>> client = InferenceClient("meta-llama/Meta-Llama-3-8B-Instruct")
>>> client.chat_completion(messages, max_tokens=100)
ChatCompletionOutput(
    choices=[
        ChatCompletionOutputComplete(
            finish_reason='eos_token',
            index=0,
            message=ChatCompletionOutputMessage(
                role='assistant',
                content='The capital of France is Paris.',
                name=None,
                tool_calls=None
            ),
            logprobs=None
        )
    ],
    created=1719907176,
    id='',
    model='meta-llama/Meta-Llama-3-8B-Instruct',
    object='text_completion',
    system_fingerprint='2.0.4-sha-f426a33',
    usage=ChatCompletionOutputUsage(
        completion_tokens=8,
        prompt_tokens=17,
        total_tokens=25
    )
)
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.chat_completion.example-2">

Example using streaming:
```py
>>> from huggingface_hub import InferenceClient
>>> messages = [{"role": "user", "content": "What is the capital of France?"}]
>>> client = InferenceClient("meta-llama/Meta-Llama-3-8B-Instruct")
>>> for token in client.chat_completion(messages, max_tokens=10, stream=True):
...     print(token)
ChatCompletionStreamOutput(choices=[ChatCompletionStreamOutputChoice(delta=ChatCompletionStreamOutputDelta(content='The', role='assistant'), index=0, finish_reason=None)], created=1710498504)
ChatCompletionStreamOutput(choices=[ChatCompletionStreamOutputChoice(delta=ChatCompletionStreamOutputDelta(content=' capital', role='assistant'), index=0, finish_reason=None)], created=1710498504)
(...)
ChatCompletionStreamOutput(choices=[ChatCompletionStreamOutputChoice(delta=ChatCompletionStreamOutputDelta(content=' may', role='assistant'), index=0, finish_reason=None)], created=1710498504)
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.chat_completion.example-3">

Example using OpenAI's syntax:
```py
# instead of `from openai import OpenAI`
from huggingface_hub import InferenceClient

# instead of `client = OpenAI(...)`
client = InferenceClient(
    base_url=...,
    api_key=...,
)

output = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Count to 10"},
    ],
    stream=True,
    max_tokens=1024,
)

for chunk in output:
    print(chunk.choices[0].delta.content)
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.chat_completion.example-4">

Example using a third-party provider directly with extra (provider-specific) parameters. Usage will be billed on your Together AI account.
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="together",  # Use Together AI provider
...     api_key="<together_api_key>",  # Pass your Together API key directly
... )
>>> client.chat_completion(
...     model="meta-llama/Meta-Llama-3-8B-Instruct",
...     messages=[{"role": "user", "content": "What is the capital of France?"}],
...     extra_body={"safety_model": "Meta-Llama/Llama-Guard-7b"},
... )
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.chat_completion.example-5">

Example using a third-party provider through Hugging Face Routing. Usage will be billed on your Hugging Face account.
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="sambanova",  # Use Sambanova provider
...     api_key="hf_...",  # Pass your HF token
... )
>>> client.chat_completion(
...     model="meta-llama/Meta-Llama-3-8B-Instruct",
...     messages=[{"role": "user", "content": "What is the capital of France?"}],
... )
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.chat_completion.example-6">

Example using Image + Text as input:
```py
>>> import base64
>>> from huggingface_hub import InferenceClient

# provide a remote URL
>>> image_url = "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
# or a base64-encoded image
>>> image_path = "/path/to/image.jpeg"
>>> with open(image_path, "rb") as f:
...     base64_image = base64.b64encode(f.read()).decode("utf-8")
>>> image_url = f"data:image/jpeg;base64,{base64_image}"

>>> client = InferenceClient("meta-llama/Llama-3.2-11B-Vision-Instruct")
>>> output = client.chat.completions.create(
...     messages=[
...         {
...             "role": "user",
...             "content": [
...                 {
...                     "type": "image_url",
...                     "image_url": {"url": image_url},
...                 },
...                 {
...                     "type": "text",
...                     "text": "Describe this image in one sentence.",
...                 },
...             ],
...         },
...     ],
... )
>>> output.choices[0].message.content
'The image depicts the iconic Statue of Liberty situated in New York Harbor, New York, on a clear day.'
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.chat_completion.example-7">

Example using tools:
```py
>>> client = InferenceClient("meta-llama/Meta-Llama-3-70B-Instruct")
>>> messages = [
...     {
...         "role": "system",
...         "content": "Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous.",
...     },
...     {
...         "role": "user",
...         "content": "What's the weather like the next 3 days in San Francisco, CA?",
...     },
... ]
>>> tools = [
...     {
...         "type": "function",
...         "function": {
...             "name": "get_current_weather",
...             "description": "Get the current weather",
...             "parameters": {
...                 "type": "object",
...                 "properties": {
...                     "location": {
...                         "type": "string",
...                         "description": "The city and state, e.g. San Francisco, CA",
...                     },
...                     "format": {
...                         "type": "string",
...                         "enum": ["celsius", "fahrenheit"],
...                         "description": "The temperature unit to use. Infer this from the user's location.",
...                     },
...                 },
...                 "required": ["location", "format"],
...             },
...         },
...     },
...     {
...         "type": "function",
...         "function": {
...             "name": "get_n_day_weather_forecast",
...             "description": "Get an N-day weather forecast",
...             "parameters": {
...                 "type": "object",
...                 "properties": {
...                     "location": {
...                         "type": "string",
...                         "description": "The city and state, e.g. San Francisco, CA",
...                     },
...                     "format": {
...                         "type": "string",
...                         "enum": ["celsius", "fahrenheit"],
...                         "description": "The temperature unit to use. Infer this from the user's location.",
...                     },
...                     "num_days": {
...                         "type": "integer",
...                         "description": "The number of days to forecast",
...                     },
...                 },
...                 "required": ["location", "format", "num_days"],
...             },
...         },
...     },
... ]

>>> response = client.chat_completion(
...     model="meta-llama/Meta-Llama-3-70B-Instruct",
...     messages=messages,
...     tools=tools,
...     tool_choice="auto",
...     max_tokens=500,
... )
>>> response.choices[0].message.tool_calls[0].function
ChatCompletionOutputFunctionDefinition(
    arguments={
        'location': 'San Francisco, CA',
        'format': 'fahrenheit',
        'num_days': 3
    },
    name='get_n_day_weather_forecast',
    description=None
)
```

</ExampleCodeBlock>
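A returned tool call still has to be executed locally. A minimal sketch of the dispatch step, using a hypothetical local implementation of the declared forecast tool; note that `arguments` may arrive as a dict (as above) or as a JSON string, depending on the provider:

```py
import json

def get_n_day_weather_forecast(location, format, num_days):
    # Hypothetical stand-in for a real forecast lookup.
    return f"{num_days}-day forecast for {location} in {format}"

TOOL_REGISTRY = {"get_n_day_weather_forecast": get_n_day_weather_forecast}

def dispatch(function_call):
    """Route a tool call's function definition to a local callable."""
    args = function_call.arguments
    if isinstance(args, str):  # some providers serialize arguments as JSON
        args = json.loads(args)
    return TOOL_REGISTRY[function_call.name](**args)
```

With the response above, `dispatch(response.choices[0].message.tool_calls[0].function)` would run the forecast function with the model-chosen arguments.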

<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.chat_completion.example-8">

Example using response_format:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient("meta-llama/Meta-Llama-3-70B-Instruct")
>>> messages = [
...     {
...         "role": "user",
...         "content": "I saw a puppy a cat and a raccoon during my bike ride in the park. What did I see and when?",
...     },
... ]
>>> response_format = {
...     "type": "json",
...     "value": {
...         "properties": {
...             "location": {"type": "string"},
...             "activity": {"type": "string"},
...             "animals_seen": {"type": "integer", "minimum": 1, "maximum": 5},
...             "animals": {"type": "array", "items": {"type": "string"}},
...         },
...         "required": ["location", "activity", "animals_seen", "animals"],
...     },
... }
>>> response = client.chat_completion(
...     messages=messages,
...     response_format=response_format,
...     max_tokens=500,
... )
>>> response.choices[0].message.content
'{"activity": "bike ride", "animals": ["puppy", "cat", "raccoon"], "animals_seen": 3, "location": "park"}'
```

</ExampleCodeBlock>
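Since the constrained output is a JSON string, parse it before use. A short sketch, with a literal standing in for `response.choices[0].message.content`:

```py
import json

content = '{"location": "park", "activity": "bike ride", "animals_seen": 3, "animals": ["puppy", "cat", "raccoon"]}'
data = json.loads(content)
print(data["animals_seen"])  # -> 3
```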


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>document_question_answering</name><anchor>huggingface_hub.InferenceClient.document_question_answering</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L937</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "question", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "doc_stride", "val": ": typing.Optional[int] = None"}, {"name": "handle_impossible_answer", "val": ": typing.Optional[bool] = None"}, {"name": "lang", "val": ": typing.Optional[str] = None"}, {"name": "max_answer_len", "val": ": typing.Optional[int] = None"}, {"name": "max_question_len", "val": ": typing.Optional[int] = None"}, {"name": "max_seq_len", "val": ": typing.Optional[int] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}, {"name": "word_boxes", "val": ": typing.Optional[list[typing.Union[list[float], str]]] = None"}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO]`) --
  The input image for the context. It can be raw bytes, an image file, or a URL to an online image.
- **question** (`str`) --
  Question to be answered.
- **model** (`str`, *optional*) --
  The model to use for the document question answering task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended document question answering model will be used.
  Defaults to None.
- **doc_stride** (`int`, *optional*) --
  If the words in the document are too long to fit with the question for the model, it will be split into
  several chunks with some overlap. This argument controls the size of that overlap.
- **handle_impossible_answer** (`bool`, *optional*) --
  Whether to accept "impossible" as an answer.
- **lang** (`str`, *optional*) --
  Language to use while running OCR. Defaults to English.
- **max_answer_len** (`int`, *optional*) --
  The maximum length of predicted answers (e.g., only answers with a shorter length are considered).
- **max_question_len** (`int`, *optional*) --
  The maximum length of the question after tokenization. It will be truncated if needed.
- **max_seq_len** (`int`, *optional*) --
  The maximum length of the total sentence (context + question) in tokens of each chunk passed to the
  model. The context will be split into several chunks (using doc_stride as overlap) if needed.
- **top_k** (`int`, *optional*) --
  The number of answers to return (chosen by order of likelihood). Fewer than top_k
  answers may be returned if there are not enough options available within the context.
- **word_boxes** (`list[Union[list[float], str]]`, *optional*) --
  A list of words and bounding boxes (normalized 0->1000). If provided, the inference will skip the OCR
  step and use the provided bounding boxes instead.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[DocumentQuestionAnsweringOutputElement]`</rettype><retdesc>a list of [DocumentQuestionAnsweringOutputElement](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.DocumentQuestionAnsweringOutputElement) items containing the predicted label, associated probability, word ids, and page number.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Answer questions on document images.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.document_question_answering.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.document_question_answering(image="https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png", question="What is the invoice number?")
[DocumentQuestionAnsweringOutputElement(answer='us-001', end=16, score=0.9999666213989258, start=16)]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>feature_extraction</name><anchor>huggingface_hub.InferenceClient.feature_extraction</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L1024</source><parameters>[{"name": "text", "val": ": str"}, {"name": "normalize", "val": ": typing.Optional[bool] = None"}, {"name": "prompt_name", "val": ": typing.Optional[str] = None"}, {"name": "truncate", "val": ": typing.Optional[bool] = None"}, {"name": "truncation_direction", "val": ": typing.Optional[typing.Literal['Left', 'Right']] = None"}, {"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **text** (*str*) --
  The text to embed.
- **model** (*str*, *optional*) --
  The model to use for the feature extraction task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended feature extraction model will be used.
  Defaults to None.
- **normalize** (*bool*, *optional*) --
  Whether to normalize the embeddings or not.
  Only available on servers powered by Text-Embedding-Inference.
- **prompt_name** (*str*, *optional*) --
  The name of the prompt that should be used for encoding. If not set, no prompt will be applied.
  Must be a key in the *Sentence Transformers* configuration *prompts* dictionary.
  For example, if `prompt_name` is "query" and `prompts` is {"query": "query: ", ...},
  then the sentence "What is the capital of France?" will be encoded as "query: What is the capital of France?"
  because the prompt text will be prepended before any text to encode.
- **truncate** (*bool*, *optional*) --
  Whether to truncate the input text or not.
  Only available on servers powered by Text-Embedding-Inference.
- **truncation_direction** (*Literal["Left", "Right"]*, *optional*) --
  Which side of the input should be truncated when *truncate=True* is passed.</paramsdesc><paramgroups>0</paramgroups><rettype>*np.ndarray*</rettype><retdesc>The embedding representing the input text as a float32 numpy array.</retdesc><raises>- [*InferenceTimeoutError*] -- 
  If the model is unavailable or the request times out.
- [*HfHubHTTPError*] -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[*InferenceTimeoutError*] or [*HfHubHTTPError*]</raisederrors></docstring>

Generate embeddings for a given text.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.feature_extraction.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.feature_extraction("Hi, who are you?")
array([[ 2.424802  ,  2.93384   ,  1.1750331 , ...,  1.240499, -0.13776633, -0.7889173 ],
[-0.42943227, -0.6364878 , -1.693462  , ...,  0.41978157, -2.4336355 ,  0.6162071 ],
...,
[ 0.28552425, -0.928395  , -1.2077185 , ...,  0.76810825, -2.1069427 ,  0.6236161 ]], dtype=float32)
```

</ExampleCodeBlock>
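Embeddings are typically compared with cosine similarity. A dependency-free sketch; in real usage the arguments would be rows of the array returned by `feature_extraction`:

```py
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length float vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

A value close to 1.0 indicates semantically similar texts; values near 0 indicate unrelated ones.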


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fill_mask</name><anchor>huggingface_hub.InferenceClient.fill_mask</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L1097</source><parameters>[{"name": "text", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "targets", "val": ": typing.Optional[list[str]] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **text** (`str`) --
  A string to be filled in; it must contain the [MASK] token (check the model card for the exact name of the mask).
- **model** (`str`, *optional*) --
  The model to use for the fill mask task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended fill mask model will be used.
- **targets** (`list[str]`, *optional*) --
  When passed, the model will limit the scores to the passed targets instead of looking up in the whole
  vocabulary. If the provided targets are not in the model vocab, they will be tokenized and the first
  resulting token will be used (with a warning, and that might be slower).
- **top_k** (`int`, *optional*) --
  When passed, overrides the number of predictions to return.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[FillMaskOutputElement]`</rettype><retdesc>a list of [FillMaskOutputElement](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.FillMaskOutputElement) items containing the predicted label, associated
probability, token reference, and completed text.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Fill in a hole with a missing word (a token, to be precise).











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.fill_mask.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.fill_mask("The goal of life is <mask>.")
[
    FillMaskOutputElement(score=0.06897063553333282, token=11098, token_str=' happiness', sequence='The goal of life is happiness.'),
    FillMaskOutputElement(score=0.06554922461509705, token=45075, token_str=' immortality', sequence='The goal of life is immortality.')
]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_endpoint_info</name><anchor>huggingface_hub.InferenceClient.get_endpoint_info</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L3269</source><parameters>[{"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None.</paramsdesc><paramgroups>0</paramgroups><rettype>`dict[str, Any]`</rettype><retdesc>Information about the endpoint.</retdesc></docstring>

Get information about the deployed endpoint.

This endpoint is only available on endpoints powered by Text-Generation-Inference (TGI) or Text-Embedding-Inference (TEI).
Endpoints powered by `transformers` return an empty payload.







<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.get_endpoint_info.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient("meta-llama/Meta-Llama-3-70B-Instruct")
>>> client.get_endpoint_info()
{
    'model_id': 'meta-llama/Meta-Llama-3-70B-Instruct',
    'model_sha': None,
    'model_dtype': 'torch.float16',
    'model_device_type': 'cuda',
    'model_pipeline_tag': None,
    'max_concurrent_requests': 128,
    'max_best_of': 2,
    'max_stop_sequences': 4,
    'max_input_length': 8191,
    'max_total_tokens': 8192,
    'waiting_served_ratio': 0.3,
    'max_batch_total_tokens': 1259392,
    'max_waiting_tokens': 20,
    'max_batch_size': None,
    'validation_workers': 32,
    'max_client_batch_size': 4,
    'version': '2.0.2',
    'sha': 'dccab72549635c7eb5ddb17f43f0b7cdff07c214',
    'docker_label': 'sha-dccab72'
}
```

</ExampleCodeBlock>
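The reported limits can be used to size requests before sending them. A minimal sketch, where `info` stands in for the dict returned by `get_endpoint_info()`:

```py
def remaining_tokens(info, prompt_tokens):
    """How many new tokens fit, given the endpoint's reported max_total_tokens."""
    return max(0, info["max_total_tokens"] - prompt_tokens)
```

For the endpoint above, `remaining_tokens({'max_total_tokens': 8192}, 17)` is 8175.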


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>health_check</name><anchor>huggingface_hub.InferenceClient.health_check</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L3327</source><parameters>[{"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **model** (`str`, *optional*) --
  URL of the Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None.</paramsdesc><paramgroups>0</paramgroups><rettype>`bool`</rettype><retdesc>True if everything is working fine.</retdesc></docstring>

Check the health of the deployed endpoint.

Health check is only available with Inference Endpoints powered by Text-Generation-Inference (TGI) or Text-Embedding-Inference (TEI).







<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.health_check.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient("https://jzgu0buei5.us-east-1.aws.endpoints.huggingface.cloud")
>>> client.health_check()
True
```

</ExampleCodeBlock>
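When an endpoint is still scaling up, a short retry loop around the health check is a common pattern. A sketch with the check injected as a callable (e.g. `client.health_check`) so the loop itself needs no live endpoint:

```py
import time

def wait_until_healthy(check, retries=10, delay=1.0):
    """Poll `check()` up to `retries` times, sleeping `delay` seconds between tries."""
    for _ in range(retries):
        if check():
            return True
        time.sleep(delay)
    return False
```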


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>image_classification</name><anchor>huggingface_hub.InferenceClient.image_classification</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L1153</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "function_to_apply", "val": ": typing.Optional[ForwardRef('ImageClassificationOutputTransform')] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO, PIL.Image.Image]`) --
  The image to classify. It can be raw bytes, an image file, a URL to an online image, or a PIL Image.
- **model** (`str`, *optional*) --
  The model to use for image classification. Can be a model ID hosted on the Hugging Face Hub or a URL to a
  deployed Inference Endpoint. If not provided, the default recommended model for image classification will be used.
- **function_to_apply** (`"ImageClassificationOutputTransform"`, *optional*) --
  The function to apply to the model outputs in order to retrieve the scores.
- **top_k** (`int`, *optional*) --
  When specified, limits the output to the top K most probable classes.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[ImageClassificationOutputElement]`</rettype><retdesc>a list of [ImageClassificationOutputElement](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.ImageClassificationOutputElement) items containing the predicted label and associated probability.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Perform image classification on the given image using the specified model.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.image_classification.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.image_classification("https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg")
[ImageClassificationOutputElement(label='Blenheim spaniel', score=0.9779096841812134), ...]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>image_segmentation</name><anchor>huggingface_hub.InferenceClient.image_segmentation</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L1203</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "mask_threshold", "val": ": typing.Optional[float] = None"}, {"name": "overlap_mask_area_threshold", "val": ": typing.Optional[float] = None"}, {"name": "subtask", "val": ": typing.Optional[ForwardRef('ImageSegmentationSubtask')] = None"}, {"name": "threshold", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO, PIL.Image.Image]`) --
  The image to segment. It can be raw bytes, an image file, a URL to an online image, or a PIL Image.
- **model** (`str`, *optional*) --
  The model to use for image segmentation. Can be a model ID hosted on the Hugging Face Hub or a URL to a
  deployed Inference Endpoint. If not provided, the default recommended model for image segmentation will be used.
- **mask_threshold** (`float`, *optional*) --
  Threshold to use when turning the predicted masks into binary values.
- **overlap_mask_area_threshold** (`float`, *optional*) --
  Mask overlap threshold to eliminate small, disconnected segments.
- **subtask** (`"ImageSegmentationSubtask"`, *optional*) --
  Segmentation task to be performed, depending on model capabilities.
- **threshold** (`float`, *optional*) --
  Probability threshold to filter out predicted masks.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[ImageSegmentationOutputElement]`</rettype><retdesc>A list of [ImageSegmentationOutputElement](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.ImageSegmentationOutputElement) items containing the segmented masks and associated attributes.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Perform image segmentation on the given image using the specified model.

> [!WARNING]
> You must have `PIL` installed if you want to work with images (`pip install Pillow`).











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.image_segmentation.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.image_segmentation("cat.jpg")
[ImageSegmentationOutputElement(score=0.989008, label='LABEL_184', mask=<PIL.PngImagePlugin.PngImageFile image mode=L size=400x300 at 0x7FDD2B129CC0>), ...]
```

</ExampleCodeBlock>
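Each returned mask is a PIL image, so it can be saved directly. A small sketch that writes one PNG per predicted segment; `segments` stands in for the list returned above:

```py
def save_masks(segments, prefix="mask"):
    """Save each segment's mask as `<prefix>_<index>_<label>.png` and return the paths."""
    paths = []
    for i, segment in enumerate(segments):
        path = f"{prefix}_{i}_{segment.label}.png"
        segment.mask.save(path)
        paths.append(path)
    return paths
```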


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>image_to_image</name><anchor>huggingface_hub.InferenceClient.image_to_image</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L1270</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "prompt", "val": ": typing.Optional[str] = None"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "guidance_scale", "val": ": typing.Optional[float] = None"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "target_size", "val": ": typing.Optional[huggingface_hub.inference._generated.types.image_to_image.ImageToImageTargetSize] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO, PIL.Image.Image]`) --
  The input image for translation. It can be raw bytes, an image file, a URL to an online image, or a PIL Image.
- **prompt** (`str`, *optional*) --
  The text prompt to guide the image generation.
- **negative_prompt** (`str`, *optional*) --
  One prompt to guide what NOT to include in image generation.
- **num_inference_steps** (`int`, *optional*) --
  For diffusion models. The number of denoising steps. More denoising steps usually lead to a higher
  quality image at the expense of slower inference.
- **guidance_scale** (`float`, *optional*) --
  For diffusion models. A higher guidance scale value encourages the model to generate images closely
  linked to the text prompt at the expense of lower image quality.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None.
- **target_size** (`ImageToImageTargetSize`, *optional*) --
  The size in pixels of the output image. This parameter is only supported by some providers and for
  specific models. It will be ignored when unsupported.</paramsdesc><paramgroups>0</paramgroups><rettype>`Image`</rettype><retdesc>The translated image.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Perform image-to-image translation using a specified model.

> [!WARNING]
> You must have `PIL` installed if you want to work with images (`pip install Pillow`).











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.image_to_image.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> image = client.image_to_image("cat.jpg", prompt="turn the cat into a tiger")
>>> image.save("tiger.jpg")
```

</ExampleCodeBlock>



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>image_to_text</name><anchor>huggingface_hub.InferenceClient.image_to_text</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L1425</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO, PIL.Image.Image]`) --
  The input image to caption. It can be raw bytes, an image file, a URL to an online image, or a PIL Image.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImageToTextOutput](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.ImageToTextOutput)</rettype><retdesc>The generated text.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Takes an input image and returns text.

Models can have very different outputs depending on your use case (image captioning, optical character recognition
(OCR), Pix2Struct, etc.). Please have a look at the model card to learn more about a model's specificities.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.image_to_text.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.image_to_text("cat.jpg")
'a cat standing in a grassy field '
>>> client.image_to_text("https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg")
'a dog laying on the grass next to a flower pot '
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>image_to_video</name><anchor>huggingface_hub.InferenceClient.image_to_video</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L1346</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "prompt", "val": ": typing.Optional[str] = None"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "num_frames", "val": ": typing.Optional[float] = None"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "guidance_scale", "val": ": typing.Optional[float] = None"}, {"name": "seed", "val": ": typing.Optional[int] = None"}, {"name": "target_size", "val": ": typing.Optional[huggingface_hub.inference._generated.types.image_to_video.ImageToVideoTargetSize] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO, PIL.Image.Image]`) --
  The input image to generate a video from. It can be raw bytes, an image file, a URL to an online image, or a PIL Image.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None.
- **prompt** (`str`, *optional*) --
  The text prompt to guide the video generation.
- **negative_prompt** (`str`, *optional*) --
  One prompt to guide what NOT to include in video generation.
- **num_frames** (`float`, *optional*) --
  The number of video frames to generate.
- **num_inference_steps** (`int`, *optional*) --
  For diffusion models. The number of denoising steps. More denoising steps usually lead to a higher
  quality image at the expense of slower inference.
- **guidance_scale** (`float`, *optional*) --
  For diffusion models. A higher guidance scale value encourages the model to generate videos closely
  linked to the text prompt at the expense of lower image quality.
- **seed** (`int`, *optional*) --
  The seed to use for the video generation.
- **target_size** (`ImageToVideoTargetSize`, *optional*) --
  The size in pixels of the output video frames.</paramsdesc><paramgroups>0</paramgroups><rettype>`bytes`</rettype><retdesc>The generated video.</retdesc></docstring>

Generate a video from an input image.







<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.image_to_video.example">

Examples:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> video = client.image_to_video("cat.jpg", model="Wan-AI/Wan2.2-I2V-A14B", prompt="turn the cat into a tiger")
>>> with open("tiger.mp4", "wb") as f:
...     f.write(video)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>object_detection</name><anchor>huggingface_hub.InferenceClient.object_detection</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L1471</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "threshold", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO, PIL.Image.Image]`) --
  The image to detect objects on. It can be raw bytes, an image file, a URL to an online image, or a PIL Image.
- **model** (`str`, *optional*) --
  The model to use for object detection. Can be a model ID hosted on the Hugging Face Hub or a URL to a
  deployed Inference Endpoint. If not provided, the default recommended model for object detection (DETR) will be used.
- **threshold** (`float`, *optional*) --
  The minimum probability required for a prediction to be returned.
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.
- ``ValueError`` -- 
  If the request output is not a List.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError` or ``ValueError``</raisederrors></docstring>

Perform object detection on the given image using the specified model.

> [!WARNING]
> You must have `PIL` installed if you want to work with images (`pip install Pillow`).











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.object_detection.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.object_detection("people.jpg")
[ObjectDetectionOutputElement(score=0.9486683011054993, label='person', box=ObjectDetectionBoundingBox(xmin=59, ymin=39, xmax=420, ymax=510)), ...]
```

</ExampleCodeBlock>
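The `threshold` parameter filters predictions server-side, but the returned elements can also be filtered client-side by score. A minimal sketch under the assumption that each element exposes a `score` attribute, as `ObjectDetectionOutputElement` does (the `Detection` stand-in and the data below are made up for illustration):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    # Illustrative stand-in for ObjectDetectionOutputElement
    score: float
    label: str

def filter_detections(detections, threshold=0.9):
    """Keep only detections whose confidence meets the threshold."""
    return [d for d in detections if d.score >= threshold]

detections = [Detection(0.95, "person"), Detection(0.42, "dog")]
print([d.label for d in filter_detections(detections)])  # ['person']
```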


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>question_answering</name><anchor>huggingface_hub.InferenceClient.question_answering</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L1519</source><parameters>[{"name": "question", "val": ": str"}, {"name": "context", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "align_to_words", "val": ": typing.Optional[bool] = None"}, {"name": "doc_stride", "val": ": typing.Optional[int] = None"}, {"name": "handle_impossible_answer", "val": ": typing.Optional[bool] = None"}, {"name": "max_answer_len", "val": ": typing.Optional[int] = None"}, {"name": "max_question_len", "val": ": typing.Optional[int] = None"}, {"name": "max_seq_len", "val": ": typing.Optional[int] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **question** (`str`) --
  Question to be answered.
- **context** (`str`) --
  The context of the question.
- **model** (`str`, *optional*) --
  The model to use for the question answering task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint.
- **align_to_words** (`bool`, *optional*) --
  Attempts to align the answer to real words. Improves quality on space-separated languages. Might hurt
  on non-space-separated languages (like Japanese or Chinese).
- **doc_stride** (`int`, *optional*) --
  If the context is too long to fit with the question for the model, it will be split in several chunks
  with some overlap. This argument controls the size of that overlap.
- **handle_impossible_answer** (`bool`, *optional*) --
  Whether to accept an impossible (empty) answer.
- **max_answer_len** (`int`, *optional*) --
  The maximum length of predicted answers (e.g., only answers with a shorter length are considered).
- **max_question_len** (`int`, *optional*) --
  The maximum length of the question after tokenization. It will be truncated if needed.
- **max_seq_len** (`int`, *optional*) --
  The maximum length of the total sentence (context + question) in tokens of each chunk passed to the
  model. The context will be split in several chunks (using `doc_stride` as overlap) if needed.
- **top_k** (`int`, *optional*) --
  The number of answers to return (chosen by order of likelihood). Note that fewer than
  `top_k` answers will be returned if there are not enough options available within the context.</paramsdesc><paramgroups>0</paramgroups><rettype>Union[`QuestionAnsweringOutputElement`, list[QuestionAnsweringOutputElement](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.QuestionAnsweringOutputElement)]</rettype><retdesc>When top_k is 1 or not provided, it returns a single `QuestionAnsweringOutputElement`.
When top_k is greater than 1, it returns a list of `QuestionAnsweringOutputElement`.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Retrieve the answer to a question from a given text.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.question_answering.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.question_answering(question="What's my name?", context="My name is Clara and I live in Berkeley.")
QuestionAnsweringOutputElement(answer='Clara', end=16, score=0.9326565265655518, start=11)
```

</ExampleCodeBlock>
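Because the return type depends on `top_k` (a single element when `top_k` is 1 or unset, a list otherwise), code that varies `top_k` may want to normalize the result to a list. A minimal sketch; the helper name is ours, not part of the library:

```python
def as_answer_list(result):
    """Wrap a single QuestionAnsweringOutputElement in a list so callers
    can always iterate, regardless of the top_k value that was used."""
    return result if isinstance(result, list) else [result]

# Works the same whether the call returned one element or several
print(as_answer_list("single"))        # ['single']
print(as_answer_list(["a", "b"]))      # ['a', 'b']
```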


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>sentence_similarity</name><anchor>huggingface_hub.InferenceClient.sentence_similarity</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L1603</source><parameters>[{"name": "sentence", "val": ": str"}, {"name": "other_sentences", "val": ": list"}, {"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **sentence** (`str`) --
  The main sentence to compare to others.
- **other_sentences** (`list[str]`) --
  The list of sentences to compare to.
- **model** (`str`, *optional*) --
  The model to use for the sentence similarity task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended sentence similarity model will be used.
  Defaults to None.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[float]`</rettype><retdesc>The embedding representing the input text.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Compute the semantic similarity between a sentence and a list of other sentences by comparing their embeddings.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.sentence_similarity.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.sentence_similarity(
...     "Machine learning is so easy.",
...     other_sentences=[
...         "Deep learning is so straightforward.",
...         "This is so difficult, like rocket science.",
...         "I can't believe how much I struggled with this.",
...     ],
... )
[0.7785726189613342, 0.45876261591911316, 0.2906220555305481]
```

</ExampleCodeBlock>
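The returned scores are similarities between sentence embeddings. For intuition only, here is how a cosine similarity between two embedding vectors can be computed locally (toy vectors; the deployed model computes the embeddings and may use a different similarity function):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0 (identical direction)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0 (orthogonal)
```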


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>summarization</name><anchor>huggingface_hub.InferenceClient.summarization</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L1656</source><parameters>[{"name": "text", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "clean_up_tokenization_spaces", "val": ": typing.Optional[bool] = None"}, {"name": "generate_parameters", "val": ": typing.Optional[dict[str, typing.Any]] = None"}, {"name": "truncation", "val": ": typing.Optional[ForwardRef('SummarizationTruncationStrategy')] = None"}]</parameters><paramsdesc>- **text** (`str`) --
  The input text to summarize.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. If not provided, the default recommended model for summarization will be used.
- **clean_up_tokenization_spaces** (`bool`, *optional*) --
  Whether to clean up the potential extra spaces in the text output.
- **generate_parameters** (`dict[str, Any]`, *optional*) --
  Additional parametrization of the text generation algorithm.
- **truncation** (`"SummarizationTruncationStrategy"`, *optional*) --
  The truncation strategy to use.</paramsdesc><paramgroups>0</paramgroups><rettype>[SummarizationOutput](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.SummarizationOutput)</rettype><retdesc>The generated summary text.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Generate a summary of a given text using a specified model.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.summarization.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.summarization("The Eiffel tower...")
SummarizationOutput(generated_text="The Eiffel tower is one of the most famous landmarks in the world....")
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>table_question_answering</name><anchor>huggingface_hub.InferenceClient.table_question_answering</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L1714</source><parameters>[{"name": "table", "val": ": dict"}, {"name": "query", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "padding", "val": ": typing.Optional[ForwardRef('Padding')] = None"}, {"name": "sequential", "val": ": typing.Optional[bool] = None"}, {"name": "truncation", "val": ": typing.Optional[bool] = None"}]</parameters><paramsdesc>- **table** (`dict[str, Any]`) --
  A table of data represented as a dict of lists, where keys are column headers and values are the
  column values; all lists must have the same length.
- **query** (`str`) --
  The query in plain text that you want to ask the table.
- **model** (`str`, *optional*) --
  The model to use for the table-question-answering task. Can be a model ID hosted on the Hugging Face
  Hub or a URL to a deployed Inference Endpoint.
- **padding** (`"Padding"`, *optional*) --
  Activates and controls padding.
- **sequential** (`bool`, *optional*) --
  Whether to do inference sequentially or as a batch. Batching is faster, but models like SQA require the
  inference to be done sequentially to extract relations within sequences, given their conversational
  nature.
- **truncation** (`bool`, *optional*) --
  Activates and controls truncation.</paramsdesc><paramgroups>0</paramgroups><rettype>[TableQuestionAnsweringOutputElement](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.TableQuestionAnsweringOutputElement)</rettype><retdesc>a table question answering output containing the answer, coordinates, cells and the aggregator used.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Retrieve the answer to a question from information given in a table.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.table_question_answering.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> query = "How many stars does the transformers repository have?"
>>> table = {"Repository": ["Transformers", "Datasets", "Tokenizers"], "Stars": ["36542", "4512", "3934"]}
>>> client.table_question_answering(table, query, model="google/tapas-base-finetuned-wtq")
TableQuestionAnsweringOutputElement(answer='36542', coordinates=[[0, 1]], cells=['36542'], aggregator='AVERAGE')
```

</ExampleCodeBlock>
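Since all column lists in `table` must have the same length, it can help to validate the table before sending a request. A minimal sketch (the helper name is illustrative, not part of the library):

```python
def validate_table(table):
    """Raise ValueError if the column lists differ in length."""
    lengths = {name: len(values) for name, values in table.items()}
    if len(set(lengths.values())) > 1:
        raise ValueError(f"Columns have unequal lengths: {lengths}")
    return table

table = {"Repository": ["Transformers", "Datasets"], "Stars": ["36542", "4512"]}
validate_table(table)  # OK: both columns have 2 rows
```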


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>tabular_classification</name><anchor>huggingface_hub.InferenceClient.tabular_classification</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L1776</source><parameters>[{"name": "table", "val": ": dict"}, {"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **table** (`dict[str, Any]`) --
  Set of attributes to classify.
- **model** (`str`, *optional*) --
  The model to use for the tabular classification task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended tabular classification model will be used.
  Defaults to None.</paramsdesc><paramgroups>0</paramgroups><rettype>`List`</rettype><retdesc>a list of labels, one per row in the initial table.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Classify a target category (a group) based on a set of attributes.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.tabular_classification.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> table = {
...     "fixed_acidity": ["7.4", "7.8", "10.3"],
...     "volatile_acidity": ["0.7", "0.88", "0.32"],
...     "citric_acid": ["0", "0", "0.45"],
...     "residual_sugar": ["1.9", "2.6", "6.4"],
...     "chlorides": ["0.076", "0.098", "0.073"],
...     "free_sulfur_dioxide": ["11", "25", "5"],
...     "total_sulfur_dioxide": ["34", "67", "13"],
...     "density": ["0.9978", "0.9968", "0.9976"],
...     "pH": ["3.51", "3.2", "3.23"],
...     "sulphates": ["0.56", "0.68", "0.82"],
...     "alcohol": ["9.4", "9.8", "12.6"],
... }
>>> client.tabular_classification(table=table, model="julien-c/wine-quality")
["5", "5", "5"]
```

</ExampleCodeBlock>
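Data often arrives row-oriented, while these tabular tasks expect a column-oriented dict of lists of strings, as in the example above. A minimal conversion sketch (the helper name is ours):

```python
def rows_to_table(rows):
    """Convert a list of row dicts into the column-oriented dict of
    string lists expected by the tabular tasks."""
    if not rows:
        return {}
    columns = rows[0].keys()
    return {col: [str(row[col]) for row in rows] for col in columns}

rows = [{"pH": 3.51, "alcohol": 9.4}, {"pH": 3.2, "alcohol": 9.8}]
print(rows_to_table(rows))  # {'pH': ['3.51', '3.2'], 'alcohol': ['9.4', '9.8']}
```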


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>tabular_regression</name><anchor>huggingface_hub.InferenceClient.tabular_regression</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L1831</source><parameters>[{"name": "table", "val": ": dict"}, {"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **table** (`dict[str, Any]`) --
  Set of attributes stored in a table. The attributes used to predict the target can be both numerical and categorical.
- **model** (`str`, *optional*) --
  The model to use for the tabular regression task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended tabular regression model will be used.
  Defaults to None.</paramsdesc><paramgroups>0</paramgroups><rettype>`List`</rettype><retdesc>a list of predicted numerical target values.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Predict a numerical target value given a set of attributes/features in a table.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.tabular_regression.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> table = {
...     "Height": ["11.52", "12.48", "12.3778"],
...     "Length1": ["23.2", "24", "23.9"],
...     "Length2": ["25.4", "26.3", "26.5"],
...     "Length3": ["30", "31.2", "31.1"],
...     "Species": ["Bream", "Bream", "Bream"],
...     "Width": ["4.02", "4.3056", "4.6961"],
... }
>>> client.tabular_regression(table, model="scikit-learn/Fish-Weight")
[110, 120, 130]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>text_classification</name><anchor>huggingface_hub.InferenceClient.text_classification</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L1881</source><parameters>[{"name": "text", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}, {"name": "function_to_apply", "val": ": typing.Optional[ForwardRef('TextClassificationOutputTransform')] = None"}]</parameters><paramsdesc>- **text** (`str`) --
  A string to be classified.
- **model** (`str`, *optional*) --
  The model to use for the text classification task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended text classification model will be used.
  Defaults to None.
- **top_k** (`int`, *optional*) --
  When specified, limits the output to the top K most probable classes.
- **function_to_apply** (`"TextClassificationOutputTransform"`, *optional*) --
  The function to apply to the model outputs in order to retrieve the scores.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[TextClassificationOutputElement]`</rettype><retdesc>a list of [TextClassificationOutputElement](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.TextClassificationOutputElement) items containing the predicted label and associated probability.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Perform text classification (e.g. sentiment-analysis) on the given text.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.text_classification.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.text_classification("I like you")
[
    TextClassificationOutputElement(label='POSITIVE', score=0.9998695850372314),
    TextClassificationOutputElement(label='NEGATIVE', score=0.0001304351753788069),
]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>text_generation</name><anchor>huggingface_hub.InferenceClient.text_generation</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L2089</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "details", "val": ": typing.Optional[bool] = None"}, {"name": "stream", "val": ": typing.Optional[bool] = None"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "adapter_id", "val": ": typing.Optional[str] = None"}, {"name": "best_of", "val": ": typing.Optional[int] = None"}, {"name": "decoder_input_details", "val": ": typing.Optional[bool] = None"}, {"name": "do_sample", "val": ": typing.Optional[bool] = None"}, {"name": "frequency_penalty", "val": ": typing.Optional[float] = None"}, {"name": "grammar", "val": ": typing.Optional[huggingface_hub.inference._generated.types.text_generation.TextGenerationInputGrammarType] = None"}, {"name": "max_new_tokens", "val": ": typing.Optional[int] = None"}, {"name": "repetition_penalty", "val": ": typing.Optional[float] = None"}, {"name": "return_full_text", "val": ": typing.Optional[bool] = None"}, {"name": "seed", "val": ": typing.Optional[int] = None"}, {"name": "stop", "val": ": typing.Optional[list[str]] = None"}, {"name": "stop_sequences", "val": ": typing.Optional[list[str]] = None"}, {"name": "temperature", "val": ": typing.Optional[float] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}, {"name": "top_n_tokens", "val": ": typing.Optional[int] = None"}, {"name": "top_p", "val": ": typing.Optional[float] = None"}, {"name": "truncate", "val": ": typing.Optional[int] = None"}, {"name": "typical_p", "val": ": typing.Optional[float] = None"}, {"name": "watermark", "val": ": typing.Optional[bool] = None"}]</parameters><paramsdesc>- **prompt** (`str`) --
  Input text.
- **details** (`bool`, *optional*) --
  By default, text_generation returns a string. Pass `details=True` if you want a detailed output (tokens,
  probabilities, seed, finish reason, etc.). Only available for models running with the
  `text-generation-inference` backend.
- **stream** (`bool`, *optional*) --
  By default, text_generation returns the full generated text. Pass `stream=True` if you want a stream of
  tokens to be returned. Only available for models running with the `text-generation-inference`
  backend.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None.
- **adapter_id** (`str`, *optional*) --
  LoRA adapter ID.
- **best_of** (`int`, *optional*) --
  Generate best_of sequences and return the one with the highest token logprobs.
- **decoder_input_details** (`bool`, *optional*) --
  Return the decoder input token logprobs and ids. You must set `details=True` as well for it to be taken
  into account. Defaults to `False`.
- **do_sample** (`bool`, *optional*) --
  Activate logits sampling.
- **frequency_penalty** (`float`, *optional*) --
  Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in
  the text so far, decreasing the model's likelihood to repeat the same line verbatim.
- **grammar** ([TextGenerationInputGrammarType](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.TextGenerationInputGrammarType), *optional*) --
  Grammar constraints. Can be either a JSONSchema or a regex.
- **max_new_tokens** (`int`, *optional*) --
  Maximum number of generated tokens. Defaults to 100.
- **repetition_penalty** (`float`, *optional*) --
  The parameter for repetition penalty. 1.0 means no penalty. See [this
  paper](https://arxiv.org/pdf/1909.05858.pdf) for more details.
- **return_full_text** (`bool`, *optional*) --
  Whether to prepend the prompt to the generated text.
- **seed** (`int`, *optional*) --
  Random sampling seed.
- **stop** (`list[str]`, *optional*) --
  Stop generating tokens if a member of `stop` is generated.
- **stop_sequences** (`list[str]`, *optional*) --
  Deprecated argument. Use `stop` instead.
- **temperature** (`float`, *optional*) --
  The value used to modulate the logits distribution.
- **top_n_tokens** (`int`, *optional*) --
  Return information about the `top_n_tokens` most likely tokens at each generation step, instead of
  just the sampled token.
- **top_k** (`int`, *optional*) --
  The number of highest probability vocabulary tokens to keep for top-k-filtering.
- **top_p** (`float`, *optional*) --
  If set to < 1, only the smallest set of most probable tokens with probabilities that add up to `top_p` or
  higher are kept for generation.
- **truncate** (`int`, *optional*) --
  Truncate input tokens to the given size.
- **typical_p** (`float`, *optional*) --
  Typical decoding mass. See [Typical Decoding for Natural Language Generation](https://arxiv.org/abs/2202.00666) for more information.
- **watermark** (`bool`, *optional*) --
  Watermarking with [A Watermark for Large Language Models](https://arxiv.org/abs/2301.10226)</paramsdesc><paramgroups>0</paramgroups><rettype>`Union[str, TextGenerationOutput, Iterable[str], Iterable[TextGenerationStreamOutput]]`</rettype><retdesc>Generated text returned from the server:
- if `stream=False` and `details=False`, the generated text is returned as a `str` (default)
- if `stream=True` and `details=False`, the generated text is returned token by token as an `Iterable[str]`
- if `stream=False` and `details=True`, the generated text is returned with more details as a [TextGenerationOutput](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.TextGenerationOutput)
- if `details=True` and `stream=True`, the generated text is returned token by token as an iterable of [TextGenerationStreamOutput](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.TextGenerationStreamOutput)</retdesc><raises>- ``ValidationError`` -- 
  If input values are not valid. No HTTP call is made to the server.
- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>``ValidationError`` or [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Given a prompt, generate the text that follows it.

> [!TIP]
> If you want to generate a response from chat messages, you should use the [InferenceClient.chat_completion()](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient.chat_completion) method.
> It accepts a list of messages instead of a single text prompt and handles the chat templating for you.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.text_generation.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()

# Case 1: generate text
>>> client.text_generation("The huggingface_hub library is ", max_new_tokens=12)
'100% open source and built to be easy to use.'

# Case 2: iterate over the generated tokens. Useful for large generation.
>>> for token in client.text_generation("The huggingface_hub library is ", max_new_tokens=12, stream=True):
...     print(token)
100
%
open
source
and
built
to
be
easy
to
use
.

# Case 3: get more details about the generation process.
>>> client.text_generation("The huggingface_hub library is ", max_new_tokens=12, details=True)
TextGenerationOutput(
    generated_text='100% open source and built to be easy to use.',
    details=TextGenerationDetails(
        finish_reason='length',
        generated_tokens=12,
        seed=None,
        prefill=[
            TextGenerationPrefillOutputToken(id=487, text='The', logprob=None),
            TextGenerationPrefillOutputToken(id=53789, text=' hugging', logprob=-13.171875),
            (...)
            TextGenerationPrefillOutputToken(id=204, text=' ', logprob=-7.0390625)
        ],
        tokens=[
            TokenElement(id=1425, text='100', logprob=-1.0175781, special=False),
            TokenElement(id=16, text='%', logprob=-0.0463562, special=False),
            (...)
            TokenElement(id=25, text='.', logprob=-0.5703125, special=False)
        ],
        best_of_sequences=None
    )
)

# Case 4: iterate over the generated tokens with more details.
# Last object is more complete, containing the full generated text and the finish reason.
>>> for details in client.text_generation("The huggingface_hub library is ", max_new_tokens=12, details=True, stream=True):
...     print(details)
...
TextGenerationStreamOutput(token=TokenElement(id=1425, text='100', logprob=-1.0175781, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=16, text='%', logprob=-0.0463562, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=1314, text=' open', logprob=-1.3359375, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=3178, text=' source', logprob=-0.28100586, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=273, text=' and', logprob=-0.5961914, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=3426, text=' built', logprob=-1.9423828, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=271, text=' to', logprob=-1.4121094, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=314, text=' be', logprob=-1.5224609, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=1833, text=' easy', logprob=-2.1132812, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=271, text=' to', logprob=-0.08520508, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=745, text=' use', logprob=-0.39453125, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(
    id=25,
    text='.',
    logprob=-0.5703125,
    special=False),
    generated_text='100% open source and built to be easy to use.',
    details=TextGenerationStreamOutputStreamDetails(finish_reason='length', generated_tokens=12, seed=None)
)

# Case 5: generate constrained output using grammar
>>> response = client.text_generation(
...     prompt="I saw a puppy a cat and a raccoon during my bike ride in the park",
...     model="HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1",
...     max_new_tokens=100,
...     repetition_penalty=1.3,
...     grammar={
...         "type": "json",
...         "value": {
...             "properties": {
...                 "location": {"type": "string"},
...                 "activity": {"type": "string"},
...                 "animals_seen": {"type": "integer", "minimum": 1, "maximum": 5},
...                 "animals": {"type": "array", "items": {"type": "string"}},
...             },
...             "required": ["location", "activity", "animals_seen", "animals"],
...         },
...     },
... )
>>> import json
>>> json.loads(response)
{
    "activity": "bike riding",
    "animals": ["puppy", "cat", "raccoon"],
    "animals_seen": 3,
    "location": "park"
}
```

</ExampleCodeBlock>
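The grammar-constrained response in Case 5 is plain JSON text, so it can be sanity-checked locally before use. The helper below is an illustrative sketch, not part of huggingface_hub; `check_required_keys` is a hypothetical name:

```py
import json

def check_required_keys(response: str, schema: dict) -> dict:
    """Parse a JSON response and verify that the schema's required keys are present."""
    data = json.loads(response)
    missing = [key for key in schema.get("required", []) if key not in data]
    if missing:
        raise ValueError(f"Response is missing required keys: {missing}")
    return data

# Schema and response mirror the Case 5 example above.
schema = {
    "properties": {
        "location": {"type": "string"},
        "activity": {"type": "string"},
        "animals_seen": {"type": "integer", "minimum": 1, "maximum": 5},
        "animals": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["location", "activity", "animals_seen", "animals"],
}
response = '{"activity": "bike riding", "animals": ["puppy", "cat", "raccoon"], "animals_seen": 3, "location": "park"}'
data = check_required_keys(response, schema)
```

For full JSON Schema validation (types, `minimum`/`maximum`, array items), a dedicated validator library would be more appropriate; this sketch only guards against missing keys.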


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>text_to_image</name><anchor>huggingface_hub.InferenceClient.text_to_image</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L2428</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "guidance_scale", "val": ": typing.Optional[float] = None"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "scheduler", "val": ": typing.Optional[str] = None"}, {"name": "seed", "val": ": typing.Optional[int] = None"}, {"name": "extra_body", "val": ": typing.Optional[dict[str, typing.Any]] = None"}]</parameters><paramsdesc>- **prompt** (`str`) --
  The prompt to generate an image from.
- **negative_prompt** (`str`, *optional*) --
  One prompt to guide what NOT to include in image generation.
- **height** (`int`, *optional*) --
  The height in pixels of the output image.
- **width** (`int`, *optional*) --
  The width in pixels of the output image.
- **num_inference_steps** (`int`, *optional*) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  prompt, but values too high may cause saturation and other artifacts.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. If not provided, the default recommended text-to-image model will be used.
  Defaults to None.
- **scheduler** (`str`, *optional*) --
  Override the scheduler with a compatible one.
- **seed** (`int`, *optional*) --
  Seed for the random number generator.
- **extra_body** (`dict[str, Any]`, *optional*) --
  Additional provider-specific parameters to pass to the model. Refer to the provider's documentation
  for supported parameters.</paramsdesc><paramgroups>0</paramgroups><rettype>`Image`</rettype><retdesc>The generated image.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Generate an image based on a given text using a specified model.

> [!WARNING]
> You must have `PIL` installed if you want to work with images (`pip install Pillow`).

> [!TIP]
> You can pass provider-specific parameters to the model by using the `extra_body` argument.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.text_to_image.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()

>>> image = client.text_to_image("An astronaut riding a horse on the moon.")
>>> image.save("astronaut.png")

>>> image = client.text_to_image(
...     "An astronaut riding a horse on the moon.",
...     negative_prompt="low resolution, blurry",
...     model="stabilityai/stable-diffusion-2-1",
... )
>>> image.save("better_astronaut.png")
```

</ExampleCodeBlock>
Example using a third-party provider directly. Usage will be billed on your fal.ai account.
<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.text_to_image.example-2">

```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="fal-ai",  # Use fal.ai provider
...     api_key="fal-ai-api-key",  # Pass your fal.ai API key
... )
>>> image = client.text_to_image(
...     "A majestic lion in a fantasy forest",
...     model="black-forest-labs/FLUX.1-schnell",
... )
>>> image.save("lion.png")
```

</ExampleCodeBlock>

Example using a third-party provider through Hugging Face Routing. Usage will be billed on your Hugging Face account.
<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.text_to_image.example-3">

```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="replicate",  # Use replicate provider
...     api_key="hf_...",  # Pass your HF token
... )
>>> image = client.text_to_image(
...     "An astronaut riding a horse on the moon.",
...     model="black-forest-labs/FLUX.1-dev",
... )
>>> image.save("astronaut.png")
```

</ExampleCodeBlock>

Example using Replicate provider with extra parameters.
<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.text_to_image.example-4">

```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="replicate",  # Use replicate provider
...     api_key="hf_...",  # Pass your HF token
... )
>>> image = client.text_to_image(
...     "An astronaut riding a horse on the moon.",
...     model="black-forest-labs/FLUX.1-schnell",
...     extra_body={"output_quality": 100},
... )
>>> image.save("astronaut.png")
```

</ExampleCodeBlock>



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>text_to_speech</name><anchor>huggingface_hub.InferenceClient.text_to_speech</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L2665</source><parameters>[{"name": "text", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "do_sample", "val": ": typing.Optional[bool] = None"}, {"name": "early_stopping", "val": ": typing.Union[bool, ForwardRef('TextToSpeechEarlyStoppingEnum'), NoneType] = None"}, {"name": "epsilon_cutoff", "val": ": typing.Optional[float] = None"}, {"name": "eta_cutoff", "val": ": typing.Optional[float] = None"}, {"name": "max_length", "val": ": typing.Optional[int] = None"}, {"name": "max_new_tokens", "val": ": typing.Optional[int] = None"}, {"name": "min_length", "val": ": typing.Optional[int] = None"}, {"name": "min_new_tokens", "val": ": typing.Optional[int] = None"}, {"name": "num_beam_groups", "val": ": typing.Optional[int] = None"}, {"name": "num_beams", "val": ": typing.Optional[int] = None"}, {"name": "penalty_alpha", "val": ": typing.Optional[float] = None"}, {"name": "temperature", "val": ": typing.Optional[float] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}, {"name": "top_p", "val": ": typing.Optional[float] = None"}, {"name": "typical_p", "val": ": typing.Optional[float] = None"}, {"name": "use_cache", "val": ": typing.Optional[bool] = None"}, {"name": "extra_body", "val": ": typing.Optional[dict[str, typing.Any]] = None"}]</parameters><paramsdesc>- **text** (`str`) --
  The text to synthesize.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. If not provided, the default recommended text-to-speech model will be used.
  Defaults to None.
- **do_sample** (`bool`, *optional*) --
  Whether to use sampling instead of greedy decoding when generating new tokens.
- **early_stopping** (`Union[bool, "TextToSpeechEarlyStoppingEnum"]`, *optional*) --
  Controls the stopping condition for beam-based methods.
- **epsilon_cutoff** (`float`, *optional*) --
  If set to float strictly between 0 and 1, only tokens with a conditional probability greater than
  epsilon_cutoff will be sampled. In the paper, suggested values range from 3e-4 to 9e-4, depending on
  the size of the model. See [Truncation Sampling as Language Model
  Desmoothing](https://hf.co/papers/2210.15191) for more details.
- **eta_cutoff** (`float`, *optional*) --
  Eta sampling is a hybrid of locally typical sampling and epsilon sampling. If set to float strictly
  between 0 and 1, a token is only considered if it is greater than either eta_cutoff or sqrt(eta_cutoff)
  * exp(-entropy(softmax(next_token_logits))). The latter term is intuitively the expected next token
  probability, scaled by sqrt(eta_cutoff). In the paper, suggested values range from 3e-4 to 2e-3,
  depending on the size of the model. See [Truncation Sampling as Language Model
  Desmoothing](https://hf.co/papers/2210.15191) for more details.
- **max_length** (`int`, *optional*) --
  The maximum length (in tokens) of the generated text, including the input.
- **max_new_tokens** (`int`, *optional*) --
  The maximum number of tokens to generate. Takes precedence over max_length.
- **min_length** (`int`, *optional*) --
  The minimum length (in tokens) of the generated text, including the input.
- **min_new_tokens** (`int`, *optional*) --
  The minimum number of tokens to generate. Takes precedence over min_length.
- **num_beam_groups** (`int`, *optional*) --
  Number of groups to divide num_beams into in order to ensure diversity among different groups of beams.
  See [this paper](https://hf.co/papers/1610.02424) for more details.
- **num_beams** (`int`, *optional*) --
  Number of beams to use for beam search.
- **penalty_alpha** (`float`, *optional*) --
  The value balances the model confidence and the degeneration penalty in contrastive search decoding.
- **temperature** (`float`, *optional*) --
  The value used to modulate the next token probabilities.
- **top_k** (`int`, *optional*) --
  The number of highest probability vocabulary tokens to keep for top-k-filtering.
- **top_p** (`float`, *optional*) --
  If set to float < 1, only the smallest set of most probable tokens with probabilities that add up to
  top_p or higher are kept for generation.
- **typical_p** (`float`, *optional*) --
  Local typicality measures how similar the conditional probability of predicting a target token next is
  to the expected conditional probability of predicting a random token next, given the partial text
  already generated. If set to float < 1, the smallest set of the most locally typical tokens with
  probabilities that add up to typical_p or higher are kept for generation. See [this
  paper](https://hf.co/papers/2202.00666) for more details.
- **use_cache** (`bool`, *optional*) --
  Whether the model should use the past last key/values attentions to speed up decoding.
- **extra_body** (`dict[str, Any]`, *optional*) --
  Additional provider-specific parameters to pass to the model. Refer to the provider's documentation
  for supported parameters.</paramsdesc><paramgroups>0</paramgroups><rettype>`bytes`</rettype><retdesc>The generated audio.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Synthesize audio of a voice pronouncing a given text.

> [!TIP]
> You can pass provider-specific parameters to the model by using the `extra_body` argument.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.text_to_speech.example">

Example:
```py
>>> from pathlib import Path
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()

>>> audio = client.text_to_speech("Hello world")
>>> Path("hello_world.flac").write_bytes(audio)
```

</ExampleCodeBlock>

Example using a third-party provider directly. Usage will be billed on your Replicate account.
<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.text_to_speech.example-2">

```py
>>> from pathlib import Path
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="replicate",
...     api_key="your-replicate-api-key",  # Pass your Replicate API key directly
... )
>>> audio = client.text_to_speech(
...     text="Hello world",
...     model="OuteAI/OuteTTS-0.3-500M",
... )
>>> Path("hello_world.flac").write_bytes(audio)
```

</ExampleCodeBlock>

Example using a third-party provider through Hugging Face Routing. Usage will be billed on your Hugging Face account.
<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.text_to_speech.example-3">

```py
>>> from pathlib import Path
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="replicate",
...     api_key="hf_...",  # Pass your HF token
... )
>>> audio = client.text_to_speech(
...     text="Hello world",
...     model="OuteAI/OuteTTS-0.3-500M",
... )
>>> Path("hello_world.flac").write_bytes(audio)
```

</ExampleCodeBlock>
Example using Replicate provider with extra parameters.
<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.text_to_speech.example-4">

```py
>>> from pathlib import Path
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="replicate",  # Use replicate provider
...     api_key="hf_...",  # Pass your HF token
... )
>>> audio = client.text_to_speech(
...     "Hello, my name is Kororo, an awesome text-to-speech model.",
...     model="hexgrad/Kokoro-82M",
...     extra_body={"voice": "af_nicole"},
... )
>>> Path("hello.flac").write_bytes(audio)
```

</ExampleCodeBlock>

Example of music generation using "YuE-s1-7B-anneal-en-cot" on fal.ai.
<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.text_to_speech.example-5">

```py
>>> from huggingface_hub import InferenceClient
>>> lyrics = '''
... [verse]
... In the town where I was born
... Lived a man who sailed to sea
... And he told us of his life
... In the land of submarines
... So we sailed on to the sun
... 'Til we found a sea of green
... And we lived beneath the waves
... In our yellow submarine
...
... [chorus]
... We all live in a yellow submarine
... Yellow submarine, yellow submarine
... We all live in a yellow submarine
... Yellow submarine, yellow submarine
... '''
>>> genres = "pavarotti-style tenor voice"
>>> client = InferenceClient(
...     provider="fal-ai",
...     model="m-a-p/YuE-s1-7B-anneal-en-cot",
...     api_key=...,
... )
>>> audio = client.text_to_speech(lyrics, extra_body={"genres": genres})
>>> with open("output.mp3", "wb") as f:
...     f.write(audio)
```

</ExampleCodeBlock>
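`text_to_speech()` returns raw `bytes`, and the container format depends on the model and provider (the examples above write `.flac` or `.mp3`). The following is a minimal sketch, not part of huggingface_hub, that guesses a file extension from well-known magic bytes instead of hard-coding one; `guess_audio_extension` is a hypothetical helper:

```py
def guess_audio_extension(audio: bytes) -> str:
    """Guess a file extension for raw audio bytes from common container signatures."""
    if audio.startswith(b"fLaC"):
        return ".flac"
    if audio.startswith(b"RIFF") and audio[8:12] == b"WAVE":
        return ".wav"
    # MP3: either an ID3 tag or an MPEG frame-sync (11 set bits) at the start
    if audio.startswith(b"ID3") or (len(audio) > 1 and audio[0] == 0xFF and (audio[1] & 0xE0) == 0xE0):
        return ".mp3"
    if audio.startswith(b"OggS"):
        return ".ogg"
    return ".bin"  # unknown container; keep the raw bytes as-is
```

Usage would look like `Path("speech" + guess_audio_extension(audio)).write_bytes(audio)`.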


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>text_to_video</name><anchor>huggingface_hub.InferenceClient.text_to_video</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L2568</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "guidance_scale", "val": ": typing.Optional[float] = None"}, {"name": "negative_prompt", "val": ": typing.Optional[list[str]] = None"}, {"name": "num_frames", "val": ": typing.Optional[float] = None"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "seed", "val": ": typing.Optional[int] = None"}, {"name": "extra_body", "val": ": typing.Optional[dict[str, typing.Any]] = None"}]</parameters><paramsdesc>- **prompt** (`str`) --
  The prompt to generate a video from.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. If not provided, the default recommended text-to-video model will be used.
  Defaults to None.
- **guidance_scale** (`float`, *optional*) --
  A higher guidance scale value encourages the model to generate videos closely linked to the text
  prompt, but values too high may cause saturation and other artifacts.
- **negative_prompt** (`list[str]`, *optional*) --
  One or several prompts to guide what NOT to include in video generation.
- **num_frames** (`float`, *optional*) --
  The number of video frames to generate.
- **num_inference_steps** (`int`, *optional*) --
  The number of denoising steps. More denoising steps usually lead to a higher quality video at the
  expense of slower inference.
- **seed** (`int`, *optional*) --
  Seed for the random number generator.
- **extra_body** (`dict[str, Any]`, *optional*) --
  Additional provider-specific parameters to pass to the model. Refer to the provider's documentation
  for supported parameters.</paramsdesc><paramgroups>0</paramgroups><rettype>`bytes`</rettype><retdesc>The generated video.</retdesc></docstring>

Generate a video based on a given text.

> [!TIP]
> You can pass provider-specific parameters to the model by using the `extra_body` argument.







Example:

Example using a third-party provider directly. Usage will be billed on your fal.ai account.
<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.text_to_video.example">

```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="fal-ai",  # Using fal.ai provider
...     api_key="fal-ai-api-key",  # Pass your fal.ai API key
... )
>>> video = client.text_to_video(
...     "A majestic lion running in a fantasy forest",
...     model="tencent/HunyuanVideo",
... )
>>> with open("lion.mp4", "wb") as file:
...     file.write(video)
```

</ExampleCodeBlock>

Example using a third-party provider through Hugging Face Routing. Usage will be billed on your Hugging Face account.
<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.text_to_video.example-2">

```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="replicate",  # Using replicate provider
...     api_key="hf_...",  # Pass your HF token
... )
>>> video = client.text_to_video(
...     "A cat running in a park",
...     model="genmo/mochi-1-preview",
... )
>>> with open("cat.mp4", "wb") as file:
...     file.write(video)
```

</ExampleCodeBlock>



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>token_classification</name><anchor>huggingface_hub.InferenceClient.token_classification</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L2873</source><parameters>[{"name": "text", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "aggregation_strategy", "val": ": typing.Optional[ForwardRef('TokenClassificationAggregationStrategy')] = None"}, {"name": "ignore_labels", "val": ": typing.Optional[list[str]] = None"}, {"name": "stride", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **text** (`str`) --
  A string to be classified.
- **model** (`str`, *optional*) --
  The model to use for the token classification task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended token classification model will be used.
  Defaults to None.
- **aggregation_strategy** (`"TokenClassificationAggregationStrategy"`, *optional*) --
  The strategy used to fuse tokens based on model predictions.
- **ignore_labels** (`list[str]`, *optional*) --
  A list of labels to ignore.
- **stride** (`int`, *optional*) --
  The number of overlapping tokens between chunks when splitting the input text.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[TokenClassificationOutputElement]`</rettype><retdesc>List of [TokenClassificationOutputElement](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.TokenClassificationOutputElement) items containing the entity group, confidence score, word, start and end index.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Perform token classification on the given text.
Commonly used for sentence parsing, either grammatical parsing or Named Entity Recognition (NER), to understand keywords contained within the text.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.token_classification.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.token_classification("My name is Sarah Jessica Parker but you can call me Jessica")
[
    TokenClassificationOutputElement(
        entity_group='PER',
        score=0.9971321225166321,
        word='Sarah Jessica Parker',
        start=11,
        end=31,
    ),
    TokenClassificationOutputElement(
        entity_group='PER',
        score=0.9773476123809814,
        word='Jessica',
        start=52,
        end=59,
    )
]
```

</ExampleCodeBlock>
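The `start` and `end` fields of each returned [TokenClassificationOutputElement](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.TokenClassificationOutputElement) are character offsets into the input text, so each entity can be recovered by slicing. An illustrative sketch, not part of huggingface_hub, using plain dicts in place of the output elements:

```py
text = "My name is Sarah Jessica Parker but you can call me Jessica"

# Shape mirrors the example output above; dicts stand in for
# TokenClassificationOutputElement items.
entities = [
    {"entity_group": "PER", "word": "Sarah Jessica Parker", "start": 11, "end": 31},
    {"entity_group": "PER", "word": "Jessica", "start": 52, "end": 59},
]

# Slicing the input with the reported offsets recovers each entity's surface form.
spans = [text[e["start"]:e["end"]] for e in entities]
```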


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>translation</name><anchor>huggingface_hub.InferenceClient.translation</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L2948</source><parameters>[{"name": "text", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "src_lang", "val": ": typing.Optional[str] = None"}, {"name": "tgt_lang", "val": ": typing.Optional[str] = None"}, {"name": "clean_up_tokenization_spaces", "val": ": typing.Optional[bool] = None"}, {"name": "truncation", "val": ": typing.Optional[ForwardRef('TranslationTruncationStrategy')] = None"}, {"name": "generate_parameters", "val": ": typing.Optional[dict[str, typing.Any]] = None"}]</parameters><paramsdesc>- **text** (`str`) --
  A string to be translated.
- **model** (`str`, *optional*) --
  The model to use for the translation task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended translation model will be used.
  Defaults to None.
- **src_lang** (`str`, *optional*) --
  The source language of the text. Required for models that can translate from multiple languages.
- **tgt_lang** (`str`, *optional*) --
  Target language to translate to. Required for models that can translate to multiple languages.
- **clean_up_tokenization_spaces** (`bool`, *optional*) --
  Whether to clean up the potential extra spaces in the text output.
- **truncation** (`"TranslationTruncationStrategy"`, *optional*) --
  The truncation strategy to use.
- **generate_parameters** (`dict[str, Any]`, *optional*) --
  Additional parametrization of the text generation algorithm.</paramsdesc><paramgroups>0</paramgroups><rettype>[TranslationOutput](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.TranslationOutput)</rettype><retdesc>The generated translated text.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.
- ``ValueError`` -- 
  If only one of the `src_lang` and `tgt_lang` arguments are provided.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError` or ``ValueError``</raisederrors></docstring>

Convert text from one language to another.

Check out https://huggingface.co/tasks/translation for more information on how to choose the best model for
your specific use case. Source and target languages usually depend on the model.
However, certain models support multiple source or target languages; when working with one of these,
pass the relevant information via the `src_lang` and `tgt_lang` arguments.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.translation.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.translation("My name is Wolfgang and I live in Berlin")
'Mein Name ist Wolfgang und ich lebe in Berlin.'
>>> client.translation("My name is Wolfgang and I live in Berlin", model="Helsinki-NLP/opus-mt-en-fr")
TranslationOutput(translation_text="Je m'appelle Wolfgang et je vis à Berlin.")
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.translation.example-2">

Specifying languages:
```py
>>> client.translation("My name is Sarah Jessica Parker but you can call me Jessica", model="facebook/mbart-large-50-many-to-many-mmt", src_lang="en_XX", tgt_lang="fr_XX")
"Mon nom est Sarah Jessica Parker mais vous pouvez m'appeler Jessica"
```

</ExampleCodeBlock>
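As noted in the raised errors above, `translation()` raises a `ValueError` when only one of `src_lang` and `tgt_lang` is provided. The same both-or-neither check can be expressed locally; this is a sketch, not part of huggingface_hub, and `check_lang_pair` is a hypothetical name:

```py
def check_lang_pair(src_lang=None, tgt_lang=None):
    """Raise early if only one of src_lang/tgt_lang is given; both or neither is valid."""
    if (src_lang is None) != (tgt_lang is None):
        raise ValueError("`src_lang` and `tgt_lang` must be provided together, or both omitted.")
    return src_lang, tgt_lang

check_lang_pair("en_XX", "fr_XX")  # both given: OK
check_lang_pair()                  # both omitted: OK
```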


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>visual_question_answering</name><anchor>huggingface_hub.InferenceClient.visual_question_answering</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L3037</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "question", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO, PIL.Image.Image]`) --
  The input image for the context. It can be raw bytes, an image file, a URL to an online image, or a PIL Image.
- **question** (`str`) --
  Question to be answered.
- **model** (`str`, *optional*) --
  The model to use for the visual question answering task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended visual question answering model will be used.
  Defaults to None.
- **top_k** (`int`, *optional*) --
  The number of answers to return (will be chosen by order of likelihood). Note that fewer than
  `top_k` answers are returned if there are not enough options available within the context.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[VisualQuestionAnsweringOutputElement]`</rettype><retdesc>a list of [VisualQuestionAnsweringOutputElement](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.VisualQuestionAnsweringOutputElement) items containing the predicted label and associated probability.</retdesc><raises>- ``InferenceTimeoutError`` -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>``InferenceTimeoutError`` or `HfHubHTTPError`</raisederrors></docstring>

Answer open-ended questions based on an image.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.visual_question_answering.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.visual_question_answering(
...     image="https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg",
...     question="What is the animal doing?"
... )
[
    VisualQuestionAnsweringOutputElement(score=0.778609573841095, answer='laying down'),
    VisualQuestionAnsweringOutputElement(score=0.6957435607910156, answer='sitting'),
]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>zero_shot_classification</name><anchor>huggingface_hub.InferenceClient.zero_shot_classification</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L3096</source><parameters>[{"name": "text", "val": ": str"}, {"name": "candidate_labels", "val": ": list"}, {"name": "multi_label", "val": ": typing.Optional[bool] = False"}, {"name": "hypothesis_template", "val": ": typing.Optional[str] = None"}, {"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **text** (`str`) --
  The input text to classify.
- **candidate_labels** (`list[str]`) --
  The set of possible class labels to classify the text into.
- **labels** (`list[str]`, *optional*) --
  (deprecated) List of strings. Each string is the verbalization of a possible label for the input text.
- **multi_label** (`bool`, *optional*) --
  Whether multiple candidate labels can be true. If false, the scores are normalized such that the sum of
  the label likelihoods for each sequence is 1. If true, the labels are considered independent and
  probabilities are normalized for each candidate.
- **hypothesis_template** (`str`, *optional*) --
  The sentence used in conjunction with `candidate_labels` to attempt the text classification by
  replacing the placeholder with the candidate labels.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. This parameter overrides the model defined at the instance level. If not provided, the default recommended zero-shot classification model will be used.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[ZeroShotClassificationOutputElement]`</rettype><retdesc>List of [ZeroShotClassificationOutputElement](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.ZeroShotClassificationOutputElement) items containing the predicted labels and their confidence.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Classify the input text against a set of candidate labels.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.zero_shot_classification.example">

Example with `multi_label=False`:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> text = (
...     "A new model offers an explanation for how the Galilean satellites formed around the solar system's"
...     "largest world. Konstantin Batygin did not set out to solve one of the solar system's most puzzling"
...     " mysteries when he went for a run up a hill in Nice, France."
... )
>>> labels = ["space & cosmos", "scientific discovery", "microbiology", "robots", "archeology"]
>>> client.zero_shot_classification(text, labels)
[
    ZeroShotClassificationOutputElement(label='scientific discovery', score=0.7961668968200684),
    ZeroShotClassificationOutputElement(label='space & cosmos', score=0.18570658564567566),
    ZeroShotClassificationOutputElement(label='microbiology', score=0.00730885099619627),
    ZeroShotClassificationOutputElement(label='archeology', score=0.006258360575884581),
    ZeroShotClassificationOutputElement(label='robots', score=0.004559356719255447),
]
>>> client.zero_shot_classification(text, labels, multi_label=True)
[
    ZeroShotClassificationOutputElement(label='scientific discovery', score=0.9829297661781311),
    ZeroShotClassificationOutputElement(label='space & cosmos', score=0.755190908908844),
    ZeroShotClassificationOutputElement(label='microbiology', score=0.0005462635890580714),
    ZeroShotClassificationOutputElement(label='archeology', score=0.00047131875180639327),
    ZeroShotClassificationOutputElement(label='robots', score=0.00030448526376858354),
]
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.zero_shot_classification.example-2">

Example with `multi_label=True` and a custom `hypothesis_template`:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.zero_shot_classification(
...    text="I really like our dinner and I'm very happy. I don't like the weather though.",
...    labels=["positive", "negative", "pessimistic", "optimistic"],
...    multi_label=True,
...    hypothesis_template="This text is {} towards the weather"
... )
[
    ZeroShotClassificationOutputElement(label='negative', score=0.9231801629066467),
    ZeroShotClassificationOutputElement(label='pessimistic', score=0.8760990500450134),
    ZeroShotClassificationOutputElement(label='optimistic', score=0.0008674879791215062),
    ZeroShotClassificationOutputElement(label='positive', score=0.0005250611575320363)
]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>zero_shot_image_classification</name><anchor>huggingface_hub.InferenceClient.zero_shot_image_classification</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py#L3202</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "candidate_labels", "val": ": list"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "hypothesis_template", "val": ": typing.Optional[str] = None"}, {"name": "labels", "val": ": list = None"}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO, PIL.Image.Image]`) --
  The input image to classify. It can be raw bytes, an image file, a URL to an online image, or a PIL Image.
- **candidate_labels** (`list[str]`) --
  The candidate labels for this image
- **labels** (`list[str]`, *optional*) --
  (deprecated) List of string possible labels. There must be at least 2 labels.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. This parameter overrides the model defined at the instance level. If not provided, the default recommended zero-shot image classification model will be used.
- **hypothesis_template** (`str`, *optional*) --
  The sentence used in conjunction with `candidate_labels` to attempt the image classification by
  replacing the placeholder with the candidate labels.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[ZeroShotImageClassificationOutputElement]`</rettype><retdesc>List of [ZeroShotImageClassificationOutputElement](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.ZeroShotImageClassificationOutputElement) items containing the predicted labels and their confidence.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Provide an input image and a set of text labels to predict the labels that best describe the image.











<ExampleCodeBlock anchor="huggingface_hub.InferenceClient.zero_shot_image_classification.example">

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()

>>> client.zero_shot_image_classification(
...     "https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg",
...     candidate_labels=["dog", "cat", "horse"],
... )
[ZeroShotImageClassificationOutputElement(label='dog', score=0.956),...]
```

</ExampleCodeBlock>


</div></div>

## Async Inference Client[[huggingface_hub.AsyncInferenceClient]]

An asynchronous version of the client is also provided, built on `asyncio` and `aiohttp`.
To use it, either install `aiohttp` directly or install the `[inference]` extra:

```sh
pip install --upgrade huggingface_hub[inference]
# or
# pip install aiohttp
```
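Every task method on `AsyncInferenceClient` is a coroutine, so it must be awaited inside an event loop, typically driven by `asyncio.run()`. The sketch below uses a hypothetical stand-in coroutine instead of a real network call, so it runs without a Hugging Face token; with the real client you would create `AsyncInferenceClient()` and `await` one of its task methods the same way:

```python
import asyncio

# Hypothetical stand-in for an AsyncInferenceClient task method
# (e.g. `await client.audio_classification("audio.flac")`):
# any coroutine is driven the same way by the event loop.
async def classify(path: str) -> list[str]:
    await asyncio.sleep(0)  # placeholder for the aiohttp request
    return ["hap", "neu"]

async def main() -> list[str]:
    # With the real client:
    #     client = AsyncInferenceClient()
    #     return await client.audio_classification("audio.flac")
    return await classify("audio.flac")

labels = asyncio.run(main())
print(labels)
```

The same pattern extends to issuing several requests concurrently with `asyncio.gather`.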

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.AsyncInferenceClient</name><anchor>huggingface_hub.AsyncInferenceClient</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L114</source><parameters>[{"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "provider", "val": ": typing.Union[typing.Literal['black-forest-labs', 'cerebras', 'clarifai', 'cohere', 'fal-ai', 'featherless-ai', 'fireworks-ai', 'groq', 'hf-inference', 'hyperbolic', 'nebius', 'novita', 'nscale', 'openai', 'publicai', 'replicate', 'sambanova', 'scaleway', 'together', 'wavespeed', 'zai-org'], typing.Literal['auto'], NoneType] = None"}, {"name": "token", "val": ": typing.Optional[str] = None"}, {"name": "timeout", "val": ": typing.Optional[float] = None"}, {"name": "headers", "val": ": typing.Optional[dict[str, str]] = None"}, {"name": "cookies", "val": ": typing.Optional[dict[str, str]] = None"}, {"name": "bill_to", "val": ": typing.Optional[str] = None"}, {"name": "base_url", "val": ": typing.Optional[str] = None"}, {"name": "api_key", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **model** (`str`, `optional`) --
  The model to run inference with. Can be a model id hosted on the Hugging Face Hub, e.g. `meta-llama/Meta-Llama-3-8B-Instruct`
  or a URL to a deployed Inference Endpoint. Defaults to None, in which case a recommended model is
  automatically selected for the task.
  Note: for better compatibility with OpenAI's client, `model` has been aliased as `base_url`. Those 2
  arguments are mutually exclusive. If a URL is passed as `model` or `base_url` for chat completion, the `(/v1)/chat/completions` suffix path will be appended to the URL.
- **provider** (`str`, *optional*) --
  Name of the provider to use for inference. Can be `"black-forest-labs"`, `"cerebras"`, `"clarifai"`, `"cohere"`, `"fal-ai"`, `"featherless-ai"`, `"fireworks-ai"`, `"groq"`, `"hf-inference"`, `"hyperbolic"`, `"nebius"`, `"novita"`, `"nscale"`, `"openai"`, `"publicai"`, `"replicate"`, `"sambanova"`, `"scaleway"`, `"together"`, `"wavespeed"` or `"zai-org"`.
  Defaults to "auto" i.e. the first of the providers available for the model, sorted by the user's order in https://hf.co/settings/inference-providers.
  If model is a URL or `base_url` is passed, then `provider` is not used.
- **token** (`str`, *optional*) --
  Hugging Face token. Will default to the locally saved token if not provided.
  Note: for better compatibility with OpenAI's client, `token` has been aliased as `api_key`. Those 2
  arguments are mutually exclusive and have the exact same behavior.
- **timeout** (`float`, `optional`) --
  The maximum number of seconds to wait for a response from the server. Defaults to None, meaning it will loop until the server is available.
- **headers** (`dict[str, str]`, `optional`) --
  Additional headers to send to the server. By default only the authorization and user-agent headers are sent.
  Values in this dictionary will override the default values.
- **bill_to** (`str`, `optional`) --
  The billing account to use for the requests. By default the requests are billed on the user's account.
  Requests can only be billed to an organization the user is a member of, and which has subscribed to Enterprise Hub.
- **cookies** (`dict[str, str]`, `optional`) --
  Additional cookies to send to the server.
- **base_url** (`str`, `optional`) --
  Base URL to run inference. This is a duplicated argument from `model` to make [InferenceClient](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient)
  follow the same pattern as `openai.OpenAI` client. Cannot be used if `model` is set. Defaults to None.
- **api_key** (`str`, `optional`) --
  Token to use for authentication. This is a duplicated argument from `token` to make [InferenceClient](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient)
  follow the same pattern as `openai.OpenAI` client. Cannot be used if `token` is set. Defaults to None.</paramsdesc><paramgroups>0</paramgroups></docstring>

Initialize a new Inference Client.

[InferenceClient](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient) aims to provide a unified experience to perform inference. The client can be used
seamlessly with either the (free) Inference API, self-hosted Inference Endpoints, or third-party Inference Providers.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>audio_classification</name><anchor>huggingface_hub.AsyncInferenceClient.audio_classification</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L317</source><parameters>[{"name": "audio", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}, {"name": "function_to_apply", "val": ": typing.Optional[ForwardRef('AudioClassificationOutputTransform')] = None"}]</parameters><paramsdesc>- **audio** (Union[str, Path, bytes, BinaryIO]) --
  The audio content to classify. It can be raw audio bytes, a local audio file, or a URL pointing to an
  audio file.
- **model** (`str`, *optional*) --
  The model to use for audio classification. Can be a model ID hosted on the Hugging Face Hub
  or a URL to a deployed Inference Endpoint. If not provided, the default recommended model for
  audio classification will be used.
- **top_k** (`int`, *optional*) --
  When specified, limits the output to the top K most probable classes.
- **function_to_apply** (`"AudioClassificationOutputTransform"`, *optional*) --
  The function to apply to the model outputs in order to retrieve the scores.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[AudioClassificationOutputElement]`</rettype><retdesc>List of [AudioClassificationOutputElement](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.AudioClassificationOutputElement) items containing the predicted labels and their confidence.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Perform audio classification on the provided audio content.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.audio_classification.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.audio_classification("audio.flac")
[
    AudioClassificationOutputElement(score=0.4976358711719513, label='hap'),
    AudioClassificationOutputElement(score=0.3677836060523987, label='neu'),
    ...
]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>audio_to_audio</name><anchor>huggingface_hub.AsyncInferenceClient.audio_to_audio</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L375</source><parameters>[{"name": "audio", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **audio** (Union[str, Path, bytes, BinaryIO]) --
  The audio content for the model. It can be raw audio bytes, a local audio file, or a URL pointing to an
  audio file.
- **model** (`str`, *optional*) --
  The model can be any model which takes an audio file and returns another audio file. Can be a model ID hosted on the Hugging Face Hub
  or a URL to a deployed Inference Endpoint. If not provided, the default recommended model for
  audio_to_audio will be used.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[AudioToAudioOutputElement]`</rettype><retdesc>A list of [AudioToAudioOutputElement](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.AudioToAudioOutputElement) items containing audios label, content-type, and audio content in blob.</retdesc><raises>- ``InferenceTimeoutError`` -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>``InferenceTimeoutError`` or `HfHubHTTPError`</raisederrors></docstring>

Perform audio-to-audio tasks, depending on the model (e.g. speech enhancement, source separation).











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.audio_to_audio.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> audio_output = await client.audio_to_audio("audio.flac")
>>> for i, item in enumerate(audio_output):
...     with open(f"output_{i}.flac", "wb") as f:
...         f.write(item.blob)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>automatic_speech_recognition</name><anchor>huggingface_hub.AsyncInferenceClient.automatic_speech_recognition</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L428</source><parameters>[{"name": "audio", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "extra_body", "val": ": typing.Optional[dict] = None"}]</parameters><paramsdesc>- **audio** (Union[str, Path, bytes, BinaryIO]) --
  The content to transcribe. It can be raw audio bytes, local audio file, or a URL to an audio file.
- **model** (`str`, *optional*) --
  The model to use for ASR. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. If not provided, the default recommended model for ASR will be used.
- **extra_body** (`dict`, *optional*) --
  Additional provider-specific parameters to pass to the model. Refer to the provider's documentation
  for supported parameters.</paramsdesc><paramgroups>0</paramgroups><rettype>[AutomaticSpeechRecognitionOutput](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.AutomaticSpeechRecognitionOutput)</rettype><retdesc>An item containing the transcribed text and optionally the timestamp chunks.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Perform automatic speech recognition (ASR or audio-to-text) on the given audio content.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.automatic_speech_recognition.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> (await client.automatic_speech_recognition("hello_world.flac")).text
"hello world"
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>chat_completion</name><anchor>huggingface_hub.AsyncInferenceClient.chat_completion</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L555</source><parameters>[{"name": "messages", "val": ": list"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "stream", "val": ": bool = False"}, {"name": "frequency_penalty", "val": ": typing.Optional[float] = None"}, {"name": "logit_bias", "val": ": typing.Optional[list[float]] = None"}, {"name": "logprobs", "val": ": typing.Optional[bool] = None"}, {"name": "max_tokens", "val": ": typing.Optional[int] = None"}, {"name": "n", "val": ": typing.Optional[int] = None"}, {"name": "presence_penalty", "val": ": typing.Optional[float] = None"}, {"name": "response_format", "val": ": typing.Union[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputResponseFormatText, huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputResponseFormatJSONSchema, huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputResponseFormatJSONObject, NoneType] = None"}, {"name": "seed", "val": ": typing.Optional[int] = None"}, {"name": "stop", "val": ": typing.Optional[list[str]] = None"}, {"name": "stream_options", "val": ": typing.Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputStreamOptions] = None"}, {"name": "temperature", "val": ": typing.Optional[float] = None"}, {"name": "tool_choice", "val": ": typing.Union[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputToolChoiceClass, ForwardRef('ChatCompletionInputToolChoiceEnum'), NoneType] = None"}, {"name": "tool_prompt", "val": ": typing.Optional[str] = None"}, {"name": "tools", "val": ": typing.Optional[list[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputTool]] = None"}, {"name": "top_logprobs", "val": ": typing.Optional[int] = 
None"}, {"name": "top_p", "val": ": typing.Optional[float] = None"}, {"name": "extra_body", "val": ": typing.Optional[dict] = None"}]</parameters><paramsdesc>- **messages** (List of [ChatCompletionInputMessage](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.ChatCompletionInputMessage)) --
  Conversation history consisting of roles and content pairs.
- **model** (`str`, *optional*) --
  The model to use for chat-completion. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. If not provided, the default recommended model for chat-based text-generation will be used.
  See https://huggingface.co/tasks/text-generation for more details.
  If `model` is a model ID, it is passed to the server as the `model` parameter. If you want to define a
  custom URL while setting `model` in the request payload, you must set `base_url` when initializing [InferenceClient](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient).
- **frequency_penalty** (`float`, *optional*) --
  Penalizes new tokens based on their existing frequency
  in the text so far. Range: [-2.0, 2.0]. Defaults to 0.0.
- **logit_bias** (`list[float]`, *optional*) --
  Adjusts the likelihood of specific tokens appearing in the generated output.
- **logprobs** (`bool`, *optional*) --
  Whether to return log probabilities of the output tokens or not. If true, returns the log
  probabilities of each output token returned in the content of message.
- **max_tokens** (`int`, *optional*) --
  Maximum number of tokens allowed in the response. Defaults to 100.
- **n** (`int`, *optional*) --
  The number of completions to generate for each prompt.
- **presence_penalty** (`float`, *optional*) --
  Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the
  text so far, increasing the model's likelihood to talk about new topics.
- **response_format** (`ChatCompletionInputGrammarType()`, *optional*) --
  Grammar constraints. Can be either a JSONSchema or a regex.
- **seed** (`int`, *optional*) --
  Seed for reproducible control flow. Defaults to None.
- **stop** (`list[str]`, *optional*) --
  Up to four strings which trigger the end of the response.
  Defaults to None.
- **stream** (`bool`, *optional*) --
  Enable realtime streaming of responses. Defaults to False.
- **stream_options** ([ChatCompletionInputStreamOptions](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.ChatCompletionInputStreamOptions), *optional*) --
  Options for streaming completions.
- **temperature** (`float`, *optional*) --
  Controls randomness of the generations. Lower values ensure
  less random completions. Range: [0, 2]. Defaults to 1.0.
- **top_logprobs** (`int`, *optional*) --
  An integer between 0 and 5 specifying the number of most likely tokens to return at each token
  position, each with an associated log probability. logprobs must be set to true if this parameter is
  used.
- **top_p** (`float`, *optional*) --
  Fraction of the most likely next words to sample from.
  Must be between 0 and 1. Defaults to 1.0.
- **tool_choice** ([ChatCompletionInputToolChoiceClass](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.ChatCompletionInputToolChoiceClass) or `ChatCompletionInputToolChoiceEnum()`, *optional*) --
  The tool to use for the completion. Defaults to "auto".
- **tool_prompt** (`str`, *optional*) --
  A prompt to be appended before the tools.
- **tools** (List of [ChatCompletionInputTool](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.ChatCompletionInputTool), *optional*) --
  A list of tools the model may call. Currently, only functions are supported as a tool. Use this to
  provide a list of functions the model may generate JSON inputs for.
- **extra_body** (`dict`, *optional*) --
  Additional provider-specific parameters to pass to the model. Refer to the provider's documentation
  for supported parameters.</paramsdesc><paramgroups>0</paramgroups><rettype>[ChatCompletionOutput](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.ChatCompletionOutput) or Iterable of [ChatCompletionStreamOutput](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.ChatCompletionStreamOutput)</rettype><retdesc>Generated text returned from the server:
- if `stream=False`, the generated text is returned as a [ChatCompletionOutput](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.ChatCompletionOutput) (default).
- if `stream=True`, the generated text is returned token by token as a sequence of [ChatCompletionStreamOutput](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.ChatCompletionStreamOutput).</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

A method for completing conversations using a specified language model.

> [!TIP]
> The `client.chat_completion` method is aliased as `client.chat.completions.create` for compatibility with OpenAI's client.
> Inputs and outputs are strictly the same and using either syntax will yield the same results.
> Check out the [Inference guide](https://huggingface.co/docs/huggingface_hub/guides/inference#openai-compatibility)
> for more details about OpenAI's compatibility.

> [!TIP]
> You can pass provider-specific parameters to the model by using the `extra_body` argument.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.chat_completion.example">

Example:

```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> messages = [{"role": "user", "content": "What is the capital of France?"}]
>>> client = AsyncInferenceClient("meta-llama/Meta-Llama-3-8B-Instruct")
>>> await client.chat_completion(messages, max_tokens=100)
ChatCompletionOutput(
    choices=[
        ChatCompletionOutputComplete(
            finish_reason='eos_token',
            index=0,
            message=ChatCompletionOutputMessage(
                role='assistant',
                content='The capital of France is Paris.',
                name=None,
                tool_calls=None
            ),
            logprobs=None
        )
    ],
    created=1719907176,
    id='',
    model='meta-llama/Meta-Llama-3-8B-Instruct',
    object='text_completion',
    system_fingerprint='2.0.4-sha-f426a33',
    usage=ChatCompletionOutputUsage(
        completion_tokens=8,
        prompt_tokens=17,
        total_tokens=25
    )
)
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.chat_completion.example-2">

Example using streaming:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> messages = [{"role": "user", "content": "What is the capital of France?"}]
>>> client = AsyncInferenceClient("meta-llama/Meta-Llama-3-8B-Instruct")
>>> async for token in await client.chat_completion(messages, max_tokens=10, stream=True):
...     print(token)
ChatCompletionStreamOutput(choices=[ChatCompletionStreamOutputChoice(delta=ChatCompletionStreamOutputDelta(content='The', role='assistant'), index=0, finish_reason=None)], created=1710498504)
ChatCompletionStreamOutput(choices=[ChatCompletionStreamOutputChoice(delta=ChatCompletionStreamOutputDelta(content=' capital', role='assistant'), index=0, finish_reason=None)], created=1710498504)
(...)
ChatCompletionStreamOutput(choices=[ChatCompletionStreamOutputChoice(delta=ChatCompletionStreamOutputDelta(content=' may', role='assistant'), index=0, finish_reason=None)], created=1710498504)
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.chat_completion.example-3">

Example using OpenAI's syntax:
```py
# Must be run in an async context
# instead of `from openai import OpenAI`
from huggingface_hub import AsyncInferenceClient

# instead of `client = OpenAI(...)`
client = AsyncInferenceClient(
    base_url=...,
    api_key=...,
)

output = await client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Count to 10"},
    ],
    stream=True,
    max_tokens=1024,
)

async for chunk in output:
    print(chunk.choices[0].delta.content)
```

</ExampleCodeBlock>

Example using a third-party provider directly with extra (provider-specific) parameters. Usage will be billed on your Together AI account.
<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.chat_completion.example-4">

```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient(
...     provider="together",  # Use Together AI provider
...     api_key="<together_api_key>",  # Pass your Together API key directly
... )
>>> await client.chat_completion(
...     model="meta-llama/Meta-Llama-3-8B-Instruct",
...     messages=[{"role": "user", "content": "What is the capital of France?"}],
...     extra_body={"safety_model": "Meta-Llama/Llama-Guard-7b"},
... )
```

</ExampleCodeBlock>

Example using a third-party provider through Hugging Face Routing. Usage will be billed on your Hugging Face account.
<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.chat_completion.example-5">

```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient(
...     provider="sambanova",  # Use Sambanova provider
...     api_key="hf_...",  # Pass your HF token
... )
>>> await client.chat_completion(
...     model="meta-llama/Meta-Llama-3-8B-Instruct",
...     messages=[{"role": "user", "content": "What is the capital of France?"}],
... )
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.chat_completion.example-6">

Example using Image + Text as input:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient

# provide a remote URL
>>> image_url = "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
# or a base64-encoded image
>>> import base64
>>> image_path = "/path/to/image.jpeg"
>>> with open(image_path, "rb") as f:
...     base64_image = base64.b64encode(f.read()).decode("utf-8")
>>> image_url = f"data:image/jpeg;base64,{base64_image}"

>>> client = AsyncInferenceClient("meta-llama/Llama-3.2-11B-Vision-Instruct")
>>> output = await client.chat.completions.create(
...     messages=[
...         {
...             "role": "user",
...             "content": [
...                 {
...                     "type": "image_url",
...                     "image_url": {"url": image_url},
...                 },
...                 {
...                     "type": "text",
...                     "text": "Describe this image in one sentence.",
...                 },
...             ],
...         },
...     ],
... )
>>> output.choices[0].message.content
'The image depicts the iconic Statue of Liberty situated in New York Harbor, New York, on a clear day.'
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.chat_completion.example-7">

Example using tools:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient("meta-llama/Meta-Llama-3-70B-Instruct")
>>> messages = [
...     {
...         "role": "system",
...         "content": "Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous.",
...     },
...     {
...         "role": "user",
...         "content": "What's the weather like the next 3 days in San Francisco, CA?",
...     },
... ]
>>> tools = [
...     {
...         "type": "function",
...         "function": {
...             "name": "get_current_weather",
...             "description": "Get the current weather",
...             "parameters": {
...                 "type": "object",
...                 "properties": {
...                     "location": {
...                         "type": "string",
...                         "description": "The city and state, e.g. San Francisco, CA",
...                     },
...                     "format": {
...                         "type": "string",
...                         "enum": ["celsius", "fahrenheit"],
...                         "description": "The temperature unit to use. Infer this from the user's location.",
...                     },
...                 },
...                 "required": ["location", "format"],
...             },
...         },
...     },
...     {
...         "type": "function",
...         "function": {
...             "name": "get_n_day_weather_forecast",
...             "description": "Get an N-day weather forecast",
...             "parameters": {
...                 "type": "object",
...                 "properties": {
...                     "location": {
...                         "type": "string",
...                         "description": "The city and state, e.g. San Francisco, CA",
...                     },
...                     "format": {
...                         "type": "string",
...                         "enum": ["celsius", "fahrenheit"],
...                         "description": "The temperature unit to use. Infer this from the user's location.",
...                     },
...                     "num_days": {
...                         "type": "integer",
...                         "description": "The number of days to forecast",
...                     },
...                 },
...                 "required": ["location", "format", "num_days"],
...             },
...         },
...     },
... ]

>>> response = await client.chat_completion(
...     model="meta-llama/Meta-Llama-3-70B-Instruct",
...     messages=messages,
...     tools=tools,
...     tool_choice="auto",
...     max_tokens=500,
... )
>>> response.choices[0].message.tool_calls[0].function
ChatCompletionOutputFunctionDefinition(
    arguments={
        'location': 'San Francisco, CA',
        'format': 'fahrenheit',
        'num_days': 3
    },
    name='get_n_day_weather_forecast',
    description=None
)
```

</ExampleCodeBlock>
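A returned tool call is typically dispatched to a local function by name; a minimal sketch of that pattern (the name and arguments below are hand-written stand-ins for the `ChatCompletionOutputFunctionDefinition` shown above):

```python
# Local implementation registered under its tool name.
def get_n_day_weather_forecast(location: str, format: str, num_days: int) -> str:
    return f"{num_days}-day {format} forecast for {location}"

TOOL_REGISTRY = {"get_n_day_weather_forecast": get_n_day_weather_forecast}

# Stand-in for response.choices[0].message.tool_calls[0].function
tool_name = "get_n_day_weather_forecast"
tool_args = {"location": "San Francisco, CA", "format": "fahrenheit", "num_days": 3}

# Look up the registered function and call it with the model-provided arguments.
result = TOOL_REGISTRY[tool_name](**tool_args)
print(result)  # 3-day fahrenheit forecast for San Francisco, CA
```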

<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.chat_completion.example-8">

Example using response_format:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient("meta-llama/Meta-Llama-3-70B-Instruct")
>>> messages = [
...     {
...         "role": "user",
...         "content": "I saw a puppy a cat and a raccoon during my bike ride in the park. What did I see and when?",
...     },
... ]
>>> response_format = {
...     "type": "json",
...     "value": {
...         "properties": {
...             "location": {"type": "string"},
...             "activity": {"type": "string"},
...             "animals_seen": {"type": "integer", "minimum": 1, "maximum": 5},
...             "animals": {"type": "array", "items": {"type": "string"}},
...         },
...         "required": ["location", "activity", "animals_seen", "animals"],
...     },
... }
>>> response = await client.chat_completion(
...     messages=messages,
...     response_format=response_format,
...     max_tokens=500,
... )
>>> response.choices[0].message.content
'{\n\n"activity": "bike ride",\n"animals": ["puppy", "cat", "raccoon"],\n"animals_seen": 3,\n"location": "park"}'
```

</ExampleCodeBlock>
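With `response_format`, the returned `content` is a JSON string constrained to the requested schema, so it can be parsed directly. A minimal sketch (the string below is a hand-written stand-in for the example output):

```python
import json

# Stand-in for response.choices[0].message.content under the schema above.
content = '{"activity": "bike ride", "animals": ["puppy", "cat", "raccoon"], "animals_seen": 3, "location": "park"}'

parsed = json.loads(content)
print(parsed["animals_seen"])  # 3
print(parsed["animals"])  # ['puppy', 'cat', 'raccoon']
```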


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>close</name><anchor>huggingface_hub.AsyncInferenceClient.close</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L238</source><parameters>[]</parameters></docstring>
Close the client.

This method is automatically called when using the client as a context manager.
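A sketch of the context-manager pattern that triggers this automatic close, using a stand-in class (the real `AsyncInferenceClient` requires network access):

```python
import asyncio

class FakeClient:
    """Stand-in mimicking the client's async context-manager protocol."""
    def __init__(self):
        self.closed = False

    async def close(self):
        self.closed = True

    async def __aenter__(self):
        return self

    async def __aexit__(self, exc_type, exc, tb):
        # Mirrors the documented behavior: close() runs automatically on exit.
        await self.close()

async def main():
    async with FakeClient() as client:
        assert not client.closed  # still open inside the block
    return client

client = asyncio.run(main())
print(client.closed)  # True
```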


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>document_question_answering</name><anchor>huggingface_hub.AsyncInferenceClient.document_question_answering</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L963</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "question", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "doc_stride", "val": ": typing.Optional[int] = None"}, {"name": "handle_impossible_answer", "val": ": typing.Optional[bool] = None"}, {"name": "lang", "val": ": typing.Optional[str] = None"}, {"name": "max_answer_len", "val": ": typing.Optional[int] = None"}, {"name": "max_question_len", "val": ": typing.Optional[int] = None"}, {"name": "max_seq_len", "val": ": typing.Optional[int] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}, {"name": "word_boxes", "val": ": typing.Optional[list[typing.Union[list[float], str]]] = None"}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO]`) --
  The input image for the context. It can be raw bytes, an image file, or a URL to an online image.
- **question** (`str`) --
  Question to be answered.
- **model** (`str`, *optional*) --
  The model to use for the document question answering task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended document question answering model will be used.
  Defaults to None.
- **doc_stride** (`int`, *optional*) --
  If the words in the document are too long to fit with the question for the model, the document will be
  split into several chunks with some overlap. This argument controls the size of that overlap.
- **handle_impossible_answer** (`bool`, *optional*) --
  Whether or not to accept an impossible (empty) answer.
- **lang** (`str`, *optional*) --
  Language to use while running OCR. Defaults to English.
- **max_answer_len** (`int`, *optional*) --
  The maximum length of predicted answers (e.g., only answers with a shorter length are considered).
- **max_question_len** (`int`, *optional*) --
  The maximum length of the question after tokenization. It will be truncated if needed.
- **max_seq_len** (`int`, *optional*) --
  The maximum length of the total sentence (context + question) in tokens of each chunk passed to the
  model. The context will be split in several chunks (using doc_stride as overlap) if needed.
- **top_k** (`int`, *optional*) --
  The number of answers to return (chosen by order of likelihood). Can return fewer than top_k
  answers if there are not enough options available within the context.
- **word_boxes** (`list[Union[list[float], str]]`, *optional*) --
  A list of words and bounding boxes (normalized 0->1000). If provided, the inference will skip the OCR
  step and use the provided bounding boxes instead.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[DocumentQuestionAnsweringOutputElement]`</rettype><retdesc>a list of [DocumentQuestionAnsweringOutputElement](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.DocumentQuestionAnsweringOutputElement) items containing the predicted label, associated probability, word ids, and page number.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Answer questions on document images.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.document_question_answering.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.document_question_answering(image="https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png", question="What is the invoice number?")
[DocumentQuestionAnsweringOutputElement(answer='us-001', end=16, score=0.9999666213989258, start=16)]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>feature_extraction</name><anchor>huggingface_hub.AsyncInferenceClient.feature_extraction</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L1051</source><parameters>[{"name": "text", "val": ": str"}, {"name": "normalize", "val": ": typing.Optional[bool] = None"}, {"name": "prompt_name", "val": ": typing.Optional[str] = None"}, {"name": "truncate", "val": ": typing.Optional[bool] = None"}, {"name": "truncation_direction", "val": ": typing.Optional[typing.Literal['Left', 'Right']] = None"}, {"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **text** (*str*) --
  The text to embed.
- **model** (*str*, *optional*) --
  The model to use for the feature extraction task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended feature extraction model will be used.
  Defaults to None.
- **normalize** (*bool*, *optional*) --
  Whether to normalize the embeddings or not.
  Only available on servers powered by Text-Embedding-Inference.
- **prompt_name** (*str*, *optional*) --
  The name of the prompt that should be used for encoding. If not set, no prompt will be applied.
  Must be a key in the *Sentence Transformers* configuration *prompts* dictionary.
  For example, if `prompt_name` is "query" and `prompts` is {"query": "query: ", ...},
  then the sentence "What is the capital of France?" will be encoded as "query: What is the capital of France?"
  because the prompt text will be prepended before any text to encode.
- **truncate** (*bool*, *optional*) --
  Whether to truncate the embeddings or not.
  Only available on servers powered by Text-Embedding-Inference.
- **truncation_direction** (*Literal["Left", "Right"]*, *optional*) --
  Which side of the input should be truncated when *truncate=True* is passed.</paramsdesc><paramgroups>0</paramgroups><rettype>*np.ndarray*</rettype><retdesc>The embedding representing the input text as a float32 numpy array.</retdesc><raises>- [*InferenceTimeoutError*] -- 
  If the model is unavailable or the request times out.
- [*HfHubHTTPError*] -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[*InferenceTimeoutError*] or [*HfHubHTTPError*]</raisederrors></docstring>

Generate embeddings for a given text.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.feature_extraction.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.feature_extraction("Hi, who are you?")
array([[ 2.424802  ,  2.93384   ,  1.1750331 , ...,  1.240499, -0.13776633, -0.7889173 ],
[-0.42943227, -0.6364878 , -1.693462  , ...,  0.41978157, -2.4336355 ,  0.6162071 ],
...,
[ 0.28552425, -0.928395  , -1.2077185 , ...,  0.76810825, -2.1069427 ,  0.6236161 ]], dtype=float32)
```

</ExampleCodeBlock>
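Embeddings returned by `feature_extraction` are commonly compared with cosine similarity; a minimal pure-Python sketch (the short vectors are illustrative stand-ins for real embedding rows):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Illustrative stand-ins for two embedding rows.
v1 = [2.42, 2.93, 1.17]
v2 = [2.40, 2.95, 1.20]

score = cosine_similarity(v1, v2)
print(score > 0.99)  # True: near-identical vectors score close to 1.0
```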


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fill_mask</name><anchor>huggingface_hub.AsyncInferenceClient.fill_mask</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L1125</source><parameters>[{"name": "text", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "targets", "val": ": typing.Optional[list[str]] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **text** (`str`) --
  a string to be filled; it must contain the mask token (check the model card for the exact name of the mask, e.g. [MASK]).
- **model** (`str`, *optional*) --
  The model to use for the fill mask task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended fill mask model will be used.
- **targets** (`list[str]`, *optional*) --
  When passed, the model will limit the scores to the passed targets instead of looking up in the whole
  vocabulary. If the provided targets are not in the model vocab, they will be tokenized and the first
  resulting token will be used (with a warning, and that might be slower).
- **top_k** (`int`, *optional*) --
  When passed, overrides the number of predictions to return.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[FillMaskOutputElement]`</rettype><retdesc>a list of [FillMaskOutputElement](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.FillMaskOutputElement) items containing the predicted label, associated
probability, token reference, and completed text.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Fill in a hole with a missing word (token to be precise).











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.fill_mask.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.fill_mask("The goal of life is <mask>.")
[
    FillMaskOutputElement(score=0.06897063553333282, token=11098, token_str=' happiness', sequence='The goal of life is happiness.'),
    FillMaskOutputElement(score=0.06554922461509705, token=45075, token_str=' immortality', sequence='The goal of life is immortality.')
]
```

</ExampleCodeBlock>
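Picking the highest-scoring fill is a one-liner over the returned list; a sketch using dict stand-ins for the `FillMaskOutputElement` items shown above:

```python
# Stand-ins for FillMaskOutputElement results (score, token_str, sequence).
results = [
    {"score": 0.0690, "token_str": " happiness", "sequence": "The goal of life is happiness."},
    {"score": 0.0655, "token_str": " immortality", "sequence": "The goal of life is immortality."},
]

# Select the prediction with the highest score.
best = max(results, key=lambda r: r["score"])
print(best["sequence"])  # The goal of life is happiness.
```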


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_endpoint_info</name><anchor>huggingface_hub.AsyncInferenceClient.get_endpoint_info</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L3320</source><parameters>[{"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None.</paramsdesc><paramgroups>0</paramgroups><rettype>`dict[str, Any]`</rettype><retdesc>Information about the endpoint.</retdesc></docstring>

Get information about the deployed endpoint.

This information is only available on endpoints powered by Text-Generation-Inference (TGI) or Text-Embedding-Inference (TEI).
Endpoints powered by `transformers` return an empty payload.







<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.get_endpoint_info.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient("meta-llama/Meta-Llama-3-70B-Instruct")
>>> await client.get_endpoint_info()
{
    'model_id': 'meta-llama/Meta-Llama-3-70B-Instruct',
    'model_sha': None,
    'model_dtype': 'torch.float16',
    'model_device_type': 'cuda',
    'model_pipeline_tag': None,
    'max_concurrent_requests': 128,
    'max_best_of': 2,
    'max_stop_sequences': 4,
    'max_input_length': 8191,
    'max_total_tokens': 8192,
    'waiting_served_ratio': 0.3,
    'max_batch_total_tokens': 1259392,
    'max_waiting_tokens': 20,
    'max_batch_size': None,
    'validation_workers': 32,
    'max_client_batch_size': 4,
    'version': '2.0.2',
    'sha': 'dccab72549635c7eb5ddb17f43f0b7cdff07c214',
    'docker_label': 'sha-dccab72'
}
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>health_check</name><anchor>huggingface_hub.AsyncInferenceClient.health_check</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L3380</source><parameters>[{"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **model** (`str`, *optional*) --
  URL of the Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None.</paramsdesc><paramgroups>0</paramgroups><rettype>`bool`</rettype><retdesc>True if everything is working fine.</retdesc></docstring>

Check the health of the deployed endpoint.

Health check is only available with Inference Endpoints powered by Text-Generation-Inference (TGI) or Text-Embedding-Inference (TEI).







<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.health_check.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient("https://jzgu0buei5.us-east-1.aws.endpoints.huggingface.cloud")
>>> await client.health_check()
True
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>image_classification</name><anchor>huggingface_hub.AsyncInferenceClient.image_classification</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L1182</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "function_to_apply", "val": ": typing.Optional[ForwardRef('ImageClassificationOutputTransform')] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO, PIL.Image.Image]`) --
  The image to classify. It can be raw bytes, an image file, a URL to an online image, or a PIL Image.
- **model** (`str`, *optional*) --
  The model to use for image classification. Can be a model ID hosted on the Hugging Face Hub or a URL to a
  deployed Inference Endpoint. If not provided, the default recommended model for image classification will be used.
- **function_to_apply** (`"ImageClassificationOutputTransform"`, *optional*) --
  The function to apply to the model outputs in order to retrieve the scores.
- **top_k** (`int`, *optional*) --
  When specified, limits the output to the top K most probable classes.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[ImageClassificationOutputElement]`</rettype><retdesc>a list of [ImageClassificationOutputElement](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.ImageClassificationOutputElement) items containing the predicted label and associated probability.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Perform image classification on the given image using the specified model.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.image_classification.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.image_classification("https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg")
[ImageClassificationOutputElement(label='Blenheim spaniel', score=0.9779096841812134), ...]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>image_segmentation</name><anchor>huggingface_hub.AsyncInferenceClient.image_segmentation</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L1233</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "mask_threshold", "val": ": typing.Optional[float] = None"}, {"name": "overlap_mask_area_threshold", "val": ": typing.Optional[float] = None"}, {"name": "subtask", "val": ": typing.Optional[ForwardRef('ImageSegmentationSubtask')] = None"}, {"name": "threshold", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO, PIL.Image.Image]`) --
  The image to segment. It can be raw bytes, an image file, a URL to an online image, or a PIL Image.
- **model** (`str`, *optional*) --
  The model to use for image segmentation. Can be a model ID hosted on the Hugging Face Hub or a URL to a
  deployed Inference Endpoint. If not provided, the default recommended model for image segmentation will be used.
- **mask_threshold** (`float`, *optional*) --
  Threshold to use when turning the predicted masks into binary values.
- **overlap_mask_area_threshold** (`float`, *optional*) --
  Mask overlap threshold to eliminate small, disconnected segments.
- **subtask** (`"ImageSegmentationSubtask"`, *optional*) --
  Segmentation task to be performed, depending on model capabilities.
- **threshold** (`float`, *optional*) --
  Probability threshold to filter out predicted masks.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[ImageSegmentationOutputElement]`</rettype><retdesc>A list of [ImageSegmentationOutputElement](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.ImageSegmentationOutputElement) items containing the segmented masks and associated attributes.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Perform image segmentation on the given image using the specified model.

> [!WARNING]
> You must have `PIL` installed if you want to work with images (`pip install Pillow`).











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.image_segmentation.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.image_segmentation("cat.jpg")
[ImageSegmentationOutputElement(score=0.989008, label='LABEL_184', mask=<PIL.PngImagePlugin.PngImageFile image mode=L size=400x300 at 0x7FDD2B129CC0>), ...]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>image_to_image</name><anchor>huggingface_hub.AsyncInferenceClient.image_to_image</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L1301</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "prompt", "val": ": typing.Optional[str] = None"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "guidance_scale", "val": ": typing.Optional[float] = None"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "target_size", "val": ": typing.Optional[huggingface_hub.inference._generated.types.image_to_image.ImageToImageTargetSize] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO, PIL.Image.Image]`) --
  The input image for translation. It can be raw bytes, an image file, a URL to an online image, or a PIL Image.
- **prompt** (`str`, *optional*) --
  The text prompt to guide the image generation.
- **negative_prompt** (`str`, *optional*) --
  One prompt to guide what NOT to include in image generation.
- **num_inference_steps** (`int`, *optional*) --
  For diffusion models. The number of denoising steps. More denoising steps usually lead to a higher
  quality image at the expense of slower inference.
- **guidance_scale** (`float`, *optional*) --
  For diffusion models. A higher guidance scale value encourages the model to generate images closely
  linked to the text prompt at the expense of lower image quality.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None.
- **target_size** (`ImageToImageTargetSize`, *optional*) --
  The size in pixels of the output image. This parameter is only supported by some providers and for
  specific models. It will be ignored when unsupported.</paramsdesc><paramgroups>0</paramgroups><rettype>`Image`</rettype><retdesc>The translated image.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Perform image-to-image translation using a specified model.

> [!WARNING]
> You must have `PIL` installed if you want to work with images (`pip install Pillow`).











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.image_to_image.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> image = await client.image_to_image("cat.jpg", prompt="turn the cat into a tiger")
>>> image.save("tiger.jpg")
```

</ExampleCodeBlock>



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>image_to_text</name><anchor>huggingface_hub.AsyncInferenceClient.image_to_text</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L1458</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO, PIL.Image.Image]`) --
  The input image to caption. It can be raw bytes, an image file, a URL to an online image, or a PIL Image.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImageToTextOutput](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.ImageToTextOutput)</rettype><retdesc>The generated text.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Takes an input image and returns text.

Models can have very different outputs depending on your use case (image captioning, optical character recognition
(OCR), Pix2Struct, etc.). Please have a look at the model card to learn more about a model's specifics.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.image_to_text.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.image_to_text("cat.jpg")
'a cat standing in a grassy field '
>>> await client.image_to_text("https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg")
'a dog laying on the grass next to a flower pot '
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>image_to_video</name><anchor>huggingface_hub.AsyncInferenceClient.image_to_video</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L1378</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "prompt", "val": ": typing.Optional[str] = None"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "num_frames", "val": ": typing.Optional[float] = None"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "guidance_scale", "val": ": typing.Optional[float] = None"}, {"name": "seed", "val": ": typing.Optional[int] = None"}, {"name": "target_size", "val": ": typing.Optional[huggingface_hub.inference._generated.types.image_to_video.ImageToVideoTargetSize] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO, PIL.Image.Image]`) --
  The input image to generate a video from. It can be raw bytes, an image file, a URL to an online image, or a PIL Image.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None.
- **prompt** (`str`, *optional*) --
  The text prompt to guide the video generation.
- **negative_prompt** (`str`, *optional*) --
  A prompt describing what NOT to include in the generated video.
- **num_frames** (`float`, *optional*) --
  The number of video frames to generate.
- **num_inference_steps** (`int`, *optional*) --
  For diffusion models. The number of denoising steps. More denoising steps usually lead to a higher
  quality image at the expense of slower inference.
- **guidance_scale** (`float`, *optional*) --
  For diffusion models. A higher guidance scale value encourages the model to generate videos closely
  linked to the text prompt at the expense of lower image quality.
- **seed** (`int`, *optional*) --
  The seed to use for the video generation.
- **target_size** (`ImageToVideoTargetSize`, *optional*) --
  The size in pixels of the output video frames.</paramsdesc><paramgroups>0</paramgroups><rettype>`bytes`</rettype><retdesc>The generated video.</retdesc></docstring>

Generate a video from an input image.







<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.image_to_video.example">

Examples:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> video = await client.image_to_video("cat.jpg", model="Wan-AI/Wan2.2-I2V-A14B", prompt="turn the cat into a tiger")
>>> with open("tiger.mp4", "wb") as f:
...     f.write(video)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>object_detection</name><anchor>huggingface_hub.AsyncInferenceClient.object_detection</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L1505</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "threshold", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO, PIL.Image.Image]`) --
  The image to detect objects on. It can be raw bytes, an image file, a URL to an online image, or a PIL Image.
- **model** (`str`, *optional*) --
  The model to use for object detection. Can be a model ID hosted on the Hugging Face Hub or a URL to a
  deployed Inference Endpoint. If not provided, the default recommended model for object detection (DETR) will be used.
- **threshold** (`float`, *optional*) --
  The probability necessary to make a prediction.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[ObjectDetectionOutputElement]`</rettype><retdesc>A list of [ObjectDetectionOutputElement](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.ObjectDetectionOutputElement) items containing the bounding boxes and associated attributes.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.
- ``ValueError`` -- 
  If the request output is not a List.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError` or ``ValueError``</raisederrors></docstring>

Perform object detection on the given image using the specified model.

> [!WARNING]
> You must have `PIL` installed if you want to work with images (`pip install Pillow`).











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.object_detection.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.object_detection("people.jpg")
[ObjectDetectionOutputElement(score=0.9486683011054993, label='person', box=ObjectDetectionBoundingBox(xmin=59, ymin=39, xmax=420, ymax=510)), ...]
```

</ExampleCodeBlock>
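
Detections can also be filtered client-side by score, e.g. when a model ignores the `threshold` parameter. A minimal sketch, using plain dicts as stand-ins for the returned `ObjectDetectionOutputElement` objects:

```py
def keep_confident(detections, min_score=0.9):
    # Keep only detections whose confidence meets the threshold.
    return [d for d in detections if d["score"] >= min_score]

# Stand-in values; the real call returns ObjectDetectionOutputElement objects
# with `score`, `label`, and `box` attributes.
detections = [
    {"score": 0.95, "label": "person"},
    {"score": 0.42, "label": "person"},
    {"score": 0.91, "label": "dog"},
]
print(keep_confident(detections))  # keeps the 0.95 and 0.91 detections
```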


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>question_answering</name><anchor>huggingface_hub.AsyncInferenceClient.question_answering</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L1554</source><parameters>[{"name": "question", "val": ": str"}, {"name": "context", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "align_to_words", "val": ": typing.Optional[bool] = None"}, {"name": "doc_stride", "val": ": typing.Optional[int] = None"}, {"name": "handle_impossible_answer", "val": ": typing.Optional[bool] = None"}, {"name": "max_answer_len", "val": ": typing.Optional[int] = None"}, {"name": "max_question_len", "val": ": typing.Optional[int] = None"}, {"name": "max_seq_len", "val": ": typing.Optional[int] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **question** (`str`) --
  Question to be answered.
- **context** (`str`) --
  The context of the question.
- **model** (`str`) --
  The model to use for the question answering task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint.
- **align_to_words** (`bool`, *optional*) --
  Attempts to align the answer to real words. Improves quality on space separated languages. Might hurt
  on non-space-separated languages (like Japanese or Chinese)
- **doc_stride** (`int`, *optional*) --
  If the context is too long to fit with the question for the model, it will be split in several chunks
  with some overlap. This argument controls the size of that overlap.
- **handle_impossible_answer** (`bool`, *optional*) --
  Whether to accept impossible as an answer.
- **max_answer_len** (`int`, *optional*) --
  The maximum length of predicted answers (e.g., only answers with a shorter length are considered).
- **max_question_len** (`int`, *optional*) --
  The maximum length of the question after tokenization. It will be truncated if needed.
- **max_seq_len** (`int`, *optional*) --
  The maximum length of the total sentence (context + question) in tokens of each chunk passed to the
  model. The context will be split in several chunks (using docStride as overlap) if needed.
- **top_k** (`int`, *optional*) --
  The number of answers to return (will be chosen by order of likelihood). Note that we return less than
  topk answers if there are not enough options available within the context.</paramsdesc><paramgroups>0</paramgroups><rettype>Union[`QuestionAnsweringOutputElement`, list[QuestionAnsweringOutputElement](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.QuestionAnsweringOutputElement)]</rettype><retdesc>When top_k is 1 or not provided, it returns a single `QuestionAnsweringOutputElement`.
When top_k is greater than 1, it returns a list of `QuestionAnsweringOutputElement`.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Retrieve the answer to a question from a given text.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.question_answering.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.question_answering(question="What's my name?", context="My name is Clara and I live in Berkeley.")
QuestionAnsweringOutputElement(answer='Clara', end=16, score=0.9326565265655518, start=11)
```

</ExampleCodeBlock>
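
Because the return type depends on `top_k` (a single element when `top_k` is 1 or unset, a list otherwise), downstream code may want to normalize the result. A minimal sketch, with plain dicts standing in for `QuestionAnsweringOutputElement` objects:

```py
def as_answer_list(result):
    # question_answering returns a single element when top_k is 1 or not
    # provided, and a list of elements when top_k > 1; normalize to a list.
    return result if isinstance(result, list) else [result]

# Stand-in values, not real model output.
single = {"answer": "Clara", "score": 0.93}
several = [{"answer": "Clara", "score": 0.93}, {"answer": "Berkeley", "score": 0.02}]

print(as_answer_list(single))   # wrapped in a one-element list
print(as_answer_list(several))  # returned unchanged
```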


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>sentence_similarity</name><anchor>huggingface_hub.AsyncInferenceClient.sentence_similarity</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L1639</source><parameters>[{"name": "sentence", "val": ": str"}, {"name": "other_sentences", "val": ": list"}, {"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **sentence** (`str`) --
  The main sentence to compare to others.
- **other_sentences** (`list[str]`) --
  The list of sentences to compare to.
- **model** (`str`, *optional*) --
  The model to use for the sentence similarity task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended sentence similarity model will be used.
  Defaults to None.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[float]`</rettype><retdesc>The embedding representing the input text.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Compute the semantic similarity between a sentence and a list of other sentences by comparing their embeddings.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.sentence_similarity.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.sentence_similarity(
...     "Machine learning is so easy.",
...     other_sentences=[
...         "Deep learning is so straightforward.",
...         "This is so difficult, like rocket science.",
...         "I can't believe how much I struggled with this.",
...     ],
... )
[0.7785726189613342, 0.45876261591911316, 0.2906220555305481]
```

</ExampleCodeBlock>
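
Since the returned scores are ordered like `other_sentences`, finding the closest sentence is a simple argmax. A small sketch over the scores from the example above (`most_similar` is a hypothetical helper, not part of the library):

```py
def most_similar(candidates, scores):
    # sentence_similarity returns one float per candidate, in the same order;
    # pick the candidate with the highest similarity score.
    best = max(range(len(scores)), key=scores.__getitem__)
    return candidates[best]

candidates = [
    "Deep learning is so straightforward.",
    "This is so difficult, like rocket science.",
    "I can't believe how much I struggled with this.",
]
scores = [0.7785726189613342, 0.45876261591911316, 0.2906220555305481]
print(most_similar(candidates, scores))  # "Deep learning is so straightforward."
```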


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>summarization</name><anchor>huggingface_hub.AsyncInferenceClient.summarization</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L1693</source><parameters>[{"name": "text", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "clean_up_tokenization_spaces", "val": ": typing.Optional[bool] = None"}, {"name": "generate_parameters", "val": ": typing.Optional[dict[str, typing.Any]] = None"}, {"name": "truncation", "val": ": typing.Optional[ForwardRef('SummarizationTruncationStrategy')] = None"}]</parameters><paramsdesc>- **text** (`str`) --
  The input text to summarize.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. If not provided, the default recommended model for summarization will be used.
- **clean_up_tokenization_spaces** (`bool`, *optional*) --
  Whether to clean up the potential extra spaces in the text output.
- **generate_parameters** (`dict[str, Any]`, *optional*) --
  Additional parametrization of the text generation algorithm.
- **truncation** (`"SummarizationTruncationStrategy"`, *optional*) --
  The truncation strategy to use.</paramsdesc><paramgroups>0</paramgroups><rettype>[SummarizationOutput](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.SummarizationOutput)</rettype><retdesc>The generated summary text.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Generate a summary of a given text using a specified model.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.summarization.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.summarization("The Eiffel tower...")
SummarizationOutput(generated_text="The Eiffel tower is one of the most famous landmarks in the world....")
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>table_question_answering</name><anchor>huggingface_hub.AsyncInferenceClient.table_question_answering</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L1752</source><parameters>[{"name": "table", "val": ": dict"}, {"name": "query", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "padding", "val": ": typing.Optional[ForwardRef('Padding')] = None"}, {"name": "sequential", "val": ": typing.Optional[bool] = None"}, {"name": "truncation", "val": ": typing.Optional[bool] = None"}]</parameters><paramsdesc>- **table** (`dict[str, Any]`) --
  A table of data represented as a dict of lists, where keys are the column headers and each list holds
  that column's values; all lists must have the same length.
- **query** (`str`) --
  The query in plain text that you want to ask the table.
- **model** (`str`) --
  The model to use for the table-question-answering task. Can be a model ID hosted on the Hugging Face
  Hub or a URL to a deployed Inference Endpoint.
- **padding** (`"Padding"`, *optional*) --
  Activates and controls padding.
- **sequential** (`bool`, *optional*) --
  Whether to do inference sequentially or as a batch. Batching is faster, but models like SQA require the
  inference to be done sequentially to extract relations within sequences, given their conversational
  nature.
- **truncation** (`bool`, *optional*) --
  Activates and controls truncation.</paramsdesc><paramgroups>0</paramgroups><rettype>[TableQuestionAnsweringOutputElement](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.TableQuestionAnsweringOutputElement)</rettype><retdesc>a table question answering output containing the answer, coordinates, cells and the aggregator used.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Retrieve the answer to a question from information given in a table.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.table_question_answering.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> query = "How many stars does the transformers repository have?"
>>> table = {"Repository": ["Transformers", "Datasets", "Tokenizers"], "Stars": ["36542", "4512", "3934"]}
>>> await client.table_question_answering(table, query, model="google/tapas-base-finetuned-wtq")
TableQuestionAnsweringOutputElement(answer='36542', coordinates=[[0, 1]], cells=['36542'], aggregator='AVERAGE')
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>tabular_classification</name><anchor>huggingface_hub.AsyncInferenceClient.tabular_classification</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L1815</source><parameters>[{"name": "table", "val": ": dict"}, {"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **table** (`dict[str, Any]`) --
  Set of attributes to classify.
- **model** (`str`, *optional*) --
  The model to use for the tabular classification task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended tabular classification model will be used.
  Defaults to None.</paramsdesc><paramgroups>0</paramgroups><rettype>`List`</rettype><retdesc>a list of labels, one per row in the initial table.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Classify a target category (a group) based on a set of attributes.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.tabular_classification.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> table = {
...     "fixed_acidity": ["7.4", "7.8", "10.3"],
...     "volatile_acidity": ["0.7", "0.88", "0.32"],
...     "citric_acid": ["0", "0", "0.45"],
...     "residual_sugar": ["1.9", "2.6", "6.4"],
...     "chlorides": ["0.076", "0.098", "0.073"],
...     "free_sulfur_dioxide": ["11", "25", "5"],
...     "total_sulfur_dioxide": ["34", "67", "13"],
...     "density": ["0.9978", "0.9968", "0.9976"],
...     "pH": ["3.51", "3.2", "3.23"],
...     "sulphates": ["0.56", "0.68", "0.82"],
...     "alcohol": ["9.4", "9.8", "12.6"],
... }
>>> await client.tabular_classification(table=table, model="julien-c/wine-quality")
["5", "5", "5"]
```

</ExampleCodeBlock>
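
The tabular tasks expect a column-oriented `table`: a dict mapping each header to a list of values, all lists of equal length. If your data is row-oriented, a small conversion helper may be handy (`rows_to_table` is a hypothetical name, not part of the library):

```py
def rows_to_table(rows):
    # Convert row-oriented records into the column-oriented dict that the
    # tabular_* tasks expect: {header: [value for each row, ...], ...}.
    table = {}
    for row in rows:
        for key, value in row.items():
            table.setdefault(key, []).append(value)
    return table

rows = [
    {"pH": "3.51", "alcohol": "9.4"},
    {"pH": "3.2", "alcohol": "9.8"},
]
print(rows_to_table(rows))  # {'pH': ['3.51', '3.2'], 'alcohol': ['9.4', '9.8']}
```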


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>tabular_regression</name><anchor>huggingface_hub.AsyncInferenceClient.tabular_regression</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L1871</source><parameters>[{"name": "table", "val": ": dict"}, {"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **table** (`dict[str, Any]`) --
  Set of attributes stored in a table. The attributes used to predict the target can be both numerical and categorical.
- **model** (`str`, *optional*) --
  The model to use for the tabular regression task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended tabular regression model will be used.
  Defaults to None.</paramsdesc><paramgroups>0</paramgroups><rettype>`List`</rettype><retdesc>a list of predicted numerical target values.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Predict a numerical target value given a set of attributes/features in a table.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.tabular_regression.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> table = {
...     "Height": ["11.52", "12.48", "12.3778"],
...     "Length1": ["23.2", "24", "23.9"],
...     "Length2": ["25.4", "26.3", "26.5"],
...     "Length3": ["30", "31.2", "31.1"],
...     "Species": ["Bream", "Bream", "Bream"],
...     "Width": ["4.02", "4.3056", "4.6961"],
... }
>>> await client.tabular_regression(table, model="scikit-learn/Fish-Weight")
[110, 120, 130]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>text_classification</name><anchor>huggingface_hub.AsyncInferenceClient.text_classification</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L1922</source><parameters>[{"name": "text", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}, {"name": "function_to_apply", "val": ": typing.Optional[ForwardRef('TextClassificationOutputTransform')] = None"}]</parameters><paramsdesc>- **text** (`str`) --
  A string to be classified.
- **model** (`str`, *optional*) --
  The model to use for the text classification task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended text classification model will be used.
  Defaults to None.
- **top_k** (`int`, *optional*) --
  When specified, limits the output to the top K most probable classes.
- **function_to_apply** (`"TextClassificationOutputTransform"`, *optional*) --
  The function to apply to the model outputs in order to retrieve the scores.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[TextClassificationOutputElement]`</rettype><retdesc>a list of [TextClassificationOutputElement](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.TextClassificationOutputElement) items containing the predicted label and associated probability.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Perform text classification (e.g. sentiment-analysis) on the given text.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.text_classification.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.text_classification("I like you")
[
    TextClassificationOutputElement(label='POSITIVE', score=0.9998695850372314),
    TextClassificationOutputElement(label='NEGATIVE', score=0.0001304351753788069),
]
```

</ExampleCodeBlock>
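
To get just the most likely label, take the element with the highest score. A minimal sketch, with plain dicts standing in for the returned `TextClassificationOutputElement` objects:

```py
def top_label(elements):
    # Return the label with the highest predicted probability.
    return max(elements, key=lambda e: e["score"])["label"]

# Stand-in values mirroring the example output above.
elements = [
    {"label": "POSITIVE", "score": 0.9998695850372314},
    {"label": "NEGATIVE", "score": 0.0001304351753788069},
]
print(top_label(elements))  # POSITIVE
```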


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>text_generation</name><anchor>huggingface_hub.AsyncInferenceClient.text_generation</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L2131</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "details", "val": ": typing.Optional[bool] = None"}, {"name": "stream", "val": ": typing.Optional[bool] = None"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "adapter_id", "val": ": typing.Optional[str] = None"}, {"name": "best_of", "val": ": typing.Optional[int] = None"}, {"name": "decoder_input_details", "val": ": typing.Optional[bool] = None"}, {"name": "do_sample", "val": ": typing.Optional[bool] = None"}, {"name": "frequency_penalty", "val": ": typing.Optional[float] = None"}, {"name": "grammar", "val": ": typing.Optional[huggingface_hub.inference._generated.types.text_generation.TextGenerationInputGrammarType] = None"}, {"name": "max_new_tokens", "val": ": typing.Optional[int] = None"}, {"name": "repetition_penalty", "val": ": typing.Optional[float] = None"}, {"name": "return_full_text", "val": ": typing.Optional[bool] = None"}, {"name": "seed", "val": ": typing.Optional[int] = None"}, {"name": "stop", "val": ": typing.Optional[list[str]] = None"}, {"name": "stop_sequences", "val": ": typing.Optional[list[str]] = None"}, {"name": "temperature", "val": ": typing.Optional[float] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}, {"name": "top_n_tokens", "val": ": typing.Optional[int] = None"}, {"name": "top_p", "val": ": typing.Optional[float] = None"}, {"name": "truncate", "val": ": typing.Optional[int] = None"}, {"name": "typical_p", "val": ": typing.Optional[float] = None"}, {"name": "watermark", "val": ": typing.Optional[bool] = None"}]</parameters><paramsdesc>- **prompt** (`str`) --
  Input text.
- **details** (`bool`, *optional*) --
  By default, text_generation returns a string. Pass `details=True` if you want a detailed output (tokens,
  probabilities, seed, finish reason, etc.). Only available for models running with the
  `text-generation-inference` backend.
- **stream** (`bool`, *optional*) --
  By default, text_generation returns the full generated text. Pass `stream=True` if you want a stream of
  tokens to be returned. Only available for models running with the `text-generation-inference`
  backend.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None.
- **adapter_id** (`str`, *optional*) --
  Lora adapter id.
- **best_of** (`int`, *optional*) --
  Generate best_of sequences and return the one with the highest token logprobs.
- **decoder_input_details** (`bool`, *optional*) --
  Return the decoder input token logprobs and ids. You must set `details=True` as well for it to be taken
  into account. Defaults to `False`.
- **do_sample** (`bool`, *optional*) --
  Activate logits sampling
- **frequency_penalty** (`float`, *optional*) --
  Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in
  the text so far, decreasing the model's likelihood to repeat the same line verbatim.
- **grammar** ([TextGenerationInputGrammarType](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.TextGenerationInputGrammarType), *optional*) --
  Grammar constraints. Can be either a JSONSchema or a regex.
- **max_new_tokens** (`int`, *optional*) --
  Maximum number of generated tokens. Defaults to 100.
- **repetition_penalty** (`float`, *optional*) --
  The parameter for repetition penalty. 1.0 means no penalty. See [this
  paper](https://arxiv.org/pdf/1909.05858.pdf) for more details.
- **return_full_text** (`bool`, *optional*) --
  Whether to prepend the prompt to the generated text
- **seed** (`int`, *optional*) --
  Random sampling seed
- **stop** (`list[str]`, *optional*) --
  Stop generating tokens if a member of `stop` is generated.
- **stop_sequences** (`list[str]`, *optional*) --
  Deprecated argument. Use `stop` instead.
- **temperature** (`float`, *optional*) --
  The value used to module the logits distribution.
- **top_n_tokens** (`int`, *optional*) --
  Return information about the `top_n_tokens` most likely tokens at each generation step, instead of
  just the sampled token.
- **top_k** (`int`, *optional*) --
  The number of highest probability vocabulary tokens to keep for top-k filtering.
- **top_p** (`float`, *optional*) --
  If set to < 1, only the smallest set of most probable tokens with probabilities that add up to `top_p` or
  higher are kept for generation.
- **truncate** (`int`, *optional*) --
  Truncate input tokens to the given size.
- **typical_p** (`float`, *optional*) --
  Typical decoding mass. See [Typical Decoding for Natural Language Generation](https://arxiv.org/abs/2202.00666) for more information.
- **watermark** (`bool`, *optional*) --
  Watermarking with [A Watermark for Large Language Models](https://arxiv.org/abs/2301.10226)</paramsdesc><paramgroups>0</paramgroups><rettype>`Union[str, TextGenerationOutput, AsyncIterable[str], AsyncIterable[TextGenerationStreamOutput]]`</rettype><retdesc>Generated text returned from the server:
- if `stream=False` and `details=False`, the generated text is returned as a `str` (default)
- if `stream=True` and `details=False`, the generated text is returned token by token as an `AsyncIterable[str]`
- if `stream=False` and `details=True`, the generated text is returned with more details as a [TextGenerationOutput](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.TextGenerationOutput)
- if `details=True` and `stream=True`, the generated text is returned token by token as an iterable of [TextGenerationStreamOutput](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.TextGenerationStreamOutput)</retdesc><raises>- ``ValidationError`` -- 
  If input values are not valid. No HTTP call is made to the server.
- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>``ValidationError`` or [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Given a prompt, generate the following text.

> [!TIP]
> If you want to generate a response from chat messages, you should use the [InferenceClient.chat_completion()](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient.chat_completion) method.
> It accepts a list of messages instead of a single text prompt and handles the chat templating for you.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.text_generation.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()

# Case 1: generate text
>>> await client.text_generation("The huggingface_hub library is ", max_new_tokens=12)
'100% open source and built to be easy to use.'

# Case 2: iterate over the generated tokens. Useful for large generation.
>>> async for token in await client.text_generation("The huggingface_hub library is ", max_new_tokens=12, stream=True):
...     print(token)
100
%
open
source
and
built
to
be
easy
to
use
.

# Case 3: get more details about the generation process.
>>> await client.text_generation("The huggingface_hub library is ", max_new_tokens=12, details=True)
TextGenerationOutput(
    generated_text='100% open source and built to be easy to use.',
    details=TextGenerationDetails(
        finish_reason='length',
        generated_tokens=12,
        seed=None,
        prefill=[
            TextGenerationPrefillOutputToken(id=487, text='The', logprob=None),
            TextGenerationPrefillOutputToken(id=53789, text=' hugging', logprob=-13.171875),
            (...)
            TextGenerationPrefillOutputToken(id=204, text=' ', logprob=-7.0390625)
        ],
        tokens=[
            TokenElement(id=1425, text='100', logprob=-1.0175781, special=False),
            TokenElement(id=16, text='%', logprob=-0.0463562, special=False),
            (...)
            TokenElement(id=25, text='.', logprob=-0.5703125, special=False)
        ],
        best_of_sequences=None
    )
)

# Case 4: iterate over the generated tokens with more details.
# Last object is more complete, containing the full generated text and the finish reason.
>>> async for details in await client.text_generation("The huggingface_hub library is ", max_new_tokens=12, details=True, stream=True):
...     print(details)
...
TextGenerationStreamOutput(token=TokenElement(id=1425, text='100', logprob=-1.0175781, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=16, text='%', logprob=-0.0463562, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=1314, text=' open', logprob=-1.3359375, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=3178, text=' source', logprob=-0.28100586, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=273, text=' and', logprob=-0.5961914, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=3426, text=' built', logprob=-1.9423828, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=271, text=' to', logprob=-1.4121094, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=314, text=' be', logprob=-1.5224609, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=1833, text=' easy', logprob=-2.1132812, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=271, text=' to', logprob=-0.08520508, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(id=745, text=' use', logprob=-0.39453125, special=False), generated_text=None, details=None)
TextGenerationStreamOutput(token=TokenElement(
    id=25,
    text='.',
    logprob=-0.5703125,
    special=False),
    generated_text='100% open source and built to be easy to use.',
    details=TextGenerationStreamOutputStreamDetails(finish_reason='length', generated_tokens=12, seed=None)
)

# Case 5: generate constrained output using grammar
>>> response = await client.text_generation(
...     prompt="I saw a puppy a cat and a raccoon during my bike ride in the park",
...     model="HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1",
...     max_new_tokens=100,
...     repetition_penalty=1.3,
...     grammar={
...         "type": "json",
...         "value": {
...             "properties": {
...                 "location": {"type": "string"},
...                 "activity": {"type": "string"},
...                 "animals_seen": {"type": "integer", "minimum": 1, "maximum": 5},
...                 "animals": {"type": "array", "items": {"type": "string"}},
...             },
...             "required": ["location", "activity", "animals_seen", "animals"],
...         },
...     },
... )
>>> import json
>>> json.loads(response)
{
    "activity": "bike riding",
    "animals": ["puppy", "cat", "raccoon"],
    "animals_seen": 3,
    "location": "park"
}
```

</ExampleCodeBlock>
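The `top_k`, `top_p`, and `temperature` parameters above control standard logits filtering on the server. As a rough local sketch of the nucleus (`top_p`) rule — keep the smallest set of most probable tokens whose probabilities add up to `top_p` or higher, then renormalize — here is a plain-Python illustration (for intuition only, not the actual text-generation-inference implementation):

```py
def top_p_filter(probs, top_p):
    """Keep the smallest set of highest-probability tokens whose cumulative
    probability reaches top_p, then renormalize. Illustrative only."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = {}, 0.0
    for token, p in ranked:
        kept[token] = p
        cumulative += p
        if cumulative >= top_p:
            break
    total = sum(kept.values())
    return {token: p / total for token, p in kept.items()}

# Hypothetical next-token distribution
probs = {"use": 0.5, "be": 0.25, "run": 0.15, "fly": 0.1}
print(top_p_filter(probs, 0.7))  # keeps "use" and "be", renormalized
```

The sampled token is then drawn from the renormalized subset, which is why low `top_p` values make generation more conservative.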


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>text_to_image</name><anchor>huggingface_hub.AsyncInferenceClient.text_to_image</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L2471</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "guidance_scale", "val": ": typing.Optional[float] = None"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "scheduler", "val": ": typing.Optional[str] = None"}, {"name": "seed", "val": ": typing.Optional[int] = None"}, {"name": "extra_body", "val": ": typing.Optional[dict[str, typing.Any]] = None"}]</parameters><paramsdesc>- **prompt** (`str`) --
  The prompt to generate an image from.
- **negative_prompt** (`str`, *optional*) --
  One prompt to guide what NOT to include in image generation.
- **height** (`int`, *optional*) --
  The height in pixels of the output image.
- **width** (`int`, *optional*) --
  The width in pixels of the output image.
- **num_inference_steps** (`int`, *optional*) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  prompt, but values too high may cause saturation and other artifacts.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. If not provided, the default recommended text-to-image model will be used.
  Defaults to None.
- **scheduler** (`str`, *optional*) --
  Override the scheduler with a compatible one.
- **seed** (`int`, *optional*) --
  Seed for the random number generator.
- **extra_body** (`dict[str, Any]`, *optional*) --
  Additional provider-specific parameters to pass to the model. Refer to the provider's documentation
  for supported parameters.</paramsdesc><paramgroups>0</paramgroups><rettype>`Image`</rettype><retdesc>The generated image.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Generate an image based on a given text using a specified model.

> [!WARNING]
> You must have `PIL` installed if you want to work with images (`pip install Pillow`).

> [!TIP]
> You can pass provider-specific parameters to the model by using the `extra_body` argument.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.text_to_image.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()

>>> image = await client.text_to_image("An astronaut riding a horse on the moon.")
>>> image.save("astronaut.png")

>>> image = await client.text_to_image(
...     "An astronaut riding a horse on the moon.",
...     negative_prompt="low resolution, blurry",
...     model="stabilityai/stable-diffusion-2-1",
... )
>>> image.save("better_astronaut.png")
```

</ExampleCodeBlock>
Example using a third-party provider directly. Usage will be billed on your fal.ai account.
<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.text_to_image.example-2">

```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="fal-ai",  # Use fal.ai provider
...     api_key="fal-ai-api-key",  # Pass your fal.ai API key
... )
>>> image = client.text_to_image(
...     "A majestic lion in a fantasy forest",
...     model="black-forest-labs/FLUX.1-schnell",
... )
>>> image.save("lion.png")
```

</ExampleCodeBlock>

Example using a third-party provider through Hugging Face Routing. Usage will be billed on your Hugging Face account.
<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.text_to_image.example-3">

```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="replicate",  # Use replicate provider
...     api_key="hf_...",  # Pass your HF token
... )
>>> image = client.text_to_image(
...     "An astronaut riding a horse on the moon.",
...     model="black-forest-labs/FLUX.1-dev",
... )
>>> image.save("astronaut.png")
```

</ExampleCodeBlock>

Example using Replicate provider with extra parameters
<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.text_to_image.example-4">

```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="replicate",  # Use replicate provider
...     api_key="hf_...",  # Pass your HF token
... )
>>> image = client.text_to_image(
...     "An astronaut riding a horse on the moon.",
...     model="black-forest-labs/FLUX.1-schnell",
...     extra_body={"output_quality": 100},
... )
>>> image.save("astronaut.png")
```

</ExampleCodeBlock>
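The `extra_body` argument used above simply forwards provider-specific keys alongside the regular parameters. Conceptually it is a merge into the request payload before serialization; the sketch below illustrates that idea with a hypothetical payload shape (the real serialization lives inside `huggingface_hub` and differs per provider):

```py
def build_payload(prompt, parameters=None, extra_body=None):
    # Hypothetical request shape for illustration only: provider-specific
    # keys from extra_body are merged into the regular parameters.
    params = dict(parameters or {})
    params.update(extra_body or {})
    return {"prompt": prompt, "parameters": params}

payload = build_payload(
    "An astronaut riding a horse on the moon.",
    parameters={"num_inference_steps": 4},
    extra_body={"output_quality": 100},
)
print(payload)
```

Because the extra keys are passed through unvalidated, always check the provider's documentation for which keys are actually supported.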



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>text_to_speech</name><anchor>huggingface_hub.AsyncInferenceClient.text_to_speech</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L2709</source><parameters>[{"name": "text", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "do_sample", "val": ": typing.Optional[bool] = None"}, {"name": "early_stopping", "val": ": typing.Union[bool, ForwardRef('TextToSpeechEarlyStoppingEnum'), NoneType] = None"}, {"name": "epsilon_cutoff", "val": ": typing.Optional[float] = None"}, {"name": "eta_cutoff", "val": ": typing.Optional[float] = None"}, {"name": "max_length", "val": ": typing.Optional[int] = None"}, {"name": "max_new_tokens", "val": ": typing.Optional[int] = None"}, {"name": "min_length", "val": ": typing.Optional[int] = None"}, {"name": "min_new_tokens", "val": ": typing.Optional[int] = None"}, {"name": "num_beam_groups", "val": ": typing.Optional[int] = None"}, {"name": "num_beams", "val": ": typing.Optional[int] = None"}, {"name": "penalty_alpha", "val": ": typing.Optional[float] = None"}, {"name": "temperature", "val": ": typing.Optional[float] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}, {"name": "top_p", "val": ": typing.Optional[float] = None"}, {"name": "typical_p", "val": ": typing.Optional[float] = None"}, {"name": "use_cache", "val": ": typing.Optional[bool] = None"}, {"name": "extra_body", "val": ": typing.Optional[dict[str, typing.Any]] = None"}]</parameters><paramsdesc>- **text** (`str`) --
  The text to synthesize.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. If not provided, the default recommended text-to-speech model will be used.
  Defaults to None.
- **do_sample** (`bool`, *optional*) --
  Whether to use sampling instead of greedy decoding when generating new tokens.
- **early_stopping** (`Union[bool, "TextToSpeechEarlyStoppingEnum"]`, *optional*) --
  Controls the stopping condition for beam-based methods.
- **epsilon_cutoff** (`float`, *optional*) --
  If set to float strictly between 0 and 1, only tokens with a conditional probability greater than
  epsilon_cutoff will be sampled. In the paper, suggested values range from 3e-4 to 9e-4, depending on
  the size of the model. See [Truncation Sampling as Language Model
  Desmoothing](https://hf.co/papers/2210.15191) for more details.
- **eta_cutoff** (`float`, *optional*) --
  Eta sampling is a hybrid of locally typical sampling and epsilon sampling. If set to float strictly
  between 0 and 1, a token is only considered if it is greater than either eta_cutoff or sqrt(eta_cutoff)
  * exp(-entropy(softmax(next_token_logits))). The latter term is intuitively the expected next token
  probability, scaled by sqrt(eta_cutoff). In the paper, suggested values range from 3e-4 to 2e-3,
  depending on the size of the model. See [Truncation Sampling as Language Model
  Desmoothing](https://hf.co/papers/2210.15191) for more details.
- **max_length** (`int`, *optional*) --
  The maximum length (in tokens) of the generated text, including the input.
- **max_new_tokens** (`int`, *optional*) --
  The maximum number of tokens to generate. Takes precedence over max_length.
- **min_length** (`int`, *optional*) --
  The minimum length (in tokens) of the generated text, including the input.
- **min_new_tokens** (`int`, *optional*) --
  The minimum number of tokens to generate. Takes precedence over min_length.
- **num_beam_groups** (`int`, *optional*) --
  Number of groups to divide num_beams into in order to ensure diversity among different groups of beams.
  See [this paper](https://hf.co/papers/1610.02424) for more details.
- **num_beams** (`int`, *optional*) --
  Number of beams to use for beam search.
- **penalty_alpha** (`float`, *optional*) --
  The value balances the model confidence and the degeneration penalty in contrastive search decoding.
- **temperature** (`float`, *optional*) --
  The value used to modulate the next token probabilities.
- **top_k** (`int`, *optional*) --
  The number of highest probability vocabulary tokens to keep for top-k-filtering.
- **top_p** (`float`, *optional*) --
  If set to float < 1, only the smallest set of most probable tokens with probabilities that add up to
  top_p or higher are kept for generation.
- **typical_p** (`float`, *optional*) --
  Local typicality measures how similar the conditional probability of predicting a target token next is
  to the expected conditional probability of predicting a random token next, given the partial text
  already generated. If set to float < 1, the smallest set of the most locally typical tokens with
  probabilities that add up to typical_p or higher are kept for generation. See [this
  paper](https://hf.co/papers/2202.00666) for more details.
- **use_cache** (`bool`, *optional*) --
  Whether the model should use the past key/value attentions to speed up decoding.
- **extra_body** (`dict[str, Any]`, *optional*) --
  Additional provider-specific parameters to pass to the model. Refer to the provider's documentation
  for supported parameters.</paramsdesc><paramgroups>0</paramgroups><rettype>`bytes`</rettype><retdesc>The generated audio.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Synthesize audio of a voice pronouncing a given text.

> [!TIP]
> You can pass provider-specific parameters to the model by using the `extra_body` argument.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.text_to_speech.example">

Example:
```py
# Must be run in an async context
>>> from pathlib import Path
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()

>>> audio = await client.text_to_speech("Hello world")
>>> Path("hello_world.flac").write_bytes(audio)
```

</ExampleCodeBlock>

Example using a third-party provider directly. Usage will be billed on your Replicate account.
<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.text_to_speech.example-2">

```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="replicate",
...     api_key="your-replicate-api-key",  # Pass your Replicate API key directly
... )
>>> audio = client.text_to_speech(
...     text="Hello world",
...     model="OuteAI/OuteTTS-0.3-500M",
... )
>>> Path("hello_world.flac").write_bytes(audio)
```

</ExampleCodeBlock>

Example using a third-party provider through Hugging Face Routing. Usage will be billed on your Hugging Face account.
<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.text_to_speech.example-3">

```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="replicate",
...     api_key="hf_...",  # Pass your HF token
... )
>>> audio = client.text_to_speech(
...     text="Hello world",
...     model="OuteAI/OuteTTS-0.3-500M",
... )
>>> Path("hello_world.flac").write_bytes(audio)
```

</ExampleCodeBlock>
Example using Replicate provider with extra parameters
<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.text_to_speech.example-4">

```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="replicate",  # Use replicate provider
...     api_key="hf_...",  # Pass your HF token
... )
>>> audio = client.text_to_speech(
...     "Hello, my name is Kororo, an awesome text-to-speech model.",
...     model="hexgrad/Kokoro-82M",
...     extra_body={"voice": "af_nicole"},
... )
>>> Path("hello.flac").write_bytes(audio)
```

</ExampleCodeBlock>

Example of music generation using "YuE-s1-7B-anneal-en-cot" on fal.ai
<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.text_to_speech.example-5">

```py
>>> from huggingface_hub import InferenceClient
>>> lyrics = '''
... [verse]
... In the town where I was born
... Lived a man who sailed to sea
... And he told us of his life
... In the land of submarines
... So we sailed on to the sun
... 'Til we found a sea of green
... And we lived beneath the waves
... In our yellow submarine
... 
... [chorus]
... We all live in a yellow submarine
... Yellow submarine, yellow submarine
... We all live in a yellow submarine
... Yellow submarine, yellow submarine
... '''
>>> genres = "pavarotti-style tenor voice"
>>> client = InferenceClient(
...     provider="fal-ai",
...     model="m-a-p/YuE-s1-7B-anneal-en-cot",
...     api_key=...,
... )
>>> audio = client.text_to_speech(lyrics, extra_body={"genres": genres})
>>> with open("output.mp3", "wb") as f:
...     f.write(audio)
```

</ExampleCodeBlock>
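The examples above hard-code a `.flac` or `.mp3` extension, but `text_to_speech` returns raw `bytes` whose container format depends on the model and provider. A small helper that sniffs well-known audio magic bytes can pick a matching extension (a heuristic sketch, not part of `huggingface_hub`):

```py
def audio_extension(data: bytes) -> str:
    """Guess a file extension from well-known audio magic bytes."""
    if data.startswith(b"fLaC"):
        return ".flac"
    if data.startswith(b"RIFF") and data[8:12] == b"WAVE":
        return ".wav"
    if data.startswith(b"ID3") or data[:2] in (b"\xff\xfb", b"\xff\xf3"):
        return ".mp3"
    if data.startswith(b"OggS"):
        return ".ogg"
    return ".bin"  # unknown container; save as-is
```

For example, `Path("speech" + audio_extension(audio)).write_bytes(audio)` saves the output with an extension that matches its actual format.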


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>text_to_video</name><anchor>huggingface_hub.AsyncInferenceClient.text_to_video</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L2612</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "guidance_scale", "val": ": typing.Optional[float] = None"}, {"name": "negative_prompt", "val": ": typing.Optional[list[str]] = None"}, {"name": "num_frames", "val": ": typing.Optional[float] = None"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "seed", "val": ": typing.Optional[int] = None"}, {"name": "extra_body", "val": ": typing.Optional[dict[str, typing.Any]] = None"}]</parameters><paramsdesc>- **prompt** (`str`) --
  The prompt to generate a video from.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. If not provided, the default recommended text-to-video model will be used.
  Defaults to None.
- **guidance_scale** (`float`, *optional*) --
  A higher guidance scale value encourages the model to generate videos closely linked to the text
  prompt, but values too high may cause saturation and other artifacts.
- **negative_prompt** (`list[str]`, *optional*) --
  One or several prompts to guide what NOT to include in video generation.
- **num_frames** (`float`, *optional*) --
  The number of video frames to generate.
- **num_inference_steps** (`int`, *optional*) --
  The number of denoising steps. More denoising steps usually lead to a higher quality video at the
  expense of slower inference.
- **seed** (`int`, *optional*) --
  Seed for the random number generator.
- **extra_body** (`dict[str, Any]`, *optional*) --
  Additional provider-specific parameters to pass to the model. Refer to the provider's documentation
  for supported parameters.</paramsdesc><paramgroups>0</paramgroups><rettype>`bytes`</rettype><retdesc>The generated video.</retdesc></docstring>

Generate a video based on a given text.

> [!TIP]
> You can pass provider-specific parameters to the model by using the `extra_body` argument.







Example:

Example using a third-party provider directly. Usage will be billed on your fal.ai account.
<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.text_to_video.example">

```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="fal-ai",  # Using fal.ai provider
...     api_key="fal-ai-api-key",  # Pass your fal.ai API key
... )
>>> video = client.text_to_video(
...     "A majestic lion running in a fantasy forest",
...     model="tencent/HunyuanVideo",
... )
>>> with open("lion.mp4", "wb") as file:
...     file.write(video)
```

</ExampleCodeBlock>

Example using a third-party provider through Hugging Face Routing. Usage will be billed on your Hugging Face account.
<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.text_to_video.example-2">

```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="replicate",  # Using replicate provider
...     api_key="hf_...",  # Pass your HF token
... )
>>> video = client.text_to_video(
...     "A cat running in a park",
...     model="genmo/mochi-1-preview",
... )
>>> with open("cat.mp4", "wb") as file:
...     file.write(video)
```

</ExampleCodeBlock>



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>token_classification</name><anchor>huggingface_hub.AsyncInferenceClient.token_classification</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L2918</source><parameters>[{"name": "text", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "aggregation_strategy", "val": ": typing.Optional[ForwardRef('TokenClassificationAggregationStrategy')] = None"}, {"name": "ignore_labels", "val": ": typing.Optional[list[str]] = None"}, {"name": "stride", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **text** (`str`) --
  A string to be classified.
- **model** (`str`, *optional*) --
  The model to use for the token classification task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended token classification model will be used.
  Defaults to None.
- **aggregation_strategy** (`"TokenClassificationAggregationStrategy"`, *optional*) --
  The strategy used to fuse tokens based on model predictions.
- **ignore_labels** (`list[str]`, *optional*) --
  A list of labels to ignore.
- **stride** (`int`, *optional*) --
  The number of overlapping tokens between chunks when splitting the input text.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[TokenClassificationOutputElement]`</rettype><retdesc>List of [TokenClassificationOutputElement](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.TokenClassificationOutputElement) items containing the entity group, confidence score, word, start and end index.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Perform token classification on the given text.
Usually used for sentence parsing, either grammatical or Named Entity Recognition (NER), to understand keywords contained within text.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.token_classification.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.token_classification("My name is Sarah Jessica Parker but you can call me Jessica")
[
    TokenClassificationOutputElement(
        entity_group='PER',
        score=0.9971321225166321,
        word='Sarah Jessica Parker',
        start=11,
        end=31,
    ),
    TokenClassificationOutputElement(
        entity_group='PER',
        score=0.9773476123809814,
        word='Jessica',
        start=52,
        end=59,
    )
]
```

</ExampleCodeBlock>
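Because each `TokenClassificationOutputElement` carries character offsets (`start`, `end`) into the input string, the detected entities are easy to render inline. The sketch below works from plain dicts mirroring those fields (an illustrative helper, not part of the library):

```py
def mark_entities(text, entities):
    """Wrap each entity span as [word](GROUP) using its character offsets.

    `entities` is a list of dicts with the same `start`, `end` and
    `entity_group` fields as TokenClassificationOutputElement.
    """
    parts, pos = [], 0
    for ent in sorted(entities, key=lambda e: e["start"]):
        parts.append(text[pos:ent["start"]])
        parts.append(f"[{text[ent['start']:ent['end']]}]({ent['entity_group']})")
        pos = ent["end"]
    parts.append(text[pos:])
    return "".join(parts)

text = "My name is Sarah Jessica Parker but you can call me Jessica"
entities = [
    {"start": 11, "end": 31, "entity_group": "PER"},
    {"start": 52, "end": 59, "entity_group": "PER"},
]
print(mark_entities(text, entities))
# My name is [Sarah Jessica Parker](PER) but you can call me [Jessica](PER)
```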


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>translation</name><anchor>huggingface_hub.AsyncInferenceClient.translation</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L2994</source><parameters>[{"name": "text", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "src_lang", "val": ": typing.Optional[str] = None"}, {"name": "tgt_lang", "val": ": typing.Optional[str] = None"}, {"name": "clean_up_tokenization_spaces", "val": ": typing.Optional[bool] = None"}, {"name": "truncation", "val": ": typing.Optional[ForwardRef('TranslationTruncationStrategy')] = None"}, {"name": "generate_parameters", "val": ": typing.Optional[dict[str, typing.Any]] = None"}]</parameters><paramsdesc>- **text** (`str`) --
  A string to be translated.
- **model** (`str`, *optional*) --
  The model to use for the translation task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended translation model will be used.
  Defaults to None.
- **src_lang** (`str`, *optional*) --
  The source language of the text. Required for models that can translate from multiple languages.
- **tgt_lang** (`str`, *optional*) --
  Target language to translate to. Required for models that can translate to multiple languages.
- **clean_up_tokenization_spaces** (`bool`, *optional*) --
  Whether to clean up the potential extra spaces in the text output.
- **truncation** (`"TranslationTruncationStrategy"`, *optional*) --
  The truncation strategy to use.
- **generate_parameters** (`dict[str, Any]`, *optional*) --
  Additional parametrization of the text generation algorithm.</paramsdesc><paramgroups>0</paramgroups><rettype>[TranslationOutput](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.TranslationOutput)</rettype><retdesc>The generated translated text.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.
- ``ValueError`` -- 
  If only one of the `src_lang` and `tgt_lang` arguments is provided.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError` or ``ValueError``</raisederrors>

Convert text from one language to another.

Check out https://huggingface.co/tasks/translation for more information on how to choose the best model for
your specific use case. Supported source and target languages depend on the model.
Some models can translate between multiple languages; when working with one of these,
pass the `src_lang` and `tgt_lang` arguments to specify the language pair.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.translation.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.translation("My name is Wolfgang and I live in Berlin")
'Mein Name ist Wolfgang und ich lebe in Berlin.'
>>> await client.translation("My name is Wolfgang and I live in Berlin", model="Helsinki-NLP/opus-mt-en-fr")
TranslationOutput(translation_text="Je m'appelle Wolfgang et je vis à Berlin.")
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.translation.example-2">

Specifying languages:
```py
>>> await client.translation("My name is Sarah Jessica Parker but you can call me Jessica", model="facebook/mbart-large-50-many-to-many-mmt", src_lang="en_XX", tgt_lang="fr_XX")
"Mon nom est Sarah Jessica Parker mais vous pouvez m'appeler Jessica"
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>visual_question_answering</name><anchor>huggingface_hub.AsyncInferenceClient.visual_question_answering</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L3084</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "question", "val": ": str"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "top_k", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO, PIL.Image.Image]`) --
  The input image for the context. It can be raw bytes, an image file, a URL to an online image, or a PIL Image.
- **question** (`str`) --
  Question to be answered.
- **model** (`str`, *optional*) --
  The model to use for the visual question answering task. Can be a model ID hosted on the Hugging Face Hub or a URL to
  a deployed Inference Endpoint. If not provided, the default recommended visual question answering model will be used.
  Defaults to None.
- **top_k** (`int`, *optional*) --
  The number of answers to return (will be chosen by order of likelihood). Note that fewer than
  `top_k` answers are returned if there are not enough options available within the context.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[VisualQuestionAnsweringOutputElement]`</rettype><retdesc>a list of [VisualQuestionAnsweringOutputElement](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.VisualQuestionAnsweringOutputElement) items containing the predicted label and associated probability.</retdesc><raises>- ``InferenceTimeoutError`` -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>``InferenceTimeoutError`` or `HfHubHTTPError`</raisederrors></docstring>

Answering open-ended questions based on an image.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.visual_question_answering.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.visual_question_answering(
...     image="https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg",
...     question="What is the animal doing?"
... )
[
    VisualQuestionAnsweringOutputElement(score=0.778609573841095, answer='laying down'),
    VisualQuestionAnsweringOutputElement(score=0.6957435607910156, answer='sitting'),
]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>zero_shot_classification</name><anchor>huggingface_hub.AsyncInferenceClient.zero_shot_classification</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L3144</source><parameters>[{"name": "text", "val": ": str"}, {"name": "candidate_labels", "val": ": list"}, {"name": "multi_label", "val": ": typing.Optional[bool] = False"}, {"name": "hypothesis_template", "val": ": typing.Optional[str] = None"}, {"name": "model", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **text** (`str`) --
  The input text to classify.
- **candidate_labels** (`list[str]`) --
  The set of possible class labels to classify the text into.
- **labels** (`list[str]`, *optional*) --
  (deprecated) List of strings. Each string is the verbalization of a possible label for the input text.
- **multi_label** (`bool`, *optional*) --
  Whether multiple candidate labels can be true. If false, the scores are normalized such that the sum of
  the label likelihoods for each sequence is 1. If true, the labels are considered independent and
  probabilities are normalized for each candidate.
- **hypothesis_template** (`str`, *optional*) --
  The sentence used in conjunction with `candidate_labels` to attempt the text classification by
  replacing the placeholder with the candidate labels.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. This parameter overrides the model defined at the instance level. If not provided, the default recommended zero-shot classification model will be used.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[ZeroShotClassificationOutputElement]`</rettype><retdesc>List of [ZeroShotClassificationOutputElement](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.ZeroShotClassificationOutputElement) items containing the predicted labels and their confidence.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Provide as input a text and a set of candidate labels to classify the input text.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.zero_shot_classification.example">

Example with `multi_label=False`:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> text = (
...     "A new model offers an explanation for how the Galilean satellites formed around the solar system's"
...     "largest world. Konstantin Batygin did not set out to solve one of the solar system's most puzzling"
...     " mysteries when he went for a run up a hill in Nice, France."
... )
>>> labels = ["space & cosmos", "scientific discovery", "microbiology", "robots", "archeology"]
>>> await client.zero_shot_classification(text, labels)
[
    ZeroShotClassificationOutputElement(label='scientific discovery', score=0.7961668968200684),
    ZeroShotClassificationOutputElement(label='space & cosmos', score=0.18570658564567566),
    ZeroShotClassificationOutputElement(label='microbiology', score=0.00730885099619627),
    ZeroShotClassificationOutputElement(label='archeology', score=0.006258360575884581),
    ZeroShotClassificationOutputElement(label='robots', score=0.004559356719255447),
]
>>> await client.zero_shot_classification(text, labels, multi_label=True)
[
    ZeroShotClassificationOutputElement(label='scientific discovery', score=0.9829297661781311),
    ZeroShotClassificationOutputElement(label='space & cosmos', score=0.755190908908844),
    ZeroShotClassificationOutputElement(label='microbiology', score=0.0005462635890580714),
    ZeroShotClassificationOutputElement(label='archeology', score=0.00047131875180639327),
    ZeroShotClassificationOutputElement(label='robots', score=0.00030448526376858354),
]
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.zero_shot_classification.example-2">

Example with `multi_label=True` and a custom `hypothesis_template`:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.zero_shot_classification(
...    text="I really like our dinner and I'm very happy. I don't like the weather though.",
...    labels=["positive", "negative", "pessimistic", "optimistic"],
...    multi_label=True,
...    hypothesis_template="This text is {} towards the weather"
... )
[
    ZeroShotClassificationOutputElement(label='negative', score=0.9231801629066467),
    ZeroShotClassificationOutputElement(label='pessimistic', score=0.8760990500450134),
    ZeroShotClassificationOutputElement(label='optimistic', score=0.0008674879791215062),
    ZeroShotClassificationOutputElement(label='positive', score=0.0005250611575320363)
]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>zero_shot_image_classification</name><anchor>huggingface_hub.AsyncInferenceClient.zero_shot_image_classification</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py#L3252</source><parameters>[{"name": "image", "val": ": typing.Union[bytes, typing.BinaryIO, str, pathlib.Path, ForwardRef('Image'), bytearray, memoryview]"}, {"name": "candidate_labels", "val": ": list"}, {"name": "model", "val": ": typing.Optional[str] = None"}, {"name": "hypothesis_template", "val": ": typing.Optional[str] = None"}, {"name": "labels", "val": ": list = None"}]</parameters><paramsdesc>- **image** (`Union[str, Path, bytes, BinaryIO, PIL.Image.Image]`) --
  The input image to caption. It can be raw bytes, an image file, a URL to an online image, or a PIL Image.
- **candidate_labels** (`list[str]`) --
  The candidate labels for this image.
- **labels** (`list[str]`, *optional*) --
  (deprecated) List of string possible labels. There must be at least 2 labels.
- **model** (`str`, *optional*) --
  The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
  Inference Endpoint. This parameter overrides the model defined at the instance level. If not provided, the default recommended zero-shot image classification model will be used.
- **hypothesis_template** (`str`, *optional*) --
  The sentence used in conjunction with `candidate_labels` to attempt the image classification by
  replacing the placeholder with the candidate labels.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[ZeroShotImageClassificationOutputElement]`</rettype><retdesc>List of [ZeroShotImageClassificationOutputElement](/docs/huggingface_hub/main/ko/package_reference/inference_types#huggingface_hub.ZeroShotImageClassificationOutputElement) items containing the predicted labels and their confidence.</retdesc><raises>- [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) -- 
  If the model is unavailable or the request times out.
- `HfHubHTTPError` -- 
  If the request fails with an HTTP error status code other than HTTP 503.</raises><raisederrors>[InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError) or `HfHubHTTPError`</raisederrors></docstring>

Provide input image and text labels to predict text labels for the image.











<ExampleCodeBlock anchor="huggingface_hub.AsyncInferenceClient.zero_shot_image_classification.example">

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()

>>> await client.zero_shot_image_classification(
...     "https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg",
...     labels=["dog", "cat", "horse"],
... )
[ZeroShotImageClassificationOutputElement(label='dog', score=0.956),...]
```

</ExampleCodeBlock>


</div></div>

## Inference Timeout Error[[huggingface_hub.InferenceTimeoutError]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.InferenceTimeoutError</name><anchor>huggingface_hub.InferenceTimeoutError</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/errors.py#L91</source><parameters>[{"name": "message", "val": ": str"}]</parameters></docstring>
Error raised when a model is unavailable or the request times out.

</div>

## Return Types[[return-types]]

For most tasks, the return value has a built-in type (string, list, image...). Here is a list of the more complex types.


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/package_reference/inference_client.md" />

### Environment Variables[[environment-variables]]
https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/environment_variables.md

# Environment Variables[[environment-variables]]

`huggingface_hub` can be configured using environment variables.

If you are unfamiliar with environment variables, here are generic articles about them
on [macOS and Linux](https://linuxize.com/post/how-to-set-and-list-environment-variables-in-linux/) and
on [Windows](https://phoenixnap.com/kb/windows-set-environment-variable).

This page will guide you through all environment variables specific to `huggingface_hub` and their meaning.

## Generic[[generic]]

### HF_INFERENCE_ENDPOINT[[hfinferenceendpoint]]

Configure the base URL for the Inference API. You might want to set this variable if your organization is pointing at an API Gateway rather than directly at the Inference API.

Defaults to `"https://api-inference.huggingface.co"`.

### HF_HOME[[hfhome]]

Configure where `huggingface_hub` will store data locally. In particular, your token and the cache will be stored in this folder.

Defaults to `"~/.cache/huggingface"` unless [XDG_CACHE_HOME](#xdgcachehome) is set.

### HF_HUB_CACHE[[hfhubcache]]

Configure where repositories from the Hub will be cached locally (models, datasets and spaces).

Defaults to `"$HF_HOME/hub"` (e.g. `"~/.cache/huggingface/hub"` by default).

### HF_ASSETS_CACHE[[hfassetscache]]

Configure where [assets](../guides/manage-cache#caching-assets) created by downstream libraries will be cached locally.
Those assets can be preprocessed data, files downloaded from GitHub, logs, and so on.

Defaults to `"$HF_HOME/assets"` (e.g. `"~/.cache/huggingface/assets"` by default).
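Taken together, the variables above resolve into concrete paths. The sketch below is a simplified re-implementation of that resolution logic for illustration only, not the library's actual code:

```python
import os

def hf_home() -> str:
    # HF_HOME overrides the default "~/.cache/huggingface" data directory.
    return os.environ.get("HF_HOME", os.path.expanduser("~/.cache/huggingface"))

def hub_cache() -> str:
    # Hub repositories are cached under "$HF_HOME/hub" unless HF_HUB_CACHE is set.
    return os.environ.get("HF_HUB_CACHE", os.path.join(hf_home(), "hub"))

def assets_cache() -> str:
    # Downstream-library assets live under "$HF_HOME/assets" unless HF_ASSETS_CACHE is set.
    return os.environ.get("HF_ASSETS_CACHE", os.path.join(hf_home(), "assets"))
```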

### HF_TOKEN[[hftoken]]

Configure the User Access Token used to authenticate to the Hub. If set, this value will overwrite the token stored on the machine (in `$HF_TOKEN_PATH`, or `"$HF_HOME/token"` if `$HF_TOKEN_PATH` is not set).

For more details about authentication, check out [this section](../quick-start#authentication).

### HF_TOKEN_PATH[[hftokenpath]]

Configure where `huggingface_hub` will store the User Access Token. Defaults to `"$HF_HOME/token"` (e.g. `~/.cache/huggingface/token` by default).
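The interaction between `HF_TOKEN` and `HF_TOKEN_PATH` can be sketched as follows. This is an illustrative re-implementation of the precedence rule, not the library's own resolution code:

```python
import os
from typing import Optional

def token_path() -> str:
    # Token file location: HF_TOKEN_PATH if set, else "$HF_HOME/token".
    hf_home = os.environ.get("HF_HOME", os.path.expanduser("~/.cache/huggingface"))
    return os.environ.get("HF_TOKEN_PATH", os.path.join(hf_home, "token"))

def get_token() -> Optional[str]:
    # HF_TOKEN takes precedence over any token stored on disk.
    if os.environ.get("HF_TOKEN"):
        return os.environ["HF_TOKEN"]
    try:
        with open(token_path()) as f:
            return f.read().strip() or None
    except OSError:
        return None
```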

### HF_HUB_VERBOSITY[[hfhubverbosity]]

Set the verbosity level of the `huggingface_hub` logger. Must be one of
`{"debug", "info", "warning", "error", "critical"}`.

Defaults to `"warning"`.

For more details, see the [logging reference](../package_reference/utilities#huggingface_hub.utils.logging.get_verbosity).
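A sketch of how such a verbosity string could map onto Python's standard `logging` levels. This is illustrative only; `huggingface_hub` ships its own helpers in `huggingface_hub.utils.logging`:

```python
import logging
import os

_LEVELS = {
    "debug": logging.DEBUG,
    "info": logging.INFO,
    "warning": logging.WARNING,
    "error": logging.ERROR,
    "critical": logging.CRITICAL,
}

def hub_verbosity() -> int:
    # Unrecognized or unset values fall back to the default "warning" level.
    name = os.environ.get("HF_HUB_VERBOSITY", "warning").lower()
    return _LEVELS.get(name, logging.WARNING)
```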

### HF_HUB_ETAG_TIMEOUT[[hfhubetagtimeout]]

Integer value that defines the number of seconds to wait for the server's response when fetching the latest metadata from a repo before downloading a file. If the request times out, `huggingface_hub` will default to the locally cached files. Setting a lower value speeds up the workflow for machines with a slow connection that already have files cached. A higher value guarantees the metadata call succeeds in more cases. Defaults to 10 seconds.

### HF_HUB_DOWNLOAD_TIMEOUT[[hfhubdownloadtimeout]]

Integer value that defines the number of seconds to wait for the server's response when downloading a file. If the request times out, a TimeoutError is raised. Setting a higher value is beneficial on machines with a slow connection. A smaller value makes the process fail faster in case of a complete network outage. Defaults to 10 seconds.

## Boolean values[[boolean-values]]

The following environment variables expect a boolean value. A variable is considered `True` if its value is one of `{"1", "ON", "YES", "TRUE"}` (case-insensitive). Any other value (or being undefined) is considered `False`.
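This truthiness convention is easy to express directly. The sketch below mirrors the rule described above (an illustration, not the library's internal parser):

```python
from typing import Optional

def env_is_true(value: Optional[str]) -> bool:
    # True iff value is one of {"1", "ON", "YES", "TRUE"}, case-insensitive;
    # anything else (including unset/None) is False.
    return value is not None and value.upper() in {"1", "ON", "YES", "TRUE"}
```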

### HF_HUB_OFFLINE[[hfhuboffline]]

If set, no HTTP calls will be made to the Hugging Face Hub. If you try to download files, only the cached files will be accessed. If no cached file is detected, an error is raised. This is useful when your network is slow and you don't care about having the latest version of a file.

If `HF_HUB_OFFLINE=1` is set as an environment variable and you call any method of [HfApi](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi), an [OfflineModeIsEnabled](/docs/huggingface_hub/main/ko/package_reference/utilities#huggingface_hub.errors.OfflineModeIsEnabled) exception will be raised.

**Note:** even if the latest version of a file is cached, calling `hf_hub_download` still triggers an HTTP request to check that a newer version is not available. Setting `HF_HUB_OFFLINE=1` will skip this call, which speeds up your loading time.

### HF_HUB_DISABLE_IMPLICIT_TOKEN[[hfhubdisableimplicittoken]]

Authentication is not mandatory for every request to the Hub. For instance, requesting details about the `"gpt2"` model does not require authentication. However, if a user is [logged in](../package_reference/login), the default behavior is to always send the token, to ease the user experience when accessing private or gated repositories (you never get an HTTP 401 Unauthorized). For privacy, you can disable this behavior by setting `HF_HUB_DISABLE_IMPLICIT_TOKEN=1`. In that case, the token will only be sent for "write-access" calls (e.g. creating a commit).

**Note:** disabling implicit token sending can have weird side effects. For example, if you want to list all models on the Hub, your private models will not be listed. You would need to explicitly pass `token=True` in your script.

### HF_HUB_DISABLE_PROGRESS_BARS[[hfhubdisableprogressbars]]

For time-consuming tasks, `huggingface_hub` displays progress bars by default (using tqdm).
You can disable all progress bars at once by setting `HF_HUB_DISABLE_PROGRESS_BARS=1`.

### HF_HUB_DISABLE_SYMLINKS_WARNING[[hfhubdisablesymlinkswarning]]

If you are on a Windows machine, it is recommended to enable developer mode or to run `huggingface_hub` in admin mode. Otherwise, `huggingface_hub` will not be able to create symlinks in the cache system. Every script will still run, but your user experience will be degraded, as some big files might be duplicated on your hard drive. A warning message is displayed to alert you about this behavior. Set `HF_HUB_DISABLE_SYMLINKS_WARNING=1` to disable this warning.

For more details, see the [cache limitations](../guides/manage-cache#limitations).

### HF_HUB_DISABLE_EXPERIMENTAL_WARNING[[hfhubdisableexperimentalwarning]]

Some features of `huggingface_hub` are experimental. This means you can use them, but we cannot guarantee they will be maintained in the future. In particular, their API or behavior might be updated without any deprecation cycle. A warning message is displayed when you use an experimental feature to alert you about it. If you are comfortable debugging any potential issues while using an experimental feature, you can set `HF_HUB_DISABLE_EXPERIMENTAL_WARNING=1` to disable the warning.

If you are using an experimental feature, please let us know! Your feedback helps us design and improve it.

### HF_HUB_DISABLE_TELEMETRY[[hfhubdisabletelemetry]]

By default, some data is collected by HF libraries (`transformers`, `datasets`, `gradio`, ...) to monitor usage, debug issues and help prioritize features. Each library defines its own policy (i.e. which usage to monitor), but the core implementation happens in `huggingface_hub` (see `send_telemetry`).

You can set `HF_HUB_DISABLE_TELEMETRY=1` as an environment variable to globally disable telemetry.

### HF_HUB_ENABLE_HF_TRANSFER[[hfhubenablehftransfer]]

Set to `True` for faster uploads and downloads from the Hub using `hf_transfer`.
By default, `huggingface_hub` uses the Python-based `requests.get` and `requests.post` functions.
Although these are reliable and versatile, they may not be the most efficient choice for machines with high bandwidth. [`hf_transfer`](https://github.com/huggingface/hf_transfer) is a Rust-based package developed to maximize bandwidth usage by splitting large files into smaller parts and transferring them simultaneously with several threads. This approach can almost double the transfer speed.
To use `hf_transfer`:

1. Specify the `hf_transfer` extra when installing `huggingface_hub`
   (e.g. `pip install huggingface_hub[hf_transfer]`).
2. Set `HF_HUB_ENABLE_HF_TRANSFER=1` as an environment variable.
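The environment variable can also be set programmatically, as long as it happens before `huggingface_hub` is imported. A minimal sketch (the final import is commented out since it requires the library and the `hf_transfer` extra to be installed):

```python
import os

# Opt in to hf_transfer before importing huggingface_hub,
# so the setting is picked up at import time.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

# from huggingface_hub import hf_hub_download  # downloads now go through hf_transfer
```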

Please note that using `hf_transfer` comes with certain limitations. Since it is not purely Python-based, debugging errors may be challenging. Additionally, `hf_transfer` lacks several user-friendly features such as resumable downloads and proxies. These omissions are intentional to keep the Rust logic simple and fast. Consequently, `hf_transfer` is not enabled by default in `huggingface_hub`.

## Deprecated environment variables[[deprecated-environment-variables]]

In order to standardize all environment variables within the Hugging Face ecosystem, some variables have been marked as deprecated. Although they remain functional, they no longer take precedence over their replacements. The following table outlines the deprecated variables and their corresponding alternatives:

| Deprecated variable         | Replaced by        |
| --------------------------- | ------------------ |
| `HUGGINGFACE_HUB_CACHE`     | `HF_HUB_CACHE`     |
| `HUGGINGFACE_ASSETS_CACHE`  | `HF_ASSETS_CACHE`  |
| `HUGGING_FACE_HUB_TOKEN`    | `HF_TOKEN`         |
| `HUGGINGFACE_HUB_VERBOSITY` | `HF_HUB_VERBOSITY` |
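The precedence rule (the new variable always wins; the deprecated one is only a fallback) can be sketched as a small helper. This is illustrative, not the library's actual resolution code:

```python
import os
from typing import Optional

def resolve(new_name: str, deprecated_name: str, default: Optional[str] = None) -> Optional[str]:
    # The replacement variable takes precedence; the deprecated
    # variable is consulted only when the new one is unset.
    if new_name in os.environ:
        return os.environ[new_name]
    return os.environ.get(deprecated_name, default)
```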

## From external tools[[from-external-tools]]

Some environment variables are not specific to `huggingface_hub` but are still taken into account when they are set.

### DO_NOT_TRACK[[donottrack]]

Boolean value. Equivalent to `HF_HUB_DISABLE_TELEMETRY`. When set to true, telemetry is globally disabled in the Hugging Face Python ecosystem (`transformers`, `diffusers`, `gradio`, etc.). See https://consoledonottrack.com/ for more details.

### NO_COLOR[[nocolor]]

Boolean value. When set, the `hf` tool will not print any ANSI color. See [no-color.org](https://no-color.org/).

### XDG_CACHE_HOME[[xdgcachehome]]

Used only when `HF_HOME` is not set!

This is the default way to configure where [user-specific non-essential (cached) data should be written](https://wiki.archlinux.org/title/XDG_Base_Directory) on Linux machines.

If `HF_HOME` is not set, the default home will be `"$XDG_CACHE_HOME/huggingface"` instead of `"~/.cache/huggingface"`.
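The fallback order described above, sketched in Python (illustrative only, not the library's code):

```python
import os

def default_hf_home() -> str:
    # HF_HOME wins; otherwise XDG_CACHE_HOME is honored; otherwise ~/.cache.
    if "HF_HOME" in os.environ:
        return os.environ["HF_HOME"]
    xdg = os.environ.get("XDG_CACHE_HOME")
    if xdg:
        return os.path.join(xdg, "huggingface")
    return os.path.expanduser("~/.cache/huggingface")
```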


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/package_reference/environment_variables.md" />

### Mixins & serialization methods[[mixins--serialization-methods]]
https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/mixins.md

# Mixins & serialization methods[[mixins--serialization-methods]]

## Mixins[[mixins]]

The `huggingface_hub` library offers a range of mixins that can be used as parent classes for your objects, in order to easily provide upload and download functions.
Check out our [integration guide](../guides/integrations) to learn how to integrate any ML framework with the Hub.

### Generic[[huggingface_hub.ModelHubMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.ModelHubMixin</name><anchor>huggingface_hub.ModelHubMixin</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hub_mixin.py#L76</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **repo_url** (`str`, *optional*) --
  URL of the library repository. Used to generate model card.
- **paper_url** (`str`, *optional*) --
  URL of the library paper. Used to generate model card.
- **docs_url** (`str`, *optional*) --
  URL of the library documentation. Used to generate model card.
- **model_card_template** (`str`, *optional*) --
  Template of the model card. Used to generate model card. Defaults to a generic template.
- **language** (`str` or `list[str]`, *optional*) --
  Language supported by the library. Used to generate model card.
- **library_name** (`str`, *optional*) --
  Name of the library integrating ModelHubMixin. Used to generate model card.
- **license** (`str`, *optional*) --
  License of the library integrating ModelHubMixin. Used to generate model card.
  E.g: "apache-2.0"
- **license_name** (`str`, *optional*) --
  Name of the license. Used to generate model card.
  Only used if `license` is set to `other`.
  E.g: "coqui-public-model-license".
- **license_link** (`str`, *optional*) --
  URL to the license of the library integrating ModelHubMixin. Used to generate model card.
  Only used if `license` is set to `other` and `license_name` is set.
  E.g: "https://coqui.ai/cpml".
- **pipeline_tag** (`str`, *optional*) --
  Tag of the pipeline. Used to generate model card. E.g. "text-classification".
- **tags** (`list[str]`, *optional*) --
  Tags to be added to the model card. Used to generate model card. E.g. ["computer-vision"]
- **coders** (`dict[Type, tuple[Callable, Callable]]`, *optional*) --
  Dictionary of custom types and their encoders/decoders. Used to encode/decode arguments that are not
  jsonable by default. E.g dataclasses, argparse.Namespace, OmegaConf, etc.</paramsdesc><paramgroups>0</paramgroups></docstring>

A generic mixin to integrate ANY machine learning framework with the Hub.

To integrate your framework, your model class must inherit from this class. Custom logic for saving/loading models
has to be overwritten in `_from_pretrained` and `_save_pretrained`. [PyTorchModelHubMixin](/docs/huggingface_hub/main/ko/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) is a good example
of mixin integration with the Hub. Check out our [integration guide](../guides/integrations) for more instructions.

When inheriting from [ModelHubMixin](/docs/huggingface_hub/main/ko/package_reference/mixins#huggingface_hub.ModelHubMixin), you can define class-level attributes. These attributes are not passed to
`__init__` but to the class definition itself. This is useful to define metadata about the library integrating
[ModelHubMixin](/docs/huggingface_hub/main/ko/package_reference/mixins#huggingface_hub.ModelHubMixin).

For more details on how to integrate the mixin with your library, check out the [integration guide](../guides/integrations).



<ExampleCodeBlock anchor="huggingface_hub.ModelHubMixin.example">

Example:

```python
>>> from huggingface_hub import ModelHubMixin

# Inherit from ModelHubMixin
>>> class MyCustomModel(
...         ModelHubMixin,
...         library_name="my-library",
...         tags=["computer-vision"],
...         repo_url="https://github.com/huggingface/my-cool-library",
...         paper_url="https://arxiv.org/abs/2304.12244",
...         docs_url="https://huggingface.co/docs/my-cool-library",
...         # ^ optional metadata to generate model card
...     ):
...     def __init__(self, size: int = 512, device: str = "cpu"):
...         # define how to initialize your model
...         super().__init__()
...         ...
...
...     def _save_pretrained(self, save_directory: Path) -> None:
...         # define how to serialize your model
...         ...
...
...     @classmethod
...     def from_pretrained(
...         cls: type[T],
...         pretrained_model_name_or_path: Union[str, Path],
...         *,
...         force_download: bool = False,
...         token: Optional[Union[str, bool]] = None,
...         cache_dir: Optional[Union[str, Path]] = None,
...         local_files_only: bool = False,
...         revision: Optional[str] = None,
...         **model_kwargs,
...     ) -> T:
...         # define how to deserialize your model
...         ...

>>> model = MyCustomModel(size=256, device="gpu")

# Save model weights to local directory
>>> model.save_pretrained("my-awesome-model")

# Push model weights to the Hub
>>> model.push_to_hub("my-awesome-model")

# Download and initialize weights from the Hub
>>> reloaded_model = MyCustomModel.from_pretrained("username/my-awesome-model")
>>> reloaded_model.size
256

# Model card has been correctly populated
>>> from huggingface_hub import ModelCard
>>> card = ModelCard.load("username/my-awesome-model")
>>> card.data.tags
["computer-vision", "pytorch_model_hub_mixin", "model_hub_mixin"]
>>> card.data.library_name
"my-library"
```

</ExampleCodeBlock>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>_save_pretrained</name><anchor>huggingface_hub.ModelHubMixin._save_pretrained</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hub_mixin.py#L449</source><parameters>[{"name": "save_directory", "val": ": Path"}]</parameters><paramsdesc>- **save_directory** (`str` or `Path`) --
  Path to directory in which the model weights and configuration will be saved.</paramsdesc><paramgroups>0</paramgroups></docstring>

Overwrite this method in subclass to define how to save your model.
Check out our [integration guide](../guides/integrations) for instructions.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>_from_pretrained</name><anchor>huggingface_hub.ModelHubMixin._from_pretrained</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hub_mixin.py#L576</source><parameters>[{"name": "model_id", "val": ": str"}, {"name": "revision", "val": ": typing.Optional[str]"}, {"name": "cache_dir", "val": ": typing.Union[str, pathlib.Path, NoneType]"}, {"name": "force_download", "val": ": bool"}, {"name": "local_files_only", "val": ": bool"}, {"name": "token", "val": ": typing.Union[str, bool, NoneType]"}, {"name": "**model_kwargs", "val": ""}]</parameters><paramsdesc>- **model_id** (`str`) --
  ID of the model to load from the Huggingface Hub (e.g. `bigscience/bloom`).
- **revision** (`str`, *optional*) --
  Revision of the model on the Hub. Can be a branch name, a git tag or any commit id. Defaults to the
  latest commit on `main` branch.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether to force (re-)downloading the model weights and configuration files from the Hub, overriding
  the existing cache.
- **token** (`str` or `bool`, *optional*) --
  The token to use as HTTP bearer authorization for remote files. By default, it will use the token
  cached when running `hf auth login`.
- **cache_dir** (`str`, `Path`, *optional*) --
  Path to the folder where cached files are stored.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  If `True`, avoid downloading the file and return the path to the local cached file if it exists.
- **model_kwargs** --
  Additional keyword arguments passed along to the [_from_pretrained()](/docs/huggingface_hub/main/ko/package_reference/mixins#huggingface_hub.ModelHubMixin._from_pretrained) method.</paramsdesc><paramgroups>0</paramgroups></docstring>
Overwrite this method in subclass to define how to load your model from pretrained.

Use `hf_hub_download()` or `snapshot_download()` to download files from the Hub before loading them. Most
args taken as input can be directly passed to those 2 methods. If needed, you can add more arguments to this
method using `model_kwargs`. For example `PyTorchModelHubMixin._from_pretrained()` takes as input a `map_location`
parameter to set on which device the model should be loaded.

Check out our [integration guide](../guides/integrations) for more instructions.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_pretrained</name><anchor>huggingface_hub.ModelHubMixin.from_pretrained</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hub_mixin.py#L460</source><parameters>[{"name": "pretrained_model_name_or_path", "val": ": typing.Union[str, pathlib.Path]"}, {"name": "force_download", "val": ": bool = False"}, {"name": "token", "val": ": typing.Union[str, bool, NoneType] = None"}, {"name": "cache_dir", "val": ": typing.Union[str, pathlib.Path, NoneType] = None"}, {"name": "local_files_only", "val": ": bool = False"}, {"name": "revision", "val": ": typing.Optional[str] = None"}, {"name": "**model_kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path** (`str`, `Path`) --
  - Either the `model_id` (string) of a model hosted on the Hub, e.g. `bigscience/bloom`.
  - Or a path to a `directory` containing model weights saved using
    `save_pretrained`, e.g., `../path/to/my_model_directory/`.
- **revision** (`str`, *optional*) --
  Revision of the model on the Hub. Can be a branch name, a git tag or any commit id.
  Defaults to the latest commit on `main` branch.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether to force (re-)downloading the model weights and configuration files from the Hub, overriding
  the existing cache.
- **token** (`str` or `bool`, *optional*) --
  The token to use as HTTP bearer authorization for remote files. By default, it will use the token
  cached when running `hf auth login`.
- **cache_dir** (`str`, `Path`, *optional*) --
  Path to the folder where cached files are stored.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  If `True`, avoid downloading the file and return the path to the local cached file if it exists.
- **model_kwargs** (`dict`, *optional*) --
  Additional kwargs to pass to the model during initialization.</paramsdesc><paramgroups>0</paramgroups></docstring>

Download a model from the Huggingface Hub and instantiate it.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>push_to_hub</name><anchor>huggingface_hub.ModelHubMixin.push_to_hub</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hub_mixin.py#L618</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "config", "val": ": typing.Union[dict, huggingface_hub.hub_mixin.DataclassInstance, NoneType] = None"}, {"name": "commit_message", "val": ": str = 'Push model using huggingface_hub.'"}, {"name": "private", "val": ": typing.Optional[bool] = None"}, {"name": "token", "val": ": typing.Optional[str] = None"}, {"name": "branch", "val": ": typing.Optional[str] = None"}, {"name": "create_pr", "val": ": typing.Optional[bool] = None"}, {"name": "allow_patterns", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "ignore_patterns", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "delete_patterns", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "model_card_kwargs", "val": ": typing.Optional[dict[str, typing.Any]] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  ID of the repository to push to (example: `"username/my-model"`).
- **config** (`dict` or `DataclassInstance`, *optional*) --
  Model configuration specified as a key/value dictionary or a dataclass instance.
- **commit_message** (`str`, *optional*) --
  Message to commit while pushing.
- **private** (`bool`, *optional*) --
  Whether the repository created should be private.
  If `None` (default), the repo will be public unless the organization's default is private.
- **token** (`str`, *optional*) --
  The token to use as HTTP bearer authorization for remote files. By default, it will use the token
  cached when running `hf auth login`.
- **branch** (`str`, *optional*) --
  The git branch on which to push the model. This defaults to `"main"`.
- **create_pr** (`bool`, *optional*) --
  Whether or not to create a Pull Request from `branch` with that commit. Defaults to `False`.
- **allow_patterns** (`list[str]` or `str`, *optional*) --
  If provided, only files matching at least one pattern are pushed.
- **ignore_patterns** (`list[str]` or `str`, *optional*) --
  If provided, files matching any of the patterns are not pushed.
- **delete_patterns** (`list[str]` or `str`, *optional*) --
  If provided, remote files matching any of the patterns will be deleted from the repo.
- **model_card_kwargs** (`dict[str, Any]`, *optional*) --
  Additional arguments passed to the model card template to customize the model card.</paramsdesc><paramgroups>0</paramgroups><retdesc>The url of the commit of your model in the given repository.</retdesc></docstring>

Upload model checkpoint to the Hub.

Use `allow_patterns` and `ignore_patterns` to precisely filter which files should be pushed to the hub. Use
`delete_patterns` to delete existing remote files in the same commit. See [upload_folder()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.upload_folder) reference for more
details.
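The `*_patterns` arguments follow standard glob matching. As a rough illustration of the allow/ignore semantics (a sketch using Python's `fnmatch`; `filter_files` is a hypothetical helper, not the library's actual implementation):

```python
from fnmatch import fnmatch

def filter_files(files, allow_patterns=None, ignore_patterns=None):
    # Keep a file if it matches at least one allow pattern (when an allow
    # list is given) and matches no ignore pattern.
    kept = []
    for f in files:
        if allow_patterns and not any(fnmatch(f, p) for p in allow_patterns):
            continue
        if ignore_patterns and any(fnmatch(f, p) for p in ignore_patterns):
            continue
        kept.append(f)
    return kept

files = ["model.safetensors", "config.json", "logs/run1.txt"]
print(filter_files(files, allow_patterns=["*.safetensors", "*.json"]))
```

Only files passing both filters end up in the commit; `delete_patterns` is applied to the remote file list in the same way.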






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_pretrained</name><anchor>huggingface_hub.ModelHubMixin.save_pretrained</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hub_mixin.py#L381</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, pathlib.Path]"}, {"name": "config", "val": ": typing.Union[dict, huggingface_hub.hub_mixin.DataclassInstance, NoneType] = None"}, {"name": "repo_id", "val": ": typing.Optional[str] = None"}, {"name": "push_to_hub", "val": ": bool = False"}, {"name": "model_card_kwargs", "val": ": typing.Optional[dict[str, typing.Any]] = None"}, {"name": "**push_to_hub_kwargs", "val": ""}]</parameters><paramsdesc>- **save_directory** (`str` or `Path`) --
  Path to directory in which the model weights and configuration will be saved.
- **config** (`dict` or `DataclassInstance`, *optional*) --
  Model configuration specified as a key/value dictionary or a dataclass instance.
- **push_to_hub** (`bool`, *optional*, defaults to `False`) --
  Whether or not to push your model to the Huggingface Hub after saving it.
- **repo_id** (`str`, *optional*) --
  ID of your repository on the Hub. Used only if `push_to_hub=True`. Will default to the folder name if
  not provided.
- **model_card_kwargs** (`dict[str, Any]`, *optional*) --
  Additional arguments passed to the model card template to customize the model card.
- **push_to_hub_kwargs** --
  Additional key word arguments passed along to the [push_to_hub()](/docs/huggingface_hub/main/ko/package_reference/mixins#huggingface_hub.ModelHubMixin.push_to_hub) method.</paramsdesc><paramgroups>0</paramgroups><rettype>`str` or `None`</rettype><retdesc>url of the commit on the Hub if `push_to_hub=True`, `None` otherwise.</retdesc></docstring>

Save weights in local directory.








</div></div>

### PyTorch[[huggingface_hub.PyTorchModelHubMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class huggingface_hub.PyTorchModelHubMixin</name><anchor>huggingface_hub.PyTorchModelHubMixin</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hub_mixin.py#L701</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

Implementation of [ModelHubMixin](/docs/huggingface_hub/main/ko/package_reference/mixins#huggingface_hub.ModelHubMixin) to provide model Hub upload/download capabilities to PyTorch models. The model
is set in evaluation mode by default using `model.eval()` (dropout modules are deactivated). To train the model,
you should first set it back in training mode with `model.train()`.

See [ModelHubMixin](/docs/huggingface_hub/main/ko/package_reference/mixins#huggingface_hub.ModelHubMixin) for more details on how to use the mixin.

<ExampleCodeBlock anchor="huggingface_hub.PyTorchModelHubMixin.example">

Example:

```python
>>> import torch
>>> import torch.nn as nn
>>> from huggingface_hub import PyTorchModelHubMixin

>>> class MyModel(
...         nn.Module,
...         PyTorchModelHubMixin,
...         library_name="keras-nlp",
...         repo_url="https://github.com/keras-team/keras-nlp",
...         paper_url="https://arxiv.org/abs/2304.12244",
...         docs_url="https://keras.io/keras_nlp/",
...         # ^ optional metadata to generate model card
...     ):
...     def __init__(self, hidden_size: int = 512, vocab_size: int = 30000, output_size: int = 4):
...         super().__init__()
...         self.param = nn.Parameter(torch.rand(hidden_size, vocab_size))
...         self.linear = nn.Linear(vocab_size, output_size)
...
...     def forward(self, x):
...         return self.linear(x + self.param)
>>> model = MyModel(hidden_size=256)

# Save model weights to local directory
>>> model.save_pretrained("my-awesome-model")

# Push model weights to the Hub
>>> model.push_to_hub("my-awesome-model")

# Download and initialize weights from the Hub
>>> model = MyModel.from_pretrained("username/my-awesome-model")
>>> model.hidden_size
256
```

</ExampleCodeBlock>


</div>

### Fastai[[huggingface_hub.from_pretrained_fastai]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.from_pretrained_fastai</name><anchor>huggingface_hub.from_pretrained_fastai</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/fastai_utils.py#L289</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "revision", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The location where the pickled fastai.Learner is. It can be either of the two:
  - Hosted on the Hugging Face Hub. E.g.: 'espejelomar/fatai-pet-breeds-classification' or 'distilgpt2'.
    You can add a `revision` by appending `@` at the end of `repo_id`. E.g.: `dbmdz/bert-base-german-cased@main`.
    Revision is the specific model version to use. Since we use a git-based system for storing models and other
    artifacts on the Hugging Face Hub, it can be a branch name, a tag name, or a commit id.
  - Hosted locally. `repo_id` would be a directory containing the pickle and a pyproject.toml
    indicating the fastai and fastcore versions used to build the `fastai.Learner`. E.g.: `./my_model_directory/`.
- **revision** (`str`, *optional*) --
  Revision at which the repo's files are downloaded. See documentation of `snapshot_download`.</paramsdesc><paramgroups>0</paramgroups><retdesc>The `fastai.Learner` model in the `repo_id` repo.</retdesc></docstring>

Load pretrained fastai model from the Hub or from a local directory.
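The `repo_id@revision` convention mentioned in the `repo_id` description can be illustrated with a small sketch (`split_repo_revision` is a hypothetical helper, not part of `huggingface_hub`):

```python
def split_repo_revision(repo_id: str):
    # "dbmdz/bert-base-german-cased@main" -> repo plus a branch name,
    # tag name, or commit id; no "@" means the default revision.
    repo, sep, revision = repo_id.partition("@")
    return (repo, revision) if sep else (repo_id, None)

print(split_repo_revision("dbmdz/bert-base-german-cased@main"))
```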






</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.push_to_hub_fastai</name><anchor>huggingface_hub.push_to_hub_fastai</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/fastai_utils.py#L334</source><parameters>[{"name": "learner", "val": ""}, {"name": "repo_id", "val": ": str"}, {"name": "commit_message", "val": ": str = 'Push FastAI model using huggingface_hub.'"}, {"name": "private", "val": ": typing.Optional[bool] = None"}, {"name": "token", "val": ": typing.Optional[str] = None"}, {"name": "config", "val": ": typing.Optional[dict] = None"}, {"name": "branch", "val": ": typing.Optional[str] = None"}, {"name": "create_pr", "val": ": typing.Optional[bool] = None"}, {"name": "allow_patterns", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "ignore_patterns", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "delete_patterns", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "api_endpoint", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **learner** (*Learner*) --
  The `fastai.Learner` you'd like to push to the Hub.
- **repo_id** (*str*) --
  The repository id for your model in Hub in the format of "namespace/repo_name". The namespace can be your individual account or an organization to which you have write access (for example, 'stanfordnlp/stanza-de').
- **commit_message** (*str*, *optional*) -- Message to commit while pushing. Defaults to `"Push FastAI model using huggingface_hub."`.
- **private** (*bool*, *optional*) --
  Whether or not the repository created should be private.
  If *None* (default), the repo will be public unless the organization's default is private.
- **token** (*str*, *optional*) --
  The Hugging Face account token to use as HTTP bearer authorization for remote files. If `None`, the token will be asked by a prompt.
- **config** (*dict*, *optional*) --
  Configuration object to be saved alongside the model weights.
- **branch** (*str*, *optional*) --
  The git branch on which to push the model. This defaults to
  the default branch as specified in your repository, which
  defaults to *"main"*.
- **create_pr** (*bool*, *optional*) --
  Whether or not to create a Pull Request from *branch* with that commit.
  Defaults to *False*.
- **api_endpoint** (*str*, *optional*) --
  The API endpoint to use when pushing the model to the hub.
- **allow_patterns** (*list[str]* or *str*, *optional*) --
  If provided, only files matching at least one pattern are pushed.
- **ignore_patterns** (*list[str]* or *str*, *optional*) --
  If provided, files matching any of the patterns are not pushed.
- **delete_patterns** (*list[str]* or *str*, *optional*) --
  If provided, remote files matching any of the patterns will be deleted from the repo.</paramsdesc><paramgroups>0</paramgroups><retdesc>The url of the commit of your model in the given repository.</retdesc></docstring>

Upload learner checkpoint files to the Hub.

Use `allow_patterns` and `ignore_patterns` to precisely filter which files should be pushed to the Hub. Use
`delete_patterns` to delete existing remote files in the same commit. See [upload_folder()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.upload_folder) reference for more
details.





> [!TIP]
> Raises the following error:
>
>     - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
>       if the user is not logged in to the Hugging Face Hub.


</div>

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/package_reference/mixins.md" />

### Serialization[[serialization]]
https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/serialization.md

# Serialization[[serialization]]

`huggingface_hub` contains helpers to help ML libraries serialize model weights in a standardized way. This part of the library is still under development and will be improved in future releases. The goal is to harmonize how weights are serialized on the Hub, to reduce code duplication across libraries, and to foster conventions on the Hub.

## Split state dict into shards[[split-state-dict-into-shards]]

At the moment, this module contains a single helper that takes a state dictionary (e.g. a mapping between layer names and related tensors), splits it into several shards, and creates a proper index in the process. This helper is available for `torch` tensors and is designed to be easily extended to other ML frameworks.

### split_torch_state_dict_into_shards[[huggingface_hub.split_torch_state_dict_into_shards]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.split_torch_state_dict_into_shards</name><anchor>huggingface_hub.split_torch_state_dict_into_shards</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/serialization/_torch.py#L290</source><parameters>[{"name": "state_dict", "val": ": dict"}, {"name": "filename_pattern", "val": ": str = 'model{suffix}.safetensors'"}, {"name": "max_shard_size", "val": ": typing.Union[int, str] = '5GB'"}]</parameters><paramsdesc>- **state_dict** (`dict[str, torch.Tensor]`) --
  The state dictionary to save.
- **filename_pattern** (`str`, *optional*) --
  The pattern used to generate the file names in which the model will be saved. The pattern must be a string that
  can be formatted with `filename_pattern.format(suffix=...)` and must contain the keyword `suffix`.
  Defaults to `"model{suffix}.safetensors"`.
- **max_shard_size** (`int` or `str`, *optional*) --
  The maximum size of each shard, in bytes. Defaults to 5GB.</paramsdesc><paramgroups>0</paramgroups><rettype>`StateDictSplit`</rettype><retdesc>A `StateDictSplit` object containing the shards and the index to retrieve them.</retdesc></docstring>

Split a model state dictionary in shards so that each shard is smaller than a given size.

The shards are determined by iterating through the `state_dict` in the order of its keys. There is no optimization
made to make each shard as close as possible to the maximum size passed. For example, if the limit is 10GB and we
have tensors of sizes [6GB, 6GB, 2GB, 6GB, 2GB, 2GB] they will get sharded as [6GB], [6+2GB], [6+2+2GB] and not
[6+2+2GB], [6+2GB], [6GB].
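The greedy, order-preserving packing described above can be sketched in pure Python (sizes in GB for readability; this is an illustration, not the library's implementation):

```python
def greedy_shard(sizes, max_shard_size):
    # Iterate in order; start a new shard when adding the next tensor
    # would exceed the limit. No reordering or bin-packing optimization.
    shards, current, current_size = [], [], 0
    for size in sizes:
        if current and current_size + size > max_shard_size:
            shards.append(current)
            current, current_size = [], 0
        current.append(size)
        current_size += size
    if current:
        shards.append(current)
    return shards

print(greedy_shard([6, 6, 2, 6, 2, 2], 10))  # [[6], [6, 2], [6, 2, 2]]
```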


> [!TIP]
> To save a model state dictionary to the disk, see `save_torch_state_dict()`. This helper uses
> `split_torch_state_dict_into_shards` under the hood.

> [!WARNING]
> If one of the model's tensors is bigger than `max_shard_size`, it will end up in its own shard which will have a
> size greater than `max_shard_size`.







<ExampleCodeBlock anchor="huggingface_hub.split_torch_state_dict_into_shards.example">

Example:
```py
>>> import json
>>> import os
>>> import torch
>>> from safetensors.torch import save_file as safe_save_file
>>> from huggingface_hub import split_torch_state_dict_into_shards

>>> def save_state_dict(state_dict: dict[str, torch.Tensor], save_directory: str):
...     state_dict_split = split_torch_state_dict_into_shards(state_dict)
...     for filename, tensors in state_dict_split.filename_to_tensors.items():
...         shard = {tensor: state_dict[tensor] for tensor in tensors}
...         safe_save_file(
...             shard,
...             os.path.join(save_directory, filename),
...             metadata={"format": "pt"},
...         )
...     if state_dict_split.is_sharded:
...         index = {
...             "metadata": state_dict_split.metadata,
...             "weight_map": state_dict_split.tensor_to_filename,
...         }
...         with open(os.path.join(save_directory, "model.safetensors.index.json"), "w") as f:
...             f.write(json.dumps(index, indent=2))
```

</ExampleCodeBlock>


</div>

### split_state_dict_into_shards_factory[[huggingface_hub.split_state_dict_into_shards_factory]]

This is the underlying factory from which each framework-specific helper is derived. In practice, you are not expected to use this factory directly, unless you need to adapt it to a framework that is not yet supported. If that is the case, please let us know by [opening a new issue](https://github.com/huggingface/huggingface_hub/issues/new) on the `huggingface_hub` repository.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.split_state_dict_into_shards_factory</name><anchor>huggingface_hub.split_state_dict_into_shards_factory</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/serialization/_base.py#L49</source><parameters>[{"name": "state_dict", "val": ": dict"}, {"name": "get_storage_size", "val": ": typing.Callable[[~TensorT], int]"}, {"name": "filename_pattern", "val": ": str"}, {"name": "get_storage_id", "val": ": typing.Callable[[~TensorT], typing.Optional[typing.Any]] = <function <lambda> at 0x7f52307432e0>"}, {"name": "max_shard_size", "val": ": typing.Union[int, str] = '5GB'"}]</parameters><paramsdesc>- **state_dict** (`dict[str, Tensor]`) --
  The state dictionary to save.
- **get_storage_size** (`Callable[[Tensor], int]`) --
  A function that returns the size of a tensor when saved on disk in bytes.
- **get_storage_id** (`Callable[[Tensor], Optional[Any]]`, *optional*) --
  A function that returns a unique identifier to a tensor storage. Multiple different tensors can share the
  same underlying storage. This identifier is guaranteed to be unique and constant for this tensor's storage
  during its lifetime. Two tensor storages with non-overlapping lifetimes may have the same id.
- **filename_pattern** (`str`, *optional*) --
  The pattern used to generate the file names in which the model will be saved. The pattern must be a string that
  can be formatted with `filename_pattern.format(suffix=...)` and must contain the keyword `suffix`.
- **max_shard_size** (`int` or `str`, *optional*) --
  The maximum size of each shard, in bytes. Defaults to 5GB.</paramsdesc><paramgroups>0</paramgroups><rettype>`StateDictSplit`</rettype><retdesc>A `StateDictSplit` object containing the shards and the index to retrieve them.</retdesc></docstring>

Split a model state dictionary in shards so that each shard is smaller than a given size.

The shards are determined by iterating through the `state_dict` in the order of its keys. There is no optimization
made to make each shard as close as possible to the maximum size passed. For example, if the limit is 10GB and we
have tensors of sizes [6GB, 6GB, 2GB, 6GB, 2GB, 2GB] they will get sharded as [6GB], [6+2GB], [6+2+2GB] and not
[6+2+2GB], [6+2GB], [6GB].

> [!WARNING]
> If one of the model's tensors is bigger than `max_shard_size`, it will end up in its own shard which will have a
> size greater than `max_shard_size`.
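`max_shard_size` accepts either a number of bytes or a string like `"5GB"`. A sketch of such parsing, assuming decimal units (the library's own parser may differ in the unit conventions and formats it accepts):

```python
UNITS = {"KB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12}

def parse_size(size):
    # Accept an int (bytes) or a string like "5GB"; decimal units assumed.
    if isinstance(size, int):
        return size
    size = size.strip().upper()
    for unit, factor in UNITS.items():
        if size.endswith(unit):
            return int(float(size[: -len(unit)]) * factor)
    return int(size)  # a plain number of bytes passed as a string

print(parse_size("5GB"))  # 5000000000
```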








</div>

## Helpers

### get_torch_storage_id[[huggingface_hub.get_torch_storage_id]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.get_torch_storage_id</name><anchor>huggingface_hub.get_torch_storage_id</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/serialization/_torch.py#L726</source><parameters>[{"name": "tensor", "val": ": torch.Tensor"}]</parameters></docstring>

Return unique identifier to a tensor storage.

Multiple different tensors can share the same underlying storage. This identifier is
guaranteed to be unique and constant for this tensor's storage during its lifetime. Two tensor storages with
non-overlapping lifetimes may have the same id.
In the case of meta tensors, we return None since we can't tell if they share the same storage.

Taken from https://github.com/huggingface/transformers/blob/1ecf5f7c982d761b4daaa96719d162c324187c64/src/transformers/pytorch_utils.py#L278.


</div>

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/package_reference/serialization.md" />

### Login and logout[[login-and-logout]]
https://huggingface.co/docs/huggingface_hub/main/ko/package_reference/login.md

# Login and logout[[login-and-logout]]

The `huggingface_hub` library allows you to programmatically log your machine in to and out of the Hub.

For more details about authentication, check out [this section](../quick-start#authentication).

## login[[login]][[huggingface_hub.login]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.login</name><anchor>huggingface_hub.login</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_login.py#L59</source><parameters>[{"name": "token", "val": ": typing.Optional[str] = None"}, {"name": "add_to_git_credential", "val": ": bool = False"}, {"name": "skip_if_logged_in", "val": ": bool = False"}]</parameters><paramsdesc>- **token** (`str`, *optional*) --
  User access token to generate from https://huggingface.co/settings/token.
- **add_to_git_credential** (`bool`, defaults to `False`) --
  If `True`, token will be set as git credential. If no git credential helper
  is configured, a warning will be displayed to the user. If `token` is `None`,
  the value of `add_to_git_credential` is ignored and the end user will be
  prompted again.
- **skip_if_logged_in** (`bool`, defaults to `False`) --
  If `True`, do not prompt for token if user is already logged in.</paramsdesc><paramgroups>0</paramgroups><raises>- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If an organization token is passed. Only personal account tokens are valid
  to log in.
- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If token is invalid.
- [`ImportError`](https://docs.python.org/3/library/exceptions.html#ImportError) -- 
  If running in a notebook but `ipywidgets` is not installed.</raises><raisederrors>``ValueError`` or ``ImportError``</raisederrors></docstring>
Login the machine to access the Hub.

The `token` is persisted in cache and set as a git credential. Once done, the machine
is logged in and the access token will be available across all `huggingface_hub`
components. If `token` is not provided, it will be prompted to the user either with
a widget (in a notebook) or via the terminal.

To log in from outside of a script, one can also use `hf auth login` which is
a cli command that wraps [login()](/docs/huggingface_hub/main/ko/package_reference/login#huggingface_hub.login).

> [!TIP]
> [login()](/docs/huggingface_hub/main/ko/package_reference/login#huggingface_hub.login) is a drop-in replacement method for [notebook_login()](/docs/huggingface_hub/main/ko/package_reference/login#huggingface_hub.notebook_login) as it wraps and
> extends its capabilities.

> [!TIP]
> When the token is not passed, [login()](/docs/huggingface_hub/main/ko/package_reference/login#huggingface_hub.login) will automatically detect if the script runs
> in a notebook or not. However, this detection might not be accurate due to the
> variety of notebooks that exists nowadays. If that is the case, you can always force
> the UI by using [notebook_login()](/docs/huggingface_hub/main/ko/package_reference/login#huggingface_hub.notebook_login) or [interpreter_login()](/docs/huggingface_hub/main/ko/package_reference/login#huggingface_hub.interpreter_login).








</div>

## interpreter_login[[interpreter_login]][[huggingface_hub.interpreter_login]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.interpreter_login</name><anchor>huggingface_hub.interpreter_login</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_login.py#L231</source><parameters>[{"name": "skip_if_logged_in", "val": ": bool = False"}]</parameters><paramsdesc>- **skip_if_logged_in** (`bool`, defaults to `False`) --
  If `True`, do not prompt for token if user is already logged in.</paramsdesc><paramgroups>0</paramgroups></docstring>

Displays a prompt to log in to the HF website and store the token.

This is equivalent to [login()](/docs/huggingface_hub/main/ko/package_reference/login#huggingface_hub.login) without passing a token when not run in a notebook.
[interpreter_login()](/docs/huggingface_hub/main/ko/package_reference/login#huggingface_hub.interpreter_login) is useful if you want to force the use of the terminal prompt
instead of a notebook widget.

For more details, see [login()](/docs/huggingface_hub/main/ko/package_reference/login#huggingface_hub.login).




</div>

## notebook_login[[notebook_login]][[huggingface_hub.notebook_login]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.notebook_login</name><anchor>huggingface_hub.notebook_login</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_login.py#L293</source><parameters>[{"name": "skip_if_logged_in", "val": ": bool = False"}]</parameters><paramsdesc>- **skip_if_logged_in** (`bool`, defaults to `False`) --
  If `True`, do not prompt for token if user is already logged in.</paramsdesc><paramgroups>0</paramgroups></docstring>

Displays a widget to log in to the HF website and store the token.

This is equivalent to [login()](/docs/huggingface_hub/main/ko/package_reference/login#huggingface_hub.login) without passing a token when run in a notebook.
[notebook_login()](/docs/huggingface_hub/main/ko/package_reference/login#huggingface_hub.notebook_login) is useful if you want to force the use of the notebook widget
instead of a prompt in the terminal.

For more details, see [login()](/docs/huggingface_hub/main/ko/package_reference/login#huggingface_hub.login).




</div>

## logout[[logout]][[huggingface_hub.logout]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>huggingface_hub.logout</name><anchor>huggingface_hub.logout</anchor><source>https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/_login.py#L119</source><parameters>[{"name": "token_name", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **token_name** (`str`, *optional*) --
  Name of the access token to logout from. If `None`, will log out from all saved access tokens.</paramsdesc><paramgroups>0</paramgroups><raises>- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) -- 
  If the access token name is not found.</raises><raisederrors>``ValueError``</raisederrors></docstring>
Logout the machine from the Hub.

Token is deleted from the machine and removed from git credential.








</div>

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/package_reference/login.md" />

### Webhooks server[[webhooks-server]]
https://huggingface.co/docs/huggingface_hub/main/ko/guides/webhooks_server.md

# Webhooks server[[webhooks-server]]

Webhooks are a foundation for MLOps-related features. They allow you to listen for new changes on
specific repos or to all repos belonging to particular users or organizations you're interested in.
This guide explains how to leverage `huggingface_hub` to create a server listening to webhooks and deploy it to a Space.
It assumes you are familiar with the concept of webhooks on the Huggingface Hub.
To learn more about webhooks themselves, read this [guide](https://huggingface.co/docs/hub/webhooks) first.

The base class we will use in this guide is [WebhooksServer()](/docs/huggingface_hub/main/ko/package_reference/webhooks_server#huggingface_hub.WebhooksServer).
It makes it easy to configure a server that can receive webhooks from the Huggingface Hub. The server is based on a [Gradio](https://gradio.app/) app.
It has a UI displaying instructions for users and an API listening to webhooks.

> [!TIP]
> To see a running example of a webhooks server, check out the [Spaces CI Bot](https://huggingface.co/spaces/spaces-ci-bot/webhook).
> It is a Space that launches ephemeral environments when a PR is opened on a Space.

> [!WARNING]
> This is an [experimental feature](../package_reference/environment_variables#hfhubdisableexperimentalwarning).
> The API is still being refined, and breaking changes may be introduced in the future without prior notice.
> Make sure to pin the version of `huggingface_hub` in your requirements.


## Create an endpoint[[create-an-endpoint]]

Implementing a webhook endpoint is as simple as decorating a function.
Let's walk through a first example to explain the main concepts:

```python
# app.py
from huggingface_hub import webhook_endpoint, WebhookPayload

@webhook_endpoint
async def trigger_training(payload: WebhookPayload) -> None:
    if payload.repo.type == "dataset" and payload.event.action == "update":
        # Trigger a training job if a dataset is updated
        ...
```

Save this snippet in a file called `app.py` and run it with `python app.py`. You should see a message like this:

```text
Webhook secret is not defined. This means your webhook endpoints will be open to everyone.
To add a secret, set `WEBHOOK_SECRET` as environment variable or pass it at initialization:
        `app = WebhooksServer(webhook_secret='my_secret', ...)`
For more details about webhook secrets, please refer to https://huggingface.co/docs/hub/webhooks#webhook-secret.
Running on local URL:  http://127.0.0.1:7860
Running on public URL: https://1fadb0f52d8bf825fc.gradio.live

This share link expires in 72 hours. For free permanent hosting and GPU upgrades (NEW!), check out Spaces: https://huggingface.co/spaces

Webhooks are correctly setup and ready to use:
  - POST https://1fadb0f52d8bf825fc.gradio.live/webhooks/trigger_training
Go to https://huggingface.co/settings/webhooks to setup your webhooks.
```

Congratulations! You just launched a webhook server! Let's break down what happened exactly:

1. By decorating a function with [webhook_endpoint()](/docs/huggingface_hub/main/ko/package_reference/webhooks_server#huggingface_hub.webhook_endpoint), a [WebhooksServer()](/docs/huggingface_hub/main/ko/package_reference/webhooks_server#huggingface_hub.WebhooksServer) object was created in the background.
As you can see, this server is a Gradio app running on http://127.0.0.1:7860.
If you open this URL in your browser, you will see a landing page with instructions about the registered webhooks.
2. Under the hood, a Gradio app is a FastAPI server. A new POST route `/webhooks/trigger_training` was added to it.
This is the route that listens to webhooks and runs the `trigger_training` function when triggered.
FastAPI automatically parses the payload and passes it to the function as a [WebhookPayload](/docs/huggingface_hub/main/ko/package_reference/webhooks_server#huggingface_hub.WebhookPayload) object.
This is a `pydantic` object that contains all the information about the event that triggered the webhook.
3. The Gradio app also opened a tunnel to receive requests from the internet.
This is the interesting part: you can configure a webhook on https://huggingface.co/settings/webhooks pointing to your local machine.
This is useful to debug your webhook server and iterate quickly before deploying it to a Space.
4. Finally, the logs also tell you that your server is currently not secured by a secret.
This is not problematic for local debugging, but is something to keep in mind for later.
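The condition checked by `trigger_training` can be exercised locally with a plain dict mimicking the payload. The payload below is minimal and hypothetical; real Hub payloads carry many more fields (see the `WebhookPayload` reference):

```python
def should_trigger_training(payload: dict) -> bool:
    # Same condition as in the endpoint, expressed over a plain dict
    return payload["repo"]["type"] == "dataset" and payload["event"]["action"] == "update"

# Minimal, hypothetical payload with only the fields the endpoint inspects
payload = {
    "event": {"action": "update", "scope": "repo.content"},
    "repo": {"type": "dataset", "name": "user/my-dataset"},
}

print(should_trigger_training(payload))  # True
```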

> [!WARNING]
> By default, the server is started at the end of your script.
> If you are running it in a notebook, you can start the server manually by calling `decorated_function.run()`.
> Since a unique server is used, you only have to start the server once even if you have multiple endpoints.


## Configure a webhook[[configure-a-webhook]]

Now that you have a webhook server running, you need to configure a webhook to start receiving messages.
Go to https://huggingface.co/settings/webhooks, click on "Add a new webhook" and configure your webhook.
Set the target repositories you want to watch and the webhook URL, here `https://1fadb0f52d8bf825fc.gradio.live/webhooks/trigger_training`.

<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/configure_webhook.png"/>
</div>

And that's it! You can now trigger the webhook by updating the target repository (e.g. push a commit).
Check the Activity tab of your webhook to see the events that have been triggered.
Now that you have a working setup, you can test it and iterate quickly.
If you modify your code and restart the server, your public URL might change.
Make sure to update the webhook configuration on the Hub if needed.

## Space에 배포하기[[deploy-to-a-space]]

이제 작동하는 웹훅 서버가 마련되었으므로, 다음 목표는 이를 Space에 배포하는 것입니다. https://huggingface.co/new-space 에 가서 Space를 생성합니다. 
이름을 지정하고, Gradio SDK를 선택한 다음 "Create Space"를 클릭합니다. 코드를 `app.py` 파일로 Space에 업로드합니다.
Space가 자동으로 시작됩니다!
Space에 대한 자세한 내용은 이 [가이드](https://huggingface.co/docs/hub/spaces-overview)를 참조하세요.

Your webhook server is now running on a public Space. In most cases, you will want to secure it with a secret.
Go to your Space settings > section "Repository secrets" > "Add a secret". Set the `WEBHOOK_SECRET` environment variable to the value of your choice.
Go back to your [webhooks settings](https://huggingface.co/settings/webhooks) and set the secret in the webhook configuration.
Now, only requests with the correct secret will be accepted by your server.
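Conceptually, the server compares the secret carried by each incoming request with the configured one before dispatching it. A sketch of such a check (`WebhooksServer` performs this for you; the header name and helper below are illustrative assumptions):

```python
import hmac

def is_authorized(headers: dict, secret: str) -> bool:
    # Compare the received secret with the configured one in constant time
    received = headers.get("x-webhook-secret", "")
    return hmac.compare_digest(received, secret)

print(is_authorized({"x-webhook-secret": "my_secret_key"}, "my_secret_key"))  # True
print(is_authorized({}, "my_secret_key"))  # False
```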

And this is it! Your Space is now ready to receive webhooks from the Hub.
Please keep in mind that if you run the Space on free 'cpu-basic' hardware, it will be shut down after 48 hours of inactivity.
If you need a permanent Space, you should consider setting it to [upgraded hardware](https://huggingface.co/docs/hub/spaces-gpus#hardware-specs).

## Advanced usage[[advanced-usage]]

The guide above explained the quickest way to set up a [WebhooksServer()](/docs/huggingface_hub/main/ko/package_reference/webhooks_server#huggingface_hub.WebhooksServer).
In this section, we will see how to customize it further.

### Multiple endpoints[[multiple-endpoints]]

You can register multiple endpoints on the same server.
For example, you may want one endpoint to trigger a training job and another one to trigger a model evaluation.
You can do this by adding multiple `@webhook_endpoint` decorators:

```python
# app.py
from huggingface_hub import webhook_endpoint, WebhookPayload

@webhook_endpoint
async def trigger_training(payload: WebhookPayload) -> None:
    if payload.repo.type == "dataset" and payload.event.action == "update":
        # Trigger a training job if a dataset is updated
        ...

@webhook_endpoint
async def trigger_evaluation(payload: WebhookPayload) -> None:
    if payload.repo.type == "model" and payload.event.action == "update":
        # 모델이 업데이트되면 평가 작업을 트리거합니다. 
        ...
```

이렇게 하면 두 개의 엔드포인트가 생성됩니다:

```text
(...)
Webhooks are correctly setup and ready to use:
  - POST https://1fadb0f52d8bf825fc.gradio.live/webhooks/trigger_training
  - POST https://1fadb0f52d8bf825fc.gradio.live/webhooks/trigger_evaluation
```

### 사용자 정의 서버[[custom-server]]

더 많은 유연성을 얻기 위해 [WebhooksServer()](/docs/huggingface_hub/main/ko/package_reference/webhooks_server#huggingface_hub.WebhooksServer) 객체를 직접 생성할 수도 있습니다. 
이것은 서버의 랜딩 페이지를 사용자 정의하고자 할 때 유용합니다. 
기본 페이지를 덮어쓸 [Gradio UI](https://gradio.app/docs/#blocks)를 전달하여 이를 수행할 수 있습니다. 
예를 들어, 사용자를 위한 지침을 추가하거나 웹훅을 수동으로 트리거하는 양식을 추가할 수 있습니다. 
[WebhooksServer()](/docs/huggingface_hub/main/ko/package_reference/webhooks_server#huggingface_hub.WebhooksServer)를 생성할 때, `add_webhook()` 데코레이터를 사용하여 새로운 웹훅을 등록할 수 있습니다.

전체 예제는 다음과 같습니다:

```python
import gradio as gr
from fastapi import Request
from huggingface_hub import WebhooksServer, WebhookPayload

# 1. UI 정의
with gr.Blocks() as ui:
    ...

# 2. 사용자 정의 UI와 시크릿으로 WebhooksServer 생성
app = WebhooksServer(ui=ui, webhook_secret="my_secret_key")

# 3. 명시적 이름으로 웹훅 등록
@app.add_webhook("/say_hello")
async def hello(payload: WebhookPayload):
    return {"message": "hello"}

# 4. 암시적 이름으로 웹훅 등록
@app.add_webhook
async def goodbye(payload: WebhookPayload):
    return {"message": "goodbye"}

# 5. 서버 시작 (선택 사항)
app.run()
```

1. Gradio 블록을 사용하여 사용자 정의 UI를 정의합니다. 이 UI는 서버의 랜딩 페이지에 표시됩니다.
2. 사용자 정의 UI와 시크릿으로 [WebhooksServer()](/docs/huggingface_hub/main/ko/package_reference/webhooks_server#huggingface_hub.WebhooksServer) 객체를 생성합니다. 
시크릿은 선택 사항이며 `WEBHOOK_SECRET` 환경 변수로 설정할 수 있습니다. 
3. 명시적 이름으로 웹훅을 등록합니다. 이렇게 하면 `/webhooks/say_hello` 엔드포인트가 생성됩니다.
4. 암시적 이름으로 웹훅을 등록합니다. 이렇게 하면 `/webhooks/goodbye` 엔드포인트가 생성됩니다.
5. 서버를 시작합니다. 이것은 선택 사항이며 스크립트 끝에서 자동으로 서버가 시작됩니다.


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/guides/webhooks_server.md" />

### Collections[[collections]]
https://huggingface.co/docs/huggingface_hub/main/ko/guides/collections.md

# Collections[[collections]]

Collection은 Hub(모델, 데이터셋, Spaces, 논문)에 있는 관련 항목들의 그룹으로, 같은 페이지에 함께 구성되어 있습니다. Collections는 자신만의 포트폴리오를 만들거나, 카테고리별로 콘텐츠를 북마크 하거나, 공유하고 싶은 item들의 큐레이팅 된 목록을 제시하는 데 유용합니다. 여기 [가이드](https://huggingface.co/docs/hub/collections)를 확인하여 Collections가 무엇이고 Hub에서 어떻게 보이는지 자세히 알아보세요.

브라우저에서 직접 Collections를 관리할 수 있지만, 이 가이드에서는 프로그래밍 방식으로 Collection을 관리하는 방법에 초점을 맞추겠습니다.

## Collection 가져오기[[fetch-a-collection]]

[get_collection()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.get_collection)을 사용하여 자신의 Collections나 공개된 Collection을 가져올 수 있습니다. Collection을 가져오려면 Collection의 *slug*가 필요합니다. Slug는 제목과 고유한 ID를 기반으로 한 Collection의 식별자입니다. Collection 페이지의 URL에서 slug를 찾을 수 있습니다.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hfh_collection_slug.png"/>
</div>
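Collection 페이지 URL은 `https://huggingface.co/collections/<owner>/<slug>` 형태이므로, 다음과 같이 표준 라이브러리로 URL에서 slug를 잘라낼 수 있습니다. 이 헬퍼 함수는 설명을 위한 가정이며 라이브러리가 제공하는 함수가 아닙니다.

```python
from urllib.parse import urlparse

def slug_from_collection_url(url: str) -> str:
    # https://huggingface.co/collections/<owner>/<title-id> 형태의 URL에서 slug를 추출합니다.
    parts = urlparse(url).path.strip("/").split("/")
    if len(parts) < 3 or parts[0] != "collections":
        raise ValueError("Collection 페이지 URL이 아닙니다")
    return f"{parts[1]}/{parts[2]}"

print(slug_from_collection_url("https://huggingface.co/collections/TheBloke/recent-models-64f9a55bb3115b4f513ec026"))
# TheBloke/recent-models-64f9a55bb3115b4f513ec026
```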

`"TheBloke/recent-models-64f9a55bb3115b4f513ec026"` Collection을 가져와 봅시다:

```py
>>> from huggingface_hub import get_collection
>>> collection = get_collection("TheBloke/recent-models-64f9a55bb3115b4f513ec026")
>>> collection
Collection(
  slug='TheBloke/recent-models-64f9a55bb3115b4f513ec026',
  title='Recent models',
  owner='TheBloke',
  items=[...],
  last_updated=datetime.datetime(2023, 10, 2, 22, 56, 48, 632000, tzinfo=datetime.timezone.utc),
  position=1,
  private=False,
  theme='green',
  upvotes=90,
  description="Models I've recently quantized. Please note that currently this list has to be updated manually, and therefore is not guaranteed to be up-to-date."
)
>>> collection.items[0]
CollectionItem(
  item_object_id='651446103cd773a050bf64c2',
  item_id='TheBloke/U-Amethyst-20B-AWQ',
  item_type='model',
  position=88,
  note=None
)
```

[get_collection()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.get_collection)에 의해 반환된 [Collection](/docs/huggingface_hub/main/ko/package_reference/collections#huggingface_hub.Collection) 객체에는 다음이 포함되어 있습니다:
- 높은 수준의 메타데이터: `slug`, `owner`, `title`, `description` 등
- [CollectionItem](/docs/huggingface_hub/main/ko/package_reference/collections#huggingface_hub.CollectionItem) 객체의 목록; 각 항목은 모델, 데이터셋, Space 또는 논문을 나타냅니다.

모든 Collection 항목에는 다음이 보장됩니다:
- 고유한 `item_object_id`: 데이터베이스에서 Collection 항목의 id
- 기본 항목(모델, 데이터셋, Space, 논문)의 Hub에서의 `item_id`; 고유하지 않으며, `item_id`/`item_type` 쌍만 고유합니다.
- `item_type`: 모델, 데이터셋, Space, 논문
- Collection에서 항목의 `position`으로, 이를 업데이트하여 Collection을 재구성할 수 있습니다(아래의 [update_collection_item()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.update_collection_item) 참조)
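`item_id`는 고유하지 않고 `item_id`/`item_type` 쌍만 고유하다는 점은 다음과 같이 확인해 볼 수 있습니다. 아래 항목 목록은 실제 API 응답이 아닌 설명용 가정입니다.

```python
# 같은 item_id를 가진 모델과 Space가 한 Collection에 공존할 수 있습니다.
items = [
    {"item_id": "warp-ai/wuerstchen", "item_type": "model"},
    {"item_id": "warp-ai/wuerstchen", "item_type": "space"},  # 같은 item_id, 다른 item_type
]

unique_pairs = {(it["item_id"], it["item_type"]) for it in items}
unique_ids = {it["item_id"] for it in items}
print(len(unique_pairs), len(unique_ids))  # 쌍은 2개, id는 1개
```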

각 항목에는 추가 정보(코멘트, 블로그 포스트 링크 등)를 위한 `note`도 첨부될 수 있습니다. 항목에 note가 없으면 해당 속성값은 `None`이 됩니다.

이러한 기본 속성 외에도, 반환된 항목은 유형에 따라 추가 속성(`author`, `private`, `lastModified`, `gated`, `title`, `likes`, `upvotes` 등)을 가질 수 있습니다. 그러나 이러한 속성이 반환된다는 보장은 없습니다.

## Collections 나열하기[[list-collections]]

[list_collections()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.list_collections)를 사용하여 Collections를 나열할 수도 있습니다. Collections는 몇 가지 매개변수를 사용하여 필터링할 수 있습니다. 사용자 [`teknium`](https://huggingface.co/teknium)의 모든 Collections를 나열해 봅시다.

```py
>>> from huggingface_hub import list_collections

>>> collections = list_collections(owner="teknium")
```

이렇게 하면 `Collection` 객체를 순회할 수 있는 이터러블이 반환됩니다. 예를 들어 각 Collection의 upvote 수를 출력하도록 반복할 수 있습니다.

```py
>>> for collection in collections:
...   print("Number of upvotes:", collection.upvotes)
Number of upvotes: 1
Number of upvotes: 5
```

> [!WARNING]
> Collections를 나열할 때, 각 Collection의 항목 목록은 최대 4개 항목으로 잘립니다. Collection의 모든 항목을 가져오려면 [get_collection()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.get_collection)을 사용해야 합니다.

고급 필터링을 수행할 수 있습니다. 예를 들어 모델 [TheBloke/OpenHermes-2.5-Mistral-7B-GGUF](https://huggingface.co/TheBloke/OpenHermes-2.5-Mistral-7B-GGUF)를 포함하는 트렌딩 순으로 정렬된 Collections를 5개까지만 가져올 수 있습니다.

```py
>>> collections = list_collections(item="models/TheBloke/OpenHermes-2.5-Mistral-7B-GGUF", sort="trending", limit=5)
>>> for collection in collections:
...   print(collection.slug)
teknium/quantized-models-6544690bb978e0b0f7328748
AmeerH/function-calling-65560a2565d7a6ef568527af
PostArchitekt/7bz-65479bb8c194936469697d8c
gnomealone/need-to-test-652007226c6ce4cdacf9c233
Crataco/favorite-7b-models-651944072b4fffcb41f8b568
```

`sort` 매개변수는 `"last_modified"`, `"trending"` 또는 `"upvotes"` 중 하나여야 합니다. `item` 매개변수는 특정 항목을 받습니다. 예를 들면 다음과 같습니다:
* `"models/teknium/OpenHermes-2.5-Mistral-7B"`
* `"spaces/julien-c/open-gpt-rhyming-robot"`
* `"datasets/squad"`
* `"papers/2311.12983"`

자세한 내용은 [list_collections()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.list_collections) 참조를 확인하시기 바랍니다.

## 새 Collection 만들기[[create-a-new-collection]]

이제 [Collection](/docs/huggingface_hub/main/ko/package_reference/collections#huggingface_hub.Collection)을 가져오는 방법을 알았으니 우리만의 Collection을 만들어봅시다! 제목과 설명을 사용하여 [create_collection()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_collection)을 호출합니다. 조직 페이지에 Collection을 만들려면 Collection 생성 시 `namespace="my-cool-org"`를 전달합니다. 마지막으로 `private=True`를 전달하여 비공개 Collection을 만들 수도 있습니다.

```py
>>> from huggingface_hub import create_collection

>>> collection = create_collection(
...     title="ICCV 2023",
...     description="Portfolio of models, papers and demos I presented at ICCV 2023",
... )
```

이렇게 하면 (제목, 설명, 소유자 등의) 높은 수준의 메타데이터와 빈 항목 목록을 가진 [Collection](/docs/huggingface_hub/main/ko/package_reference/collections#huggingface_hub.Collection) 객체가 반환됩니다. 이제 `slug`를 사용하여 이 Collection을 참조할 수 있습니다.

```py
>>> collection.slug
'owner/iccv-2023-15e23b46cb98efca45'
>>> collection.title
"ICCV 2023"
>>> collection.owner
"username"
>>> collection.url
'https://huggingface.co/collections/owner/iccv-2023-15e23b46cb98efca45'
```

## Collection의 item 관리[[manage-items-in-a-collection]]

이제 [Collection](/docs/huggingface_hub/main/ko/package_reference/collections#huggingface_hub.Collection)을 가지고 있으므로, 여기에 item을 추가하고 구성해봅시다.

### item 추가[[add-items]]

item은 [add_collection_item()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.add_collection_item)을 사용하여 하나씩 추가해야 합니다. `collection_slug`, `item_id`, `item_type`만 알면 됩니다. 또한 선택적으로 항목에 `note`를 추가할 수도 있습니다(최대 500자).

```py
>>> from huggingface_hub import create_collection, add_collection_item

>>> collection = create_collection(title="OS Week Highlights - Sept 18 - 24", namespace="osanseviero")
>>> collection.slug
"osanseviero/os-week-highlights-sept-18-24-650bfed7f795a59f491afb80"

>>> add_collection_item(collection.slug, item_id="coqui/xtts", item_type="space")
>>> add_collection_item(
...     collection.slug,
...     item_id="warp-ai/wuerstchen",
...     item_type="model",
...     note="Würstchen is a new fast and efficient high resolution text-to-image architecture and model"
... )
>>> add_collection_item(collection.slug, item_id="lmsys/lmsys-chat-1m", item_type="dataset")
>>> add_collection_item(collection.slug, item_id="warp-ai/wuerstchen", item_type="space") # 동일한 item_id, 다른 item_type
```

Collection에 item이 이미 존재하는 경우(동일한 `item_id`/`item_type` 쌍), HTTP 409 오류가 발생합니다. `exists_ok=True`를 설정하면 이 오류를 무시할 수 있습니다.

### 기존 item에 메모 추가[[add-a-note-to-an-existing-item]]

[update_collection_item()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.update_collection_item)을 사용하여 기존 item을 수정하여 메모를 추가하거나 변경할 수 있습니다. 위의 예시를 다시 사용해 봅시다:

```py
>>> from huggingface_hub import get_collection, update_collection_item

# 새로 추가된 item과 함께 Collection 가져오기
>>> collection_slug = "osanseviero/os-week-highlights-sept-18-24-650bfed7f795a59f491afb80"
>>> collection = get_collection(collection_slug)

# `lmsys-chat-1m` 데이터셋에 메모 추가
>>> update_collection_item(
...     collection_slug=collection_slug,
...     item_object_id=collection.items[2].item_object_id,
...     note="This dataset contains one million real-world conversations with 25 state-of-the-art LLMs.",
... )
```

### item 재정렬[[reorder-items]]

Collection의 item은 순서가 있습니다. 이 순서는 각 item의 `position` 속성에 의해 결정됩니다. 기본적으로 item은 Collection의 끝에 추가되는 방식으로 순서가 지정됩니다. [update_collection_item()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.update_collection_item)을 사용하여 메모를 추가하는 것과 같은 방식으로 순서를 업데이트할 수 있습니다.

위의 예시를 다시 사용해 봅시다:

```py
>>> from huggingface_hub import get_collection, update_collection_item

# Collection 가져오기
>>> collection_slug = "osanseviero/os-week-highlights-sept-18-24-650bfed7f795a59f491afb80"
>>> collection = get_collection(collection_slug)

# 두 개의 `Wuerstchen` item을 함께 배치하도록 재정렬
>>> update_collection_item(
...     collection_slug=collection_slug,
...     item_object_id=collection.items[3].item_object_id,
...     position=2,
... )
```
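`position` 변경이 순서에 미치는 효과는 Hub 호출 없이 로컬 리스트 연산으로 생각해 볼 수 있습니다. 아래 헬퍼는 설명용 스케치이며 라이브러리의 일부가 아닙니다.

```python
def move_item(item_ids: list, from_index: int, to_position: int) -> list:
    # from_index의 항목을 꺼내 to_position 위치에 다시 삽입합니다.
    item_ids = list(item_ids)
    item_ids.insert(to_position, item_ids.pop(from_index))
    return item_ids

# 인덱스 3의 항목을 position 2로 이동 (두 항목을 나란히 배치)
print(move_item(["a", "b", "c", "d"], 3, 2))  # ['a', 'b', 'd', 'c']
```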

### item 제거[[remove-items]]

마지막으로 [delete_collection_item()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.delete_collection_item)을 사용하여 item을 제거할 수도 있습니다.

```py
>>> from huggingface_hub import get_collection, delete_collection_item

# Collection 가져오기
>>> collection_slug = "osanseviero/os-week-highlights-sept-18-24-650bfed7f795a59f491afb80"
>>> collection = get_collection(collection_slug)

# 목록에서 `coqui/xtts` Space 제거
>>> delete_collection_item(collection_slug=collection_slug, item_object_id=collection.items[0].item_object_id)
```

## Collection 삭제[[delete-collection]]

[delete_collection()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.delete_collection)을 사용하여 Collection을 삭제할 수 있습니다.

> [!WARNING]
> 이 작업은 되돌릴 수 없습니다. 삭제된 Collection은 복구할 수 없습니다.

```py
>>> from huggingface_hub import delete_collection
>>> collection = delete_collection("username/useless-collection-64f9a55bb3115b4f513ec026", missing_ok=True)
```

<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/guides/collections.md" />

### Hub에서 파일 다운로드하기[[download-files-from-the-hub]]
https://huggingface.co/docs/huggingface_hub/main/ko/guides/download.md

# Hub에서 파일 다운로드하기[[download-files-from-the-hub]]

`huggingface_hub` 라이브러리는 Hub의 저장소에서 파일을 다운로드하는 기능을 제공합니다. 이 기능은 함수로 직접 사용할 수 있고, 사용자가 만든 라이브러리에 통합하여 Hub와 쉽게 상호 작용할 수 있도록 할 수 있습니다. 이 가이드에서는 다음 내용을 다룹니다:

* 파일 하나를 다운로드하고 캐시하는 방법
* 리포지토리 전체를 다운로드하고 캐시하는 방법
* 로컬 폴더에 파일을 다운로드하는 방법

## 파일 하나만 다운로드하기[[download-a-single-file]]

`hf_hub_download()` 함수를 사용하면 Hub에서 파일을 다운로드할 수 있습니다. 이 함수는 원격 파일을 다운로드하여 (버전별로) 디스크에 캐시하고, 로컬 파일 경로를 반환합니다.

> [!TIP]
> 반환된 파일 경로는 HF 로컬 캐시의 위치를 가리킵니다. 그러므로 캐시가 손상되지 않도록 파일을 수정하지 않는 것이 좋습니다. 캐시가 어떻게 작동하는지 자세히 알고 싶으시면 [캐싱 가이드](./manage-cache)를 참조하세요.

### 최신 버전에서 파일 다운로드하기[[from-latest-version]]

다운로드할 파일을 선택하기 위해 `repo_id`, `repo_type`, `filename` 매개변수를 사용합니다. `repo_type` 매개변수를 생략하면 파일은 `model` 리포의 일부라고 간주됩니다.

```python
>>> from huggingface_hub import hf_hub_download
>>> hf_hub_download(repo_id="lysandre/arxiv-nlp", filename="config.json")
'/root/.cache/huggingface/hub/models--lysandre--arxiv-nlp/snapshots/894a9adde21d9a3e3843e6d5aeaaf01875c7fade/config.json'

# 데이터세트의 경우
>>> hf_hub_download(repo_id="google/fleurs", filename="fleurs.py", repo_type="dataset")
'/root/.cache/huggingface/hub/datasets--google--fleurs/snapshots/199e4ae37915137c555b1765c01477c216287d34/fleurs.py'
```

### 특정 버전에서 파일 다운로드하기[[from-specific-version]]

기본적으로 `main` 브랜치의 최신 버전의 파일이 다운로드됩니다. 그러나 특정 버전의 파일을 다운로드하고 싶을 수도 있습니다. 예를 들어, 특정 브랜치, 태그, 커밋 해시 등에서 파일을 다운로드하고 싶을 수 있습니다. 이 경우 `revision` 매개변수를 사용하여 원하는 버전을 지정할 수 있습니다:

```python
# `v1.0` 태그에서 다운로드하기
>>> hf_hub_download(repo_id="lysandre/arxiv-nlp", filename="config.json", revision="v1.0")

# `test-branch` 브랜치에서 다운로드하기
>>> hf_hub_download(repo_id="lysandre/arxiv-nlp", filename="config.json", revision="test-branch")

# PR #3에서 다운로드하기
>>> hf_hub_download(repo_id="lysandre/arxiv-nlp", filename="config.json", revision="refs/pr/3")

# 특정 커밋 해시에서 다운로드하기
>>> hf_hub_download(repo_id="lysandre/arxiv-nlp", filename="config.json", revision="877b84a8f93f2d619faa2a6e514a32beef88ab0a")
```

**참고**: 커밋 해시를 사용할 때는 7자리의 짧은 커밋 해시가 아니라 전체 길이의 커밋 해시를 사용해야 합니다.
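전체 길이의 커밋 해시는 40자의 16진수 문자열입니다. 다음은 `revision` 값이 전체 해시인지 간단히 검사하는 스케치로, 라이브러리가 제공하는 검사 함수가 아니라 설명용 가정입니다.

```python
import re

def is_full_commit_hash(revision: str) -> bool:
    # 40자 길이의 소문자 16진수 문자열인지 확인합니다 (git 커밋 해시 형식 가정).
    return re.fullmatch(r"[0-9a-f]{40}", revision) is not None

print(is_full_commit_hash("877b84a8f93f2d619faa2a6e514a32beef88ab0a"))  # True
print(is_full_commit_hash("877b84a"))  # False — 짧은 해시는 허용되지 않습니다
```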

### 다운로드 URL 만들기[[construct-a-download-url]]

리포지토리에서 파일을 다운로드하는 데 사용할 URL을 만들고 싶은 경우 `hf_hub_url()` 함수를 사용하여 URL을 반환받을 수 있습니다. 이 함수는 `hf_hub_download()` 함수가 내부적으로 사용하는 URL을 생성한다는 점을 알아두세요.
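참고로 모델 리포지토리의 파일 다운로드 URL은 대체로 다음과 같은 형태로 조립됩니다. 이는 단순화된 스케치이며, 정확한 URL은 항상 `hf_hub_url()`이 반환하는 값을 사용하세요.

```python
def build_download_url(repo_id: str, filename: str, revision: str = "main") -> str:
    # 모델 리포지토리 기준의 단순화된 다운로드 URL 형식 (가정)
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

print(build_download_url("lysandre/arxiv-nlp", "config.json"))
# https://huggingface.co/lysandre/arxiv-nlp/resolve/main/config.json
```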

## 전체 리포지토리 다운로드하기[[download-an-entire-repository]]

`snapshot_download()` 함수는 특정 버전의 전체 리포지토리를 다운로드합니다. 이 함수는 내부적으로 `hf_hub_download()` 함수를 사용하므로, 다운로드한 모든 파일은 로컬 디스크에 캐시되어 저장됩니다. 다운로드는 여러 파일을 동시에 받아오기 때문에 빠르게 진행됩니다.

전체 리포지토리를 다운로드하려면 `repo_id`와 `repo_type`을 인자로 넘겨주면 됩니다:

```python
>>> from huggingface_hub import snapshot_download
>>> snapshot_download(repo_id="lysandre/arxiv-nlp")
'/home/lysandre/.cache/huggingface/hub/models--lysandre--arxiv-nlp/snapshots/894a9adde21d9a3e3843e6d5aeaaf01875c7fade'

# 또는 데이터세트의 경우
>>> snapshot_download(repo_id="google/fleurs", repo_type="dataset")
'/home/lysandre/.cache/huggingface/hub/datasets--google--fleurs/snapshots/199e4ae37915137c555b1765c01477c216287d34'
```

`snapshot_download()` 함수는 기본적으로 최신 버전의 리포지토리를 다운로드합니다. 특정 버전의 리포지토리를 다운로드하고 싶은 경우, `revision` 매개변수에 원하는 버전을 지정하면 됩니다:

```python
>>> from huggingface_hub import snapshot_download
>>> snapshot_download(repo_id="lysandre/arxiv-nlp", revision="refs/pr/1")
```

### 다운로드할 파일 선택하기[[filter-files-to-download]]

`snapshot_download()` 함수는 리포지토리를 쉽게 다운로드할 수 있도록 해줍니다. 그러나 리포지토리의 모든 내용을 다운로드하고 싶지 않을 수도 있습니다. 예를 들어, `.safetensors` 가중치만 사용하고 싶다면, 모든 `.bin` 파일을 다운로드하지 않도록 할 수 있습니다. `allow_patterns`와 `ignore_patterns` 매개변수를 사용하여 원하는 파일만 다운로드할 수 있습니다.

이 매개변수들은 하나의 패턴이나 패턴의 리스트를 받을 수 있습니다. 패턴은 [여기](https://tldp.org/LDP/GNU-Linux-Tools-Summary/html/x11655.htm)에서 설명된 것처럼 표준 와일드카드(글로빙 패턴)입니다. 패턴 매칭은 [`fnmatch`](https://docs.python.org/3/library/fnmatch.html)에 기반합니다.

예를 들어, `allow_patterns`를 사용하여 JSON 구성 파일만 다운로드하는 방법은 다음과 같습니다:

```python
>>> from huggingface_hub import snapshot_download
>>> snapshot_download(repo_id="lysandre/arxiv-nlp", allow_patterns="*.json")
```

반대로 `ignore_patterns`는 특정 파일을 다운로드에서 제외시킬 수 있습니다. 다음 예제는 `.msgpack`과 `.h5` 파일 확장자를 무시하는 방법입니다:

```python
>>> from huggingface_hub import snapshot_download
>>> snapshot_download(repo_id="lysandre/arxiv-nlp", ignore_patterns=["*.msgpack", "*.h5"])
```

마지막으로, 두 가지 매개변수를 함께 사용하여 다운로드를 정확하게 선택할 수 있습니다. 다음은 `vocab.json`을 제외한 모든 json 및 마크다운 파일을 다운로드하는 예제입니다.

```python
>>> from huggingface_hub import snapshot_download
>>> snapshot_download(repo_id="gpt2", allow_patterns=["*.md", "*.json"], ignore_patterns="vocab.json")
```
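위 예제의 선택 결과는 표준 라이브러리 `fnmatch`로 직접 재현해 볼 수 있습니다. 아래 함수는 allow/ignore 패턴이 적용되는 순서를 보여주는 설명용 스케치이며, `huggingface_hub`의 실제 내부 구현이 아닙니다.

```python
from fnmatch import fnmatch

def filter_files(files, allow_patterns=None, ignore_patterns=None):
    # 문자열 하나도 패턴 리스트처럼 다룹니다.
    if isinstance(allow_patterns, str):
        allow_patterns = [allow_patterns]
    if isinstance(ignore_patterns, str):
        ignore_patterns = [ignore_patterns]
    selected = []
    for f in files:
        if allow_patterns is not None and not any(fnmatch(f, p) for p in allow_patterns):
            continue  # allow 패턴에 해당하지 않으면 제외
        if ignore_patterns is not None and any(fnmatch(f, p) for p in ignore_patterns):
            continue  # ignore 패턴에 해당하면 제외
        selected.append(f)
    return selected

files = ["config.json", "vocab.json", "README.md", "model.safetensors"]
print(filter_files(files, allow_patterns=["*.md", "*.json"], ignore_patterns="vocab.json"))
# ['config.json', 'README.md']
```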

## 로컬 폴더에 파일 다운로드하기[[download-files-to-local-folder]]

Hub에서 파일을 다운로드하는 가장 좋은 (그리고 기본적인) 방법은 [캐시 시스템](./manage-cache)을 사용하는 것입니다.
캐시 위치는 `cache_dir` 매개변수로 설정하여 지정할 수 있습니다(`hf_hub_download()`과 `snapshot_download()`에서 모두 사용 가능).

그러나 파일을 다운로드하여 특정 폴더에 넣고 싶은 경우도 있습니다. 이 기능은 `git` 명령어가 제공하는 기능과 비슷한 워크플로우를 만들 수 있습니다. 이 경우 `local_dir`과 `local_dir_use_symlinks` 매개변수를 사용하여 원하는 대로 파일을 넣을 수 있습니다:
- `local_dir`은 시스템 내의 폴더 경로입니다. 다운로드한 파일은 리포지토리에 있는 것과 같은 파일 구조를 유지합니다. 예를 들어 `filename="data/train.csv"`와 `local_dir="path/to/folder"`라면, 반환된 파일 경로는 `"path/to/folder/data/train.csv"`가 됩니다.
- `local_dir_use_symlinks`는 파일을 로컬 폴더에 어떻게 넣을지 정의합니다.
  - 기본 동작(`"auto"`)은 작은 파일(5MB 이하)은 복사하고 큰 파일은 심볼릭 링크를 사용하는 것입니다. 심볼릭 링크를 사용하면 대역폭과 디스크 공간을 모두 절약할 수 있습니다. 그러나 심볼릭 링크된 파일을 직접 수정하면 캐시가 손상될 수 있으므로 작은 파일에 대해서는 복사를 사용합니다. 5MB 임계값은 `HF_HUB_LOCAL_DIR_AUTO_SYMLINK_THRESHOLD` 환경 변수로 설정할 수 있습니다.
  - `local_dir_use_symlinks=True`로 설정하면 디스크 공간을 최대한 절약하기 위해 모든 파일이 심볼릭 링크됩니다. 이는 예를 들어 수천 개의 작은 파일로 이루어진 대용량 데이터 세트를 다운로드할 때 유용합니다.
  - 마지막으로 심볼릭 링크를 전혀 사용하지 않으려면 심볼릭 링크를 비활성화하면 됩니다(`local_dir_use_symlinks=False`). 캐시 디렉토리는 파일이 이미 캐시되었는지 여부를 확인하는 데 계속 사용됩니다. 이미 캐시된 경우 파일이 캐시에서 **복사**됩니다(즉, 대역폭은 절약되지만 디스크 공간이 증가합니다). 파일이 아직 캐시되지 않은 경우 파일을 다운로드하여 로컬 디렉터리에 바로 넣습니다. 즉, 나중에 다른 곳에서 다시 사용하려면 **다시 다운로드**해야 합니다.
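`"auto"` 모드의 복사/심볼릭 링크 결정 로직은 다음과 같이 생각할 수 있습니다. 실제 구현을 단순화한 설명용 스케치라는 점에 유의하세요.

```python
import os

# 5MB 기본 임계값 (HF_HUB_LOCAL_DIR_AUTO_SYMLINK_THRESHOLD 환경 변수로 재정의 가능)
DEFAULT_THRESHOLD = 5 * 1024 * 1024

def use_symlink(file_size: int, mode="auto", threshold=None) -> bool:
    if threshold is None:
        threshold = int(os.environ.get("HF_HUB_LOCAL_DIR_AUTO_SYMLINK_THRESHOLD", DEFAULT_THRESHOLD))
    if mode is True:   # 항상 심볼릭 링크
        return True
    if mode is False:  # 심볼릭 링크 사용 안 함 (항상 복사/다운로드)
        return False
    return file_size > threshold  # "auto": 큰 파일만 심볼릭 링크

print(use_symlink(1024, threshold=DEFAULT_THRESHOLD))              # False — 작은 파일은 복사
print(use_symlink(10 * 1024 * 1024, threshold=DEFAULT_THRESHOLD))  # True — 큰 파일은 심볼릭 링크
```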

다음은 다양한 옵션을 요약한 표입니다. 이 표를 참고하여 자신의 사용 사례에 가장 적합한 매개변수를 선택하세요.

<!-- Generated with https://www.tablesgenerator.com/markdown_tables -->
| 파라미터 | 캐시되었는지 여부 | 반환된 파일경로 | 열람 권한 | 수정 권한 | 대역폭의 효율적인 사용 | 디스크의 효율적인 접근 |
|---|:---:|:---:|:---:|:---:|:---:|:---:|
| `local_dir=None` |  | 캐시 속 심볼릭 링크 | ✅ | ❌<br>_(저장하면 캐시가 손상됩니다)_ | ✅ | ✅ |
| `local_dir="path/to/folder"`<br>`local_dir_use_symlinks="auto"` |  | 폴더 속 파일 또는 심볼릭 링크 | ✅ | ✅ _(소규모 파일의 경우)_ <br> ⚠️ _(대규모 파일의 경우 저장하기 전에 경로를 생성하지 마세요)_ | ✅ | ✅ |
| `local_dir="path/to/folder"`<br>`local_dir_use_symlinks=True` |  | 폴더 속 심볼릭 링크 | ✅ | ⚠️<br>_(저장하기 전에 경로를 생성하지 마세요)_ | ✅ | ✅ |
| `local_dir="path/to/folder"`<br>`local_dir_use_symlinks=False` | 아니오 | 폴더 속 파일 | ✅ | ✅ | ❌<br>_(다시 실행하면 파일도 다시 다운로드됩니다)_ | ⚠️<br>(여러 폴더에서 실행하면 그만큼 복사본이 생깁니다) |
| `local_dir="path/to/folder"`<br>`local_dir_use_symlinks=False` | 예 | 폴더 속 파일 | ✅ | ✅ | ⚠️<br>_(파일이 캐시되어 있어야 합니다)_ | ❌<br>_(파일이 중복됩니다)_ |

**참고**: Windows 컴퓨터를 사용하는 경우 심볼릭 링크를 사용하려면 개발자 모드를 켜거나 관리자 권한으로 `huggingface_hub`를 실행해야 합니다. 자세한 내용은 [캐시 제한](../guides/manage-cache#limitations) 섹션을 참조하세요.

## CLI에서 파일 다운로드하기[[download-from-the-cli]]

터미널에서 `hf download` 명령어를 사용하면 Hub에서 파일을 바로 다운로드할 수 있습니다.
이 명령어는 내부적으로 앞서 설명한 `hf_hub_download()`과 `snapshot_download()` 함수를 사용하고, 다운로드한 파일의 로컬 경로를 터미널에 출력합니다:

```bash
>>> hf download gpt2 config.json
/home/wauplin/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/config.json
```

기본적으로 (`hf auth login` 명령으로) 로컬에 저장된 토큰을 사용합니다. 직접 인증하고 싶다면, `--token` 옵션을 사용하세요:

```bash
>>> hf download gpt2 config.json --token=hf_****
/home/wauplin/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/config.json
```

여러 파일을 한 번에 다운로드하면 진행률 표시줄이 보이고, 파일이 있는 스냅샷 경로가 반환됩니다:

```bash
>>> hf download gpt2 config.json model.safetensors
Fetching 2 files: 100%|████████████████████████████████████████████| 2/2 [00:00<00:00, 23831.27it/s]
/home/wauplin/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10
```

진행률 표시줄이나 잠재적 경고가 필요 없다면 `--quiet` 옵션을 사용하세요. 이 옵션은 스크립트에서 다른 명령어로 출력을 넘겨주려는 경우에 유용할 수 있습니다.

```bash
>>> hf download gpt2 config.json model.safetensors --quiet
/home/wauplin/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10
```

기본적으로 파일은 `HF_HOME` 환경 변수에 정의된 캐시 디렉터리(또는 지정하지 않은 경우 `~/.cache/huggingface/hub`)에 다운로드됩니다. 캐시 디렉터리는 `--cache-dir` 옵션으로 변경할 수 있습니다:

```bash
>>> hf download gpt2 config.json --cache-dir=./cache
./cache/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/config.json
```

캐시 디렉터리 구조를 따르지 않고 로컬 폴더에 파일을 다운로드하려면 `--local-dir` 옵션을 사용하세요.
로컬 폴더로 다운로드하면 이 [표](https://huggingface.co/docs/huggingface_hub/guides/download#download-files-to-local-folder)에 나열된 제한 사항이 있습니다.


```bash
>>> hf download gpt2 config.json --local-dir=./models/gpt2
./models/gpt2/config.json
```


다른 리포지토리 유형이나 버전에서 파일을 다운로드하거나 glob 패턴을 사용하여 다운로드할 파일을 선택하거나 제외하도록 지정할 수 있는 인수들이 더 있습니다:

```bash
>>> hf download bigcode/the-stack --repo-type=dataset --revision=v1.2 --include="data/python/*" --exclude="*.json" --exclude="*.zip"
Fetching 206 files:   100%|████████████████████████████████████████████| 206/206 [02:31<2:31, ?it/s]
/home/wauplin/.cache/huggingface/hub/datasets--bigcode--the-stack/snapshots/9ca8fa6acdbc8ce920a0cb58adcdafc495818ae7
```

인수들의 전체 목록을 보려면 다음 명령어를 실행하세요:

```bash
hf download --help
```


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/guides/download.md" />

### Space 관리하기[[manage-your-space]]
https://huggingface.co/docs/huggingface_hub/main/ko/guides/manage-spaces.md

# Space 관리하기[[manage-your-space]]

이 가이드에서는 `huggingface_hub`를 사용하여 Space 런타임([보안 정보](https://huggingface.co/docs/hub/spaces-overview#managing-secrets), [하드웨어](https://huggingface.co/docs/hub/spaces-gpus) 및 [저장소](https://huggingface.co/docs/hub/spaces-storage#persistent-storage))를 관리하는 방법을 살펴보겠습니다.

## 간단한 예제: 보안 정보 및 하드웨어 구성하기[[a-simple-example-configure-secrets-and-hardware]]

다음은 Hub에서 Space를 생성하고 설정하는 통합 예시입니다.

**1. Hub에 Space 생성하기.**

```py
>>> from huggingface_hub import HfApi
>>> repo_id = "Wauplin/my-cool-training-space"
>>> api = HfApi()

# Gradio SDK 예제
>>> api.create_repo(repo_id=repo_id, repo_type="space", space_sdk="gradio")
```

**1. (bis) Space 복제하기.**

처음부터 새로 구축하는 대신 기존의 Space에서부터 시작하고 싶을 때 유용할 수 있습니다. 또한 공개된 Space의 구성/설정을 제어하고 싶을 때도 유용합니다. 자세한 내용은 [duplicate_space()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.duplicate_space)를 참조하세요.

```py
>>> api.duplicate_space("multimodalart/dreambooth-training")
```

**2. 선호하는 솔루션을 사용하여 코드 업로드하기.**

다음은 로컬 폴더 `src/`를 사용자의 컴퓨터에서 Space로 업로드하는 예시입니다:

```py
>>> api.upload_folder(repo_id=repo_id, repo_type="space", folder_path="src/")
```

이 단계에서는 앱이 이미 무료로 Hub에서 실행 중이어야 합니다! 그러나 더 많은 보안 정보와 업그레이드된 하드웨어를 이용하여 추가적으로 구성할 수 있습니다.

**3. 보안 정보와 변수 설정하기**

Space에서 작동하려면 일부 보안 키, 토큰 또는 변수가 필요할 수 있습니다. 자세한 내용은 [문서](https://huggingface.co/docs/hub/spaces-overview#managing-secrets)를 참조하세요. Space에서 생성된 HF 토큰으로 이미지 데이터 세트를 Hub에 업로드하는 경우를 예로 들어봅시다.

```py
>>> api.add_space_secret(repo_id=repo_id, key="HF_TOKEN", value="hf_api_***")
>>> api.add_space_variable(repo_id=repo_id, key="MODEL_REPO_ID", value="user/repo")
```

보안 정보와 변수는 삭제할 수도 있습니다:
```py
>>> api.delete_space_secret(repo_id=repo_id, key="HF_TOKEN")
>>> api.delete_space_variable(repo_id=repo_id, key="MODEL_REPO_ID")
```

> [!TIP]
> Space 내에서 보안 정보는 환경 변수로 사용할 수 있습니다 (Streamlit를 사용하는 경우 Streamlit Secrets를 사용). API를 통해 가져올 필요가 없습니다!

> [!WARNING]
> Space 구성(보안 정보 또는 하드웨어)이 변경되면 앱이 다시 시작됩니다.

**보너스: Space 생성 또는 복제 시 보안 정보와 변수 설정하기!**

Space를 생성하거나 복제할 때 보안 정보와 변수를 설정할 수 있습니다:

```py
>>> api.create_repo(
...     repo_id=repo_id,
...     repo_type="space",
...     space_sdk="gradio",
...     space_secrets=[{"key": "HF_TOKEN", "value": "hf_api_***"}, ...],
...     space_variables=[{"key": "MODEL_REPO_ID", "value": "user/repo"}, ...],
... )
```

```py
>>> api.duplicate_space(
...     from_id=repo_id,
...     secrets=[{"key": "HF_TOKEN", "value": "hf_api_***"}, ...],
...     variables=[{"key": "MODEL_REPO_ID", "value": "user/repo"}, ...],
... )
```

**4. 하드웨어 구성**

기본적으로 Space는 무료로 CPU 환경에서 실행됩니다. GPU에서 실행하기 위해 하드웨어를 업그레이드 할 수도 있습니다. 하드웨어를 업그레이드하려면 결제 카드 또는 커뮤니티 그랜트가 필요합니다. 자세한 내용은 [문서](https://huggingface.co/docs/hub/spaces-gpus)를 참조하세요.

```py
# `SpaceHardware` enum 사용
>>> from huggingface_hub import SpaceHardware
>>> api.request_space_hardware(repo_id=repo_id, hardware=SpaceHardware.T4_MEDIUM)

# 또는 간단히 문자열 값 전달
>>> api.request_space_hardware(repo_id=repo_id, hardware="t4-medium")
```

Space가 서버에서 다시 로드되어야 하기 때문에 하드웨어 업데이트는 즉시 이루어지지 않습니다. Space가 어떤 하드웨어에서 실행되고 있는지 언제든지 확인하여 요청이 충족되었는지 확인할 수 있습니다.

```py
>>> runtime = api.get_space_runtime(repo_id=repo_id)
>>> runtime.stage
"RUNNING_BUILDING"
>>> runtime.hardware
"cpu-basic"
>>> runtime.requested_hardware
"t4-medium"
```
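하드웨어 요청이 충족될 때까지 기다리는 과정은 간단한 폴링 패턴으로 표현할 수 있습니다. 아래 헬퍼는 라이브러리 함수가 아니라 설명용 스케치이며, `get_stage` 자리에 예컨대 `lambda: api.get_space_runtime(repo_id=repo_id).stage` 같은 callable을 넣는 것을 가정합니다.

```python
import time

def wait_for_stage(get_stage, target="RUNNING", timeout=600, interval=10):
    # get_stage: 현재 stage 문자열을 반환하는 callable (가정)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        stage = get_stage()
        if stage == target:
            return stage
        time.sleep(interval)  # 잠시 대기 후 다시 확인
    raise TimeoutError(f"Space did not reach {target!r} within {timeout}s")
```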

이제 완전히 구성된 Space를 가지게 되었습니다. 사용이 끝난 후에는 Space를 "cpu-basic"으로 다운그레이드하는 것을 잊지 마세요.

**보너스: Space를 생성하거나 복제할 때 하드웨어 요청하기!**

Space가 구축되면 업그레이드된 하드웨어가 자동으로 할당됩니다.

```py
>>> api.create_repo(
...     repo_id=repo_id,
...     repo_type="space",
...     space_sdk="gradio",
...     space_hardware="cpu-upgrade",
...     space_storage="small",
...     space_sleep_time="7200", # 2시간을 초로 환산
... )
```
```py
>>> api.duplicate_space(
...     from_id=repo_id,
...     hardware="cpu-upgrade",
...     storage="small",
...     sleep_time="7200", # 2시간을 초로 환산
... )
```

**5. Space 일시 중지 및 다시 시작**

기본적으로 Space가 업그레이드된 하드웨어에서 실행 중이면 절대로 중단되지 않습니다. 그러나 요금이 부과되는 것을 피하려면 사용하지 않을 때 일시 중지하는 것이 좋습니다. 이는 [pause_space()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.pause_space)를 사용하여 가능합니다. 일시 중지된 Space는 Space 소유자가 UI를 통해 또는 [restart_space()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.restart_space)를 사용하여 API를 통해 다시 시작할 때까지 비활성화됩니다. 일시 중지된 모드에 대한 자세한 내용은 [이 섹션](https://huggingface.co/docs/hub/spaces-gpus#pause)을 참조하세요.

```py
# 과금을 피하기 위해 Space를 일시 중지하세요
>>> api.pause_space(repo_id=repo_id)
# (...)
# 필요할 때 다시 시작하세요
>>> api.restart_space(repo_id=repo_id)
```

다른 방법은 Space에 대한 제한 시간을 설정하는 것입니다. Space가 제한 시간을 초과하여 비활성화되면 Space가 sleep 상태로 전환됩니다. Space를 방문한 방문자가 다시 시작시킬 수 있습니다. [set_space_sleep_time()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.set_space_sleep_time)를 사용하여 제한 시간을 설정할 수 있습니다. Sleeping 모드에 대한 자세한 내용은 [이 섹션](https://huggingface.co/docs/hub/spaces-gpus#sleep-time)을 참조하세요.

```py
# 동작이 멈춘 후 1시간 후에 Space를 sleep 상태로 설정하세요
>>> api.set_space_sleep_time(repo_id=repo_id, sleep_time=3600)
```

참고: 'cpu-basic' 하드웨어를 사용하는 경우 사용자 정의 sleep 시간을 구성할 수 없습니다. Space가 48시간 동안 동작을 멈추면 자동으로 일시 중지됩니다.

**보너스: 하드웨어를 요청하는 동안 sleep 시간 설정하기**

업그레이드된 하드웨어가 Space에 자동으로 할당됩니다.

```py
>>> api.request_space_hardware(repo_id=repo_id, hardware=SpaceHardware.T4_MEDIUM, sleep_time=3600)
```

**보너스: Space를 생성하거나 복제할 때 sleep 시간 설정하기!**

```py
>>> api.create_repo(
...     repo_id=repo_id,
...     repo_type="space",
...     space_sdk="gradio",
...     space_hardware="t4-medium",
...     space_sleep_time="3600",
... )
```
```py
>>> api.duplicate_space(
...     from_id=repo_id,
...     hardware="t4-medium",
...     sleep_time="3600",
... )
```

**6. Space에 지속적으로 저장소 추가하기**

Space를 다시 시작할 때 지속적으로 디스크 공간에 접근할 수 있는 원하는 저장소 계층을 선택할 수 있습니다. 이는 기존의 하드 드라이브와 같이 디스크에서 읽고 쓸 수 있음을 의미합니다. 자세한 내용은 [문서](https://huggingface.co/docs/hub/spaces-storage#persistent-storage)를 참조하세요.

```py
>>> from huggingface_hub import SpaceStorage
>>> api.request_space_storage(repo_id=repo_id, storage=SpaceStorage.LARGE)
```

또한 모든 데이터를 영구적으로 삭제하여 저장소를 삭제할 수 있습니다.
```py
>>> api.delete_space_storage(repo_id=repo_id)
```

참고: 한 번 승인된 저장소의 저장소 계층을 낮출 수 없습니다. 그렇게 하려면, 먼저 저장소를 삭제한 다음 새로운 원하는 계층을 요청해야 합니다.

**보너스: Space를 생성하거나 복제할 때 저장소 요청하기!**

```py
>>> api.create_repo(
...     repo_id=repo_id,
...     repo_type="space",
...     space_sdk="gradio",
...     space_storage="large",
... )
```
```py
>>> api.duplicate_space(
...     from_id=repo_id,
...     storage="large",
... )
```

## 고급 기능: Space를 일시적으로 업그레이드하기![[more-advanced-temporarily-upgrade-your-space-]]

Space는 다양한 사용 사례를 허용합니다. 때로는 특정 하드웨어에서 Space를 일시적으로 실행한 다음 무언가를 수행한 후 종료하고 싶을 수 있습니다. 이 섹션에서는 Space를 활용하여 필요할 때 모델을 세밀하게 조정하는 방법에 대해 탐색할 것입니다. 이는 특정 문제를 해결하는 한 가지 방법에 불과합니다. 이를 바탕으로 사용 사례에 맞게 조정해서 사용해야 합니다.

모델을 세밀하게 조정하기 위한 Space가 있다고 가정해 봅시다. 입력으로 모델 ID와 데이터 세트 ID를 받는 Gradio 앱입니다. 작업 흐름은 다음과 같습니다:

0. (사용자에게 모델과 데이터 세트를 요청)
1. Hub에서 모델을 로드합니다.
2. Hub에서 데이터 세트를 로드합니다.
3. 데이터 세트로 모델을 미세 조정합니다.
4. 새 모델을 Hub에 업로드합니다.

단계 3.에서는 사용자 정의 하드웨어가 필요하지만 유료 GPU에서 Space를 항상 실행하고 싶지는 않을 것입니다. 이 때는 학습을 위해 하드웨어를 동적으로 요청한 다음 종료해야 합니다. 하드웨어를 요청하면 Space가 다시 시작되므로 앱은 현재 수행 중인 작업을 어떻게든 "기억"해야 합니다. 이를 수행하는 여러 가지 방법이 있습니다. 이 가이드에서는 "작업 스케줄러"로서 Dataset을 사용하는 하나의 해결책을 살펴보겠습니다.

### 앱 구조[[app-skeleton]]

다음은 구현된 앱의 모습입니다. 시작할 때 예약된 작업이 있는지 확인하고 있다면 적절한 하드웨어에서 실행합니다. 작업이 완료되면 하드웨어를 무료 요금제 CPU로 다시 설정하고 사용자에게 새 작업을 요청합니다.

> [!WARNING]
> 이 예시는 일반적인 데모처럼 병렬 액세스를 지원하지 않습니다. 특히 학습이 진행되는 동안 인터페이스가 비활성화됩니다. 저장소를 개인으로 설정하여 단일 사용자임을 보장하는 것이 좋습니다.

```py
import os

# Space는 하드웨어를 요청하기 위해 토큰이 필요합니다: Secret으로 설정하세요!
HF_TOKEN = os.environ.get("HF_TOKEN")

# Space를 가진 repo_id
TRAINING_SPACE_ID = "Wauplin/dreambooth-training"

import gradio as gr
from huggingface_hub import HfApi, SpaceHardware

api = HfApi(token=HF_TOKEN)

# Space 시작 시 예약된 작업을 확인합니다. 예약된 작업이 있는 경우 모델을 미세 조정합니다. 그렇지 않은 경우,
# 새 작업을 요청할 수 있는 인터페이스를 표시합니다.
task = get_task()
if task is None:
    # Gradio 앱 시작
    def gradio_fn(task):
        # 사용자 요청 시 작업 추가 및 하드웨어 요청
        add_task(task)
        api.request_space_hardware(repo_id=TRAINING_SPACE_ID, hardware=SpaceHardware.T4_MEDIUM)

    gr.Interface(fn=gradio_fn, ...).launch()
else:
    runtime = api.get_space_runtime(repo_id=TRAINING_SPACE_ID)
    # GPU를 사용 중인지 확인합니다.
    if runtime.hardware == SpaceHardware.T4_MEDIUM:
        # 그렇다면, 기본 모델을 데이터 세트로 미세 조정합니다!
        train_and_upload(task)

        # 그런 다음, 작업을 "DONE"으로 표시합니다.
        mark_as_done(task)

        # 잊지 말아야 할 것: CPU 하드웨어로 다시 설정
        api.request_space_hardware(repo_id=TRAINING_SPACE_ID, hardware=SpaceHardware.CPU_BASIC)
    else:
        api.request_space_hardware(repo_id=TRAINING_SPACE_ID, hardware=SpaceHardware.T4_MEDIUM)
```

### 작업 스케줄러[[task-scheduler]]

작업 스케줄링은 여러 가지 방법으로 수행할 수 있습니다. 여기서는 간단한 CSV 파일을 데이터 세트로 사용하여 작업을 스케줄링하는 예시를 살펴봅니다.

```py
import csv

from huggingface_hub import hf_hub_download

# 'tasks.csv' 파일을 포함하는 데이터 세트의 Dataset ID.
# 여기서는 입력(기본 모델 및 데이터 세트)과 상태(PENDING 또는 DONE)가 포함된 'tasks.csv' 기본 예제가 주어집니다.
#     multimodalart/sd-fine-tunable,Wauplin/concept-1,DONE
#     multimodalart/sd-fine-tunable,Wauplin/concept-2,PENDING
TASK_DATASET_ID = "Wauplin/dreambooth-task-scheduler"

def _get_csv_file():
    return hf_hub_download(repo_id=TASK_DATASET_ID, filename="tasks.csv", repo_type="dataset", token=HF_TOKEN)

def get_task():
    with open(_get_csv_file()) as csv_file:
        csv_reader = csv.reader(csv_file, delimiter=',')
        for row in csv_reader:
            if row[2] == "PENDING":
                return row[0], row[1] # model_id, dataset_id

def add_task(task):
    model_id, dataset_id = task
    with open(_get_csv_file()) as f:
        tasks = f.read()

    api.upload_file(
        repo_id=TASK_DATASET_ID,
        repo_type="dataset",
        path_in_repo="tasks.csv",
        # 작업을 추가하기 위한 빠르고 단순한 방법
        path_or_fileobj=(tasks + f"\n{model_id},{dataset_id},PENDING").encode()
    )

def mark_as_done(task):
    model_id, dataset_id = task
    with open(_get_csv_file()) as f:
        tasks = f.read()

    api.upload_file(
        repo_id=TASK_DATASET_ID,
        repo_type="dataset",
        path_in_repo="tasks.csv",
        # 작업을 DONE으로 설정하는 빠르고 단순한 방법
        path_or_fileobj=tasks.replace(
            f"{model_id},{dataset_id},PENDING",
            f"{model_id},{dataset_id},DONE"
        ).encode()
    )
```
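위 스케줄러의 상태 전이는 Hub 호출 없이도 로컬에서 확인해 볼 수 있습니다. 아래는 'tasks.csv'와 같은 단순 3열 형식을 가정한 설명용 스케치로, 위 코드의 핵심 로직(PENDING 조회와 DONE 전환)만 떼어낸 것입니다.

```python
import csv
import io

def get_pending(tasks_csv: str):
    # 첫 번째 PENDING 행의 (model_id, dataset_id)를 반환합니다.
    for row in csv.reader(io.StringIO(tasks_csv)):
        if row and row[2] == "PENDING":
            return row[0], row[1]
    return None

def mark_done(tasks_csv: str, task) -> str:
    model_id, dataset_id = task
    return tasks_csv.replace(
        f"{model_id},{dataset_id},PENDING",
        f"{model_id},{dataset_id},DONE",
    )

tasks = "m/base,u/concept-1,DONE\nm/base,u/concept-2,PENDING"
task = get_pending(tasks)
print(task)                              # ('m/base', 'u/concept-2')
print(get_pending(mark_done(tasks, task)))  # None — 남은 PENDING 작업이 없습니다
```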


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/guides/manage-spaces.md" />

### 서버에서 추론 진행하기[[run-inference-on-servers]]
https://huggingface.co/docs/huggingface_hub/main/ko/guides/inference.md

# 서버에서 추론 진행하기[[run-inference-on-servers]]

추론은 훈련된 모델을 사용하여 새 데이터에 대한 예측을 수행하는 과정입니다. 이 과정은 계산이 많이 필요할 수 있으므로, 전용 서버에서 실행하는 것이 좋은 방안이 될 수 있습니다. `huggingface_hub` 라이브러리는 호스팅된 모델에 대한 추론을 실행하는 서비스를 호출하는 간편한 방법을 제공합니다. 다음과 같은 여러 서비스에 연결할 수 있습니다:
- [추론 API](https://huggingface.co/docs/api-inference/index): Hugging Face의 인프라에서 가속화된 추론을 실행할 수 있는 서비스로 무료로 제공됩니다. 이 서비스는 추론을 시작하고 다양한 모델을 테스트하며 AI 제품의 프로토타입을 만드는 빠른 방법입니다.
- [추론 엔드포인트](https://huggingface.co/docs/inference-endpoints/index): 모델을 제품 환경에 쉽게 배포할 수 있는 제품입니다. 사용자가 선택한 클라우드 환경에서 완전 관리되는 전용 인프라에서 Hugging Face를 통해 추론이 실행됩니다.


> [!TIP]
> [InferenceClient](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient)는 API에 HTTP 호출을 수행하는 Python 클라이언트입니다. HTTP 호출을 원하는 툴을 이용하여 직접 사용하려면 (curl, postman 등) [추론 API](https://huggingface.co/docs/api-inference/index) 또는 [추론 엔드포인트](https://huggingface.co/docs/inference-endpoints/index) 문서 페이지를 참조하세요.
>
> 웹 개발을 위해 [JS 클라이언트](https://huggingface.co/docs/huggingface.js/inference/README)가 출시되었습니다. 게임 개발에 관심이 있다면 [C# 프로젝트](https://github.com/huggingface/unity-api)를 살펴보세요.

## 시작하기[[getting-started]]

text-to-image 작업을 시작해보겠습니다.

```python
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()

>>> image = client.text_to_image("An astronaut riding a horse on the moon.")
>>> image.save("astronaut.png")
```

우리는 기본 매개변수로 [InferenceClient](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient)를 초기화했습니다. 수행하고자 하는 [작업](#supported-tasks)만 알면 됩니다. 기본적으로 클라이언트는 추론 API에 연결하고 작업을 완료할 모델을 선택합니다. 예제에서는 텍스트 프롬프트에서 이미지를 생성했습니다. 반환된 값은 파일로 저장할 수 있는 `PIL.Image` 객체입니다.

> [!WARNING]
> API는 간단하게 설계되었습니다. 모든 매개변수와 옵션이 사용 가능하거나 설명되어 있는 것은 아닙니다. 각 작업에서 사용 가능한 모든 매개변수에 대해 자세히 알아보려면 [이 페이지](https://huggingface.co/docs/api-inference/detailed_parameters)를 확인하세요.

### 특정 모델 사용하기[[using-a-specific-model]]

특정 모델을 사용하고 싶다면 어떻게 해야 할까요? 모델은 메소드의 매개변수로 지정하거나 인스턴스 수준에서 지정할 수 있습니다:

```python
>>> from huggingface_hub import InferenceClient
# 특정 모델을 위한 클라이언트를 초기화합니다.
>>> client = InferenceClient(model="prompthero/openjourney-v4")
>>> client.text_to_image(...)
# 또는 일반적인 클라이언트를 사용하되 모델을 인수로 전달하세요.
>>> client = InferenceClient()
>>> client.text_to_image(..., model="prompthero/openjourney-v4")
```

> [!TIP]
> Hugging Face Hub에는 20만 개가 넘는 모델이 있습니다! [InferenceClient](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient)의 각 작업에는 추천되는 모델이 포함되어 있습니다. HF의 추천은 사전 고지 없이 시간이 지남에 따라 변경될 수 있음을 유의하십시오. 따라서 모델을 결정한 후에는 명시적으로 모델을 설정하는 것이 좋습니다. 또한 대부분의 경우 자신의 필요에 맞는 모델을 직접 찾고자 할 것입니다. 허브의 [모델](https://huggingface.co/models) 페이지를 방문하여 찾아보세요.

### 특정 URL 사용하기[[using-a-specific-url]]

위에서 본 예제들은 서버리스 추론 API를 사용합니다. 이는 빠르게 프로토타입을 만들고 테스트할 때 매우 유용합니다. 모델을 프로덕션 환경에 배포할 준비가 되면 전용 인프라를 사용해야 합니다. 그것이 [추론 엔드포인트](https://huggingface.co/docs/inference-endpoints/index)가 필요한 이유입니다. 이를 사용하면 모든 모델을 배포하고 비공개 API로 노출할 수 있습니다. 일단 배포되면 이전과 완전히 동일한 코드를 사용하여 연결할 수 있는 URL을 얻게 됩니다. `model` 매개변수만 변경하면 됩니다:

```python
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(model="https://uu149rez6gw9ehej.eu-west-1.aws.endpoints.huggingface.cloud/deepfloyd-if")
# 또는
>>> client = InferenceClient()
>>> client.text_to_image(..., model="https://uu149rez6gw9ehej.eu-west-1.aws.endpoints.huggingface.cloud/deepfloyd-if")
```

### 인증[[authentication]]

[InferenceClient](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient)로 수행된 호출은 [사용자 액세스 토큰](https://huggingface.co/docs/hub/security-tokens)을 사용하여 인증할 수 있습니다. 기본적으로 로그인한 경우 기기에 저장된 토큰을 사용합니다 ([인증 방법](https://huggingface.co/docs/huggingface_hub/quick-start#authentication)을 확인하세요). 로그인하지 않은 경우 인스턴스 매개변수로 토큰을 전달할 수 있습니다.

```python
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(token="hf_***")
```

> [!TIP]
> 추론 API를 사용할 때 인증은 필수가 아닙니다. 그러나 인증된 사용자는 서비스를 이용할 수 있는 더 높은 무료 티어를 받습니다. 비공개 모델이나 비공개 엔드포인트에서 추론을 실행하려면 토큰이 필수입니다.

## 지원되는 작업[[supported-tasks]]

[InferenceClient](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient)의 목표는 Hugging Face 모델에서 추론을 실행하기 위한 가장 쉬운 인터페이스를 제공하는 것입니다. 이는 가장 일반적인 작업들을 지원하는 간단한 API를 가지고 있습니다. 현재 지원되는 작업 목록은 다음과 같습니다:

| 도메인      | 작업                                                                              | 지원 여부 | 문서                                                |
| ----------- | --------------------------------------------------------------------------------- | --------- | --------------------------------------------------- |
| 오디오      | [오디오 분류](https://huggingface.co/tasks/audio-classification)                  | ✅         | [audio_classification()](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient.audio_classification)           |
| 오디오      | [오디오 투 오디오](https://huggingface.co/tasks/audio-to-audio)                   | ✅         | [audio_to_audio()](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient.audio_to_audio)                 |
|             | [자동 음성 인식](https://huggingface.co/tasks/automatic-speech-recognition)       | ✅         | [automatic_speech_recognition()](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient.automatic_speech_recognition)   |
|             | [텍스트 투 스피치](https://huggingface.co/tasks/text-to-speech)                   | ✅         | [text_to_speech()](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient.text_to_speech)                 |
| 컴퓨터 비전 | [이미지 분류](https://huggingface.co/tasks/image-classification)                  | ✅         | [image_classification()](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient.image_classification)           |
|             | [이미지 분할](https://huggingface.co/tasks/image-segmentation)                    | ✅         | [image_segmentation()](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient.image_segmentation)             |
|             | [이미지 투 이미지](https://huggingface.co/tasks/image-to-image)                   | ✅         | [image_to_image()](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient.image_to_image)                 |
|             | [이미지 투 텍스트](https://huggingface.co/tasks/image-to-text)                    | ✅         | [image_to_text()](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient.image_to_text)                  |
|             | [객체 탐지](https://huggingface.co/tasks/object-detection)                        | ✅         | [object_detection()](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient.object_detection)               |
|             | [텍스트 투 이미지](https://huggingface.co/tasks/text-to-image)                    | ✅         | [text_to_image()](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient.text_to_image)                  |
|             | [제로샷 이미지 분류](https://huggingface.co/tasks/zero-shot-image-classification) | ✅         | [zero_shot_image_classification()](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient.zero_shot_image_classification) |
| 멀티모달    | [문서 질의 응답](https://huggingface.co/tasks/document-question-answering)        | ✅         | [document_question_answering()](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient.document_question_answering)    |
|             | [시각적 질의 응답](https://huggingface.co/tasks/visual-question-answering)        | ✅         | [visual_question_answering()](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient.visual_question_answering)      |
| 자연어 처리 | [대화형](https://huggingface.co/tasks/conversational)                             | ✅         | [conversational()](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient.conversational)                 |
|             | [특성 추출](https://huggingface.co/tasks/feature-extraction)                      | ✅         | [feature_extraction()](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient.feature_extraction)             |
|             | [마스크 채우기](https://huggingface.co/tasks/fill-mask)                           | ✅         | [fill_mask()](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient.fill_mask)                      |
|             | [질의 응답](https://huggingface.co/tasks/question-answering)                      | ✅         | [question_answering()](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient.question_answering)             |
|             | [문장 유사도](https://huggingface.co/tasks/sentence-similarity)                   | ✅         | [sentence_similarity()](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient.sentence_similarity)            |
|             | [요약](https://huggingface.co/tasks/summarization)                                | ✅         | [summarization()](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient.summarization)                  |
|             | [테이블 질의 응답](https://huggingface.co/tasks/table-question-answering)         | ✅         | [table_question_answering()](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient.table_question_answering)       |
|             | [텍스트 분류](https://huggingface.co/tasks/text-classification)                   | ✅         | [text_classification()](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient.text_classification)            |
|             | [텍스트 생성](https://huggingface.co/tasks/text-generation)                       | ✅         | [text_generation()](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient.text_generation)                |
|             | [토큰 분류](https://huggingface.co/tasks/token-classification)                    | ✅         | [token_classification()](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient.token_classification)           |
|             | [번역](https://huggingface.co/tasks/translation)                                  | ✅         | [translation()](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient.translation)                    |
|             | [제로샷 분류](https://huggingface.co/tasks/zero-shot-classification)              | ✅         | [zero_shot_classification()](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient.zero_shot_classification)       |
| 표 형식     | [표 형식 데이터 분류](https://huggingface.co/tasks/tabular-classification)        | ✅         | [tabular_classification()](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient.tabular_classification)         |
|             | [표 형식 데이터 회귀](https://huggingface.co/tasks/tabular-regression)            | ✅         | [tabular_regression()](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient.tabular_regression)             |

> [!TIP]
> 각 작업에 대해 더 자세히 알고 싶거나 사용 방법 및 각 작업에 대한 가장 인기 있는 모델을 알아보려면 [Tasks](https://huggingface.co/tasks) 페이지를 확인하세요.



## 비동기 클라이언트[[async-client]]

`asyncio`와 `aiohttp`를 기반으로 한 클라이언트의 비동기 버전도 제공됩니다. `aiohttp`를 직접 설치하거나 `[inference]` 추가 옵션을 사용할 수 있습니다:

```sh
pip install --upgrade huggingface_hub[inference]
# 또는
# pip install aiohttp
```

설치 후 모든 비동기 API 엔드포인트는 [AsyncInferenceClient](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.AsyncInferenceClient)를 통해 사용할 수 있습니다. 초기화 및 API는 동기 전용 버전과 완전히 동일합니다.

```py
# 코드는 비동기 asyncio 라이브러리 동시성 컨텍스트에서 실행되어야 합니다.
# $ python -m asyncio
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()

>>> image = await client.text_to_image("An astronaut riding a horse on the moon.")
>>> image.save("astronaut.png")

>>> async for token in await client.text_generation("The Huggingface Hub is", stream=True):
...     print(token, end="")
 a platform for sharing and discussing ML-related content.
```

`asyncio` 모듈에 대한 자세한 정보는 [공식 문서](https://docs.python.org/3/library/asyncio.html)를 참조하세요.
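일반 Python 스크립트에서는 `python -m asyncio` 셸 대신 `asyncio.run()`으로 직접 이벤트 루프를 실행할 수 있습니다. 다음은 이를 보여주는 최소한의 스케치이며, `generate_image`는 설명을 위한 가상의 함수이고 실제 추론 호출에는 네트워크 연결이 필요합니다:

```python
import asyncio

from huggingface_hub import AsyncInferenceClient

async def generate_image(prompt: str, output_path: str = "astronaut.png") -> None:
    # 동기 버전과 동일한 API지만 각 호출 앞에 await를 붙입니다.
    client = AsyncInferenceClient()
    image = await client.text_to_image(prompt)
    image.save(output_path)

# 스크립트에서는 asyncio.run()으로 코루틴을 실행합니다 (네트워크 연결 필요):
# asyncio.run(generate_image("An astronaut riding a horse on the moon."))
```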

## 고급 팁[[advanced-tips]]

위 섹션에서는 [InferenceClient](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient)의 주요 측면을 살펴보았습니다. 이제 몇 가지 고급 팁에 대해 자세히 알아보겠습니다.

### 타임아웃[[timeout]]

추론을 수행할 때 타임아웃이 발생하는 주요 원인은 두 가지입니다:
- 추론 프로세스가 완료되는 데 오랜 시간이 걸리는 경우
- 모델이 사용 불가능한 경우 (예: 추론 API에 모델이 처음으로 로드되는 경우)

[InferenceClient](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient)에는 이 두 가지를 처리하기 위한 전역 `timeout` 매개변수가 있습니다. 기본값은 `None`으로 설정되어 있으며, 클라이언트가 추론이 완료될 때까지 무기한으로 기다리게 합니다. 워크플로우에서 더 많은 제어를 원하는 경우 초 단위의 특정한 값으로 설정할 수 있습니다. 타임아웃 딜레이가 만료되면 [InferenceTimeoutError](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceTimeoutError)가 발생합니다. 이를 코드에서 처리할 수 있습니다:

```python
>>> from huggingface_hub import InferenceClient, InferenceTimeoutError
>>> client = InferenceClient(timeout=30)
>>> try:
...     client.text_to_image(...)
... except InferenceTimeoutError:
...     print("Inference timed out after 30s.")
```

### 이진 입력[[binary-inputs]]

일부 작업에는 이미지 또는 오디오 파일을 처리할 때와 같이 이진 입력이 필요한 경우가 있습니다. 이 경우 [InferenceClient](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient)는 최대한 다양한 유형을 융통성 있게 허용합니다:
- 원시 `bytes`
- 이진으로 열린 파일과 유사한 객체 (`with open("audio.flac", "rb") as f: ...`)
- 로컬 파일을 가리키는 경로 (`str` 또는 `Path`)
- 원격 파일을 가리키는 URL (`str`) (예: `https://...`). 이 경우 파일은 Inference API로 전송되기 전에 로컬로 다운로드됩니다.

```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.image_classification("https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg")
[{'score': 0.9779096841812134, 'label': 'Blenheim spaniel'}, ...]
```
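위에 나열된 이진 입력 유형은 모두 같은 인수 자리에 그대로 전달할 수 있습니다. 아래는 이를 보여주는 스케치로, `classify`는 설명을 위한 가상의 헬퍼이고 `dog.jpg`는 가상의 로컬 파일이며 각 호출에는 네트워크 연결이 필요합니다:

```python
from pathlib import Path

from huggingface_hub import InferenceClient

def classify(image) -> list:
    """경로(str/Path), 원시 bytes, 파일 객체, URL을 모두 같은 방식으로 전달할 수 있습니다."""
    client = InferenceClient()
    return client.image_classification(image)

# 사용 예 (네트워크 연결 필요):
# classify("dog.jpg")                      # 로컬 경로 (str)
# classify(Path("dog.jpg"))                # 로컬 경로 (Path)
# classify(Path("dog.jpg").read_bytes())   # 원시 bytes
# with open("dog.jpg", "rb") as f:
#     classify(f)                          # 이진 모드로 열린 파일 객체
```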


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/guides/inference.md" />

### How-to 가이드 [[howto-guides]]
https://huggingface.co/docs/huggingface_hub/main/ko/guides/overview.md

# How-to 가이드 [[howto-guides]]

특정 목표를 달성하는 데 도움이 되는 실용적인 가이드들입니다. huggingface_hub로 실제 문제를 해결하는 방법을 배우려면 다음 문서들을 살펴보세요.

<div class="mt-10">
  <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-3 md:gap-y-4 md:gap-x-5">

    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg"
       href="./repository">
      <div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">
        리포지토리
      </div><p class="text-gray-700">
        Hub에서 리포지토리를 만드는 방법은 무엇인가요? 구성하는 방법은요? 리포지토리와 상호 작용하려면 어떻게 해야하나요?
      </p>
    </a>

    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg"
       href="./download">
      <div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">
        파일 다운로드
      </div><p class="text-gray-700">
        Hub에서 파일을 다운로드하려면 어떻게 하나요? 리포지토리는요?
      </p>
    </a>

    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg"
       href="./upload">
      <div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">
        파일 업로드
      </div><p class="text-gray-700">
        파일이나 폴더를 어떻게 업로드하나요? Hub의 기존 리포지토리를 변경하려면 어떻게 해야 하나요?
      </p>
    </a>

    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg"
       href="./search">
      <div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">
        검색
      </div><p class="text-gray-700">
        20만 개가 넘게 공개된 모델, 데이터 세트 및 Space를 효율적으로 검색하는 방법은 무엇인가요?
      </p>
    </a>

    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg"
       href="./hf_file_system">
      <div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">
        HfFileSystem
      </div><p class="text-gray-700">
        Python의 파일 인터페이스를 모방한 편리한 인터페이스를 통해 Hub와 상호 작용하는 방법은 무엇인가요?
      </p>
    </a>

    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg"
       href="./inference">
      <div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">
        Inference
      </div><p class="text-gray-700">
        가속화된 Inference API로 추론하려면 어떻게 하나요?
      </p>
    </a>

    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg"
       href="./community">
      <div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">
        커뮤니티 탭
      </div><p class="text-gray-700">
        커뮤니티 탭에서 PR과 댓글을 통해 어떻게 소통할 수 있나요?
      </p>
    </a>

    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg"
       href="./manage-cache">
      <div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">
        캐시
      </div><p class="text-gray-700">
        캐시 시스템은 어떻게 작동하나요? 이점은 무엇인가요?
      </p>
    </a>

    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg"
       href="./model-cards">
      <div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">
        모델 카드
      </div><p class="text-gray-700">
        모델 카드는 어떻게 만들고 공유하나요?
      </p>
    </a>

    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg"
       href="./manage-spaces">
      <div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">
        Space 관리
      </div><p class="text-gray-700">
        Space 하드웨어와 구성은 어떻게 관리하나요?
      </p>
    </a>

    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg"
       href="./integrations">
      <div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">
        라이브러리 통합
      </div><p class="text-gray-700">
        라이브러리를 Hub와 통합한다는 것은 무엇을 의미하나요? 그리고 어떻게 할 수 있을까요?
      </p>
    </a>

    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg"
       href="./webhooks_server">
      <div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">
        웹훅 서버
      </div><p class="text-gray-700">
        웹훅을 수신할 서버를 만들고 Space로 배포하는 방법은 무엇인가요?
      </p>
    </a>

  </div>
</div>


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/guides/overview.md" />

### Discussions 및 Pull Requests를 이용하여 상호작용하기[[interact-with-discussions-and-pull-requests]]
https://huggingface.co/docs/huggingface_hub/main/ko/guides/community.md

# Discussions 및 Pull Requests를 이용하여 상호작용하기[[interact-with-discussions-and-pull-requests]]

`huggingface_hub` 라이브러리는 Hub의 Pull Requests 및 Discussions와 상호작용할 수 있는 Python 인터페이스를 제공합니다.
[전용 문서 페이지](https://huggingface.co/docs/hub/repositories-pull-requests-discussions)를 방문하여 Hub의 Discussions와 Pull Requests가 무엇이고 어떻게 작동하는지 자세히 살펴보세요.

## Hub에서 Discussions 및 Pull Requests 가져오기[[retrieve-discussions-and-pull-requests-from-the-hub]]

`HfApi` 클래스를 사용하면 지정된 리포지토리에 대한 Discussions 및 Pull Requests를 검색할 수 있습니다:

```python
>>> from huggingface_hub import get_repo_discussions
>>> for discussion in get_repo_discussions(repo_id="bigscience/bloom"):
...     print(f"{discussion.num} - {discussion.title}, pr: {discussion.is_pull_request}")

# 11 - Add Flax weights, pr: True
# 10 - Update README.md, pr: True
# 9 - Training languages in the model card, pr: True
# 8 - Update tokenizer_config.json, pr: True
# 7 - Slurm training script, pr: False
[...]
```

`HfApi.get_repo_discussions`는 작성자, 유형(Pull Request 또는 Discussion) 및 상태(`open` 또는 `closed`)별 필터링을 지원합니다:

```python
>>> from huggingface_hub import get_repo_discussions
>>> for discussion in get_repo_discussions(
...    repo_id="bigscience/bloom",
...    author="ArthurZ",
...    discussion_type="pull_request",
...    discussion_status="open",
... ):
...     print(f"{discussion.num} - {discussion.title} by {discussion.author}, pr: {discussion.is_pull_request}")

# 19 - Add Flax weights by ArthurZ, pr: True
```

`HfApi.get_repo_discussions`는 [Discussion](/docs/huggingface_hub/main/ko/package_reference/community#huggingface_hub.Discussion) 객체를 생성하는 [제너레이터](https://docs.python.org/3/howto/functional.html#generators)를 반환합니다. 모든 Discussions를 하나의 리스트로 가져오려면 다음을 실행합니다:

```python
>>> from huggingface_hub import get_repo_discussions
>>> discussions_list = list(get_repo_discussions(repo_id="bert-base-uncased"))
```

[HfApi.get_repo_discussions()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.get_repo_discussions)가 반환하는 [Discussion](/docs/huggingface_hub/main/ko/package_reference/community#huggingface_hub.Discussion) 객체에는 Discussions 또는 Pull Request에 대한 개략적인 개요가 포함되어 있습니다. [HfApi.get_discussion_details()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.get_discussion_details)를 사용하여 더 자세한 정보를 얻을 수도 있습니다:

```python
>>> from huggingface_hub import get_discussion_details

>>> get_discussion_details(
...     repo_id="bigscience/bloom-1b3",
...     discussion_num=2
... )
DiscussionWithDetails(
    num=2,
    author='cakiki',
    title='Update VRAM memory for the V100s',
    status='open',
    is_pull_request=True,
    events=[
        DiscussionComment(type='comment', author='cakiki', ...),
        DiscussionCommit(type='commit', author='cakiki', summary='Update VRAM memory for the V100s', oid='1256f9d9a33fa8887e1c1bf0e09b4713da96773a', ...),
    ],
    conflicting_files=[],
    target_branch='refs/heads/main',
    merge_commit_oid=None,
    diff='diff --git a/README.md b/README.md\nindex a6ae3b9294edf8d0eda0d67c7780a10241242a7e..3a1814f212bc3f0d3cc8f74bdbd316de4ae7b9e3 100644\n--- a/README.md\n+++ b/README.md\n@@ -132,7 +132,7 [...]',
)
```

[HfApi.get_discussion_details()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.get_discussion_details)는 Discussion 또는 Pull Request에 대한 자세한 정보가 포함된 [Discussion](/docs/huggingface_hub/main/ko/package_reference/community#huggingface_hub.Discussion)의 하위 클래스인 [DiscussionWithDetails](/docs/huggingface_hub/main/ko/package_reference/community#huggingface_hub.DiscussionWithDetails) 객체를 반환합니다. 해당 정보는 `DiscussionWithDetails.events`를 통해 Discussion의 모든 댓글, 상태 변경 및 이름 변경을 포함하고 있습니다.

Pull Request의 경우, `DiscussionWithDetails.diff`를 통해 원시 git diff를 검색할 수 있습니다. Pull Request의 모든 커밋은 `DiscussionWithDetails.events`에 나열됩니다.


## 프로그래밍 방식으로 Discussion 또는 Pull Request를 생성하고 수정하기[[create-and-edit-a-discussion-or-pull-request-programmatically]]

[HfApi](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi) 클래스는 Discussions 및 Pull Requests를 생성하고 수정하는 방법도 제공합니다.
Discussions와 Pull Requests를 만들고 편집하려면 [접근 토큰](https://huggingface.co/docs/hub/security-tokens)이 필요합니다.

Hub의 리포지토리에 변경 사항을 제안하는 가장 간단한 방법은 [create_commit()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_commit) API를 사용하는 것입니다. `create_pr` 매개변수를 `True`로 설정하기만 하면 됩니다. 이 매개변수는 [create_commit()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_commit)을 래핑하는 다른 함수에서도 사용할 수 있습니다:

* [upload_file()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.upload_file)
* [upload_folder()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.upload_folder)
* [delete_file()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.delete_file)
* [delete_folder()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.delete_folder)
* [metadata_update()](/docs/huggingface_hub/main/ko/package_reference/cards#huggingface_hub.metadata_update)

```python
>>> from huggingface_hub import metadata_update

>>> metadata_update(
...     repo_id="username/repo_name",
...     metadata={"tags": ["computer-vision", "awesome-model"]},
...     create_pr=True,
... )
```

리포지토리에 대한 Discussion(또는 Pull Request)을 만들려면 [HfApi.create_discussion()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_discussion) (또는 [HfApi.create_pull_request()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_pull_request))을 사용할 수도 있습니다.
이 방법으로 Pull Request를 열면 로컬에서 변경 작업을 해야 하는 경우에 유용할 수 있습니다. 이 방법으로 열린 Pull Request는 `"draft"` 모드가 됩니다.

```python
>>> from huggingface_hub import create_discussion, create_pull_request

>>> create_discussion(
...     repo_id="username/repo-name",
...     title="Hi from the huggingface_hub library!",
...     token="<insert your access token here>",
... )
DiscussionWithDetails(...)

>>> create_pull_request(
...     repo_id="username/repo-name",
...     title="Hi from the huggingface_hub library!",
...     token="<insert your access token here>",
... )
DiscussionWithDetails(..., is_pull_request=True)
```

Pull Requests 및 Discussions 관리는 전적으로 [HfApi](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi) 클래스로 할 수 있습니다. 예를 들어:

* 댓글을 추가하려면 [comment_discussion()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.comment_discussion)
* 댓글을 수정하려면 [edit_discussion_comment()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.edit_discussion_comment)
* Discussion 또는 Pull Request의 이름을 바꾸려면 [rename_discussion()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.rename_discussion)
* Discussion / Pull Request를 열거나 닫으려면 [change_discussion_status()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.change_discussion_status)
* Pull Request를 병합하려면 [merge_pull_request()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.merge_pull_request)를 사용합니다.


사용 가능한 모든 메소드에 대한 전체 참조는 [HfApi](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi) 문서 페이지를 참조하세요.
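위 메소드들을 조합하면 간단한 관리 워크플로를 만들 수 있습니다. 아래는 댓글을 남긴 뒤 Discussion을 닫는 스케치로, `thank_and_close`는 설명을 위한 가상의 함수이고 `username/repo-name`과 Discussion 번호는 가상의 값이며 실행에는 쓰기 권한 토큰이 필요합니다:

```python
from huggingface_hub import HfApi

api = HfApi()

def thank_and_close(repo_id: str, discussion_num: int) -> None:
    """댓글을 추가한 뒤 Discussion을 닫는 간단한 워크플로입니다."""
    api.comment_discussion(
        repo_id=repo_id,
        discussion_num=discussion_num,
        comment="Thanks for your contribution!",
    )
    api.change_discussion_status(
        repo_id=repo_id,
        discussion_num=discussion_num,
        new_status="closed",
    )

# 사용 예 (가상의 repo_id, 쓰기 권한 토큰 필요):
# thank_and_close("username/repo-name", 1)
```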

## Pull Request에 변경 사항 푸시[[push-changes-to-a-pull-request]]

*곧 공개됩니다!*

## 참고 항목[[see-also]]

더 자세한 내용은 [Discussions 및 Pull Requests](../package_reference/community)와 [hf_api](../package_reference/hf_api) 문서 페이지를 참조하세요.


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/guides/community.md" />

### Hugging Face Hub에서 파일 시스템 API를 통해 상호작용하기[[interact-with-the-hub-through-the-filesystem-api]]
https://huggingface.co/docs/huggingface_hub/main/ko/guides/hf_file_system.md

# Hugging Face Hub에서 파일 시스템 API를 통해 상호작용하기[[interact-with-the-hub-through-the-filesystem-api]]

`huggingface_hub` 라이브러리는 [HfApi](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi) 외에도 Hugging Face Hub에 대한 파이써닉한 [fsspec-compatible](https://filesystem-spec.readthedocs.io/en/latest/) 파일 인터페이스인 [HfFileSystem](/docs/huggingface_hub/main/ko/package_reference/hf_file_system#huggingface_hub.HfFileSystem)을 제공합니다. [HfFileSystem](/docs/huggingface_hub/main/ko/package_reference/hf_file_system#huggingface_hub.HfFileSystem)은 [HfApi](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi)을 기반으로 구축되며, `cp`, `mv`, `ls`, `du`, `glob`, `get_file` 및 `put_file`과 같은 일반적인 파일 시스템 스타일 작업을 제공합니다.

## 사용법[[usage]]

```python
>>> from huggingface_hub import HfFileSystem
>>> fs = HfFileSystem()

>>> # 디렉터리의 모든 파일 나열하기
>>> fs.ls("datasets/my-username/my-dataset-repo/data", detail=False)
['datasets/my-username/my-dataset-repo/data/train.csv', 'datasets/my-username/my-dataset-repo/data/test.csv']

>>> # 저장소(repo)에서 ".csv" 파일 모두 나열하기
>>> fs.glob("datasets/my-username/my-dataset-repo/**.csv")
['datasets/my-username/my-dataset-repo/data/train.csv', 'datasets/my-username/my-dataset-repo/data/test.csv']

>>> # 원격 파일 읽기
>>> with fs.open("datasets/my-username/my-dataset-repo/data/train.csv", "r") as f:
...     train_data = f.readlines()

>>> # 문자열로 원격 파일의 내용 읽기
>>> train_data = fs.read_text("datasets/my-username/my-dataset-repo/data/train.csv", revision="dev")

>>> # 원격 파일 쓰기
>>> with fs.open("datasets/my-username/my-dataset-repo/data/validation.csv", "w") as f:
...     f.write("text,label\n")
...     f.write("Fantastic movie!,good\n")
```

선택적 `revision` 인수를 전달하여 브랜치, 태그 이름, 커밋 해시 등 특정 리비전을 대상으로 작업을 실행할 수 있습니다.

파이썬에 내장된 `open`과 달리 `fsspec`의 `open`은 바이너리 모드 `"rb"`로 기본 설정됩니다. 이것은 텍스트 모드에서 읽기 위해 `"r"`, 쓰기 위해 `"w"`로 모드를 명시적으로 설정해야 함을 의미합니다. 파일에 추가하기(모드 `"a"` 및 `"ab"`)는 아직 지원되지 않습니다.
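기본 모드와 명시적 텍스트 모드의 차이는 다음과 같이 정리할 수 있습니다. `read_raw_and_text`는 설명을 위한 가상의 헬퍼이고 경로는 가상의 값이며, 실제 읽기에는 네트워크 연결이 필요합니다:

```python
from huggingface_hub import HfFileSystem

fs = HfFileSystem()

PATH = "datasets/my-username/my-dataset-repo/data/train.csv"  # 가상의 경로

def read_raw_and_text(path: str):
    """기본(바이너리) 모드와 명시적 텍스트 모드의 차이를 보여줍니다."""
    with fs.open(path) as f:       # 모드를 생략하면 "rb"와 동일합니다.
        raw = f.read()             # bytes 반환
    with fs.open(path, "r") as f:  # 텍스트로 읽으려면 "r"을 명시합니다.
        text = f.read()            # str 반환
    return raw, text

# 사용 예 (네트워크 연결 필요):
# raw, text = read_raw_and_text(PATH)
```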

## 통합[[integrations]]

[HfFileSystem](/docs/huggingface_hub/main/ko/package_reference/hf_file_system#huggingface_hub.HfFileSystem)은 URL이 다음 구문을 따르는 경우 `fsspec`을 통합하는 모든 라이브러리에서 사용할 수 있습니다.

```
hf://[<repo_type_prefix>]<repo_id>[@<revision>]/<path/in/repo>
```

여기서 `repo_type_prefix`는 Datasets의 경우 `datasets/`, Spaces의 경우 `spaces/`이며, 모델에는 URL에 접두사가 필요하지 않습니다.
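위 구문을 각 리포지토리 유형에 적용하면 다음과 같은 경로가 됩니다(`my-username/...`는 가상의 repo_id입니다):

```python
# 모델: 접두사 없음
model_path = "hf://my-username/my-model-repo/config.json"

# 데이터 세트: "datasets/" 접두사
dataset_path = "hf://datasets/my-username/my-dataset-repo/data/train.csv"

# Space: "spaces/" 접두사
space_path = "hf://spaces/my-username/my-space-repo/app.py"

# 특정 리비전: repo_id 뒤에 "@<revision>"을 붙입니다.
dataset_dev_path = "hf://datasets/my-username/my-dataset-repo@dev/data/train.csv"
```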

[HfFileSystem](/docs/huggingface_hub/main/ko/package_reference/hf_file_system#huggingface_hub.HfFileSystem)이 Hub와의 상호작용을 단순화하는 몇 가지 흥미로운 통합 사례는 다음과 같습니다:

* Hub 저장소에서 [Pandas](https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#reading-writing-remote-files) DataFrame 읽기/쓰기:

  ```python
  >>> import pandas as pd

  >>> # 원격 CSV 파일을 데이터프레임으로 읽기
  >>> df = pd.read_csv("hf://datasets/my-username/my-dataset-repo/train.csv")

  >>> # 데이터프레임을 원격 CSV 파일로 쓰기
  >>> df.to_csv("hf://datasets/my-username/my-dataset-repo/test.csv")
  ```

동일한 워크플로우를 [Dask](https://docs.dask.org/en/stable/how-to/connect-to-remote-data.html) 및 [Polars](https://pola-rs.github.io/polars/py-polars/html/reference/io.html) DataFrame에도 사용할 수 있습니다.

* [DuckDB](https://duckdb.org/docs/guides/python/filesystems)를 사용하여 (원격) Hub 파일 쿼리:

  ```python
  >>> from huggingface_hub import HfFileSystem
  >>> import duckdb

  >>> fs = HfFileSystem()
  >>> duckdb.register_filesystem(fs)
  >>> # 원격 파일을 쿼리하고 결과를 데이터프레임으로 가져오기
  >>> fs_query_file = "hf://datasets/my-username/my-dataset-repo/data_dir/data.parquet"
  >>> df = duckdb.query(f"SELECT * FROM '{fs_query_file}' LIMIT 10").df()
  ```

* [Zarr](https://zarr.readthedocs.io/en/stable/tutorial.html#io-with-fsspec)를 사용하여 Hub를 배열 저장소로 사용:

  ```python
  >>> import numpy as np
  >>> import zarr

  >>> embeddings = np.random.randn(50000, 1000).astype("float32")

  >>> # 저장소(repo)에 배열 쓰기
  >>> with zarr.open_group("hf://my-username/my-model-repo/array-store", mode="w") as root:
  ...    foo = root.create_group("embeddings")
  ...    foobar = foo.zeros('experiment_0', shape=(50000, 1000), chunks=(10000, 1000), dtype='f4')
  ...    foobar[:] = embeddings

  >>> # 저장소(repo)에서 배열 읽기
  >>> with zarr.open_group("hf://my-username/my-model-repo/array-store", mode="r") as root:
  ...    first_row = root["embeddings/experiment_0"][0]
  ```

## 인증[[authentication]]

대부분의 경우 Hub와 상호작용하려면 Hugging Face 계정에 로그인해야 합니다. Hub에서 인증 방법에 대해 자세히 알아보려면 문서의 [인증](../quick-start#authentication) 섹션을 참조하세요.

또한 [HfFileSystem](/docs/huggingface_hub/main/ko/package_reference/hf_file_system#huggingface_hub.HfFileSystem)에 `token`을 인수로 전달하여 프로그래밍 방식으로 로그인할 수 있습니다:

```python
>>> from huggingface_hub import HfFileSystem
>>> fs = HfFileSystem(token=token)
```

이렇게 로그인하는 경우 소스 코드를 공유할 때 토큰이 실수로 누출되지 않도록 주의해야 합니다!


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/guides/hf_file_system.md" />

### 추론 엔드포인트[[inference-endpoints]]
https://huggingface.co/docs/huggingface_hub/main/ko/guides/inference_endpoints.md

# 추론 엔드포인트[[inference-endpoints]]

추론 엔드포인트는 Hugging Face가 관리하는 전용 및 자동 확장 인프라에 `transformers`, `sentence-transformers` 및 `diffusers` 모델을 쉽게 배포할 수 있는 안전한 프로덕션 솔루션을 제공합니다. 추론 엔드포인트는 [Hub](https://huggingface.co/models)의 모델로 구축됩니다.
이 가이드에서는 `huggingface_hub`를 사용하여 프로그래밍 방식으로 추론 엔드포인트를 관리하는 방법을 배웁니다. 추론 엔드포인트 제품 자체에 대한 자세한 내용은 [공식 문서](https://huggingface.co/docs/inference-endpoints/index)를 참조하세요.

이 가이드에서는 `huggingface_hub`가 올바르게 설치 및 로그인되어 있다고 가정합니다. 아직 그렇지 않은 경우 [빠른 시작 가이드](https://huggingface.co/docs/huggingface_hub/quick-start#quickstart)를 참조하세요. 추론 엔드포인트 API를 지원하는 최소 버전은 `v0.19.0`입니다.

## 추론 엔드포인트 생성[[create-an-inference-endpoint]]

첫 번째 단계는 [create_inference_endpoint()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_inference_endpoint)를 사용하여 추론 엔드포인트를 생성하는 것입니다:

```py
>>> from huggingface_hub import create_inference_endpoint

>>> endpoint = create_inference_endpoint(
...     "my-endpoint-name",
...     repository="gpt2",
...     framework="pytorch",
...     task="text-generation",
...     accelerator="cpu",
...     vendor="aws",
...     region="us-east-1",
...     type="protected",
...     instance_size="x2",
...     instance_type="intel-icl"
... )
```

예시에서는 `"my-endpoint-name"`라는 `protected` 추론 엔드포인트를 생성하여 `text-generation`을 위한 [gpt2](https://huggingface.co/gpt2)를 제공합니다. `protected` 추론 엔드포인트 API에 액세스하려면 토큰이 필요합니다. 또한 벤더, 지역, 액셀러레이터, 인스턴스 유형, 크기와 같은 하드웨어 요구 사항을 구성하기 위한 추가 정보를 제공해야 합니다. 사용 가능한 리소스 목록은 [여기](https://api.endpoints.huggingface.cloud/#/v2%3A%3Aprovider/list_vendors)에서 확인할 수 있습니다. 또한 [웹 인터페이스](https://ui.endpoints.huggingface.co/new)를 사용하여 편리하게 수동으로 추론 엔드포인트를 생성할 수 있습니다. 고급 설정 및 사용법에 대한 자세한 내용은 [이 가이드](https://huggingface.co/docs/inference-endpoints/guides/advanced)를 참조하세요.

[create_inference_endpoint()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_inference_endpoint)에서 반환된 값은 [InferenceEndpoint](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint) 개체입니다:

```py
>>> endpoint
InferenceEndpoint(name='my-endpoint-name', namespace='Wauplin', repository='gpt2', status='pending', url=None)
```

이것은 엔드포인트에 대한 정보를 저장하는 데이터클래스입니다. `name`, `repository`, `status`, `task`, `created_at`, `updated_at` 등과 같은 중요한 속성에 접근할 수 있습니다. 필요한 경우 `endpoint.raw`를 통해 서버로부터의 원시 응답에도 접근할 수 있습니다.

추론 엔드포인트가 생성되면 [개인 대시보드](https://ui.endpoints.huggingface.co/)에서 확인할 수 있습니다.

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/huggingface_hub/inference_endpoints_created.png)

#### 사용자 정의 이미지 사용[[using-a-custom-image]]

기본적으로 추론 엔드포인트는 Hugging Face에서 제공하는 도커 이미지로 구축됩니다. 그러나 `custom_image` 매개변수를 사용하여 모든 도커 이미지를 지정할 수 있습니다. 일반적인 사용 사례는 [text-generation-inference](https://github.com/huggingface/text-generation-inference) 프레임워크를 사용하여 LLM을 실행하는 것입니다. 다음과 같이 수행할 수 있습니다:

```python
# TGI에서 Zephyr-7b-beta를 실행하는 추론 엔드포인트 시작하기
>>> from huggingface_hub import create_inference_endpoint
>>> endpoint = create_inference_endpoint(
...     "aws-zephyr-7b-beta-0486",
...     repository="HuggingFaceH4/zephyr-7b-beta",
...     framework="pytorch",
...     task="text-generation",
...     accelerator="gpu",
...     vendor="aws",
...     region="us-east-1",
...     type="protected",
...     instance_size="x1",
...     instance_type="nvidia-a10g",
...     custom_image={
...         "health_route": "/health",
...         "env": {
...             "MAX_BATCH_PREFILL_TOKENS": "2048",
...             "MAX_INPUT_LENGTH": "1024",
...             "MAX_TOTAL_TOKENS": "1512",
...             "MODEL_ID": "/repository"
...         },
...         "url": "ghcr.io/huggingface/text-generation-inference:1.1.0",
...     },
... )
```

`custom_image`에 전달할 값은 도커 컨테이너의 URL과 이를 실행하기 위한 구성이 포함된 딕셔너리입니다. 자세한 내용은 [Swagger 문서](https://api.endpoints.huggingface.cloud/#/v2%3A%3Aendpoint/create_endpoint)를 참조하세요.

### 기존 추론 엔드포인트 가져오기 또는 리스트 조회[[get-or-list-existing-inference-endpoints]]

경우에 따라 이전에 생성한 추론 엔드포인트를 관리해야 할 수 있습니다. 이름을 알고 있는 경우 [get_inference_endpoint()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.get_inference_endpoint)를 사용하여 [InferenceEndpoint](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint) 개체를 가져올 수 있습니다. 또는 [list_inference_endpoints()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.list_inference_endpoints)를 사용하여 모든 추론 엔드포인트 리스트를 검색할 수 있습니다. 두 메소드 모두 선택적 `namespace` 매개변수를 허용합니다. `namespace`를 속해 있는 조직의 이름으로 설정할 수 있으며, 지정하지 않으면 기본적으로 사용자 이름이 사용됩니다.

```py
>>> from huggingface_hub import get_inference_endpoint, list_inference_endpoints

# 엔드포인트 개체 가져오기
>>> get_inference_endpoint("my-endpoint-name")
InferenceEndpoint(name='my-endpoint-name', namespace='Wauplin', repository='gpt2', status='pending', url=None)

# 조직의 모든 추론 엔드포인트 나열
>>> list_inference_endpoints(namespace="huggingface")
[InferenceEndpoint(name='aws-starchat-beta', namespace='huggingface', repository='HuggingFaceH4/starchat-beta', status='paused', url=None), ...]

# 사용자가 속해있는 모든 조직의 엔드포인트 나열
>>> list_inference_endpoints(namespace="*")
[InferenceEndpoint(name='aws-starchat-beta', namespace='huggingface', repository='HuggingFaceH4/starchat-beta', status='paused', url=None), ...]
```

## 배포 상태 확인[[check-deployment-status]]

이 가이드의 나머지 부분에서는 `endpoint`라는 이름의 [InferenceEndpoint](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint) 객체를 가지고 있다고 가정합니다. 앞서 엔드포인트의 `status` 속성이 [InferenceEndpointStatus](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpointStatus) 유형인 것을 확인했습니다. 추론 엔드포인트가 배포되고 접근 가능하면 상태가 `"running"`이 되고 `url` 속성이 설정됩니다:

```py
>>> endpoint
InferenceEndpoint(name='my-endpoint-name', namespace='Wauplin', repository='gpt2', status='running', url='https://jpj7k2q4j805b727.us-east-1.aws.endpoints.huggingface.cloud')
```

추론 엔드포인트가 `"running"` 상태에 도달하기 전에 일반적으로 `"initializing"` 또는 `"pending"` 단계를 거칩니다. [fetch()](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.fetch)를 실행하여 엔드포인트의 최신 상태를 가져올 수 있습니다. [InferenceEndpoint](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint)의 다른 메소드와 마찬가지로 이 메소드는 서버에 요청을 보내며, `endpoint`의 내부 속성이 변경됩니다:

```py
>>> endpoint.fetch()
InferenceEndpoint(name='my-endpoint-name', namespace='Wauplin', repository='gpt2', status='pending', url=None)
```

추론 엔드포인트가 실행될 때까지 기다리면서 상태를 가져오는 대신 [wait()](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.wait)를 직접 호출할 수 있습니다. 이 헬퍼는 `timeout`과 `fetch_every` 매개변수를 입력으로 받아 (초 단위) 추론 엔드포인트가 배포될 때까지 스레드를 차단합니다. 기본값은 각각 `None`(제한 시간 없음)과 `5`초입니다.

```py
# 엔드포인트 보류
>>> endpoint
InferenceEndpoint(name='my-endpoint-name', namespace='Wauplin', repository='gpt2', status='pending', url=None)

# 10초 대기 => InferenceEndpointTimeoutError 발생
>>> endpoint.wait(timeout=10)
    raise InferenceEndpointTimeoutError("Timeout while waiting for Inference Endpoint to be deployed.")
huggingface_hub._inference_endpoints.InferenceEndpointTimeoutError: Timeout while waiting for Inference Endpoint to be deployed.

# 추가 대기
>>> endpoint.wait()
InferenceEndpoint(name='my-endpoint-name', namespace='Wauplin', repository='gpt2', status='running', url='https://jpj7k2q4j805b727.us-east-1.aws.endpoints.huggingface.cloud')
```

`timeout`이 설정되어 있고 추론 엔드포인트를 불러오는 데 너무 오래 걸리면, `InferenceEndpointTimeoutError` 오류가 발생합니다.
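
[wait()]의 타임아웃 동작은 개념적으로 다음과 같은 폴링 루프로 나타낼 수 있습니다. 실제 `huggingface_hub` 구현이 아니라 동작 원리를 설명하기 위한 스케치이며, `fetch_status`는 현재 상태 문자열을 반환하는 가상의 함수입니다:

```python
import time


class InferenceEndpointTimeoutError(Exception):
    """제한 시간 내에 배포가 완료되지 않았을 때 발생 (실제 예외를 흉내 낸 것)"""


def wait_for_running(fetch_status, timeout=None, fetch_every=5):
    """fetch_status()가 "running"을 반환할 때까지 fetch_every초마다 폴링합니다."""
    start = time.monotonic()
    while True:
        if fetch_status() == "running":
            return "running"
        if timeout is not None and time.monotonic() - start > timeout:
            raise InferenceEndpointTimeoutError(
                "Timeout while waiting for Inference Endpoint to be deployed."
            )
        time.sleep(fetch_every)
```

예를 들어 상태가 `"pending"` → `"initializing"` → `"running"` 순으로 바뀐다면, 루프는 세 번째 폴링에서 반환됩니다.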

## 추론 실행[[run-inference]]

추론 엔드포인트가 실행되면, 마침내 추론을 실행할 수 있습니다!

[InferenceEndpoint](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint)에는 각각 [InferenceClient](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient)와 [AsyncInferenceClient](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.AsyncInferenceClient)를 반환하는 `client`와 `async_client` 속성이 있습니다.

```py
# 텍스트 생성 작업 실행:
>>> endpoint.client.text_generation("I am")
' not a fan of the idea of a "big-budget" movie. I think it\'s a'

# 비동기 컨텍스트에서도 마찬가지로 실행:
>>> await endpoint.async_client.text_generation("I am")
```

추론 엔드포인트가 실행 중이 아니면 [InferenceEndpointError](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpointError) 오류가 발생합니다:

```py
>>> endpoint.client
huggingface_hub._inference_endpoints.InferenceEndpointError: Cannot create a client for this Inference Endpoint as it is not yet deployed. Please wait for the Inference Endpoint to be deployed using `endpoint.wait()` and try again.
```

[InferenceClient](/docs/huggingface_hub/main/ko/package_reference/inference_client#huggingface_hub.InferenceClient)를 사용하는 방법에 대한 자세한 내용은 [추론 가이드](../guides/inference)를 참조하세요.

## 라이프사이클 관리[[manage-lifecycle]]

이제 추론 엔드포인트를 생성하고 추론을 실행하는 방법을 살펴보았으니, 라이프사이클을 관리하는 방법을 살펴봅시다.

> [!TIP]
> 이 섹션에서는 [pause()](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.pause), [resume()](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.resume), [scale_to_zero()](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.scale_to_zero), [update()](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.update) 및 [delete()](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.delete) 등의 메소드를 살펴볼 것입니다. 모든 메소드는 편의를 위해 [InferenceEndpoint](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint)에 추가된 별칭입니다. 원한다면 `HfApi`에 정의된 일반 메소드 [pause_inference_endpoint()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.pause_inference_endpoint), [resume_inference_endpoint()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.resume_inference_endpoint), [scale_to_zero_inference_endpoint()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.scale_to_zero_inference_endpoint), [update_inference_endpoint()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.update_inference_endpoint) 및 [delete_inference_endpoint()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.delete_inference_endpoint)를 사용할 수도 있습니다.

### 일시 중지 또는 0으로 확장[[pause-or-scale-to-zero]]

추론 엔드포인트를 사용하지 않을 때 비용을 절감하기 위해 [pause()](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.pause)를 사용하여 일시 중지하거나 [scale_to_zero()](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.scale_to_zero)를 사용하여 0으로 스케일링할 수 있습니다.

> [!TIP]
> *일시 중지* 또는 *0으로 스케일링*된 추론 엔드포인트는 비용이 들지 않습니다. 이 두 가지의 차이점은 *일시 중지* 엔드포인트는 [resume()](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.resume)를 사용하여 명시적으로 *재개*해야 한다는 것입니다. 반대로 *0으로 스케일링*된 엔드포인트는 추론 호출이 있으면 추가 콜드 스타트 지연과 함께 자동으로 시작됩니다. 추론 엔드포인트는 일정 기간 비활성화된 후 자동으로 0으로 스케일링되도록 구성할 수도 있습니다.

```py
# 엔드포인트 일시중지 및 재시작
>>> endpoint.pause()
InferenceEndpoint(name='my-endpoint-name', namespace='Wauplin', repository='gpt2', status='paused', url=None)
>>> endpoint.resume()
InferenceEndpoint(name='my-endpoint-name', namespace='Wauplin', repository='gpt2', status='pending', url=None)
>>> endpoint.wait().client.text_generation(...)
...

# 0으로 스케일링
>>> endpoint.scale_to_zero()
InferenceEndpoint(name='my-endpoint-name', namespace='Wauplin', repository='gpt2', status='scaledToZero', url='https://jpj7k2q4j805b727.us-east-1.aws.endpoints.huggingface.cloud')
# 엔드포인트는 'running' 상태는 아니지만 URL을 가지고 있으며 첫 번째 호출 시 다시 시작됩니다.
```

### 모델 또는 하드웨어 요구 사항 업데이트[[update-model-or-hardware-requirements]]

경우에 따라 새로운 엔드포인트를 생성하지 않고 추론 엔드포인트를 업데이트하고 싶을 수 있습니다. 호스팅된 모델이나 모델 실행에 필요한 하드웨어 요구 사항을 업데이트할 수 있습니다. 이렇게 하려면 [update()](/docs/huggingface_hub/main/ko/package_reference/inference_endpoints#huggingface_hub.InferenceEndpoint.update)를 사용합니다:

```py
# 타겟 모델 변경
>>> endpoint.update(repository="gpt2-large")
InferenceEndpoint(name='my-endpoint-name', namespace='Wauplin', repository='gpt2-large', status='pending', url=None)

# 복제본 갯수 업데이트
>>> endpoint.update(min_replica=2, max_replica=6)
InferenceEndpoint(name='my-endpoint-name', namespace='Wauplin', repository='gpt2-large', status='pending', url=None)

# 더 큰 인스턴스로 업데이트
>>> endpoint.update(accelerator="cpu", instance_size="x4", instance_type="intel-icl")
InferenceEndpoint(name='my-endpoint-name', namespace='Wauplin', repository='gpt2-large', status='pending', url=None)
```

### 엔드포인트 삭제[[delete-the-endpoint]]

마지막으로 더 이상 추론 엔드포인트를 사용하지 않을 경우, `~InferenceEndpoint.delete()`를 호출하기만 하면 됩니다.

> [!WARNING]
> 이것은 돌이킬 수 없는 작업이며, 구성, 로그 및 사용 메트릭을 포함한 엔드포인트를 완전히 제거합니다. 삭제된 추론 엔드포인트는 복원할 수 없습니다.

## 엔드 투 엔드 예제[[an-end-to-end-example]]

추론 엔드포인트의 일반적인 사용 사례는 한 번에 여러 개의 작업을 처리하여 인프라 비용을 제한하는 것입니다. 이 가이드에서 본 것을 사용하여 이 프로세스를 자동화할 수 있습니다:

```py
>>> import asyncio
>>> from huggingface_hub import create_inference_endpoint

# 엔드포인트 시작 + 초기화될 때까지 대기
>>> endpoint = create_inference_endpoint(name="batch-endpoint",...).wait()

# 추론 실행
>>> client = endpoint.client
>>> results = [client.text_generation(...) for job in jobs]

# 비동기 추론 실행
>>> async_client = endpoint.async_client
>>> results = await asyncio.gather(*[async_client.text_generation(...) for job in jobs])

# 엔드포인트 중지
>>> endpoint.pause()
```

또는 추론 엔드포인트가 이미 존재하고 일시 중지된 경우:

```py
>>> import asyncio
>>> from huggingface_hub import get_inference_endpoint

# 엔드포인트 가져오기 + 초기화될 때까지 대기
>>> endpoint = get_inference_endpoint("batch-endpoint").resume().wait()

# 추론 실행
>>> async_client = endpoint.async_client
>>> results = await asyncio.gather(*[async_client.text_generation(...) for job in jobs])

# 엔드포인트 중지
>>> endpoint.pause()
```


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/guides/inference_endpoints.md" />

### Hub에서 검색하기[[search-the-hub]]
https://huggingface.co/docs/huggingface_hub/main/ko/guides/search.md

# Hub에서 검색하기[[search-the-hub]]

이 튜토리얼에서는 `huggingface_hub`를 사용하여 Hub에서 모델, 데이터 세트 및 Spaces를 검색하는 방법을 배웁니다.

## 리포지토리를 어떻게 나열하나요?[[how-to-list-repositories-]]

`huggingface_hub` 라이브러리에는 Hub와 상호작용하기 위한 HTTP 클라이언트인 [HfApi](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi)가 포함되어 있습니다.
이를 통해, Hub에 저장된 모델, 데이터셋, 그리고 Spaces를 나열할 수 있습니다.

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> models = api.list_models()
```

[list_models()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.list_models)의 출력은 Hub에 저장되어 있는 모델들을 나열한 결과입니다.

마찬가지로, [list_datasets()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.list_datasets)를 사용하여 데이터 세트를 나열하고 [list_spaces()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.list_spaces)를 사용하여 Spaces를 나열할 수 있습니다.

## 리포지토리를 어떻게 필터링하나요?[[how-to-filter-repositories-]]

리포지토리를 나열하는 것도 유용하지만, 검색을 필터링하고 싶을 수도 있습니다.
리스트에는 다음과 같은 여러 속성이 있습니다.
- `filter`
- `author`
- `search`
- ...

이 매개변수 중 두 개는 직관적입니다(`author` 및 `search`). 그렇다면 `filter`는 어떤 것을 나타낼까요?
`filter`는 `ModelFilter` 객체(또는 `DatasetFilter`)를 입력으로 받습니다. 필터링하고 싶은 모델의 조건을 지정하여 인스턴스를 생성할 수 있습니다.

PyTorch로 작동되고 imagenet 데이터 세트로 훈련된, 이미지 분류를 위한 Hub의 모든 모델을 찾는 예를 들어보겠습니다. 이 과정은 단일 `ModelFilter`로 수행할 수 있습니다. 이때 필터링 속성들은 '논리적 AND'로 결합되어, 지정한 모든 조건을 만족하는 모델만 선택됩니다.

```py
from huggingface_hub import HfApi, ModelFilter

hf_api = HfApi()
models = hf_api.list_models(
    filter=ModelFilter(
        task="image-classification",
        library="pytorch",
        trained_dataset="imagenet",
    )
)
```

필터링하는 과정에서 모델을 정렬하고 상위 결과만 선택할 수도 있습니다. 다음 예제는 Hub에서 가장 많이 다운로드된 상위 5개 데이터 세트를 가져옵니다.

```py
>>> list(list_datasets(sort="downloads", direction=-1, limit=5))
[DatasetInfo(
	id='argilla/databricks-dolly-15k-curated-en',
	author='argilla',
	sha='4dcd1dedbe148307a833c931b21ca456a1fc4281',
	last_modified=datetime.datetime(2023, 10, 2, 12, 32, 53, tzinfo=datetime.timezone.utc),
	private=False,
	downloads=8889377,
	(...)
```



Hub에서 사용 가능한 필터에 대해 살펴보려면 웹브라우저에서 [모델](https://huggingface.co/models) 및 [데이터 세트](https://huggingface.co/datasets) 페이지를 방문하여 일부 매개변수를 검색한 다음, URL에서 값들을 확인해보세요.


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/guides/search.md" />

### `huggingface_hub` 캐시 시스템 관리하기[[manage-huggingfacehub-cache-system]]
https://huggingface.co/docs/huggingface_hub/main/ko/guides/manage-cache.md

# `huggingface_hub` 캐시 시스템 관리하기[[manage-huggingfacehub-cache-system]]

## 캐싱 이해하기[[understand-caching]]

Hugging Face Hub 캐시 시스템은 Hub에 의존하는 라이브러리 간에 공유되는 중앙 캐시로 설계되었습니다. v0.8.0에서 수정 버전 간에 동일한 파일을 다시 다운로드하지 않도록 업데이트되었습니다.

캐시 시스템은 다음과 같이 설계되었습니다:

```
<CACHE_DIR>
├─ <MODELS>
├─ <DATASETS>
├─ <SPACES>
```

`<CACHE_DIR>`는 보통 사용자 홈 디렉토리 아래의 `~/.cache/huggingface`입니다. 그러나 모든 메소드에서 `cache_dir` 인수를 사용하거나 `HF_HOME` 또는 `HF_HUB_CACHE` 환경 변수를 지정하여 사용자 정의할 수 있습니다.

모델, 데이터셋, 스페이스는 공통된 루트를 공유합니다. 각 리포지토리는 리포지토리 유형과 네임스페이스(조직 또는 사용자 이름이 있을 경우), 리포지토리 이름을 포함합니다:

```
<CACHE_DIR>
├─ models--julien-c--EsperBERTo-small
├─ models--lysandrejik--arxiv-nlp
├─ models--bert-base-cased
├─ datasets--glue
├─ datasets--huggingface--DataMeasurementsFiles
├─ spaces--dalle-mini--dalle-mini
```
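예를 들어 리포지토리 ID와 유형으로부터 이 폴더 이름을 만드는 규칙은 다음처럼 간단히 표현할 수 있습니다. `cached_repo_folder_name`은 설명을 위해 만든 가상의 헬퍼이며, 실제 라이브러리의 내부 구현과는 무관합니다:

```python
def cached_repo_folder_name(repo_id: str, repo_type: str = "model") -> str:
    """'julien-c/EsperBERTo-small' -> 'models--julien-c--EsperBERTo-small'"""
    # 리포지토리 유형(복수형)과 네임스페이스/이름을 '--'로 연결합니다
    return "--".join([f"{repo_type}s"] + repo_id.split("/"))


print(cached_repo_folder_name("julien-c/EsperBERTo-small"))  # models--julien-c--EsperBERTo-small
```

네임스페이스가 없는 리포지토리(예: `glue`)는 `datasets--glue`처럼 유형과 이름만으로 폴더 이름이 만들어집니다.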

Hub로부터 모든 파일이 이 폴더들 안에 다운로드됩니다. 캐싱은 파일이 이미 존재하고 업데이트되지 않은 경우, 파일을 두 번 다운로드하지 않도록 해줍니다.
하지만 파일이 업데이트되었고 최신 파일을 요청하면, 최신 파일을 다운로드합니다 (이전 파일은 그대로 유지되어 필요할 때 다시 사용할 수 있습니다).

이를 위해 모든 폴더는 동일한 구조를 가집니다:

```
<CACHE_DIR>
├─ datasets--glue
│  ├─ refs
│  ├─ blobs
│  ├─ snapshots
...
```

각 폴더는 다음과 같은 내용을 포함하도록 구성되었습니다:

### Refs[[refs]]

`refs` 폴더에는 주어진 참조의 최신 수정 버전을 나타내는 파일이 포함되어 있습니다. 예를 들어, 이전에 리포지토리의 `main` 브랜치에서 파일을 가져온 경우, `refs` 폴더에는 `main`이라는 이름의 파일이 포함되며, 이 파일 자체에는 현재 헤드의 커밋 식별자가 들어 있습니다.

만약 `main`의 최신 커밋 식별자가 `aaaaaa`라면, 그 파일에는 `aaaaaa`가 들어 있습니다.

같은 브랜치가 새로운 커밋으로 업데이트되어 `bbbbbb`라는 식별자를 갖게 되면, 해당 참조에서 파일을 다시 다운로드할 때 `refs/main` 파일은 `bbbbbb`로 업데이트됩니다.
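`refs` 파일의 동작을 로컬에서 간단히 재현해 보면 다음과 같습니다. 개념 설명을 위한 스케치이며, 실제 캐시 구현과는 무관합니다:

```python
import os
import tempfile

cache_dir = tempfile.mkdtemp()
refs_dir = os.path.join(cache_dir, "refs")
os.makedirs(refs_dir)


def update_ref(name: str, commit_hash: str) -> None:
    # refs/<name> 파일에 현재 헤드의 커밋 식별자를 기록합니다
    with open(os.path.join(refs_dir, name), "w") as f:
        f.write(commit_hash)


def resolve_ref(name: str) -> str:
    with open(os.path.join(refs_dir, name)) as f:
        return f.read()


update_ref("main", "aaaaaa")  # 처음 다운로드했을 때
update_ref("main", "bbbbbb")  # 같은 브랜치가 새 커밋으로 업데이트됨
```

이후 `main` 참조를 조회하면 항상 최신 커밋 식별자를 얻게 됩니다.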

### Blobs[[blobs]]

`blobs` 폴더에는 실제로 다운로드된 파일이 포함되어 있습니다. 각 파일의 이름은 해당 파일의 해시값입니다.

### Snapshots[[snapshots]]

`snapshots` 폴더에는 위에서 언급한 blobs에 대한 심볼릭 링크가 포함되어 있습니다. 이 폴더는 여러 개의 하위 폴더로 구성되어 있으며, 각 폴더는 알려진 수정 버전을 나타냅니다.

위 설명에서, 처음에 `aaaaaa` 버전에서 파일을 가져왔고, 그 후에 `bbbbbb` 버전에서 파일을 가져왔습니다. 이 상황에서 `snapshots` 폴더에는 `aaaaaa`와 `bbbbbb`라는 두 개의 폴더가 있습니다.

이 폴더들 각각에는 다운로드한 파일의 이름을 가진 심볼릭 링크가 있습니다. 예를 들어, `aaaaaa` 버전에서 `README.md` 파일을 다운로드했다면, 다음과 같은 경로가 생깁니다:

```
<CACHE_DIR>/<REPO_NAME>/snapshots/aaaaaa/README.md
```

그 `README.md` 파일은 실제로 해당 파일의 해시를 가진 blob에 대한 심볼릭 링크입니다.

이와 같은 구조를 생성함으로써 파일 공유 메커니즘이 열리게 됩니다. 동일한 파일을 `bbbbbb` 버전에서 가져온 경우, 동일한 해시를 가지게 되어 파일을 다시 다운로드할 필요가 없습니다.
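이 공유 메커니즘을 로컬 데모로 재현해 보면 다음과 같습니다. 실제 캐시는 git 커밋 해시와 ETag를 사용하지만, 여기서는 원리를 보여주기 위해 sha256 해시를 사용한 가정적인 스케치입니다:

```python
import hashlib
import os
import tempfile

cache_dir = tempfile.mkdtemp()
blobs_dir = os.path.join(cache_dir, "blobs")
os.makedirs(blobs_dir)


def store_file(revision: str, filename: str, content: bytes) -> str:
    """파일을 blob으로 저장하고 snapshots/<revision>/에 심볼릭 링크를 만듭니다."""
    blob_path = os.path.join(blobs_dir, hashlib.sha256(content).hexdigest())
    if not os.path.exists(blob_path):  # 같은 내용이면 다시 저장하지 않음
        with open(blob_path, "wb") as f:
            f.write(content)
    snapshot_dir = os.path.join(cache_dir, "snapshots", revision)
    os.makedirs(snapshot_dir, exist_ok=True)
    os.symlink(blob_path, os.path.join(snapshot_dir, filename))
    return blob_path


# 두 수정 버전이 같은 내용의 파일을 가지면 blob은 하나만 저장됩니다
blob_a = store_file("aaaaaa", "pytorch_model.bin", b"same weights")
blob_b = store_file("bbbbbb", "pytorch_model.bin", b"same weights")
```

두 스냅샷 폴더의 심볼릭 링크가 동일한 blob을 가리키므로, 디스크에는 파일이 한 번만 존재합니다.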

### .no_exist (advanced)[[noexist-advanced]]

`blobs`, `refs`, `snapshots` 폴더 외에도 캐시에서 `.no_exist` 폴더를 찾을 수 있습니다. 이 폴더는 한 번 다운로드하려고 시도했지만 Hub에 존재하지 않는 파일을 기록합니다. 이 폴더의 구조는 `snapshots` 폴더와 동일하며, 알려진 각 수정 버전에 대해 하나의 하위 폴더를 갖습니다:

```
<CACHE_DIR>/<REPO_NAME>/.no_exist/aaaaaa/config_that_does_not_exist.json
```

`snapshots` 폴더와 달리, 파일은 심볼릭 링크가 아닌 단순한 빈 파일입니다. 이 예에서 `"config_that_does_not_exist.json"` 파일은 `"aaaaaa"` 버전에 대해 Hub에 존재하지 않습니다. 빈 파일만 저장하므로 이 폴더가 차지하는 디스크 공간은 무시할 수 있는 수준입니다.

그렇다면 이제 여러분은 왜 이 정보가 관련이 있는지 궁금해 할지도 모릅니다. 몇몇 경우에서는 프레임워크가 모델에 대한 옵션 파일들을 불러오려고 시도합니다. 존재하지 않는 옵션 파일들을 저장하면 가능한 옵션 파일당 1개의 HTTP 호출을 절약할 수 있어 모델을 더 빠르게 불러올 수 있습니다. 이는 예를 들어 각 토크나이저가 추가 파일을 지원하는 `transformers`에서 발생합니다. 처음으로 토크나이저를 로드할 때, 다음 초기화를 위해 로딩 시간을 더 빠르게 하기 위해 옵션 파일이 존재하는지 여부를 캐시합니다.
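'존재하지 않음'을 빈 파일로 기록하는 방식은 다음과 같이 재현할 수 있습니다. 개념 설명을 위한 가정적인 스케치이며, 실제 캐시 구현과는 무관합니다:

```python
import os
import tempfile

repo_cache = tempfile.mkdtemp()


def mark_missing(revision: str, filename: str) -> None:
    # .no_exist/<revision>/<filename> 위치에 빈 파일을 만들어 '존재하지 않음'을 기록합니다
    no_exist_dir = os.path.join(repo_cache, ".no_exist", revision)
    os.makedirs(no_exist_dir, exist_ok=True)
    open(os.path.join(no_exist_dir, filename), "w").close()


def is_known_missing(revision: str, filename: str) -> bool:
    return os.path.isfile(os.path.join(repo_cache, ".no_exist", revision, filename))


mark_missing("aaaaaa", "config_that_does_not_exist.json")
```

이후 같은 파일을 다시 요청하면 HTTP 호출 없이 `is_known_missing()` 확인만으로 충분합니다.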

HTTP 요청을 만들지 않고 로컬로 캐시된 파일이 있는지 테스트하려면, [try_to_load_from_cache()](/docs/huggingface_hub/main/ko/package_reference/cache#huggingface_hub.try_to_load_from_cache) 헬퍼를 사용할 수 있습니다. 이것은 파일이 존재하고 캐시된 경우에는 파일 경로를, 존재하지 않음이 캐시된 경우에는 `_CACHED_NO_EXIST` 객체를, 알 수 없는 경우에는 `None`을 반환합니다.

```python
from huggingface_hub import try_to_load_from_cache, _CACHED_NO_EXIST

filepath = try_to_load_from_cache(repo_id="bert-base-uncased", filename="config.json")
if isinstance(filepath, str):
    # 파일이 존재하며 캐시되어 있습니다
    ...
elif filepath is _CACHED_NO_EXIST:
    # 파일이 존재하지 않는다는 사실이 캐시되어 있습니다
    ...
else:
    # 파일이 캐시되어 있지 않습니다 (존재 여부를 알 수 없음)
    ...
```

### 캐시 구조 예시[[in-practice]]

실제로는 캐시는 다음과 같은 트리 구조를 가질 것입니다:

```text
    [  96]  .
    └── [ 160]  models--julien-c--EsperBERTo-small
        ├── [ 160]  blobs
        │   ├── [321M]  403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd
        │   ├── [ 398]  7cb18dc9bafbfcf74629a4b760af1b160957a83e
        │   └── [1.4K]  d7edf6bd2a681fb0175f7735299831ee1b22b812
        ├── [  96]  refs
        │   └── [  40]  main
        └── [ 128]  snapshots
            ├── [ 128]  2439f60ef33a0d46d85da5001d52aeda5b00ce9f
            │   ├── [  52]  README.md -> ../../blobs/d7edf6bd2a681fb0175f7735299831ee1b22b812
            │   └── [  76]  pytorch_model.bin -> ../../blobs/403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd
            └── [ 128]  bbc77c8132af1cc5cf678da3f1ddf2de43606d48
                ├── [  52]  README.md -> ../../blobs/7cb18dc9bafbfcf74629a4b760af1b160957a83e
                └── [  76]  pytorch_model.bin -> ../../blobs/403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd
```

### 제한사항[[limitations]]

효율적인 캐시 시스템을 갖기 위해 `huggingface-hub`은 심볼릭 링크를 사용합니다. 그러나 모든 기기에서 심볼릭 링크를 지원하지는 않습니다. 특히 Windows에서 이러한 한계가 있다는 것이 알려져 있습니다. 이런 경우에는 `huggingface_hub`이 `blobs/` 디렉터리를 사용하지 않고 대신 파일을 직접 `snapshots/` 디렉터리에 저장합니다. 이 해결책을 통해 사용자는 Hub에서 파일을 다운로드하고 캐시하는 방식을 정확히 동일하게 사용할 수 있습니다. 캐시를 검사하고 삭제하는 도구들도 지원됩니다. 그러나 캐시 시스템은 동일한 리포지토리의 여러 수정 버전을 다운로드하는 경우 같은 파일이 여러 번 다운로드될 수 있기 때문에 효율적이지 않을 수 있습니다.

Windows 기기에서 심볼릭 링크 기반 캐시 시스템의 이점을 누리려면, [개발자 모드를 활성화](https://docs.microsoft.com/ko-kr/windows/apps/get-started/enable-your-device-for-development)하거나 Python을 관리자 권한으로 실행해야 합니다.

심볼릭 링크가 지원되지 않는 경우, 사용자에게 캐시 시스템의 낮은 버전을 사용 중임을 알리는 경고 메시지가 표시됩니다. 이 경고는 `HF_HUB_DISABLE_SYMLINKS_WARNING` 환경 변수를 true로 설정하여 비활성화할 수 있습니다.

## 캐싱 자산[[caching-assets]]

Hub에서 파일을 캐시하는 것 외에도, 하위 라이브러리들은 `huggingface_hub`가 직접 처리하지 않는 HF 관련 파일(예: GitHub에서 다운로드한 파일, 전처리된 데이터, 로그 등)을 캐시해야 할 때가 있습니다. 이러한 파일, 즉 '자산(assets)'을 캐시하기 위해 [cached_assets_path()](/docs/huggingface_hub/main/ko/package_reference/cache#huggingface_hub.cached_assets_path)를 사용할 수 있습니다. 이 헬퍼는 요청한 라이브러리의 이름과 선택적으로 네임스페이스 및 하위 폴더 이름을 기반으로 HF 캐시 내 경로를 통일된 방식으로 생성합니다. 목표는 모든 하위 라이브러리가 올바른 자산 폴더 안에서라면 자산을 자체 방식대로(예: 구조에 대한 규칙 없음) 관리할 수 있도록 하는 것입니다. 그러면 이러한 라이브러리는 `huggingface_hub`의 도구를 활용하여 캐시를 관리할 수 있으며, 특히 CLI 명령을 통해 자산의 일부를 스캔하고 삭제할 수 있습니다.

```py
from huggingface_hub import cached_assets_path

assets_path = cached_assets_path(library_name="datasets", namespace="SQuAD", subfolder="download")
something_path = assets_path / "something.json" # 자산 폴더에서 원하는 대로 작업하세요!
```

> [!TIP]
> [cached_assets_path()](/docs/huggingface_hub/main/ko/package_reference/cache#huggingface_hub.cached_assets_path)는 자산을 저장하는 권장 방법이지만 필수는 아닙니다. 이미 라이브러리가 자체 캐시를 사용하는 경우 해당 캐시를 자유롭게 사용하세요!

### 자산 캐시 구조 예시[[assets-in-practice]]

실제로는 자산 캐시는 다음과 같은 트리 구조를 가질 것입니다:

```text
    assets/
    └── datasets/
    │   ├── SQuAD/
    │   │   ├── downloaded/
    │   │   ├── extracted/
    │   │   └── processed/
    │   ├── Helsinki-NLP--tatoeba_mt/
    │       ├── downloaded/
    │       ├── extracted/
    │       └── processed/
    └── transformers/
        ├── default/
        │   ├── something/
        ├── bert-base-cased/
        │   ├── default/
        │   └── training/
    hub/
    └── models--julien-c--EsperBERTo-small/
        ├── blobs/
        │   ├── (...)
        │   ├── (...)
        ├── refs/
        │   └── (...)
        └── [ 128]  snapshots/
            ├── 2439f60ef33a0d46d85da5001d52aeda5b00ce9f/
            │   ├── (...)
            └── bbc77c8132af1cc5cf678da3f1ddf2de43606d48/
                └── (...)
```

## 캐시 확인하기[[scan-your-cache]]

현재 캐시된 파일은 로컬 디렉토리에서 자동으로 삭제되지 않습니다. 브랜치의 새로운 수정 버전을 다운로드하면 이전 파일이 그대로 남아 향후 다시 사용할 수 있습니다. 따라서 디스크 공간을 많이 차지하는 리포지토리와 수정 버전을 파악하려면 캐시 디렉터리를 점검하는 것이 좋습니다. `huggingface_hub`은 이를 위해 CLI와 Python 유틸리티를 모두 제공합니다.

### 터미널에서 캐시 확인하기[[scan-cache-from-the-terminal]]

HF 캐시 상태를 살펴보는 가장 간단한 방법은 `hf cache ls` 명령을 사용하는 것입니다. 기본적으로 캐시에 저장된 리포지토리를 한눈에 보여줍니다.

```text
➜ hf cache ls
ID                                   SIZE   LAST_ACCESSED LAST_MODIFIED REFS
------------------------------------ ------- ------------- ------------- -------------------
dataset/glue                         116.3K 4 days ago     4 days ago     2.4.0 main 1.17.0
dataset/google/fleurs                 64.9M 1 week ago     1 week ago     main refs/pr/1
model/Jean-Baptiste/camembert-ner    441.0M 2 weeks ago    16 hours ago   main
model/bert-base-cased                  1.9G 1 week ago     2 years ago
model/t5-base                          10.1K 3 months ago   3 months ago   main
model/t5-small                        970.7M 3 days ago     3 days ago     main refs/pr/1

Found 6 repo(s) for a total of 12 revision(s) and 3.4G on disk.
```

`--revisions` 옵션을 추가하면 각 스냅샷별 상세 목록을 확인할 수 있습니다. `size>1GB`, `accessed>30d`와 같이 사람이 읽기 쉬운 값을 사용하는 필터도 함께 적용할 수 있습니다.

```text
➜ hf cache ls --revisions --filter "size>1GB" --filter "accessed>30d"
ID                                   REVISION            SIZE   LAST_MODIFIED REFS
------------------------------------ ------------------ ------- ------------- -------------------
model/bert-base-cased                6d1d7a1a2a6cf4c2    1.9G  2 years ago
model/t5-small                       1c610f6b3f5e7d8a    1.1G  3 months ago  main

Found 2 repo(s) for a total of 2 revision(s) and 3.0G on disk.
```
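`size>1GB` 같은 필터 값이 바이트 수로 변환되는 과정을 개념적으로 스케치하면 다음과 같습니다. `parse_size`는 설명을 위해 만든 가상의 헬퍼이며, 실제 CLI 구현과는 무관합니다:

```python
import re

_UNITS = {"": 1.0, "K": 1e3, "M": 1e6, "G": 1e9, "T": 1e12}


def parse_size(text: str) -> float:
    """'1GB', '116.3K' 같은 사람이 읽기 쉬운 크기를 바이트 수로 변환합니다."""
    match = re.fullmatch(r"([\d.]+)\s*([KMGT]?)B?", text.strip().upper())
    if match is None:
        raise ValueError(f"Cannot parse size: {text!r}")
    value, unit = match.groups()
    return float(value) * _UNITS[unit]
```

예를 들어 `parse_size("1GB")`는 `1e9`를 반환하므로, 크기가 이 값보다 큰 리포지토리만 남기는 식으로 필터를 적용할 수 있습니다.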

머신 친화적인 출력이 필요하다면 `--format json` 또는 `--format csv`를 사용하고, 식별자만 받고 싶다면 `--quiet`를 사용할 수 있습니다. `--cache-dir` 옵션을 함께 지정하면 기본 위치가 아닌 다른 캐시 디렉터리를 살펴볼 수도 있습니다.

```text
➜ hf cache rm $(hf cache ls --filter "accessed>1y" -q) -y
About to delete 2 repo(s) totalling 5.31G.
  - model/meta-llama/Llama-3.2-1B-Instruct (entire repo)
  - model/hexgrad/Kokoro-82M (entire repo)
Delete repo: ~/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B-Instruct
Delete repo: ~/.cache/huggingface/hub/models--hexgrad--Kokoro-82M
Cache deletion done. Saved 5.31G.
Deleted 2 repo(s) and 2 revision(s); freed 5.31G.
```

#### 쉘 도구로 필터링하기[[grep-example]]

출력이 표 형식이므로 기존의 `grep` 같은 도구와도 잘 어울립니다. 아래 예시는 `t5-small` 관련 스냅샷만 찾는 방법입니다.

```text
➜ hf cache ls --revisions | grep "t5-small"
model/t5-small                       1c610f6b3f5e7d8a    1.1G  3 months ago  main
model/t5-small                       8f3ad1c90fed7a62    820.1M 2 weeks ago   refs/pr/1
```

### 파이썬에서 캐시 스캔하기[[scan-cache-from-python]]

보다 고급 기능을 사용하려면, CLI 도구에서 호출되는 파이썬 유틸리티인 [scan_cache_dir()](/docs/huggingface_hub/main/ko/package_reference/cache#huggingface_hub.scan_cache_dir)을 사용할 수 있습니다.

이를 사용하여 4가지 데이터 클래스를 중심으로 구조화된 자세한 보고서를 얻을 수 있습니다:

- [HFCacheInfo](/docs/huggingface_hub/main/ko/package_reference/cache#huggingface_hub.HFCacheInfo): [scan_cache_dir()](/docs/huggingface_hub/main/ko/package_reference/cache#huggingface_hub.scan_cache_dir)에 의해 반환되는 완전한 보고서
- [CachedRepoInfo](/docs/huggingface_hub/main/ko/package_reference/cache#huggingface_hub.CachedRepoInfo): 캐시된 리포지토리에 관한 정보
- [CachedRevisionInfo](/docs/huggingface_hub/main/ko/package_reference/cache#huggingface_hub.CachedRevisionInfo): 리포지토리 내의 캐시된 수정 버전(예: "snapshot")에 관한 정보
- [CachedFileInfo](/docs/huggingface_hub/main/ko/package_reference/cache#huggingface_hub.CachedFileInfo): 스냅샷 내의 캐시된 파일에 관한 정보

다음은 간단한 사용 예시입니다. 자세한 내용은 참조 문서를 참고하세요.

```py
>>> from huggingface_hub import scan_cache_dir

>>> hf_cache_info = scan_cache_dir()
HFCacheInfo(
    size_on_disk=3398085269,
    repos=frozenset({
        CachedRepoInfo(
            repo_id='t5-small',
            repo_type='model',
            repo_path=PosixPath(...),
            size_on_disk=970726914,
            nb_files=11,
            last_accessed=1662971707.3567169,
            last_modified=1662971107.3567169,
            revisions=frozenset({
                CachedRevisionInfo(
                    commit_hash='d78aea13fa7ecd06c29e3e46195d6341255065d5',
                    size_on_disk=970726339,
                    snapshot_path=PosixPath(...),
                    # 수정 버전 간에 blobs가 공유되기 때문에 `last_accessed`가 없습니다.
                    last_modified=1662971107.3567169,
                    files=frozenset({
                        CachedFileInfo(
                            file_name='config.json',
                            size_on_disk=1197,
                            file_path=PosixPath(...),
                            blob_path=PosixPath(...),
                            blob_last_accessed=1662971707.3567169,
                            blob_last_modified=1662971107.3567169,
                        ),
                        CachedFileInfo(...),
                        ...
                    }),
                ),
                CachedRevisionInfo(...),
                ...
            }),
        ),
        CachedRepoInfo(...),
        ...
    }),
    warnings=[
        CorruptedCacheException("Snapshots dir doesn't exist in cached repo: ..."),
        CorruptedCacheException(...),
        ...
    ],
)
```

## 캐시 정리하기[[clean-your-cache]]

캐시를 스캔하는 것은 흥미로울 수 있지만, 보통 그다음에 할 일은 드라이브의 공간을 확보하기 위해 일부를 삭제하는 것입니다. 이는 `hf cache rm` CLI 명령으로 가능합니다. 또한 캐시를 스캔할 때 반환되는 [HFCacheInfo](/docs/huggingface_hub/main/ko/package_reference/cache#huggingface_hub.HFCacheInfo) 객체의 [delete_revisions()](/docs/huggingface_hub/main/ko/package_reference/cache#huggingface_hub.HFCacheInfo.delete_revisions) 헬퍼를 사용하여 프로그래밍 방식으로도 삭제할 수 있습니다.

### 전략적으로 삭제하기[[delete-strategy]]


캐시를 삭제하려면 삭제할 수정 버전 목록을 전달해야 합니다. 이 도구는 이 목록을 기반으로 공간을 확보하기 위한 전략을 정의하고, 어떤 파일과 폴더가 삭제될지를 설명하는 [DeleteCacheStrategy](/docs/huggingface_hub/main/ko/package_reference/cache#huggingface_hub.DeleteCacheStrategy) 객체를 반환합니다. 이 객체를 통해 얼마나 많은 공간이 확보될지 미리 확인할 수 있습니다. 삭제에 동의하면 `execute()`를 호출하여 삭제를 실제로 수행해야 합니다. 불일치를 피하기 위해 전략 객체는 수동으로 편집할 수 없습니다.

수정 버전을 삭제하기 위한 전략은 다음과 같습니다:

- 수정 버전 심볼릭 링크가 있는 `snapshot` 폴더가 삭제됩니다.
- 삭제할 수정 버전에만 대상이 되는 blobs 파일도 삭제됩니다.
- 수정 버전이 1개 이상의 `refs`에 연결되어 있는 경우, 참조가 삭제됩니다.
- 리포지토리의 모든 수정 버전이 삭제되는 경우 전체 캐시된 리포지토리가 삭제됩니다.
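위 전략에서 '삭제할 수정 버전에서만 참조되는 blob'을 고르는 단계는 집합 연산으로 표현할 수 있습니다. 다음은 동작 원리를 보여주기 위한 가정적인 스케치이며, 실제 구현과는 무관합니다:

```python
def blobs_to_delete(revision_blobs: dict, revisions_to_delete: set) -> set:
    """revision_blobs: 수정 버전 해시 -> 해당 스냅샷이 참조하는 blob 해시 집합"""
    kept = set()
    for revision, blobs in revision_blobs.items():
        if revision not in revisions_to_delete:
            kept |= blobs  # 남는 수정 버전이 참조하는 blob은 보존 대상
    doomed = set()
    for revision in revisions_to_delete:
        doomed |= revision_blobs.get(revision, set())
    return doomed - kept  # 삭제 대상 버전에서만 참조되는 blob만 삭제
```

예를 들어 `aaaaaa`와 `bbbbbb`가 blob `b2`를 공유한다면, `aaaaaa`만 삭제해도 `b2`는 보존됩니다.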

> [!TIP]
> 수정 버전 해시는 모든 리포지토리를 통틀어 고유합니다. 따라서 `hf cache rm`은 리포지토리 식별자(`model/bert-base-uncased` 등)와 수정 버전 해시를 모두 인수로 받으며, 해시를 전달할 때는 리포지토리를 별도로 지정할 필요가 없습니다.

> [!WARNING]
> 캐시에서 수정 버전을 찾을 수 없는 경우 무시됩니다. 또한 삭제 중에 파일 또는 폴더를 찾을 수 없는 경우 경고가 기록되지만 오류가 발생하지 않습니다. [DeleteCacheStrategy](/docs/huggingface_hub/main/ko/package_reference/cache#huggingface_hub.DeleteCacheStrategy) 객체에 포함된 다른 경로에 대해 삭제가 계속됩니다.

### 터미널에서 캐시 정리하기[[clean-cache-from-the-terminal]]

캐시에서 불필요한 데이터를 지우려면 `hf cache rm` 명령을 사용하세요. 리포지토리 식별자(예: `model/bert-base-uncased`)나 수정 버전 해시를 인수로 전달하면 됩니다.

```text
➜ hf cache rm model/bert-base-cased
About to delete 1 repo(s) totalling 1.9G.
  - model/bert-base-cased (entire repo)
Proceed with deletion? [y/N]: y
Deleted 1 repo(s) and 1 revision(s); freed 1.9G.
```

여러 리포지토리와 특정 수정 버전을 함께 지정할 수도 있습니다. `--dry-run` 옵션을 사용하면 실제 삭제 없이 결과를 미리 확인할 수 있고, 자동화된 스크립트에서는 `--yes`로 확인 단계를 건너뛸 수 있습니다.

```text
➜ hf cache rm model/t5-small 8f3ad1c --dry-run
About to delete 1 repo(s) and 1 revision(s) totalling 1.1G.
  - model/t5-small:
      8f3ad1c [main] 1.1G
Dry run: no files were deleted.
```

기본 위치가 아닌 다른 캐시 디렉터리를 다루고 있다면 `--cache-dir` 옵션으로 경로를 지정하세요.

사용되지 않는(detached) 스냅샷만 한 번에 정리하려면 `hf cache prune`을 사용할 수 있습니다. 이 명령은 브랜치나 태그에서 참조하지 않는 수정 버전을 자동으로 선택합니다.

```text
➜ hf cache prune
About to delete 3 unreferenced revision(s) (2.4G total).
  - model/t5-small:
      1c610f6b [refs/pr/1] 820.1M
      d4ec9b72 [(detached)] 640.5M
  - dataset/google/fleurs:
      2b91c8dd [(detached)] 937.6M
Proceed? [y/N]: y
Deleted 3 unreferenced revision(s); freed 2.4G.
```

두 명령 모두 `--dry-run`, `--yes`, `--cache-dir` 옵션을 지원하므로 시뮬레이션, 자동화, 대체 캐시 경로 지정을 자유롭게 조합할 수 있습니다.

### 파이썬에서 캐시 정리하기[[clean-cache-from-python]]

더 유연하게 사용하려면, 프로그래밍 방식으로 [delete_revisions()](/docs/huggingface_hub/main/ko/package_reference/cache#huggingface_hub.HFCacheInfo.delete_revisions) 메소드를 사용할 수도 있습니다. 간단한 예제를 살펴보겠습니다. 자세한 내용은 참조 문서를 확인하세요.

```py
>>> from huggingface_hub import scan_cache_dir

>>> delete_strategy = scan_cache_dir().delete_revisions(
...     "81fd1d6e7847c99f5862c9fb81387956d99ec7aa",
...     "e2983b237dccf3ab4937c97fa717319a9ca1a96d",
...     "6c0e6080953db56375760c0471a8c5f2929baf11",
... )
>>> print("Will free " + delete_strategy.expected_freed_size_str)
Will free 8.6G

>>> delete_strategy.execute()
Cache deletion done. Saved 8.6G.
```


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/guides/manage-cache.md" />

### 명령줄 인터페이스 (CLI) [[command-line-interface]]
https://huggingface.co/docs/huggingface_hub/main/ko/guides/cli.md

# 명령줄 인터페이스 (CLI) [[command-line-interface]]

`huggingface_hub` Python 패키지는 `hf`라는 내장 CLI를 함께 제공합니다. 이 도구를 사용하면 터미널에서 Hugging Face Hub와 직접 상호 작용할 수 있습니다. 계정에 로그인하고, 리포지토리를 생성하고, 파일을 업로드 및 다운로드하는 등의 다양한 작업을 수행할 수 있습니다. 또한 머신을 구성하거나 캐시를 관리하는 데 유용한 기능도 제공합니다. 이 가이드는 CLI의 주요 기능과 사용 방법에 관해 설명합니다.

## 시작하기 [[getting-started]]

먼저, CLI를 설치해 보세요:

```bash
>>> pip install -U "huggingface_hub"
```

> [!TIP]
> CLI는 기본 `huggingface_hub` 패키지에 포함되어 있습니다.

설치가 완료되면, CLI가 올바르게 설정되었는지 확인할 수 있습니다:

```
>>> hf --help
usage: hf <command> [<args>]

positional arguments:
  {auth,cache,download,repo,repo-files,upload,upload-large-folder,env,version,lfs-enable-largefiles,lfs-multipart-upload}
                        hf command helpers
    auth                Manage authentication (login, logout, etc.).
    cache               Manage local cache directory.
    download            Download files from the Hub
    repo                Manage repos on the Hub.
    repo-files          Manage files in a repo on the Hub.
    upload              Upload a file or a folder to the Hub. Recommended for single-commit uploads.
    upload-large-folder
                        Upload a large folder to the Hub. Recommended for resumable uploads.
    env                 Print information about the environment.
    version             Print information about the hf version.

options:
  -h, --help            show this help message and exit
```

CLI가 제대로 설치되었다면 CLI에서 사용 가능한 모든 옵션 목록이 출력됩니다. `command not found: hf`와 같은 오류 메시지가 표시된다면 [설치](../installation) 가이드를 확인하세요.

> [!TIP]
> `--help` 옵션을 사용하면 명령어에 대한 자세한 정보를 얻을 수 있습니다. 언제든지 사용 가능한 모든 옵션과 그 세부 사항을 확인할 수 있습니다. 예를 들어 `hf upload --help`는 CLI를 사용하여 파일을 업로드하는 구체적인 방법을 알려줍니다.

### 다른 방법으로 설치하기 [[alternative-install]]

#### uv 사용하기 [[using-uv]]

[uv](https://docs.astral.sh/uv/)를 사용하면 `hf` CLI를 설치하거나, 설치 없이 바로 실행할 수 있습니다. 먼저 uv를 설치하세요 (PATH에 `uv`와 `uvx`가 추가됩니다):

```bash
>>> curl -LsSf https://astral.sh/uv/install.sh | sh
```

영구적으로 도구를 설치해 어디에서나 사용하려면:

```bash
>>> uv tool install "huggingface_hub"
>>> hf --help
```

전역 설치 없이 일회성으로 실행하려면 `uvx`를 사용하세요:

```bash
>>> uvx --from huggingface_hub hf --help
```

#### Homebrew 사용하기 [[using-homebrew]]

[Homebrew](https://brew.sh/)를 사용하여 CLI를 설치할 수도 있습니다:

```bash
>>> brew install huggingface-cli
```

Homebrew의 `huggingface-cli` 포뮬러에 대한 자세한 내용은 [여기](https://formulae.brew.sh/formula/huggingface-cli)에서 확인할 수 있습니다.

## hf auth login [[hf-login]]

Hugging Face Hub에 접근하는 대부분의 작업(비공개 리포지토리 액세스, 파일 업로드, PR 제출 등)을 위해서는 Hugging Face 계정에 로그인해야 합니다. 로그인을 하기 위해서 [설정 페이지](https://huggingface.co/settings/tokens)에서 생성한 [사용자 액세스 토큰](https://huggingface.co/docs/hub/security-tokens)이 필요하며, 이 토큰은 Hub에서의 사용자 인증에 사용됩니다. 파일 업로드나 콘텐츠 수정을 위해선 쓰기 권한이 있는 토큰이 필요합니다.
토큰을 받은 후에 터미널에서 다음 명령을 실행하세요:

```bash
>>> hf auth login
```

이 명령은 토큰을 입력하라는 메시지를 표시합니다. 토큰을 복사하여 붙여넣고 Enter 키를 누르세요. 그런 다음 토큰을 git 자격 증명으로 저장할지 묻는 메시지가 표시됩니다. 로컬에서 `git`을 사용할 계획이라면 Enter 키를 누르세요(기본값은 yes). 마지막으로 Hub에서 토큰의 유효성을 검증한 후 로컬에 저장합니다.

```
_|    _|  _|    _|    _|_|_|    _|_|_|  _|_|_|  _|      _|    _|_|_|      _|_|_|_|    _|_|      _|_|_|  _|_|_|_|
_|    _|  _|    _|  _|        _|          _|    _|_|    _|  _|            _|        _|    _|  _|        _|
_|_|_|_|  _|    _|  _|  _|_|  _|  _|_|    _|    _|  _|  _|  _|  _|_|      _|_|_|    _|_|_|_|  _|        _|_|_|
_|    _|  _|    _|  _|    _|  _|    _|    _|    _|    _|_|  _|    _|      _|        _|    _|  _|        _|
_|    _|    _|_|      _|_|_|    _|_|_|  _|_|_|  _|      _|    _|_|_|      _|        _|    _|    _|_|_|  _|_|_|_|

To log in, `huggingface_hub` requires a token generated from https://huggingface.co/settings/tokens .
Token:
Add token as git credential? (Y/n)
Token is valid (permission: write).
Your token has been saved in your configured git credential helpers (store).
Your token has been saved to /home/wauplin/.cache/huggingface/token
Login successful
```

프롬프트를 거치지 않고 바로 로그인하고 싶다면, 명령줄에서 토큰을 직접 입력할 수도 있습니다. 하지만 보안을 더욱 강화하기 위해서는 명령 기록에 토큰을 남기지 않고, 환경 변수를 통해 토큰을 전달하는 방법이 바람직합니다.

```bash
# Or using an environment variable
>>> hf auth login --token $HUGGINGFACE_TOKEN --add-to-git-credential
Token is valid (permission: write).
Your token has been saved in your configured git credential helpers (store).
Your token has been saved to /home/wauplin/.cache/huggingface/token
Login successful
```

[이 단락](../quick-start#authentication)에서 인증에 대한 더 자세한 내용을 확인할 수 있습니다.

## hf auth whoami [[hf-whoami]]

로그인 여부를 확인하기 위해 `hf auth whoami` 명령어를 사용할 수 있습니다. 이 명령어는 옵션이 없으며, 간단하게 사용자 이름과 소속된 조직들을 출력합니다:

```bash
hf auth whoami
Wauplin
orgs:  huggingface,eu-test,OAuthTesters,hf-accelerate,HFSmolCluster
```

로그인하지 않은 경우 오류 메시지가 출력됩니다.

## hf auth logout [[hf-auth-logout]]

이 명령어를 사용하여 로그아웃할 수 있습니다. 실제로는 컴퓨터에 저장된 토큰을 삭제합니다.

하지만 `HF_TOKEN` 환경 변수를 사용하여 로그인했다면, 이 명령어로는 로그아웃할 수 없습니다([참조](../package_reference/environment_variables#hftoken)). 대신 컴퓨터의 환경 설정에서 `HF_TOKEN` 변수를 제거하면 됩니다.

## hf download [[hf-download]]


`hf download` 명령어를 사용하여 Hub에서 직접 파일을 다운로드할 수 있습니다. 내부적으로는 [다운로드](./download) 가이드에서 설명된 `hf_hub_download()`, `snapshot_download()` 헬퍼 함수를 사용하며, 다운로드된 파일의 경로를 터미널에 출력합니다. 아래 예시에서 가장 일반적인 사용 사례를 살펴봅니다. 사용 가능한 모든 옵션을 보려면 다음 명령어를 실행해 보세요:

```bash
hf download --help
```

### 파일 한 개 다운로드하기 [[download-a-single-file]]

리포지토리에서 파일 하나를 다운로드하고 싶다면, repo_id와 다운받고 싶은 파일명을 아래와 같이 입력하세요:

```bash
>>> hf download gpt2 config.json
downloading https://huggingface.co/gpt2/resolve/main/config.json to /home/wauplin/.cache/huggingface/hub/tmpwrq8dm5o
(…)ingface.co/gpt2/resolve/main/config.json: 100%|██████████████████████████████████| 665/665 [00:00<00:00, 2.49MB/s]
/home/wauplin/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/config.json
```

이 명령어를 실행하면 항상 마지막 줄에 파일 경로를 출력합니다.

### 전체 리포지토리 다운로드하기 [[download-an-entire-repository]]

리포지토리의 모든 파일을 다운로드하고 싶을 때에는 repo id만 입력하면 됩니다:

```bash
>>> hf download HuggingFaceH4/zephyr-7b-beta
Fetching 23 files:   0%|                                                | 0/23 [00:00<?, ?it/s]
...
...
/home/wauplin/.cache/huggingface/hub/models--HuggingFaceH4--zephyr-7b-beta/snapshots/3bac358730f8806e5c3dc7c7e19eb36e045bf720
```

### 여러 파일 다운로드하기 [[download-multiple-files]]

리포지토리의 전체 폴더를 다운로드하지 않고 한 번에 여러 파일을 다운로드할 수도 있습니다. 이를 위한 두 가지 방법이 있습니다. 다운로드하고자 하는 파일들의 목록이 정해져 있다면, 해당 파일명을 순서대로 입력하면 됩니다:

```bash
>>> hf download gpt2 config.json model.safetensors
Fetching 2 files:   0%|                                                                        | 0/2 [00:00<?, ?it/s]
downloading https://huggingface.co/gpt2/resolve/11c5a3d5811f50298f278a704980280950aedb10/model.safetensors to /home/wauplin/.cache/huggingface/hub/tmpdachpl3o
(…)8f278a7049802950aedb10/model.safetensors: 100%|██████████████████████████████| 8.09k/8.09k [00:00<00:00, 40.5MB/s]
Fetching 2 files: 100%|████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00,  3.76it/s]
/home/wauplin/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10
```

또 다른 방법은 `--include`와 `--exclude` 옵션을 사용하여 원하는 파일을 필터링하는 것입니다. 예를 들어, [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)의 모든 safetensors 파일을 다운로드하되 FP16 정밀도의 파일은 제외하고 싶다면 다음과 같이 실행할 수 있습니다:

```bash
>>> hf download stabilityai/stable-diffusion-xl-base-1.0 --include "*.safetensors" --exclude "*.fp16.*"
Fetching 8 files:   0%|                                                                         | 0/8 [00:00<?, ?it/s]
...
...
Fetching 8 files: 100%|█████████████████████████████████████████████████████████████████████████| 8/8 (...)
/home/wauplin/.cache/huggingface/hub/models--stabilityai--stable-diffusion-xl-base-1.0/snapshots/462165984030d82259a11f4367a4eed129e94a7b
```
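
`--include`/`--exclude`의 선택 로직은 표준 라이브러리 `fnmatch`로 흉내 내 볼 수 있습니다. CLI의 실제 구현과 동일함을 보장하지 않는 설명용 스케치이며, `select_files` 함수 이름은 가정입니다.

```python
from fnmatch import fnmatch

def select_files(paths, include=None, exclude=None):
    """include 패턴과 일치하고 exclude 패턴과 일치하지 않는 경로만 남깁니다."""
    selected = []
    for path in paths:
        # include 패턴이 주어졌다면 하나 이상과 일치해야 합니다.
        if include and not any(fnmatch(path, p) for p in include):
            continue
        # exclude 패턴과 하나라도 일치하면 제외합니다.
        if exclude and any(fnmatch(path, p) for p in exclude):
            continue
        selected.append(path)
    return selected
```

위의 예시처럼 `select_files(files, include=["*.safetensors"], exclude=["*.fp16.*"])`를 호출하면 FP16 정밀도 파일을 제외한 safetensors 파일만 남습니다.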

### 데이터 세트 또는 Space 다운로드하기 [[download-a-dataset-or-a-space]]

앞서 소개된 예시들을 통해 모델 리포지토리에서 다운로드하는 방법을 배웠습니다. 데이터 세트나 Space를 다운로드하고자 할 때는 `--repo-type` 옵션을 사용하세요:

```bash
# https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k
>>> hf download HuggingFaceH4/ultrachat_200k --repo-type dataset

# https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat
>>> hf download HuggingFaceH4/zephyr-chat --repo-type space

...
```

### 특정 리비전 다운로드하기 [[download-a-specific-revision]]

따로 리비전을 지정하지 않는다면 기본적으로 main 브랜치의 최신 커밋에서 파일을 다운로드합니다. 특정 리비전(커밋 해시, 브랜치 이름 또는 태그)에서 다운로드하려면 `--revision` 옵션을 사용하세요:

```bash
>>> hf download bigcode/the-stack --repo-type dataset --revision v1.1
...
```

### 로컬 폴더에 다운로드하기 [[download-to-a-local-folder]]

Hub에서 파일을 다운로드하는 권장되는 기본 방법은 캐시 시스템을 사용하는 것입니다. 그러나 경우에 따라 파일을 지정된 폴더로 다운로드하고 싶을 수 있습니다. 이는 git 명령어와 유사한 워크플로우를 만드는 데 유용합니다. `--local-dir` 옵션을 사용하여 이 작업을 수행할 수 있습니다.

> [!WARNING]
> 로컬 폴더에 다운로드하는 것에는 몇 가지 단점이 있습니다. `--local-dir` 옵션을 사용하기 전에 [다운로드](./download#download-files-to-local-folder) 가이드에서 해당 내용을 확인해 보세요.

```bash
>>> hf download adept/fuyu-8b model-00001-of-00002.safetensors --local-dir .
...
./model-00001-of-00002.safetensors
```

### 캐시 디렉터리 지정하기 [[specify-cache-directory]]

기본적으로 모든 파일은 `HF_HOME` [환경 변수](../package_reference/environment_variables#hfhome)에서 정의한 캐시 디렉터리에 다운로드됩니다. `--cache-dir`을 사용하여 직접 캐시 위치를 지정할 수 있습니다:

```bash
>>> hf download adept/fuyu-8b --cache-dir ./path/to/cache
...
./path/to/cache/models--adept--fuyu-8b/snapshots/ddcacbcf5fdf9cc59ff01f6be6d6662624d9c745
```

### 토큰 설정하기 [[specify-a-token]]

비공개 또는 접근이 제한된 리포지토리들에 접근하기 위해서는 토큰이 필요합니다. 기본적으로 로컬에 저장된 토큰(`hf auth login`)이 사용됩니다. 직접 인증하고 싶다면 `--token` 옵션을 사용해보세요:

```bash
>>> hf download gpt2 config.json --token=hf_****
/home/wauplin/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/config.json
```

### 조용한 모드 [[quiet-mode]]

`hf download` 명령은 상세한 정보를 출력합니다. 경고 메시지, 다운로드된 파일 정보, 진행률 등이 포함됩니다. 이 모든 출력을 숨기려면 `--quiet` 옵션을 사용하세요. 이 옵션을 사용하면 다운로드된 파일의 경로가 표시되는 마지막 줄만 출력됩니다. 이 기능은 스크립트에서 다른 명령어로 출력을 전달하고자 할 때 유용할 수 있습니다.

```bash
>>> hf download gpt2 --quiet
/home/wauplin/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10
```

## hf upload [[hf-upload]]

`hf upload` 명령어로 Hub에 직접 파일을 업로드할 수 있습니다. 내부적으로는 [업로드](./upload) 가이드에서 설명된 [upload_file()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.upload_file), [upload_folder()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.upload_folder) 헬퍼 함수를 사용합니다. 아래 예시에서 가장 일반적인 사용 사례를 살펴봅니다. 사용 가능한 모든 옵션을 보려면 다음 명령어를 실행해 보세요:

```bash
>>> hf upload --help
```

### 전체 폴더 업로드하기 [[upload-an-entire-folder]]

이 명령어의 기본 사용법은 다음과 같습니다:

```bash
# Usage:  hf upload [repo_id] [local_path] [path_in_repo]
```

현재 디렉터리를 리포지토리의 루트 위치에 업로드하려면 다음 명령어를 사용하세요:

```bash
>>> hf upload my-cool-model . .
https://huggingface.co/Wauplin/my-cool-model/tree/main/
```

> [!TIP]
> 리포지토리가 아직 존재하지 않으면 자동으로 생성됩니다.

또한, 특정 폴더만 업로드하는 것도 가능합니다:

```bash
>>> hf upload my-cool-model ./models .
https://huggingface.co/Wauplin/my-cool-model/tree/main/
```

마지막으로, 리포지토리의 특정 위치에 폴더를 업로드할 수 있습니다:

```bash
>>> hf upload my-cool-model ./path/to/curated/data /data/train
https://huggingface.co/Wauplin/my-cool-model/tree/main/data/train
```

### 파일 한 개 업로드하기 [[upload-a-single-file]]

컴퓨터에 있는 파일을 가리키도록 `local_path`를 설정함으로써 파일 한 개를 업로드할 수 있습니다. 이때, `path_in_repo`는 선택사항이며 로컬 파일 이름을 기본값으로 사용합니다:

```bash
>>> hf upload Wauplin/my-cool-model ./models/model.safetensors
https://huggingface.co/Wauplin/my-cool-model/blob/main/model.safetensors
```

파일 한 개를 특정 디렉터리에 업로드하고 싶다면, `path_in_repo`를 그에 맞게 설정하세요:

```bash
>>> hf upload Wauplin/my-cool-model ./models/model.safetensors /vae/model.safetensors
https://huggingface.co/Wauplin/my-cool-model/blob/main/vae/model.safetensors
```

### 여러 파일 업로드하기 [[upload-multiple-files]]

전체 폴더를 업로드하지 않고 한 번에 여러 파일을 업로드하려면 `--include`와 `--exclude` 옵션을 사용해 보세요. 새 파일을 업로드하면서 리포지토리에서 로컬에 없는 파일을 삭제하는 `--delete` 옵션과 함께 사용할 수도 있습니다. 아래 예시는 `/logs` 안의 파일을 제외한 모든 파일을 업로드하고, 로컬에서 삭제된 원격 파일을 지워 로컬 Space를 동기화하는 방법을 보여줍니다:

```bash
# Sync local Space with Hub (upload new files except from logs/, delete removed files)
>>> hf upload Wauplin/space-example --repo-type=space --exclude="/logs/*" --delete="*" --commit-message="Sync local Space with Hub"
...
```

### 데이터 세트 또는 Space에 업로드하기 [[upload-to-a-dataset-or-space]]

데이터 세트나 Space에 업로드하려면 `--repo-type` 옵션을 사용하세요:

```bash
>>> hf upload Wauplin/my-cool-dataset ./data /train --repo-type=dataset
...
```

### 조직에 업로드하기 [[upload-to-an-organization]]

개인 리포지토리 대신 조직이 소유한 리포지토리에 파일을 업로드하려면 조직 이름이 포함된 `repo_id`를 입력해야 합니다:

```bash
>>> hf upload MyCoolOrganization/my-cool-model . .
https://huggingface.co/MyCoolOrganization/my-cool-model/tree/main/
```

### 특정 개정에 업로드하기 [[upload-to-a-specific-revision]]

기본적으로 파일은 `main` 브랜치에 업로드됩니다. 다른 브랜치나 참조에 파일을 업로드하려면 `--revision` 옵션을 사용하세요:

```bash
# Upload files to a PR
>>> hf upload bigcode/the-stack . . --repo-type dataset --revision refs/pr/104
...
```

**참고:** `revision`이 존재하지 않고 `--create-pr` 옵션이 설정되지 않은 경우, `main` 브랜치에서 자동으로 새 브랜치가 생성됩니다.

### 업로드 및 PR 생성하기 [[upload-and-create-a-pr]]

리포지토리에 푸시할 권한이 없다면, PR을 생성하여 작성자들에게 변경하고자 하는 내용을 알려야 합니다. 이를 위해서 `--create-pr` 옵션을 사용할 수 있습니다:

```bash
# Create a PR and upload the files to it
>>> hf upload bigcode/the-stack . . --repo-type dataset --create-pr
https://huggingface.co/datasets/bigcode/the-stack/blob/refs%2Fpr%2F104/
```

### 정기적으로 업로드하기 [[upload-at-regular-intervals]]

리포지토리에 정기적으로 업데이트하고 싶을 때, `--every` 옵션을 사용할 수 있습니다. 예를 들어, 모델을 훈련하는 중에 로그 폴더를 10분마다 업로드하고 싶다면 다음과 같이 사용할 수 있습니다:

```bash
# Upload new logs every 10 minutes
hf upload training-model logs/ --every=10
```

### 커밋 메시지 지정하기 [[specify-a-commit-message]]

`--commit-message`와 `--commit-description`을 사용하여 기본 메시지 대신 사용자 지정 메시지와 설명을 커밋에 설정하세요:

```bash
>>> hf upload Wauplin/my-cool-model ./models . --commit-message="Epoch 34/50" --commit-description="Val accuracy: 68%. Check tensorboard for more details."
...
https://huggingface.co/Wauplin/my-cool-model/tree/main
```

### 토큰 지정하기 [[specify-a-token]]

파일을 업로드하려면 토큰이 필요합니다. 기본적으로 로컬에 저장된 토큰(`hf auth login`)이 사용됩니다. 직접 인증하고 싶다면 `--token` 옵션을 사용해보세요:

```bash
>>> hf upload Wauplin/my-cool-model ./models . --token=hf_****
...
https://huggingface.co/Wauplin/my-cool-model/tree/main
```

### 조용한 모드 [[quiet-mode]]

기본적으로 `hf upload` 명령은 상세한 정보를 출력합니다. 경고 메시지, 업로드된 파일 정보, 진행률 등이 포함됩니다. 이 모든 출력을 숨기려면 `--quiet` 옵션을 사용하세요. 이 옵션을 사용하면 업로드된 파일의 URL이 표시되는 마지막 줄만 출력됩니다. 이 기능은 스크립트에서 다른 명령어로 출력을 전달하고자 할 때 유용할 수 있습니다.

```bash
>>> hf upload Wauplin/my-cool-model ./models . --quiet
https://huggingface.co/Wauplin/my-cool-model/tree/main
```

## hf cache ls [[hf-cache-ls]]

로컬 캐시에 어떤 리포지토리나 수정 버전이 저장되어 있는지 확인하려면 `hf cache ls`를 사용하세요. 기본 출력은 리포지토리 단위 요약입니다.

```bash
>>> hf cache ls
ID                                   SIZE   LAST_ACCESSED LAST_MODIFIED REFS
------------------------------------ ------- ------------- ------------- -------------------
dataset/glue                         116.3K 4 days ago     4 days ago     2.4.0 main 1.17.0
dataset/google/fleurs                 64.9M 1 week ago     1 week ago     main refs/pr/1
model/Jean-Baptiste/camembert-ner    441.0M 2 weeks ago    16 hours ago   main
model/bert-base-cased                  1.9G 1 week ago     2 years ago
model/t5-base                          10.1K 3 months ago   3 months ago   main
model/t5-small                        970.7M 3 days ago     3 days ago     main refs/pr/1

Found 6 repo(s) for a total of 12 revision(s) and 3.4G on disk.
```

`--revisions` 옵션과 `--filter` 표현식을 조합하면 특정 스냅샷만 추려 볼 수 있습니다.

```bash
>>> hf cache ls --revisions --filter "size>1GB" --filter "accessed>30d"
ID                                   REVISION            SIZE   LAST_MODIFIED REFS
------------------------------------ ------------------ ------- ------------- -------------------
model/bert-base-cased                6d1d7a1a2a6cf4c2    1.9G  2 years ago
model/t5-small                       1c610f6b3f5e7d8a    1.1G  3 months ago  main

Found 2 repo(s) for a total of 2 revision(s) and 3.0G on disk.
```

`--format json`, `--format csv`, `--quiet`, `--cache-dir` 등 다양한 옵션으로 출력 형식을 조정할 수 있습니다. 자세한 내용은 [캐시 관리](./manage-cache#scan-your-cache) 가이드를 참고하세요.

`hf cache ls --quiet`로 추린 식별자를 `hf cache rm`에 바로 파이프하면 오래된 항목을 한 번에 정리할 수 있습니다.

```bash
>>> hf cache rm $(hf cache ls --filter "accessed>1y" -q) -y
About to delete 2 repo(s) totalling 5.31G.
  - model/meta-llama/Llama-3.2-1B-Instruct (entire repo)
  - model/hexgrad/Kokoro-82M (entire repo)
Delete repo: ~/.cache/huggingface/hub/models--meta-llama--Llama-3.2-1B-Instruct
Delete repo: ~/.cache/huggingface/hub/models--hexgrad--Kokoro-82M
Cache deletion done. Saved 5.31G.
Deleted 2 repo(s) and 2 revision(s); freed 5.31G.
```

## hf cache rm [[hf-cache-rm]]

캐시에서 특정 리포지토리나 수정 버전을 삭제하려면 `hf cache rm`을 사용합니다. 리포지토리 식별자나 수정 버전 해시를 하나 이상 전달하면 됩니다. `--dry-run`으로 미리보기, `--yes`로 확인창 건너뛰기, `--cache-dir`로 다른 경로 지정이 가능합니다.

## hf cache prune [[hf-cache-prune]]

참조되지 않는(detached) 수정 버전만 한꺼번에 제거하려면 `hf cache prune`을 실행하세요. `--dry-run`, `--yes`, `--cache-dir` 옵션 역시 동일하게 사용할 수 있습니다.

## hf env [[hf-env]]

`hf env` 명령어는 사용자의 컴퓨터 설정에 대한 상세한 정보를 보여줍니다. 이는 [GitHub](https://github.com/huggingface/huggingface_hub)에서 문제를 제출할 때, 관리자가 문제를 파악하고 해결하는 데 도움이 됩니다.

```bash
>>> hf env

Copy-and-paste the text below in your GitHub issue.

- huggingface_hub version: 0.19.0.dev0
- Platform: Linux-6.2.0-36-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: /home/wauplin/.cache/huggingface/token
- Has saved token ?: True
- Who am I ?: Wauplin
- Configured git credential helpers: store
- FastAI: N/A
- Torch: 1.12.1
- Jinja2: 3.1.2
- Graphviz: 0.20.1
- Pydot: 1.4.2
- Pillow: 9.2.0
- hf_transfer: 0.1.3
- gradio: 4.0.2
- tensorboard: 2.6
- numpy: 1.23.2
- pydantic: 2.4.2
- aiohttp: 3.8.4
- ENDPOINT: https://huggingface.co
- HF_HUB_CACHE: /home/wauplin/.cache/huggingface/hub
- HF_ASSETS_CACHE: /home/wauplin/.cache/huggingface/assets
- HF_TOKEN_PATH: /home/wauplin/.cache/huggingface/token
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
- HF_HUB_ETAG_TIMEOUT: 10
- HF_HUB_DOWNLOAD_TIMEOUT: 10
```


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/guides/cli.md" />

### Hub와 어떤 머신 러닝 프레임워크든 통합[[integrate-any-ml-framework-with-the-hub]]
https://huggingface.co/docs/huggingface_hub/main/ko/guides/integrations.md

# Hub와 어떤 머신 러닝 프레임워크든 통합[[integrate-any-ml-framework-with-the-hub]]
Hugging Face Hub는 커뮤니티와 모델을 공유하는 일을 쉽게 만들어 주며, 오픈소스 생태계의 [수십 가지 라이브러리](https://huggingface.co/docs/hub/models-libraries)를 지원합니다. 저희는 협업적인 머신 러닝을 발전시키기 위해 이 목록을 꾸준히 확대하고자 노력하고 있습니다. `huggingface_hub` 라이브러리는 어떤 Python 스크립트에서든 파일을 쉽게 업로드하고 가져올 수 있도록 하는 핵심적인 역할을 합니다.

라이브러리를 Hub와 통합하는 네 가지 주요 방법이 있습니다:

1. **Hub에 업로드하기**: 모델을 Hub에 업로드하는 메소드를 구현합니다. 이에는 모델 가중치뿐만 아니라 [모델 카드](https://huggingface.co/docs/huggingface_hub/how-to-model-cards) 및 모델 실행에 필요한 다른 관련 정보나 데이터(예: 훈련 로그)가 포함됩니다. 이 메소드는 일반적으로 `push_to_hub()`라고 합니다.
2. **Hub에서 다운로드하기**: Hub에서 모델을 가져오는 메소드를 구현합니다. 이 메소드는 모델 구성과 가중치를 다운로드하고 모델을 로드해야 합니다. 이 메소드는 일반적으로 `from_pretrained()` 또는 `load_from_hub()`라고 합니다.
3. **추론 API**: 라이브러리에서 지원하는 모델에 대해 무료로 추론을 실행할 수 있도록 당사 서버를 사용합니다.
4. **위젯**: Hub의 모델 랜딩 페이지에 위젯을 표시합니다. 이를 통해 사용자들은 브라우저에서 빠르게 모델을 시도할 수 있습니다.

이 가이드에서는 앞의 두 가지 주제에 중점을 둡니다. 라이브러리를 통합하는 데 사용할 수 있는 두 가지 주요 방법을 소개하고 각각의 장단점을 설명하며, 어느 쪽을 선택할지 판단하는 데 도움이 되도록 끝부분에 내용을 요약해 두었습니다. 이는 어디까지나 가이드이므로 상황에 맞게 자유롭게 조정하시기 바랍니다.

추론 및 위젯에 관심이 있는 경우 [이 가이드](https://huggingface.co/docs/hub/models-adding-libraries#set-up-the-inference-api)를 참조하세요. 어느 경우든 라이브러리를 Hub와 통합하고 [문서](https://huggingface.co/docs/hub/models-libraries)의 목록에 게재하고 싶다면 언제든지 연락해 주세요.

## 유연한 접근 방식: 도우미(helper)[[a-flexible-approach-helpers]]

라이브러리를 Hub에 통합하는 첫 번째 접근 방법은 실제로 `push_to_hub` 및 `from_pretrained` 메소드를 직접 구현하는 것입니다. 이를 통해 업로드/다운로드할 파일 및 입력을 처리하는 방법에 대한 완전한 유연성을 제공받을 수 있습니다. 이를 위해 [파일 업로드](./upload) 및 [파일 다운로드](./download) 가이드를 참조하여 자세히 알아볼 수 있습니다. 예를 들어 FastAI 통합이 구현된 방법을 보면 됩니다 ([push_to_hub_fastai()](/docs/huggingface_hub/main/ko/package_reference/mixins#huggingface_hub.push_to_hub_fastai) 및 [from_pretrained_fastai()](/docs/huggingface_hub/main/ko/package_reference/mixins#huggingface_hub.from_pretrained_fastai)를 참조).

라이브러리마다 구현 방식은 다를 수 있지만, 워크플로우는 일반적으로 비슷합니다.

### from_pretrained[[frompretrained]]

일반적으로 `from_pretrained` 메소드는 다음과 같은 형태를 가집니다:

```python
def from_pretrained(model_id: str) -> MyModelClass:
    # Hub로부터 모델을 다운로드
    cached_model = hf_hub_download(
        repo_id=model_id,
        filename="model.pkl",
        library_name="fastai",
        library_version=get_fastai_version(),
    )

    # 모델 가져오기
    return load_model(cached_model)
```

### push_to_hub[[pushtohub]]

`push_to_hub` 메소드는 리포지토리 생성, 모델 카드 생성, 가중치 저장까지 처리해야 하므로 조금 더 복잡한 경우가 많습니다. 흔히 이 모든 파일을 임시 폴더에 저장한 뒤 업로드하고, 작업이 끝나면 임시 폴더를 삭제하는 방식이 사용됩니다.

```python
from pathlib import Path
from tempfile import TemporaryDirectory

from huggingface_hub import HfApi

def push_to_hub(model: MyModelClass, repo_name: str) -> None:
   api = HfApi()

   # 해당 리포지토리가 아직 없다면 리포지토리를 생성하고 관련된 리포지토리 ID를 가져옵니다.
   repo_id = api.create_repo(repo_name, exist_ok=True).repo_id

   # 모든 파일을 임시 디렉토리에 저장하고 이를 단일 커밋으로 푸시합니다.
   with TemporaryDirectory() as tmpdir:
      tmpdir = Path(tmpdir)

      # 가중치 저장
      save_model(model, tmpdir / "model.safetensors")

      # model card 생성
      card = generate_model_card(model)
      (tmpdir / "README.md").write_text(card)

      # 로그 저장
      # 설정 저장
      # 평가 지표를 저장
      # ...

      # Hub에 푸시
      return api.upload_folder(repo_id=repo_id, folder_path=tmpdir)
```


물론 이는 단순한 예시에 불과합니다. 더 복잡한 조작(원격 파일 삭제, 가중치를 실시간으로 업로드, 로컬로 가중치를 유지 등)에 관심이 있다면 [파일 업로드](./upload) 가이드를 참조해 주세요.

### 제한 사항[[limitations]]

이러한 방식은 유연성을 가지고 있지만, 유지보수 측면에서 일부 단점을 가지고 있습니다. Hugging Face 사용자들은 `huggingface_hub`와 함께 작업할 때 추가 기능에 익숙합니다. 예를 들어, Hub에서 파일을 로드할 때 다음과 같은 매개변수를 제공하는 것이 일반적입니다:

- `token`: 개인 리포지토리에서 다운로드하기 위한 토큰
- `revision`: 특정 브랜치에서 다운로드하기 위한 리비전
- `cache_dir`: 특정 디렉터리에 파일을 캐시하기 위한 디렉터리
- `force_download`/`local_files_only`: 캐시를 재사용할 것인지 여부를 결정하는 매개변수
- `proxies`: HTTP 세션 구성

모델을 푸시할 때는 유사한 매개변수가 지원됩니다:
- `commit_message`: 사용자 정의 커밋 메시지
- `private`: 개인 리포지토리를 만들어야 할 경우
- `create_pr`: `main`에 푸시하는 대신 PR을 만드는 경우
- `branch`: `main` 브랜치 대신 브랜치에 푸시하는 경우
- `allow_patterns/ignore_patterns`: 업로드할 파일을 필터링하는 매개변수
- `token`
- ...

이러한 매개변수는 위에서 본 구현에 추가하여 `huggingface_hub` 메소드로 전달할 수 있습니다. 그러나 매개변수가 변경되거나 새로운 기능이 추가되는 경우에는 패키지를 업데이트해야 합니다. 이러한 매개변수를 지원하는 것은 유지 관리할 문서가 더 많아진다는 것을 의미합니다. 이러한 제한 사항을 완화할 수 있는 방법을 보려면 다음 섹션인 **클래스 상속**으로 이동해 보겠습니다.
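
위에 나열된 매개변수를 헬퍼 함수로 그대로 전달하는 방식은 다음과 같이 스케치할 수 있습니다. `download_fn` 주입 방식과 `model.pkl` 파일명은 설명을 위한 가정이며, 실제 코드에서는 그 자리에 `hf_hub_download`가 들어갑니다.

```python
def from_pretrained(model_id, download_fn, *, token=None, revision=None,
                    cache_dir=None, force_download=False, local_files_only=False):
    """huggingface_hub 사용자들이 기대하는 공통 매개변수를
    다운로드 헬퍼로 그대로 전달하는 래퍼의 스케치입니다."""
    return download_fn(
        repo_id=model_id,
        filename="model.pkl",  # 설명용 가정: 라이브러리가 저장하는 파일명
        token=token,
        revision=revision,
        cache_dir=cache_dir,
        force_download=force_download,
        local_files_only=local_files_only,
    )
```

이렇게 하면 라이브러리의 공개 API 시그니처에 매개변수를 한 번만 선언하고, 나머지는 `huggingface_hub` 쪽에 그대로 위임할 수 있습니다.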

## 더욱 복잡한 접근법: 클래스 상속[[a-more-complex-approach-class-inheritance]]

위에서 보았듯이 Hub와 통합하기 위해 라이브러리에 포함해야 할 주요 메소드는 파일을 업로드 (`push_to_hub`) 와 파일 다운로드 (`from_pretrained`)입니다. 이러한 메소드를 직접 구현할 수 있지만, 이에는 몇 가지 주의할 점이 있습니다. 이를 해결하기 위해 `huggingface_hub`은 클래스 상속을 사용하는 도구를 제공합니다. 이 도구가 어떻게 작동하는지 살펴보겠습니다!

많은 경우 라이브러리는 이미 Python 클래스로 모델을 구현하고 있습니다. 이 클래스에는 모델의 속성과 함께 로드, 실행, 훈련, 평가를 위한 메소드가 포함되어 있습니다. 여기서의 접근 방식은 믹스인을 사용해 이 클래스를 확장하여 업로드 및 다운로드 기능을 추가하는 것입니다. [믹스인(Mixin)](https://stackoverflow.com/a/547714)은 다중 상속을 통해 기존 클래스에 특정 기능을 더하기 위해 설계된 클래스입니다. `huggingface_hub`은 자체 믹스인인 [ModelHubMixin](/docs/huggingface_hub/main/ko/package_reference/mixins#huggingface_hub.ModelHubMixin)을 제공합니다. 여기서 핵심은 그 동작과 이를 사용자 정의하는 방법을 이해하는 것입니다.

[ModelHubMixin](/docs/huggingface_hub/main/ko/package_reference/mixins#huggingface_hub.ModelHubMixin) 클래스는 세 개의 *공개* 메소드(`push_to_hub`, `save_pretrained`, `from_pretrained`)를 구현합니다. 이 메소드들은 사용자가 라이브러리를 사용하여 모델을 로드/저장할 때 호출하는 메소드입니다. 또한 [ModelHubMixin](/docs/huggingface_hub/main/ko/package_reference/mixins#huggingface_hub.ModelHubMixin)은 두 개의 *비공개* 메소드(`_save_pretrained` 및 `_from_pretrained`)를 정의합니다. 라이브러리를 통합하려면 이 메소드들을 구현해야 합니다:

1. 모델 클래스를 [ModelHubMixin](/docs/huggingface_hub/main/ko/package_reference/mixins#huggingface_hub.ModelHubMixin)에서 상속합니다.
2. 비공개 메소드를 구현합니다:
   - [_save_pretrained()](/docs/huggingface_hub/main/ko/package_reference/mixins#huggingface_hub.ModelHubMixin._save_pretrained): 디렉터리 경로를 입력으로 받아 모델을 해당 디렉터리에 저장하는 메소드입니다. 이 메소드에는 모델 카드, 모델 가중치, 구성 파일, 훈련 로그 및 그림 등 해당 모델에 대한 모든 관련 정보를 저장하기 위한 로직을 작성해야 합니다. [모델 카드](https://huggingface.co/docs/hub/model-cards)는 모델을 설명하는 데 특히 중요합니다. 더 자세한 내용은 [구현 가이드](./model-cards)를 확인하세요.
   - [_from_pretrained()](/docs/huggingface_hub/main/ko/package_reference/mixins#huggingface_hub.ModelHubMixin._from_pretrained): `model_id`를 입력으로 받아 인스턴스화된 모델을 반환하는 **클래스 메소드**입니다. 이 메소드는 관련 파일을 다운로드하고 가져와야 합니다.
3. 완료했습니다!

[ModelHubMixin](/docs/huggingface_hub/main/ko/package_reference/mixins#huggingface_hub.ModelHubMixin)의 장점은 파일의 직렬화/로드에만 신경을 쓰면 되기 때문에 즉시 사용할 수 있다는 것입니다. 리포지토리 생성, 커밋, PR 또는 리비전과 같은 사항에 대해 걱정할 필요가 없습니다. [ModelHubMixin](/docs/huggingface_hub/main/ko/package_reference/mixins#huggingface_hub.ModelHubMixin)은 또한 공개 메소드가 문서화되고 타입에 주석이 달려있는지를 확인하며, Hub 모델의 다운로드 수를 볼 수 있도록 합니다. 이 모든 것은 [ModelHubMixin](/docs/huggingface_hub/main/ko/package_reference/mixins#huggingface_hub.ModelHubMixin)에 의해 처리되며 사용자에게 제공됩니다. 
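
공개 메소드(`save_pretrained`/`from_pretrained`)와 라이브러리가 구현하는 비공개 메소드(`_save_pretrained`/`_from_pretrained`)의 분리는 네트워크 없이도 장난감 예제로 확인해 볼 수 있습니다. 아래의 `ToyHubMixin`, `MyToyModel` 등 모든 이름은 설명용 가정이며, 실제 `ModelHubMixin`의 API를 그대로 재현한 것은 아닙니다.

```python
import json
from pathlib import Path

class ToyHubMixin:
    """ModelHubMixin의 공개/비공개 메소드 분리를 흉내 낸 가상의 믹스인."""

    def save_pretrained(self, save_directory):
        # 공개 메소드: 디렉터리 준비 같은 공통 작업을 처리한 뒤
        # 직렬화는 라이브러리가 구현한 비공개 메소드에 위임합니다.
        path = Path(save_directory)
        path.mkdir(parents=True, exist_ok=True)
        self._save_pretrained(path)

    @classmethod
    def from_pretrained(cls, model_id):
        # 실제 믹스인이라면 여기서 다운로드/캐시를 처리합니다.
        return cls._from_pretrained(model_id=model_id)

class MyToyModel(ToyHubMixin):
    def __init__(self, weights):
        self.weights = weights

    def _save_pretrained(self, save_directory: Path):
        # 라이브러리가 구현하는 부분: 파일 직렬화만 담당
        (save_directory / "model.json").write_text(json.dumps(self.weights))

    @classmethod
    def _from_pretrained(cls, *, model_id: str):
        weights = json.loads((Path(model_id) / "model.json").read_text())
        return cls(weights)
```

통합하는 라이브러리는 직렬화/역직렬화(`_save_pretrained`/`_from_pretrained`)만 구현하면 되고, 나머지 공통 로직은 믹스인이 담당한다는 계약을 보여줍니다.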

### 자세한 예시: PyTorch[[a-concrete-example-pytorch]]

위에서 언급한 내용의 좋은 예시는 PyTorch 프레임워크를 위해 구현된 [PyTorchModelHubMixin](/docs/huggingface_hub/main/ko/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin)입니다. 바로 사용할 수 있는 믹스인입니다.

#### 어떻게 사용하나요?[[how-to-use-it]]

다음은 Hub에서 PyTorch 모델을 로드/저장하는 방법입니다:

```python
>>> import torch
>>> import torch.nn as nn
>>> from huggingface_hub import PyTorchModelHubMixin


# PyTorch 모델을 여러분이 흔히 사용하는 방식과 완전히 동일하게 정의하세요.
>>> class MyModel(
...         nn.Module,
...         PyTorchModelHubMixin, # 다중 상속
...         library_name="keras-nlp",
...         tags=["keras"],
...         repo_url="https://github.com/keras-team/keras-nlp",
...         docs_url="https://keras.io/keras_nlp/",
...         # ^ 모델 카드를 생성하는 데 선택적인 메타데이터입니다.
...     ):
...     def __init__(self, hidden_size: int = 512, vocab_size: int = 30000, output_size: int = 4):
...         super().__init__()
...         self.param = nn.Parameter(torch.rand(hidden_size, vocab_size))
...         self.linear = nn.Linear(output_size, vocab_size)

...     def forward(self, x):
...         return self.linear(x + self.param)

# 1. 모델 생성
>>> model = MyModel(hidden_size=128)

# 설정은 입력 및 기본값을 기반으로 자동으로 생성됩니다.
>>> model.param.shape[0]
128

# 2. (선택사항) 모델을 로컬 디렉터리에 저장합니다.
>>> model.save_pretrained("path/to/my-awesome-model")

# 3. 모델 가중치를 Hub에 푸시합니다.
>>> model.push_to_hub("my-awesome-model")

# 4. Hub로부터 모델을 초기화합니다. => 이때 설정은 보존됩니다.
>>> model = MyModel.from_pretrained("username/my-awesome-model")
>>> model.param.shape[0]
128

# 모델 카드가 올바르게 작성되었습니다.
>>> from huggingface_hub import ModelCard
>>> card = ModelCard.load("username/my-awesome-model")
>>> card.data.tags
["keras", "pytorch_model_hub_mixin", "model_hub_mixin"]
>>> card.data.library_name
"keras-nlp"
```

#### 구현[[implementation]]

실제 구현은 매우 간단합니다. 전체 구현은 [여기](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hub_mixin.py)에서 찾을 수 있습니다.

1. 클래스를 `ModelHubMixin`으로부터 상속하세요:

```python
from huggingface_hub import ModelHubMixin

class PyTorchModelHubMixin(ModelHubMixin):
   (...)
```

2. `_save_pretrained` 메소드를 구현하세요:

```py
from huggingface_hub import ModelHubMixin

class PyTorchModelHubMixin(ModelHubMixin):
   (...)

    def _save_pretrained(self, save_directory: Path) -> None:
        """PyTorch 모델의 가중치를 로컬 디렉터리에 저장합니다."""
        save_model_as_safetensor(self.module, str(save_directory / SAFETENSORS_SINGLE_FILE))

```

3. `_from_pretrained` 메소드를 구현하세요:

```python
class PyTorchModelHubMixin(ModelHubMixin):
    (...)

    @classmethod # 반드시 클래스 메소드여야 합니다!
    def _from_pretrained(
        cls,
        *,
        model_id: str,
        revision: str,
        cache_dir: str,
        force_download: bool,
        proxies: Optional[dict],
        local_files_only: bool,
        token: Union[str, bool, None],
        map_location: str = "cpu", # 추가 인자
        strict: bool = False, # 추가 인자
        **model_kwargs,
    ):
        """PyTorch의 사전 학습된 가중치와 모델을 반환합니다."""
        model = cls(**model_kwargs)
        if os.path.isdir(model_id):
            print("Loading weights from local directory")
            model_file = os.path.join(model_id, SAFETENSORS_SINGLE_FILE)
            return cls._load_as_safetensor(model, model_file, map_location, strict)

        model_file = hf_hub_download(
            repo_id=model_id,
            filename=SAFETENSORS_SINGLE_FILE,
            revision=revision,
            cache_dir=cache_dir,
            force_download=force_download,
            token=token,
            local_files_only=local_files_only,
        )
        return cls._load_as_safetensor(model, model_file, map_location, strict)
```

이게 전부입니다! 이제 라이브러리를 통해 Hub로부터 파일을 업로드하고 다운로드할 수 있습니다.

### 고급 사용법[[advanced-usage]]

위의 섹션에서는 [ModelHubMixin](/docs/huggingface_hub/main/ko/package_reference/mixins#huggingface_hub.ModelHubMixin)이 어떻게 작동하는지 간단히 살펴보았습니다. 이번 섹션에서는 Hugging Face Hub와 라이브러리 통합을 개선하기 위한 더 고급 기능 중 일부를 살펴보겠습니다.

#### 모델 카드[[model-card]]

[ModelHubMixin](/docs/huggingface_hub/main/ko/package_reference/mixins#huggingface_hub.ModelHubMixin)은 모델 카드를 자동으로 생성합니다. 모델 카드는 모델과 함께 제공되는 중요한 정보를 담은 파일로, 추가 메타데이터가 포함된 간단한 Markdown 파일입니다. 모델 카드는 발견 가능성, 재현성, 공유를 위해 중요합니다! 더 자세한 내용은 [모델 카드 가이드](https://huggingface.co/docs/hub/model-cards)를 확인하세요.

모델 카드를 반자동으로 생성하는 것은 라이브러리로 푸시된 모든 모델이 `library_name`, `tags`, `license`, `pipeline_tag` 등과 같은 공통 메타데이터를 공유하도록 하는 좋은 방법입니다. 이를 통해 모든 모델이 Hub에서 쉽게 검색 가능하게 되고, Hub에 접속한 사용자에게 일부 리소스 링크를 제공합니다. [ModelHubMixin](/docs/huggingface_hub/main/ko/package_reference/mixins#huggingface_hub.ModelHubMixin)을 상속할 때 메타데이터를 직접 정의할 수 있습니다:

```py
class UniDepthV1(
   nn.Module,
   PyTorchModelHubMixin,
   library_name="unidepth",
   repo_url="https://github.com/lpiccinelli-eth/UniDepth",
   docs_url=...,
   pipeline_tag="depth-estimation",
   license="cc-by-nc-4.0",
   tags=["monocular-metric-depth-estimation", "arxiv:1234.56789"]
):
   ...
```

기본적으로는 제공된 정보로 일반적인 모델 카드가 생성됩니다(예: [pyp1/VoiceCraft_giga830M](https://huggingface.co/pyp1/VoiceCraft_giga830M)). 그러나 사용자 정의 모델 카드 템플릿을 정의할 수도 있습니다!

다음 예에서는 `VoiceCraft` 클래스로 푸시되는 모든 모델에 인용 부분과 라이선스 세부 정보가 자동으로 포함됩니다. 모델 카드 템플릿을 정의하는 방법에 대한 자세한 내용은 [모델 카드 가이드](./model-cards)를 참조하세요.

```py
MODEL_CARD_TEMPLATE = """
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{{ card_data }}
---

This is a VoiceCraft model. For more details, please check out the official Github repo: https://github.com/jasonppy/VoiceCraft. This model is shared under an Attribution-NonCommercial-ShareAlike 4.0 International license.

## Citation

@article{peng2024voicecraft,
  author    = {Peng, Puyuan and Huang, Po-Yao and Li, Daniel and Mohamed, Abdelrahman and Harwath, David},
  title     = {VoiceCraft: Zero-Shot Speech Editing and Text-to-Speech in the Wild},
  journal   = {arXiv},
  year      = {2024},
}
"""

class VoiceCraft(
   nn.Module,
   PyTorchModelHubMixin,
   library_name="voicecraft",
   model_card_template=MODEL_CARD_TEMPLATE,
   ...
):
   ...
```

마지막으로, 모델 카드 생성 프로세스를 동적 값으로 확장하려면 `generate_model_card()` 메소드를 재정의할 수 있습니다:

```py
from huggingface_hub import ModelCard, PyTorchModelHubMixin

class UniDepthV1(nn.Module, PyTorchModelHubMixin, ...):
   (...)

   def generate_model_card(self, *args, **kwargs) -> ModelCard:
      card = super().generate_model_card(*args, **kwargs)
      card.data.metrics = ...  # 메타데이터에 메트릭 추가
      card.text += ... # 모델 카드에 섹션 추가
      return card
```

#### 구성[[config]]

[ModelHubMixin](/docs/huggingface_hub/main/ko/package_reference/mixins#huggingface_hub.ModelHubMixin)은 모델 구성을 처리합니다. 모델을 인스턴스화할 때 입력 값들을 자동으로 확인하고 이를 `config.json` 파일에 직렬화합니다. 이렇게 함으로써 두 가지 이점이 제공됩니다:

1. 사용자는 정확히 동일한 매개변수로 모델을 다시 가져올 수 있습니다.
2. `config.json` 파일이 자동으로 생성되면 Hub에서 분석이 가능해집니다(즉, "다운로드" 횟수가 기록됩니다).

하지만 이것이 실제로 어떻게 작동하는 걸까요? 사용자 관점에서 프로세스가 가능한 매끄럽도록 하기 위해 여러 규칙이 존재합니다:
- 만약 `__init__` 메소드가 `config` 입력을 기대한다면, 이는 자동으로 `config.json`으로 저장됩니다.
- 만약 `config` 입력 매개변수에 데이터 클래스 유형(예: `config: Optional[MyConfigClass] = None`)의 어노테이션이 있다면, config 값은 올바르게 역직렬화됩니다.
- 초기화할 때 전달된 모든 값들도 구성 파일에 저장됩니다. 이는 `config` 입력을 기대하지 않더라도 이점을 얻을 수 있다는 것을 의미합니다.

예시:

```py
class MyModel(ModelHubMixin):
   def __init__(self, value: str, size: int = 3):
      self.value = value
      self.size = size

   (...) # _save_pretrained / _from_pretrained 구현

model = MyModel(value="my_value")
model.save_pretrained(...)

# config.json 파일에는 전달된 값과 기본 값이 모두 포함됩니다.
{"value": "my_value", "size": 3}
```

값이 JSON으로 직렬화될 수 없는 경우, 구성 파일을 저장할 때 해당 값은 기본적으로 무시됩니다. 그러나 라이브러리가 이미 직렬화할 수 없는 사용자 정의 객체를 입력으로 받고 있고, 그 타입을 바꾸고 싶지 않은 경우도 있습니다. 그럴 때는 [ModelHubMixin](/docs/huggingface_hub/main/ko/package_reference/mixins#huggingface_hub.ModelHubMixin)을 상속할 때 해당 타입에 대한 사용자 지정 인코더/디코더를 전달하면 됩니다. 약간의 추가 작업이 필요하지만, 내부 로직을 변경하지 않고도 라이브러리를 Hub에 통합할 수 있습니다.

여기서 `argparse.Namespace` 구성을 입력으로 받는 클래스의 구체적인 예가 있습니다:

```py
class VoiceCraft(nn.Module):
    def __init__(self, args):
      self.pattern = args.pattern
      self.hidden_size = args.hidden_size
      ...
```

한 가지 해결책은 `__init__` 시그니처를 `def __init__(self, pattern: str, hidden_size: int)`로 업데이트하고 클래스를 인스턴스화하는 모든 스니펫을 업데이트하는 것입니다. 이 방법은 유효한 방법이지만, 라이브러리를 사용하는 하위 응용 프로그램을 망가뜨릴 수 있습니다.

다른 해결책은 `argparse.Namespace`를 사전으로 변환하는 간단한 인코더/디코더를 제공하는 것입니다.

```py
from argparse import Namespace

class VoiceCraft(
   nn.Module,
   PyTorchModelHubMixin,  # 믹스인을 상속합니다.
   coders={
      Namespace: (
         lambda x: vars(x),  # Encoder: `Namespace`를 유효한 JSON 형태로 변환하는 방법은 무엇인가요?
         lambda data: Namespace(**data),  # Decoder: 딕셔너리에서 Namespace를 재구성하는 방법은 무엇인가요?
      )
   }
):
    def __init__(self, args: Namespace): # `args`에 타입 주석을 답니다.
      self.pattern = args.pattern
      self.hidden_size = args.hidden_size
      ...
```

위의 코드 스니펫에서는 클래스의 내부 로직과 `__init__` 시그니처가 변경되지 않았습니다. 이는 기존의 모든 코드 스니펫이 여전히 작동한다는 것을 의미합니다. 이를 달성하기 위해 다음 과정을 수행하면 됩니다:
1. 믹스인(`PyTorchModelHubMixin`)으로부터 상속합니다.
2. 상속 시 `coders` 매개변수를 전달합니다. 이는 키가 처리하려는 사용자 지정 유형이고, 값은 튜플 `(인코더, 디코더)`입니다.
   - 인코더는 지정된 유형의 객체를 입력으로 받아서 jsonable 값으로 반환합니다. 이는 `save_pretrained`로 모델을 저장할 때 사용됩니다.
   - 디코더는 원시 데이터(일반적으로 딕셔너리 타입)를 입력으로 받아서 초기 객체를 재구성합니다. 이는 `from_pretrained`로 모델을 로드할 때 사용됩니다.
   - `__init__` 시그니처에 유형 주석을 추가합니다. 이는 믹스인에게 클래스가 기대하는 유형과, 따라서 어떤 디코더를 사용해야 하는지를 알려주는 데 중요합니다.

위의 예제는 간단한 예시이기 때문에 인코더/디코더 함수는 견고하지 않습니다. 구체적인 구현을 위해서는 코너 케이스를 적절하게 처리해야 할 것입니다.
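실제 믹스인 동작과는 별개로, 인코더/디코더 쌍이 지켜야 할 왕복(round-trip) 성질은 표준 라이브러리만으로 확인해 볼 수 있습니다. 아래는 위 예제의 `Namespace` 인코더/디코더를 단독으로 실행해 보는 스케치입니다.

```python
import json
from argparse import Namespace

# coders 매개변수에 전달되는 형태의 (인코더, 디코더) 쌍 예시
coders = {
    Namespace: (
        lambda x: vars(x),               # Encoder: Namespace -> dict
        lambda data: Namespace(**data),  # Decoder: dict -> Namespace
    )
}

encoder, decoder = coders[Namespace]

args = Namespace(pattern="delay", hidden_size=16)
serialized = json.dumps(encoder(args))      # config.json에 저장될 내용
restored = decoder(json.loads(serialized))  # from_pretrained 시 복원되는 값

print(restored == args)  # True: 단순한 경우 왕복 변환이 값을 보존합니다
```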

## 빠른 비교[[quick-comparison]]

두 가지 접근 방법에 대한 장단점을 간단히 정리해보겠습니다. 아래 표는 단순히 예시일 뿐입니다. 각자 다른 프레임워크에는 고려해야 할 특정 사항이 있을 수 있습니다. 이 가이드는 통합을 다루는 아이디어와 지침을 제공하기 위한 것입니다. 언제든지 궁금한 점이 있으면 문의해 주세요!

<!-- Generated using https://www.tablesgenerator.com/markdown_tables -->
|         통합         |                                                       helpers 사용 시                                                       |                               [ModelHubMixin](/docs/huggingface_hub/main/ko/package_reference/mixins#huggingface_hub.ModelHubMixin) 사용 시                               |
| :------------------: | :-------------------------------------------------------------------------------------------------------------------------: | :-----------------------------------------------------------------------------------: |
|     사용자 경험      |                                  `model = load_from_hub(...)`<br>`push_to_hub(model, ...)`                                  |          `model = MyModel.from_pretrained(...)`<br>`model.push_to_hub(...)`           |
|        유연성        |                                        매우 유연합니다.<br>구현을 완전히 제어합니다.                                        |          유연성이 떨어집니다.<br>프레임워크에는 모델 클래스가 있어야 합니다.          |
|      유지 관리       | 구성 및 새로운 기능에 대한 지원을 추가하기 위한 유지 관리가 더 필요합니다. 사용자가 보고한 문제를 해결해야할 수도 있습니다. | Hub와의 대부분의 상호 작용이 `huggingface_hub`에서 구현되므로 유지 관리가 줄어듭니다. |
|  문서화 / 타입 주석  |                                                  수동으로 작성해야 합니다.                                                  |                     `huggingface_hub`에서 부분적으로 처리됩니다.                      |
| 다운로드 횟수 표시기 |                                                  수동으로 처리해야 합니다.                                                  |               클래스에 `config` 속성이 있다면 기본적으로 활성화됩니다.                |
|      모델 카드       |                                                  수동으로 처리해야 합니다.                                                  |                library_name, tags 등을 활용하여 기본적으로 생성됩니다.                |


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/guides/integrations.md" />

### 모델 카드 생성 및 공유[[create-and-share-model-cards]]
https://huggingface.co/docs/huggingface_hub/main/ko/guides/model-cards.md

# 모델 카드 생성 및 공유[[create-and-share-model-cards]]

`huggingface_hub` 라이브러리는 모델 카드를 생성, 공유, 업데이트할 수 있는 파이썬 인터페이스를 제공합니다. Hub의 모델 카드가 무엇인지, 그리고 실제로 어떻게 작동하는지에 대한 자세한 내용을 확인하려면 [전용 설명 페이지](https://huggingface.co/docs/hub/models-cards)를 방문하세요.

> [!TIP]
> [신규 (베타)! 우리의 실험적인 모델 카드 크리에이터 앱을 사용해 보세요](https://huggingface.co/spaces/huggingface/Model_Cards_Writing_Tool)

## Hub에서 모델 카드 불러오기[[load-a-model-card-from-the-hub]]

Hub에서 기존 카드를 불러오려면 [ModelCard.load()](/docs/huggingface_hub/main/ko/package_reference/cards#huggingface_hub.RepoCard.load) 기능을 사용하면 됩니다. 이 문서에서는 [`nateraw/vit-base-beans`](https://huggingface.co/nateraw/vit-base-beans)에서 카드를 불러오겠습니다.


```python
from huggingface_hub import ModelCard

card = ModelCard.load('nateraw/vit-base-beans')
```

이 카드에는 접근하거나 활용할 수 있는 몇 가지 유용한 속성이 있습니다:

  - `card.data`: 모델 카드의 메타데이터가 담긴 [ModelCardData](/docs/huggingface_hub/main/ko/package_reference/cards#huggingface_hub.ModelCardData) 인스턴스를 반환합니다. 이 인스턴스에서 `.to_dict()`를 호출하면 사전 표현을 얻을 수 있습니다.
  - `card.text`: *메타데이터 헤더를 제외*한 카드의 텍스트를 반환합니다.
  - `card.content`: *메타데이터 헤더를 포함*한 카드의 텍스트 콘텐츠를 반환합니다.

## 모델 카드 만들기[[create-model-cards]]

### 텍스트에서 생성[[from-text]]

텍스트로 모델 카드의 초기 내용을 설정하려면, 카드의 텍스트 내용을 초기화 시 `ModelCard`에 전달하면 됩니다.

```python
content = """
---
language: en
license: mit
---

# 내 모델 카드
"""

card = ModelCard(content)
card.data.to_dict() == {'language': 'en', 'license': 'mit'}  # True
```
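`card.data`와 `card.text`의 구분은 YAML 프런트매터 규약(`---`로 둘러싸인 메타데이터 헤더)을 따릅니다. 동작 원리를 보여주기 위해 단순한 `key: value` 헤더만 처리하는 파서를 스케치해 보면 다음과 같습니다. 실제 `ModelCard`는 완전한 YAML 파서를 사용하므로 이는 개념 설명용 예시이며, `split_frontmatter`는 가상의 함수입니다.

```python
content = """
---
language: en
license: mit
---

# 내 모델 카드
"""

def split_frontmatter(text: str):
    # '---'로 둘러싸인 메타데이터 헤더와 본문을 분리합니다(단순한 key: value만 처리).
    _, header, body = text.split("---", 2)
    data = {}
    for line in header.strip().splitlines():
        key, _, value = line.partition(":")
        data[key.strip()] = value.strip()
    return data, body.strip()

data, text = split_frontmatter(content)
print(data)  # {'language': 'en', 'license': 'mit'}  <- card.data에 해당
print(text)  # # 내 모델 카드                         <- card.text에 해당
```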
 
이 작업을 수행하는 또 다른 방법은 f-strings를 사용하는 것입니다. 다음 예에서 우리는:

- 모델 카드에 YAML 블록을 삽입할 수 있도록 [ModelCardData.to_yaml()](/docs/huggingface_hub/main/ko/package_reference/cards#huggingface_hub.CardData.to_yaml)을 사용해서 우리가 정의한 메타데이터를 YAML로 변환합니다.
- Python f-strings를 통해 템플릿 변수를 사용할 방법을 보여줍니다.

```python
card_data = ModelCardData(language='en', license='mit', library='timm')

example_template_var = 'nateraw'
content = f"""
---
{ card_data.to_yaml() }
---

# 내 모델 카드

이 모델은 [@{example_template_var}](https://github.com/{example_template_var})에 의해 생성되었습니다
"""

card = ModelCard(content)
print(card)
```

위 예시는 다음과 같은 모습의 카드를 남깁니다:

```
---
language: en
license: mit
library: timm
---

# 내 모델 카드

이 모델은 [@nateraw](https://github.com/nateraw)에 의해 생성되었습니다
```

### Jinja 템플릿으로부터[[from-a-jinja-template]]

`Jinja2`가 설치되어 있으면, jinja 템플릿 파일에서 모델 카드를 만들 수 있습니다. 기본적인 예를 살펴보겠습니다:

```python
from pathlib import Path

from huggingface_hub import ModelCard, ModelCardData

# jinja 템플릿 정의
template_text = """
---
{{ card_data }}
---

# MyCoolModel 모델용 모델 카드

이 모델은 이런 저런 것들을 합니다.

이 모델은 [@{{ author }}](https://hf.co/{{author}})에 의해 생성되었습니다.
""".strip()

# 템플릿을 파일에 쓰기
Path('custom_template.md').write_text(template_text)

# 카드 메타데이터 정의
card_data = ModelCardData(language='en', license='mit', library_name='keras')

# 템플릿에서 카드를 만들고 원하는 Jinja 템플릿 변수를 전달합니다.
# 우리의 경우에는 작성자를 전달하겠습니다.
card = ModelCard.from_template(card_data, template_path='custom_template.md', author='nateraw')
card.save('my_model_card_1.md')
print(card)
```

결과 카드의 마크다운은 다음과 같습니다:

```
---
language: en
license: mit
library_name: keras
---

# MyCoolModel 모델용 모델 카드

이 모델은 이런 저런 것들을 합니다.

이 모델은 [@nateraw](https://hf.co/nateraw)에 의해 생성되었습니다.
```

카드 데이터를 업데이트하면 카드 자체에 반영됩니다.

```python
card.data.library_name = 'timm'
card.data.language = 'fr'
card.data.license = 'apache-2.0'
print(card)
```

이제 보시다시피 메타데이터 헤더가 업데이트되었습니다:

```
---
language: fr
license: apache-2.0
library_name: timm
---

# MyCoolModel 모델용 모델 카드

이 모델은 이런 저런 것들을 합니다.

이 모델은 [@nateraw](https://hf.co/nateraw)에 의해 생성되었습니다.
```

카드 데이터를 업데이트한 후에는 [ModelCard.validate()](/docs/huggingface_hub/main/ko/package_reference/cards#huggingface_hub.RepoCard.validate)를 호출하여 카드가 Hub에서 여전히 유효한지 확인할 수 있습니다. 이렇게 하면 카드가 Hugging Face Hub에 설정된 모든 유효성 검사 규칙을 통과하는지 확인할 수 있습니다.

### 기본 템플릿으로부터[[from-the-default-template]]

자체 템플릿을 사용하는 대신에, 많은 섹션으로 구성된 기능이 풍부한 [기본 템플릿](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md)을 사용할 수도 있습니다. 내부적으론 [Jinja2](https://jinja.palletsprojects.com/en/3.1.x/) 를 사용하여 템플릿 파일을 작성합니다.

> [!TIP]
> `from_template`를 사용하려면 jinja2를 설치해야 합니다. `pip install Jinja2`를 사용하면 됩니다.

```python
card_data = ModelCardData(language='en', license='mit', library_name='keras')
card = ModelCard.from_template(
    card_data,
    model_id='my-cool-model',
    model_description="this model does this and that",
    developers="Nate Raw",
    repo="https://github.com/huggingface/huggingface_hub",
)
card.save('my_model_card_2.md')
print(card)
```

## 모델 카드 공유하기[[share-model-cards]]

Hugging Face Hub로 인증받은 경우(`hf auth login` 또는 [login()](/docs/huggingface_hub/main/ko/package_reference/login#huggingface_hub.login) 사용) 간단히 [ModelCard.push_to_hub()](/docs/huggingface_hub/main/ko/package_reference/cards#huggingface_hub.RepoCard.push_to_hub)를 호출하여 카드를 Hub에 푸시할 수 있습니다. 이를 수행하는 방법을 살펴보겠습니다.

먼저 인증된 사용자의 네임스페이스 아래에 'hf-hub-modelcards-pr-test'라는 새로운 레포지토리를 만듭니다:

```python
from huggingface_hub import whoami, create_repo

user = whoami()['name']
repo_id = f'{user}/hf-hub-modelcards-pr-test'
url = create_repo(repo_id, exist_ok=True)
```

그런 다음 기본 템플릿에서 카드를 만듭니다(위 섹션에서 정의한 것과 동일):

```python
card_data = ModelCardData(language='en', license='mit', library_name='keras')
card = ModelCard.from_template(
    card_data,
    model_id='my-cool-model',
    model_description="this model does this and that",
    developers="Nate Raw",
    repo="https://github.com/huggingface/huggingface_hub",
)
```

마지막으로 이를 Hub로 푸시하겠습니다.

```python
card.push_to_hub(repo_id)
```

결과 카드는 [여기](https://huggingface.co/nateraw/hf-hub-modelcards-pr-test/blob/main/README.md)에서 확인할 수 있습니다.

PR로 카드를 푸시하고 싶다면 `push_to_hub`를 호출할 때 `create_pr=True`라고 지정하면 됩니다.

```python
card.push_to_hub(repo_id, create_pr=True)
```

이 명령으로 생성된 결과 PR은 [여기](https://huggingface.co/nateraw/hf-hub-modelcards-pr-test/discussions/3)에서 볼 수 있습니다.

## 메타데이터 업데이트[[update-metadata]]

이 섹션에서는 레포 카드에 있는 메타데이터와 업데이트 방법을 확인합니다.

메타데이터는 모델, 데이터 세트, Spaces에 대한 상위 수준의 정보를 담는 키-값 매핑을 말합니다. 여기에는 모델의 파이프라인 유형, `model_id`, 모델 설명 등의 정보가 포함될 수 있습니다. 자세한 내용은 [모델 카드](https://huggingface.co/docs/hub/model-cards#model-card-metadata), [데이터 세트 카드](https://huggingface.co/docs/hub/datasets-cards#dataset-card-metadata) 및 [Spaces 설정](https://huggingface.co/docs/hub/spaces-settings#spaces-settings)을 참조하세요. 이제 메타데이터를 업데이트하는 방법에 대한 몇 가지 예를 살펴보겠습니다.


첫 번째 예부터 살펴보겠습니다:

```python
>>> from huggingface_hub import metadata_update
>>> metadata_update("username/my-cool-model", {"pipeline_tag": "image-classification"})
```

두 줄의 코드로 메타데이터를 업데이트하여 새로운 `pipeline_tag`를 설정할 수 있습니다.

기본적으로 카드에 이미 존재하는 키는 업데이트할 수 없습니다. 그렇게 하려면 `overwrite=True`를 명시적으로 전달해야 합니다.

```python
>>> from huggingface_hub import metadata_update
>>> metadata_update("username/my-cool-model", {"pipeline_tag": "text-generation"}, overwrite=True)
```
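`metadata_update`의 실제 구현과는 별개로, `overwrite` 규칙 자체는 다음과 같은 딕셔너리 병합으로 요약해 볼 수 있습니다(`merge_metadata`는 설명을 위한 가상의 함수입니다).

```python
def merge_metadata(existing: dict, updates: dict, overwrite: bool = False) -> dict:
    # overwrite 규칙의 스케치: 이미 있는 키를 다른 값으로 바꾸려면 overwrite=True가 필요합니다.
    merged = dict(existing)
    for key, value in updates.items():
        if key in merged and merged[key] != value and not overwrite:
            raise ValueError(f"'{key}' 키가 이미 존재합니다. overwrite=True를 전달하세요.")
        merged[key] = value
    return merged

card_metadata = {"pipeline_tag": "image-classification"}
print(merge_metadata(card_metadata, {"pipeline_tag": "text-generation"}, overwrite=True))
# {'pipeline_tag': 'text-generation'}
```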

쓰기 권한이 없는 저장소에 일부 변경 사항을 제안하려는 경우가 종종 있습니다. 소유자가 귀하의 제안을 검토하고 병합할 수 있도록 해당 저장소에 PR을 생성하면 됩니다.

```python
>>> from huggingface_hub import metadata_update
>>> metadata_update("someone/model", {"pipeline_tag": "text-classification"}, create_pr=True)
```

## 평가 결과 포함하기[[include-evaluation-results]]

메타데이터의 `model-index`에 평가 결과를 포함하려면 관련 평가 결과가 담긴 `EvalResult` 하나 또는 `EvalResult` 목록을 전달하면 됩니다. 내부적으로는 `card.data.to_dict()`를 호출할 때 `model-index`가 생성됩니다. 자세한 내용은 [Hub 문서의 이 섹션](https://huggingface.co/docs/hub/models-cards#evaluation-results)을 참조하세요.

> [!TIP]
> 이 기능을 사용하려면 [ModelCardData](/docs/huggingface_hub/main/ko/package_reference/cards#huggingface_hub.ModelCardData)에 `model_name` 속성을 포함해야 합니다.

```python
from huggingface_hub import EvalResult, ModelCard, ModelCardData

card_data = ModelCardData(
    language='en',
    license='mit',
    model_name='my-cool-model',
    eval_results = EvalResult(
        task_type='image-classification',
        dataset_type='beans',
        dataset_name='Beans',
        metric_type='accuracy',
        metric_value=0.7
    )
)

card = ModelCard.from_template(card_data)
print(card.data)
```

결과 `card.data`는 다음과 같이 보여야 합니다:

```
language: en
license: mit
model-index:
- name: my-cool-model
  results:
  - task:
      type: image-classification
    dataset:
      name: Beans
      type: beans
    metrics:
    - type: accuracy
      value: 0.7
```

공유하고 싶은 평가 결과가 둘 이상인 경우에는 `EvalResult` 목록을 전달하기만 하면 됩니다:

```python
card_data = ModelCardData(
    language='en',
    license='mit',
    model_name='my-cool-model',
    eval_results = [
        EvalResult(
            task_type='image-classification',
            dataset_type='beans',
            dataset_name='Beans',
            metric_type='accuracy',
            metric_value=0.7
        ),
        EvalResult(
            task_type='image-classification',
            dataset_type='beans',
            dataset_name='Beans',
            metric_type='f1',
            metric_value=0.65
        )
    ]
)
card = ModelCard.from_template(card_data)
card.data
```
그러면 다음 `card.data`가 남게 됩니다:

```
language: en
license: mit
model-index:
- name: my-cool-model
  results:
  - task:
      type: image-classification
    dataset:
      name: Beans
      type: beans
    metrics:
    - type: accuracy
      value: 0.7
    - type: f1
      value: 0.65
```


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/guides/model-cards.md" />

### Hub에 파일 업로드하기[[upload-files-to-the-hub]]
https://huggingface.co/docs/huggingface_hub/main/ko/guides/upload.md

# Hub에 파일 업로드하기[[upload-files-to-the-hub]]

파일과 작업물을 공유하는 것은 Hub의 주요 특성 중 하나입니다. `huggingface_hub`는 Hub에 파일을 업로드하기 위한 몇 가지 옵션을 제공합니다. 이러한 기능을 단독으로 사용하거나 라이브러리에 통합하여 해당 라이브러리의 사용자가 Hub와 더 편리하게 상호작용할 수 있도록 도울 수 있습니다.

Hub에 파일을 업로드 하려면 허깅페이스 계정으로 로그인해야 합니다. 인증에 대한 자세한 내용은 [이 페이지](../quick-start#authentication)를 참조해 주세요.

## 파일 업로드하기[[upload-a-file]]

[create_repo()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_repo)로 리포지토리를 생성했다면, [upload_file()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.upload_file)을 사용하여 해당 리포지토리에 파일을 업로드할 수 있습니다.

업로드할 파일의 로컬 경로, 리포지토리 내에서 파일이 위치할 경로, 대상 리포지토리의 이름을 지정합니다. 리포지토리의 유형은 `dataset`, `model`, `space` 중에서 선택적으로 설정할 수 있습니다.


```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.upload_file(
...     path_or_fileobj="/path/to/local/folder/README.md",
...     path_in_repo="README.md",
...     repo_id="username/test-dataset",
...     repo_type="dataset",
... )
```

## 폴더 업로드[[upload-a-folder]]

로컬 폴더를 리포지토리에 업로드하려면 [upload_folder()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.upload_folder) 함수를 사용합니다.
업로드할 로컬 폴더의 경로, 리포지토리 내에서 폴더가 위치할 경로, 대상 리포지토리의 이름을 지정합니다. 리포지토리의 유형은 `dataset`, `model`, `space` 중에서 선택적으로 설정할 수 있습니다.

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()

# 로컬 폴더에 있는 모든 콘텐츠를 원격 Space에 업로드 합니다.
# 파일은 기본적으로 리포지토리의 루트 디렉토리에 업로드 됩니다.
>>> api.upload_folder(
...     folder_path="/path/to/local/space",
...     repo_id="username/my-cool-space",
...     repo_type="space",
... )
```

기본적으로 어떤 파일을 커밋할지 결정할 때 `.gitignore` 파일을 참조합니다. 먼저 커밋에 `.gitignore` 파일이 있는지 확인하고, 없다면 Hub에 파일이 있는지 확인합니다. 디렉터리의 루트 경로에 있는 `.gitignore` 파일만 사용된다는 점에 주의하세요. 하위 디렉터리의 `.gitignore` 파일은 확인하지 않습니다.

하드코딩된 `.gitignore` 파일을 사용하지 않으려면 `allow_patterns` 와 `ignore_patterns` 인수를 사용하여 업로드할 파일을 필터링할 수 있습니다. 이 매개변수들은 단일 패턴 또는 패턴 리스트를 허용합니다. 패턴의 형식은 [이 문서](https://tldp.org/LDP/GNU-Linux-Tools-Summary/html/x11655.htm)에 설명된 대로 표준 와일드카드(글로빙 패턴)입니다. `allow_patterns`과 `ignore_patterns`을 모두 사용하면 두 가지 설정이 모두 적용됩니다.

`.gitignore` 파일과 allow/ignore 패턴과는 별개로, 하위 경로에 있는 모든 `.git/` 폴더는 무시됩니다.
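`allow_patterns`와 `ignore_patterns`가 함께 적용되는 방식은 표준 라이브러리 `fnmatch`로 흉내 내 볼 수 있습니다. 실제 `huggingface_hub`의 패턴 매칭 세부 규칙과는 다를 수 있는 개념 스케치이며, `filter_files`는 설명을 위한 가상의 함수입니다.

```python
from fnmatch import fnmatch

def filter_files(paths, allow_patterns=None, ignore_patterns=None):
    # 단일 패턴(str)과 패턴 리스트를 모두 허용합니다.
    if isinstance(allow_patterns, str):
        allow_patterns = [allow_patterns]
    if isinstance(ignore_patterns, str):
        ignore_patterns = [ignore_patterns]
    selected = []
    for path in paths:
        # allow_patterns가 주어지면 하나 이상과 일치해야 합니다.
        if allow_patterns is not None and not any(fnmatch(path, p) for p in allow_patterns):
            continue
        # ignore_patterns와 일치하는 파일은 제외합니다. 두 설정은 함께 적용됩니다.
        if ignore_patterns is not None and any(fnmatch(path, p) for p in ignore_patterns):
            continue
        selected.append(path)
    return selected

files = ["model.safetensors", "logs/run1.txt", "README.md"]
print(filter_files(files, allow_patterns=["*.safetensors", "*.md"]))
# ['model.safetensors', 'README.md']
print(filter_files(files, ignore_patterns="logs/*"))
# ['model.safetensors', 'README.md']
```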

```py
>>> api.upload_folder(
...     folder_path="/path/to/local/folder",
...     path_in_repo="my-dataset/train", # 특정 폴더에 업로드
...     repo_id="username/test-dataset",
...     repo_type="dataset",
...     ignore_patterns="**/logs/*.txt", # 모든 로그 텍스트 파일을 무시
... )
```

`delete_patterns` 인수를 사용하여 동일한 커밋에서 리포지토리에서 삭제할 파일을 지정할 수도 있습니다.
이 방법은 파일을 푸시하기 전에 원격 폴더를 정리하고 싶은데 어떤 파일이 이미 존재하는지 모르는 경우에 유용합니다.

다음은 로컬 `./logs` 폴더를 원격 `/experiment/logs/` 폴더에 업로드하는 예시입니다.
폴더 내의 txt 파일만을 업로드 하게 되며 그 전에 리포지토리에 있던 모든 이전 txt 파일이 삭제됩니다. 이 모든 과정이 단 한 번의 커밋으로 이루어집니다.
```py
>>> api.upload_folder(
...     folder_path="/path/to/local/folder/logs",
...     repo_id="username/trained-model",
...     path_in_repo="experiment/logs/",
...     allow_patterns="*.txt", # 모든 로컬 텍스트 파일을 업로드
...     delete_patterns="*.txt", # 모든 이전 텍스트 파일을 삭제
... )
```

## CLI에서 업로드[[upload-from-the-cli]]

터미널에서 `hf upload` 명령어를 사용하여 Hub에 파일을 직접 업로드할 수 있습니다. 내부적으로는 위에서 설명한 것과 동일한 [upload_file()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.upload_file) 와 [upload_folder()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.upload_folder) 함수를 사용합니다.

다음과 같이 단일 파일 또는 전체 폴더를 업로드할 수 있습니다:

```bash
# 사용례:  hf upload [repo_id] [local_path] [path_in_repo]
>>> hf upload Wauplin/my-cool-model ./models/model.safetensors model.safetensors
https://huggingface.co/Wauplin/my-cool-model/blob/main/model.safetensors

>>> hf upload Wauplin/my-cool-model ./models .
https://huggingface.co/Wauplin/my-cool-model/tree/main
```

`local_path`와 `path_in_repo`는 선택 사항이며, 생략하면 자동으로 추정됩니다.
`local_path`가 설정되지 않은 경우, 이 툴은 `repo_id`와 같은 이름의 로컬 폴더나 파일이 있는지 확인하고, 발견되면 해당 폴더나 파일을 업로드합니다.
같은 이름의 폴더나 파일을 찾지 못하면 `local_path`를 명시하라는 예외가 발생합니다.
어떤 경우든 `path_in_repo`가 설정되어 있지 않으면 파일이 리포지토리의 루트 디렉터리에 업로드됩니다.
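위 추정 규칙을 코드로 표현하면 대략 다음과 같습니다. `infer_upload_paths`는 설명을 위한 가상의 함수로, 실제 CLI 구현을 단순화한 스케치입니다.

```python
from pathlib import Path
from typing import Optional, Tuple

def infer_upload_paths(
    repo_id: str,
    local_path: Optional[str],
    path_in_repo: Optional[str],
) -> Tuple[str, str]:
    # `hf upload`의 인수 추정 규칙을 단순화한 스케치입니다.
    if local_path is None:
        # repo_id와 같은 이름의 로컬 폴더/파일이 있는지 확인합니다.
        repo_name = repo_id.split("/")[-1]
        if not Path(repo_name).exists():
            raise ValueError("같은 이름의 폴더/파일이 없습니다. local_path를 명시해 주세요.")
        local_path = repo_name
    if path_in_repo is None:
        path_in_repo = "."  # 리포지토리의 루트 디렉터리에 업로드
    return local_path, path_in_repo

print(infer_upload_paths("Wauplin/my-cool-model", "./models", None))
# ('./models', '.')
```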

CLI 업로드 명령어에 대한 자세한 내용은 [CLI 가이드](./cli#hf-upload)를 참조하세요.

## 고급 기능[[advanced-features]]

대부분의 경우, Hub에 파일을 업로드하는 데 [upload_file()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.upload_file)과 [upload_folder()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.upload_folder) 이상이 필요하지 않습니다.
하지만 `huggingface_hub`에는 작업을 더 쉽게 할 수 있는 고급 기능이 있습니다. 그 기능들을 살펴봅시다!


### 논블로킹 업로드[[non-blocking-uploads]]

메인 스레드를 멈추지 않고 데이터를 푸시하고 싶은 경우가 있습니다.
이는 모델 학습을 계속 진행하면서 로그와 아티팩트를 업로드할 때 특히 유용합니다.
이렇게 하려면 [upload_file()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.upload_file)과 [upload_folder()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.upload_folder)에 `run_as_future=True` 인수를 전달하면 됩니다. 반환되는 [`concurrent.futures.Future`](https://docs.python.org/3/library/concurrent.futures.html#future-objects) 객체로 업로드 상태를 확인할 수 있습니다.

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> future = api.upload_folder( # 백그라운드에서 업로드 작업 수행 (논블로킹)
...     repo_id="username/my-model",
...     folder_path="checkpoints-001",
...     run_as_future=True,
... )
>>> future
Future(...)
>>> future.done()
False
>>> future.result() # 업로드가 완료될 때까지 대기 (블로킹)
...
```

> [!TIP]
> `run_as_future=True`를 사용하면 백그라운드 작업이 큐에 대기됩니다. 이는 작업이 올바른 순서로 실행된다는 것을 의미합니다.
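작업이 올바른 순서로 실행되는 이유는 백그라운드 큐가 단일 워커로 동작하기 때문입니다. 같은 패턴을 표준 라이브러리 `concurrent.futures`로 스케치하면 다음과 같습니다(`fake_upload`는 실제 업로드 호출을 대신하는 가상의 함수이며, 실제 HfApi 내부 구현과는 다를 수 있습니다).

```python
import time
from concurrent.futures import ThreadPoolExecutor

# 워커가 하나인 실행기에 작업을 제출하면, 제출한 순서대로 하나씩 실행됩니다.
executor = ThreadPoolExecutor(max_workers=1)
results = []

def fake_upload(name: str) -> str:
    time.sleep(0.01)  # 네트워크 지연 흉내
    results.append(name)
    return name

futures = [executor.submit(fake_upload, n) for n in ["리포지토리 생성", "file.txt 업로드"]]
print([f.result() for f in futures])  # ['리포지토리 생성', 'file.txt 업로드']
print(results)  # 제출 순서가 그대로 유지됩니다
executor.shutdown()
```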

백그라운드 작업은 주로 데이터를 업로드하거나 커밋을 생성하는 데 유용하지만, 이 외에도 [run_as_future()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.run_as_future)를 사용하여 원하는 메소드를 대기열에 넣을 수 있습니다.
예를 들어, 해당 기능을 사용하여 백그라운드에서 리포지토리를 만든 다음 그대로 데이터를 업로드할 수 있습니다.
업로드 메소드에 내장된 `run_as_future` 인수는 본 기능의 별칭입니다.

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.run_as_future(api.create_repo, "username/my-model", exists_ok=True)
Future(...)
>>> api.upload_file(
...     repo_id="username/my-model",
...     path_in_repo="file.txt",
...     path_or_fileobj=b"file content",
...     run_as_future=True,
... )
Future(...)
```

### 청크 단위로 폴더 업로드하기[[upload-a-folder-by-chunks]]

[upload_folder()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.upload_folder)를 사용하면 전체 폴더를 Hub에 쉽게 업로드할 수 있지만, 대용량 폴더(수천 개의 파일 또는 수백 GB의 용량)의 경우 문제가 될 수 있습니다.
파일이 많은 폴더가 있는 경우 여러 커밋에 걸쳐 업로드하는 것이 좋습니다.
업로드 중에 오류나 연결 문제가 발생해도 처음부터 다시 시작할 필요는 없습니다.

여러 커밋으로 폴더를 업로드하려면 `multi_commits=True`를 인수로 전달하면 됩니다.
내부적으로 `huggingface_hub`는 업로드/삭제할 파일을 나열하고 여러 커밋으로 분할합니다.
커밋을 분할하는 전략은 업로드할 파일의 수와 크기에 따라 결정됩니다.
모든 커밋을 푸시하기 위해 Hub에 PR이 열리게 되며, PR이 준비되면 여러 커밋이 단일 커밋으로 뭉쳐집니다.
완료하기 전에 프로세스가 중단된 경우 스크립트를 다시 실행하여 업로드를 재개할 수 있습니다. 생성된 PR이 자동으로 감지되고 업로드가 중단된 지점부터 업로드가 재개됩니다.
업로드 진행 상황을 더 잘 이해하고 싶다면 `multi_commits_verbose=True`를 인수로 전달하면 됩니다.

다음은 여러 커밋으로 체크포인트 폴더를 데이터셋에 업로드하는 예제입니다.
Hub에 PR이 생성되고 업로드가 완료되면 자동으로 병합됩니다.
PR을 계속 열어두고 수동으로 검토하려면 `create_pr=True`를 인수로 전달하면 됩니다.

```py
>>> upload_folder(
...     folder_path="local/checkpoints",
...     repo_id="username/my-dataset",
...     repo_type="dataset",
...     multi_commits=True,
...     multi_commits_verbose=True,
... )
```

업로드 전략(즉, 생성되는 커밋)을 더 잘 제어하고 싶으면 로우 레벨의 `plan_multi_commits` 와 `create_commits_on_pr` 메서드를 살펴보세요.

> [!WARNING]
> `multi_commits`은 아직 실험적인 기능입니다.
> 해당 API와 동작은 향후 사전 고지 없이 변경될 수 있습니다.

### 예약된 업로드[[scheduled-uploads]]

허깅페이스 Hub를 사용하면 데이터를 쉽게 저장하고 버전 관리할 수 있지만, 동일한 파일을 수천 번 업데이트할 때는 몇 가지 제한이 있습니다.
예를 들어 학습 과정의 로그나 배포된 Space에 대한 사용자 피드백을 저장하고 싶을 수 있습니다. 이 경우 데이터를 Hub에 데이터 세트로 업로드하는 것이 자연스러워 보이지만, 제대로 하기는 어려울 수 있습니다.
데이터가 업데이트될 때마다 버전을 만들면 git 리포지토리를 사용할 수 없는 상태로 만들어 버리기 때문입니다.
[CommitScheduler](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.CommitScheduler) 클래스는 이 문제에 대한 해결책을 제공합니다.

이 클래스는 로컬 폴더를 Hub에 정기적으로 푸시하는 백그라운드 작업을 실행합니다.
일부 텍스트를 입력으로 받아 두 개의 번역을 생성한 다음, 사용자가 선호하는 번역을 선택할 수 있는 라디오 스페이스가 있다고 가정해 보겠습니다.
이 스페이스의 각 실행에 대해 입력, 출력 및 사용자 기본 설정을 저장하여 결과를 분석하려고 하는데, 이것은 [CommitScheduler](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.CommitScheduler)의 완벽한 사용 사례가 될 수 있습니다.
Hub에 데이터(잠재적으로 수백만 개의 사용자 피드백)를 저장하고 싶지만, 굳이 각 사용자의 입력을 _실시간_ 으로 저장할 필요는 없으니 데이터를 로컬 JSON 파일에 저장한 다음 10분마다 업로드하면 됩니다.
예제 코드는 다음과 같습니다:

```py
>>> import json
>>> import uuid
>>> from pathlib import Path
>>> import gradio as gr
>>> from huggingface_hub import CommitScheduler

# 데이터를 저장할 파일을 선언합니다. UUID를 이용하여 중복을 방지합니다.
>>> feedback_file = Path("user_feedback/") / f"data_{uuid.uuid4()}.json"
>>> feedback_folder = feedback_file.parent

# 정기 업로드를 예약합니다. 원격 리포지토리와 로컬 폴더가 없을시 생성합니다.
>>> scheduler = CommitScheduler(
...     repo_id="report-translation-feedback",
...     repo_type="dataset",
...     folder_path=feedback_folder,
...     path_in_repo="data",
...     every=10,
... )

# 사용자가 피드백을 제출할 때 호출받을 함수를 정의합니다. (Gradio 안에서 호출받게 됩니다)
>>> def save_feedback(input_text:str, output_1: str, output_2:str, user_choice: int) -> None:
...     """
...     JSON Lines 파일에 입/출력과 사용자 피드백을 추가합니다. 다른 사용자와의 동시 쓰기를 방지하기 위해 스레드 락을 사용합니다.
...     """
...     with scheduler.lock:
...         with feedback_file.open("a") as f:
...             f.write(json.dumps({"input": input_text, "output_1": output_1, "output_2": output_2, "user_choice": user_choice}))
...             f.write("\n")

# Gradio를 시작합니다.
>>> with gr.Blocks() as demo:
>>>     ... # Gradio 데모를 정의하고 `save_feedback`을 사용합니다
>>> demo.launch()
```

사용자 입력/출력 및 피드백은 Hub에서 데이터 세트의 형태로 사용할 수 있습니다.
고유한 JSON 파일 이름을 사용하면, 이전 실행이나 동일한 리포지토리에 동시에 푸시하는 다른 스페이스/복제본이 데이터를 덮어쓰는 일을 방지할 수 있습니다.
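`scheduler.lock`이 보장하는 스레드 안전성은 표준 `threading.Lock`으로 같은 패턴을 재현해 확인해 볼 수 있습니다. 아래는 개념 설명용 스케치이며 파일 이름 등은 임의의 가정입니다.

```python
import json
import tempfile
import threading
from pathlib import Path

# CommitScheduler의 scheduler.lock에 해당하는 역할을 하는 락입니다.
lock = threading.Lock()
feedback_file = Path(tempfile.mkdtemp()) / "data_example.jsonl"

def save_feedback(entry: dict) -> None:
    # 락으로 감싸면 여러 스레드가 동시에 호출해도 줄이 섞이지 않습니다.
    with lock:
        with feedback_file.open("a") as f:
            f.write(json.dumps(entry) + "\n")

threads = [
    threading.Thread(target=save_feedback, args=({"user_choice": i},))
    for i in range(5)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

lines = feedback_file.read_text().splitlines()
print(len(lines))  # 5 — 각 피드백이 한 줄의 JSON으로 기록됩니다
```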

[CommitScheduler](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.CommitScheduler)에 대한 상세 사항은 다음과 같습니다:
- **추가 전용:**
    스케줄러는 폴더에 콘텐츠를 추가만 한다고 가정합니다. 기존 파일에 데이터를 추가하거나 새 파일을 만들 때만 사용하여야 합니다.
    파일을 삭제하거나 덮어쓰면 리포지토리가 손상될 수 있습니다.
- **git 히스토리**:
    기본적으로 스케줄러는 매 분마다 폴더를 커밋합니다.
    git 히스토리를 너무 많이 오염시키지 않으려면 최소 5분 이상으로 설정하는 것이 좋습니다.
    또한 스케줄러는 빈 커밋을 피하도록 설계되었는데, 만약 폴더에서 새 콘텐츠가 감지되지 않으면 예약된 커밋을 삭제합니다.
- **에러:**
    스케줄러는 백그라운드 스레드로 실행되고, 이는 클래스를 인스턴스화할 때 시작되며 절대 멈추지 않습니다.
    만약 업로드 중에 오류가 발생하면(예: 연결 문제), 스케줄러는 이를 아무 말 없이 무시하고 다음 예약된 커밋에서 재시도 합니다.
- **스레드 안전:**
    대부분의 경우 파일 락에 대해 걱정할 필요 없이 파일에 쓰기 작업을 수행 할 수 있습니다.
    스케줄러는 업로드하는 동안 대상 폴더에 콘텐츠를 쓰더라도 충돌하거나 손상되지 않습니다.
    그러나, 부하가 많은 앱의 경우 이런 작업에서 _동시성 문제_ 가 발생할 수 있습니다.
    이 경우, `scheduler.lock`을 사용하여 스레드 안전을 보장하는 것이 좋습니다.
    이 락은 스케줄러가 폴더에서 변경 사항을 검색할 때만 차단되며, 데이터를 업로드할 때는 차단되지 않습니다.
    따라서 Space의 사용자 환경에는 영향을 미치지 않습니다.

#### 스페이스 지속성 데모[[space-persistence-demo]]

스페이스에서 Hub의 데이터셋으로 데이터를 영속하는 것이 [CommitScheduler](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.CommitScheduler)의 주요 사용 사례입니다.
각 사용 사례에 따라 데이터 구조를 다르게 설정해야 할 수도 있습니다.
데이터 구조는 동시 사용자와 재시작에 대해 견고해야 하며, 이는 대개 UUID를 생성해야 함을 의미합니다.
견고함뿐만 아니라, 재사용성을 위해 🤗 데이터 세트 라이브러리에서 읽을 수 있는 형식으로 데이터를 업로드해야 합니다.
여러 가지 데이터 형식을 저장하는 방법은 이 [스페이스](https://huggingface.co/spaces/Wauplin/space_to_dataset_saver) 예제에서 확인할 수 있습니다(각자의 필요에 맞게 조정해야 할 수도 있습니다).

#### 사용자 지정 업로드[[custom-uploads]]

[CommitScheduler](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.CommitScheduler)는 데이터가 추가 전용이며 "있는 그대로" 업로드해야 한다고 가정합니다.
그러나 데이터 업로드 방식을 사용자 스스로 정의하고 싶을 때도 있는데, [CommitScheduler](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.CommitScheduler)를 상속받는 클래스를 생성하고 `push_to_hub` 메서드를 덮어쓰면 됩니다(원하는 방식으로 자유롭게 덮어쓰세요).
이렇게 하면 덮어쓴 메서드가 백그라운드 스레드에서 매 분마다 호출됩니다.
동시성 및 오류에 대해 걱정할 필요는 없지만 빈 커밋이나 중복된 데이터를 푸시하는 것과 같은 케이스들에 주의해야 합니다.

아래의 (단순화된) 예제에서는 `push_to_hub`를 덮어써서 모든 PNG 파일을 단일 아카이브로 압축함으로써 Hub 리포지토리에 과부하가 걸리는 것을 방지합니다:

```py
class ZipScheduler(CommitScheduler):
    def push_to_hub(self):
        # 1. PNG 파일들을 나열합니다.
        png_files = list(self.folder_path.glob("*.png"))
        if len(png_files) == 0:
            return None  # 커밋할 것이 없다면 일찍 리턴합니다.

        # 2. png 파일들을 단일 Zip 파일로 압축합니다.
        with tempfile.TemporaryDirectory() as tmpdir:
            archive_path = Path(tmpdir) / "train.zip"
            with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zip:
                for png_file in png_files:
                    zip.write(filename=png_file, arcname=png_file.name)

            # 3. 압축된 파일을 업로드 합니다.
            self.api.upload_file(..., path_or_fileobj=archive_path)

        # 4. 로컬 png 파일을 삭제하여 다음에 다시 업로드 되는 일을 방지합니다.
        for png_file in png_files:
            png_file.unlink()
```

`push_to_hub`를 덮어쓰면 다음과 같은 [CommitScheduler](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.CommitScheduler)의 속성에 접근할 수 있습니다:
- [HfApi](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi) 클라이언트: `api`
- 폴더 매개변수: `folder_path` 및 `path_in_repo`
- 리포지토리 매개변수: `repo_id`, `repo_type`, `revision`
- 스레드 락: `lock`

> [!TIP]
> 사용자 정의 스케줄러의 더 많은 예제는 사용 사례에 따른 다양한 구현이 포함된 [데모 스페이스](https://huggingface.co/spaces/Wauplin/space_to_dataset_saver)를 참조하세요.

### create_commit[[createcommit]]

[upload_file()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.upload_file) 및 [upload_folder()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.upload_folder) 함수는 일반적으로 사용하기 편리한 하이 레벨 API입니다.
로우 레벨에서 작업할 필요가 없다면 이 함수들을 먼저 사용해 볼 것을 권장합니다.
만약 커밋 레벨에서 작업하고 싶다면 [create_commit()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_commit) 함수를 직접 사용할 수 있습니다.

[create_commit()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_commit)이 지원하는 작업 유형은 세 가지입니다:

- [CommitOperationAdd](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.CommitOperationAdd) 는 파일을 Hub에 업로드합니다. 파일이 이미 있는 경우 파일 내용을 덮어씁니다. 이 작업은 두 개의 인수를 받습니다:

  - `path_in_repo`: 파일을 업로드할 리포지토리 경로.
  - `path_or_fileobj`: Hub에 업로드할 파일의 파일 시스템상 파일 경로 또는 파일 스타일 객체.

- [CommitOperationDelete](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.CommitOperationDelete)는 리포지토리에서 파일 또는 폴더를 제거합니다. 이 작업은 `path_in_repo`를 인수로 받습니다.

- [CommitOperationCopy](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.CommitOperationCopy)는 리포지토리 내의 파일을 복사합니다. 이 작업은 세 가지 인수를 받습니다:

  - `src_path_in_repo`: 복사할 파일의 리포지토리 경로.
  - `path_in_repo`: 파일 붙여넣기를 수행할 리포지토리 경로.
  - `src_revision`: 선택 사항 - 다른 브랜치/리비전에서 파일을 복사하려는 경우 필요한 복사할 파일의 리비전.

예를 들어, Hub 리포지토리에서 두 개의 파일을 업로드하고 한 개의 파일을 삭제하려는 경우:

1. 파일을 추가하거나 삭제하고 폴더를 삭제하기 위해 적절한 `CommitOperation`을 사용합니다:

```py
>>> from huggingface_hub import HfApi, CommitOperationAdd, CommitOperationCopy, CommitOperationDelete
>>> api = HfApi()
>>> operations = [
...     CommitOperationAdd(path_in_repo="LICENSE.md", path_or_fileobj="~/repo/LICENSE.md"),
...     CommitOperationAdd(path_in_repo="weights.h5", path_or_fileobj="~/repo/weights-final.h5"),
...     CommitOperationDelete(path_in_repo="old-weights.h5"),
...     CommitOperationDelete(path_in_repo="logs/"),
...     CommitOperationCopy(src_path_in_repo="image.png", path_in_repo="duplicate_image.png"),
... ]
```

2. 작업을 [create_commit()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_commit)에 전달합니다:

```py
>>> api.create_commit(
...     repo_id="lysandre/test-model",
...     operations=operations,
...     commit_message="Upload my model weights and license",
... )
```

다음 함수들은 [upload_file()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.upload_file) 및 [upload_folder()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.upload_folder) 외에도 내부적으로 [create_commit()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_commit)을 사용합니다:

- [delete_file()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.delete_file)은 Hub의 리포지토리에서 단일 파일을 삭제합니다.
- [delete_folder()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.delete_folder)는 Hub의 리포지토리에서 전체 폴더를 삭제합니다.
- [metadata_update()](/docs/huggingface_hub/main/ko/package_reference/cards#huggingface_hub.metadata_update)는 리포지토리의 메타데이터를 업데이트합니다.

자세한 내용은 [HfApi](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi) 의 레퍼런스를 참조하세요.

### 커밋하기 전에 LFS 파일 미리 업로드하기[[preupload-lfs-files-before-commit]]

경우에 따라 커밋 호출을 **하기 전에** 대용량 파일을 S3에 업로드해야 할 수도 있습니다.
예를 들어 메모리에서 생성된 여러 샤드로 이루어진 데이터 세트를 커밋하는 경우, 메모리 부족 문제를 피하려면 샤드를 하나씩 업로드해야 할 것입니다.
이 문제에 대한 해결책은 각 샤드를 리포지토리에 별도의 커밋으로 업로드하는 것입니다.
이 방법은 완벽하게 유효하지만, 수십 개의 커밋을 생성하여 잠재적으로 git 히스토리를 엉망으로 만들 수 있다는 단점이 있습니다.
이 문제를 극복하기 위해 파일을 하나씩 S3에 업로드한 다음 마지막에 하나의 커밋을 생성할 수 있습니다.
이는 [preupload_lfs_files()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.preupload_lfs_files)와 [create_commit()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_commit)을 함께 사용하면 됩니다.

> [!WARNING]
> 이 방법은 고급 사용자를 위한 방식입니다.
> 사전에 파일을 미리 업로드하는 로우 레벨 로직을 처리하는 대신 [upload_file()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.upload_file), [upload_folder()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.upload_folder) 또는 [create_commit()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_commit)을 직접 사용하는 것이 대부분의 경우에 적합합니다.
> [preupload_lfs_files()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.preupload_lfs_files)의 주요 주의 사항은 커밋이 실제로 이루어질 때까지는 Hub의 리포지토리에서 업로드 파일에 액세스할 수 없다는 것입니다.
> 궁금한 점이 있으면 언제든지 Discord나 GitHub 이슈로 문의해 주세요.

다음은 파일을 미리 업로드하는 방법을 보여주는 간단한 예시입니다:

```py
>>> from huggingface_hub import CommitOperationAdd, preupload_lfs_files, create_commit, create_repo

>>> repo_id = create_repo("test_preupload").repo_id

>>> operations = [] # List of all `CommitOperationAdd` objects that will be created
>>> for i in range(5):
...     content = ... # generate binary content
...     addition = CommitOperationAdd(path_in_repo=f"shard_{i}_of_5.bin", path_or_fileobj=content)
...     preupload_lfs_files(repo_id, additions=[addition])
...     operations.append(addition)

>>> # Create the commit
>>> create_commit(repo_id, operations=operations, commit_message="Commit all shards")
```

First, the [CommitOperationAdd](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.CommitOperationAdd) objects are created one by one. In a real-world example, these would contain the generated shards.
Each file is uploaded before the next one is generated.
During the [preupload_lfs_files()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.preupload_lfs_files) step, **the `CommitOperationAdd` objects are mutated**; as such, they should only be used to pass directly to [create_commit()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_commit).
The main change to the objects is that **their binary content is removed**, which means it will be garbage-collected unless you keep another reference to it. This is expected, as we don't want to keep already-uploaded content in memory.
Finally, the commit is created by passing all the operations to [create_commit()](/docs/huggingface_hub/main/ko/package_reference/hf_api#huggingface_hub.HfApi.create_commit).
Additional operations that have not yet been processed (additions, deletions, or copies) will also be handled correctly if passed along.
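Since `create_commit()` accepts any mix of operations, a commit that both adds and deletes files can be sketched as follows (the repo id and file names are hypothetical):

```py
from huggingface_hub import (
    CommitOperationAdd,
    CommitOperationDelete,
    create_commit,
)

# A single commit can mix additions and deletions
operations = [
    CommitOperationAdd(path_in_repo="data/train.csv", path_or_fileobj=b"a,b\n1,2\n"),
    CommitOperationDelete(path_in_repo="data/outdated.csv"),
]

# create_commit("username/my-dataset", operations=operations,
#               commit_message="Add new shard, drop outdated one")
```

Building the operation list is done locally; only the `create_commit()` call (commented out here) contacts the Hub.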

## Tips and tricks for large uploads[[tips-and-tricks-for-large-uploads]]

There are some limitations to be aware of when dealing with a large amount of data in your repository.
Given the time it takes to stream the data, having an upload/push fail at the very end of the process, or running into degraded performance either on hf.co or locally, can be very annoying.

Check out the [Repository limitations and recommendations](https://huggingface.co/docs/hub/repositories-recommendations) guide for best practices on how to structure your repositories on the Hub.
Next, let's go over some practical tips to make your upload process as smooth as possible.

- **Start small**: We recommend starting with a small amount of data to test your upload script. Since a small amount of data takes little time to process, it's easier to iterate on the script.
- **Expect failures**: Streaming large amounts of data is challenging. You don't know what can happen, but it's always best to assume that something will fail at least once, whether it's your machine, your connection, or our servers. For example, if you plan to upload a large number of files, it's best to keep track locally of which files you have already uploaded before uploading the next ones. LFS files that are already committed will never be re-uploaded twice, but checking this on the client side can still save some time.
- **Use `hf_transfer`**: [`hf_transfer`](https://github.com/huggingface/hf_transfer) is a Rust-based library designed to speed up uploads on machines with very high bandwidth. To use `hf_transfer`:

    1. Specify the `hf_transfer` extra when installing `huggingface_hub`
       (e.g. `pip install huggingface_hub[hf_transfer]`).
    2. Set `HF_HUB_ENABLE_HF_TRANSFER=1` as an environment variable.
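
    The two installation steps above, as a shell sketch:

    ```sh
    pip install "huggingface_hub[hf_transfer]"
    export HF_HUB_ENABLE_HF_TRANSFER=1
    ```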

> [!WARNING]
> `hf_transfer` is a power-user tool!
> It is tested and production-ready, but lacks user-friendly features such as advanced error handling or proxies.
> For more details, please take a look at [this section](https://huggingface.co/docs/huggingface_hub/hf_transfer).
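
The tip above about tracking already-uploaded files locally can be sketched with a small JSON log; everything below except the lazily imported `upload_file` is standard library, and the log file name and repo id are hypothetical:

```py
import json
from pathlib import Path

def remaining_files(all_files, log_path="uploaded.json"):
    """Return the files that are not yet recorded in the local upload log."""
    p = Path(log_path)
    done = set(json.loads(p.read_text())) if p.exists() else set()
    return [f for f in all_files if f not in done]

def mark_uploaded(filename, log_path="uploaded.json"):
    """Record a successful upload so the file is skipped on the next run."""
    p = Path(log_path)
    done = set(json.loads(p.read_text())) if p.exists() else set()
    done.add(filename)
    p.write_text(json.dumps(sorted(done)))

def upload_all(repo_id, files, log_path="uploaded.json"):
    from huggingface_hub import upload_file  # lazy import, sketch only
    for f in remaining_files(files, log_path):
        upload_file(path_or_fileobj=f, path_in_repo=f, repo_id=repo_id)
        mark_uploaded(f, log_path)  # only reached if the upload succeeded
```

If the script crashes halfway, re-running `upload_all()` picks up where it left off instead of re-uploading everything.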


<EditOnGithub source="https://github.com/huggingface/huggingface_hub/blob/main/docs/source/ko/guides/upload.md" />
