# Safetensors

## Docs

- [Safetensors](https://huggingface.co/docs/safetensors/main/index.md)
- [Convert weights to safetensors](https://huggingface.co/docs/safetensors/main/convert-weights.md)
- [Torch shared tensors](https://huggingface.co/docs/safetensors/main/torch_shared_tensors.md)
- [Metadata Parsing](https://huggingface.co/docs/safetensors/main/metadata_parsing.md)
- [Speed Comparison](https://huggingface.co/docs/safetensors/main/speed.md)
- [Tensorflow API[[safetensors.tensorflow.load_file]]](https://huggingface.co/docs/safetensors/main/api/tensorflow.md)
- [Torch API[[safetensors.torch.load_file]]](https://huggingface.co/docs/safetensors/main/api/torch.md)
- [Flax API[[safetensors.flax.load_file]]](https://huggingface.co/docs/safetensors/main/api/flax.md)
- [PaddlePaddle API[[safetensors.paddle.load_file]]](https://huggingface.co/docs/safetensors/main/api/paddle.md)
- [Numpy API[[safetensors.numpy.load_file]]](https://huggingface.co/docs/safetensors/main/api/numpy.md)

### Safetensors
https://huggingface.co/docs/safetensors/main/index.md

# Safetensors

Safetensors is a simple format for storing tensors safely (as opposed to pickle) while still being fast (zero-copy). Safetensors is really [fast 🚀](./speed).

## Installation

With pip:
```bash
pip install safetensors
```

With conda:
```bash
conda install -c huggingface safetensors
```

## Usage

### Load tensors

```python
from safetensors import safe_open

tensors = {}
with safe_open("model.safetensors", framework="pt", device=0) as f:
    for k in f.keys():
        tensors[k] = f.get_tensor(k)
```

Loading only part of the tensors (useful when running on multiple GPUs):

```python
from safetensors import safe_open

tensors = {}
with safe_open("model.safetensors", framework="pt", device=0) as f:
    tensor_slice = f.get_slice("embedding")
    vocab_size, hidden_dim = tensor_slice.get_shape()
    tensor = tensor_slice[:, :hidden_dim]
```

### Save tensors

```python
import torch
from safetensors.torch import save_file

tensors = {
    "embedding": torch.zeros((2, 2)),
    "attention": torch.zeros((2, 3))
}
save_file(tensors, "model.safetensors")
```

## Format

Let's say you have a safetensors file named `model.safetensors`. Internally, it is laid out as follows:

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/safetensors/safetensors-format.svg"/>
</div>
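Concretely, the file is an 8-byte little-endian length prefix, a JSON header describing every tensor, then the raw tensor bytes. Here is a minimal hand-rolled sketch of that layout using only the standard library (illustrative only; in practice the library writes and reads this for you):

```python
import json
import struct

# Build a toy safetensors-style blob by hand: length prefix + JSON header + data.
header = {
    "__metadata__": {"format": "pt"},
    "embedding": {"dtype": "F32", "shape": [2, 2], "data_offsets": [0, 16]},
}
header_bytes = json.dumps(header).encode("utf-8")
blob = struct.pack("<Q", len(header_bytes)) + header_bytes + b"\x00" * 16

# Any reader recovers the header the same way:
(header_len,) = struct.unpack("<Q", blob[:8])
parsed = json.loads(blob[8 : 8 + header_len])
print(parsed["embedding"]["shape"])  # [2, 2]
```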

## Featured Projects

Safetensors is being used widely at leading AI enterprises, such as [Hugging Face](https://huggingface.co/), [EleutherAI](https://www.eleuther.ai/), and [StabilityAI](https://stability.ai/). Here is a non-exhaustive list of projects that are using safetensors:

* [huggingface/transformers](https://github.com/huggingface/transformers)
* [ml-explore/mlx](https://github.com/ml-explore/mlx)
* [huggingface/candle](https://github.com/huggingface/candle)
* [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui)
* [Llama-cpp](https://github.com/ggerganov/llama.cpp/blob/e6a46b0ed1884c77267dc70693183e3b7164e0e0/convert.py#L537)
* [microsoft/TaskMatrix](https://github.com/microsoft/TaskMatrix)
* [hpcaitech/ColossalAI](https://github.com/hpcaitech/ColossalAI)
* [huggingface/pytorch-image-models](https://github.com/huggingface/pytorch-image-models)
* [CivitAI](https://civitai.com/)
* [huggingface/diffusers](https://github.com/huggingface/diffusers)
* [coreylowman/dfdx](https://github.com/coreylowman/dfdx)
* [invoke-ai/InvokeAI](https://github.com/invoke-ai/InvokeAI)
* [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [Sanster/lama-cleaner](https://github.com/Sanster/lama-cleaner)
* [PaddlePaddle/PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP)
* [AIGC-Audio/AudioGPT](https://github.com/AIGC-Audio/AudioGPT)
* [brycedrennan/imaginAIry](https://github.com/brycedrennan/imaginAIry)
* [comfyanonymous/ComfyUI](https://github.com/comfyanonymous/ComfyUI)
* [LianjiaTech/BELLE](https://github.com/LianjiaTech/BELLE)
* [alvarobartt/safejax](https://github.com/alvarobartt/safejax)
* [MaartenGr/BERTopic](https://github.com/MaartenGr/BERTopic)
* [rachthree/safestructures](https://github.com/rachthree/safestructures)
* [justinchuby/onnx-safetensors](https://github.com/justinchuby/onnx-safetensors)



### Convert weights to safetensors
https://huggingface.co/docs/safetensors/main/convert-weights.md

# Convert weights to safetensors

PyTorch model weights are commonly saved and stored as `.bin` files with Python's [`pickle`](https://docs.python.org/3/library/pickle.html) utility. To store your model weights more securely, we recommend converting them to the `.safetensors` format.

The easiest way to convert your model weights is to use the [Convert Space](https://huggingface.co/spaces/safetensors/convert), provided your model weights are already stored on the Hub. The Convert Space downloads the pickled weights, converts them, and opens a Pull Request to upload the newly converted `.safetensors` file to your repository.

<Tip warning={true}>

For larger models, the Space may be a bit slower because its resources are tied up in converting other models. You can also try running the [convert.py](https://github.com/huggingface/safetensors/blob/main/bindings/python/convert.py) script (this is what the Space is running) locally to convert your weights.

Feel free to ping [@Narsil](https://huggingface.co/Narsil) for any issues with the Space.

</Tip>



### Torch shared tensors
https://huggingface.co/docs/safetensors/main/torch_shared_tensors.md

# Torch shared tensors


## TL;DR

Use the dedicated functions below; they should work in most cases, though not without side effects.

```python
from safetensors.torch import load_model, save_model

save_model(model, "model.safetensors")
# Instead of save_file(model.state_dict(), "model.safetensors")

load_model(model, "model.safetensors")
# Instead of model.load_state_dict(load_file("model.safetensors"))
```

## What are shared tensors?

PyTorch uses shared tensors in some computations, which is very useful for reducing memory usage in general.

One classic use case: in transformers, the `embeddings` matrix is shared with `lm_head`. By reusing the same matrix, the model has fewer parameters, and gradients flow much better to the `embeddings`. The `embeddings` sit at the start of the model, where gradients normally don't flow easily, whereas `lm_head` sits at the tail, where gradients are strong; since both names refer to the same tensor, the embeddings benefit from both.


```python
import torch
from torch import nn

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.a = nn.Linear(100, 100)
        self.b = self.a

    def forward(self, x):
        return self.b(self.a(x))


model = Model()
print(model.state_dict().keys())
# odict_keys(['a.weight', 'a.bias', 'b.weight', 'b.bias'])
torch.save(model.state_dict(), "model.bin")
# This file is ~41k instead of ~80k: `a` and `b` are the same weight, so only
# one copy is saved on disk, with both names pointing to the same buffer
```

## Why are shared tensors not saved in `safetensors`?

Multiple reasons for that:

- *Not all frameworks support them.* For instance, `tensorflow` does not.
  So if someone saves shared tensors in torch, there is no way to
  load them in a similar fashion, so we could not keep the same `Dict[str, Tensor]`
  API.
- *It would make lazy loading much harder.*
  Lazy loading is the ability to load only some tensors, or parts of tensors, from
  a given file. This is trivial without tensor sharing, but with tensor sharing:

  ```python
  with safe_open("model.safetensors", framework="pt") as f:
      a = f.get_tensor("a")
      b = f.get_tensor("b")
  ```

  Now it's impossible with this given code to "reshare" buffers after the fact.
  Once we give the `a` tensor we have no way to give back the same memory when
  you ask for `b`. (In this particular example we could keep track of given buffers
  but this is not the case in general, since you could do arbitrary work with `a`
  like sending it to another device before asking for `b`)
- *It can lead to much larger files than necessary*.
  If you save a shared tensor that is only a fraction of a larger tensor,
  saving it with pytorch saves the entire underlying buffer instead of
  just the part that is needed.

  ```python
  a = torch.zeros((100, 100))
  b = a[:1, :]
  torch.save({"b": b}, "model.bin")
  # The file is ~41k instead of the expected ~400 bytes
  # In practice you could end up saving tens of GB instead of 1GB.
  ```

With all those reasons mentioned, nothing here is set in stone. Shared tensors
do not cause unsafety or denial-of-service potential, so this decision could be
revisited if the current workarounds are not satisfactory.

## How does it work?

The design is rather simple.
During `save_model`, we look for all shared tensors, then for all tensors
covering the entire buffer (there can be multiple such tensors).
That gives us multiple names that could be saved; we simply choose the first one.
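The de-duplication step can be illustrated in pure Python (a hypothetical helper, not the real implementation): group tensor names by the identity of their underlying buffer and keep only the first name per buffer.

```python
def pick_unique_names(named_buffers):
    # Keep one name per underlying buffer; the first name encountered wins.
    kept, seen = {}, set()
    for name, buf in named_buffers.items():
        if id(buf) not in seen:
            seen.add(id(buf))
            kept[name] = buf
    return kept

shared = bytearray(8)  # stands in for a storage shared by two parameter names
state = {"a.weight": shared, "b.weight": shared, "c.weight": bytearray(4)}
unique = pick_unique_names(state)
print(sorted(unique))  # ['a.weight', 'c.weight']
```

`b.weight` is dropped because it aliases the same buffer as `a.weight`, mirroring how only one name per shared buffer ends up in the file.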

During `load_model`, the loading works a bit like `load_state_dict`, except that
we inspect the model itself for shared buffers and ignore the
"missing keys" that were in fact covered by buffer sharing (they
were properly loaded, since a shared buffer was loaded under the hood).
Every other error is raised as-is.

**Caveat**: This means some keys are dropped from the file. If you inspect the
keys saved on disk, or load the file with `load_state_dict`, you will see some
"missing tensors". Unless the format starts supporting shared tensors directly,
there is no real way around it.



### Metadata Parsing
https://huggingface.co/docs/safetensors/main/metadata_parsing.md

# Metadata Parsing

Given the simplicity of the format, it is easy and efficient to fetch and parse metadata about Safetensors weights – i.e. the list of tensors, their types, and their shapes or numbers of parameters – using small [(Range) HTTP requests](https://developer.mozilla.org/en-US/docs/Web/HTTP/Range_requests).

This parsing has been implemented in JS in [`huggingface.js`](https://huggingface.co/docs/huggingface.js/main/en/hub/modules#parsesafetensorsmetadata) (sample code follows below), but it would be similar in any language.

## Example use case

There can be many potential use cases. For instance, we use it on the HuggingFace Hub to display info about models which have safetensors weights:

<div class="flex justify-center">
    <img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/safetensors/model-page-light.png"/>
    <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/safetensors/model-page-dark.png"/>
</div>

<div class="flex justify-center">
    <img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/safetensors/view-all-tensors-light.png"/>
    <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/safetensors/view-all-tensors-dark.png"/>
</div>

## Usage

<hfoptions id="metadata">

<hfoption id="http">
From the [🤗 Hub](https://huggingface.co/models), you can get the metadata of a model with [HTTP range requests](https://developer.mozilla.org/en-US/docs/Web/HTTP/Range_requests) instead of downloading the entire safetensors file with all the weights. The example Python script below (any language with HTTP support would work) parses the metadata of [gpt2](https://huggingface.co/gpt2/blob/main/model.safetensors).

```python
import requests # pip install requests
import struct

def parse_single_file(url):
    # Fetch the first 8 bytes of the file
    headers = {'Range': 'bytes=0-7'}
    response = requests.get(url, headers=headers)
    # Interpret the bytes as a little-endian unsigned 64-bit integer
    length_of_header = struct.unpack('<Q', response.content)[0]
    # Fetch length_of_header bytes starting from the 9th byte
    headers = {'Range': f'bytes=8-{7 + length_of_header}'}
    response = requests.get(url, headers=headers)
    # Interpret the response as a JSON object
    header = response.json()
    return header

url = "https://huggingface.co/gpt2/resolve/main/model.safetensors"
header = parse_single_file(url)

print(header)
# {
#   "__metadata__": { "format": "pt" },
#   "h.10.ln_1.weight": {
#     "dtype": "F32",
#     "shape": [768],
#     "data_offsets": [223154176, 223157248]
#   },
#   ...
# }
```
</hfoption>

<hfoption id="javascript">
Using [`huggingface.js`](https://huggingface.co/docs/huggingface.js)

```ts
import { parseSafetensorsMetadata } from "@huggingface/hub";

const info = await parseSafetensorsMetadata({
	repo: { type: "model", name: "bigscience/bloom" },
});

console.log(info)
// {
//   sharded: true,
//   index: {
//     metadata: { total_size: 352494542848 },
//     weight_map: {
//       'h.0.input_layernorm.bias': 'model_00002-of-00072.safetensors',
//       ...
//     }
//   },
//   headers: {
//     __metadata__: {'format': 'pt'},
//     'h.2.attn.c_attn.weight': {'dtype': 'F32', 'shape': [768, 2304], 'data_offsets': [541012992, 548090880]},
//     ...
//   }
// }
```

Depending on whether the safetensors weights are sharded into multiple files or not, the output of the call above will be:

```ts
export type SafetensorsParseFromRepo =
| {
		sharded: false;
		header: SafetensorsFileHeader;
	}
| {
		sharded: true;
		index: SafetensorsIndexJson;
		headers: SafetensorsShardedHeaders;
	};
```

where the underlying types are the following:

```ts
type FileName = string;

type TensorName = string;
type Dtype = "F64" | "F32" | "F16" | "BF16" | "I64" | "I32" | "I16" | "I8" | "U8" | "BOOL";

interface TensorInfo {
	dtype: Dtype;
	shape: number[];
	data_offsets: [number, number];
}

type SafetensorsFileHeader = Record<TensorName, TensorInfo> & {
	__metadata__: Record<string, string>;
};

interface SafetensorsIndexJson {
	weight_map: Record<TensorName, FileName>;
}

export type SafetensorsShardedHeaders = Record<FileName, SafetensorsFileHeader>;
```
</hfoption>

<hfoption id="python">
[`huggingface_hub`](https://huggingface.co/docs/huggingface_hub) provides a Python API to parse safetensors metadata.
Use [`get_safetensors_metadata`](https://huggingface.co/docs/huggingface_hub/package_reference/hf_api#huggingface_hub.HfApi.get_safetensors_metadata) to get all safetensors metadata of a model.
Depending on whether the model is sharded or not, one or multiple safetensors files will be parsed.

```python
>>> from huggingface_hub import get_safetensors_metadata

# Parse repo with single weights file
>>> metadata = get_safetensors_metadata("bigscience/bloomz-560m")
>>> metadata
SafetensorsRepoMetadata(
    metadata=None,
    sharded=False,
    weight_map={'h.0.input_layernorm.bias': 'model.safetensors', ...},
    files_metadata={'model.safetensors': SafetensorsFileMetadata(...)}
)
>>> metadata.files_metadata["model.safetensors"].metadata
{'format': 'pt'}

# Parse repo with sharded model (i.e. multiple weights files)
>>> metadata = get_safetensors_metadata("bigscience/bloom")
Parse safetensors files: 100%|██████████████████████████████████████████| 72/72 [00:12<00:00,  5.78it/s]
>>> metadata
SafetensorsRepoMetadata(metadata={'total_size': 352494542848}, sharded=True, weight_map={...}, files_metadata={...})
>>> len(metadata.files_metadata)
72  # All safetensors files have been fetched

# Parse repo that is not a safetensors repo
>>> get_safetensors_metadata("runwayml/stable-diffusion-v1-5")
NotASafetensorsRepoError: 'runwayml/stable-diffusion-v1-5' is not a safetensors repo. Couldn't find 'model.safetensors.index.json' or 'model.safetensors' files.
```

To parse the metadata of a single safetensors file, use [`parse_safetensors_file_metadata`](https://huggingface.co/docs/huggingface_hub/package_reference/hf_api#huggingface_hub.HfApi.parse_safetensors_file_metadata).
</hfoption>

</hfoptions>

## Example output

For instance, here are the parameter counts per dtype for a few models on the HuggingFace Hub. Also see [this issue](https://github.com/huggingface/safetensors/issues/44) for more usage examples.

model | safetensors | params
--- | --- | ---
[gpt2](https://huggingface.co/gpt2?show_tensors=true) | single-file | { 'F32' => 137022720 }
[roberta-base](https://huggingface.co/roberta-base?show_tensors=true) | single-file | { 'F32' => 124697433, 'I64' => 514 }
[Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner?show_tensors=true) | single-file | { 'F32' => 110035205, 'I64' => 514 }
[roberta-large](https://huggingface.co/roberta-large?show_tensors=true) | single-file | { 'F32' => 355412057, 'I64' => 514 }
[distilbert-base-german-cased](https://huggingface.co/distilbert-base-german-cased?show_tensors=true) | single-file | { 'F32' => 67431550 }
[EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b?show_tensors=true) | sharded | { 'F16' => 20554568208, 'U8' => 184549376 }
[bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m?show_tensors=true) | single-file | { 'F16' => 559214592 }
[bigscience/bloom](https://huggingface.co/bigscience/bloom?show_tensors=true) | sharded | { 'BF16' => 176247271424 }
[bigscience/bloom-3b](https://huggingface.co/bigscience/bloom-3b?show_tensors=true) | single-file | { 'F16' => 3002557440 }
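These per-dtype counts can be derived from the parsed header alone, since the header lists every tensor's dtype and shape. A minimal sketch (the sample header below is made up for illustration, not fetched from the Hub):

```python
from collections import defaultdict

def params_by_dtype(header):
    # Multiply out each tensor's shape, skipping the __metadata__ entry.
    counts = defaultdict(int)
    for name, info in header.items():
        if name == "__metadata__":
            continue
        n = 1
        for dim in info["shape"]:
            n *= dim
        counts[info["dtype"]] += n
    return dict(counts)

sample_header = {
    "__metadata__": {"format": "pt"},
    "wte.weight": {"dtype": "F32", "shape": [50257, 768], "data_offsets": [0, 154389504]},
    "ln_f.bias": {"dtype": "F32", "shape": [768], "data_offsets": [154389504, 154392576]},
}
print(params_by_dtype(sample_header))  # {'F32': 38598144}
```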



### Speed Comparison
https://huggingface.co/docs/safetensors/main/speed.md

# Speed Comparison

<a href="https://colab.research.google.com/github/huggingface/notebooks/blob/main/safetensors_doc/en/speed.ipynb" target="_blank" class="absolute z-10 right-0 top-0">
    <img
        alt="Open In Colab"
        class="!m-0"
        src="https://colab.research.google.com/assets/colab-badge.svg"
    />
</a>

`Safetensors` is really fast. Let's compare it against `PyTorch` by loading [gpt2](https://huggingface.co/gpt2) weights. To run the [GPU benchmark](#gpu-benchmark), make sure your machine has a GPU, or that you have selected the `GPU runtime` if you are using Google Colab.

Before you begin, make sure you have all the necessary libraries installed:

```bash
pip install safetensors huggingface_hub torch
```

Let's start by importing all the packages that will be used:

```py
>>> import os
>>> import datetime
>>> from huggingface_hub import hf_hub_download
>>> from safetensors.torch import load_file
>>> import torch
```
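The benchmarks below measure elapsed wall-clock time with `datetime`; the same pattern can be factored into a small helper (hypothetical name; `time.perf_counter` offers higher resolution than `datetime.now`):

```python
import time

def timed(fn, *args, **kwargs):
    # Run fn once; return its result and the elapsed wall-clock seconds.
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# e.g. weights, elapsed = timed(load_file, sf_filename, device="cpu")
total, elapsed = timed(sum, range(1_000))
print(total)  # 499500
```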

Download safetensors & torch weights for gpt2:

```py
>>> sf_filename = hf_hub_download("gpt2", filename="model.safetensors")
>>> pt_filename = hf_hub_download("gpt2", filename="pytorch_model.bin")
```

### CPU benchmark

```py
>>> start_st = datetime.datetime.now()
>>> weights = load_file(sf_filename, device="cpu")
>>> load_time_st = datetime.datetime.now() - start_st
>>> print(f"Loaded safetensors {load_time_st}")

>>> start_pt = datetime.datetime.now()
>>> weights = torch.load(pt_filename, map_location="cpu")
>>> load_time_pt = datetime.datetime.now() - start_pt
>>> print(f"Loaded pytorch {load_time_pt}")

>>> print(f"on CPU, safetensors is faster than pytorch by: {load_time_pt/load_time_st:.1f} X")
Loaded safetensors 0:00:00.004015
Loaded pytorch 0:00:00.307460
on CPU, safetensors is faster than pytorch by: 76.6 X
```

This speedup is due to the fact that this library avoids unnecessary copies by memory-mapping the file directly. The same is actually possible in [pure pytorch](https://gist.github.com/Narsil/3edeec2669a5e94e4707aa0f901d2282).
The speedup shown above was measured on:
* OS: Ubuntu 18.04.6 LTS
* CPU: Intel(R) Xeon(R) CPU @ 2.00GHz
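The memory-mapping idea itself can be seen with just the standard library; a toy sketch (temporary file and contents are made up for illustration):

```python
import mmap
import os
import struct
import tempfile

# Write four little-endian float32 values to a temporary file.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(struct.pack("<4f", 1.0, 2.0, 3.0, 4.0))

# Map the file and read directly from the mapping: the OS pages data in on
# demand instead of copying the whole file into a Python buffer first.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        values = struct.unpack_from("<4f", mm, 0)
os.unlink(path)
print(values)  # (1.0, 2.0, 3.0, 4.0)
```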


### GPU benchmark

```py
>>> # This is required because this feature hasn't been fully verified yet, but 
>>> # it's been tested on many different environments
>>> os.environ["SAFETENSORS_FAST_GPU"] = "1"

>>> # CUDA startup out of the measurement
>>> torch.zeros((2, 2)).cuda()

>>> start_st = datetime.datetime.now()
>>> weights = load_file(sf_filename, device="cuda:0")
>>> load_time_st = datetime.datetime.now() - start_st
>>> print(f"Loaded safetensors {load_time_st}")

>>> start_pt = datetime.datetime.now()
>>> weights = torch.load(pt_filename, map_location="cuda:0")
>>> load_time_pt = datetime.datetime.now() - start_pt
>>> print(f"Loaded pytorch {load_time_pt}")

>>> print(f"on GPU, safetensors is faster than pytorch by: {load_time_pt/load_time_st:.1f} X")
Loaded safetensors 0:00:00.165206
Loaded pytorch 0:00:00.353889
on GPU, safetensors is faster than pytorch by: 2.1 X
```

The speedup works because this library is able to skip unnecessary CPU allocations. As far as we know, it is not replicable in pure pytorch. The library works by memory-mapping the file, creating an empty tensor with pytorch, and calling `cudaMemcpy` directly to move the data onto the GPU.
The speedup shown above was measured on:
* OS: Ubuntu 18.04.6 LTS
* GPU: Tesla T4
* Driver Version: 460.32.03
* CUDA Version: 11.2



### Tensorflow API[[safetensors.tensorflow.load_file]]
https://huggingface.co/docs/safetensors/main/api/tensorflow.md

# Tensorflow API[[safetensors.tensorflow.load_file]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>safetensors.tensorflow.load_file</name><anchor>safetensors.tensorflow.load_file</anchor><source>https://github.com/huggingface/safetensors/blob/main/bindings/python/py_src/safetensors/tensorflow.py#L103</source><parameters>[{"name": "filename", "val": ": typing.Union[str, os.PathLike]"}]</parameters><paramsdesc>- **filename** (`str`, or `os.PathLike`) --
  The name of the file which contains the tensors</paramsdesc><paramgroups>0</paramgroups><rettype>`Dict[str, tf.Tensor]`</rettype><retdesc>dictionary that contains name as key, value as `tf.Tensor`</retdesc></docstring>

Loads a safetensors file into tensorflow format.







<ExampleCodeBlock anchor="safetensors.tensorflow.load_file.example">

Example:

```python
from safetensors.tensorflow import load_file

file_path = "./my_folder/bert.safetensors"
loaded = load_file(file_path)
```

</ExampleCodeBlock>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>safetensors.tensorflow.load</name><anchor>safetensors.tensorflow.load</anchor><source>https://github.com/huggingface/safetensors/blob/main/bindings/python/py_src/safetensors/tensorflow.py#L76</source><parameters>[{"name": "data", "val": ": bytes"}]</parameters><paramsdesc>- **data** (`bytes`) --
  The content of a safetensors file</paramsdesc><paramgroups>0</paramgroups><rettype>`Dict[str, tf.Tensor]`</rettype><retdesc>dictionary that contains name as key, value as `tf.Tensor` on cpu</retdesc></docstring>

Loads a safetensors file into tensorflow format from pure bytes.







<ExampleCodeBlock anchor="safetensors.tensorflow.load.example">

Example:

```python
from safetensors.tensorflow import load

file_path = "./my_folder/bert.safetensors"
with open(file_path, "rb") as f:
    data = f.read()

loaded = load(data)
```

</ExampleCodeBlock>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>safetensors.tensorflow.save_file</name><anchor>safetensors.tensorflow.save_file</anchor><source>https://github.com/huggingface/safetensors/blob/main/bindings/python/py_src/safetensors/tensorflow.py#L41</source><parameters>[{"name": "tensors", "val": ": typing.Dict[str, tensorflow.python.framework.tensor.Tensor]"}, {"name": "filename", "val": ": typing.Union[str, os.PathLike]"}, {"name": "metadata", "val": ": typing.Optional[typing.Dict[str, str]] = None"}]</parameters><paramsdesc>- **tensors** (`Dict[str, tf.Tensor]`) --
  The incoming tensors. Tensors need to be contiguous and dense.
- **filename** (`str`, or `os.PathLike`) --
  The filename we're saving into.
- **metadata** (`Dict[str, str]`, *optional*, defaults to `None`) --
  Optional text only metadata you might want to save in your header.
  For instance it can be useful to specify more about the underlying
  tensors. This is purely informative and does not affect tensor loading.</paramsdesc><paramgroups>0</paramgroups><rettype>`None`</rettype></docstring>

Saves a dictionary of tensors into a file in safetensors format.







<ExampleCodeBlock anchor="safetensors.tensorflow.save_file.example">

Example:

```python
from safetensors.tensorflow import save_file
import tensorflow as tf

tensors = {"embedding": tf.zeros((512, 1024)), "attention": tf.zeros((256, 256))}
save_file(tensors, "model.safetensors")
```

</ExampleCodeBlock>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>safetensors.tensorflow.save</name><anchor>safetensors.tensorflow.save</anchor><source>https://github.com/huggingface/safetensors/blob/main/bindings/python/py_src/safetensors/tensorflow.py#L10</source><parameters>[{"name": "tensors", "val": ": typing.Dict[str, tensorflow.python.framework.tensor.Tensor]"}, {"name": "metadata", "val": ": typing.Optional[typing.Dict[str, str]] = None"}]</parameters><paramsdesc>- **tensors** (`Dict[str, tf.Tensor]`) --
  The incoming tensors. Tensors need to be contiguous and dense.
- **metadata** (`Dict[str, str]`, *optional*, defaults to `None`) --
  Optional text only metadata you might want to save in your header.
  For instance it can be useful to specify more about the underlying
  tensors. This is purely informative and does not affect tensor loading.</paramsdesc><paramgroups>0</paramgroups><rettype>`bytes`</rettype><retdesc>The raw bytes representing the format</retdesc></docstring>

Saves a dictionary of tensors into raw bytes in safetensors format.







<ExampleCodeBlock anchor="safetensors.tensorflow.save.example">

Example:

```python
from safetensors.tensorflow import save
import tensorflow as tf

tensors = {"embedding": tf.zeros((512, 1024)), "attention": tf.zeros((256, 256))}
byte_data = save(tensors)
```

</ExampleCodeBlock>


</div>


### Torch API[[safetensors.torch.load_file]]
https://huggingface.co/docs/safetensors/main/api/torch.md

# Torch API[[safetensors.torch.load_file]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>safetensors.torch.load_file</name><anchor>safetensors.torch.load_file</anchor><source>https://github.com/huggingface/safetensors/blob/main/bindings/python/py_src/safetensors/torch.py#L310</source><parameters>[{"name": "filename", "val": ": typing.Union[str, os.PathLike]"}, {"name": "device", "val": ": typing.Union[str, int] = 'cpu'"}]</parameters><paramsdesc>- **filename** (`str`, or `os.PathLike`) --
  The name of the file which contains the tensors
- **device** (`Union[str, int]`, *optional*, defaults to `cpu`) --
  The device where the tensors need to be located after load.
  available options are all regular torch device locations.</paramsdesc><paramgroups>0</paramgroups><rettype>`Dict[str, torch.Tensor]`</rettype><retdesc>dictionary that contains name as key, value as `torch.Tensor`</retdesc></docstring>

Loads a safetensors file into torch format.







<ExampleCodeBlock anchor="safetensors.torch.load_file.example">

Example:

```python
from safetensors.torch import load_file

file_path = "./my_folder/bert.safetensors"
loaded = load_file(file_path)
```

</ExampleCodeBlock>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>safetensors.torch.load</name><anchor>safetensors.torch.load</anchor><source>https://github.com/huggingface/safetensors/blob/main/bindings/python/py_src/safetensors/torch.py#L342</source><parameters>[{"name": "data", "val": ": bytes"}]</parameters><paramsdesc>- **data** (`bytes`) --
  The content of a safetensors file</paramsdesc><paramgroups>0</paramgroups><rettype>`Dict[str, torch.Tensor]`</rettype><retdesc>dictionary that contains name as key, value as `torch.Tensor` on cpu</retdesc></docstring>

Loads a safetensors file into torch format from pure bytes.







<ExampleCodeBlock anchor="safetensors.torch.load.example">

Example:

```python
from safetensors.torch import load

file_path = "./my_folder/bert.safetensors"
with open(file_path, "rb") as f:
    data = f.read()

loaded = load(data)
```

</ExampleCodeBlock>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>safetensors.torch.save_file</name><anchor>safetensors.torch.save_file</anchor><source>https://github.com/huggingface/safetensors/blob/main/bindings/python/py_src/safetensors/torch.py#L276</source><parameters>[{"name": "tensors", "val": ": typing.Dict[str, torch.Tensor]"}, {"name": "filename", "val": ": typing.Union[str, os.PathLike]"}, {"name": "metadata", "val": ": typing.Optional[typing.Dict[str, str]] = None"}]</parameters><paramsdesc>- **tensors** (`Dict[str, torch.Tensor]`) --
  The incoming tensors. Tensors need to be contiguous and dense.
- **filename** (`str`, or `os.PathLike`) --
  The filename we're saving into.
- **metadata** (`Dict[str, str]`, *optional*, defaults to `None`) --
  Optional text only metadata you might want to save in your header.
  For instance it can be useful to specify more about the underlying
  tensors. This is purely informative and does not affect tensor loading.</paramsdesc><paramgroups>0</paramgroups><rettype>`None`</rettype></docstring>

Saves a dictionary of tensors into a file in safetensors format.







<ExampleCodeBlock anchor="safetensors.torch.save_file.example">

Example:

```python
from safetensors.torch import save_file
import torch

tensors = {"embedding": torch.zeros((512, 1024)), "attention": torch.zeros((256, 256))}
save_file(tensors, "model.safetensors")
```

</ExampleCodeBlock>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>safetensors.torch.save</name><anchor>safetensors.torch.save</anchor><source>https://github.com/huggingface/safetensors/blob/main/bindings/python/py_src/safetensors/torch.py#L244</source><parameters>[{"name": "tensors", "val": ": typing.Dict[str, torch.Tensor]"}, {"name": "metadata", "val": ": typing.Optional[typing.Dict[str, str]] = None"}]</parameters><paramsdesc>- **tensors** (`Dict[str, torch.Tensor]`) --
  The incoming tensors. Tensors need to be contiguous and dense.
- **metadata** (`Dict[str, str]`, *optional*, defaults to `None`) --
  Optional text only metadata you might want to save in your header.
  For instance it can be useful to specify more about the underlying
  tensors. This is purely informative and does not affect tensor loading.</paramsdesc><paramgroups>0</paramgroups><rettype>`bytes`</rettype><retdesc>The raw bytes representing the format</retdesc></docstring>

Saves a dictionary of tensors into raw bytes in safetensors format.







<ExampleCodeBlock anchor="safetensors.torch.save.example">

Example:

```python
from safetensors.torch import save
import torch

tensors = {"embedding": torch.zeros((512, 1024)), "attention": torch.zeros((256, 256))}
byte_data = save(tensors)
```

</ExampleCodeBlock>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>safetensors.torch.load_model</name><anchor>safetensors.torch.load_model</anchor><source>https://github.com/huggingface/safetensors/blob/main/bindings/python/py_src/safetensors/torch.py#L190</source><parameters>[{"name": "model", "val": ": Module"}, {"name": "filename", "val": ": typing.Union[str, os.PathLike]"}, {"name": "strict", "val": ": bool = True"}, {"name": "device", "val": ": typing.Union[str, int] = 'cpu'"}]</parameters><paramsdesc>- **model** (`torch.nn.Module`) --
  The model to load onto.
- **filename** (`str`, or `os.PathLike`) --
  The filename location to load the file from.
- **strict** (`bool`, *optional*, defaults to `True`) --
  Whether to fail if keys are missing or unexpected ones are present.
  When false, the function simply returns the missing and unexpected names.
- **device** (`Union[str, int]`, *optional*, defaults to `cpu`) --
  The device where the tensors need to be located after load.
  Available options are all regular torch device locations.</paramsdesc><paramgroups>0</paramgroups><rettype>`(missing, unexpected)`</rettype><retdesc>`(List[str], List[str])`
`missing` are names in the model which were not modified during loading.
`unexpected` are names in the file that were not used during
the load.</retdesc></docstring>

Loads the tensors from a given file onto a torch model.
This method exists specifically to work around tensor sharing, which is
not allowed in `safetensors`. [More information on tensor sharing](../torch_shared_tensors)








</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>safetensors.torch.save_model</name><anchor>safetensors.torch.save_model</anchor><source>https://github.com/huggingface/safetensors/blob/main/bindings/python/py_src/safetensors/torch.py#L141</source><parameters>[{"name": "model", "val": ": Module"}, {"name": "filename", "val": ": str"}, {"name": "metadata", "val": ": typing.Optional[typing.Dict[str, str]] = None"}, {"name": "force_contiguous", "val": ": bool = True"}]</parameters><paramsdesc>- **model** (`torch.nn.Module`) --
  The model to save on disk.
- **filename** (`str`) --
  The filename location to save the file.
- **metadata** (`Dict[str, str]`, *optional*) --
  Extra information to save along with the file.
  Some metadata will be added for each dropped tensor.
  This information will not be enough to recover the entire
  shared structure, but it might help in understanding it.
- **force_contiguous** (`boolean`, *optional*, defaults to `True`) --
  Whether to force the state_dict to be saved as contiguous tensors.
  This has no effect on the correctness of the model, but it
  could potentially change performance if a tensor's layout
  was chosen deliberately for that reason.</paramsdesc><paramgroups>0</paramgroups></docstring>

Saves a given torch model to the specified filename.
This method exists specifically to work around tensor sharing, which is
not allowed in `safetensors`. [More information on tensor sharing](../torch_shared_tensors)




</div>

<EditOnGithub source="https://github.com/huggingface/safetensors/blob/main/docs/source/api/torch.mdx" />

### Flax API[[safetensors.flax.load_file]]
https://huggingface.co/docs/safetensors/main/api/flax.md

# Flax API[[safetensors.flax.load_file]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>safetensors.flax.load_file</name><anchor>safetensors.flax.load_file</anchor><source>https://github.com/huggingface/safetensors/blob/main/bindings/python/py_src/safetensors/flax.py#L102</source><parameters>[{"name": "filename", "val": ": typing.Union[str, os.PathLike]"}]</parameters><paramsdesc>- **filename** (`str`, or `os.PathLike`) --
  The name of the file which contains the tensors</paramsdesc><paramgroups>0</paramgroups><rettype>`Dict[str, Array]`</rettype><retdesc>dictionary that contains name as key, value as `Array`</retdesc></docstring>

Loads a safetensors file into flax format.







<ExampleCodeBlock anchor="safetensors.flax.load_file.example">

Example:

```python
from safetensors.flax import load_file

file_path = "./my_folder/bert.safetensors"
loaded = load_file(file_path)
```

</ExampleCodeBlock>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>safetensors.flax.load</name><anchor>safetensors.flax.load</anchor><source>https://github.com/huggingface/safetensors/blob/main/bindings/python/py_src/safetensors/flax.py#L75</source><parameters>[{"name": "data", "val": ": bytes"}]</parameters><paramsdesc>- **data** (`bytes`) --
  The content of a safetensors file</paramsdesc><paramgroups>0</paramgroups><rettype>`Dict[str, Array]`</rettype><retdesc>dictionary that contains name as key, value as `Array` on cpu</retdesc></docstring>

Loads a safetensors file into flax format from pure bytes.







<ExampleCodeBlock anchor="safetensors.flax.load.example">

Example:

```python
from safetensors.flax import load

file_path = "./my_folder/bert.safetensors"
with open(file_path, "rb") as f:
    data = f.read()

loaded = load(data)
```

</ExampleCodeBlock>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>safetensors.flax.save_file</name><anchor>safetensors.flax.save_file</anchor><source>https://github.com/huggingface/safetensors/blob/main/bindings/python/py_src/safetensors/flax.py#L40</source><parameters>[{"name": "tensors", "val": ": typing.Dict[str, jax.Array]"}, {"name": "filename", "val": ": typing.Union[str, os.PathLike]"}, {"name": "metadata", "val": ": typing.Optional[typing.Dict[str, str]] = None"}]</parameters><paramsdesc>- **tensors** (`Dict[str, Array]`) --
  The incoming tensors. Tensors need to be contiguous and dense.
- **filename** (`str`, or `os.PathLike`) --
  The filename we're saving into.
- **metadata** (`Dict[str, str]`, *optional*, defaults to `None`) --
  Optional text-only metadata you might want to save in your header.
  For instance, it can be useful to specify more about the underlying
  tensors. This is purely informative and does not affect tensor loading.</paramsdesc><paramgroups>0</paramgroups><rettype>`None`</rettype></docstring>

Saves a dictionary of tensors into a file in safetensors format.







<ExampleCodeBlock anchor="safetensors.flax.save_file.example">

Example:

```python
from safetensors.flax import save_file
from jax import numpy as jnp

tensors = {"embedding": jnp.zeros((512, 1024)), "attention": jnp.zeros((256, 256))}
save_file(tensors, "model.safetensors")
```

</ExampleCodeBlock>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>safetensors.flax.save</name><anchor>safetensors.flax.save</anchor><source>https://github.com/huggingface/safetensors/blob/main/bindings/python/py_src/safetensors/flax.py#L11</source><parameters>[{"name": "tensors", "val": ": typing.Dict[str, jax.Array]"}, {"name": "metadata", "val": ": typing.Optional[typing.Dict[str, str]] = None"}]</parameters><paramsdesc>- **tensors** (`Dict[str, Array]`) --
  The incoming tensors. Tensors need to be contiguous and dense.
- **metadata** (`Dict[str, str]`, *optional*, defaults to `None`) --
  Optional text-only metadata you might want to save in your header.
  For instance, it can be useful to specify more about the underlying
  tensors. This is purely informative and does not affect tensor loading.</paramsdesc><paramgroups>0</paramgroups><rettype>`bytes`</rettype><retdesc>The raw bytes representing the format</retdesc></docstring>

Saves a dictionary of tensors into raw bytes in safetensors format.







<ExampleCodeBlock anchor="safetensors.flax.save.example">

Example:

```python
from safetensors.flax import save
from jax import numpy as jnp

tensors = {"embedding": jnp.zeros((512, 1024)), "attention": jnp.zeros((256, 256))}
byte_data = save(tensors)
```

</ExampleCodeBlock>


</div>

<EditOnGithub source="https://github.com/huggingface/safetensors/blob/main/docs/source/api/flax.mdx" />

### PaddlePaddle API[[safetensors.paddle.load_file]]
https://huggingface.co/docs/safetensors/main/api/paddle.md

# PaddlePaddle API[[safetensors.paddle.load_file]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>safetensors.paddle.load_file</name><anchor>safetensors.paddle.load_file</anchor><source>https://github.com/huggingface/safetensors/blob/main/bindings/python/py_src/safetensors/paddle.py#L108</source><parameters>[{"name": "filename", "val": ": typing.Union[str, os.PathLike]"}, {"name": "device", "val": " = 'cpu'"}]</parameters><paramsdesc>- **filename** (`str`, or `os.PathLike`) --
  The name of the file which contains the tensors
- **device** (`Union[Dict[str, any], str]`, *optional*, defaults to `cpu`) --
  The device where the tensors need to be located after load.
  Available options are all regular paddle device locations.</paramsdesc><paramgroups>0</paramgroups><rettype>`Dict[str, paddle.Tensor]`</rettype><retdesc>dictionary that contains name as key, value as `paddle.Tensor`</retdesc></docstring>

Loads a safetensors file into paddle format.







<ExampleCodeBlock anchor="safetensors.paddle.load_file.example">

Example:

```python
from safetensors.paddle import load_file

file_path = "./my_folder/bert.safetensors"
loaded = load_file(file_path)
```

</ExampleCodeBlock>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>safetensors.paddle.load</name><anchor>safetensors.paddle.load</anchor><source>https://github.com/huggingface/safetensors/blob/main/bindings/python/py_src/safetensors/paddle.py#L77</source><parameters>[{"name": "data", "val": ": bytes"}, {"name": "device", "val": ": str = 'cpu'"}]</parameters><paramsdesc>- **data** (`bytes`) --
  The content of a safetensors file
- **device** (`str`, *optional*, defaults to `cpu`) --
  The device where the tensors need to be located after load.</paramsdesc><paramgroups>0</paramgroups><rettype>`Dict[str, paddle.Tensor]`</rettype><retdesc>dictionary that contains name as key, value as `paddle.Tensor` on cpu</retdesc></docstring>

Loads a safetensors file into paddle format from pure bytes.







<ExampleCodeBlock anchor="safetensors.paddle.load.example">

Example:

```python
from safetensors.paddle import load

file_path = "./my_folder/bert.safetensors"
with open(file_path, "rb") as f:
    data = f.read()

loaded = load(data)
```

</ExampleCodeBlock>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>safetensors.paddle.save_file</name><anchor>safetensors.paddle.save_file</anchor><source>https://github.com/huggingface/safetensors/blob/main/bindings/python/py_src/safetensors/paddle.py#L43</source><parameters>[{"name": "tensors", "val": ": typing.Dict[str, paddle.Tensor]"}, {"name": "filename", "val": ": typing.Union[str, os.PathLike]"}, {"name": "metadata", "val": ": typing.Optional[typing.Dict[str, str]] = None"}]</parameters><paramsdesc>- **tensors** (`Dict[str, paddle.Tensor]`) --
  The incoming tensors. Tensors need to be contiguous and dense.
- **filename** (`str`, or `os.PathLike`) --
  The filename we're saving into.
- **metadata** (`Dict[str, str]`, *optional*, defaults to `None`) --
  Optional text-only metadata you might want to save in your header.
  For instance, it can be useful to specify more about the underlying
  tensors. This is purely informative and does not affect tensor loading.</paramsdesc><paramgroups>0</paramgroups><rettype>`None`</rettype></docstring>

Saves a dictionary of tensors into a file in safetensors format.







<ExampleCodeBlock anchor="safetensors.paddle.save_file.example">

Example:

```python
from safetensors.paddle import save_file
import paddle

tensors = {"embedding": paddle.zeros((512, 1024)), "attention": paddle.zeros((256, 256))}
save_file(tensors, "model.safetensors")
```

</ExampleCodeBlock>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>safetensors.paddle.save</name><anchor>safetensors.paddle.save</anchor><source>https://github.com/huggingface/safetensors/blob/main/bindings/python/py_src/safetensors/paddle.py#L11</source><parameters>[{"name": "tensors", "val": ": typing.Dict[str, paddle.Tensor]"}, {"name": "metadata", "val": ": typing.Optional[typing.Dict[str, str]] = None"}]</parameters><paramsdesc>- **tensors** (`Dict[str, paddle.Tensor]`) --
  The incoming tensors. Tensors need to be contiguous and dense.
- **metadata** (`Dict[str, str]`, *optional*, defaults to `None`) --
  Optional text-only metadata you might want to save in your header.
  For instance, it can be useful to specify more about the underlying
  tensors. This is purely informative and does not affect tensor loading.</paramsdesc><paramgroups>0</paramgroups><rettype>`bytes`</rettype><retdesc>The raw bytes representing the format</retdesc></docstring>

Saves a dictionary of tensors into raw bytes in safetensors format.







<ExampleCodeBlock anchor="safetensors.paddle.save.example">

Example:

```python
from safetensors.paddle import save
import paddle

tensors = {"embedding": paddle.zeros((512, 1024)), "attention": paddle.zeros((256, 256))}
byte_data = save(tensors)
```

</ExampleCodeBlock>


</div>

<EditOnGithub source="https://github.com/huggingface/safetensors/blob/main/docs/source/api/paddle.mdx" />

### Numpy API[[safetensors.numpy.load_file]]
https://huggingface.co/docs/safetensors/main/api/numpy.md

# Numpy API[[safetensors.numpy.load_file]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>safetensors.numpy.load_file</name><anchor>safetensors.numpy.load_file</anchor><source>https://github.com/huggingface/safetensors/blob/main/bindings/python/py_src/safetensors/numpy.py#L117</source><parameters>[{"name": "filename", "val": ": typing.Union[str, os.PathLike]"}]</parameters><paramsdesc>- **filename** (`str`, or `os.PathLike`) --
  The name of the file which contains the tensors</paramsdesc><paramgroups>0</paramgroups><rettype>`Dict[str, np.ndarray]`</rettype><retdesc>dictionary that contains name as key, value as `np.ndarray`</retdesc></docstring>

Loads a safetensors file into numpy format.







<ExampleCodeBlock anchor="safetensors.numpy.load_file.example">

Example:

```python
from safetensors.numpy import load_file

file_path = "./my_folder/bert.safetensors"
loaded = load_file(file_path)
```

</ExampleCodeBlock>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>safetensors.numpy.load</name><anchor>safetensors.numpy.load</anchor><source>https://github.com/huggingface/safetensors/blob/main/bindings/python/py_src/safetensors/numpy.py#L90</source><parameters>[{"name": "data", "val": ": bytes"}]</parameters><paramsdesc>- **data** (`bytes`) --
  The content of a safetensors file</paramsdesc><paramgroups>0</paramgroups><rettype>`Dict[str, np.ndarray]`</rettype><retdesc>dictionary that contains name as key, value as `np.ndarray` on cpu</retdesc></docstring>

Loads a safetensors file into numpy format from pure bytes.







<ExampleCodeBlock anchor="safetensors.numpy.load.example">

Example:

```python
from safetensors.numpy import load

file_path = "./my_folder/bert.safetensors"
with open(file_path, "rb") as f:
    data = f.read()

loaded = load(data)
```

</ExampleCodeBlock>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>safetensors.numpy.save_file</name><anchor>safetensors.numpy.save_file</anchor><source>https://github.com/huggingface/safetensors/blob/main/bindings/python/py_src/safetensors/numpy.py#L52</source><parameters>[{"name": "tensor_dict", "val": ": typing.Dict[str, numpy.ndarray]"}, {"name": "filename", "val": ": typing.Union[str, os.PathLike]"}, {"name": "metadata", "val": ": typing.Optional[typing.Dict[str, str]] = None"}]</parameters><paramsdesc>- **tensor_dict** (`Dict[str, np.ndarray]`) --
  The incoming tensors. Tensors need to be contiguous and dense.
- **filename** (`str`, or `os.PathLike`) --
  The filename we're saving into.
- **metadata** (`Dict[str, str]`, *optional*, defaults to `None`) --
  Optional text-only metadata you might want to save in your header.
  For instance, it can be useful to specify more about the underlying
  tensors. This is purely informative and does not affect tensor loading.</paramsdesc><paramgroups>0</paramgroups><rettype>`None`</rettype></docstring>

Saves a dictionary of tensors into a file in safetensors format.







<ExampleCodeBlock anchor="safetensors.numpy.save_file.example">

Example:

```python
from safetensors.numpy import save_file
import numpy as np

tensors = {"embedding": np.zeros((512, 1024)), "attention": np.zeros((256, 256))}
save_file(tensors, "model.safetensors")
```

</ExampleCodeBlock>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>safetensors.numpy.save</name><anchor>safetensors.numpy.save</anchor><source>https://github.com/huggingface/safetensors/blob/main/bindings/python/py_src/safetensors/numpy.py#L16</source><parameters>[{"name": "tensor_dict", "val": ": typing.Dict[str, numpy.ndarray]"}, {"name": "metadata", "val": ": typing.Optional[typing.Dict[str, str]] = None"}]</parameters><paramsdesc>- **tensor_dict** (`Dict[str, np.ndarray]`) --
  The incoming tensors. Tensors need to be contiguous and dense.
- **metadata** (`Dict[str, str]`, *optional*, defaults to `None`) --
  Optional text-only metadata you might want to save in your header.
  For instance, it can be useful to specify more about the underlying
  tensors. This is purely informative and does not affect tensor loading.</paramsdesc><paramgroups>0</paramgroups><rettype>`bytes`</rettype><retdesc>The raw bytes representing the format</retdesc></docstring>

Saves a dictionary of tensors into raw bytes in safetensors format.







<ExampleCodeBlock anchor="safetensors.numpy.save.example">

Example:

```python
from safetensors.numpy import save
import numpy as np

tensors = {"embedding": np.zeros((512, 1024)), "attention": np.zeros((256, 256))}
byte_data = save(tensors)
```

</ExampleCodeBlock>


</div>

<EditOnGithub source="https://github.com/huggingface/safetensors/blob/main/docs/source/api/numpy.mdx" />
