VMware / open-llama-0.3T-7B-open-instruct-v1.1

Organization: VMware AI Labs
Pipeline: Text Generation (conversational)
Libraries: Transformers · PyTorch · Safetensors · text-generation-inference
Architecture: llama
Training dataset: VMware/open-instruct-v1-oasst-dolly-hhrlhf
Language: English
License: cc
Files and versions (viewing branch refs/pr/1)
open-llama-0.3T-7B-open-instruct-v1.1 · 27 GB · 3 contributors · 23 commits
Latest commit: 92b02d8 (verified) by SFconvertbot, "Adding `safetensors` variant of this model", about 1 month ago
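The listing below is a standard sharded Llama checkpoint: a LlamaForCausalLM config, two weight shards with an index file, and SentencePiece tokenizer files. For orientation, here is a minimal loading sketch. It assumes the usual transformers Auto* API and enough memory for the fp16 shards; the prompt and generation settings are illustrative, not the model card's prescribed template.

```python
# Minimal loading sketch (assumptions: transformers + torch installed,
# ~14 GB free memory for the fp16 shards; prompt format illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "VMware/open-llama-0.3T-7B-open-instruct-v1.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # shards are fp16 (~13.5 GB total)
    device_map="auto",          # place layers on available GPU(s)/CPU
)

prompt = "Explain what a sharded checkpoint is in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```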
File                               Size       Flags               Last commit message                          Age
.gitattributes                     1.48 kB    Safe                initial commit                               over 2 years ago
README.md                          2.98 kB    Safe                Update README.md                             over 2 years ago
added_tokens.json                  18 Bytes   Safe                Upload tokenizer                             over 2 years ago
config.json                        585 Bytes  Safe                Upload LlamaForCausalLM                      over 2 years ago
generation_config.json             132 Bytes  Safe                Upload LlamaForCausalLM                      over 2 years ago
model-00001-of-00002.safetensors   9.98 GB    xet                 Adding `safetensors` variant of this model   about 1 month ago
model-00002-of-00002.safetensors   3.5 GB     xet                 Adding `safetensors` variant of this model   about 1 month ago
model.safetensors.index.json       28.1 kB    Safe                Adding `safetensors` variant of this model   about 1 month ago
pytorch_model-00001-of-00002.bin   9.98 GB    Safe, pickle, xet   Upload LlamaForCausalLM                      over 2 years ago
pytorch_model-00002-of-00002.bin   3.5 GB     Safe, pickle, xet   Upload LlamaForCausalLM                      over 2 years ago
pytorch_model.bin.index.json       26.8 kB                        Upload LlamaForCausalLM                      over 2 years ago
special_tokens_map.json            96 Bytes   Safe                Upload tokenizer                             over 2 years ago
tokenizer.json                     1.99 MB    Safe                Upload tokenizer                             over 2 years ago
tokenizer.model                    772 kB     Safe, xet           Upload tokenizer                             over 2 years ago
tokenizer_config.json              715 Bytes  Safe                Update tokenizer_config.json                 over 2 years ago

Both pytorch_model-*.bin shards are pickle archives. The security scanner detected four pickle imports in each: collections.OrderedDict, torch._utils._rebuild_tensor_v2, torch.FloatStorage, and torch.HalfStorage. These are the standard imports torch.save emits when serializing fp16 model weights, which is why both files still carry the "Safe" flag.
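The repository now carries the same weights twice: the pickle-based pytorch_model-*.bin shards and the safetensors conversion added by SFconvertbot. Since unpickling executes whatever code the archive references, a loader can insist on the safetensors variant instead. A minimal sketch, assuming the conversion still lives on the refs/pr/1 branch shown above and a transformers version that supports the use_safetensors flag:

```python
# Sketch: opting into the safetensors weights and refusing the pickle
# fallback. Assumes transformers with `use_safetensors` support and the
# `safetensors` package; the branch name is taken from the listing above.
from safetensors.torch import load_file
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "VMware/open-llama-0.3T-7B-open-instruct-v1.1",
    revision="refs/pr/1",   # branch carrying the safetensors variant
    use_safetensors=True,   # never unpickle pytorch_model-*.bin
)

# A downloaded shard can also be inspected directly: the format is a
# JSON header plus raw tensor bytes, so opening it cannot execute code.
shard = load_file("model-00001-of-00002.safetensors")  # local path
print(f"{len(shard)} tensors in shard 1 of 2")
```

Once the pull request is merged into main, the revision argument can be dropped; the pickle-scan results above are benign either way, but safetensors removes the need to trust them.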