
iproskurina/Mistral-7B-v0.1-GPTQ-8bit-g128

Tags: Text Generation, Safetensors, English, mistral, gptq, 8-bit precision
Files and versions (7.68 GB, 1 contributor, 11 commits)

Latest commit: Update README.md to include GPTQModel usage. (iproskurina, 7d17545, verified, 8 months ago)
  • .gitattributes (1.52 kB) - initial commit, about 1 year ago
  • README.md (2.76 kB) - Update README.md to include GPTQModel usage., 8 months ago
  • config.json (660 Bytes) - AutoGPTQ model for mistralai/Mistral-7B-v0.1: 8bits, gr128, desc_act=False, about 1 year ago
  • model.safetensors (7.68 GB) - Rename gptq_model-8bit-128g.safetensors to model.safetensors, about 1 year ago
  • quantize_config.json (211 Bytes) - AutoGPTQ model for mistralai/Mistral-7B-v0.1: 8bits, gr128, desc_act=False, about 1 year ago
  • special_tokens_map.json (414 Bytes) - AutoGPTQ model for mistralai/Mistral-7B-v0.1: 8bits, gr128, desc_act=False, about 1 year ago
  • tokenizer.json (1.8 MB) - AutoGPTQ model for mistralai/Mistral-7B-v0.1: 8bits, gr128, desc_act=False, about 1 year ago
  • tokenizer.model (493 kB) - AutoGPTQ model for mistralai/Mistral-7B-v0.1: 8bits, gr128, desc_act=False, about 1 year ago
  • tokenizer_config.json (996 Bytes) - AutoGPTQ model for mistralai/Mistral-7B-v0.1: 8bits, gr128, desc_act=False, about 1 year ago
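
The commit messages indicate an AutoGPTQ quantization of mistralai/Mistral-7B-v0.1 with 8-bit weights, group size 128, and desc_act=False (recorded in quantize_config.json), and the latest commit mentions GPTQModel usage in the README. As a minimal sketch, not taken from this repository's README, the checkpoint can likely be loaded through the standard transformers GPTQ integration, assuming optimum plus a GPTQ backend (gptqmodel or auto-gptq) is installed:

```python
# Hedged sketch: load the 8-bit, group-size-128 GPTQ checkpoint via transformers.
# Assumes `optimum` and a GPTQ backend (`gptqmodel` or `auto-gptq`) are installed;
# this example is illustrative and not copied from the repository's README.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "iproskurina/Mistral-7B-v0.1-GPTQ-8bit-g128"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Quantization reduces model size by"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```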