---
base_model: mistralai/Magistral-Small-2507
language:
  - en
  - fr
  - de
  - es
  - pt
  - it
  - ja
  - ko
  - ru
  - zh
  - ar
  - fa
  - id
  - ms
  - ne
  - pl
  - ro
  - sr
  - sv
  - tr
  - uk
  - vi
  - hi
  - bn
library_name: mlx
license: apache-2.0
inference: false
extra_gated_description: >-
  If you want to learn more about how we process your personal data, please read
  our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
pipeline_tag: text-generation
tags:
  - vllm
  - mistral-common
  - transformers
  - mlx
---

# Magistral-Small-2507-320k-q6-mlx

This is an experimental 6-bit (q6) quant with the context window extended to 320k tokens.
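This card does not spell out how the window was extended. As a quick, hedged sanity check you can inspect the shipped `config.json` and see what context length the checkpoint advertises. The key names below assume a standard Mistral-style config, and the repo id assumes the nightmedia namespace:

```python
import json
from pathlib import Path

from huggingface_hub import snapshot_download

# Download (or reuse a cached copy of) the model repository.
model_dir = Path(snapshot_download("nightmedia/Magistral-Small-2507-320k-q6-mlx"))

config = json.loads((model_dir / "config.json").read_text())

# Mistral-style configs expose the context window here; for a 320k
# extension this should be on the order of 327680 tokens.
print("max_position_embeddings:", config.get("max_position_embeddings"))
print("rope_scaling:", config.get("rope_scaling"))
```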

This model `Magistral-Small-2507-320k-q6-mlx` was converted to MLX format from [`mistralai/Magistral-Small-2507`](https://huggingface.co/mistralai/Magistral-Small-2507) using mlx-lm version **0.26.0**.
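If you want to produce a similar quant yourself, mlx-lm exposes a `convert` helper. The sketch below is an assumption about the recipe: it only covers the 6-bit quantization step, not the 320k context extension, and the output path is illustrative rather than the setting actually used for this repo.

```python
from mlx_lm import convert

# Quantize the original weights to 6-bit and write an MLX-format copy.
# The group size is left at its default; the value used for this repo is not stated.
convert(
    "mistralai/Magistral-Small-2507",
    mlx_path="Magistral-Small-2507-320k-q6-mlx",
    quantize=True,
    q_bits=6,
)
```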

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("nightmedia/Magistral-Small-2507-320k-q6-mlx")

prompt = "hello"

# Wrap the prompt with the model's chat template when one is available.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
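`generate` returns the completion as a plain string. If you want longer outputs, extra keyword arguments such as `max_tokens` are forwarded to the generation loop in recent mlx-lm versions. A small sketch; the value 1024 is only an example, and in practice you would apply the chat template as shown above:

```python
from mlx_lm import load, generate

model, tokenizer = load("nightmedia/Magistral-Small-2507-320k-q6-mlx")

# max_tokens caps the completion length; 1024 is an arbitrary example value.
response = generate(
    model,
    tokenizer,
    prompt="hello",
    max_tokens=1024,
    verbose=True,
)
print(response)
```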