---
license: apache-2.0
language:
  - en
  - zh
tags:
  - unsloth
  - QiMing
  - vllm
  - sales
  - b2b
  - Strategist
  - saas
  - fine-tuned
  - instruction-following
  - role-playing
  - cognitive-simulator
  - mlx
  - mlx-my-repo
pipeline_tag: text-generation
model_name: QiMing-Strategist-20B
library_name: transformers
base_model: aifeifei798/QiMing-Strategist-20B-MXFP4
---

# Wwayu/QiMing-Strategist-20B-MXFP4-mlx-4Bit

The model [Wwayu/QiMing-Strategist-20B-MXFP4-mlx-4Bit](https://huggingface.co/Wwayu/QiMing-Strategist-20B-MXFP4-mlx-4Bit) was converted to MLX format from [aifeifei798/QiMing-Strategist-20B-MXFP4](https://huggingface.co/aifeifei798/QiMing-Strategist-20B-MXFP4) using mlx-lm version 0.26.4.
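
For reference, conversions like this are typically produced with mlx-lm's `mlx_lm.convert` tool. The exact command used for this repository is not recorded here, so the following is only a sketch of a typical 4-bit conversion (the local output path is illustrative):

```bash
# Hypothetical reconstruction of the conversion step: quantize the base model
# to 4-bit MLX weights using mlx-lm's standard convert options.
mlx_lm.convert \
  --hf-path aifeifei798/QiMing-Strategist-20B-MXFP4 \
  --mlx-path QiMing-Strategist-20B-MXFP4-mlx-4Bit \
  -q --q-bits 4
```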

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download (if needed) and load the 4-bit MLX weights and matching tokenizer
model, tokenizer = load("Wwayu/QiMing-Strategist-20B-MXFP4-mlx-4Bit")

prompt = "hello"

# Apply the model's chat template when one is defined, so the prompt is
# formatted the way the model expects for conversation
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

# verbose=True prints the generated text as it is produced
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
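
mlx-lm also ships a command-line generator if you want a quick test without writing Python. A minimal sketch, assuming the standard CLI options in recent mlx-lm releases:

```bash
# One-shot generation from the terminal; --max-tokens caps the response length.
mlx_lm.generate \
  --model Wwayu/QiMing-Strategist-20B-MXFP4-mlx-4Bit \
  --prompt "hello" \
  --max-tokens 256
```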