---
license: mit
library_name: mlx
base_model: huihui-ai/DeepSeek-R1-0528-Qwen3-8B-abliterated
tags:
  - chat
  - abliterated
  - uncensored
  - mlx
extra_gated_prompt: >-
  **Usage Warnings**


  **Risk of Sensitive or Controversial Outputs**: This model’s safety
  filtering has been significantly reduced, potentially generating sensitive,
  controversial, or inappropriate content. Users should exercise caution and
  rigorously review generated outputs.

  **Not Suitable for All Audiences**: Due to limited content filtering, the
  model’s outputs may be inappropriate for public settings, underage users, or
  applications requiring high security.

  **Legal and Ethical Responsibilities**: Users must ensure their usage
  complies with local laws and ethical standards. Generated content may carry
  legal or ethical risks, and users are solely responsible for any consequences.

  **Research and Experimental Use**: It is recommended to use this model for
  research, testing, or controlled environments, avoiding direct use in
  production or public-facing commercial applications.

  **Monitoring and Review Recommendations**: Users are strongly advised to
  monitor model outputs in real time and conduct manual reviews when necessary
  to prevent the dissemination of inappropriate content.

  **No Default Safety Guarantees**: Unlike standard models, this model has not
  undergone rigorous safety optimization. huihui.ai bears no responsibility for
  any consequences arising from its use.
pipeline_tag: text-generation
---

# SnowFlash383935/DeepSeek-R1-0528-Qwen3-8B-abliterated-mlx-4bit

This model, `SnowFlash383935/DeepSeek-R1-0528-Qwen3-8B-abliterated-mlx-4bit`, was converted to MLX format from `huihui-ai/DeepSeek-R1-0528-Qwen3-8B-abliterated` using mlx-lm version 0.25.2.
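
For reference, conversions like this are typically produced with mlx-lm's `convert` utility. A minimal sketch follows; the output directory name is illustrative, and the exact keyword arguments may differ between mlx-lm versions:

```python
# Sketch: reproducing a 4-bit MLX conversion with mlx-lm's convert utility.
# Argument names reflect mlx-lm 0.25.x and may vary in other releases.
from mlx_lm import convert

convert(
    "huihui-ai/DeepSeek-R1-0528-Qwen3-8B-abliterated",           # source Hugging Face repo
    mlx_path="DeepSeek-R1-0528-Qwen3-8B-abliterated-mlx-4bit",   # local output directory (illustrative)
    quantize=True,  # enable weight quantization
    q_bits=4,       # 4-bit weights
)
```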

## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("SnowFlash383935/DeepSeek-R1-0528-Qwen3-8B-abliterated-mlx-4bit")

prompt = "hello"

if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
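
For longer generations you may prefer streaming output. A minimal sketch using mlx-lm's `stream_generate`, assuming an mlx-lm version (such as 0.25.x) that yields response chunks with a `.text` attribute:

```python
from mlx_lm import load, stream_generate

# Load the 4-bit MLX weights and tokenizer (same model path as above).
model, tokenizer = load("SnowFlash383935/DeepSeek-R1-0528-Qwen3-8B-abliterated-mlx-4bit")

messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Print tokens as they are produced instead of waiting for the full response.
for chunk in stream_generate(model, tokenizer, prompt=prompt, max_tokens=512):
    print(chunk.text, end="", flush=True)
```

The model can also be run directly from the command line via the `mlx_lm.generate` entry point installed with the package.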