Request Access to Ex0bit/GLM-4.7-Flash-PRISM

**Priority Access:**
Step 1: Submit the access request form with your information below.
Step 2: Complete the support donation at https://ko-fi.com/s/86882e8991 to receive priority consideration for this limited-edition model.

By requesting access, you agree to:

  • Use this model for research or educational purposes only
  • Not redistribute the model weights without explicit permission
  • Cite this work appropriately in any publications
  • Report any issues or safety concerns to the author


Ex0bit/GLM-4.7-Flash-PRISM

Model Description

Ex0bit/GLM-4.7-Flash-PRISM is an abliterated build of GLM-4.7-Flash with over-refusal behaviors removed (details below).

🔒 ACCESS REQUIRED

Complete both steps.

① Submit the access request form below
② Complete a support donation option

Support options:

  • PRISM VIP Member Sign-Up (covers all models)
  • One-Time Support (covers this model only)

✓ Priority Access

GLM-4.7-Flash-PRISM: Unrestricted (Zero Over-Refusals and Zero Propaganda) GLM-4.7-Flash Model Access

Access GLM-4.7-Flash-PRISM, an abliterated version of ZAI's efficient 30B-A3B MoE model with over-refusal mechanisms removed.
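
The exact PRISM procedure isn't published on this page. As a rough illustration only, the sketch below shows the general projection-based idea that "abliteration" methods of this family use: estimate a refusal direction from contrasting prompt activations, then project it out of a weight matrix so the model can no longer write into that direction. All names, shapes, and data here are stand-in assumptions, not the author's method.

```python
# Minimal sketch of projection-based refusal ablation ("abliteration").
# This is NOT the published PRISM procedure; it only illustrates the
# general technique the name suggests. All shapes/names are assumptions.
import numpy as np

def refusal_direction(harmful_acts: np.ndarray, harmless_acts: np.ndarray) -> np.ndarray:
    """Estimate a refusal direction as the normalized difference of mean
    hidden-state activations on refusal-inducing vs. benign prompts.
    Both inputs have shape (num_prompts, hidden_dim)."""
    diff = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return diff / np.linalg.norm(diff)

def ablate_direction(weight: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove the component of each output column that writes into
    `direction` via a rank-1 orthogonal projection:
        W' = (I - d d^T) W   for W of shape (hidden_dim, in_dim)."""
    d = direction.reshape(-1, 1)          # (hidden_dim, 1)
    return weight - d @ (d.T @ weight)    # subtract the projection onto d

# Toy usage with random stand-in activations and weights:
rng = np.random.default_rng(0)
harmful = rng.normal(size=(64, 4096))     # activations on refused prompts
harmless = rng.normal(size=(64, 4096))    # activations on benign prompts
d = refusal_direction(harmful, harmless)
W = rng.normal(size=(4096, 4096))         # stand-in output projection matrix
W_ablated = ablate_direction(W, d)
assert np.allclose(d @ W_ablated, 0.0, atol=1e-8)  # direction is unreachable
```

Applied across the layers where the refusal direction is most active, this kind of edit suppresses refusal behavior while leaving the rest of the weight space, and hence most capabilities, untouched.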

What You Get:

  • 30B-A3B MoE Architecture — Lightweight yet powerful Mixture-of-Experts model with 30 billion total parameters and ~3 billion active per token for fast, efficient inference
  • PRISM (Projected Refusal Isolation via Subspace Modification) — State-of-the-art abliteration technique that removes over-refusal behaviors while preserving capabilities
  • 128K Context Window — Extended context for complex tasks and large codebases
  • Interleaved & Preserved Thinking — Multi-turn reasoning that persists across conversations with per-turn thinking control
  • Strong In-Class Benchmarks — 91.6% AIME 2025, 79.5% τ²-Bench, 59.2% SWE-bench Verified, 75.2% GPQA
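
Since the weights ship as GGUF (see the metadata below), one minimal way to try the model locally is llama-cpp-python. This is a sketch under assumptions, not the author's recommended setup: the filename is a hypothetical placeholder and the context size is dialed down from the advertised 128K maximum.

```python
# Minimal local-inference sketch with llama-cpp-python
# (pip install llama-cpp-python). The GGUF filename below is a
# hypothetical placeholder; check the repo's file list for the
# quantization you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="GLM-4.7-Flash-PRISM-Q4_K_M.gguf",  # hypothetical filename
    n_ctx=32768,       # card advertises up to 128K; raise if RAM allows
    n_gpu_layers=-1,   # offload all layers to the GPU when available
)

# create_chat_completion applies the chat template embedded in the GGUF,
# when one is present.
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the Mixture-of-Experts idea in two sentences."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

Because only ~3B of the 30B parameters are active per token, inference throughput on this model should be closer to a small dense model than to a dense 30B one.
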
Downloads last month: 127
Format: GGUF
Model size: 30B params
Architecture: deepseek2

Available quantizations: 3-bit, 4-bit, 8-bit, 16-bit
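
Because each quantization ships as its own GGUF file, you can pull just one of them with huggingface_hub rather than cloning the whole repo. The filename pattern below is an assumption; match it against the files actually listed in the repository, and note that a gated repo needs an access token once your request is approved.

```python
# Sketch: fetch a single quantization from the repo
# (pip install huggingface_hub). The "*Q4_K_M*" pattern is an
# assumption standing in for whatever 4-bit variant the repo ships.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="Ex0bit/GLM-4.7-Flash-PRISM",
    allow_patterns=["*Q4_K_M*.gguf"],  # hypothetical 4-bit file pattern
    token=None,  # pass your HF token here once access is granted
)
print("Downloaded to:", local_dir)
```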

Inference Providers

This model isn't currently deployed by any Inference Provider.