---
pipeline_tag: text-generation
base_model:
- cerebras/Qwen3-Coder-REAP-246B-A35B-FP8
---
This is an MXFP4_MOE quantization of Qwen3-Coder-REAP-246B-A35B.

Original model: https://huggingface.co/cerebras/Qwen3-Coder-REAP-246B-A35B-FP8
The original release is already quantized to FP8, which limits the precision available to work with. I converted the model to FP32 before applying MXFP4 quantization, but the upcast cannot recover information lost in the initial FP8 quantization, so some of that quality degradation carries over into this quant.
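For reference, the FP8-to-FP32 upcast step can look roughly like the sketch below. This is a minimal, hypothetical example, not the exact script used for this repo: it assumes the checkpoint stores `torch.float8_e4m3fn` weights in safetensors shards with matching `*_scale` tensors (compressed-tensors style), and the shard names are illustrative. Note that upcasting only changes the storage format; values already rounded to FP8 stay rounded.

```python
# Sketch: dequantize FP8 safetensors shards to FP32 before re-quantizing.
# Assumptions: weights are torch.float8_e4m3fn with companion "*_scale" tensors;
# shard/tensor names below are illustrative, not taken from this repository.
import torch
from safetensors.torch import load_file, save_file

def upcast_shard(in_path: str, out_path: str) -> None:
    tensors = load_file(in_path)
    out = {}
    for name, t in tensors.items():
        if name.endswith("_scale"):
            continue  # scales are folded into the dequantized weights below
        if t.dtype == torch.float8_e4m3fn:
            w = t.to(torch.float32)
            scale = tensors.get(name + "_scale")
            if scale is not None:
                w = w * scale.to(torch.float32)  # dequantize: value = fp8 * scale
            out[name] = w
        else:
            out[name] = t.to(torch.float32)  # upcast any bf16/fp16 tensors too
    save_file(out, out_path)

# Example (illustrative shard name):
upcast_shard("model-00001-of-000xx.safetensors", "fp32/model-00001-of-000xx.safetensors")
```

The MXFP4_MOE quantization was then applied to the upcast FP32 weights; the FP32 intermediate only serves as a clean input format for the quantizer.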