This is an MXFP4_MOE quantization of the model OLMoE-1B-7B-0125-Instruct.

Model quantized from the F16 GGUFs at: https://huggingface.co/DevQuasar/allenai.OLMoE-1B-7B-0125-Instruct-GGUF

Original model: https://huggingface.co/allenai/OLMoE-1B-7B-0125-Instruct

Format: GGUF, 4-bit (MXFP4)
Model size: 7B params
Architecture: olmoe
This quantized model: noctrex/OLMoE-1B-7B-0125-Instruct-MXFP4_MOE-GGUF
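
The GGUF file can be loaded with any sufficiently recent llama.cpp-based runtime that understands the MXFP4 quantization. Below is a minimal sketch using the llama-cpp-python bindings; the filename glob and context size are assumptions, so adjust them to the actual file shipped in this repository.

```python
# Minimal sketch, assuming llama-cpp-python is installed (pip install llama-cpp-python)
# and built against a llama.cpp version with MXFP4 support.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="noctrex/OLMoE-1B-7B-0125-Instruct-MXFP4_MOE-GGUF",
    filename="*MXFP4*.gguf",  # assumed glob; replace with the exact GGUF file name if needed
    n_ctx=4096,               # assumed context window; tune to your hardware
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly explain what a Mixture-of-Experts model is."}],
)
print(response["choices"][0]["message"]["content"])
```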