# Phi-4-Super-o1
Phi-4-Super-o1, fine-tuned from Microsoft's Phi-4, is a state-of-the-art open model developed with a focus on responsible problem solving and advanced reasoning capabilities. Built upon a diverse blend of synthetic datasets, carefully filtered public-domain websites, and high-quality academic books and Q&A datasets, Phi-4-Super-o1 ensures that this small but capable model is trained on data of exceptional depth and precision.
Phi-4-Super-o1 adopts a robust safety post-training approach using open-source and in-house synthetic datasets, combining SFT (Supervised Fine-Tuning) with iterative DPO (Direct Preference Optimization) to encourage helpful and harmless outputs across a range of safety categories.
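As a rough illustration of the DPO stage, the sketch below computes the standard DPO objective over a batch of preference pairs. It assumes the per-sequence log-probabilities are precomputed and is not the exact training recipe used for this model.

```python
# A minimal sketch of the DPO objective, not this model's actual recipe.
# Assumes per-sequence log-probabilities of the chosen/rejected responses
# have already been computed for both the policy and a frozen reference model.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Direct Preference Optimization loss over a batch of preference pairs."""
    # Implicit reward: beta-scaled log-ratio of policy vs. reference.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between chosen and rejected responses.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```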
This is a merge of pre-trained language models created using mergekit.
## Merge Method
This model was merged using the Model Stock merge method, with unsloth/phi-4 as the base.
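For intuition, the following sketch shows the core Model Stock computation for a single weight tensor: the merged weight interpolates between the base and the average of the fine-tuned checkpoints, with a ratio derived from the angle between their weight deltas (Jang et al., 2024). This is a simplified illustration under those assumptions, not mergekit's implementation.

```python
# A rough sketch of the Model Stock idea for one weight tensor; mergekit's
# actual implementation differs in detail. Assumes at least two fine-tuned
# checkpoints that share the base model's architecture.
import torch
import torch.nn.functional as F

def model_stock_merge(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Merge N fine-tuned tensors toward their average, anchored at the base."""
    n = len(finetuned)
    deltas = [w - base for w in finetuned]
    # Average pairwise cosine similarity between fine-tuned weight deltas.
    sims = [F.cosine_similarity(deltas[i].flatten(), deltas[j].flatten(), dim=0)
            for i in range(n) for j in range(i + 1, n)]
    cos_theta = torch.stack(sims).mean().clamp(-1.0, 1.0)
    # Interpolation ratio from the Model Stock paper.
    t = n * cos_theta / ((n - 1) * cos_theta + 1)
    avg = torch.stack(finetuned).mean(dim=0)
    return t * avg + (1 - t) * base
```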
### Models Merged
The following models were included in the merge:
- Pinkstack/SuperThoughts-CoT-14B-16k-o1-QwQ
- prithivMLmods/Phi-4-o1
- prithivMLmods/Phi-4-Math-IO
- prithivMLmods/Phi-4-Empathetic
- bunnycore/Phi-4-RP-V0.2
- prithivMLmods/Phi-4-QwQ
- mudler/LocalAI-functioncall-phi-4-v0.3
 
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: unsloth/phi-4
  - model: prithivMLmods/Phi-4-o1
  - model: prithivMLmods/Phi-4-Math-IO
  - model: prithivMLmods/Phi-4-QwQ
  - model: Pinkstack/SuperThoughts-CoT-14B-16k-o1-QwQ
  - model: mudler/LocalAI-functioncall-phi-4-v0.3
  - model: bunnycore/Phi-4-RP-V0.2
  - model: prithivMLmods/Phi-4-Empathetic
merge_method: model_stock
base_model: unsloth/phi-4
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
tokenizer_source: "unsloth/phi-4"
```
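A minimal usage sketch with the Hugging Face transformers library follows. The repository id is a placeholder; substitute the actual Hub path for this merge.

```python
# Minimal inference sketch; the repo id below is hypothetical.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prithivMLmods/Phi-4-Super-o1"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype above
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain why the sky is blue in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```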