phi-3-small-8k-instruct-cn-dat-kr0.1-a1.0-creative

This is a CreativityNeuro (CN) modified version of microsoft/Phi-3-small-8k-instruct.

Model Details

  • Base Model: microsoft/Phi-3-small-8k-instruct
  • Modification: CreativityNeuro weight scaling
  • Prompt Set: dat
  • Keep Ratio: 0.1 (top 10% of task-specific weights retained)
  • Alpha: 1.0 (scaling strength)
  • Mode: creative
  • Model Size: 7B parameters (Safetensors; F32/BF16 tensors)

What is CreativityNeuro?

CreativityNeuro identifies task-specific neurons using Wanda-style importance scoring and selectively upscales weights associated with creative thinking. The modification formula is:

W_new = W × (1 + α × mask)

where mask is a binary matrix marking weights that are important for creative tasks but not for routine/associative tasks, and α controls the scaling strength.
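The scoring and scaling steps can be illustrated with a minimal sketch. This is not the released pipeline; the matrices, keep ratio, and helper names below are illustrative assumptions. Wanda-style scoring rates each weight as |W_ij| · ||X_j||_2, i.e. weight magnitude times the L2 norm of the matching input activation column:

```python
import math

def wanda_scores(W, X):
    # W: rows x cols weight matrix; X: samples x cols calibration activations.
    # Score each weight by |W_ij| * L2 norm of activation column j.
    col_norms = [math.sqrt(sum(x[j] ** 2 for x in X)) for j in range(len(W[0]))]
    return [[abs(w) * col_norms[j] for j, w in enumerate(row)] for row in W]

def top_k_mask(scores, keep_ratio):
    # Binary mask keeping the top keep_ratio fraction of scores.
    flat = sorted((s for row in scores for s in row), reverse=True)
    k = max(1, int(keep_ratio * len(flat)))
    threshold = flat[k - 1]
    return [[1.0 if s >= threshold else 0.0 for s in row] for row in scores]

def scale_weights(W, mask, alpha):
    # The card's update rule: W_new = W * (1 + alpha * mask).
    return [[w * (1 + alpha * m) for w, m in zip(rw, rm)] for rw, rm in zip(W, mask)]

# Toy numbers, not real model weights.
W = [[0.5, -0.1], [0.2, 0.9]]
X = [[1.0, 0.1], [2.0, 0.2]]               # two calibration samples
scores = wanda_scores(W, X)
mask = top_k_mask(scores, keep_ratio=0.25)  # keep top 25% here for illustration
W_new = scale_weights(W, mask, alpha=1.0)   # masked weights are doubled at alpha = 1.0
```

In the actual method the mask would additionally exclude weights that also score highly on routine/associative prompts, so that only creativity-specific weights are upscaled.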

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "priorcomputers/phi-3-small-8k-instruct-cn-dat-kr0.1-a1.0-creative"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Use like any other causal language model
outputs = model.generate(...)

Citation

If you use this model, please cite:

@misc{creativityneuro2025,
  title={CreativityNeuro: Mechanistic Interpretability for LLM Creativity},
  author={Prior Computers},
  year={2025},
  url={https://huggingface.co/priorcomputers}
}