Qwen3-8B-Kernelbook-SFT (HuggingFace Format)

This is a fine-tuned version of Qwen3-8B, trained with Supervised Fine-Tuning (SFT) on a filtered KernelBook dataset and optimized for kernel and system-level programming tasks.

Model Details

  • Base Model: Qwen3-8B
  • Training Method: Supervised Fine-Tuning (SFT)
  • Training Framework: SLIME (Megatron-LM based)
  • Training Data: Filtered KernelBook dataset (10,000 high-quality samples)
  • Model Size: 8.2B parameters
  • Format: HuggingFace Transformers compatible
  • Checkpoint: Iteration 515

Repository Links

  • This Repository: HuggingFace format - ready for inference with Transformers, vLLM, SGLang, etc.
  • Megatron Format: JinnP/Qwen3-8B-Kernelbook-SFT-filtered - for continued training with Megatron-LM

Usage

Quick Start with Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "JinnP/Qwen3-8B-Kernelbook-SFT-HF"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# Example usage
prompt = "Explain how the Linux kernel handles memory management:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)  # move inputs to the model's device
outputs = model.generate(**inputs, max_new_tokens=500)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
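
Qwen3-8B is a chat-tuned model, so instruction-style prompts often work better through the chat template. Below is a minimal sketch; the enable_thinking flag comes from the upstream Qwen3 tokenizer, and it is an assumption that this fine-tune preserves that behavior:

messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False  # upstream Qwen3 flag; assumed to carry over to this fine-tune
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=500)
# Decode only the newly generated tokens
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(response)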

Using with vLLM

from vllm import LLM, SamplingParams

llm = LLM(model="JinnP/Qwen3-8B-Kernelbook-SFT-HF")
sampling_params = SamplingParams(temperature=0.7, max_tokens=500)

prompts = ["Describe the process scheduling algorithm in the Linux kernel"]
outputs = llm.generate(prompts, sampling_params)

# Print the generated completions
for output in outputs:
    print(output.outputs[0].text)
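
Because the model is chat-tuned, recent vLLM releases also let you go through the chat template with LLM.chat (a minimal sketch, assuming a vLLM version that provides this API):

messages = [{"role": "user", "content": "Describe the process scheduling algorithm in the Linux kernel"}]
outputs = llm.chat(messages, sampling_params)
print(outputs[0].outputs[0].text)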

Training Details

This model was fine-tuned with the SLIME framework on filtered KernelBook data curated for kernel and system programming tasks. The training focused on the following areas (a sketch of a possible sample format follows the list):

  • Kernel internals and system calls
  • Memory management and process scheduling
  • Device drivers and I/O systems
  • File systems and networking stack
  • Performance optimization and debugging
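
The exact schema of the filtered dataset is not documented here. As a rough illustration only, a single SFT sample in the common chat-messages format might look like the sketch below (field names and content are assumptions, not the actual KernelBook schema):

sample = {
    "messages": [
        # Hypothetical fields; the real filtered KernelBook schema may differ
        {"role": "user",
         "content": "How does the Linux kernel's buddy allocator coalesce free pages?"},
        {"role": "assistant",
         "content": "When a block is freed, the allocator checks its buddy of the same order; ..."}
    ]
}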

Model Performance

Relative to the base model, the fine-tuned model shows improved performance on:

  • Kernel code generation and explanation
  • System-level debugging scenarios
  • Performance optimization recommendations
  • Operating system concept explanations

License

This model inherits the Apache 2.0 license from the base Qwen3-8B model. Please refer to the original Qwen3 license for usage terms.

Citation

If you use this model, please cite:

@misc{qwen3-kernelbook-sft,
  title={Qwen3-8B-Kernelbook-SFT: Fine-tuned for Kernel and System Programming},
  author={JinnP},
  year={2025},
  publisher={HuggingFace}
}

And the original Qwen3 model:

@article{qwen3,
  title={Qwen3 Technical Report},
  author={Qwen Team},
  journal={arXiv preprint arXiv:2505.09388},
  year={2025}
}

Acknowledgments

  • Base model: Qwen Team for Qwen3-8B
  • Training data: KernelBook dataset
  • Training framework: SLIME (Megatron-LM based)