Framework versions

  • PEFT 0.13.2

My Model

This is a LoRA adapter for meta-llama/Llama-3.2-11B-Vision-Instruct, fine-tuned on DocVQA with a LoRA configuration found via neural architecture search (LoRA-NAS).

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer, then attach the LoRA adapter.
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-11B-Vision-Instruct")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-11B-Vision-Instruct")
model = PeftModel.from_pretrained(base_model, "krishnateja95/llama_3_2_11b_lora_nas_docvqa_gud_64_searched")
