# Devstral-Small-2507-Rebased-Vision-LoRA
An attempt to convert kmouratidis/Devstral-Small-2507-Rebased-Vision into a LoRA for Mistral-Small-3.2-24B-Instruct-2506. It seems to work with `transformers`, but not with `sglang`, since Pixtral is not supported there.
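For context, one common way to perform this kind of conversion is to take the weight deltas between the fine-tuned and base checkpoints and compress each delta with a truncated SVD. The sketch below illustrates the idea for a single matrix; it is not necessarily how this adapter was produced, and `extract_lora`, `w_base`, `w_ft`, and `rank` are illustrative names only.

```python
import torch

def extract_lora(w_base: torch.Tensor, w_ft: torch.Tensor, rank: int = 64):
    """Approximate (w_ft - w_base) as lora_b @ lora_a via truncated SVD."""
    delta = (w_ft - w_base).float()
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    lora_a = vh[:rank]               # (rank, in_features)
    lora_b = u[:, :rank] * s[:rank]  # (out_features, rank), singular values folded in
    return lora_a, lora_b
```

An actual conversion would loop over every matching linear layer in both checkpoints and write the factors out in PEFT's `adapter_model.safetensors` layout.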
## Usage

```python
import torch
from huggingface_hub import hf_hub_download
from peft import PeftModel
from transformers import Mistral3ForConditionalGeneration
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.protocol.instruct.messages import SystemMessage, UserMessage

adapter_id = "kmouratidis/Devstral-Small-2507-Rebased-Vision-LoRA"
model_id = "unsloth/Mistral-Small-3.2-24B-Instruct-2506"

# Load the base model and its tokenizer (tekken.json ships with the model repo)
model = Mistral3ForConditionalGeneration.from_pretrained(model_id)
tokenizer = MistralTokenizer.from_file(hf_hub_download(model_id, "tekken.json"))

# Load & apply the LoRA adapter
model = PeftModel.from_pretrained(model, adapter_id)

# Test with a trivial ping/pong round-trip
tokenized = tokenizer.encode_chat_completion(
    ChatCompletionRequest(
        messages=[
            SystemMessage(content="You are a server, answer `ping` with `pong`."),
            UserMessage(content="ping!"),
        ],
    ),
)
output = model.generate(
    input_ids=torch.tensor([tokenized.tokens]),
    max_new_tokens=5,
)[0]
decoded_output = tokenizer.decode(output[len(tokenized.tokens):].tolist())
print(decoded_output)
```
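If a standalone checkpoint is more convenient than base + adapter, PEFT can fold the LoRA weights back into the base model. A minimal sketch; the output directory name is made up:

```python
# Merge the adapter into the base weights and save a plain checkpoint.
merged = model.merge_and_unload()
merged.save_pretrained("Devstral-Small-2507-Rebased-Vision-merged")  # hypothetical path
```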
## Evaluation
I have done no evaluation, nor am I planning to as long as I can't load it into sglang. I might mess around with removing the vision parts of the adapter and then try again (see the sketch below), but no promises.
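For the record, stripping the vision parts would amount to filtering the adapter's state dict before re-saving it. A minimal sketch, assuming the vision weights sit under keys containing `vision_tower` or `multi_modal_projector` (the key substrings and file names are assumptions, not verified against this adapter):

```python
from safetensors.torch import load_file, save_file

# Drop any adapter tensors that touch the vision stack (assumed key substrings).
state = load_file("adapter_model.safetensors")
text_only = {
    k: v for k, v in state.items()
    if "vision_tower" not in k and "multi_modal_projector" not in k
}
save_file(text_only, "adapter_model.text_only.safetensors")  # hypothetical output name
```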
## Model tree

Base model: mistralai/Mistral-Small-3.1-24B-Base-2503