Projector file where?

#4 opened by BingoBird

Can I extract a .mmproj?

python convert_hf_to_gguf.py ./Qwen2.5-VL-3B-Instruct \
  --outtype f16 \
  --outfile Qwen2.5-VL-3B-Instruct-F16.gguf \
  --vocab-only # this step only extracts the tokenizer, not the projector

Does the abliterated version work with the upstream mmproj? And where is the upstream mmproj?

[EDIT] LongCat advises that the upstream .mmproj will not work with the abliterated .gguf. Confirm/deny?

Using this llama.cpp build: llama.cpp.tr-qwen3-vl-6-b7106-495c611

Convert

python convert_hf_to_gguf.py ./Qwen2.5-VL-3B-Instruct-abliterated --outfile ./Qwen2.5-VL-3B-Instruct-abliterated/ggml-model-f16.gguf --outtype f16
python convert_hf_to_gguf.py ./Qwen2.5-VL-3B-Instruct-abliterated --outfile ./Qwen2.5-VL-3B-Instruct-abliterated/mmproj-ggml-model-f16.gguf --outtype f16 --mmproj
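If you also want a smaller file, the F16 GGUF can usually be shrunk with llama.cpp's llama-quantize tool; Q4_K_M below is just one example quant type, and the mmproj file is normally left at F16:

llama-quantize ./Qwen2.5-VL-3B-Instruct-abliterated/ggml-model-f16.gguf ./Qwen2.5-VL-3B-Instruct-abliterated/ggml-model-Q4_K_M.gguf Q4_K_M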

Chat with Image

llama-mtmd-cli -m ./Qwen2.5-VL-3B-Instruct-abliterated/ggml-model-f16.gguf --mmproj ./Qwen2.5-VL-3B-Instruct-abliterated/mmproj-ggml-model-f16.gguf -c 4096 --image ./png/cc.png -p "Describe this image."
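For repeated image chats without reloading the model each time, llama-server should accept the same pair of files (it takes --mmproj as well, as far as I know); the port is just an example:

llama-server -m ./Qwen2.5-VL-3B-Instruct-abliterated/ggml-model-f16.gguf --mmproj ./Qwen2.5-VL-3B-Instruct-abliterated/mmproj-ggml-model-f16.gguf -c 4096 --port 8080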

Chat

llama-cli -m ./Qwen2.5-VL-3B-Instruct-abliterated/ggml-model-f16.gguf -c 4096
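For a back-and-forth chat rather than a one-shot completion, something like this should work (in conversation mode the -p text is treated as the system prompt; the wording here is only an example):

llama-cli -m ./Qwen2.5-VL-3B-Instruct-abliterated/ggml-model-f16.gguf -c 4096 -cnv -p "You are a helpful assistant."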
