# medgemma27b-luad-qlora

LoRA adapters (QLoRA, 4-bit) for automatic identification of lung adenocarcinoma subtypes, trained on the google/medgemma-27b-it base model.


## 🔍 Description

This repository contains the LoRA adapters obtained by QLoRA fine-tuning on a dataset of 1194 annotated lung adenocarcinoma cases, distributed across the subtypes recognized by the 2021 WHO Classification:

- Lepidic
- Acinar
- Papillary
- Micropapillary
- Solid
- Invasive mucinous
- Colloid
- Fetal
- Enteric

Case annotation and validation followed histologic and cytologic criteria described in the literature, excluding images with artifacts or without an identifiable subtype.
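
For downstream use, the label set can be pinned as a constant; a minimal sketch (the constant name `WHO_2021_LUAD_SUBTYPES` is ours, not part of the repository), matching the JSON-constrained output used at inference:

```python
# Label set from the 2021 WHO Classification, as used by these adapters.
# The constant name is illustrative; the inference snippet below uses `subtypes`.
WHO_2021_LUAD_SUBTYPES = [
    "lepidic", "acinar", "papillary", "micropapillary", "solid",
    "invasive mucinous", "colloid", "fetal", "enteric",
]
# Expected model output per patch, e.g. {"subtype": "acinar"}
```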


## ⚙️ Usage

Minimal example for loading the adapters and running inference:

```python
from transformers import AutoModelForImageTextToText, AutoProcessor, BitsAndBytesConfig
from peft import PeftModel
import torch
from PIL import Image

base_id = "google/medgemma-27b-it"
adapter_id = "jjsprockel/medgemma27b-luad-qlora"

# 4-bit NF4 quantization with double quantization; compute in bfloat16
bnb_cfg = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load the quantized base model, then attach the LoRA adapters
base = AutoModelForImageTextToText.from_pretrained(
    base_id,
    quantization_config=bnb_cfg,
    device_map={"": "cuda"},
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
)

model = PeftModel.from_pretrained(base, adapter_id).eval()
processor = AutoProcessor.from_pretrained(base_id)

# Inference example: classify one H&E patch
img = Image.open("example.png").convert("RGB")
subtypes = [
    "lepidic", "acinar", "papillary", "micropapillary", "solid",
    "invasive mucinous", "colloid", "fetal", "enteric",
]

system = (
    "You are an expert pulmonary pathologist. Return ONLY JSON with key "
    "'subtype' strictly from: " + ", ".join(subtypes) + "."
)
user = "Predict the subtype for this H&E lung adenocarcinoma patch. Only JSON."

messages = [
    {"role": "system", "content": [{"type": "text", "text": system}]},
    {"role": "user", "content": [{"type": "text", "text": user}, {"type": "image", "image": img}]},
]

# Build the chat prompt, then tokenize the text and preprocess the image
templ = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
enc = processor(text=templ, images=img, return_tensors="pt")
inputs = {
    "input_ids":      enc["input_ids"].to(model.device),
    "attention_mask": enc["attention_mask"].to(model.device),
    "pixel_values":   enc["pixel_values"].to(model.device, dtype=torch.bfloat16),
}

# Greedy decoding; keep only the newly generated tokens
with torch.inference_mode(), torch.amp.autocast("cuda", dtype=torch.bfloat16):
    out = model.generate(**inputs, max_new_tokens=32, do_sample=False)[0]
gen = out[inputs["input_ids"].shape[-1]:]
decoded = processor.decode(gen, skip_special_tokens=True)
print(decoded)
```
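
The prompt asks for JSON only, but raw generations can still carry stray text or code fences. A minimal parsing sketch (the `parse_subtype` helper is our addition, not part of the repository):

```python
import json
import re

def parse_subtype(decoded: str, subtypes: list[str]) -> str | None:
    """Extract the predicted subtype from the model output, or None if unparseable."""
    match = re.search(r"\{.*\}", decoded, flags=re.DOTALL)  # first JSON-looking span
    if match:
        try:
            label = json.loads(match.group(0)).get("subtype", "")
            if label in subtypes:
                return label
        except json.JSONDecodeError:
            pass
    # Fallback: look for any known label verbatim in the raw text
    lowered = decoded.lower()
    return next((s for s in subtypes if s in lowered), None)

print(parse_subtype(decoded, subtypes))
```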


## Inference notebook

[🧪 Open the inference notebook](https://huggingface.co/jjsprockel/medgemma27b-luad-qlora/resolve/main/notebooks/MedGemma27B_LUAD_inference.ipynb)

This notebook runs MedGemma-27B (base + QLoRA adapters) to classify LUAD subtypes on H&E patches
using the JSON-constrained output described in the paper.
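
For scoring many patches, the same pipeline can be run in a loop; a hedged sketch assuming `model`, `processor`, `system`, `user`, `subtypes`, and `parse_subtype` from the snippets above, and a hypothetical `patches/` directory of PNG files:

```python
from pathlib import Path
from PIL import Image
import torch

# Hypothetical layout: one PNG patch per file under patches/
results = {}
for path in sorted(Path("patches").glob("*.png")):
    img = Image.open(path).convert("RGB")
    messages = [
        {"role": "system", "content": [{"type": "text", "text": system}]},
        {"role": "user", "content": [{"type": "text", "text": user}, {"type": "image", "image": img}]},
    ]
    templ = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
    enc = processor(text=templ, images=img, return_tensors="pt")
    inputs = {
        "input_ids": enc["input_ids"].to(model.device),
        "attention_mask": enc["attention_mask"].to(model.device),
        "pixel_values": enc["pixel_values"].to(model.device, dtype=torch.bfloat16),
    }
    with torch.inference_mode():
        out = model.generate(**inputs, max_new_tokens=32, do_sample=False)[0]
    text = processor.decode(out[inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
    results[path.name] = parse_subtype(text, subtypes)

print(results)
```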