Can Ovis2.5 run without flash attention?
Please support older GPUs like the V100.
Hi, we’ve added SDPA support in the code, so it can now run without flash-attention.
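For reference, a minimal loading sketch with SDPA requested explicitly (MODEL_PATH is a placeholder for whichever Ovis2.5 checkpoint you use, and this assumes a transformers version recent enough to accept attn_implementation):

import torch
from transformers import AutoModelForCausalLM

# Request SDPA at load time instead of flash-attention.
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,  # placeholder for the Ovis2.5 checkpoint path
    torch_dtype=torch.bfloat16,
    attn_implementation="sdpa",
    trust_remote_code=True,
).cuda()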
Hmm... it seems batch inference isn't supported and GPU utilization is low, but thanks for your work all the same.
import torch
from transformers import AutoModelForCausalLM

# Thinking mode & budget
enable_thinking = True
enable_thinking_budget = True # Only effective if enable_thinking is True.
# Total tokens for thinking + answer. Ensure: max_new_tokens > thinking_budget + 25
max_new_tokens = 3072
thinking_budget = 2048
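
# Executable form of the constraint above: keep some headroom for the answer
# after the thinking budget is spent.
assert max_new_tokens > thinking_budget + 25, "increase max_new_tokens or reduce thinking_budget"
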
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.bfloat16,
    attn_implementation="eager",
    trust_remote_code=True
).cuda()
This throws a FlashAttention error saying my GPU is not supported. Neither eager nor SDPA works with the 2B model for me.
Errors:
- SDPA:
ValueError: Ovis2_5 does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention yet. Please request the support for this architecture: https://github.com/huggingface/transformers/issues/28005. If you believe this error is a bug, please open an issue in Transformers GitHub repository and load your model with the argument attn_implementation="eager" meanwhile. Example: model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="eager")
- Eager:
out, softmax_lse, S_dmask, rng_state = flash_attn_cuda.varlen_fwd(
RuntimeError: FlashAttention only supports Ampere GPUs or newer.
The eager traceback still goes through flash_attn_cuda.varlen_fwd, so the custom model code is picking up flash-attn whenever it is installed. Uninstalling it should force the non-flash path:
pip uninstall flash_attn
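After that, a quick sanity check and reload (a minimal sketch: MODEL_PATH is the same placeholder as in the snippet above, and config._attn_implementation is an internal attribute of recent transformers versions):

import importlib.util
import torch
from transformers import AutoModelForCausalLM

# Confirm flash_attn is really gone; otherwise the remote code can still
# import it and hit varlen_fwd on a pre-Ampere GPU (the V100 is compute
# capability 7.0, while flash-attention needs 8.0 or newer).
assert importlib.util.find_spec("flash_attn") is None, "flash_attn is still installed"
print(torch.cuda.get_device_capability())  # (7, 0) on a V100

model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,  # placeholder for the Ovis2.5 checkpoint path
    torch_dtype=torch.bfloat16,
    attn_implementation="eager",
    trust_remote_code=True,
).cuda()
print(model.config._attn_implementation)  # should report "eager"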