Gradio app


Thank you!
How much GPU memory do we need to host this model?

Not much, I guess... I'm using a 3090 and it's working blazing fast.

I wanted to run it in vLLM. Is that possible using the `vllm serve` command? I don't see vLLM mentioned here.

How are you hosting it? vLLM? Would you mind sharing your setup?

Check out their GitHub repo... they have the full vLLM code there. I was running it with Transformers.
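
For reference, here is a minimal sketch of the Transformers path, roughly following the model card. `infer()` is a custom helper loaded via `trust_remote_code`, so the exact argument names may differ between revisions, and the file paths are just placeholders:

```python
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-OCR"

# The model ships custom code, so trust_remote_code is required.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModel.from_pretrained(
    MODEL_ID,
    trust_remote_code=True,
    use_safetensors=True,
)
model = model.eval().cuda().to(torch.bfloat16)  # fits on a 24 GB 3090

# The grounding tag asks the model to emit bounding boxes with the text.
prompt = "<image>\n<|grounding|>Convert the document to markdown."

# infer() is defined in the model's remote code; check the model card
# for the current signature before relying on these arguments.
result = model.infer(
    tokenizer,
    prompt=prompt,
    image_file="page.png",   # placeholder input image
    output_path="./out",     # placeholder output directory
    save_results=True,
)
print(result)
```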

@firstdears I tried `vllm serve` like you did and hit some package errors. If I manage to solve them I'll share the fix here; otherwise I'll try running it the way it's done at https://github.com/deepseek-ai/DeepSeek-OCR#vllm-inference.
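
In the meantime, here's a rough offline-inference sketch using vLLM's generic multimodal API. DeepSeek-OCR may need the specific vLLM build pinned in their repo, and the prompt format here is an assumption carried over from the Transformers example, so double-check against the link above:

```python
from PIL import Image
from vllm import LLM, SamplingParams

# Assumes a vLLM version that registers the DeepSeek-OCR architecture;
# see the repo's vLLM inference section for the pinned version.
llm = LLM(model="deepseek-ai/DeepSeek-OCR", trust_remote_code=True)

prompt = "<image>\n<|grounding|>Convert the document to markdown."
image = Image.open("page.png").convert("RGB")  # placeholder input image

outputs = llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": image}},
    SamplingParams(temperature=0.0, max_tokens=4096),
)
print(outputs[0].outputs[0].text)
```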

Updated with bounding box support and parsing of the raw OCR output: https://github.com/freakynit/deepseek-ocr-gradio 🚀
