Update README.md

README.md CHANGED

@@ -207,6 +207,15 @@ pip install git+https://github.com/geyuying/vllm.git@arc-qwen-video
# split the video into segments for inference, and then use an LLM to integrate the results.
```

+To quickly verify that your environment is set up correctly and that video and audio information are being processed as expected, you can run the following test case with ARC-Qwen-Video-7B.
+
+```python
+video_path = "examples/猪排.mp4"
+task = "QA"
+question = "What did the man say at the beginning of the video after measuring the thickness of the fried pork cutlet?"
+```
+
+Expected Result: If the model's output contains the phrase "So thin", it indicates that your installation is successful.
+
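The expected-result check above can be sketched as a simple substring test. This is only an illustration of the verification step; `check_install` and the `sample_output` string are hypothetical stand-ins, not part of the repository's API or real model output.

```python
def check_install(output: str) -> bool:
    # The test video should elicit a response containing "So thin".
    return "So thin" in output

# Hypothetical model output used only to demonstrate the check.
sample_output = 'He said, "So thin!" after measuring the cutlet.'
print(check_install(sample_output))  # True
```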
#### Inference without vllm

```bash