---
license: apache-2.0
language:
- en
tags:
- cogvideox
- video-generation
- video-to-video
- controlnet
- diffusers
pipeline_tag: video-to-video
---
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/63fde49f6315a264aba6a7ed/VFtwr_VimGF6g51PGQYwN.mp4"></video>
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/63fde49f6315a264aba6a7ed/YaCSr74Iiw6nuqtT1Gtei.mp4"></video>
### ComfyUI
<a href="https://github.com/kijai/ComfyUI-CogVideoXWrapper">ComfyUI-CogVideoXWrapper</a> supports the controlnet pipeline. See an <a href="https://github.com/kijai/ComfyUI-CogVideoXWrapper/blob/main/examples/cogvideox_1_0_2b_controlnet_02.json">example</a> workflow file.
### How to
Clone the repository:
```bash
git clone https://github.com/TheDenk/cogvideox-controlnet.git
cd cogvideox-controlnet
```
Create and activate a virtual environment:
```bash
python -m venv venv
source venv/bin/activate
```
Install the requirements:
```bash
pip install -r requirements.txt
```
### Inference examples
#### Inference with cli
```bash
python -m inference.cli_demo \
--video_path "resources/car.mp4" \
--prompt "car is moving among mountains" \
--controlnet_type "hed" \
--base_model_path THUDM/CogVideoX-2b \
--controlnet_model_path TheDenk/cogvideox-2b-controlnet-hed-v1
```
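If you want to sweep over several videos or prompts, the CLI call above can be assembled programmatically. A minimal sketch, assuming you run it from the repository root; the `build_cli_command` helper is hypothetical and simply mirrors the flags shown in the bash example:

```python
import subprocess


def build_cli_command(video_path, prompt,
                      controlnet_type="hed",
                      base_model_path="THUDM/CogVideoX-2b",
                      controlnet_model_path="TheDenk/cogvideox-2b-controlnet-hed-v1"):
    """Assemble the argument list for inference.cli_demo (same flags as the example above)."""
    return [
        "python", "-m", "inference.cli_demo",
        "--video_path", video_path,
        "--prompt", prompt,
        "--controlnet_type", controlnet_type,
        "--base_model_path", base_model_path,
        "--controlnet_model_path", controlnet_model_path,
    ]


cmd = build_cli_command("resources/car.mp4", "car is moving among mountains")
# subprocess.run(cmd, check=True)  # uncomment to launch the same command as the bash example
```

This only builds the command; uncomment the `subprocess.run` line (inside a loop over your inputs) to actually run inference for each one.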
#### Inference with Gradio
```bash
python -m inference.gradio_web_demo \
--controlnet_type "hed" \
--base_model_path THUDM/CogVideoX-2b \
--controlnet_model_path TheDenk/cogvideox-2b-controlnet-hed-v1
```
## Acknowledgements
The original code and models are from [CogVideoX](https://github.com/THUDM/CogVideo/tree/main).
## Contacts
<p>Issues should be raised directly in the repository. For professional support and recommendations, please contact <a>[email protected]</a>.</p>