---
license: apache-2.0
language:
- en
- zh
tags:
- image-to-video
- lora
- replicate
- text-to-video
- video
- video-generation
base_model: "Wan-AI/Wan2.1-I2V-14B-Diffusers"
pipeline_tag: image-to-video
# widget:
# - text: >-
#     prompt
#   output:
#     url: https://...
instance_prompt: SQUISH-IT
---

# Squish Pika Lora

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the Wan 14B Image-to-Video model.

It can be used with diffusers or ComfyUI, and can be loaded against the Wan 14B models.

It was trained on [Replicate](https://replicate.com/) with 10 steps at a learning rate of 2e-05 and a LoRA rank of 32.

## Trigger word

You should use `SQUISH-IT` to trigger the video generation.
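For example, a prompt might lead with the trigger word and then describe the scene (the scene text below is illustrative, not from the training data):

```py
# Illustrative only: lead with the trigger word, then describe the scene
prompt = "SQUISH-IT, a ceramic mug on a wooden table is squished like soft clay"
```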

## Use this LoRA

Replicate has a collection of Wan models that are optimised for speed and cost. They can also be used with this LoRA:

- https://replicate.com/collections/wan-video
- https://replicate.com/fofr/wan-with-lora

### Run this LoRA with an API using Replicate

The snippet below runs the latest version of `fofr/wan-with-lora`; the `model` input selecting the 14B variant is assumed from that model's options, so check its Replicate page if it differs.

```py
import replicate

input = {
    "prompt": "SQUISH-IT",
    "model": "14b",  # assumed input selecting the 14B variant of Wan
    "lora_url": "https://huggingface.co/zsxkib/squish-pika-lora/resolve/main/wan-14b-i2v-squish-it-lora.safetensors"
}

# Omitting a version hash runs the model's latest version
output = replicate.run(
    "fofr/wan-with-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.mp4", "wb") as file:
        file.write(item.read())
```
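The `replicate` Python client reads your API token from the `REPLICATE_API_TOKEN` environment variable; a minimal setup, with a placeholder token:

```py
import os

# Placeholder value; create a real token in your Replicate account settings
os.environ["REPLICATE_API_TOKEN"] = "<your-token>"
```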

### Using with Diffusers

Recent diffusers releases expose Wan 2.1 image-to-video as `WanImageToVideoPipeline`, and LoRA weights can be attached with `load_lora_weights`. The sketch below assumes the base model ID and weight filename from this repo's metadata; adjust them if your checkpoint differs.

```py
import torch
from diffusers import WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

# Load the base image-to-video pipeline
pipe = WanImageToVideoPipeline.from_pretrained(
    "Wan-AI/Wan2.1-I2V-14B-Diffusers", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# Load the LoRA weights from this repository
pipe.load_lora_weights(
    "zsxkib/squish-pika-lora",
    weight_name="wan-14b-i2v-squish-it-lora.safetensors",
)

# Load the input image
image = load_image("path/to/your/image.jpg")

prompt = "SQUISH-IT"
negative_prompt = "blurry, low quality, low resolution"

# Generate video frames from the image
frames = pipe(
    image=image,
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=30,
    guidance_scale=5.0,
    num_frames=33,  # Wan expects 4k + 1 frames (e.g. 33 or 81)
).frames[0]

# Save as video (fps is set at export time, not in the pipeline call)
video_path = "output.mp4"
export_to_video(frames, video_path, fps=16)
print(f"Video saved to: {video_path}")
```
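The 14B model is large, so diffusers' standard offloading hooks are worth knowing about; a minimal tweak before generation:

```py
# Optional: trade generation speed for lower peak VRAM
pipe.enable_model_cpu_offload()
```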

## Training details

- Steps: 10
- Learning rate: 2e-05
- LoRA rank: 32

## Contribute your own examples

You can use the [community tab](https://huggingface.co/zsxkib/squish-pika-lora/discussions) to add videos that show off what you've made with this LoRA.