zsxkib committed
Commit 6652862 · verified · Parent(s): 32f6b27

Upload hugging-face-readme-template.md with huggingface_hub

Files changed (1)
  1. hugging-face-readme-template.md +95 -0
hugging-face-readme-template.md ADDED

---
license: apache-2.0
language:
- en
- zh
tags:
- image-to-video
- lora
- replicate
- text-to-video
- video
- video-generation
base_model: "Wan-AI/Wan2.1-${t2v_or_i2v}2V-${model_type}-Diffusers"
pipeline_tag: ${pipeline_tag}
# widget:
# - text: >-
#     prompt
#   output:
#     url: https://...
$instance_prompt
---

# $title

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the Wan ${model_type} ${readable_finetuning_type} model.

It can be used with diffusers or ComfyUI, and can be applied to the Wan ${model_type} base models.

It was trained on [Replicate](https://replicate.com/) for ${max_training_steps} steps at a learning rate of ${learning_rate} with a LoRA rank of ${lora_rank}.
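
If you want the raw `.safetensors` weights, for example to load them into ComfyUI, you can download them straight from this repo. A minimal sketch using `huggingface_hub` (the repo and filename come from this template's placeholders):

```py
# Download the LoRA weights into the local Hugging Face cache and
# get back the file path (e.g. for loading in ComfyUI).
from huggingface_hub import hf_hub_download

lora_path = hf_hub_download(
    repo_id="$repo_id",
    filename="$lora_filename.safetensors",
)
print(lora_path)
```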

$trigger_section

## Use this LoRA

Replicate has a collection of Wan models that are optimised for speed and cost. They can also be used with this LoRA:

- https://replicate.com/collections/wan-video
- https://replicate.com/fofr/wan-with-lora

### Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "$trigger_word",
    "lora_url": "https://huggingface.co/$repo_id/resolve/main/$lora_filename.safetensors"
}

output = replicate.run(
    "fofr/wan-with-lora:latest",
    model="${model_type}",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.mp4", "wb") as file:
        file.write(item.read())
```
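
The `replicate` client reads your API token from the `REPLICATE_API_TOKEN` environment variable, so set it before running the snippet above. A minimal sketch (the token value is a placeholder):

```py
# Assumes you already have a Replicate API token; replace the placeholder.
import os
os.environ["REPLICATE_API_TOKEN"] = "<your-token>"
```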

### Using with Diffusers

```py
import torch
from diffusers import WanPipeline
from diffusers.utils import export_to_video

# Load base model (for I2V checkpoints use WanImageToVideoPipeline instead)
pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-${t2v_or_i2v}2V-${model_type}-Diffusers",
    torch_dtype=torch.float16,
)
pipe.to("cuda")

# Load and apply the LoRA weights from this repo
pipe.load_lora_weights("$repo_id")

# Generate video
prompt = "$trigger_word"
negative_prompt = "blurry, low quality, low resolution"

# Generate video frames
$generation_code

# Save as video
video_path = "output.mp4"
export_to_video(frames, video_path, fps=16)
print(f"Video saved to: {video_path}")
```
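
The `$generation_code` placeholder is filled in when the template is rendered. Purely as an illustration of what a text-to-video call with `WanPipeline` tends to look like (the parameter values below are assumptions, not the template's actual expansion):

```py
# Illustrative sketch only: typical WanPipeline text-to-video arguments.
frames = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    height=480,
    width=832,
    num_frames=81,
    guidance_scale=5.0,
).frames[0]
```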

$training_details

## Contribute your own examples

You can use the [community tab](https://huggingface.co/$repo_id/discussions) to add videos that show off what you've made with this LoRA.