---
base_model:
- Wan-AI/Wan2.1-T2V-1.3B
- Wan-AI/Wan2.1-T2V-1.3B-Diffusers
library_name: diffusers
pipeline_tag: text-to-video
---
# Towards Suturing World Models (Wan, t2v)
This repository hosts a fine-tuned Wan2.1-T2V-1.3B text-to-video (t2v) diffusion model specialized for generating realistic robotic surgical suturing videos. It captures fine-grained sub-stitch actions, including needle positioning, needle targeting, needle driving, and needle withdrawal, and can differentiate between ideal and non-ideal surgical technique, making it suitable for surgical training, skill evaluation, and the development of autonomous surgical systems.
## Model Details
- **Base Model**: Wan2.1-T2V-1.3B
- **Resolution**: 768×512 pixels (Adjustable)
- **Frame Length**: 49 frames per generated video (Adjustable)
- **Fine-tuning Method**: Low-Rank Adaptation (LoRA)
- **Data Source**: Annotated laparoscopic surgery exercise videos (∼2,000 clips)
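Since the card's metadata lists `diffusers` as the library, the LoRA should also be loadable through the Diffusers Wan pipeline, in addition to the DiffSynth path shown in the next section. The sketch below is a minimal, untested example; it assumes the LoRA checkpoint is stored in a format `load_lora_weights` can parse, and that 768×512 denotes width×height.
```python
import torch
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.utils import export_to_video

# Load the Diffusers-format base model; the Wan2.1 examples in Diffusers
# keep the VAE in float32 for quality.
vae = AutoencoderKLWan.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", subfolder="vae", torch_dtype=torch.float32
)
pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", vae=vae, torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# Assumption: the LoRA in this repository is in a format Diffusers accepts.
pipe.load_lora_weights("mehmetkeremturkcan/Suturing-Wan2.1-1.3B-T2V")

frames = pipe(
    prompt="A needledrivingnonideal clip, generated from a backhand task.",
    height=512,
    width=768,
    num_frames=49,
    num_inference_steps=50,
).frames[0]
export_to_video(frames, "video.mp4", fps=30)
```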
## Usage Example
```python
import torch
from diffsynth import ModelManager, WanVideoPipeline, save_video, VideoData

# Load the base Wan2.1-T2V-1.3B weights (DiT, T5 text encoder, and VAE);
# adjust the paths to wherever the base checkpoints are stored locally.
model_manager = ModelManager(torch_dtype=torch.bfloat16, device="cpu")
model_manager.load_models([
    "../Wan2.1-T2V-1.3B/diffusion_pytorch_model.safetensors",
    "../Wan2.1-T2V-1.3B/models_t5_umt5-xxl-enc-bf16.pth",
    "../Wan2.1-T2V-1.3B/Wan2.1_VAE.pth",
])

# Apply the suturing LoRA from this repository.
model_manager.load_lora("mehmetkeremturkcan/Suturing-Wan2.1-1.3B-T2V", lora_alpha=1.0)

pipe = WanVideoPipeline.from_model_manager(model_manager, device="cuda")
pipe.enable_vram_management(num_persistent_param_in_dit=None)  # offload to save VRAM

# The prompt concatenates a sub-stitch action ("needledriving"), a quality
# label ("nonideal"), and the source task direction ("backhand").
video = pipe(
    prompt="A needledrivingnonideal clip, generated from a backhand task.",
    num_inference_steps=50,
    tiled=True,
)
save_video(video, "video.mp4", fps=30, quality=5)
```
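The prompt format concatenates a sub-stitch action, a quality label, and the task's hand direction, as in the example above. A small helper along these lines can enumerate the conditions; note that the exact action/quality/task vocabulary below is inferred from the model description and the example prompt, not an official token list. The generation call also demonstrates the adjustable resolution and frame length, assuming the DiffSynth pipeline accepts `height`, `width`, and `num_frames` keyword arguments as in its other Wan examples.
```python
# Hypothetical prompt builder; the vocabulary is inferred from the model
# description and the example prompt, not an official token list.
ACTIONS = ["needlepositioning", "needletargeting", "needledriving", "needlewithdrawal"]
QUALITIES = ["ideal", "nonideal"]
TASKS = ["forehand", "backhand"]

def build_prompt(action: str, quality: str, task: str) -> str:
    """Compose a prompt in the same pattern as the example above."""
    assert action in ACTIONS and quality in QUALITIES and task in TASKS
    return f"A {action}{quality} clip, generated from a {task} task."

# Resolution and frame length are adjustable; this assumes the pipeline
# exposes height/width/num_frames.
video = pipe(
    prompt=build_prompt("needlewithdrawal", "ideal", "backhand"),
    num_inference_steps=50,
    height=512,
    width=768,
    num_frames=49,
    tiled=True,
)
save_video(video, "needlewithdrawal_ideal.mp4", fps=30, quality=5)
```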
## Applications
- **Surgical Training**: Generate demonstrations of both ideal and non-ideal surgical techniques for training purposes.
- **Skill Evaluation**: Assess surgical skills by comparing actual procedures against model-generated standards.
- **Robotic Automation**: Inform autonomous surgical robotic systems, enabling real-time guidance and procedure automation.
## Quantitative Performance
| Metric                 | Value                  |
|------------------------|------------------------|
| L2 Reconstruction Loss | 0.0667                 |
| Inference Time         | ~360 seconds per video |
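For reference, an L2 reconstruction loss can be computed as the mean squared error between generated and ground-truth frames. The sketch below illustrates one plausible formulation, assuming both clips are arrays normalized to [0, 1]; the exact evaluation protocol behind the number above is not specified in this card, and the function name is illustrative.
```python
import numpy as np

def l2_reconstruction_loss(generated: np.ndarray, reference: np.ndarray) -> float:
    """Mean squared error between two videos of shape (frames, H, W, C)
    with pixel values in [0, 1]. Illustrative only; not necessarily the
    exact protocol used to produce the figure reported above."""
    assert generated.shape == reference.shape
    diff = generated.astype(np.float64) - reference.astype(np.float64)
    return float(np.mean(diff ** 2))
```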
## Future Directions
Further improvements will focus on increasing model robustness, expanding dataset diversity, and enhancing real-time applicability in robotic surgical scenarios.