How to use Jingya/tiny-stable-video-diffusion-img2vid with Diffusers:
Install the required libraries:

```bash
pip install -U diffusers transformers accelerate
```
Then run image-to-video generation:

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Switch "cuda" to "mps" for Apple devices.
pipe = DiffusionPipeline.from_pretrained(
    "Jingya/tiny-stable-video-diffusion-img2vid",
    torch_dtype=torch.bfloat16,
).to("cuda")

# This is an image-to-video checkpoint: the pipeline is conditioned on an
# input image rather than a text prompt.
image = load_image("path/to/conditioning_image.png")  # replace with your own image
frames = pipe(image, decode_chunk_size=8).frames[0]
export_to_video(frames, "generated.mp4", fps=7)
```
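The call returns one list of PIL frames per input image, and `export_to_video` writes them to an MP4 at the given frame rate. Because this checkpoint holds tiny, randomly initialized weights, the output frames will be noise rather than a coherent video.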
This is a random dummy model for stable video diffusion, inspired by the test script in the diffusers library (script for creating the repo here).
This model is intended for internal testing only; please do not use it in other scenarios.
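For that internal-testing use case, a minimal smoke test along these lines is a reasonable sketch, assuming the repo follows the standard StableVideoDiffusionPipeline layout; the test name, image size, frame count, and step count below are illustrative assumptions, not taken from this repo:

```python
import torch
from diffusers import DiffusionPipeline
from PIL import Image

def test_tiny_svd_smoke():
    # The tiny random weights make this fast to load and run, even on CPU.
    pipe = DiffusionPipeline.from_pretrained(
        "Jingya/tiny-stable-video-diffusion-img2vid"
    )

    # Condition on a blank dummy image (the size is an arbitrary choice here).
    image = Image.new("RGB", (64, 64))

    # Keep resolution, frame count, and step count minimal: the test exercises
    # the code path, not output quality.
    out = pipe(
        image,
        height=64,
        width=64,
        num_frames=2,
        num_inference_steps=2,
        decode_chunk_size=1,
        generator=torch.manual_seed(0),
    )
    frames = out.frames[0]
    assert len(frames) == 2  # one PIL frame per requested video frame
```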