Instructions to use ByteDance/AnimateDiff-Lightning with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use ByteDance/AnimateDiff-Lightning with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "ByteDance/AnimateDiff-Lightning",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
- Notebooks
- Google Colab
- Kaggle
Any way to use this with Auto1111 + AnimateDiff? (#4)
opened by fullsoftwares
I looked at the demo page, this looks amazing.
As the title says, is it possible to use it with Auto1111 or standalone?
Yeah, I'm wondering the same thing! Tried just hooking up the animatediff_lightning_4step_diffusers.safetensors model in Auto1111 by selecting it under "motion module", and of course it didn't work :)
Maybe try the comfyui checkpoint.