Instructions to use Lightricks/LTX-Video with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use Lightricks/LTX-Video with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "Lightricks/LTX-Video",
    torch_dtype=torch.bfloat16,
)
# switch to "mps" for Apple devices
pipe.to("cuda")
# optional, for low-VRAM GPUs: use pipe.enable_model_cpu_offload() instead of pipe.to("cuda")

prompt = "A man with short gray hair plays a red electric guitar."
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png"
)

output = pipe(image=image, prompt=prompt).frames[0]
export_to_video(output, "output.mp4")
```

- Inference
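One detail worth knowing before changing the generation settings: LTX-Video expects `num_frames` of the form 8*k + 1 and height/width divisible by 32. A small helper for snapping arbitrary values to nearby valid ones — these functions are hypothetical, not part of diffusers:

```python
def snap_num_frames(n: int) -> int:
    """Snap n to the nearest valid LTX-Video frame count (8*k + 1)."""
    k = max(0, round((n - 1) / 8))
    return 8 * k + 1

def snap_dim(px: int) -> int:
    """Snap a height/width to the nearest multiple of 32."""
    return max(32, 32 * round(px / 32))

print(snap_num_frames(73))   # 73 (already valid)
print(snap_num_frames(100))  # 97
print(snap_dim(768), snap_dim(500))  # 768 512
```

Passing already-valid values (like the 73 frames used in the comment below) leaves them unchanged.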
- Notebooks
- Google Colab
- Kaggle
Managed to run on my GTX 1650
Got this running on a GTX 1650 (4 GB VRAM, laptop GPU). Turns out you don't need much VRAM to get solid results!
Model: ltx-video-2b-v0.9.5.safetensors
VRAM usage: ~2.7–3.5 GB
Render time: ~3–9 minutes total (for the 640×480 runs)
Resolution: The sample videos below are 640×480 (except the last one); a test at 768×512 took ~27 minutes total.
Steps: 30
Frames: 73
FPS: 24
Environment: i5-12450H, 20 GB RAM, Windows 11, SSD
I mostly left the settings at default, just adjusted frame count, resolution, and FPS.
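For context, the settings above (73 frames at 24 FPS) work out to roughly a three-second clip:

```python
frames = 73
fps = 24
duration_s = frames / fps  # frames divided by playback rate
print(f"{duration_s:.2f} s")  # 3.04 s
```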
Really impressed with this: 9.5/10, highly recommend.
Example Videos:
Thank you for the positive feedback! If you run into any issues, please let us know.
I do have a question; will the spatial and temporal upscalers work with the 2b 0.9.5 model?
Hi, yes, they will!
I have tried using the upscalers with both the dev and distilled fp8 2B models, yet there appears to be no visual difference. I have also gotten terminal output stating that the LoRA keys were not loaded, so I will try the 13B model next.