Instructions to use Alibaba-Research-Intelligence-Computing/Tora_T2V_diffusers with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use Alibaba-Research-Intelligence-Computing/Tora_T2V_diffusers with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Alibaba-Research-Intelligence-Computing/Tora_T2V_diffusers",
    torch_dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
- Notebooks
- Google Colab
- Kaggle
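The snippet above hard-codes `device_map="cuda"`, and its comment suggests switching to `"mps"` on Apple devices. As a sketch, a small hypothetical helper (not part of the model card) could pick an available backend at runtime instead:

```python
import torch


def pick_device() -> str:
    """Return the best available torch backend: CUDA, Apple MPS, or CPU."""
    if torch.cuda.is_available():
        return "cuda"
    # mps backend only exists on macOS builds of torch
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"
    return "cpu"


# usage, e.g.:
#   pipe = DiffusionPipeline.from_pretrained(..., device_map=pick_device())
print(pick_device())
```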
Update model card for Tora2 (#1)
Opened by nielsr (HF Staff)
This PR updates the model card to reflect the new model, Tora2, as presented in the paper Tora2: Motion and Appearance Customized Diffusion Transformer for Multi-Entity Video Generation.
Specifically, it:
- Updates the model title and abstract to reflect Tora2.
- Updates the paper link to the official Hugging Face paper page for Tora2.
- Updates the project page link to the dedicated Tora2 project page.
- Enriches the metadata with structured links for the paper, project page, and GitHub repository.
- Adds new tags (`tora2`, `multi-entity-video-generation`) for better discoverability.
- Integrates comprehensive usage sections (Installation, Inference, Training, etc.) directly from the GitHub repository README to provide a more self-contained and useful resource on the Hub.
- Updates the citation to reflect the Tora2 paper.