Instructions to use linyq/kiwi-edit-5b-instruct-reference-diffusers with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use linyq/kiwi-edit-5b-instruct-reference-diffusers with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

# device_map="cuda" places the pipeline on the GPU at load time;
# switch to "mps" for Apple devices.
pipe = DiffusionPipeline.from_pretrained(
    "linyq/kiwi-edit-5b-instruct-reference-diffusers",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "A man with short gray hair plays a red electric guitar."
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png"
)
output = pipe(image=image, prompt=prompt).frames[0]
export_to_video(output, "output.mp4")
```
- Notebooks
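The snippet above hard-codes `"cuda"` and notes that Apple devices should use `"mps"`. A minimal sketch of picking the device at runtime with plain `torch` (the `pick_device` helper is hypothetical, not part of Diffusers or this repository):

```python
import torch


def pick_device() -> str:
    """Return the best available torch device string."""
    if torch.cuda.is_available():
        return "cuda"  # NVIDIA GPU
    # Apple-silicon Metal backend, per the "mps" comment above
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"
    return "cpu"


device = pick_device()
print(device)
```

The returned string could then be passed as `device_map=pick_device()` when loading the pipeline, so the same script runs on CUDA, Apple silicon, or CPU without edits.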
- Google Colab
- Kaggle
Add model card for Kiwi-Edit (#2)
opened by nielsr (HF Staff)
Hi! I'm Niels, part of the community science team at Hugging Face. I noticed this repository was missing a model card, so I've opened this PR to add one.
The model card includes:
- Metadata for the `diffusers` library and the `image-to-video` pipeline tag.
- Links to the original paper, project page, and GitHub repository.
- A brief description of the model's capabilities (instruction and reference-guided video editing).
- CLI usage instructions for running the model with Diffusers, based on the official repository.
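The metadata bullet above corresponds to YAML front matter at the top of the model card's README on the Hub; a minimal sketch, with field names following the Hub's model-card metadata conventions and values mirroring this PR:

```yaml
# YAML front matter in README.md on the Hugging Face Hub
library_name: diffusers
pipeline_tag: image-to-video
```

These two fields are what make the model appear under the Diffusers library filter and the image-to-video task filter in Hub search.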
This information helps users discover and use your work more effectively on the Hugging Face Hub.
linyq changed pull request status to merged