Instructions to use diffusers/FLUX.2-dev-bnb-4bit with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use diffusers/FLUX.2-dev-bnb-4bit with Diffusers:
```
pip install -U diffusers transformers accelerate bitsandbytes
```
```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "diffusers/FLUX.2-dev-bnb-4bit",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Turn this cat into a dog"
input_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png"
)
image = pipe(image=input_image, prompt=prompt).images[0]
```
- Notebooks
- Google Colab
- Kaggle
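The `# switch to "mps" for Apple devices` comment in the snippet above can be handled automatically with a small device-selection check. A minimal sketch, assuming a recent PyTorch build where `torch.backends.mps` is available:

```python
import torch

# Pick the best available device: CUDA GPU, Apple Silicon ("mps"), or CPU fallback.
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"

print(f"Using device: {device}")
```

The resulting string can then be passed as `device_map=device` when calling `DiffusionPipeline.from_pretrained`.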
Does this work in Comfyui?
#1 · opened by Winnougan
Thanks for the 4-bit quants of FLUX.2. Does this work in ComfyUI? Do you have a workflow? Thanks
Hi, I don't really know; this was created for using the model with diffusers. I'm not sure if there's a way to load it with ComfyUI, and for the same reason I don't have a workflow.
> Thanks for the 4-bit quants of FLUX.2. Does this work in ComfyUI? Do you have a workflow? Thanks
I'm sure it does not work; ComfyUI doesn't support diffusers-format checkpoints at all.