Instructions for using bigscience/bloom with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use bigscience/bloom with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="bigscience/bloom")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom")
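Once the pipeline is created, generation is a single call. A minimal sketch (the prompt and sampling settings are illustrative):

# Generate a continuation with the pipeline
output = pipe("Once upon a time,", max_new_tokens=50, do_sample=True, temperature=0.7)
print(output[0]["generated_text"])

- Notebooks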
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use bigscience/bloom with vLLM:
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "bigscience/bloom"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "bigscience/bloom",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
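The vLLM server speaks the OpenAI-compatible API, so it can also be called from Python. A minimal sketch using the openai client against the server started above (the "EMPTY" api_key is a placeholder, since vLLM does not require authentication by default):

from openai import OpenAI

# Point the client at the local vLLM server
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="bigscience/bloom",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)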
- SGLang
How to use bigscience/bloom with SGLang:
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "bigscience/bloom" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "bigscience/bloom",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
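The SGLang server exposes the same OpenAI-compatible completions endpoint, so the curl call above can be reproduced from Python. A minimal sketch with requests (prompt and sampling values are illustrative):

import requests

# Same payload as the curl example above, sent from Python
response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "bigscience/bloom",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["text"])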
Use Docker images

docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "bigscience/bloom" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "bigscience/bloom",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'

- Docker Model Runner
How to use bigscience/bloom with Docker Model Runner:
docker model run hf.co/bigscience/bloom
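Docker Model Runner also exposes an OpenAI-compatible API. A minimal sketch, assuming TCP host access is enabled on the default port 12434 (the port, path, and model identifier may differ in your setup; check your Docker Model Runner configuration):

import requests

# Chat completion against Docker Model Runner's OpenAI-compatible API
# (assumes TCP host access on the default port 12434)
response = requests.post(
    "http://localhost:12434/engines/v1/chat/completions",
    json={
        "model": "hf.co/bigscience/bloom",
        "messages": [{"role": "user", "content": "Once upon a time,"}],
    },
)
print(response.json()["choices"][0]["message"]["content"])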
Getting HTTP Error Code: 422 when using Inference API
I am trying to use bigscience/bloom through the Inference API, but I keep receiving HTTP Error Code: 422 (unfortunately, with no more informative message).
If I use the very same request with gpt2 (just changing the model), it works, so I assume the request is not malformed.
The request is shown below:
{
  "inputs": ["How high is Mt. Everest?"],
  "options": {},
  "parameters": {
    "max_new_tokens": 15,
    "return_full_text": false
  }
}
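For reference, this is roughly how I send it from Python (the token is a placeholder):

import requests

API_URL = "https://api-inference.huggingface.co/models/bigscience/bloom"
headers = {"Authorization": "Bearer hf_xxx"}  # placeholder token

payload = {
    "inputs": ["How high is Mt. Everest?"],
    "options": {},
    "parameters": {
        "max_new_tokens": 15,
        "return_full_text": False,
    },
}

response = requests.post(API_URL, headers=headers, json=payload)
print(response.status_code)  # 422 for bigscience/bloom, 200 for gpt2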
Am I doing something wrong? Thanks.
+1, I get this all the time
Same, does anyone know a fix for this?
I use this code and it works for me (albeit for Bloomz-3b):
import requests as r

ENDPOINT_URL = ""  # url of your endpoint
HF_TOKEN = ""  # your Hugging Face access token

# payload samples
regular_payload = {"inputs": "translate English to German: The weather is nice today."}
parameter_payload = {
    "inputs": "translate English to French: Hello my name is Philipp and I am a Technical Leader at Hugging Face",
    "parameters": {
        "max_length": 40,
    },
}

# HTTP headers for authorization
headers = {
    "Authorization": f"Bearer {HF_TOKEN}",
    "Content-Type": "application/json",
}

# send request
response = r.post(ENDPOINT_URL, headers=headers, json=parameter_payload)
generated_text = response.json()
print(generated_text)
For those who have the same problem: on my side it came from a change in the API; the output format had changed, so max_new_tokens had to be decreased. Most of the time, a 422 error comes from a problem with the parameters.
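A quick way to find the offending parameter is to print the response body instead of just the status code. A minimal sketch (model URL and token are placeholders):

import requests

API_URL = "https://api-inference.huggingface.co/models/bigscience/bloom"
headers = {"Authorization": "Bearer hf_xxx"}  # placeholder token

response = requests.post(
    API_URL,
    headers=headers,
    json={"inputs": "How high is Mt. Everest?", "parameters": {"max_new_tokens": 15}},
)
if not response.ok:
    # The body usually names the invalid parameter
    print(response.status_code, response.text)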
I also faced this issue with huggingface_hub==0.23.2, but huggingface_hub==0.22.2 did not give this 422 error!