Tags: Text Generation · Transformers · Safetensors · llama · text-generation-inference · 8-bit precision · bitsandbytes
Instructions for using samadpls/querypls-prompt2sql with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use samadpls/querypls-prompt2sql with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="samadpls/querypls-prompt2sql")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("samadpls/querypls-prompt2sql")
model = AutoModelForCausalLM.from_pretrained("samadpls/querypls-prompt2sql")
```
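Once the pipeline is loaded, generation is a single call. A minimal smoke test; the prompt wording is illustrative, since the exact prompt format the model was tuned on is an assumption here:

```python
# Illustrative prompt; check the model card for the expected prompt format.
prompt = "Write a SQL query that selects all users older than 30."
result = pipe(prompt, max_new_tokens=128)
print(result[0]["generated_text"])
```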
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use samadpls/querypls-prompt2sql with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "samadpls/querypls-prompt2sql"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "samadpls/querypls-prompt2sql",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
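Because the server exposes an OpenAI-compatible API, you can also call it from Python with the openai client instead of curl. A minimal sketch: the base_url points at the local server, and the api_key is a placeholder since the local server does not require authentication by default.

```python
# pip install openai
from openai import OpenAI

# Placeholder key: the local vLLM server ignores it by default.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="samadpls/querypls-prompt2sql",
    prompt="Write a SQL query that selects all users older than 30.",
    max_tokens=128,
    temperature=0.5,
)
print(completion.choices[0].text)
```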
Use Docker:

```shell
docker model run hf.co/samadpls/querypls-prompt2sql
```
- SGLang
How to use samadpls/querypls-prompt2sql with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "samadpls/querypls-prompt2sql" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "samadpls/querypls-prompt2sql",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
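SGLang serves the same OpenAI-compatible API as vLLM, so the Python openai client sketch from the vLLM section works here unchanged; just point base_url at http://localhost:30000/v1.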
Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "samadpls/querypls-prompt2sql" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "samadpls/querypls-prompt2sql",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use samadpls/querypls-prompt2sql with Docker Model Runner:
```shell
docker model run hf.co/samadpls/querypls-prompt2sql
```
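Run without a prompt argument, docker model run drops into an interactive chat. Recent versions of Docker Model Runner also accept a one-shot prompt on the command line (the prompt text below is illustrative):

```shell
# One-shot generation; the prompt text is illustrative.
docker model run hf.co/samadpls/querypls-prompt2sql "Write a SQL query that selects all users older than 30."
```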
The repo also ships a custom handler.py, following the EndpointHandler convention used by Hugging Face Inference Endpoints:

```python
import torch
from typing import Any, Dict, List

from transformers import AutoModelForCausalLM, AutoTokenizer


class EndpointHandler:
    def __init__(self, path=""):
        # Load model and tokenizer from the given path.
        self.tokenizer = AutoTokenizer.from_pretrained(path)
        self.model = AutoModelForCausalLM.from_pretrained(
            path, device_map="auto", torch_dtype=torch.float16, trust_remote_code=True
        )
        self.device = "cuda" if torch.cuda.is_available() else "cpu"

    def __call__(self, data: Dict[str, Any]) -> List[Dict[str, str]]:
        # Process input.
        inputs = data.pop("inputs", data)
        parameters = data.pop("parameters", None)

        # Preprocess: tokenize and move tensors to the model's device.
        inputs = self.tokenizer(inputs, return_tensors="pt").to(self.device)

        # Pass inputs along with any generation kwargs from the request.
        if parameters is not None:
            outputs = self.model.generate(**inputs, **parameters)
        else:
            outputs = self.model.generate(**inputs)

        # Postprocess the prediction.
        prediction = self.tokenizer.decode(outputs[0], skip_special_tokens=True)
        return [{"generated_text": prediction}]
```
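A minimal local smoke test for the handler, assuming the model weights can be fetched from the Hub. The payload shape mirrors what the handler itself reads: an "inputs" string plus optional "parameters" forwarded to generate (the specific generation kwargs below are illustrative):

```python
# Hypothetical local invocation for illustration.
handler = EndpointHandler(path="samadpls/querypls-prompt2sql")
response = handler({
    "inputs": "Write a SQL query that counts orders per customer.",
    "parameters": {"max_new_tokens": 128, "do_sample": False},
})
print(response[0]["generated_text"])
```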