ORANSight-2.0: Mistral Collection

All the Mistral models belonging to the first release of the ORANSight family of models from the NextG Lab @ NCSU.
How to use NextGLab/ORANSight_Mistral_Nemo_Instruct with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="NextGLab/ORANSight_Mistral_Nemo_Instruct")
messages = [
{"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Or load the model and tokenizer directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("NextGLab/ORANSight_Mistral_Nemo_Instruct")
model = AutoModelForCausalLM.from_pretrained("NextGLab/ORANSight_Mistral_Nemo_Instruct")
messages = [
{"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
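If a GPU is available, the model can also be loaded in reduced precision to cut memory use. A minimal sketch, assuming torch and accelerate are installed; the bf16 dtype and automatic device placement are illustrative choices, not part of the original instructions:

# Optional: load in half precision with automatic device placement (requires accelerate)
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "NextGLab/ORANSight_Mistral_Nemo_Instruct",
    torch_dtype=torch.bfloat16,  # assumption: a GPU with bf16 support; use float16 otherwise
    device_map="auto",           # lets accelerate place layers on available devices
)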
How to use NextGLab/ORANSight_Mistral_Nemo_Instruct with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "NextGLab/ORANSight_Mistral_Nemo_Instruct"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "NextGLab/ORANSight_Mistral_Nemo_Instruct",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
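Because vLLM exposes an OpenAI-compatible API, the same endpoint can also be called from Python with the openai client. A minimal sketch, assuming the server above is running locally; the api_key value is a placeholder, since vLLM does not require one by default:

# Call the vLLM server from Python via its OpenAI-compatible API
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # placeholder key
response = client.chat.completions.create(
    model="NextGLab/ORANSight_Mistral_Nemo_Instruct",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)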
How to use NextGLab/ORANSight_Mistral_Nemo_Instruct with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "NextGLab/ORANSight_Mistral_Nemo_Instruct" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "NextGLab/ORANSight_Mistral_Nemo_Instruct",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
# Alternatively, run the SGLang server with Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "NextGLab/ORANSight_Mistral_Nemo_Instruct" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "NextGLab/ORANSight_Mistral_Nemo_Instruct",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
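The SGLang server speaks the same OpenAI-compatible protocol, so it can also be queried from Python. A minimal sketch using the requests library; the payload simply mirrors the curl call above:

# Call the SGLang server from Python via its OpenAI-compatible API
import requests

payload = {
    "model": "NextGLab/ORANSight_Mistral_Nemo_Instruct",
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
}
response = requests.post("http://localhost:30000/v1/chat/completions", json=payload)
print(response.json()["choices"][0]["message"]["content"])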
How to use NextGLab/ORANSight_Mistral_Nemo_Instruct with Docker Model Runner:
docker model run hf.co/NextGLab/ORANSight_Mistral_Nemo_Instruct
This model belongs to the first release of the ORANSight family of models.
Below is a quick example of how to use the model with Hugging Face Transformers:
from transformers import pipeline
# Example query
messages = [
{"role": "system", "content": "You are an O-RAN expert assistant."},
{"role": "user", "content": "Explain the E2 interface."},
]
# Load the model
chatbot = pipeline("text-generation", model="NextGLab/ORANSight_Mistral_Nemo_Instruct")
result = chatbot(messages)
print(result)
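Standard generate keyword arguments can be passed through the pipeline call to control decoding. A minimal sketch; the specific values below are illustrative, not tuned recommendations from the ORANSight authors:

# Pass generation settings through the pipeline call
result = chatbot(
    messages,
    max_new_tokens=256,  # cap the length of the reply
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.7,     # illustrative value
)
print(result[0]["generated_text"][-1])  # the assistant's reply message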
A detailed paper documenting the experiments and results achieved with this model will be available soon. In the meantime, if you use this model, please cite the paper below to acknowledge the foundational work that enabled this fine-tuning.
@article{gajjar2024oran,
title={ORAN-Bench-13K: An Open Source Benchmark for Assessing LLMs in Open Radio Access Networks},
author={Gajjar, Pranshav and Shah, Vijay K},
journal={arXiv preprint arXiv:2407.06245},
year={2024}
}