Instructions to use brucewayne0459/OpenBioLLm-Derm with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use brucewayne0459/OpenBioLLm-Derm with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="brucewayne0459/OpenBioLLm-Derm")
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("brucewayne0459/OpenBioLLm-Derm")
model = AutoModelForCausalLM.from_pretrained("brucewayne0459/OpenBioLLm-Derm")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use brucewayne0459/OpenBioLLm-Derm with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "brucewayne0459/OpenBioLLm-Derm"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "brucewayne0459/OpenBioLLm-Derm",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```

Use Docker:

```shell
docker model run hf.co/brucewayne0459/OpenBioLLm-Derm
```
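The curl request in the vLLM example above can also be issued from Python. A minimal sketch using only the standard library; the actual POST is commented out because it requires the vLLM server to be running on localhost:8000:

```python
import json

# Same request body as the curl example above
payload = {
    "model": "brucewayne0459/OpenBioLLm-Derm",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}
body = json.dumps(payload)
print(body)

# To send it (requires the vLLM server running on localhost:8000):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8000/v1/completions",
#     data=body.encode("utf-8"),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode("utf-8"))
```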
- SGLang
How to use brucewayne0459/OpenBioLLm-Derm with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "brucewayne0459/OpenBioLLm-Derm" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "brucewayne0459/OpenBioLLm-Derm",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```

Use Docker images:
```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "brucewayne0459/OpenBioLLm-Derm" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "brucewayne0459/OpenBioLLm-Derm",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```
- Unsloth Studio
How to use brucewayne0459/OpenBioLLm-Derm with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
# Install Unsloth Studio:
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser and search for
# brucewayne0459/OpenBioLLm-Derm to start chatting
```
Install Unsloth Studio (Windows)
```powershell
# Install Unsloth Studio:
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser and search for
# brucewayne0459/OpenBioLLm-Derm to start chatting
```
Using HuggingFace Spaces for Unsloth
No setup is required: open https://huggingface.co/spaces/unsloth/studio in your browser and search for brucewayne0459/OpenBioLLm-Derm to start chatting.
Load model with FastModel
```shell
pip install unsloth
```

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="brucewayne0459/OpenBioLLm-Derm",
    max_seq_length=2048,
)
```
- Docker Model Runner
How to use brucewayne0459/OpenBioLLm-Derm with Docker Model Runner:
```shell
docker model run hf.co/brucewayne0459/OpenBioLLm-Derm
```
Model Details
Model Description
- Developed by: Bruce_Wayne (The Batman)
- Model type: Text Generation
- Finetuned from model: OpenBioLLM (Llama 3), aaditya/Llama3-OpenBioLLM-8B

GGUF versions of the model are available at https://huggingface.co/brucewayne0459/OpenBioLLm-Derm-gguf

Please let me know how the model works for you: https://forms.gle/N14zZTkLpUr6Hf4BA
Thank you!
Uses
Direct Use
This model is fine-tuned on skin disease and dermatology data and is intended to power a dermatology chatbot that provides clear, accurate, and helpful information about skin diseases, skin care routines, treatments, and related dermatological advice.
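Since the model was fine-tuned with an Alpaca-style prompt (see the Preprocessing section below), a chatbot front end would typically wrap each user question in that same template before generation. A minimal sketch; the helper name and sample question are illustrative, not part of the model's API:

```python
# Alpaca-style template matching the one used during fine-tuning
PROMPT_TEMPLATE = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
You are a highly knowledgeable and empathetic dermatologist. Provide clear, accurate, and helpful information about various skin diseases, skin care routines, treatments, and related dermatological advice.

### Input:
{}

### Response:
{}"""


def build_chat_prompt(user_question: str) -> str:
    # Leave the response slot empty: the model fills in the text
    # that follows "### Response:" at generation time.
    return PROMPT_TEMPLATE.format(user_question, "")


prompt = build_chat_prompt("What are common triggers for eczema flare-ups?")
print(prompt)
```

The resulting string is what you would tokenize and pass to `model.generate`.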
Bias, Risks, and Limitations
This model is trained on dermatology data, which may contain inherent biases. Its responses should not be considered a substitute for professional medical advice, and it may perform poorly on rare skin conditions or conditions under-represented in the training data. The model still needs further fine-tuning to improve the accuracy of its answers.
Recommendations
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "brucewayne0459/OpenBioLLm-Derm"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```
Training Details
Training Data
The model is fine-tuned on a dataset containing information about various skin diseases and dermatology care: brucewayne0459/Skin_diseases_and_care.
Training Procedure
Preprocessing
Prompt passed while fine-tuning the model:

```python
prompt_template = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
You are a highly knowledgeable and empathetic dermatologist. Provide clear, accurate, and helpful information about various skin diseases, skin care routines, treatments, and related dermatological advice.

### Input:
{}

### Response:
{}"""

EOS_TOKEN = tokenizer.eos_token  # Must add EOS_TOKEN

def formatting_prompts_func(examples):
    inputs = examples["Topic"]
    outputs = examples["Information"]
    texts = []
    for input_text, output_text in zip(inputs, outputs):
        # Fill the template and append EOS so the model learns when to stop
        texts.append(prompt_template.format(input_text, output_text) + EOS_TOKEN)
    return {"text": texts}
```
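The formatting step can be exercised without the tokenizer or dataset. A self-contained sketch using an abbreviated template and a placeholder EOS token (use `tokenizer.eos_token` in practice); the sample topic and text are illustrative:

```python
# Abbreviated version of the fine-tuning template, for demonstration only
TEMPLATE = "### Input:\n{}\n\n### Response:\n{}"

EOS_TOKEN = "</s>"  # placeholder; use tokenizer.eos_token in practice


def formatting_prompts_func(examples):
    texts = []
    for topic, info in zip(examples["Topic"], examples["Information"]):
        # Fill the template and append EOS so the model learns when to stop
        texts.append(TEMPLATE.format(topic, info) + EOS_TOKEN)
    return {"text": texts}


# Toy batch shaped like the Skin_diseases_and_care dataset columns
batch = {
    "Topic": ["Acne"],
    "Information": ["Acne is a common condition affecting hair follicles."],
}
formatted = formatting_prompts_func(batch)
print(formatted["text"][0])
```

This is the batched mapping style expected by `datasets.Dataset.map(..., batched=True)`.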
Training Hyperparameters
Training regime: the model was trained using the following hyperparameters:
- Per-device train batch size: 2
- Gradient accumulation steps: 4
- Warmup steps: 5
- Max steps: 120
- Learning rate: 2e-4
- Optimizer: AdamW (8-bit)
- Weight decay: 0.01
- LR scheduler type: linear
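With a per-device batch size of 2 and 4 gradient-accumulation steps, the effective batch size is 2 × 4 = 8 examples per optimizer step on a single GPU. As a sketch, the run's hyperparameters can be collected under Hugging Face `TrainingArguments`-style key names (kept as a plain dict here so it runs without the library; the exact key names are an assumption about how the run was configured):

```python
# Hyperparameters from the training run, keyed by TrainingArguments-style names
training_config = {
    "per_device_train_batch_size": 2,
    "gradient_accumulation_steps": 4,
    "warmup_steps": 5,
    "max_steps": 120,
    "learning_rate": 2e-4,
    "optim": "adamw_8bit",
    "weight_decay": 0.01,
    "lr_scheduler_type": "linear",
}

# Effective batch size per optimizer step on a single GPU
effective_batch = (
    training_config["per_device_train_batch_size"]
    * training_config["gradient_accumulation_steps"]
)
print(effective_batch)  # 8
```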
Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: Tesla T4 GPU
- Hours used: ~1 hour
- Cloud Provider: Google Colab
Technical Specifications
Model Architecture and Objective
This model is based on the LLaMA (Large Language Model Meta AI) architecture and fine-tuned to provide dermatological advice.
Hardware
Training was performed on a Tesla T4 GPU with 4-bit quantization and gradient checkpointing to reduce memory usage.