Instructions for using ethanolivertroy/HackIDLE-NIST-Coder-MLX-4bit with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- MLX
How to use ethanolivertroy/HackIDLE-NIST-Coder-MLX-4bit with MLX:
```python
# Make sure mlx-lm is installed
# pip install --upgrade mlx-lm

# Generate text with mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("ethanolivertroy/HackIDLE-NIST-Coder-MLX-4bit")

prompt = "Write a story about Einstein"
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True
)
text = generate(model, tokenizer, prompt=prompt, verbose=True)
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
- Pi
How to use ethanolivertroy/HackIDLE-NIST-Coder-MLX-4bit with Pi:
Start the MLX server
```shell
# Install MLX LM:
uv tool install mlx-lm

# Start a local OpenAI-compatible server:
mlx_lm.server --model "ethanolivertroy/HackIDLE-NIST-Coder-MLX-4bit"
```
Configure the model in Pi
```shell
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```

Add to ~/.pi/agent/models.json:

```json
{
  "providers": {
    "mlx-lm": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "ethanolivertroy/HackIDLE-NIST-Coder-MLX-4bit" }
      ]
    }
  }
}
```

Run Pi
```shell
# Start Pi in your project directory:
pi
```
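Before starting Pi, it can help to confirm that the provider entry actually parses and points at the server you just started. This is a small standard-library sketch, with the JSON from the step above embedded inline so it can be checked directly; the file path and schema are taken from the Pi instructions, not verified independently:

```python
import json

# The same provider entry as in ~/.pi/agent/models.json, embedded here
# so it can be validated before Pi reads it.
models_json = """
{
  "providers": {
    "mlx-lm": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "ethanolivertroy/HackIDLE-NIST-Coder-MLX-4bit" }
      ]
    }
  }
}
"""

config = json.loads(models_json)  # raises ValueError if the JSON is malformed
provider = config["providers"]["mlx-lm"]
print(provider["baseUrl"])          # the mlx_lm.server address Pi will call
print(provider["models"][0]["id"])  # the model id Pi will request
```

A parse error here means Pi would fail to read the file too, so it is worth running after any hand edit to models.json.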
- Hermes Agent
How to use ethanolivertroy/HackIDLE-NIST-Coder-MLX-4bit with Hermes Agent:
Start the MLX server
```shell
# Install MLX LM:
uv tool install mlx-lm

# Start a local OpenAI-compatible server:
mlx_lm.server --model "ethanolivertroy/HackIDLE-NIST-Coder-MLX-4bit"
```
Configure Hermes
```shell
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default ethanolivertroy/HackIDLE-NIST-Coder-MLX-4bit
```
Run Hermes
```shell
hermes
```
- MLX LM
How to use ethanolivertroy/HackIDLE-NIST-Coder-MLX-4bit with MLX LM:
Generate or start a chat session
```shell
# Install MLX LM
uv tool install mlx-lm

# Interactive chat REPL
mlx_lm.chat --model "ethanolivertroy/HackIDLE-NIST-Coder-MLX-4bit"
```
Run an OpenAI-compatible server
```shell
# Install MLX LM
uv tool install mlx-lm

# Start the server
mlx_lm.server --model "ethanolivertroy/HackIDLE-NIST-Coder-MLX-4bit"

# Call the OpenAI-compatible server with curl (mlx_lm.server listens on port 8080 by default)
curl -X POST "http://localhost:8080/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ethanolivertroy/HackIDLE-NIST-Coder-MLX-4bit",
    "messages": [
      {"role": "user", "content": "Hello"}
    ]
  }'
```
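The same request can be issued from Python with only the standard library. This is a sketch that builds the identical chat-completions payload; it assumes the server is running locally on its default port 8080, and the actual network call is left commented out:

```python
import json
from urllib import request

# Build the same chat-completions payload the curl example sends.
payload = {
    "model": "ethanolivertroy/HackIDLE-NIST-Coder-MLX-4bit",
    "messages": [{"role": "user", "content": "Hello"}],
}
body = json.dumps(payload).encode("utf-8")

req = request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=body,
    headers={"Content-Type": "application/json"},
)

# With the server running, uncomment to send the request and print the reply:
# with request.urlopen(req) as resp:
#     reply = json.load(resp)
#     print(reply["choices"][0]["message"]["content"])
print(req.full_url)
```

Because the server speaks the OpenAI chat-completions schema, any OpenAI-compatible client library pointed at `http://localhost:8080/v1` should also work.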
HackIDLE-NIST-Coder (MLX 4-bit)
This is the first MLX build of HackIDLE-NIST-Coder, a NIST-focused local model built from Qwen2.5-Coder-7B-Instruct and fine-tuned on a NIST cybersecurity corpus.
This repo is kept for reproducibility. For new testing, start with the v1.1 MLX build:
Use this model as a helper. Do not treat it as a source of truth for exact control names, RMF step lists, or reference-architecture component names without checking the source publication.
Training data
This first build used 523,706 examples from 568 NIST cybersecurity documents.
Training dataset:
Current eval status
The dated smoke eval from April 22, 2026 was run against the Ollama `latest` tag, which at the time of that check resolved to the v1.1 line in the local install. I have not rerun that exact eval against this older MLX build.
The v1.1 result matters for this older build too because it sets the right expectation for the model family: the model can stay in-domain while still missing exact NIST structure.
Be careful with:
- exact control names
- exact RMF step ordering
- exact SP 800-207 component naming
- source-level answers that need to be right on the first pass
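For the first bullet in particular, a cheap guard is to check that any control identifiers the model emits at least match the SP 800-53 naming shape (two-letter family code, dash, control number, optional enhancement number in parentheses). This is an illustrative regex only; it cannot tell you whether a control actually exists, so it never replaces checking the publication:

```python
import re

# SP 800-53-style control IDs: family code, dash, control number,
# and an optional enhancement in parentheses, e.g. AC-2(3).
CONTROL_ID = re.compile(r"[A-Z]{2}-\d+(\(\d+\))?")

def looks_like_control_id(s: str) -> bool:
    """Shape check only -- a match does not mean the control exists."""
    return CONTROL_ID.fullmatch(s) is not None

print(looks_like_control_id("AC-2"))     # True
print(looks_like_control_id("AC-2(3)"))  # True
print(looks_like_control_id("AC_2"))     # False
```

A malformed identifier is a strong signal the surrounding answer is hallucinated; a well-formed one still needs to be looked up in the source catalog.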
Installation
```shell
pip install mlx-lm
```
Usage
```python
from mlx_lm import load, generate

model, tokenizer = load("ethanolivertroy/HackIDLE-NIST-Coder-MLX-4bit")

prompt = "Which NIST docs would you start with for contractor remote access?"
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, max_tokens=500)
print(response)
```
License
The base model is Qwen2.5-Coder-7B-Instruct, released under Apache 2.0. The NIST source publications used for the dataset are public domain U.S. government works. This model card uses Apache 2.0 for the model artifact and documents the NIST data source separately.
Model tree for ethanolivertroy/HackIDLE-NIST-Coder-MLX-4bit
Base model: Qwen/Qwen2.5-7B