# Encoderfile Models
Pre-built encoderfiles for popular Hugging Face embedding models — self-contained executables that run as embedding servers with no Python or ML dependencies required.
## Available Models
| Model | Details |
|---|---|
| sentence-transformers/all-MiniLM-L6-v2 | 384-dim, English sentence embeddings |
| deepset/deberta-v3-base-injection | DeBERTa-variant prompt injection classification model |
| JasperLS/deberta-v3-base-injection | DeBERTa-variant prompt injection classification model |
| JasperLS/gelectra-base-injection | gELECTRA-variant prompt injection classification model |
| protectai/deberta-v3-base-prompt-injection-v2 | DeBERTa-variant prompt injection classification model |
| protectai/deberta-v3-small-prompt-injection-v2 | DeBERTa-variant prompt injection classification model |
| protectai/deberta-v3-base-prompt-injection | DeBERTa-variant prompt injection classification model |
| protectai/distilroberta-base-rejection-v1 | DistilRoBERTa-variant prompt injection classification model |
More models coming soon.
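Note that the prompt-injection entries above are sequence classifiers rather than embedders: instead of a vector, they return scores over labels (e.g. "injection" vs. "legitimate" — the exact label names and their order vary per model and are assumptions here). Raw classifier logits are conventionally turned into probabilities with a softmax; a minimal sketch:

```python
import math

def softmax(logits):
    """Convert raw classifier logits into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits from a prompt-injection classifier.
# Assumed label order: index 0 = "legitimate", index 1 = "injection".
logits = [-1.2, 3.4]
probs = softmax(logits)  # higher probability on index 1 -> likely injection
```

Check each model's card for its actual label mapping before acting on the scores.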
## Usage
Each model directory contains platform-specific binaries. Download the one for your platform, make it executable, and run:
```sh
# Make the downloaded binary executable
chmod +x all-MiniLM-L6-v2.aarch64-apple-darwin.encoderfile

# Serve embeddings over HTTP
./all-MiniLM-L6-v2.aarch64-apple-darwin.encoderfile serve

# Or infer directly from the CLI
./all-MiniLM-L6-v2.aarch64-apple-darwin.encoderfile infer "this is a test"
```
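The embedding model returns fixed-length vectors (384 dimensions for all-MiniLM-L6-v2), which are typically compared with cosine similarity. A minimal sketch using short placeholder vectors (real ones would come from the server or `infer` output):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors; ranges from -1 to 1."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Placeholder 4-dim vectors for illustration; real embeddings are 384-dim.
v1 = [0.1, 0.3, -0.2, 0.5]
v2 = [0.1, 0.25, -0.1, 0.45]
score = cosine_similarity(v1, v2)  # near 1.0 for semantically similar texts
```

Scores close to 1.0 indicate similar texts; the exact response format of the HTTP server is documented in the Encoderfile docs.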
If you don't see the model you want or are using an exotic architecture, check out our guide on Building encoderfiles.
## About
These encoderfiles are built and published by mozilla-ai using the Encoderfile tool. To build your own, see the Encoderfile documentation.