How to use from Unsloth Studio
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for Maincode/Maincoder-1B-GGUF to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for Maincode/Maincoder-1B-GGUF to start chatting
Using Hugging Face Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for Maincode/Maincoder-1B-GGUF to start chatting
Maincoder-1B-GGUF

GGUF quantizations of Maincoder-1B, a code-focused language model optimized for code generation and completion tasks. These quantized versions are designed for efficient local deployment with llama.cpp.

Find more details in the original model card: https://huggingface.co/Maincode/Maincoder-1B

How to run Maincoder

Example usage with llama.cpp:

llama-cli -hf Maincode/Maincoder-1B-GGUF

Or with a specific quantization (llama.cpp accepts a `:QUANT` suffix on the `-hf` repo name):

llama-cli -hf Maincode/Maincoder-1B-GGUF:Q4_K_M

Code completion example:

llama-cli -hf Maincode/Maincoder-1B-GGUF -p 'def fibonacci(n: int) -> int:
    """Return the n-th Fibonacci number."""
' -n 256
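Beyond the interactive CLI, the same GGUF can be served over HTTP with llama.cpp's llama-server, which is handy for editor integrations. A minimal sketch; the port, context size, and prompt are arbitrary choices, not values from this model card:

```shell
# Download (cached by llama.cpp) and serve the model over HTTP
llama-server -hf Maincode/Maincoder-1B-GGUF -c 4096 --port 8080

# In another terminal, query llama-server's native completion endpoint
curl http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "def add(a, b):", "n_predict": 64}'
```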

Available Quantizations

Filename | Size | Description
Maincoder-1B-BF16.gguf | 1.9 GB | BFloat16 - full precision, best quality
Maincoder-1B-F16.gguf | 1.9 GB | Float16 - full precision
Maincoder-1B-Q8_0.gguf | 1.0 GB | 8-bit - highest-quality quantization
Maincoder-1B-Q6_K.gguf | 809 MB | 6-bit - high quality
Maincoder-1B-Q5_K_M.gguf | 722 MB | 5-bit - great quality/size balance
Maincoder-1B-Q4_K_M.gguf | 641 MB | 4-bit - recommended
Maincoder-1B-Q4_0.gguf | 614 MB | 4-bit - smallest, fastest
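The sizes above follow roughly from parameters × bits-per-weight ÷ 8. A small sketch of that arithmetic; the bits-per-weight figures are approximate averages I am assuming (K-quants mix block formats and store per-block scales), so real files can differ by 10-20%:

```python
# Rough GGUF file-size estimate: params * bits-per-weight / 8.
# Bits-per-weight values are approximate averages, not exact GGUF specs.
BITS_PER_WEIGHT = {
    "BF16": 16.0,
    "F16": 16.0,
    "Q8_0": 8.5,   # 8-bit values plus per-block scale overhead
    "Q6_K": 6.6,
    "Q5_K_M": 5.7,
    "Q4_K_M": 4.8,
    "Q4_0": 4.6,
}

def estimate_size_gb(params: float, quant: str) -> float:
    """Approximate GGUF file size in GB for a model with `params` weights."""
    return params * BITS_PER_WEIGHT[quant] / 8 / 1e9

# Assuming ~1e9 parameters for a "1B" model:
for quant in BITS_PER_WEIGHT:
    print(f"{quant:8s} ~{estimate_size_gb(1e9, quant):.2f} GB")
```

This is why the Q4 variants land near 600 MB while the 16-bit files are close to 2 GB.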

📄 License

This model is released under the Apache 2.0 License.
