Instructions for using LightningCreeper/MIA with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use LightningCreeper/MIA with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="LightningCreeper/MIA")

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("LightningCreeper/MIA", dtype="auto")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use LightningCreeper/MIA with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "LightningCreeper/MIA"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "LightningCreeper/MIA",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker

```shell
docker model run hf.co/LightningCreeper/MIA
```
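The curl call above can also be made from Python against the same OpenAI-compatible `/v1/completions` endpoint. This is a minimal sketch using only the standard library; the helper names (`build_completion_request`, `complete`) are illustrative, not part of vLLM's API, and it assumes a server is already running locally on port 8000.

```python
import json
import urllib.request


def build_completion_request(model: str, prompt: str,
                             max_tokens: int = 512,
                             temperature: float = 0.5) -> dict:
    """Build the JSON payload for an OpenAI-compatible /v1/completions call."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }


def complete(base_url: str, payload: dict) -> dict:
    """POST the payload to a running server and return the parsed response."""
    req = urllib.request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Usage against a local server started with `vllm serve`:
# result = complete("http://localhost:8000",
#                   build_completion_request("LightningCreeper/MIA",
#                                            "Once upon a time,"))
# print(result["choices"][0]["text"])
```

The same client works unchanged against the SGLang server below by swapping the base URL to `http://localhost:30000`, since both expose the OpenAI-compatible completions API.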
- SGLang
How to use LightningCreeper/MIA with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "LightningCreeper/MIA" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "LightningCreeper/MIA",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "LightningCreeper/MIA" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "LightningCreeper/MIA",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use LightningCreeper/MIA with Docker Model Runner:
```shell
docker model run hf.co/LightningCreeper/MIA
```
Memory Intelligence Agent (MIA)
Memory Intelligence Agent (MIA) is a memory framework designed for deep research agents (DRAs). It transforms agents from "passive record-keepers" into "active strategists" using a sophisticated Manager-Planner-Executor architecture.
- Paper: Memory Intelligence Agent
- Repository: https://github.com/ECNU-SII/MIA
Overview
MIA replaces traditional "memory dumps" with a specialized architecture to enable efficient reasoning and autonomous evolution:
- The Manager: A non-parametric memory system that stores and optimizes compressed historical search trajectories to eliminate bloat.
- The Planner: A parametric memory agent that produces search plans and evolves its strategy via Continual Test-Time Learning during inference.
- The Executor: A precision instrument that searches and analyzes information guided by the search plan.
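As a rough illustration of how the three components above could interact in a single research step, here is a minimal Python sketch. All class and method names (`Manager.store`, `Planner.plan`, `Executor.execute`, `research_step`) are invented for exposition and do not reflect MIA's actual implementation; see the GitHub repository for the real code.

```python
from dataclasses import dataclass, field


@dataclass
class Manager:
    """Non-parametric memory: stores compressed search trajectories."""
    trajectories: list = field(default_factory=list)

    def store(self, trajectory: str) -> None:
        # Real compression/optimization is elided; keep a truncated summary.
        self.trajectories.append(trajectory[:80])

    def retrieve(self) -> list:
        return self.trajectories


@dataclass
class Planner:
    """Parametric agent: turns a query plus memory into a search plan."""

    def plan(self, query: str, memory: list) -> str:
        # A real planner would condition a model on the query and memory.
        return f"search: {query} (context: {len(memory)} past trajectories)"


@dataclass
class Executor:
    """Carries out the plan; a real system would call search tools here."""

    def execute(self, plan: str) -> str:
        return f"results for [{plan}]"


def research_step(query: str, manager: Manager,
                  planner: Planner, executor: Executor) -> str:
    """One Manager -> Planner -> Executor cycle with memory write-back."""
    plan = planner.plan(query, manager.retrieve())
    result = executor.execute(plan)
    manager.store(f"{plan} -> {result}")
    return result
```

The point of the sketch is the control flow: the Planner reads the Manager's memory before planning, and the Manager ingests a compressed trace of each completed step, so later plans are conditioned on earlier trajectories.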
MIA employs an alternating reinforcement learning paradigm to enhance cooperation between components and establishes a bidirectional conversion loop between parametric and non-parametric memories.
Citation
```bibtex
@article{qiao2026mia,
  title={Memory Intelligence Agent},
  author={Jingyang Qiao and Weicheng Meng and Yu Cheng and Zhihang Lin and Zhizhong Zhang and Xin Tan and Jingyu Gong and Kun Shao and Yuan Xie},
  journal={arXiv preprint arXiv:2604.04503},
  year={2026}
}
```
Model tree for LightningCreeper/MIA
Base model
Qwen/Qwen2.5-VL-7B-Instruct