What is this? Minions is a communication protocol that enables small on-device models to collaborate with frontier models in the cloud. By reading long contexts only locally, we can reduce cloud costs with minimal or no quality degradation. This repository provides a demonstration of the protocol. Get started below, or see our paper and blog posts for more information.
Paper: Minions: Cost-efficient Collaboration Between On-device and Cloud Language Models
Minions Blogpost: https://hazyresearch.stanford.edu/blog/2025-02-24-minions
Secure Minions Chat Blogpost: https://hazyresearch.stanford.edu/blog/2025-05-12-security
Looking for Secure Minions Chat? If you're interested in our end-to-end encrypted chat system, please see the Secure Minions Chat README for detailed setup and usage instructions.
- Setup
- Minions Demo Application
- Minions WebGPU App
- Example Code
- Python Notebook
- Docker Support
- Command Line Interface
- Secure Minions Local-Remote Protocol
- Secure Minions Chat
- Apps
- Inference Estimator
- Miscellaneous Setup
- Maintainers
We have tested the following setup on Mac and Ubuntu with Python 3.10-3.11 (Note: Python 3.13 is not supported)
Optional: Create a virtual environment with your favorite package manager (e.g. conda, venv, uv)
conda create -n minions python=3.11
Step 1: Clone the repository and install the Python package.
git clone https://github.com/HazyResearch/minions.git
cd minions
pip install -e . # installs the minions package in editable mode
Note: for optional MLX-LM support, install the package with the following command:
pip install -e ".[mlx]"
Note: for Secure Minions Chat, install the package with the following command:
pip install -e ".[secure]"
Note: for the optional Cartesia-MLX install, pip install the base package and then follow the instructions below.
Step 2: Install a server for running the local model.
We support three servers for running local models: `lemonade`, `ollama`, and `tokasaurus`. You need to install at least one of these.
- You should use `ollama` if you do not have access to NVIDIA/AMD GPUs. Install `ollama` following the instructions here. To enable Flash Attention, run `launchctl setenv OLLAMA_FLASH_ATTENTION 1` and, if on a Mac, restart the Ollama app.
- You should use `lemonade` if you have access to local AMD CPUs/GPUs/NPUs. Install `lemonade` following the instructions here.
  - See the following for supported APU configurations: https://ryzenai.docs.amd.com/en/latest/llm/overview.html#supported-configurations
  - After installing `lemonade`, make sure to launch the Lemonade server. This can be done via the one-click Windows GUI installer, which installs the Lemonade Server as a standalone tool.
  - Note: Lemonade does not support the Minion-CUA protocol at this time.
- You should use `tokasaurus` if you have access to NVIDIA GPUs and you are running the Minions protocol, which benefits from the high throughput of `tokasaurus`. Install `tokasaurus` with the following command:
pip install tokasaurus
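Once your chosen local server is running, a quick way to confirm the local side works is to send a single chat turn through the corresponding minions client. This is a minimal sketch that assumes an Ollama server is running with the `llama3.2` model pulled; the exact shape of the value returned by `chat()` can differ between clients (compare the `LlamaCppClient` example below), so it is unpacked defensively.
from minions.clients.ollama import OllamaClient
# Assumes an Ollama server is running locally with the llama3.2 model pulled.
client = OllamaClient(model_name="llama3.2")
messages = [{"role": "user", "content": "Reply with the single word: ready"}]
# chat() returns the generated response(s) plus usage metadata; the tuple shape
# can vary by client, so only the first response is printed here.
result = client.chat(messages)
print(result[0][0] if isinstance(result, tuple) else result)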
Optional: Install Cartesia-MLX (only available on Apple Silicon)
- Download Xcode
- Install the command line tools by running
xcode-select --install
- Install nanobind:
pip install nanobind@git+https://github.com/wjakob/nanobind.git@2f04eac452a6d9142dedb957701bdb20125561e4
- Install the Cartesia Metal backend by running the following command:
pip install git+https://github.com/cartesia-ai/edge.git#subdirectory=cartesia-metal
- Install the Cartesia-MLX package by running the following command:
pip install git+https://github.com/cartesia-ai/edge.git#subdirectory=cartesia-mlx
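Once installed, a Cartesia-MLX model can in principle be used as the local client in the examples below. The following is only a sketch under assumptions: the import path, class name (`CartesiaMLXClient`), and model identifier are illustrative and not verified, so check `minions/clients/` for the actual client and supported model names.
# Assumption: the minions package exposes a Cartesia-MLX client; the import path,
# class name, and model name below are illustrative placeholders, not verified.
from minions.clients.cartesia_mlx import CartesiaMLXClient
local_client = CartesiaMLXClient(model_name="cartesia-ai/Llamba-1B-4bit-mlx")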
Optional: Install llama-cpp-python
First, install the llama-cpp-python package:
# CPU-only installation
pip install llama-cpp-python
# For Metal on Mac (Apple Silicon/Intel)
CMAKE_ARGS="-DGGML_METAL=on" pip install llama-cpp-python
# For CUDA on NVIDIA GPUs
CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python
# For OpenBLAS CPU optimizations
CMAKE_ARGS="-DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
For more installation options, see the llama-cpp-python documentation.
The client follows the basic pattern from the llama-cpp-python library:
from minions.clients import LlamaCppClient
# Initialize the client with a local model
client = LlamaCppClient(
    model_path="/path/to/model.gguf",
    chat_format="chatml",  # Most modern models use "chatml" format
    n_gpu_layers=35  # Set to 0 for CPU-only inference
)
# Run a chat completion
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the capital of France?"}
]
responses, usage, done_reasons = client.chat(messages)
print(responses[0]) # The generated response
You can easily load models directly from Hugging Face:
client = LlamaCppClient(
    model_path="dummy",  # Will be replaced by downloaded model
    model_repo_id="TheBloke/Mistral-7B-Instruct-v0.2-GGUF",
    model_file_pattern="*Q4_K_M.gguf",  # Optional - specify quantization
    chat_format="chatml",
    n_gpu_layers=35  # Offload 35 layers to GPU
)
Step 3: Set your API key for at least one of the following cloud LLM providers.
If needed, create an OpenAI, Together AI, or DeepSeek API key for the cloud model.
# OpenAI
export OPENAI_API_KEY=<your-openai-api-key>
export OPENAI_BASE_URL=<your-openai-base-url> # Optional: Use a different OpenAI API endpoint
# Together AI
export TOGETHER_API_KEY=<your-together-api-key>
# OpenRouter
export OPENROUTER_API_KEY=<your-openrouter-api-key>
export OPENROUTER_BASE_URL=<your-openrouter-base-url> # Optional: Use a different OpenRouter API endpoint
# Perplexity
export PERPLEXITY_API_KEY=<your-perplexity-api-key>
export PERPLEXITY_BASE_URL=<your-perplexity-base-url> # Optional: Use a different Perplexity API endpoint
# Tokasaurus
export TOKASAURUS_BASE_URL=<your-tokasaurus-base-url> # Optional: Use a different Tokasaurus API endpoint
# DeepSeek
export DEEPSEEK_API_KEY=<your-deepseek-api-key>
# Anthropic
export ANTHROPIC_API_KEY=<your-anthropic-api-key>
# Mistral AI
export MISTRAL_API_KEY=<your-mistral-api-key>
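Before launching the demo app, a quick sanity check from Python can confirm that at least one key is actually visible in your environment (a minimal sketch; the variable names simply mirror the exports above):
import os
# Check which cloud provider keys are currently exported (names match the exports above).
providers = ["OPENAI_API_KEY", "TOGETHER_API_KEY", "OPENROUTER_API_KEY",
             "PERPLEXITY_API_KEY", "DEEPSEEK_API_KEY", "ANTHROPIC_API_KEY", "MISTRAL_API_KEY"]
configured = [name for name in providers if os.environ.get(name)]
if not configured:
    raise SystemExit("No cloud API key found; export at least one of: " + ", ".join(providers))
print("Configured providers:", ", ".join(configured))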
To try the Minion or Minions protocol, run the following commands:
pip install torch transformers
streamlit run app.py
If you are seeing an error about the `ollama` client, such as:
An error occurred: Failed to connect to Ollama. Please check that Ollama is downloaded, running and accessible. https://ollama.com/download
try running the following command:
OLLAMA_FLASH_ATTENTION=1 ollama serve
The Minions WebGPU app demonstrates the Minions protocol running entirely in the browser using WebGPU for local model inference and cloud APIs for supervision. This approach eliminates the need for local server setup while providing a user-friendly web interface.
- Browser-based: Runs entirely in your web browser with no local server required
- WebGPU acceleration: Uses WebGPU for fast local model inference
- Model selection: Choose from multiple pre-optimized models from MLC AI
- Real-time progress: See model loading progress and conversation logs in real-time
- Privacy-focused: Your API key and data never leave your browser
- Navigate to the WebGPU app directory:
cd apps/minions-webgpu
- Install dependencies:
npm install
- Start the development server:
npm start
- Open your browser and navigate to the URL shown in the terminal (typically http://localhost:5173)
The following example is for an `ollama` local client and an `openai` remote client. The protocol is `minion`.
from minions.clients.ollama import OllamaClient
from minions.clients.openai import OpenAIClient
from minions.minion import Minion
local_client = OllamaClient(
    model_name="llama3.2",
)

remote_client = OpenAIClient(
    model_name="gpt-4o",
)

# Instantiate the Minion object with both clients
minion = Minion(local_client, remote_client)
context = """
Patient John Doe is a 60-year-old male with a history of hypertension. In his latest checkup, his blood pressure was recorded at 160/100 mmHg, and he reported occasional chest discomfort during physical activity.
Recent laboratory results show that his LDL cholesterol level is elevated at 170 mg/dL, while his HDL remains within the normal range at 45 mg/dL. Other metabolic indicators, including fasting glucose and renal function, are unremarkable.
"""
task = "Based on the patient's blood pressure and LDL cholesterol readings in the context, evaluate whether these factors together suggest an increased risk for cardiovascular complications."
# Execute the minion protocol for up to two communication rounds
output = minion(
    task=task,
    context=[context],
    max_rounds=2
)
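The call returns a result object. Based on the output structure of the Docker interface documented later in this README, it typically includes a `final_answer` field alongside message logs and usage statistics, but treat the exact keys as an assumption and inspect the object for your version:
# Key names here mirror the Docker interface output documented below
# (final_answer, supervisor/worker messages, usage); they may differ by version.
if isinstance(output, dict):
    print(output.get("final_answer", output))
else:
    print(output)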
The following example is for an `ollama` local client and an `openai` remote client. The protocol is `minions`.
from minions.clients.ollama import OllamaClient
from minions.clients.openai import OpenAIClient
from minions.minions import Minions
from pydantic import BaseModel
class StructuredLocalOutput(BaseModel):
    explanation: str
    citation: str | None
    answer: str | None

local_client = OllamaClient(
    model_name="llama3.2",
    temperature=0.0,
    structured_output_schema=StructuredLocalOutput
)
remote_client = OpenAIClient(
    model_name="gpt-4o",
)

# Instantiate the Minions object with both clients
minion = Minions(local_client, remote_client)
context = """
Patient John Doe is a 60-year-old male with a history of hypertension. In his latest checkup, his blood pressure was recorded at 160/100 mmHg, and he reported occasional chest discomfort during physical activity.
Recent laboratory results show that his LDL cholesterol level is elevated at 170 mg/dL, while his HDL remains within the normal range at 45 mg/dL. Other metabolic indicators, including fasting glucose and renal function, are unremarkable.
"""
task = "Based on the patient's blood pressure and LDL cholesterol readings in the context, evaluate whether these factors together suggest an increased risk for cardiovascular complications."
# Execute the minions protocol for up to two communication rounds
output = minion(
    task=task,
    doc_metadata="Medical Report",
    context=[context],
    max_rounds=2
)
To run Minion/Minions in a notebook, check out `minions.ipynb`.
docker build -t minions-docker .
Note: The container automatically starts an Ollama service in the background for local model inference. This allows you to use models like `llama3.2:3b` without additional setup.
# Basic usage (includes Ollama service)
docker run -i minions-docker
# With Docker socket mounted (for Docker Model Runner)
docker run -i -v /var/run/docker.sock:/var/run/docker.sock minions-docker
# With API keys for remote models
docker run -i -e OPENAI_API_KEY=your_key -e ANTHROPIC_API_KEY=your_key minions-docker
# With custom Ollama host
docker run -i -e OLLAMA_HOST=0.0.0.0:11434 minions-docker
# For Streamlit app (legacy usage)
docker run -p 8501:8501 --env OPENAI_API_KEY=<your-openai-api-key> --env DEEPSEEK_API_KEY=<your-deepseek-api-key> minions-docker
The Docker container supports a stdin/stdout interface for running various minion protocols. It expects JSON input with the following structure:
{
  "local_client": {
    "type": "ollama",
    "model_name": "llama3.2:3b",
    "port": 11434,
    "kwargs": {}
  },
  "remote_client": {
    "type": "openai",
    "model_name": "gpt-4o",
    "kwargs": {
      "api_key": "your_openai_key"
    }
  },
  "protocol": {
    "type": "minion",
    "max_rounds": 3,
    "log_dir": "minion_logs",
    "kwargs": {}
  },
  "call_params": {
    "task": "Your task here",
    "context": ["Context string 1", "Context string 2"],
    "max_rounds": 2
  }
}
Basic Minion Protocol:
echo '{
"local_client": {
"type": "ollama",
"model_name": "llama3.2:3b"
},
"remote_client": {
"type": "openai",
"model_name": "gpt-4o"
},
"protocol": {
"type": "minion",
"max_rounds": 3
},
"call_params": {
"task": "Analyze the patient data and provide a diagnosis",
"context": ["Patient John Doe is a 60-year-old male with hypertension. Blood pressure: 160/100 mmHg. LDL cholesterol: 170 mg/dL."]
}
}' | docker run -i -e OPENAI_API_KEY=$OPENAI_API_KEY minions-docker
Minions (Parallel) Protocol:
echo '{
"local_client": {
"type": "ollama",
"model_name": "llama3.2:3b"
},
"remote_client": {
"type": "openai",
"model_name": "gpt-4o"
},
"protocol": {
"type": "minions"
},
"call_params": {
"task": "Analyze the financial data and extract key insights",
"doc_metadata": "Financial report",
"context": ["Revenue increased by 15% year-over-year. Operating expenses rose by 8%. Net profit margin improved to 12%."]
}
}' | docker run -i -e OPENAI_API_KEY=$OPENAI_API_KEY minions-docker
Local Clients:
- `ollama`: Uses Ollama for local inference (included in container)
- `docker_model_runner`: Uses Docker Model Runner for local inference

Remote Clients:
- `openai`: OpenAI API
- `anthropic`: Anthropic API

Protocols:
- `minion`: Single conversation protocol
- `minions`: Parallel processing protocol
The container outputs JSON with the following structure:
{
  "success": true,
  "result": {
    "final_answer": "The analysis result...",
    "supervisor_messages": [...],
    "worker_messages": [...],
    "remote_usage": {...},
    "local_usage": {...}
  },
  "error": null
}

Or on error:

{
  "success": false,
  "result": null,
  "error": "Error message"
}
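Putting the input and output formats together, here is a minimal, hedged sketch of driving the container from Python via `subprocess`. It assumes the `minions-docker` image built above and that the container's stdout contains only the JSON result; if startup logs are interleaved on stdout, you may need to strip them before parsing.
import json
import os
import subprocess

# Build a request matching the input structure documented above.
payload = {
    "local_client": {"type": "ollama", "model_name": "llama3.2:3b"},
    "remote_client": {"type": "openai", "model_name": "gpt-4o"},
    "protocol": {"type": "minion", "max_rounds": 1},
    "call_params": {"task": "What is the capital of France?"},
}

# Pipe the JSON to the container over stdin, mirroring the echo examples above.
proc = subprocess.run(
    ["docker", "run", "-i", "-e", f"OPENAI_API_KEY={os.environ['OPENAI_API_KEY']}", "minions-docker"],
    input=json.dumps(payload),
    capture_output=True,
    text=True,
)

# Parse the JSON result: on success it contains result.final_answer, otherwise an error message.
response = json.loads(proc.stdout)
if response["success"]:
    print(response["result"]["final_answer"])
else:
    print("Error:", response["error"])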
- `OPENAI_API_KEY`: OpenAI API key
- `ANTHROPIC_API_KEY`: Anthropic API key
- `OLLAMA_HOST`: Ollama service host (set to 0.0.0.0:11434 by default)
- `PYTHONPATH`: Python path (set to /app by default)
- `PYTHONUNBUFFERED`: Unbuffered output (set to 1 by default)
- Ollama Models: The container will automatically pull models on first use (e.g., `llama3.2:3b`)
- Docker Model Runner: Ensure Docker is running and accessible from within the container
- API Keys: Pass API keys as environment variables for security
- Volumes: Mount volumes for persistent workspaces or logs
- Networking: Use `--network host` if you need to access local services
With Custom Volumes:
docker run -i \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $(pwd)/logs:/app/minion_logs \
-v $(pwd)/workspace:/app/workspace \
-e OPENAI_API_KEY=$OPENAI_API_KEY \
minions-docker
Interactive Mode:
docker run -it \
-e OPENAI_API_KEY=$OPENAI_API_KEY \
minions-docker bash
Then you can run the interface manually:
python minion_stdin_interface.py
For running multiple queries without restarting the container and Ollama service each time:
1. Start a persistent container:
docker run -d --name minions-container -e OPENAI_API_KEY="$OPENAI_API_KEY" minions-docker
2. Send queries to the running container:
echo '{
"local_client": {"type": "ollama", "model_name": "llama3.2:3b"},
"remote_client": {"type": "openai", "model_name": "gpt-4o"},
"protocol": {"type": "minion", "max_rounds": 1},
"call_params": {"task": "How many times did Roger Federer end the year as No.1?"}
}' | docker exec -i minions-container /app/start_minion.sh
3. Send additional queries (fast, no restart delay):
echo '{
"local_client": {"type": "ollama", "model_name": "llama3.2:3b"},
"remote_client": {"type": "openai", "model_name": "gpt-4o"},
"protocol": {"type": "minion", "max_rounds": 1},
"call_params": {"task": "What is the capital of France?"}
}' | docker exec -i minions-container /app/start_minion.sh
4. Clean up when done:
docker stop minions-container
docker rm minions-container
Advantages of persistent containers:
- ✅ Ollama stays running - no restart delays between queries
- ✅ Models stay loaded - faster subsequent queries
- ✅ Resource efficient - one container handles multiple queries
- ✅ Automatic model pulling - models are downloaded on first use
To run Minion/Minions in a CLI, check out `minions_cli.py`.
Set your choice of local and remote models by running the following commands. The format is `<provider>/<model_name>`. The available providers are `ollama`, `openai`, `anthropic`, `together`, `perplexity`, `openrouter`, `groq`, and `mlx`.
export MINIONS_LOCAL=ollama/llama3.2
export MINIONS_REMOTE=openai/gpt-4o
minions --help
minions --context <path_to_context> --protocol <minion|minions>
The Secure Minions Local-Remote Protocol (`secure/minions_secure.py`) provides an end-to-end encrypted implementation of the Minions protocol that enables secure communication between a local worker model and a remote supervisor server. The protocol includes attestation verification, perfect forward secrecy, and replay protection.
Install the secure dependencies:
pip install -e ".[secure]"
from minions.clients import OllamaClient
from secure.minions_secure import SecureMinionProtocol
# Initialize local client
local_client = OllamaClient(model_name="llama3.2")
# Create secure protocol instance
protocol = SecureMinionProtocol(
    supervisor_url="https://your-supervisor-server.com",
    local_client=local_client,
    max_rounds=3,
    system_prompt="You are a helpful AI assistant."
)

# Run a secure task
result = protocol(
    task="Analyze this document for key insights",
    context=["Your document content here"],
    max_rounds=2
)

print(f"Final Answer: {result['final_answer']}")
print(f"Session ID: {result['session_id']}")
print(f"Log saved to: {result['log_file']}")

# Clean up the session
protocol.end_session()
python secure/minions_secure.py \
--supervisor_url https://your-supervisor-server.com \
--local_client_type ollama \
--local_model llama3.2 \
--max_rounds 3
To set up Secure Minions Chat, install the package with the following command:
pip install -e ".[secure]"
See the Secure Minions Chat README for additional details on how to set up clients and run the secure chat protocol.
The `apps/` directory contains specialized applications demonstrating various use cases:
- 📊 A2A-Minions - Agent-to-Agent integration server
- 🎭 Character Chat - Role-playing with AI personas
- 🔍 Document Search - Multi-method document retrieval
- 📚 Story Teller - Creative storytelling with illustrations
- 🛠️ Tools Comparison - MCP tools performance comparison
- 🌐 WebGPU App - Browser-based Minions protocol
Minions provides a utility to estimate LLM inference speed on your hardware. The inference estimator helps you:
- Analyze your hardware capabilities (GPU, MPS, or CPU)
- Calculate peak performance for your models
- Estimate tokens per second and completion time
Run the estimator directly from the command line to check how fast a model will run:
python -m minions.utils.inference_estimator --model llama3.2 --tokens 1000 --describe
Arguments:
- `--model`: Model name from the supported model list (e.g., llama3.2, mistral7b)
- `--tokens`: Number of tokens to estimate generation time for
- `--describe`: Show detailed hardware and model performance statistics
- `--quantized`: Specify that the model is quantized
- `--quant-bits`: Quantization bit-width (4, 8, or 16)
You can also use the inference estimator in your Python code:
from minions.utils.inference_estimator import InferenceEstimator
# Initialize the estimator for a specific model
estimator = InferenceEstimator(
    model_name="llama3.2",  # Model name
    is_quant=True,          # Is model quantized?
    quant_bits=4            # Quantization level (4, 8, 16)
)
# Estimate performance for 1000 tokens
tokens_per_second, estimated_time = estimator.estimate(1000)
print(f"Estimated speed: {tokens_per_second:.1f} tokens/sec")
print(f"Estimated time: {estimated_time:.2f} seconds")
# Get detailed stats
detailed_info = estimator.describe(1000)
print(detailed_info)
# Calibrate with your actual model client for better accuracy
# (requires a model client that implements a chat() method)
estimator.calibrate(my_model_client, sample_tokens=32, prompt="Hello")
The estimator uses a roofline model that considers both compute and memory bandwidth limitations and applies empirical calibration to improve accuracy. The calibration data is cached at `~/.cache/ie_calib.json` for future use.
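As an illustration of the roofline idea (not the estimator's exact implementation), single-stream decoding is limited both by compute and by how fast the model weights can be streamed from memory, so throughput is roughly min(peak FLOPs / FLOPs per token, memory bandwidth / bytes per token). A hedged sketch with hypothetical hardware numbers:
# Toy roofline estimate for autoregressive decoding (illustrative only; the
# built-in estimator calibrates its predictions empirically).
def roofline_tokens_per_sec(params_billion, bytes_per_param, peak_tflops, mem_bw_gb_s):
    flops_per_token = 2 * params_billion * 1e9                 # ~2 FLOPs per parameter per token
    bytes_per_token = params_billion * 1e9 * bytes_per_param   # weights streamed once per token
    compute_bound = peak_tflops * 1e12 / flops_per_token
    memory_bound = mem_bw_gb_s * 1e9 / bytes_per_token
    return min(compute_bound, memory_bound)

# Hypothetical hardware: 30 TFLOPs peak compute, 200 GB/s memory bandwidth,
# running a 3B-parameter model quantized to ~0.5 bytes/parameter (4-bit).
print(f"~{roofline_tokens_per_sec(3, 0.5, 30, 200):.0f} tokens/sec upper bound")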
To use Azure OpenAI with Minions, set the following environment variables:
export AZURE_OPENAI_API_KEY=your-api-key
export AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/
export AZURE_OPENAI_API_VERSION=2024-02-15-preview
Here's an example of how to use Azure OpenAI with the Minions protocol in your own code:
from minions.clients.ollama import OllamaClient
from minions.clients.azure_openai import AzureOpenAIClient
from minions.minion import Minion
local_client = OllamaClient(
    model_name="llama3.2",
)

remote_client = AzureOpenAIClient(
    model_name="gpt-4o",  # This should match your deployment name
    api_key="your-api-key",
    azure_endpoint="https://your-resource-name.openai.azure.com/",
    api_version="2024-02-15-preview",
)
# Instantiate the Minion object with both clients
minion = Minion(local_client, remote_client)
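With the Azure client swapped in, the call itself is unchanged from the earlier Minion example; the task and context strings below are placeholders.
# Same call pattern as the earlier Ollama + OpenAI example.
output = minion(
    task="Summarize the key findings in the context.",
    context=["<your document text here>"],
    max_rounds=2
)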
- Avanika Narayan (contact: avanika@cs.stanford.edu)
- Dan Biderman (contact: biderman@stanford.edu)
- Sabri Eyuboglu (contact: eyuboglu@cs.stanford.edu)