When to use Ollama
- Your network has no outbound internet access
- Security policy prohibits sending code to external LLM APIs
- You want to evaluate MergeWatch without any API keys
Configuration
Set these environment variables in your .env file or container environment:
| Variable | Required | Value |
|---|---|---|
| LLM_PROVIDER | Yes | ollama |
| LLM_MODEL | Yes | Ollama model tag (e.g., qwen2.5-coder:7b, llama3). There is no built-in default; you must pick a model that you have pulled into the Ollama server. |
| OLLAMA_BASE_URL | No | Ollama API endpoint (default: http://localhost:11434) |
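For example, a minimal .env file might look like this (the model tag is illustrative; use whichever model you have pulled):

```shell
# .env — Ollama provider configuration for MergeWatch
LLM_PROVIDER=ollama
LLM_MODEL=qwen2.5-coder:7b              # any tag pulled into your Ollama server
OLLAMA_BASE_URL=http://localhost:11434  # optional; this is the default
```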
Picking a model
You must set LLM_MODEL to a model tag that you've pulled into your Ollama server; there is no built-in default. Suggested starting points:
| Model | Best for | Approx VRAM |
|---|---|---|
| qwen2.5-coder:7b | Code-focused reviews on an 8 GB GPU | ~6–8 GB |
| qwen2.5-coder:14b | Better review quality if you have 16 GB VRAM | ~12–16 GB |
| llama3 | General-purpose baseline for quick smoke tests | ~4–8 GB |
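Once you've chosen a model, pull it and confirm it is available. For example, for the code-focused model in the table above (assuming the Ollama CLI is installed on the host):

```shell
# Download the model into the local Ollama server
ollama pull qwen2.5-coder:7b

# Verify the tag appears in the list of installed models
ollama list
```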
Setup
Pull the model before starting MergeWatch; MergeWatch will not pull it for you.

Docker Compose setup

Run Ollama as a sidecar alongside MergeWatch in your docker-compose.yml:
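A minimal sketch of such a compose file; the service names, the MergeWatch image name, and the model tag are illustrative assumptions, so adjust them to match your deployment:

```yaml
# docker-compose.yml — Ollama as a sidecar next to MergeWatch (sketch)
services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ollama-data:/root/.ollama   # persist pulled models across restarts

  mergewatch:
    image: mergewatch/mergewatch:latest   # hypothetical image name
    environment:
      LLM_PROVIDER: ollama
      LLM_MODEL: qwen2.5-coder:7b
      OLLAMA_BASE_URL: http://ollama:11434   # reach the sidecar by service name
    depends_on:
      - ollama

volumes:
  ollama-data:
```

Pointing OLLAMA_BASE_URL at the service name (http://ollama:11434) rather than localhost is what lets the MergeWatch container reach the sidecar over the compose network.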
After starting the stack for the first time, exec into the Ollama container to pull the model:
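For example, assuming the Ollama service is named ollama in your compose file and you chose the 7B coder model:

```shell
# Pull the model inside the running Ollama container
docker compose exec ollama ollama pull qwen2.5-coder:7b
```

The model is stored in the container's volume, so this only needs to be done once per volume, not on every restart.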
Hardware requirements
| Component | Minimum | Recommended |
|---|---|---|
| GPU VRAM | 8 GB | 16 GB |
| System RAM | 16 GB | 32 GB |
| Disk | 10 GB free | 20 GB free |
Next steps
- Air-gapped deployment: full guide for deploying MergeWatch without internet access.
- Anthropic (direct): the recommended provider for the best review quality.
- Configure review behavior: tune sensitivity, ignored paths, and review focus areas.
- Environment variables: full list of supported environment variables.