## When to use Ollama
- Your network has no outbound internet access
- Security policy prohibits sending code to external LLM APIs
- You want to evaluate MergeWatch without any API keys
## Configuration

Set these environment variables in your `.env` file or container environment:
| Variable | Required | Value |
|---|---|---|
| `LLM_PROVIDER` | Yes | `ollama` |
| `OLLAMA_BASE_URL` | No | Ollama API endpoint (default: `http://localhost:11434`) |
| `LLM_MODEL` | No | Override the default model |
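For example, a `.env` that selects Ollama and spells out the defaults for the two optional variables (values taken from the table above):

```ini
LLM_PROVIDER=ollama
OLLAMA_BASE_URL=http://localhost:11434
LLM_MODEL=qwen2.5-coder:7b
```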
## Default model

MergeWatch uses `qwen2.5-coder:7b` by default. This model runs on GPUs with 8 GB of VRAM and provides reasonable code understanding for its size.
## Setup

Pull the model before starting MergeWatch, e.g. with `ollama pull qwen2.5-coder:7b`.

## Docker Compose setup
Run Ollama as a sidecar alongside MergeWatch by adding a service to `docker-compose.yml`.
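A minimal sketch of such a Compose file. Only the two `environment` variable names come from the configuration table above; the service names, image tags, and volume layout are assumptions for illustration:

```yaml
services:
  mergewatch:
    image: mergewatch:latest          # placeholder; use your actual MergeWatch image
    environment:
      LLM_PROVIDER: ollama
      OLLAMA_BASE_URL: http://ollama:11434   # reach the sidecar by its service name
    depends_on:
      - ollama

  ollama:
    image: ollama/ollama              # official Ollama image
    ports:
      - "11434:11434"                 # optional: expose the API on the host
    volumes:
      - ollama-models:/root/.ollama   # persist pulled models across restarts

volumes:
  ollama-models:
```

Pointing `OLLAMA_BASE_URL` at the service name (`http://ollama:11434`) rather than `localhost` lets MergeWatch reach the sidecar over the Compose network.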
After starting the stack for the first time, exec into the Ollama container to pull the model:
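Assuming the sidecar service is named `ollama` (an assumption, not stated in the original), the pull could look like:

```shell
# Pull the default model inside the running Ollama container.
# The service name "ollama" must match your compose file.
docker compose exec ollama ollama pull qwen2.5-coder:7b
```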
## Hardware requirements
| Component | Minimum | Recommended |
|---|---|---|
| GPU VRAM | 8 GB | 16 GB |
| System RAM | 16 GB | 32 GB |
| Disk | 10 GB free | 20 GB free |
