Ollama support is experimental. Review quality is significantly lower than with Claude or GPT-4o. Use Ollama only if your environment cannot make external API calls.
Ollama lets you run open-source models locally. This provider is designed for air-gapped environments where no external API access is possible.

When to use Ollama

  • Your network has no outbound internet access
  • Security policy prohibits sending code to external LLM APIs
  • You want to evaluate MergeWatch without any API keys

Configuration

Set these environment variables in your .env file or container environment:
| Variable | Required | Value |
| --- | --- | --- |
| LLM_PROVIDER | Yes | ollama |
| OLLAMA_BASE_URL | No | Ollama API endpoint (default: http://localhost:11434) |
| LLM_MODEL | No | Override the default model |
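Putting the variables above together, a minimal .env might look like the following sketch (the sidecar hostname `ollama` is an assumption that matches the Docker Compose setup below; use the default endpoint if Ollama runs on the same host):

```shell
# .env — Ollama provider configuration for MergeWatch
LLM_PROVIDER=ollama

# Only needed when Ollama is not at http://localhost:11434,
# e.g. when it runs as a Compose sidecar named "ollama".
OLLAMA_BASE_URL=http://ollama:11434
```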

Default model

MergeWatch uses qwen2.5-coder:7b by default. This model runs on GPUs with 8 GB VRAM and provides reasonable code understanding for its size.

Setup

Pull the model before starting MergeWatch:
ollama pull qwen2.5-coder:7b
Verify the model is available:
ollama list
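If `ollama list` shows the model but MergeWatch still cannot reach it, you can query the Ollama HTTP API directly. This sketch assumes the default endpoint; substitute your OLLAMA_BASE_URL if you changed it:

```shell
# Check that the Ollama API is reachable and the model is registered.
# The response is JSON; qwen2.5-coder:7b should appear in the "models" array.
curl -s http://localhost:11434/api/tags
```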

Docker Compose setup

Run Ollama as a sidecar alongside MergeWatch:
docker-compose.yml
services:
  mergewatch:
    image: ghcr.io/santthosh/mergewatch:latest
    ports:
      - "3000:3000"
    env_file: .env
    environment:
      LLM_PROVIDER: ollama
      OLLAMA_BASE_URL: http://ollama:11434
    depends_on:
      - ollama

  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ollama-data:/root/.ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

volumes:
  ollama-data:
After starting the stack for the first time, exec into the Ollama container to pull the model:
docker compose exec ollama ollama pull qwen2.5-coder:7b
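To confirm the pull succeeded, you can list the models from the host; this assumes the service name `ollama` from the compose file above:

```shell
# List models registered inside the sidecar container.
docker compose exec ollama ollama list
```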

Hardware requirements

| Component | Minimum | Recommended |
| --- | --- | --- |
| GPU VRAM | 8 GB | 16 GB |
| System RAM | 16 GB | 32 GB |
| Disk | 10 GB free | 20 GB free |
Larger models like qwen2.5-coder:14b produce better reviews but require 16 GB VRAM. If you have the hardware, set LLM_MODEL=qwen2.5-coder:14b for improved quality.
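For example, switching to the larger model is a two-step change (pull first, then override the model in your environment):

```shell
# Pull the larger model before pointing MergeWatch at it.
ollama pull qwen2.5-coder:14b

# Then set in .env (or the container environment):
# LLM_MODEL=qwen2.5-coder:14b
```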

Next steps