MergeWatch can run entirely self-hosted with Docker Compose — your code never leaves your infrastructure. There is no per-seat pricing. The project is AGPL v3 open source, and reviews run through a multi-agent parallel pipeline rather than a single-pass model. You can choose your own LLM provider (Anthropic, Bedrock, LiteLLM, or Ollama).
  • Self-hosted: MergeWatch has zero access to your code. Everything runs on your own machine or server.
  • SaaS: MergeWatch processes the diff in-memory to call the LLM. Nothing is persisted beyond the review metadata stored in DynamoDB.
MergeWatch offers two ways to run:
  • Self-hosted: Clone the repo, configure a .env file, and run docker compose up. You control the entire stack — the server, database, dashboard, and LLM provider.
  • SaaS: Use the hosted version at mergewatch.ai. No infrastructure to manage. The first 5 PR reviews are free (lifetime), then pay-as-you-go with prepaid credits. A typical review costs $0.01–$0.10.
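The self-hosted path above can be sketched as follows. The repository URL and the ANTHROPIC_API_KEY variable name are placeholders, so check the project README for the real values:

```shell
# Clone the repository (URL is a placeholder -- use the project's actual repo)
git clone https://github.com/example/mergewatch.git
cd mergewatch

# Create a minimal .env; the API key variable name is an assumption
cat > .env <<'EOF'
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-...
EOF

# Start the full stack: server, database, dashboard
docker compose up -d
```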
No. Self-hosted MergeWatch runs entirely with Docker and does not require any AWS services. The only scenario where you need an AWS account is if you set LLM_PROVIDER=bedrock to use Amazon Bedrock as your LLM backend. All other providers (Anthropic, LiteLLM, Ollama) work without AWS.
Anthropic is the recommended default. It offers the best balance of review quality and ease of setup — you only need an API key. Here is a quick comparison:
  • anthropic: Recommended default. High-quality reviews, simple setup.
  • bedrock: Teams already using AWS who want to keep traffic within their AWS account.
  • litellm: Organizations running a LiteLLM proxy for centralized model access and cost tracking.
  • ollama: Air-gapped or fully offline environments.
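For illustration, switching providers is a one-line change in .env. LLM_PROVIDER and its four values come from this page; the credential variable name shown is an assumption, so consult the configuration docs:

```shell
# Pick exactly one provider: anthropic, bedrock, litellm, or ollama
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-...   # variable name is assumed

# LLM_PROVIDER=bedrock         # uses your AWS account/credentials
# LLM_PROVIDER=litellm         # routes through your LiteLLM proxy
# LLM_PROVIDER=ollama          # talks to a local Ollama instance
```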
Yes. Set LLM_PROVIDER=ollama and run an Ollama instance as a sidecar container (or on a separate host on your network). The MergeWatch server, Postgres database, and Ollama can all run on the same machine with no outbound internet required. The only external connection needed is the GitHub webhook — which requires inbound HTTPS from GitHub’s IP ranges.
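A minimal docker-compose sketch of that sidecar layout, assuming hypothetical service and image names and Ollama's standard port (11434); MergeWatch's actual compose file may differ:

```yaml
services:
  mergewatch:
    image: mergewatch/server          # placeholder image name
    env_file: .env                    # contains LLM_PROVIDER=ollama
    environment:
      OLLAMA_BASE_URL: http://ollama:11434   # assumed variable name
    depends_on: [ollama, db]
  ollama:
    image: ollama/ollama              # official image; offline once models are pulled
    volumes:
      - ollama-data:/root/.ollama
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
volumes:
  ollama-data:
```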
The first 5 PR reviews per installation are free with no credit card required. After that, MergeWatch uses a prepaid credits model — add a card, top up a balance, and each review deducts its actual cost. The formula is llmCost + $0.005 infra fee + 40% margin, which typically works out to $0.01–$0.10 per review. See Billing & Pricing for details.
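As a worked example of that formula (assuming the 40% margin applies to the LLM cost plus the $0.005 infra fee; see Billing & Pricing for the authoritative order of operations):

```python
def review_price(llm_cost: float, infra_fee: float = 0.005, margin: float = 0.40) -> float:
    """Estimate the per-review charge as (llmCost + infra fee) * (1 + margin).

    Whether the margin is applied before or after the infra fee is an
    assumption here -- the billing docs are authoritative.
    """
    return round((llm_cost + infra_fee) * (1 + margin), 4)

# A $0.03 LLM call works out to about $0.049, inside the typical $0.01-$0.10 range
print(review_price(0.03))
```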
Not unless you use LLM_PROVIDER=ollama. Anthropic and Bedrock are managed API services — the provider hosts the models. With Ollama, you run the model locally, which does require sufficient hardware (CPU or GPU depending on the model size).
The review fails gracefully. A “Review failed” comment is posted on the PR, and the review is marked as failed in the dashboard. You can retrigger it with @mergewatch review.
Yes. With LLM_PROVIDER=bedrock, you can use any Bedrock-supported model. With LLM_PROVIDER=litellm, you can use any model your LiteLLM proxy supports (OpenAI, Gemini, Mistral, etc.). With LLM_PROVIDER=ollama, you can use any model Ollama supports. Set the LLM_MODEL environment variable to override the default.
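For example, the LLM_MODEL override mentioned above is a single .env entry; the model identifiers shown are illustrative, so use any ID your chosen provider supports:

```shell
# Override the default model for the selected provider
LLM_PROVIDER=bedrock
LLM_MODEL=anthropic.claude-3-5-sonnet-20240620-v1:0   # example Bedrock model ID

# LLM_PROVIDER=ollama
# LLM_MODEL=llama3.1:8b                               # example Ollama model tag
```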
  • Self-hosted: No billing from MergeWatch. You pay only your LLM provider costs.
  • SaaS: Each PR that triggers a review counts. Skipped PRs (drafts, excluded paths) do not count.
MergeWatch posts a GitHub Check Run with action_required: credits required on the next PR and opens a GitHub Issue titled “MergeWatch: reviews paused — credits required”. Add a payment method and top up via Dashboard → Billing; the next PR event will be reviewed and the blocking issue auto-closed.
Yes. The GitHub App requests contents: read permission to access diffs.
GitHub Enterprise Cloud: yes. GitHub Enterprise Server: not yet (planned).
Install the GitHub App on each org separately. Each installation appears in the dashboard.
AGPL ensures improvements to the core stay open source. Companies can use it freely for internal purposes. A commercial license is available for those who need it.