Configuration
Set these environment variables in your `.env` file or container environment:
| Variable | Required | Value |
|---|---|---|
| `LLM_PROVIDER` | Yes | `anthropic` |
| `ANTHROPIC_API_KEY` | Yes | Your Anthropic API key |
| `LLM_MODEL` | Yes | Anthropic model ID to use (e.g., `claude-sonnet-4-20250514`) |
When `LLM_PROVIDER=anthropic`, you must set `LLM_MODEL` to an Anthropic-native model ID. The built-in defaults (`us.anthropic.claude-sonnet-4-20250514-v1:0` for the primary model and `us.anthropic.claude-haiku-4-5-20251001-v1:0` for the light model) are Bedrock inference profile IDs and are not valid on the Anthropic API.

Model roles
MergeWatch uses two model slots per review:

- Primary model (`model` in `.mergewatch.yml`) — runs the agent pipeline (security, bugs, style, error handling, test coverage, comment accuracy). Use Claude Sonnet 4 for the best review quality.
- Light model (`lightModel` in `.mergewatch.yml`) — runs the summary and diagram passes. Claude Haiku 4.5 is a good cost-optimized choice here.
`LLM_MODEL` overrides both slots with the same model. For split primary/light models, configure them in `.mergewatch.yml` instead — see the mergewatch.yml reference.
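For the split setup, a `.mergewatch.yml` fragment might look like the sketch below. The `model` and `lightModel` keys come from the model-slot descriptions above; the exact file layout and any other keys are covered in the mergewatch.yml reference.

```yaml
# Primary slot: runs the agent pipeline (security, bugs, style, ...)
model: claude-sonnet-4-20250514
# Light slot: runs the summary and diagram passes
lightModel: claude-haiku-4-5-20251001
```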
Example .env file
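A minimal `.env` for the Anthropic provider might look like this, using the variables from the table above (the API key value is a placeholder — substitute your own):

```shell
LLM_PROVIDER=anthropic
# Placeholder key — replace with your real Anthropic API key
ANTHROPIC_API_KEY=sk-ant-...
LLM_MODEL=claude-sonnet-4-20250514
```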
Model suggestions
| Anthropic model ID | Best for |
|---|---|
| `claude-sonnet-4-20250514` | Recommended. Highest review quality for deep agent analysis. |
| `claude-haiku-4-5-20251001` | Cost-optimized. Faster and cheaper; good for summary/diagram passes. |
Pricing
You pay Anthropic directly for API usage. MergeWatch does not add any markup or proxy fees. See the Anthropic pricing page for current per-token rates.

Next steps
Configure review behavior
Tune sensitivity, ignored paths, and review focus areas.
LiteLLM Proxy
Use OpenAI, Azure, Gemini, or 100+ other providers via LiteLLM.
Environment variables
Full list of supported environment variables.
Platform guides
Deploy MergeWatch on your platform of choice.