Verified April 2026. Pricing, model availability, and features in this space change on a near-weekly basis — verify current details against each vendor’s own pricing and docs pages before making a purchase decision.

How to read this comparison

  • Primary sources only. Every factual claim on these pages was sourced from the vendor’s own docs, pricing, changelog, or product pages. Third-party summaries, Reddit threads, and AI tool directories were not used. Source URLs are cited inline.
  • “Not publicly documented” is a real answer. Some vendors don’t disclose their agent architecture, the exact model powering their product, or their data-retention policy. Where that’s the case, these pages say so rather than guessing.
  • MergeWatch is the publisher. These pages live in the MergeWatch docs. Factual bias was minimized by sticking to competitors’ own marketing claims and documented behavior; cells where MergeWatch has a differentiator are marked by what MergeWatch does, not by negatives inferred about competitors.
  • Pricing is in USD and per-seat/month unless noted. Promotional discounts and annual/monthly splits are collapsed for readability; see each vendor’s pricing page for current exact structure.

Tools covered

  • MergeWatch: Open-source, multi-agent, self-hostable
  • CodeRabbit: Commercial PR-review platform
  • Greptile: Graph-based codebase indexing
  • GitHub Copilot code review: Native PR review inside GitHub
  • Claude Code: Anthropic’s CLI + GitHub Action
  • OpenAI Codex: OpenAI’s agentic coding product
  • Cursor BugBot: Bug-focused PR reviewer
  • Qodo Merge: AGPL core + commercial product

Also see other tools worth knowing: Ellipsis, Sourcery, Graphite Diamond.

Quick matrix

| | MergeWatch | CodeRabbit | Greptile | GitHub Copilot review | OpenAI Codex | Claude Code | Cursor BugBot | Qodo Merge |
|---|---|---|---|---|---|---|---|---|
| License | AGPL-3.0 | Closed | Closed | Closed | Mixed (CLI open, hosted closed) | Closed (action open) | Closed | AGPL-3.0 core + commercial |
| Self-host | Yes (Docker + Postgres) | Enterprise only (500+ seats) | Yes (Enterprise, in AWS) | No | No (CLI runs locally but calls OpenAI API) | GitHub Actions runner | No | Yes (SaaS, on-prem, air-gapped) |
| BYO LLM | Bedrock, Anthropic, LiteLLM (100+), Ollama | Self-host: OpenAI, Azure, Bedrock, Anthropic | Enterprise self-host only | No (uses Copilot’s selection) | No (OpenAI only) | Anthropic only (direct, Bedrock, Vertex) | No (Cursor-managed) | Open-source core: OpenAI, Claude, DeepSeek, more |
| Primary trigger | PR webhook + @mergewatch comments + Checks UI re-run | PR webhook + IDE + CLI + chat | PR comments + IDE + MCP + /greploop | Manual or auto PR, in web + many IDEs | CLI + IDE + web + desktop | CLI + @claude mentions + GitHub Action | Auto on PR update + manual comment | PR + IDE + CLI |
| Free tier for private repos | Yes (see pricing) | 14-day trial | No (OSS discount available) | Inside paid Copilot only | Inside paid ChatGPT only | Inside paid Claude only | 14-day trial | Yes (Developer $0, limited credits) |
| Multi-agent architecture | Yes (6 review + 2 utility, parallel, capped at 3 concurrent) | “Agentic reviews” + 40+ linters; agent count not disclosed | “Swarm of agents” + TREX for tests | Not publicly documented | Agentic, multi-tool, skills + automations | Single agent loop with tool use | Not publicly documented as multi-agent | “Specialized agents” (count not disclosed) |
| MCP server (outbound) | Yes (core + Lambda transport) | Not publicly documented | MCP connection documented | No | No | No | No | Not publicly documented |
| Merge-readiness score | Yes (1–5) | Not publicly documented | Not publicly documented | No | No | No | No | Not publicly documented |
| Conventions injection | Yes (AGENTS.md / CONVENTIONS.md auto-discovered) | YAML customization | Plain-English custom rules; learns from past PR comments | Not publicly documented | Skills | CLAUDE.md project standards | Bugbot Rules | PR-Agent rules |
| Data retention claim | No persistent code storage (diff in-memory only); 90-day review metadata TTL | “Zero retention post-review”; SOC 2 Type II | DPA on Enterprise | Business/Enterprise: 28-day prompt retention | Governed by OpenAI policy | Direct to API, no backend index | “Privacy-mode compliant” | Zero data retention, SOC 2 |
| Starting paid tier (per seat / month) | Usage-based via Stripe balance | $24 Pro, $48 Pro Plus | $30 + $1/review over 50 | Bundled in Copilot | Bundled in ChatGPT Plus/Pro | Bundled in Claude Pro | $40 | $30 Teams |
Cells above are compressed for skimmability. For side-by-side boolean features, see the feature matrix. For full sourced detail on any tool, click its name above.
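The self-host row is easiest to picture as a deployment file. The sketch below shows a hypothetical minimal layout for a Docker + Postgres self-host of the kind the MergeWatch row describes; the image name, ports, and environment variables are illustrative assumptions, not documented vendor configuration — consult the tool’s own self-hosting docs for the real values.

```yaml
# Hypothetical self-host layout (Docker + Postgres). All names below are
# illustrative assumptions, not the vendor's documented configuration.
services:
  reviewer:
    image: example/mergewatch:latest      # placeholder image name
    environment:
      DATABASE_URL: postgres://review:review@db:5432/review
      LLM_PROVIDER: ollama                # BYO-LLM, per the matrix row
      GIT_WEBHOOK_SECRET: change-me       # shared secret for PR webhooks
    ports:
      - "8080:8080"                       # endpoint that receives PR webhooks
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: review
      POSTGRES_PASSWORD: review
      POSTGRES_DB: review
    volumes:
      - pgdata:/var/lib/postgresql/data   # review metadata, not code
volumes:
  pgdata:
```

The point the sketch makes is the operational one from the matrix: a webhook-triggered reviewer plus a Postgres instance is the whole footprint, with the LLM endpoint swappable via configuration.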
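The conventions-injection row is more concrete with an example of what such a file holds. A minimal illustrative AGENTS.md (the filename comes from the matrix row; the rules below are invented for illustration, not taken from any vendor’s docs):

```markdown
# AGENTS.md: project conventions auto-discovered by the reviewer

- All public functions require docstrings.
- Use `pytest` fixtures; flag `unittest.TestCase` classes.
- Database access goes through `app/db/`; flag raw SQL anywhere else.
- Error messages must not leak internal hostnames or file paths.
```

Tools differ mainly in where these rules live (a repo file, YAML config, plain-English rules in a dashboard) rather than in what they express.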

Known gaps in this comparison

A handful of facts could not be fully verified in primary sources; check the vendor pages directly if any of these matter to your decision:
  • CodeRabbit — specific model used on the default SaaS tier.
  • Greptile — public data retention and training posture (DPA exists on Enterprise).
  • GitHub Copilot code review — the specific model powering the review feature.
  • OpenAI Codex — Codex-specific retention numbers beyond OpenAI’s general policy.
  • Cursor BugBot — the exact models used for detection (vs. autofix).
  • Qodo Merge commercial tier — LLM flexibility beyond the open-source PR-Agent core.

Dimensions deliberately left out

  • Review quality / false-positive rate. Every vendor has a marketing number (CodeRabbit’s “75M defects,” BugBot’s “50% of issues get fixed”). None are independently verifiable and all use different counting methods. Benchmarking is outside the scope of a feature-and-pricing comparison.
  • Supported languages / frameworks. Too stack-specific to matrix; check the individual vendor docs for your language.
  • SSO / RBAC granularity. All vendors at the Enterprise tier offer SSO; the differences between SAML providers and RBAC granularity need a vendor-specific RFP, not a table row.

Contributing

If a fact on any of these pages is wrong or out of date, please open a PR with the correction and the vendor URL that backs it up. Unsourced changes will be asked to provide a source before merging. The aim is for these pages to stay factual: a snapshot readers can trust, not a marketing artifact.