## When to use LiteLLM
Choose this provider when you want to use:

- OpenAI (GPT-4o, GPT-4 Turbo)
- Azure OpenAI
- Google Gemini
- Mistral
- Cohere
- Any other provider supported by LiteLLM
## Configuration
Set these environment variables in your MergeWatch `.env` file:
| Variable | Required | Value |
|---|---|---|
| `LLM_PROVIDER` | Yes | `litellm` |
| `LITELLM_BASE_URL` | Yes | URL of your LiteLLM proxy (e.g. `http://litellm:4000`) |
| `LITELLM_API_KEY` | No | API key, if your LiteLLM proxy requires authentication |
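For example, a `.env` pointing MergeWatch at a local LiteLLM sidecar might look like this (the key value is a placeholder):

```ini
LLM_PROVIDER=litellm
LITELLM_BASE_URL=http://litellm:4000
# Only needed if your proxy enforces authentication
LITELLM_API_KEY=sk-placeholder
```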
The model is configured in your LiteLLM `config.yaml`, not in the MergeWatch `.env` file. MergeWatch sends requests to the proxy, and LiteLLM routes them to the correct provider and model.

## Docker Compose setup
Run LiteLLM as a sidecar alongside MergeWatch. `docker-compose.yml`:
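A minimal sketch of such a Compose file. The MergeWatch image name is an assumption; the LiteLLM image and `--config` flag follow the LiteLLM documentation:

```yaml
services:
  mergewatch:
    image: mergewatch/mergewatch:latest   # assumed image name
    env_file: .env                        # contains LLM_PROVIDER and LITELLM_BASE_URL
    depends_on:
      - litellm

  litellm:
    image: ghcr.io/berriai/litellm:main-latest
    command: ["--config", "/app/config.yaml"]
    volumes:
      - ./litellm-config.yaml:/app/config.yaml
    ports:
      - "4000:4000"
    environment:
      # Pass through whichever provider keys your config.yaml references
      - OPENAI_API_KEY=${OPENAI_API_KEY}
```

Inside the Compose network, MergeWatch reaches the proxy at `http://litellm:4000`, which matches the `LITELLM_BASE_URL` example above.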
## LiteLLM config examples
Create a `litellm-config.yaml` file next to your `docker-compose.yml`.
### OpenAI

`litellm-config.yaml`:
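A minimal example following the LiteLLM proxy config format; the `model_name` alias is arbitrary:

```yaml
model_list:
  - model_name: gpt-4o                     # alias clients request
    litellm_params:
      model: openai/gpt-4o                 # provider/model
      api_key: os.environ/OPENAI_API_KEY   # read from the proxy's environment
```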
### Azure OpenAI

`litellm-config.yaml`:
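A sketch in the same format; the deployment name, resource URL, and API version below are placeholders for your own values:

```yaml
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: azure/my-gpt4o-deployment                  # azure/<your deployment name>
      api_base: https://my-resource.openai.azure.com/   # your Azure OpenAI endpoint
      api_key: os.environ/AZURE_API_KEY
      api_version: "2024-02-15-preview"                 # example API version
```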
### Google Gemini

`litellm-config.yaml`:
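A sketch in the same format, assuming your Gemini API key is exposed to the proxy as `GEMINI_API_KEY`:

```yaml
model_list:
  - model_name: gemini-1.5-pro
    litellm_params:
      model: gemini/gemini-1.5-pro
      api_key: os.environ/GEMINI_API_KEY
```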
