Overview
Integrate LiteLLM with Cloudidr LLM Ops to automatically track API usage and costs. LiteLLM sends webhook callbacks to our system after each API request, allowing you to monitor usage and costs and organize them by department, team, and agent.

How it works: LiteLLM → Webhook Callback → Cloudidr LLM Ops → Dashboard

Configure LiteLLM to send webhooks to our endpoint, add tracking headers to your API requests, and we’ll automatically track all usage and costs.
Your Tracking Token
Your tracking token (the `trk_...` value passed in the `X-Cloudidr-Token` header) identifies your organization. Keep it secret and rotate it regularly.
Configuration Steps
1. Update LiteLLM config.yaml - Add webhook callback configuration to your LiteLLM config file
2. Add Tracking Headers - Include required and optional headers in your API requests
3. Restart & Test - Restart LiteLLM and verify tracking in your dashboard
Step 1: Update LiteLLM config.yaml
Choose the configuration that matches your setup:
- With Database (Recommended)
- Without Database (Optional)
Configuration with Database
Use this if you want to keep LiteLLM’s database features (user management, spend tracking, etc.).

Recommended for most users - This preserves all LiteLLM features while adding LLM Ops tracking.
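A minimal sketch of the with-database config, assuming a placeholder webhook endpoint (replace `https://your-llm-ops-endpoint.example.com/webhooks/litellm` with the URL from your dashboard) and an illustrative model entry. The `callback_settings` shape follows the proxy-mode notes later on this page (`generic_api` type plus a required `headers` field), so treat the exact key names as a sketch rather than the authoritative config:

```yaml
model_list:
  - model_name: gpt-4o                        # illustrative model entry
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY      # read from the environment

general_settings:
  master_key: os.environ/LITELLM_MASTER_KEY
  database_url: os.environ/DATABASE_URL       # keep your existing database_url

callback_settings:
  generic_api:
    # Placeholder endpoint: use the webhook URL shown in your LLM Ops dashboard
    endpoint: https://your-llm-ops-endpoint.example.com/webhooks/litellm
    # The headers field is required, even if it only sets Content-Type
    headers:
      Content-Type: application/json
```

Configuration without Database
Use this only if you don’t need LiteLLM’s database features. Per the troubleshooting notes below, an empty `database_url` must be paired with `disable_database_checks: true`; the `callback_settings` block is the same as above:

```yaml
general_settings:
  master_key: os.environ/LITELLM_MASTER_KEY
  database_url: ""                # empty string disables the database
  disable_database_checks: true   # required whenever database_url is empty
```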
Step 2: Add Tracking Headers to Your Requests
Include tracking headers in your API requests to LiteLLM.

Required Header
| Header | Description | Example |
|---|---|---|
| `X-Cloudidr-Token` | Required - Your tracking token | `trk_fXOn-A1V8VrCxXyJ1WuMX...` |
Optional Metadata Headers
| Header | Description | Example |
|---|---|---|
| `X-Department` | Organize costs by department | `engineering`, `sales`, `marketing` |
| `X-Team` | Organize costs by team | `backend`, `frontend`, `ml` |
| `X-Agent` | Organize costs by agent/application | `chatbot`, `summarizer`, `analyzer` |
Optional Metadata: These headers are optional. If omitted, requests will still be tracked, but won’t be organized by department/team/agent in your dashboard.
Code Examples
- Python (see the sketch below)
- JavaScript (pass the same headers via `defaultHeaders`)
- cURL (pass the same headers with `-H` flags)
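A minimal Python sketch, assuming the OpenAI SDK pointed at a locally running LiteLLM proxy; the URL, key, model name, and header values below are placeholders for your own:

```python
from openai import OpenAI

# Point the OpenAI SDK at your LiteLLM proxy (placeholder URL and key)
client = OpenAI(
    base_url="http://localhost:4000",        # your LiteLLM proxy
    api_key="sk-your-litellm-key",           # your LiteLLM key
)

response = client.chat.completions.create(
    model="gpt-4o",                          # any model routed by your proxy
    messages=[{"role": "user", "content": "Hello!"}],
    extra_headers={
        "X-Cloudidr-Token": "trk_...",       # required: your tracking token
        "X-Department": "engineering",       # optional metadata
        "X-Team": "backend",
        "X-Agent": "chatbot",
    },
)
print(response.choices[0].message.content)
```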
Step 3: Restart LiteLLM and Test
Restart your LiteLLM proxy and make a test request. Check your dashboard to verify the request was tracked.

Success! If everything is configured correctly, you should see the request appear in your LLM Ops Dashboard within a few seconds.
Troubleshooting
Webhooks not being received?
Common issues:
- ✅ Verify `callback_settings` is correctly formatted in `config.yaml`
- ✅ Check that the `headers` field is included (even if just `Content-Type: application/json`)
- ✅ Ensure the endpoint URL is accessible from your LiteLLM server
- ✅ Check LiteLLM logs for callback errors (enable `set_verbose: true`)
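For the last item, a minimal config.yaml sketch of turning on verbose logging (`set_verbose` sits under `litellm_settings` in standard LiteLLM configs):

```yaml
litellm_settings:
  set_verbose: true   # print detailed logs, including callback errors
```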
Requests not showing in dashboard?
Common issues:
- ✅ Verify the `X-Cloudidr-Token` header is included in your requests
- ✅ Check that the tracking token is active (not revoked)
- ✅ Ensure the token belongs to your organization
- ✅ Check API server logs for `[LITELLM]` messages
Metadata (Department/Team/Agent) not showing?
Remember:
- These headers are optional - requests will still be tracked without them
- Verify headers are passed correctly: `X-Department`, `X-Team`, `X-Agent`
- Check that headers are passed via `extra_headers` (Python) or `defaultHeaders` (JavaScript); see the sketch after this list
- For cURL, include headers directly: `-H "X-Department: engineering"`
Note: We currently extract metadata from custom headers. LiteLLM’s built-in `x-litellm-tags` header is not automatically mapped to our metadata fields.

Database configuration issues?
With Database:
- Keep your existing `database_url` configuration
- Don’t add `disable_database_checks`

Without Database:
- Set `database_url: ""` (empty string)
- Must add `disable_database_checks: true`

Common mistake: you forgot `disable_database_checks: true` when using an empty `database_url`.
What Gets Tracked
LLM Ops automatically captures the following from LiteLLM webhooks:
- ✅ Token usage - Input, output, and total tokens
- ✅ Cost - Real-time cost calculation
- ✅ Latency - Request duration
- ✅ Model - Which model was used
- ✅ Metadata - Department, team, agent (from headers)
- ✅ Errors - Failed requests and error types
- ✅ Source - Marked as `litellm` in the database
Important Notes
Required vs Optional Headers
- Required: the `X-Cloudidr-Token` header must be included in all requests
- Optional: `X-Department`, `X-Team`, `X-Agent` headers are optional metadata for organizing costs
LiteLLM Proxy Mode
- LiteLLM proxy mode requires `callback_settings` with the `generic_api` type
- Direct URLs don’t work; you must use webhook callbacks
- The `headers` field is required in `callback_settings`, even if just `Content-Type: application/json`
Database Configuration
- If you’re using LiteLLM’s database, keep your existing `database_url` configuration
- Only set `database_url: ""` and `disable_database_checks: true` if you don’t need LiteLLM’s database features
Security Best Practices
- For production, use environment variables for API keys and master keys
- Never commit tokens or keys to version control
- Rotate tracking tokens regularly
- Use HTTPS for production deployments
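For example, a sketch of keeping keys out of config.yaml using LiteLLM’s `os.environ/` reference syntax; the variable names are illustrative:

```yaml
general_settings:
  master_key: os.environ/LITELLM_MASTER_KEY   # resolved from the environment at startup

model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY      # never hardcode provider keys
```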
LiteLLM Tags
- LiteLLM’s built-in `x-litellm-tags` header is not automatically mapped to our metadata fields; use the `X-Department`, `X-Team`, and `X-Agent` headers to organize costs instead
View Your Data
After making requests, view your costs in the LLM Ops Dashboard:
- Agent Explorer - See costs by agent/application
- Department Breakdown - Compare department spending
- Team Analysis - Track team-level costs
- Model Comparison - Compare costs across models routed through LiteLLM
- Time Series - Track spending over time
- Source Filter - Filter by source (all LiteLLM requests are marked as `litellm`)
LiteLLM Features
LiteLLM provides powerful features that work seamlessly with LLM Ops tracking:
- Multi-Provider Support - Route to OpenAI, Anthropic, Google, and 100+ providers
- Load Balancing - Distribute requests across multiple API keys
- Fallback Handling - Automatic failover when providers are down
- Rate Limiting - Control costs with built-in rate limits
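As an illustration of load balancing, a minimal LiteLLM config sketch: two deployments share one `model_name`, and the proxy distributes traffic across them (model names and environment variables are placeholders):

```yaml
model_list:
  # Two deployments under the same model_name; LiteLLM load-balances across them
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY_1
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY_2
```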
Need Help?
- Email Support - Contact us at [email protected]
- Discord Community - Join our Discord for quick help
- LiteLLM Docs - Official LiteLLM documentation
- View Dashboard - Check your tracked requests
Next Steps
1. Configure LiteLLM - Add webhook callback to your config.yaml
2. Add Headers - Include `X-Cloudidr-Token` in your requests
3. Monitor Costs - View usage and costs in your dashboard
4. Optimize - Use insights to reduce API costs

