Overview
Track costs and monitor usage for Anthropic’s Claude API by routing your requests through LLM Ops. This guide shows you how to integrate using Python, JavaScript, or cURL.
Security Guarantee: LLM Ops does NOT store your API keys, request prompts, or response content. We only track the metadata needed for cost analytics.
Quick Start
Replace Anthropic's base URL with the LLM Ops proxy URL to track costs automatically:

```
Original: https://api.anthropic.com
LLM Ops:  https://api.llm-ops.cloudidr.com
```
API Keys
You’ll need two credentials:
Anthropic API Key - Your Claude API key from console.anthropic.com
Cloudidr Token - Your tracking token from llmfinops.ai
Set them as environment variables:
```shell
export ANTHROPIC_API_KEY="sk-ant-..."
export CLOUDIDR_TOKEN="cloudidr_..."
```
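With both variables exported, you can read them at runtime instead of hardcoding secrets. A minimal sketch; the `client_config` helper is introduced here for illustration and is not part of either SDK:

```python
import os

def client_config() -> dict:
    """Build keyword arguments for an Anthropic client pointed at the LLM Ops proxy."""
    return {
        "api_key": os.environ["ANTHROPIC_API_KEY"],
        "base_url": "https://api.llm-ops.cloudidr.com/v1",
        "default_headers": {"X-Cloudidr-Token": os.environ["CLOUDIDR_TOKEN"]},
    }
```

Then `client = Anthropic(**client_config())` behaves exactly like the hardcoded examples below.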
Integration Examples
Python

Install the SDK:

```shell
pip install anthropic
```

Basic Example

```python
from anthropic import Anthropic

# Initialize the client with the LLM Ops proxy
client = Anthropic(
    api_key="sk-ant-...",  # Your Anthropic API key
    base_url="https://api.llm-ops.cloudidr.com/v1",
    default_headers={
        "X-Cloudidr-Token": "cloudidr_..."  # Required for cost tracking
    }
)

# Make an API call - costs are tracked automatically
message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)

print(message.content[0].text)
```
Metadata Example

```python
from anthropic import Anthropic

client = Anthropic(
    api_key="sk-ant-...",
    base_url="https://api.llm-ops.cloudidr.com/v1"
)

# Track costs by department, team, and agent
message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Help me debug this code"}
    ],
    extra_headers={
        "X-Cloudidr-Token": "cloudidr_...",
        "X-Department": "engineering",
        "X-Team": "backend",
        "X-Agent": "code-assistant"
    }
)

print(message.content[0].text)
```
Streaming Example

```python
from anthropic import Anthropic

client = Anthropic(
    api_key="sk-ant-...",
    base_url="https://api.llm-ops.cloudidr.com/v1",
    default_headers={
        "X-Cloudidr-Token": "cloudidr_...",
        "X-Agent": "streaming-bot"
    }
)

# Streaming is fully supported
with client.messages.stream(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Write a short poem about AI"}
    ]
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
```
JavaScript

Install the SDK:

```shell
npm install @anthropic-ai/sdk
```

Basic Example

```javascript
import Anthropic from '@anthropic-ai/sdk';

// Initialize the client with the LLM Ops proxy
const client = new Anthropic({
  apiKey: 'sk-ant-...',  // Your Anthropic API key
  baseURL: 'https://api.llm-ops.cloudidr.com/v1',
  defaultHeaders: {
    'X-Cloudidr-Token': 'cloudidr_...'  // Required for cost tracking
  }
});

// Make an API call - costs are tracked automatically
const message = await client.messages.create({
  model: 'claude-sonnet-4-20250514',
  max_tokens: 1024,
  messages: [
    { role: 'user', content: 'What is the capital of France?' }
  ]
});

console.log(message.content[0].text);
```
Metadata Example

```javascript
import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic({
  apiKey: 'sk-ant-...',
  baseURL: 'https://api.llm-ops.cloudidr.com/v1'
});

// Track costs by department, team, and agent
const message = await client.messages.create({
  model: 'claude-sonnet-4-20250514',
  max_tokens: 1024,
  messages: [
    { role: 'user', content: 'Help me debug this code' }
  ]
}, {
  headers: {
    'X-Cloudidr-Token': 'cloudidr_...',
    'X-Department': 'engineering',
    'X-Team': 'backend',
    'X-Agent': 'code-assistant'
  }
});

console.log(message.content[0].text);
```
Streaming Example

```javascript
import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic({
  apiKey: 'sk-ant-...',
  baseURL: 'https://api.llm-ops.cloudidr.com/v1',
  defaultHeaders: {
    'X-Cloudidr-Token': 'cloudidr_...',
    'X-Agent': 'streaming-bot'
  }
});

// Streaming is fully supported
const stream = await client.messages.stream({
  model: 'claude-sonnet-4-20250514',
  max_tokens: 1024,
  messages: [
    { role: 'user', content: 'Write a short poem about AI' }
  ]
});

for await (const chunk of stream) {
  if (chunk.type === 'content_block_delta' &&
      chunk.delta.type === 'text_delta') {
    process.stdout.write(chunk.delta.text);
  }
}
```
cURL

Basic Example

```shell
curl https://api.llm-ops.cloudidr.com/v1/messages \
  -H "Content-Type: application/json" \
  -H "x-api-key: sk-ant-..." \
  -H "anthropic-version: 2023-06-01" \
  -H "X-Cloudidr-Token: cloudidr_..." \
  -d '{
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 1024,
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
Metadata Example

```shell
curl https://api.llm-ops.cloudidr.com/v1/messages \
  -H "Content-Type: application/json" \
  -H "x-api-key: sk-ant-..." \
  -H "anthropic-version: 2023-06-01" \
  -H "X-Cloudidr-Token: cloudidr_..." \
  -H "X-Department: engineering" \
  -H "X-Team: backend" \
  -H "X-Agent: code-assistant" \
  -d '{
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 1024,
    "messages": [
      {
        "role": "user",
        "content": "Help me debug this code"
      }
    ]
  }'
```
Streaming Example

```shell
curl https://api.llm-ops.cloudidr.com/v1/messages \
  -H "Content-Type: application/json" \
  -H "x-api-key: sk-ant-..." \
  -H "anthropic-version: 2023-06-01" \
  -H "X-Cloudidr-Token: cloudidr_..." \
  -H "X-Agent: streaming-bot" \
  -d '{
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 1024,
    "stream": true,
    "messages": [
      {
        "role": "user",
        "content": "Write a short poem about AI"
      }
    ]
  }'
```
Metadata Headers

Add these headers to organize your costs by department, team, or agent:

| Header | Description | Example |
| --- | --- | --- |
| `X-Cloudidr-Token` | Required - Your CloudIDR tracking token | `cloudidr_abc123...` |
| `X-Department` | Track costs by department | `engineering`, `sales`, `marketing`, `support` |
| `X-Team` | Track costs by team | `backend`, `frontend`, `ml`, `data`, `qa` |
| `X-Agent` | Track costs by agent/application | `chatbot`, `summarizer`, `analyzer`, `translator` |
Supported Models
All Anthropic Claude models are supported. See the Supported Models page for the complete list of available models and pricing.
What Gets Tracked
LLM Ops automatically captures:
✅ Token usage - Input and output tokens
✅ Cost - Real-time cost calculation
✅ Latency - Request duration
✅ Model - Which Claude model was used
✅ Metadata - Department, team, agent
✅ Errors - Failed requests and error types
What We DON’T Track:
❌ Customer API keys
❌ Request content (prompts)
❌ Response content (completions)
We only track metadata needed for cost analytics.
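To make the tracked fields concrete, here is the rough shape of a per-request record and its cost calculation. The field names and per-token prices are illustrative assumptions, not the actual LLM Ops schema or current Anthropic pricing:

```python
# Assumed prices for illustration only - check current Anthropic pricing.
INPUT_PRICE_PER_MTOK = 3.00    # USD per million input tokens (assumed)
OUTPUT_PRICE_PER_MTOK = 15.00  # USD per million output tokens (assumed)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost in USD from token counts."""
    return (input_tokens * INPUT_PRICE_PER_MTOK
            + output_tokens * OUTPUT_PRICE_PER_MTOK) / 1_000_000

# A hypothetical tracked record: metadata only, no prompt or completion text.
record = {
    "model": "claude-sonnet-4-20250514",
    "input_tokens": 1200,
    "output_tokens": 800,
    "latency_ms": 950,
    "department": "engineering",
    "team": "backend",
    "agent": "code-assistant",
    "error": None,
    "cost_usd": estimate_cost(1200, 800),
}
```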
View Your Data
After making requests, view your costs in the LLM Ops Dashboard :
Agent Explorer - See costs by agent/application
Department Breakdown - Compare department spending
Team Analysis - Track team-level costs
Model Comparison - Compare costs across models
Time Series - Track spending over time
Migration from Direct API
Switching from direct Anthropic API to LLM Ops is a two-line change:
```python
# Before
client = Anthropic(api_key="sk-ant-...")

# After - add base_url and the X-Cloudidr-Token header
client = Anthropic(
    api_key="sk-ant-...",
    base_url="https://api.llm-ops.cloudidr.com/v1",  # ← Add this
    default_headers={"X-Cloudidr-Token": "cloudidr_..."}  # ← Add this
)
```
Everything else stays the same - no other code changes are needed!
Troubleshooting
Requests not being tracked
Check these common issues:
✅ Verify base URL is https://api.llm-ops.cloudidr.com/v1
✅ Confirm X-Cloudidr-Token header is included in all requests
✅ Check that your Anthropic API key is valid
✅ Ensure you’re using /v1 in the endpoint path
Two separate keys are needed:
Your Anthropic API key (for Claude access)
Your CloudIDR token (for cost tracking)
Make sure both are set correctly and not swapped.
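A quick sanity check for swapped credentials, based on the key prefixes shown above (a sketch, not an official validator):

```python
def check_credentials(anthropic_key: str, cloudidr_token: str) -> list:
    """Return a list of problems; an empty list means the prefixes look right."""
    problems = []
    if not anthropic_key.startswith("sk-ant-"):
        problems.append("Anthropic API key should start with 'sk-ant-'")
    if not cloudidr_token.startswith("cloudidr_"):
        problems.append("CloudIDR token should start with 'cloudidr_'")
    return problems
```

Run it once at startup; two problems at the same time usually means the keys are swapped.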
Wait a few moments:
Cost data may take 10-30 seconds to appear in the dashboard
Check that the dashboard filters cover the correct time range
Verify requests are returning 200 OK status
Next Steps