
Overview

Track costs and monitor usage for Anthropic’s Claude API by routing your requests through LLM Ops. This guide shows you how to integrate using Python, JavaScript, or cURL.
Security Guarantee: LLM Ops does not store your API keys, request prompts, or response content in the analytics database—only metadata needed for cost analytics. The proxy must forward request bodies to Anthropic to complete the call; optional operational logging may exist in your deployment environment.

Quick Start

Point the Anthropic SDK at the LLM Ops API host (do not append /v1—the SDK adds /v1/messages itself). For raw HTTP (cURL), use the full path https://api.llm-ops.cloudidr.com/v1/messages.
  • Original API host: https://api.anthropic.com
  • LLM Ops API host (SDK base_url / baseURL): https://api.llm-ops.cloudidr.com
  • cURL URL: https://api.llm-ops.cloudidr.com/v1/messages
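For raw HTTP, a minimal cURL sketch looks like the following. It mirrors Anthropic's standard Messages API request shape (the `x-api-key` and `anthropic-version` headers are Anthropic requirements, which this assumes the proxy forwards unchanged), and it uses the `ANTHROPIC_API_KEY` and `CLOUDIDR_KEY` environment variables defined in the API Keys section below:

```shell
# Same request as the Python examples, sent directly over HTTP.
# Requires both env vars to be set with real credentials.
curl https://api.llm-ops.cloudidr.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "X-Cloudidr-Key: $CLOUDIDR_KEY" \
  -H "content-type: application/json" \
  -d '{
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "What is the capital of France?"}]
  }'
```

The response body is identical to what the Anthropic API returns directly; only the host differs.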

API Keys

You’ll need two credentials:
  1. Anthropic API Key - Your Claude API key from console.anthropic.com
  2. Cloudidr Key - Your tracking token from the LLM Ops dashboard (tokens are typically prefixed with trk_)
The marketing site llmfinops.ai points at the same product; the dashboard URL above is the canonical app host. Set them as environment variables:
export ANTHROPIC_API_KEY="sk-ant-..."
export CLOUDIDR_KEY="trk_..."
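Rather than hardcoding credentials as in the examples below, you can read both from the environment. Here is a small sketch (the helper name `client_kwargs` is illustrative, not part of any SDK) that builds the keyword arguments for the `Anthropic(...)` constructor from the two variables above:

```python
import os

def client_kwargs() -> dict:
    # Build Anthropic(...) constructor arguments from the environment.
    # Variable names match the export lines above; X-Cloudidr-Key is the
    # tracking header this guide requires on every request.
    return {
        "api_key": os.environ["ANTHROPIC_API_KEY"],
        "base_url": "https://api.llm-ops.cloudidr.com",
        "default_headers": {"X-Cloudidr-Key": os.environ["CLOUDIDR_KEY"]},
    }

# Usage: client = Anthropic(**client_kwargs())
```

This keeps secrets out of source control and makes it easy to swap credentials between environments.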

Integration Examples

Install SDK

pip install anthropic

Basic Example

from anthropic import Anthropic

# Initialize client with LLM Ops proxy (no /v1 — SDK appends /v1/messages)
client = Anthropic(
    api_key="sk-ant-...",  # Your Anthropic API key
    base_url="https://api.llm-ops.cloudidr.com",
    default_headers={
        "X-Cloudidr-Key": "trk_..."  # Required for cost tracking
    }
)

# Make API call - costs are automatically tracked
message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)

print(message.content[0].text)

With Metadata (Department/Project/Agent Tracking)

from anthropic import Anthropic

client = Anthropic(
    api_key="sk-ant-...",
    base_url="https://api.llm-ops.cloudidr.com"
)

# Track costs by department, team, and agent
message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Help me debug this code"}
    ],
    extra_headers={
        "X-Cloudidr-Key": "trk_...",
        "X-Department": "engineering",
        "X-Project": "backend",
        "X-Agent": "code-assistant"
    }
)

print(message.content[0].text)

Streaming Example

from anthropic import Anthropic

client = Anthropic(
    api_key="sk-ant-...",
    base_url="https://api.llm-ops.cloudidr.com",
    default_headers={
        "X-Cloudidr-Key": "trk_...",
        "X-Agent": "streaming-bot"
    }
)

# Streaming is fully supported
with client.messages.stream(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Write a short poem about AI"}
    ]
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)

Cost Tracking Headers

Add these headers to organize your costs by department, project, or agent:

Header          | Description                             | Example
X-Cloudidr-Key  | Required - Your Cloudidr tracking token | trk_abc123...
X-Department    | Track costs by department               | engineering, sales, marketing, support
X-Project       | Track costs by project or team          | backend, frontend, ml, data, qa
X-Agent         | Track costs by agent/application        | chatbot, summarizer, analyzer, translator

Supported Models

All Anthropic Claude models are supported. See the Supported Models page for the complete list of available models and pricing.

What Gets Tracked

LLM Ops automatically captures:
  • Token usage - Input and output tokens
  • Cost - Real-time cost calculation
  • Latency - Request duration
  • Model - Which Claude model was used
  • Metadata - Department, project, and agent headers
  • Errors - Failed requests and error types
What We DON’T Track:
  • ❌ Customer API keys
  • ❌ Request content (prompts)
  • ❌ Response content (completions)
We only persist metadata needed for cost analytics in our application database.

View Your Data

After making requests, view your costs in the LLM Ops Dashboard:
  • Agent Explorer - See costs by agent/application
  • Department Breakdown - Compare department spending
  • Team Analysis - Track team-level costs
  • Model Comparison - Compare costs across models
  • Time Series - Track spending over time

Migration from Direct API

Switching from the direct Anthropic API to LLM Ops is a two-line change:
# Before
client = Anthropic(api_key="sk-ant-...")

# After - add base_url (API host only, no /v1) and X-Cloudidr-Key header
client = Anthropic(
    api_key="sk-ant-...",
    base_url="https://api.llm-ops.cloudidr.com",      # ← Add this
    default_headers={"X-Cloudidr-Key": "trk_..."}  # ← Add this
)
Everything else stays the same - no other changes are needed.

Troubleshooting

Check these common issues:
  • ✅ For SDKs, set base_url / baseURL to https://api.llm-ops.cloudidr.com (do not include /v1; the SDK adds /v1/messages).
  • ✅ For cURL, call https://api.llm-ops.cloudidr.com/v1/messages.
  • ✅ Confirm X-Cloudidr-Key is included on every request (or in extra_headers / per-request headers where applicable).
  • ✅ Check that your Anthropic API key is valid.
Two separate keys are needed:
  • Your Anthropic API key (for Claude access)
  • Your Cloudidr tracking token (for cost tracking)
Make sure both are set correctly and not swapped.
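A quick sanity check for this failure mode can be scripted. The sketch below (the helper name `check_keys` is illustrative) validates the prefix conventions this guide describes - Anthropic keys start with `sk-ant-` and Cloudidr tracking tokens with `trk_` - and flags swapped keys:

```python
def check_keys(anthropic_key: str, cloudidr_key: str) -> list[str]:
    # Return a list of problems found; empty means the keys look plausible.
    problems = []
    if anthropic_key.startswith("trk_") or cloudidr_key.startswith("sk-ant-"):
        problems.append("keys appear to be swapped")
    if not anthropic_key.startswith("sk-ant-"):
        problems.append("ANTHROPIC_API_KEY does not look like an Anthropic key")
    if not cloudidr_key.startswith("trk_"):
        problems.append("CLOUDIDR_KEY does not look like a Cloudidr tracking token")
    return problems
```

Run it against your environment variables before debugging further; prefix checks catch the swapped-key mistake immediately, while only a real request can confirm the keys are actually valid.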
Wait a few moments:
  • Cost data may take 10-30 seconds to appear in dashboard
  • Check the correct time range in dashboard filters
  • Verify requests are returning 200 OK status

Next Steps

View Dashboard

See your Claude API costs in real-time

Supported Models

View all supported Claude models

OpenAI Integration

Add cost tracking for GPT models

Set Budgets

Configure spending alerts and limits