This integration is only available in the Python SDK.
Overview
This guide shows you how to integrate Latitude Telemetry into an existing application that uses LiteLLM, a unified interface for calling 100+ LLM providers. After completing these steps:

- Every LiteLLM call (e.g. completion, acompletion) can be captured as a log in Latitude.
- Logs are grouped under a prompt, identified by a path, inside a Latitude project.
- You can inspect inputs/outputs, measure latency, and debug LiteLLM-powered features from the Latitude dashboard.

You'll keep calling LiteLLM exactly as you do today; Telemetry simply observes and enriches those calls.
Requirements
Before you start, make sure you have:

- A Latitude account and API key
- A Latitude project ID
- A Python-based project that uses LiteLLM
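If the libraries are not installed yet, they can typically be added with pip. The package names below are assumptions based on the libraries' published PyPI names; verify them against the Latitude documentation:

```shell
# litellm is LiteLLM's PyPI package; latitude-telemetry is assumed
# to be the Latitude Telemetry package name
pip install latitude-telemetry litellm
```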
Steps
Wrap your LiteLLM-powered feature
Initialize Latitude Telemetry and wrap the code that calls LiteLLM using telemetry.capture. You can use the capture method as a decorator (recommended) or as a context manager.
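The original code samples did not survive extraction, so the sketches below reconstruct both usage patterns. Only telemetry.capture and the path parameter come from this guide; the import path, constructor arguments, project ID handling, and the feature names are assumptions to be checked against the Latitude Python SDK reference:

```python
import litellm
from latitude_telemetry import Telemetry  # import path is an assumption

# Constructor arguments are assumptions; use your real API key
telemetry = Telemetry("YOUR_LATITUDE_API_KEY")

# Using a decorator (recommended): every LiteLLM call inside the
# function is captured under the given prompt path
@telemetry.capture(project_id=123, path="features/summarize")
def summarize(text: str) -> str:
    response = litellm.completion(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Summarize: {text}"}],
    )
    return response.choices[0].message.content

# Using a context manager: capture only a specific block of code
def summarize_with_context(text: str) -> str:
    with telemetry.capture(project_id=123, path="features/summarize"):
        response = litellm.completion(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": f"Summarize: {text}"}],
        )
        return response.choices[0].message.content
```

In either form, the LiteLLM call itself is unchanged; the wrapper only determines which prompt path the resulting log is attached to.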
The path:

- Identifies the prompt in Latitude
- Can be new or existing
- Should not contain spaces or special characters (use letters, numbers, -, _, /, .)
Streaming responses
When using streaming (stream=True), use a generator function with the decorator. The SDK keeps the span open until all chunks are yielded:
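A sketch of the streaming pattern, under the same assumptions about the SDK's names as in the previous examples; wrapping a generator lets the span close only after the last chunk is yielded:

```python
import litellm
from latitude_telemetry import Telemetry  # import path is an assumption

telemetry = Telemetry("YOUR_LATITUDE_API_KEY")  # constructor is an assumption

@telemetry.capture(project_id=123, path="features/stream-answer")
def stream_answer(question: str):
    # stream=True makes LiteLLM return an iterator of chunks;
    # yielding from inside the decorated generator keeps the
    # telemetry span open until the stream is exhausted
    for chunk in litellm.completion(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
        stream=True,
    ):
        delta = chunk.choices[0].delta.content
        if delta:
            yield delta
```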
Seeing your logs in Latitude
Once your feature is wrapped, logs will appear automatically.

- Open the prompt in your Latitude dashboard (identified by its path)
- Go to the Traces section
- Each execution will show:
  - Input and output messages
  - Model and token usage
  - Latency and errors
- You will see one trace per feature invocation