Overview
This guide shows you how to integrate Latitude Telemetry into an existing application that uses the official LangChain SDK.
After completing these steps:
- Every LangChain call (e.g. `invoke`) can be captured as a log in Latitude.
- Logs are grouped under a prompt, identified by a `path`, inside a Latitude project.
- You can inspect inputs/outputs, measure latency, and debug LangChain-powered features from the Latitude dashboard.
You’ll keep calling LangChain exactly as you do today — Telemetry simply observes and enriches those calls.
Requirements
Before you start, make sure you have:
- A Latitude account and API key
- A Latitude project ID
- A Node.js or Python-based project that uses the LangChain SDK
That’s it — prompts do not need to be created ahead of time.
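If you keep these values in environment variables, a small guard can catch missing configuration before Telemetry is initialized. A minimal sketch, assuming the variable names used in this guide (`LATITUDE_API_KEY` appears in the examples below; `LATITUDE_PROJECT_ID` is a naming convention for this sketch, not something the SDK requires):

```typescript
// Minimal sketch: fail fast if the assumed environment variables are missing.
// LATITUDE_PROJECT_ID is a hypothetical name; the examples below hardcode
// the project ID instead.
const apiKey = process.env.LATITUDE_API_KEY
const projectId = Number(process.env.LATITUDE_PROJECT_ID)

if (!apiKey || Number.isNaN(projectId)) {
  throw new Error('Set LATITUDE_API_KEY and LATITUDE_PROJECT_ID before starting the app')
}
```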
Steps
Install requirements
Add the Latitude Telemetry package to your project:

```bash
# Node.js
npm add @latitude-data/telemetry

# Python
pip install latitude-telemetry
```
Wrap your LangChain-powered feature
Initialize Latitude Telemetry and wrap the code that calls LangChain using `telemetry.capture`.

```typescript
import { LatitudeTelemetry } from '@latitude-data/telemetry'
import * as LangchainCallbacks from '@langchain/core/callbacks/manager'
import { ChatOpenAI } from '@langchain/openai'
import { HumanMessage } from '@langchain/core/messages'

const telemetry = new LatitudeTelemetry(process.env.LATITUDE_API_KEY, {
  instrumentations: {
    langchain: { callbackManagerModule: LangchainCallbacks },
  },
})

async function generateSupportReply(input: string) {
  return telemetry.capture(
    {
      projectId: 123, // The ID of your project in Latitude
      path: 'generate-support-reply', // Add a path to identify this prompt in Latitude
    },
    async () => {
      const llm = new ChatOpenAI({ model: 'gpt-4o' })
      const response = await llm.invoke([new HumanMessage(input)])
      return response.content
    }
  )
}
```
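The wrapped function is called like any other async function. A minimal sketch of a hypothetical caller (the input string and `main` wrapper are illustrative, not part of the SDK):

```typescript
// Hypothetical caller: nothing about the call site changes.
// Telemetry records the LangChain activity inside generateSupportReply.
async function main() {
  const reply = await generateSupportReply('My order arrived damaged, what can I do?')
  console.log(reply)
}

main().catch(console.error)
```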
You can use the `capture` method as a decorator (recommended) or as a context manager.

Using decorator (recommended)

```python
import os
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
from latitude_telemetry import Telemetry, Instrumentors, TelemetryOptions

telemetry = Telemetry(
    os.environ["LATITUDE_API_KEY"],
    TelemetryOptions(instrumentors=[Instrumentors.LangChain]),
)

@telemetry.capture(
    project_id=123,  # The ID of your project in Latitude
    path="generate-support-reply",  # Add a path to identify this prompt in Latitude
)
def generate_support_reply(input: str) -> str:
    llm = ChatOpenAI(model="gpt-4o")
    messages = [HumanMessage(content=input)]
    response = llm.invoke(messages)
    return response.content
```
Using context manager

```python
import os
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
from latitude_telemetry import Telemetry, Instrumentors, TelemetryOptions

telemetry = Telemetry(
    os.environ["LATITUDE_API_KEY"],
    TelemetryOptions(instrumentors=[Instrumentors.LangChain]),
)

def generate_support_reply(input: str) -> str:
    with telemetry.capture(
        project_id=123,  # The ID of your project in Latitude
        path="generate-support-reply",  # Add a path to identify this prompt in Latitude
    ):
        llm = ChatOpenAI(model="gpt-4o")
        messages = [HumanMessage(content=input)]
        response = llm.invoke(messages)
        return response.content
```
The `path`:
- Identifies the prompt in Latitude
- Can be new or existing
- Should not contain spaces or special characters (use letters, numbers, `-`, `_`, `/`, and `.`)
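A `/` in the path can help keep related prompts organized within a project. A minimal sketch reusing the telemetry instance and imports from the TypeScript example above (the nested path is a made-up illustration, not a required convention):

```typescript
// Hypothetical example: a nested path built from the allowed characters
// (letters, numbers, -, _, /, and .) identifies this prompt in Latitude.
async function generateEmailReply(input: string) {
  return telemetry.capture(
    { projectId: 123, path: 'support/email/generate-reply' },
    async () => {
      const llm = new ChatOpenAI({ model: 'gpt-4o' })
      const response = await llm.invoke([new HumanMessage(input)])
      return response.content
    }
  )
}
```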
Seeing your logs in Latitude
Once your feature is wrapped, logs will appear automatically.
- Open the prompt in your Latitude dashboard (identified by its `path`)
- Go to the Traces section
- Each execution will show:
  - Input and output messages
  - Model and token usage
  - Latency and errors
- You'll see one trace per feature invocation
Each LangChain call appears as a child span under the captured prompt execution, giving you a full, end-to-end view of what happened.
That’s it
No changes to your LangChain calls, no special return values, and no extra plumbing — just wrap the feature you want to observe.