Overview
This guide shows you how to integrate Latitude Telemetry into an existing application that uses the official Together AI SDK.
After completing these steps:
- Every Together AI call (e.g. chat.completions.create) can be captured as a log in Latitude.
- Logs are grouped under a prompt, identified by a path, inside a Latitude project.
- You can inspect inputs/outputs, measure latency, and debug Together AI-powered features from the Latitude dashboard.
You’ll keep calling Together AI exactly as you do today — Telemetry simply observes and enriches those calls.
Requirements
Before you start, make sure you have:
- A Latitude account and API key
- A Latitude project ID
- A Node.js or Python-based project that uses the Together AI SDK
That’s it — prompts do not need to be created ahead of time.
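The code samples in this guide read the credentials from environment variables (LATITUDE_API_KEY and TOGETHER_API_KEY; the project ID is passed directly in code). A minimal sketch of failing fast when they are missing — the check itself is just a suggestion, not part of the SDK:

```typescript
// The examples below assume these environment variables are set.
const { LATITUDE_API_KEY, TOGETHER_API_KEY } = process.env

if (!LATITUDE_API_KEY || !TOGETHER_API_KEY) {
  throw new Error('Set LATITUDE_API_KEY and TOGETHER_API_KEY before starting the app')
}
```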
Steps
Install requirements
Add the Latitude Telemetry package to your project:

```bash
npm add @latitude-data/telemetry
```

```bash
pip install latitude-telemetry
```
Wrap your Together AI-powered feature
Initialize Latitude Telemetry and wrap the code that calls Together AI using telemetry.capture.

```typescript
import { LatitudeTelemetry } from '@latitude-data/telemetry'
import { Together } from 'together-ai'

const telemetry = new LatitudeTelemetry(
  process.env.LATITUDE_API_KEY,
  { instrumentations: { together: Together } }
)

async function generateSupportReply(input: string) {
  return telemetry.capture(
    {
      projectId: 123, // The ID of your project in Latitude
      path: 'generate-support-reply', // Add a path to identify this prompt in Latitude
    },
    async () => {
      const client = new Together({ apiKey: process.env.TOGETHER_API_KEY })
      const response = await client.chat.completions.create({
        model: 'meta-llama/Llama-3-70b-chat-hf',
        messages: [{ role: 'user', content: input }],
      })
      return response.choices[0].message.content
    }
  )
}
```
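Once wrapped, callers use the function exactly as before. A minimal usage sketch (the input string is just an example):

```typescript
// Calling the wrapped feature — nothing changes for callers.
// Each invocation produces one trace in Latitude under 'generate-support-reply'.
const reply = await generateSupportReply('How do I reset my password?')
console.log(reply)
```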
You can use the capture method as a decorator (recommended) or as a context manager.

Using decorator (recommended):

```python
import os

from together import Together
from latitude_telemetry import Telemetry, Instrumentors, TelemetryOptions

telemetry = Telemetry(
    os.environ["LATITUDE_API_KEY"],
    TelemetryOptions(instrumentors=[Instrumentors.Together]),
)

@telemetry.capture(
    project_id=123,  # The ID of your project in Latitude
    path="generate-support-reply",  # Add a path to identify this prompt in Latitude
)
def generate_support_reply(input: str) -> str:
    client = Together()
    response = client.chat.completions.create(
        model="meta-llama/Llama-3-70b-chat-hf",
        messages=[{"role": "user", "content": input}],
    )
    return response.choices[0].message.content
```
Using context manager:

```python
import os

from together import Together
from latitude_telemetry import Telemetry, Instrumentors, TelemetryOptions

telemetry = Telemetry(
    os.environ["LATITUDE_API_KEY"],
    TelemetryOptions(instrumentors=[Instrumentors.Together]),
)

def generate_support_reply(input: str) -> str:
    with telemetry.capture(
        project_id=123,  # The ID of your project in Latitude
        path="generate-support-reply",  # Add a path to identify this prompt in Latitude
    ):
        client = Together()
        response = client.chat.completions.create(
            model="meta-llama/Llama-3-70b-chat-hf",
            messages=[{"role": "user", "content": input}],
        )
        return response.choices[0].message.content
```
The path:
- Identifies the prompt in Latitude
- Can be new or existing
- Should not contain spaces or special characters (use letters, numbers, - _ / .)
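For instance, any of the following would work as a path (these names are purely illustrative):

```typescript
// Illustrative path values for the capture options.
const validPaths = [
  'generate-support-reply',     // simple kebab-case name
  'support/generate-reply',     // "/" can be used to namespace related prompts
  'support/generate-reply.v2',  // "." and "_" are fine for versions or variants
]
// Avoid values like 'generate support reply!' (spaces and special characters).
```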
Streaming responses
When using streaming (stream: true), consume the stream inside your capture block so the span covers the entire operation.
Consume the stream inside your capture() callback:

```typescript
async function streamSupportReply(input: string, res: Response) {
  await telemetry.capture(
    { projectId: 123, path: 'generate-support-reply' },
    async () => {
      const client = new Together({ apiKey: process.env.TOGETHER_API_KEY })
      const stream = await client.chat.completions.create({
        model: 'meta-llama/Llama-3-70b-chat-hf',
        messages: [{ role: 'user', content: input }],
        stream: true,
      })

      for await (const chunk of stream) {
        const content = chunk.choices[0]?.delta?.content
        if (content) {
          res.write(content)
        }
      }

      res.end()
    }
  )
}
```
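A sketch of wiring this into an HTTP endpoint, assuming an Express app (Express and the route shown here are assumptions for illustration, not part of the Latitude SDK):

```typescript
// Hypothetical Express route that streams the reply back to the client.
// Any framework whose response object exposes write()/end() works the same way.
import express from 'express'

const app = express()
app.use(express.json())

app.post('/support/reply', async (req, res) => {
  await streamSupportReply(req.body.message, res)
})

app.listen(3000)
```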
Use a generator function with the decorator:

```python
@telemetry.capture(project_id=123, path="generate-support-reply")
def stream_support_reply(input: str):
    client = Together()
    stream = client.chat.completions.create(
        model="meta-llama/Llama-3-70b-chat-hf",
        messages=[{"role": "user", "content": input}],
        stream=True,
    )
    for chunk in stream:
        if chunk.choices[0].delta.content:
            yield chunk.choices[0].delta.content
```
Seeing your logs in Latitude
Once your feature is wrapped, logs will appear automatically.
- Open the prompt in your Latitude dashboard (identified by path)
- Go to the Traces section
- Each execution will show:
  - Input and output messages
  - Model and token usage
  - Latency and errors
- One trace per feature invocation
Each Together AI call appears as a child span under the captured prompt execution, giving you a full, end-to-end view of what happened.
That’s it
No changes to your Together AI calls, no special return values, and no extra plumbing — just wrap the feature you want to observe.