## Documentation Index

Fetch the complete documentation index at: https://docs-v1.latitude.so/llms.txt
Use this file to discover all available pages before exploring further.
## Overview
This guide shows you how to integrate Latitude Telemetry into an existing application that uses the official Amazon Bedrock SDK.
After completing these steps:
- Every Amazon Bedrock call (e.g. `invokeModel`) can be captured as a log in Latitude.
- Logs are grouped under a prompt, identified by a `path`, inside a Latitude project.
- You can inspect inputs/outputs, measure latency, and debug Amazon Bedrock-powered features from the Latitude dashboard.
You’ll keep calling Amazon Bedrock exactly as you do today; Telemetry simply observes and enriches those calls.
## Requirements
Before you start, make sure you have:
- A Latitude account and API key
- A Latitude project ID
- A Node.js or Python-based project that uses the Amazon Bedrock SDK
That’s it — prompts do not need to be created ahead of time.
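As a quick sanity check before wiring anything up, you might verify the API key is available in your environment. This is a minimal sketch, assuming the `LATITUDE_API_KEY` variable name used by the snippets in this guide:

```typescript
// Minimal sanity check, assuming the LATITUDE_API_KEY environment
// variable that the snippets below read.
const apiKey = process.env.LATITUDE_API_KEY
if (!apiKey) {
  throw new Error('LATITUDE_API_KEY is not set')
}
```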
## Steps
### Install requirements

Add the Latitude Telemetry package to your project:

Node.js:

```bash
npm add @latitude-data/telemetry
```

Python:

```bash
pip install latitude-telemetry
```
### Wrap your Bedrock-powered feature

Initialize Latitude Telemetry and wrap the code that calls Amazon Bedrock using `telemetry.capture`:

```typescript
import { LatitudeTelemetry } from '@latitude-data/telemetry'
import * as Bedrock from '@aws-sdk/client-bedrock-runtime'

const telemetry = new LatitudeTelemetry(
  process.env.LATITUDE_API_KEY,
  { instrumentations: { bedrock: Bedrock } },
)

async function generateSupportReply(input: string) {
  return telemetry.capture(
    {
      projectId: 123, // The ID of your project in Latitude
      path: 'generate-support-reply', // Add a path to identify this prompt in Latitude
    },
    async () => {
      const client = new Bedrock.BedrockRuntimeClient({ region: 'us-east-1' })

      const response = await client.send(
        new Bedrock.InvokeModelCommand({
          modelId: 'anthropic.claude-v2',
          body: JSON.stringify({
            prompt: `\n\nHuman: ${input}\n\nAssistant:`,
            max_tokens_to_sample: 1024,
          }),
        })
      )

      const result = JSON.parse(new TextDecoder().decode(response.body))
      return result.completion
    }
  )
}
```
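Calling the wrapped function works exactly as before. For illustration, a hypothetical call site might look like this (it assumes the `generateSupportReply` function defined above):

```typescript
// Hypothetical call site: the invocation is unchanged, and Telemetry
// records the underlying Bedrock call as a trace in Latitude.
async function main() {
  const reply = await generateSupportReply('How do I reset my password?')
  console.log(reply)
}

main()
```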
In Python, you can use the `capture` method as a decorator (recommended) or as a context manager.

**Using decorator (recommended):**

```python
import os
import json

import boto3
from latitude_telemetry import Telemetry, Instrumentors, TelemetryOptions

telemetry = Telemetry(
    os.environ["LATITUDE_API_KEY"],
    TelemetryOptions(instrumentors=[Instrumentors.Bedrock]),
)

@telemetry.capture(
    project_id=123,  # The ID of your project in Latitude
    path="generate-support-reply",  # Add a path to identify this prompt in Latitude
)
def generate_support_reply(input: str) -> str:
    client = boto3.client("bedrock-runtime", region_name="us-east-1")

    response = client.invoke_model(
        modelId="anthropic.claude-v2",
        body=json.dumps({
            "prompt": f"\n\nHuman: {input}\n\nAssistant:",
            "max_tokens_to_sample": 1024,
        }),
    )

    result = json.loads(response["body"].read())
    return result["completion"]
```
**Using context manager:**

```python
import os
import json

import boto3
from latitude_telemetry import Telemetry, Instrumentors, TelemetryOptions

telemetry = Telemetry(
    os.environ["LATITUDE_API_KEY"],
    TelemetryOptions(instrumentors=[Instrumentors.Bedrock]),
)

def generate_support_reply(input: str) -> str:
    with telemetry.capture(
        project_id=123,  # The ID of your project in Latitude
        path="generate-support-reply",  # Add a path to identify this prompt in Latitude
    ):
        client = boto3.client("bedrock-runtime", region_name="us-east-1")

        response = client.invoke_model(
            modelId="anthropic.claude-v2",
            body=json.dumps({
                "prompt": f"\n\nHuman: {input}\n\nAssistant:",
                "max_tokens_to_sample": 1024,
            }),
        )

        result = json.loads(response["body"].read())
        return result["completion"]
```
The path:

- Identifies the prompt in Latitude
- Can be new or existing
- Should not contain spaces or special characters (use letters, numbers, `-`, `_`, `/`, `.`); see the examples after this list
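For illustration, here are some hypothetical path values; the invalid ones contain spaces or disallowed characters:

```typescript
// Hypothetical path values, for illustration only.
const validPaths = [
  'generate-support-reply', // hyphens are fine
  'support/replies.v2',     // folders via "/" and dots are fine
  'agents/faq_bot',         // underscores are fine
]

const invalidPaths = [
  'generate support reply', // contains spaces
  'prompts?draft=1',        // contains special characters
]
```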
## Streaming responses

When using streaming (`InvokeModelWithResponseStreamCommand`), consume the stream inside your `capture` block so the span covers the entire operation.
Consume the stream inside your `capture()` callback:

```typescript
async function streamSupportReply(input: string, res: Response) {
  await telemetry.capture(
    { projectId: 123, path: 'generate-support-reply' },
    async () => {
      const client = new Bedrock.BedrockRuntimeClient({ region: 'us-east-1' })

      const response = await client.send(
        new Bedrock.InvokeModelWithResponseStreamCommand({
          modelId: 'anthropic.claude-v2',
          body: JSON.stringify({
            prompt: `\n\nHuman: ${input}\n\nAssistant:`,
            max_tokens_to_sample: 1024,
          }),
        })
      )

      for await (const event of response.body) {
        if (event.chunk?.bytes) {
          const chunk = JSON.parse(new TextDecoder().decode(event.chunk.bytes))
          if (chunk.completion) {
            res.write(chunk.completion)
          }
        }
      }

      res.end()
    }
  )
}
```
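This helper is written against an Express-style response object (a `Response` with `write` and `end`). A hypothetical route using it might look like the sketch below; the app, route path, and request shape are illustrative assumptions, not part of the Telemetry API:

```typescript
import express from 'express'

// Hypothetical Express wiring for the streaming helper above.
const app = express()
app.use(express.json())

app.post('/support/stream', async (req, res) => {
  res.setHeader('Content-Type', 'text/plain')
  await streamSupportReply(req.body.question, res)
})

app.listen(3000)
```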
In Python, use a generator function with the decorator:

```python
@telemetry.capture(project_id=123, path="generate-support-reply")
async def stream_support_reply(input: str):
    client = boto3.client("bedrock-runtime", region_name="us-east-1")

    response = client.invoke_model_with_response_stream(
        modelId="anthropic.claude-v2",
        body=json.dumps({
            "prompt": f"\n\nHuman: {input}\n\nAssistant:",
            "max_tokens_to_sample": 1024,
        }),
    )

    for event in response["body"]:
        chunk = json.loads(event["chunk"]["bytes"])
        if "completion" in chunk:
            yield chunk["completion"]
```
## Seeing your logs in Latitude
Once your feature is wrapped, logs will appear automatically.
- Open the prompt in your Latitude dashboard (identified by `path`)
- Go to the Traces section
- Each execution will show:
  - Input and output messages
  - Model and token usage
  - Latency and errors
- One trace per feature invocation
Each Amazon Bedrock call appears as a child span under the captured prompt execution, giving you a full, end-to-end view of what happened.
## That’s it
No changes to your Amazon Bedrock calls, no special return values, and no extra plumbing — just wrap the feature you want to observe.