The Latitude Python SDK provides a convenient way to interact with the Latitude platform from your Python applications.
Installation
The Latitude SDK is compatible with Python 3.9 or higher.
pip install latitude-sdk
# or
poetry add latitude-sdk
# or
uv add latitude-sdk
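To confirm your interpreter meets the version requirement before installing, a quick check:

```python
import sys

# latitude-sdk requires Python 3.9 or higher
assert sys.version_info >= (3, 9), "latitude-sdk requires Python 3.9+"
```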
Authentication and Initialization
Import the SDK and initialize it with your API key. You can generate API keys in your Latitude project settings under “API Access”.
import os
from latitude_sdk import Latitude
latitude = Latitude(os.getenv("LATITUDE_API_KEY"))
You can also provide additional options during initialization:
from latitude_sdk import LatitudeOptions

latitude = Latitude(os.getenv("LATITUDE_API_KEY"), LatitudeOptions(
    project_id=12345,  # Your Latitude project ID
    version_uuid="optional-version-uuid",  # Optional version UUID
))
Keep your API key secure and avoid committing it directly into your codebase.
Both project_id and version_uuid options can be overridden on a per-method basis when needed.
Examples
Check out our Examples section for more examples of how to use the Latitude SDK.
SDK Usage
The Latitude Python SDK is async by design, which means you must call it from within an async event loop. Frameworks such as FastAPI or async-capable Django provide one for you; otherwise you can drive it yourself with the built-in asyncio library.
import asyncio
from latitude_sdk import Latitude

latitude = Latitude("your-api-key-here")

async def main():
    prompt = await latitude.prompts.get("prompt-path")
    print(prompt)

asyncio.run(main())
SDK Structure
The Latitude SDK is organized into several namespaces:
prompts: Methods for managing and running prompts
runs: Methods for managing active runs
logs: Methods for pushing logs to Latitude
evaluations: Methods for pushing evaluation results to Latitude
projects: Methods for managing projects
versions: Methods for managing project versions
Prompt Management
Get a Prompt
To retrieve a specific prompt by its path:
prompt = await latitude.prompts.get('prompt-path')
Get All Prompts
To retrieve all prompts in your project:
prompts = await latitude.prompts.get_all()
Get or Create a Prompt
To get an existing prompt or create a new one if it doesn’t exist:
prompt = await latitude.prompts.get_or_create('prompt-path')
You can also provide the content when creating a new prompt:
prompt = await latitude.prompts.get_or_create('prompt-path', GetOrCreatePromptOptions(
    prompt='This is the content of my new prompt',
))
Delete a Prompt
To delete a prompt from a draft version:
result = await latitude.prompts.delete('prompt-path')
This soft-deletes the document. The deletion only works on draft (non-merged) commits.
Version Management
Get All Versions
To retrieve all versions from a project:
versions = await latitude.versions.get_all()
You can also specify a different project ID:
versions = await latitude.versions.get_all(GetAllVersionsOptions(
    project_id=123,
))
Running Prompts
Non-Streaming Run
Execute a prompt and get the complete response once generation is finished:
async def on_finished(result: FinishedResult):
    print('Run completed:', result.uuid)

async def on_error(error: ApiError):
    print('Run error:', error.message)

result = await latitude.prompts.run('prompt-path', RunPromptOptions(
    parameters={
        'productName': 'CloudSync Pro',
        'audience': 'Small Business Owners',
    },
    # Optional: Provide a custom identifier for this run
    custom_identifier='email-campaign-2023',
    # Optional: Provide callbacks for events
    on_finished=on_finished,
    on_error=on_error,
))

print('Conversation UUID:', result.uuid)
print('Conversation messages:', result.conversation)
Handling Streaming Responses
For real-time applications (like chatbots), use streaming to get response chunks as they are generated:
async def on_event(event: StreamEvent):
    # Provider event
    if isinstance(event, dict) and event.get("type") == "text-delta":
        print(event)
    # Latitude event
    elif isinstance(event, ChainEventChainCompleted):
        print("Conversation UUID:", event.uuid)
        print("Conversation messages:", event.messages)

async def on_finished(result: FinishedResult):
    print('Stream completed:', result.uuid)

async def on_error(error: ApiError):
    print('Stream error:', error.message)

await latitude.prompts.run('prompt-path', RunPromptOptions(
    parameters={
        'productName': 'CloudSync Pro',
        'audience': 'Small Business Owners',
    },
    # Enable streaming
    stream=True,
    # Provide callbacks for events
    on_event=on_event,
    on_finished=on_finished,
    on_error=on_error,
))
You can provide tool handlers that the model can call during execution:
from typing import Any, Dict

async def get_weather(arguments: Dict[str, Any], details: OnToolCallDetails) -> Dict[str, Any]:
    # `arguments` contains the arguments passed by the model
    # `details` contains context like tool id, name, messages...
    # The result can be anything JSON serializable
    return {"weather": "sunny"}

await latitude.prompts.run('prompt-path', RunPromptOptions(
    parameters={
        'query': 'What is the weather in San Francisco?',
    },
    # Define the tools the model can use
    tools={
        'getWeather': get_weather,
    },
))
Chat with a Prompt
Follow up on the conversation of a prompt that has already been run:
messages = [
    {
        'role': 'user',
        'content': 'Hello, how can you help me today?',
    },
]

async def on_finished(result: FinishedResult):
    print('Chat completed:', result.uuid)

async def on_error(error: ApiError):
    print('Chat error:', error.message)

result = await latitude.prompts.chat('conversation-uuid', messages, ChatPromptOptions(
    # Chat options are similar to the run method
    on_finished=on_finished,
    on_error=on_error,
))

print('Conversation UUID:', result.uuid)
print('Conversation messages:', result.conversation)
Messages follow the PromptL format. If you're using a different method to run your prompts, you'll need to format your messages accordingly.
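For reference, the plain role/content shape used in the examples above looks like this (a sketch; PromptL also supports richer content types, so see the PromptL documentation for the full format):

```python
# A minimal PromptL-style conversation: each message pairs a role
# with string content
messages = [
    {'role': 'system', 'content': 'You are a helpful assistant.'},
    {'role': 'user', 'content': 'Hello, how can you help me today?'},
]
```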
Running a Prompt in the Background
For long-running prompts that you don't need to wait on, such as large agent systems, use background runs:
job = await latitude.prompts.run('prompt-path', RunPromptOptions(
    parameters={
        'productName': 'CloudSync Pro',
        'audience': 'Small Business Owners',
    },
    # Enable background processing
    background=True,
))

print('Job UUID:', job.uuid)

# The request returns immediately with a conversation UUID.
# You can use this UUID to attach to the run later to check its status
# or stop it programmatically.

async def on_event(event: StreamEvent):
    # Provider event
    if isinstance(event, dict) and event.get("type") == "text-delta":
        print(event)
    # Latitude event
    elif isinstance(event, ChainEventChainCompleted):
        print("Conversation UUID:", event.uuid)
        print("Conversation messages:", event.messages)

result = await latitude.runs.attach(job.uuid, AttachRunOptions(
    stream=True,
    on_event=on_event,
))

print('Conversation UUID:', result.uuid)
print('Conversation messages:', result.conversation)
Run Management
Stop a Run
Stop an active conversation that is currently running:
await latitude.runs.stop('conversation-uuid')
print('Run stopped successfully')
Attach to a Run
Attach to an active conversation to receive its ongoing output:
async def on_event(event: StreamEvent):
    # Provider event
    if isinstance(event, dict) and event.get("type") == "text-delta":
        print(event)
    # Latitude event
    elif isinstance(event, ChainEventChainCompleted):
        print("Conversation UUID:", event.uuid)
        print("Conversation messages:", event.messages)

async def on_finished(result: FinishedResult):
    print('Attach completed:', result.uuid)

async def on_error(error: ApiError):
    print('Attach error:', error.message)

result = await latitude.runs.attach('conversation-uuid', AttachRunOptions(
    # Optional: Enable streaming for real-time updates
    stream=True,
    # Optional: Provide callbacks for events
    on_event=on_event,
    on_finished=on_finished,
    on_error=on_error,
))

print('Conversation UUID:', result.uuid)
print('Conversation messages:', result.conversation)
Rendering Prompts
Prompt Rendering
Render a prompt locally without running it:
result = await latitude.prompts.render(
    'Your prompt content here with {{ parameters }}',
    RenderPromptOptions(
        parameters={
            'topic': 'Artificial Intelligence',
            'tone': 'Professional',
        },
        # Optional: Specify a provider adapter
        adapter=Adapter.OpenAI,
    ),
)

print('Rendered config:', result.config)
print('Rendered messages:', result.messages)
Chain Rendering
Render a chain of prompts locally:
async def on_step(messages: list[MessageLike], config: dict[str, Any]) -> str | MessageLike:
    # Process each step in the chain
    print('Processing step with messages:', messages)
    # Return a string or a message object
    return 'Step response'

result = await latitude.prompts.render_chain(
    Prompt(
        path='prompt-path',
        content='Your prompt content here with {{ parameters }}',
        provider='openai',
    ),
    on_step,
    RenderChainOptions(
        parameters={
            'topic': 'Machine Learning',
            'complexity': 'Advanced',
        },
        # Optional: Specify a provider adapter
        adapter=Adapter.OpenAI,
    ),
)

print('Rendered config:', result.config)
print('Rendered messages:', result.messages)
Logging
Creating Logs
Push a log to Latitude manually for a prompt:
messages = [
    {
        'role': 'user',
        'content': 'Hello, how can you help me today?',
    },
]

log = await latitude.logs.create('prompt-path', messages, CreateLogOptions(
    response='I can help you with anything!',
))
Evaluations
Annotate a log
Push an evaluation result (annotate) to Latitude:
result = await latitude.evaluations.annotate(
    'conversation-uuid',
    4,  # In this case, the score is 4 out of 5
    'evaluation-uuid',
    AnnotateEvaluationOptions(reason='I liked it!'),
)
Complete Method Reference
Initialization
# SDK initialization
class GatewayOptions:
    host: str
    port: int
    ssl: bool

class InternalOptions:
    gateway: Optional[GatewayOptions]
    retries: Optional[int]
    delay: Optional[float]
    timeout: Optional[float]

class LatitudeOptions:
    promptl: Optional[PromptlOptions]
    internal: Optional[InternalOptions]
    project_id: Optional[int]
    version_uuid: Optional[str]
    tools: Optional[dict[str, OnToolCall]]

Latitude(
    api_key: str,
    options: Optional[LatitudeOptions]
)
Prompts Namespace
# Get a prompt
class GetPromptOptions:
    project_id: Optional[int]
    version_uuid: Optional[str]

class GetPromptResult:
    uuid: str
    path: str
    content: str
    config: dict[str, Any]
    parameters: dict[str, PromptParameter]
    provider: Optional[Providers]

latitude.prompts.get(
    path: str,
    options: Optional[GetPromptOptions]
) -> GetPromptResult

# Get all prompts
class GetAllPromptsOptions:
    project_id: Optional[int]
    version_uuid: Optional[str]

latitude.prompts.get_all(
    options: Optional[GetAllPromptsOptions]
) -> List[GetPromptResult]
# Get or create a prompt
class GetOrCreatePromptOptions:
    project_id: Optional[int]
    version_uuid: Optional[str]
    prompt: Optional[str]

class GetOrCreatePromptResult:
    uuid: str
    path: str
    content: str
    config: dict[str, Any]
    parameters: dict[str, PromptParameter]
    provider: Optional[Providers]

latitude.prompts.get_or_create(
    path: str,
    options: Optional[GetOrCreatePromptOptions]
) -> GetOrCreatePromptResult

# Delete a prompt
class DeletePromptOptions:
    project_id: Optional[int]
    version_uuid: Optional[str]

class DeletePromptResult:
    document_uuid: str
    path: str

latitude.prompts.delete(
    path: str,
    options: Optional[DeletePromptOptions]
) -> DeletePromptResult
# Run a prompt
class RunPromptOptions:
    project_id: Optional[int]
    version_uuid: Optional[str]
    on_event: Optional[OnEvent]
    on_finished: Optional[OnFinished]
    on_error: Optional[OnError]
    custom_identifier: Optional[str]
    parameters: Optional[dict[str, Any]]
    tools: Optional[dict[str, OnToolCall]]
    stream: Optional[bool]
    background: Optional[bool]
    mcp_headers: Optional[dict[str, dict[str, str]]]
    messages: Optional[Sequence[MessageLike]]  # Messages to append after the compiled prompt

class FinishedResult:
    uuid: str
    conversation: List[Message]
    response: ChainResponse

class BackgroundResult:
    uuid: str

RunPromptResult = Union[FinishedResult, BackgroundResult]

latitude.prompts.run(
    path: str,
    options: Optional[RunPromptOptions]
) -> Optional[RunPromptResult]
# Chat with a prompt
class ChatPromptOptions:
    on_event: Optional[OnEvent]
    on_finished: Optional[OnFinished]
    on_error: Optional[OnError]
    tools: Optional[dict[str, OnToolCall]]
    stream: Optional[bool]

class ChatPromptResult:
    uuid: str
    conversation: List[Message]
    response: ChainResponse

latitude.prompts.chat(
    uuid: str,
    messages: Sequence[MessageLike],
    options: Optional[ChatPromptOptions]
) -> Optional[ChatPromptResult]
# Render a prompt
class RenderPromptOptions:
    parameters: Optional[dict[str, Any]]
    adapter: Optional[Adapter]

class RenderPromptResult:
    messages: List[MessageLike]
    config: dict[str, Any]

latitude.prompts.render(
    prompt: str,
    options: Optional[RenderPromptOptions]
) -> RenderPromptResult

# Render a chain
class RenderChainOptions:
    parameters: Optional[dict[str, Any]]
    adapter: Optional[Adapter]

class RenderChainResult:
    messages: List[MessageLike]
    config: dict[str, Any]

latitude.prompts.render_chain(
    prompt: Prompt,
    on_step: OnStep,
    options: Optional[RenderChainOptions]
) -> RenderChainResult
Runs Namespace
# Attach to a run
class AttachRunOptions:
    on_event: Optional[OnEvent]
    on_finished: Optional[OnFinished]
    on_error: Optional[OnError]
    tools: Optional[dict[str, OnToolCall]]
    stream: Optional[bool]

class AttachRunResult:
    uuid: str
    conversation: List[Message]
    response: ChainResponse

latitude.runs.attach(
    uuid: str,
    options: Optional[AttachRunOptions]
) -> Optional[AttachRunResult]

# Stop a run
latitude.runs.stop(
    uuid: str
) -> None
Projects Namespace
# Get all projects
latitude.projects.get_all() -> List[Project]

# Create a project
class CreateProjectResult:
    project: Project
    version: Version

latitude.projects.create(
    name: str
) -> CreateProjectResult

# Get all versions for a project
latitude.projects.get_all_versions(
    project_id: int
) -> List[Version]
Logs Namespace
# Create a log
class CreateLogOptions:
    project_id: Optional[int]
    version_uuid: Optional[str]
    response: Optional[str]

class CreateLogResult:
    id: int
    uuid: str
    source: Optional[LogSources]
    commit_id: int
    resolved_content: str
    content_hash: str
    parameters: dict[str, Any]
    custom_identifier: Optional[str]
    duration: Optional[int]
    created_at: datetime
    updated_at: datetime

latitude.logs.create(
    path: str,
    messages: Sequence[MessageLike],
    options: Optional[CreateLogOptions]
) -> CreateLogResult
Evaluations Namespace
# Annotate a log
class AnnotateEvaluationOptions:
    reason: str

class AnnotateEvaluationResult:
    uuid: str
    version_uuid: str
    score: int
    normalized_score: int
    metadata: dict[str, Any]
    has_passed: bool
    error: Optional[str]
    created_at: datetime
    updated_at: datetime

latitude.evaluations.annotate(
    uuid: str,
    score: int,
    evaluation_uuid: str,
    options: Optional[AnnotateEvaluationOptions]
) -> AnnotateEvaluationResult
Versions Namespace
# Get all versions
class GetAllVersionsOptions:
    project_id: Optional[int]

latitude.versions.get_all(
    options: Optional[GetAllVersionsOptions]
) -> List[Version]
Error Handling
The SDK raises ApiError instances when API requests fail. You can catch and handle these errors:
from latitude_sdk import ApiError

async def handle_errors():
    try:
        prompt = await latitude.prompts.get("non-existent-prompt")
    except ApiError as error:
        print(f"API Error: {error.message}")
        print(f"Error Code: {error.code}")
        print(f"Status: {error.status}")
    except Exception as error:
        print(f"Unexpected error: {error}")
Logging Features
- Automatic Logging: All runs through latitude.prompts.run() are automatically logged in Latitude, capturing inputs, outputs, performance metrics, and trace information.
- Custom Identifiers: Use the optional custom_identifier parameter to tag runs for easier filtering and analysis in the Latitude dashboard.
- Response Identification: Each response includes identifying information, such as the uuid, that can be used to reference the specific run later.