Openlayer can act as an OpenTelemetry backend,
enabling trace ingestion from any OpenTelemetry-compatible instrumentation library.
This guide shows how to use the OpenLIT
library to instrument an LLM framework or provider and send trace data to
Openlayer for monitoring and evaluation.
Configuration
The integration works by sending trace data to Openlayer’s OpenTelemetry endpoint.
The full code used in this guide is available here.
To set it up, you need to:
Set the environment variables
Set the following environment variables:

```bash
OTEL_EXPORTER_OTLP_ENDPOINT="https://api.openlayer.com/v1/otel"
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer YOUR_OPENLAYER_API_KEY_HERE, x-bt-parent=pipeline_id:YOUR_PIPELINE_ID_HERE"
```
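If you prefer to set these from Python (for example, in a notebook), a minimal sketch using `os.environ` is shown below. The variables must be set before OpenLIT is initialized, since the exporter reads them at startup; the placeholder values are stand-ins you would replace with your own API key and pipeline ID.

```python
import os

# Must run before openlit.init(), which is when the OTLP exporter reads these values.
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://api.openlayer.com/v1/otel"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = (
    "Authorization=Bearer YOUR_OPENLAYER_API_KEY_HERE, "  # placeholder API key
    "x-bt-parent=pipeline_id:YOUR_PIPELINE_ID_HERE"       # placeholder pipeline ID
)
```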
Initialize OpenLIT instrumentation
Initialize OpenLIT instrumentation in your application:

```python
import openlit

openlit.init(disable_batch=True)
```
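If you would rather not depend on environment variables, OpenLIT's `init` can also receive the exporter configuration directly. The sketch below assumes the `otlp_endpoint` and `otlp_headers` arguments behave as described in OpenLIT's documentation; verify them against the version you have installed.

```python
import openlit

# Sketch, assuming openlit.init accepts otlp_endpoint and otlp_headers:
# pass the Openlayer endpoint and auth headers explicitly instead of relying
# on the OTEL_EXPORTER_OTLP_* environment variables.
openlit.init(
    otlp_endpoint="https://api.openlayer.com/v1/otel",
    otlp_headers={
        "Authorization": "Bearer YOUR_OPENLAYER_API_KEY_HERE",  # placeholder
        "x-bt-parent": "pipeline_id:YOUR_PIPELINE_ID_HERE",     # placeholder
    },
    disable_batch=True,  # export each span immediately; useful for short-lived scripts
)
```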
Run LLMs and workflows as usual
Once instrumentation is set up, you can run your LLM calls as usual. Trace data will be automatically captured and exported to Openlayer, where you can begin testing and analyzing it. For example:

```python
from openai import OpenAI

client = OpenAI()
client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "How are you doing today?"}],
)
```
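The same applies to multi-step workflows: every instrumented provider call is captured and exported without any per-call code. The two-step flow below is an invented illustration (the prompts, helper name, and model choice are not from the guide), assuming OpenLIT has already been initialized as above.

```python
from openai import OpenAI

client = OpenAI()

def summarize_then_question(text: str) -> str:
    """Two sequential LLM calls; OpenLIT captures and exports a span for each."""
    summary = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Summarize in one sentence: {text}"}],
    ).choices[0].message.content

    follow_up = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Ask one follow-up question about: {summary}"}],
    ).choices[0].message.content

    return follow_up

print(summarize_then_question("Openlayer ingests OpenTelemetry traces from LLM apps."))
```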