OpenLLMetry (by Traceloop) is
an open-source project that makes it easy to monitor and trace the execution of LLM
applications. It builds on top of OpenTelemetry and captures traces in a non-intrusive way.
This guide shows how you can export traces captured by OpenLLMetry to Openlayer.
Configuration
The integration works by sending trace data to Openlayer’s OpenTelemetry endpoint.
The full code used in this guide is available here.
To set it up, you need to:
Set the environment variables
Set the following environment variables:

TRACELOOP_BASE_URL="https://api.openlayer.com/v1/otel"
TRACELOOP_HEADERS="Authorization=Bearer%20YOUR_OPENLAYER_API_KEY_HERE, x-bt-parent=pipeline_id:YOUR_PIPELINE_ID_HERE"
Make sure to include %20 between Bearer and your API key: it is the URL-encoded space character, which the TRACELOOP_HEADERS value requires.
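If you prefer to set these variables from Python, you can let the standard library do the URL encoding instead of hand-writing the %20. This is a sketch, not part of any SDK; the variable names come from this guide, and the key and pipeline ID are placeholders:

```python
import os
from urllib.parse import quote

# Placeholder credentials; substitute your real values.
api_key = "YOUR_OPENLAYER_API_KEY_HERE"
pipeline_id = "YOUR_PIPELINE_ID_HERE"

# quote() turns the space in "Bearer <key>" into %20, matching the
# format TRACELOOP_HEADERS expects.
auth = quote(f"Bearer {api_key}")

os.environ["TRACELOOP_BASE_URL"] = "https://api.openlayer.com/v1/otel"
os.environ["TRACELOOP_HEADERS"] = (
    f"Authorization={auth}, x-bt-parent=pipeline_id:{pipeline_id}"
)
```

Set these before initializing the SDK so they are picked up at startup.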
Initialize OpenLLMetry instrumentation
Initialize OpenLLMetry instrumentation in your application:

from traceloop.sdk import Traceloop

Traceloop.init(disable_batch=True)
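The snippet above assumes the OpenLLMetry SDK is already installed. Assuming the standard PyPI package name (traceloop-sdk), installation is:

```shell
# Install the OpenLLMetry SDK (and the OpenAI client used later in this guide)
pip install traceloop-sdk openai
```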
Run LLMs and workflows as usual
Once instrumentation is set up, you can run your LLM calls as usual.
Trace data will be automatically captured and exported to Openlayer, where
you can begin testing and analyzing it. For example:

from openai import OpenAI

client = OpenAI()

client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "How are you doing today?"}],
)
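If traces are not showing up in Openlayer, a common cause is a malformed TRACELOOP_HEADERS value. The value is a comma-separated list of name=value pairs, so a small sanity check can catch mistakes before you run the app. parse_headers below is a hypothetical helper for illustration, not part of any SDK:

```python
def parse_headers(value: str) -> dict:
    """Split a 'name=value, name=value' header string into a dict."""
    pairs = {}
    for item in value.split(","):
        # partition() splits on the first '=' only, so values may
        # themselves contain ':' or other characters.
        name, _, val = item.strip().partition("=")
        pairs[name] = val
    return pairs

headers = parse_headers(
    "Authorization=Bearer%20YOUR_OPENLAYER_API_KEY_HERE, "
    "x-bt-parent=pipeline_id:YOUR_PIPELINE_ID_HERE"
)

# Catch the most common mistakes: a literal space instead of %20,
# or a missing x-bt-parent entry.
assert headers["Authorization"].startswith("Bearer%20")
assert headers["x-bt-parent"].startswith("pipeline_id:")
```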