Data publishing methods
The data publishing methods fall into two categories: streamlined approaches and the manual approach. Streamlined approaches exist for common AI patterns and frameworks. To use them, you wrap or decorate your code in a certain way, and Openlayer automatically captures relevant data and metadata, such as the number of tokens, cost, and latency. This data is then published to the Openlayer platform. The manual approach is system-agnostic. It is equivalent to hitting the relevant endpoint of Openlayer's REST API, but via Openlayer's SDKs.
Streamlined approaches
There is a streamlined approach for each of the frameworks below:
OpenAI
To monitor chat completions and completion calls to OpenAI LLMs, you need to wrap your OpenAI client with Openlayer's tracer. That's it! Now, your calls are being published to Openlayer, along with metadata, such as latency, number of tokens, cost estimate, and more.
Refer to the OpenAI integration page for more details.
Azure OpenAI
To monitor chat completions and completion calls to Azure OpenAI LLMs, you need to wrap your Azure OpenAI client with Openlayer's tracer. That's it! Now, your calls are being published to Openlayer, along with metadata, such as latency, number of tokens, and more.
Refer to the Azure OpenAI integration page for more details.
See full Python example
LangChain
To monitor chat models and chains built with LangChain, you need to attach Openlayer's callback handler. That's it! Now, your calls are being published to Openlayer, along with metadata, such as latency, number of tokens, cost estimate, and more.
Refer to the LangChain integration page for more details.
The code snippet above uses LangChain's ChatOpenAI. However, the Openlayer Callback Handler works for all LangChain chat models and LLMs.
See full Python example
Tracing multi-step LLM systems (e.g., RAG, LLM chains)
To trace a multi-step LLM system (such as a RAG system or an LLM chain), you just need to decorate all the functions you are interested in adding to a trace with Openlayer's decorator.
You can use the decorator together with the other streamlined methods. For example, if your generate_answer function uses a wrapped version of the OpenAI client, the chat completion calls will get added to the trace under the generate_answer function step.
Dynamic trace updates: you can enhance your traces with metadata and custom inference IDs using the update_current_trace() and update_current_step() functions. This enables:
- Custom inference IDs: set a custom ID using update_current_trace(inferenceId="your_id") for request correlation and future data updates
- Trace metadata: add context like update_current_trace(user_id="123", session="abc") for user tracking
- Step metadata: add step-specific data using update_current_step(model="gpt-4", tokens=150) for detailed observability
See full Python example
Anthropic
To monitor Anthropic LLMs, you need to wrap your Anthropic client with Openlayer's tracer. That's it! Now, your calls are being published to Openlayer, along with metadata, such as latency, number of tokens, cost estimate, and more.
Refer to the Anthropic integration page for more details.
See full Python example
Mistral AI
To monitor Mistral AI LLMs, you need to wrap your Mistral client with Openlayer's tracer. That's it! Now, your calls are being published to Openlayer, along with metadata, such as latency, number of tokens, cost estimate, and more.
Refer to the Mistral AI integration page for more details.
See full Python example
Groq
To monitor Groq LLMs, you need to wrap your Groq client with Openlayer's tracer. That's it! Now, your calls are being published to Openlayer, along with metadata, such as latency, number of tokens, cost estimate, and more.
Refer to the Groq integration page for more details.
See full Python example
OpenAI Assistants API
To monitor runs from OpenAI Assistants, you need to:
- Set the environment variables
- Instantiate the OpenAI client
- Create the assistant and thread, and run it
That's it! Now, your calls are being published to Openlayer, along with metadata, such as latency, number of tokens, cost estimate, and more.
Manual approach
To manually stream data to Openlayer, you can use the stream method, which hits the /data-stream endpoint of the Openlayer REST API.