This guide explains how assistants built with the OpenAI Assistants API can be monitored with Openlayer.

Install the openlayer library

The openlayer library is available for Python and TypeScript. This guide uses the Python library, which you can install with:

pip install openlayer
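If you are working with the TypeScript library instead, it is distributed via npm; the package name below is assumed to match the Python one:

npm install openlayer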

Set up the OpenAIMonitor class

The OpenAIMonitor class lives in the openlayer library's llm_monitors module. It contains the methods used to monitor your OpenAI assistant.

import openai
import os
from openlayer import llm_monitors

# Set the environment variables
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY_HERE"
os.environ["OPENLAYER_API_KEY"] = "YOUR_OPENLAYER_API_KEY_HERE"
os.environ["OPENLAYER_PROJECT_NAME"] = "YOUR_OPENLAYER_PROJECT_NAME_HERE"

openai_client = openai.OpenAI()
monitor = llm_monitors.OpenAIMonitor(client=openai_client, publish=True)
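In practice, you will likely want to keep these secrets out of your source code. As a minimal sketch (assuming you use python-dotenv, which is not required by Openlayer), you can load them from a .env file before creating the monitor:

# Optional: load OPENAI_API_KEY, OPENLAYER_API_KEY, and OPENLAYER_PROJECT_NAME
# from a .env file instead of hardcoding them (requires `pip install python-dotenv`)
from dotenv import load_dotenv

load_dotenv()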

Create an OpenAI assistant and thread

Now, create an OpenAI assistant and a thread as you normally would:

# Create the assistant
assistant = openai_client.beta.assistants.create(
    name="Data visualizer",
    description="You are great at creating and explaining beautiful data visualizations.",
    model="gpt-4",
    tools=[{"type": "code_interpreter"}],
)

# Create a thread
thread = openai_client.beta.threads.create(
    messages=[
        {
        "role": "user",
        "content": "Create a data visualization of the american GDP.",
        }
    ]
)
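If you want to give the assistant more context before running it, you can keep appending messages to the thread with the standard OpenAI API. The extra message below is just an illustration and is not required for monitoring:

# Optionally add another user message to the thread before creating a run
openai_client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Use GDP data from the last 10 years.",
)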

Create and monitor a run

Now, you can create a run and monitor it with:

import time

# Run assistant on thread
run = openai_client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id
)

# Keep polling the run results
while run.status != "completed":
    run = openai_client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

    # Monitor the run with the Openlayer `monitor`. If complete, the thread is sent to Openlayer
    monitor.monitor_thread_run(run)

    time.sleep(5)
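Monitoring does not change how you read the results. Once the loop exits, you can list the thread's messages with the standard OpenAI API to inspect the assistant's reply, for example:

# After the run completes, fetch the messages on the thread
messages = openai_client.beta.threads.messages.list(thread_id=thread.id)

for message in messages.data:
    # Each message's content is a list of content blocks (text, images, etc.)
    print(message.role, message.content)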

Go to the Openlayer app

Once the run completes, the resulting thread is sent to Openlayer, to the project and inference pipeline you specified when setting up the OpenAIMonitor (via the environment variables in the example above).

You can visualize the thread and create tests in the Openlayer app. Each thread is logged along with metadata such as the assistant and thread IDs, cost, and latency.