You can use Openlayer in development or in monitoring.

Select the tab below that best describes your use case to get started.

  • Monitoring OpenAI LLMs

  • Monitoring other LLMs

  • Development

Prerequisites

To follow along with the quickstart, you’ll need:

  • an Openlayer account and API key

  • an OpenAI API key (if you are monitoring OpenAI LLMs)

1. Install the openlayer Python package

pip install openlayer

2. Turn on monitoring

import os
import openai
from openlayer import llm_monitors

# Set the environment variables
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY_HERE"
os.environ["OPENLAYER_API_KEY"] = "YOUR_OPENLAYER_API_KEY_HERE"
os.environ["OPENLAYER_PROJECT_NAME"] = "YOUR_OPENLAYER_PROJECT_NAME_HERE"

openai_client = openai.OpenAI()
# With publish=True, every row is published to Openlayer
monitor = llm_monitors.OpenAIMonitor(client=openai_client, publish=True)
monitor.start_monitoring()

Now you can continue using OpenAI LLMs normally.

  • In Python, every time that you call openai_client.chat.completions.create or openai_client.completions.create, the data is automatically published to Openlayer.
  • In TypeScript, every time that you call monitor.createCompletion or monitor.createChatCompletion, the data is automatically published to Openlayer.
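For example, with monitoring turned on, a regular chat completion call like the sketch below is captured and published to your Openlayer project without any extra code (the model name and prompt are just placeholders):

# A regular OpenAI call -- no Openlayer-specific code is needed here.
# Because monitoring is on, the request and response are automatically
# published to your Openlayer project.
response = openai_client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)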

🚀 Head to the Openlayer platform

After running the snippets above, your models and data will be available on the Openlayer platform where you can start setting up tests.

For additional sample notebooks, check out our Examples gallery on GitHub.

To learn more, you can head to the Guided walkthroughs and Knowledge base.