The performance of your AI/ML system naturally fluctuates after you deploy it. Out in the wild, your system encounters data that differs from your training and validation sets and is susceptible to bugs, like any other piece of software.

You can use Openlayer monitoring mode to log the live requests your AI/ML systems receive, measure their health, and know when you need to take corrective actions.

With Openlayer, you can create tests for your AI system. These tests run periodically on the live data you stream to the Openlayer platform. If tests start failing, you get notified immediately.

Publishing data to the Openlayer platform

To use Openlayer to observe and monitor your systems, you must set up a way to publish your live data to the Openlayer platform.

You can do this in two different ways: using one of the Openlayer SDKs or the Openlayer REST API.

In both cases, you must first create a project and then, inside it, an inference pipeline.

The inference pipeline represents a deployed model making inferences. A common setup is to have two inference pipelines: one named staging, and the other production. When you publish data to Openlayer, you must specify which inference pipeline it belongs to.
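For illustration, a minimal sketch of this setup with the Python SDK might look like the following. The method names, parameters, and task type value here are assumptions about the SDK's resource layout; refer to the SDK reference for the exact API.

```python
# Hedged sketch: creating a project and an inference pipeline with the
# Openlayer Python SDK. Method names, parameters, and the task type value
# are assumptions; refer to the SDK reference for the exact API.
import os

from openlayer import Openlayer

client = Openlayer(api_key=os.environ["OPENLAYER_API_KEY"])

# Create a project for your AI system (name and task type are placeholders).
project = client.projects.create(
    name="my-llm-app",
    task_type="llm-base",
)

# Create an inference pipeline inside the project, e.g. one per environment.
production_pipeline = client.projects.inference_pipelines.create(
    project_id=project.id,
    name="production",
)
```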

Openlayer SDKs

The most common way to publish data to Openlayer is using one of its SDKs.

The exact code you need to write to set this up depends on the programming language and stack of your choice. We offer streamlined approaches for common AI patterns and frameworks, such as OpenAI LLMs, LangChain, and tracing multi-step RAG systems. However, you can also monitor any system by streaming data to the Openlayer platform.
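As a hedged sketch, streaming a single request with the Python SDK could look something like this. The method path and config keys shown are assumptions, so treat the monitoring examples and the SDK reference as the source of truth.

```python
# Hedged sketch: streaming one request/response pair to an existing inference
# pipeline with the Openlayer Python SDK. Method and config key names are
# assumptions; check the SDK reference for the exact API.
import os

from openlayer import Openlayer

client = Openlayer(api_key=os.environ["OPENLAYER_API_KEY"])

client.inference_pipelines.data.stream(
    inference_pipeline_id=os.environ["OPENLAYER_INFERENCE_PIPELINE_ID"],
    rows=[
        {
            "user_query": "How do I reset my password?",     # model input
            "output": "Go to Settings > Security > Reset.",  # model output
            "latency": 812,                                   # optional metadata (ms)
        }
    ],
    config={
        "input_variable_names": ["user_query"],
        "output_column_name": "output",
        "latency_column_name": "latency",
    },
)
```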

Check out the monitoring examples for code snippets for common use cases.

Refer to the Publishing data page for details.

Openlayer REST API

You can also stream your data to Openlayer with the REST API by making an HTTPS POST request to the /stream-data endpoint.
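For illustration, a hedged sketch of such a request using Python's requests library is shown below. The exact URL path, headers, and payload schema are assumptions, so take them from the endpoint's specification.

```python
# Hedged sketch: streaming one row via the REST API with the requests library.
# The URL path, header format, and payload schema are assumptions; use the
# /stream-data endpoint specification in the API reference as the source of truth.
import os

import requests

api_key = os.environ["OPENLAYER_API_KEY"]
inference_pipeline_id = "YOUR_INFERENCE_PIPELINE_ID"  # placeholder

response = requests.post(
    f"https://api.openlayer.com/v1/inference-pipelines/{inference_pipeline_id}/data-stream",
    headers={"Authorization": f"Bearer {api_key}"},
    json={
        "config": {"outputColumnName": "output"},  # assumed config keys
        "rows": [
            {
                "user_query": "How do I reset my password?",
                "output": "Go to Settings > Security > Reset.",
            }
        ],
    },
    timeout=10,
)
response.raise_for_status()
```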

Refer to its specification in the API reference for details.

Viewing streamed data

As soon as you publish data to the Openlayer platform, it becomes available on the “Requests” page inside your Project > Inference pipeline. If you click any of the requests, you can see more details and, if you are using one of the tracing solutions, the full trace.

Every request streamed to Openlayer is timestamped. You can provide this timestamp when you stream the data; otherwise, Openlayer defaults it to the time the data was received by its backend.
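If you want to supply your own timestamps, a hedged sketch might look like this; the column name and config key are assumptions, so check the endpoint specification for the exact schema.

```python
# Hedged sketch: attaching your own timestamps to streamed rows. The column
# name and config key are assumptions (key casing may differ between the SDK
# and the REST API); check the endpoint specification for the exact schema.
import time

row = {
    "user_query": "How do I reset my password?",
    "output": "Go to Settings > Security > Reset.",
    "timestamp": int(time.time()),  # when the request actually happened (UNIX seconds)
}
config = {
    "output_column_name": "output",
    "timestamp_column_name": "timestamp",
}
# Pass `row` (inside rows) and `config` to the SDK stream call or the REST request shown above.
```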

Timestamps matter because the tests defined in monitoring mode are evaluated at a regular cadence set by the evaluation window.

These successive test evaluations give you a view of your system’s health over time. Furthermore, by configuring your notification preferences, you can be alerted immediately every time your tests start failing.