Openlayer integrates seamlessly with LiteLLM, which provides a unified interface to call 100+ LLM APIs using the same input/output format. LiteLLM supports providers including OpenAI, Azure OpenAI, Anthropic, Cohere, Replicate, PaLM, and many more, making it easy to switch between LLM providers while keeping your evaluation with Openlayer consistent.

Evaluating LiteLLM applications

You can set up Openlayer tests to evaluate your LiteLLM applications in both monitoring and development modes.

Monitoring

To use the monitoring mode, you must instrument your code to publish the requests your AI system receives to the Openlayer platform. To set it up, follow the steps in the code snippet below:
import litellm

# 1. Set the environment variables
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY_HERE"  # Or other provider keys
os.environ["OPENLAYER_API_KEY"] = "YOUR_OPENLAYER_API_KEY_HERE"
os.environ["OPENLAYER_INFERENCE_PIPELINE_ID"] = "YOUR_OPENLAYER_INFERENCE_PIPELINE_ID_HERE"

# 2. Enable Openlayer tracing for all LiteLLM completions
from openlayer.lib import trace_litellm

trace_litellm()

# 3. Now use LiteLLM normally - tracing happens automatically
response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, how are you?"}]
)
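
Because LiteLLM exposes a unified interface, the same traced call can target a different provider just by swapping the model string. A minimal sketch, assuming you have set the corresponding provider key and that the model name below is one your account can access:

# Point the same traced completion call at another provider (illustrative model name)
os.environ["ANTHROPIC_API_KEY"] = "YOUR_ANTHROPIC_API_KEY_HERE"

response = litellm.completion(
    model="claude-3-haiku-20240307",
    messages=[{"role": "user", "content": "Hello, how are you?"}]
)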


Once the code is instrumented, all your LiteLLM completions are automatically published to Openlayer along with metadata such as latency, token counts, cost estimates, and more. If you navigate to the “Data” page of your Openlayer data source, you can see the trace for each request.
If the LiteLLM completions are just one step of your AI system, you can use the code snippet above together with tracing. In this case, your LiteLLM completions are added as steps of a larger trace. Refer to the Tracing guide for details.
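For example, a minimal sketch of nesting a LiteLLM completion inside a larger trace, assuming the decorator-based tracing exposed by openlayer.lib (the function name and prompt are illustrative):

import litellm
from openlayer.lib import trace

# Steps executed inside a function decorated with @trace() are grouped
# into a single trace; the LiteLLM completion becomes one of its steps.
@trace()
def answer_question(question: str) -> str:
    response = litellm.completion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}]
    )
    return response.choices[0].message.content

answer_question("What does Openlayer do?")
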
After your AI system's requests are continuously published and logged by Openlayer, you can create tests that run at a regular cadence on top of them. Refer to the Monitoring overview for details on Openlayer's monitoring mode, to the Publishing data guide for more information on setting it up, or to the Tracing guide to understand how to trace more complex systems.

Development

In development mode, Openlayer becomes a step in your CI/CD pipeline, and your tests are automatically evaluated when triggered by events such as new commits. Openlayer tests often rely on your AI system's outputs on a validation dataset. As discussed in the Configuring output generation guide, you have two options:
  1. either provide a way for Openlayer to run your AI system on your datasets, or
  2. before pushing, generate the model outputs yourself and push them alongside your artifacts.
For LiteLLM applications, if you are not computing your system's outputs yourself, you must provide the required API credentials for the LLM providers you're using. For example, if your application uses OpenAI models through LiteLLM, you must provide an OPENAI_API_KEY; if it uses Anthropic models, an ANTHROPIC_API_KEY; and so on. To provide these credentials, navigate to “Workspace settings” -> “Environment variables” and add them as variables.
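
Alternatively, if you compute your system's outputs yourself (option 2 above), a minimal sketch might look like the following, assuming your validation set is a pandas DataFrame with a question column and that you push the resulting file alongside your artifacts:

import litellm
import pandas as pd

# Hypothetical validation set; in practice, load your own dataset
validation_set = pd.DataFrame({"question": ["What is LiteLLM?", "What is Openlayer?"]})

def generate_output(question: str) -> str:
    response = litellm.completion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}]
    )
    return response.choices[0].message.content

# Add the model outputs as a column and save the dataset before pushing
validation_set["output"] = validation_set["question"].apply(generate_output)
validation_set.to_csv("validation_with_outputs.csv", index=False)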

Next steps