Openlayer integrates with Amazon Bedrock in two different ways:
  • If you are building an AI system with Bedrock LLMs or agents and want to evaluate it, you can use the SDKs to make Openlayer part of your workflow.
  • Some tests on Openlayer are based on a score produced by an LLM judge. You can set any of Bedrock’s LLMs as the LLM judge for these tests.
This integration guide explores each of these paths.

Evaluating Bedrock LLMs and agents

You can set up Openlayer tests to evaluate your Bedrock LLMs and agents in monitoring and development.

Monitoring

To use monitoring mode, instrument your code to publish the requests your AI system receives to the Openlayer platform. The code snippet below walks through the setup:
# 1. Set the environment variables
import os

os.environ["OPENLAYER_API_KEY"] = "YOUR_OPENLAYER_API_KEY_HERE"
os.environ["OPENLAYER_INFERENCE_PIPELINE_ID"] = "YOUR_OPENLAYER_INFERENCE_PIPELINE_ID_HERE"

# 2. Initialize the AWS session
import json
import boto3

session = boto3.Session(
    aws_access_key_id='YOUR_AWS_ACCESS_KEY_ID_HERE',
    aws_secret_access_key='YOUR_AWS_SECRET_ACCESS_KEY_HERE',
    region_name='us-east-1'  # Change to your desired region
)

# 3. Wrap the Bedrock client in Openlayer's `trace_bedrock` function
from openlayer.lib import trace_bedrock

bedrock_client = trace_bedrock(session.client(service_name='bedrock-runtime'))

# 4. From now on, every model/agent invocation call with
# the `bedrock_client` is traced and published to Openlayer. E.g.,
# Define the model ID and the input prompt
model_id = 'anthropic.claude-3-5-sonnet-20240620-v1:0'  # Replace with your model ID
input_data = {
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Hello, world"}],
    "anthropic_version": "bedrock-2023-05-31"
}
completion = bedrock_client.invoke_model(
    body=json.dumps(input_data),
    contentType='application/json',
    accept='application/json',
    modelId=model_id
)
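
# 5. (Optional) The response body is a stream; decode it to inspect the
# model's reply locally, in addition to the trace published to Openlayer.
# This assumes the Anthropic messages response shape shown above.
response_body = json.loads(completion['body'].read())
print(response_body['content'][0]['text'])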

See full Python example

Once the code is instrumented, all your Bedrock calls are automatically published to Openlayer, along with metadata such as latency, token counts, cost estimates, and more. If you navigate to the “Data” page of your Openlayer data source, you can see the traces for each request.
If the Bedrock LLM call is just one of the steps of your AI system, you can combine the code snippet above with tracing. In this case, your Bedrock LLM calls are added as steps of a larger trace.
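For instance, here is a minimal sketch using the `trace` decorator described in the Tracing guide, reusing the `bedrock_client` and `model_id` from the snippet above (the `answer_question` pipeline and its `retrieve_context` step are hypothetical placeholders for your own logic):

from openlayer.lib import trace

@trace()
def retrieve_context(question: str) -> str:
    # Hypothetical retrieval step; replace with your own logic
    return 'Relevant context for the question.'

@trace()
def answer_question(question: str) -> str:
    context = retrieve_context(question)
    body = json.dumps({
        "max_tokens": 256,
        "messages": [{"role": "user", "content": f"{context}\n\n{question}"}],
        "anthropic_version": "bedrock-2023-05-31"
    })
    # This call uses the traced `bedrock_client` from the snippet above,
    # so it is recorded as a nested step of this trace
    completion = bedrock_client.invoke_model(
        body=body,
        contentType='application/json',
        accept='application/json',
        modelId=model_id
    )
    return json.loads(completion['body'].read())['content'][0]['text']

Each call to `answer_question` then produces a single trace with the retrieval and Bedrock steps nested inside it.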
After your AI system’s requests are continuously published and logged by Openlayer, you can create tests that run on top of them at a regular cadence. Refer to the Monitoring overview for details on Openlayer’s monitoring mode, to the Publishing data guide for more information on setting it up, or to the Tracing guide to understand how to trace more complex systems.

Development

In development mode, Openlayer becomes a step in your CI/CD pipeline, and your tests are evaluated automatically whenever they are triggered. Openlayer tests often rely on your AI system’s outputs on a validation dataset. As discussed in the Configuring output generation guide, you have two options:
  1. either provide a way for Openlayer to run your AI system on your datasets, or
  2. generate the model outputs yourself before pushing, and push them alongside your artifacts (see the sketch after this list).
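If you go with the second option, a minimal sketch of pre-computing outputs could look like the following, reusing the `bedrock_client` and `model_id` from the snippet above (the `validation.csv` file and its `question` and `output` column names are assumptions; match them to your dataset and its configuration):

import json
import pandas as pd

df = pd.read_csv('validation.csv')

def generate_output(question: str) -> str:
    # Run a single validation row through the Bedrock model
    body = json.dumps({
        "max_tokens": 256,
        "messages": [{"role": "user", "content": question}],
        "anthropic_version": "bedrock-2023-05-31"
    })
    completion = bedrock_client.invoke_model(
        body=body,
        contentType='application/json',
        accept='application/json',
        modelId=model_id
    )
    return json.loads(completion['body'].read())['content'][0]['text']

# Store the pre-computed outputs in a column pushed alongside your artifacts
df['output'] = df['question'].apply(generate_output)
df.to_csv('validation_with_outputs.csv', index=False)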
For AI systems built with Bedrock LLMs, if you are not computing your system’s outputs yourself, you must provide your API credentials. To do so, navigate to “Workspace settings” > “Environment variables,” and add the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY variables. If you don’t add the required Bedrock API credentials, you’ll encounter a “Missing API credentials” error when Openlayer tries to run your AI system to get its outputs.
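Note that boto3 resolves these same environment variables automatically, so the code you submit does not need to hard-code credentials:

import boto3

# boto3 picks up AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the
# environment, so no credentials appear in the code itself
session = boto3.Session(region_name='us-east-1')  # Change to your desired region
bedrock_client = session.client(service_name='bedrock-runtime')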

Using Bedrock LLMs as the LLM judge

Some tests on Openlayer rely on scores produced by an LLM judge, for example, tests that use Ragas metrics and the LLM-as-a-judge test. You can use any of Bedrock’s LLMs as the underlying judge for these tests. To change a project’s default LLM judge, navigate to “Settings” > select your project in the left sidebar > click “Metrics” to open the metric settings page. Under “LLM evaluator,” choose the Bedrock LLM you want to use. Finally, make sure to add your AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as environment variables.