This guide explains how chains built with LangChain can be evaluated with Openlayer.

Colab notebook

This guide is based on this Colab notebook. The notebook illustrates the full upload process to an Openlayer development project.

Define the chain

LangChain chains are composed of an LLM and a prompt. Let’s say we have the following chain:

# Define the LLM
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(openai_api_key="YOUR_OPENAI_API_KEY_HERE")

# Define the prompt
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

template = """You are a helpful assistant who answers user's questions about Python.
A user will pass in a question, and you should answer it very objectively.
Use AT MOST 5 sentences. If you need more than 5 sentences to answer, say that the
user should make their question more objective."""
system_message_prompt = SystemMessagePromptTemplate.from_template(template)

human_template = "{question}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])

# Define the chain
from langchain.chains import LLMChain

chain = LLMChain(
    llm=llm,
    prompt=chat_prompt,
)

You can use the chain by calling its run method to generate answers to questions:

chain.run("How can I define a class?")

Get chain outputs

To evaluate the chain, you first need to capture its outputs for a given dataset of inputs.

If you have a dataset as a pandas DataFrame (df) with the inputs in an input column, you can add an output column with the chain's outputs using the following code:

df["output"] = df["input"].apply(chain.run)

Upload to Openlayer

Once the dataset contains the chain outputs, you can upload it to Openlayer to run evaluations on the chain.

You can do this using one of the client libraries or the API.
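
As an illustration, here is a minimal sketch of an upload with the Openlayer Python client. The project name, commit message, method names, and dataset_config keys shown here are assumptions about the client's usage, so check the client reference (or the Colab notebook linked above) for the exact calls and configuration.

# Minimal sketch, assuming the Openlayer Python client (pip install openlayer).
# Method names and config keys below are assumptions; consult the client docs.
import openlayer
from openlayer.tasks import TaskType

client = openlayer.OpenlayerClient("YOUR_OPENLAYER_API_KEY_HERE")

# Hypothetical project name for this example
project = client.create_or_load_project(
    name="LangChain evaluation",
    task_type=TaskType.LLM,
)

# Tell Openlayer which columns hold the inputs and the chain outputs
dataset_config = {
    "input_variable_names": ["input"],
    "output_column_name": "output",
    "label": "validation",
}

project.add_dataframe(dataset_df=df, dataset_config=dataset_config)

# Commit and push the new version to the Openlayer platform
project.commit("Add LangChain outputs")
project.push()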