
Want to integrate with LangChain? Check out the LangChain
integration page.
Evaluating LangGraph applications
You can set up Openlayer tests to evaluate your LangGraph applications in both monitoring and development.
Monitoring
To use the monitoring mode, you must instrument your code to publish the requests your AI system receives to the Openlayer platform. To set it up, follow the steps in the code snippet below. See the full Python example for a complete, runnable version.
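The following is a minimal sketch of the setup. It assumes the Openlayer callback handler is exposed as OpenlayerHandler in openlayer.lib.integrations.langchain_callback and that the OPENLAYER_API_KEY and OPENLAYER_INFERENCE_PIPELINE_ID environment variables point at your workspace and inference pipeline; check the full Python example for the exact imports in your SDK version.

```python
import os
from typing import Annotated

from typing_extensions import TypedDict
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

# Assumption: the Openlayer LangChain/LangGraph callback handler lives here.
from openlayer.lib.integrations import langchain_callback

# Openlayer credentials are read from the environment (placeholder values).
os.environ["OPENLAYER_API_KEY"] = "YOUR_OPENLAYER_API_KEY"
os.environ["OPENLAYER_INFERENCE_PIPELINE_ID"] = "YOUR_INFERENCE_PIPELINE_ID"

# 1. Instantiate the Openlayer callback handler.
openlayer_handler = langchain_callback.OpenlayerHandler()


# 2. Build a simple LangGraph chatbot.
class State(TypedDict):
    messages: Annotated[list, add_messages]


llm = ChatOpenAI(model="gpt-4o-mini")  # requires OPENAI_API_KEY in the environment


def chatbot(state: State) -> dict:
    return {"messages": [llm.invoke(state["messages"])]}


graph_builder = StateGraph(State)
graph_builder.add_node("chatbot", chatbot)
graph_builder.add_edge(START, "chatbot")
graph_builder.add_edge("chatbot", END)
graph = graph_builder.compile()

# 3. Pass the handler in the invocation config so every run is
#    published to the Openlayer platform.
result = graph.invoke(
    {"messages": [("user", "Hello, how are you?")]},
    config={"callbacks": [openlayer_handler]},
)
```

Because the handler is passed through the standard callbacks entry of the invocation config, the same pattern should also work with graph.stream and async invocations.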
The code snippet above builds a simple chatbot. However, the Openlayer Callback Handler also works for more complex LangGraph applications, including multi-agent workflows. Refer to the final section of the notebook example for a multi-agent tracing example.

If the LangGraph graph invocation is just one of the steps of your AI system,
you can use the code snippets above together with
tracing. In this case, your graph invocations get added
as steps of a larger trace. Refer to the Tracing guide
for details.
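As a rough sketch, assuming the tracer decorator can be imported as trace from openlayer.lib (verify against the Tracing guide) and reusing the graph and openlayer_handler from the snippet above, this could look like:

```python
# Assumption: the Openlayer tracing decorator is exposed at this path.
from openlayer.lib import trace


@trace()
def retrieve_context(question: str) -> str:
    # Placeholder for another step of your AI system (e.g. retrieval).
    return "LangGraph is a library for building stateful, multi-actor LLM applications."


@trace()
def answer_question(question: str) -> str:
    # The graph invocation becomes a nested step inside the larger trace.
    context = retrieve_context(question)
    result = graph.invoke(
        {"messages": [("user", f"Context: {context}\n\nQuestion: {question}")]},
        config={"callbacks": [openlayer_handler]},
    )
    return result["messages"][-1].content


answer_question("What is LangGraph?")
```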
Development
In development mode, Openlayer becomes a step in your CI/CD pipeline, and your tests get automatically evaluated after being triggered by certain events. Openlayer tests often rely on your AI system’s outputs on a validation dataset. As discussed in the Configuring output generation guide, you have two options:
- either provide a way for Openlayer to run your AI system on your datasets, or
- before pushing, generate the model outputs yourself and push them alongside your artifacts (see the sketch after this list).
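For the second option, a minimal sketch of generating the outputs yourself is shown below. It reuses the compiled graph from the monitoring snippet and assumes a validation CSV with a question column; the file name and column names are placeholders for your own dataset.

```python
import pandas as pd

# Load the validation dataset (placeholder path and column name).
dataset = pd.read_csv("validation_dataset.csv")


def generate_output(question: str) -> str:
    """Run the compiled LangGraph graph and return the final assistant message."""
    result = graph.invoke({"messages": [("user", question)]})
    return result["messages"][-1].content


# Add the model outputs as a new column and save the dataset so it can be
# pushed to Openlayer alongside the rest of your artifacts.
dataset["output"] = dataset["question"].apply(generate_output)
dataset.to_csv("validation_dataset_with_outputs.csv", index=False)
```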
If you go with the first option, Openlayer needs the API credentials for the LLM providers your graph relies on: if it uses ChatOpenAI, you must provide an OPENAI_API_KEY; if it uses ChatMistralAI, you must provide a MISTRAL_API_KEY; and so on.
To provide the required API credentials, navigate to “Workspace settings” -> “Environment variables,”
and add the credentials as secrets.
If you fail to add the required credentials, you’ll likely encounter a “Missing API key”
error when Openlayer tries to run your AI system to get its outputs.