
Evaluating LangChain applications
You can set up Openlayer tests to evaluate your LangChain applications in development and monitoring.

Development
You can use the LangChain template to see what a sample app fully set up with Openlayer looks like.
In development mode, you must either:
- provide a way for Openlayer to run your AI system on your datasets, or
- before pushing, generate the model outputs yourself and push them alongside your artifacts.
If Openlayer is to run your AI system, it needs the API credentials your application requires. For example, if your application uses ChatOpenAI, you must provide an OPENAI_API_KEY; if it uses ChatMistralAI, you must provide a MISTRAL_API_KEY; and so on.
To provide the required API credentials, navigate to “Settings” > “Workspace secrets,”
and add the credentials as secrets.

If you do not see a field for the API credential your application needs, click
the “Add secret” button at the top to add additional secrets.

Monitoring
To use the monitoring mode, you must set up a way to publish the requests your AI system receives to the Openlayer platform. This process is streamlined for LangChain applications with the Openlayer Callback Handler. To set it up, you must follow the steps in the code snippet below:
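The snippet below is a minimal sketch of that setup. The import path and environment variable names (openlayer.lib.integrations.langchain_callback, OPENLAYER_API_KEY, OPENLAYER_INFERENCE_PIPELINE_ID) reflect the Openlayer Python SDK as we understand it; check the snippet shown in the platform for the exact values.

```python
import os

# 1. Point the SDK at your Openlayer workspace and inference pipeline
#    (assumed environment variable names).
os.environ["OPENLAYER_API_KEY"] = "YOUR_OPENLAYER_API_KEY"
os.environ["OPENLAYER_INFERENCE_PIPELINE_ID"] = "YOUR_INFERENCE_PIPELINE_ID"

# 2. Instantiate the Openlayer callback handler
#    (assumed import path: openlayer.lib.integrations.langchain_callback).
from openlayer.lib.integrations import langchain_callback

openlayer_handler = langchain_callback.OpenlayerHandler()

# 3. Pass the handler to your chat model or chain via `callbacks`.
from langchain_openai import ChatOpenAI

chat = ChatOpenAI(model="gpt-4o-mini", callbacks=[openlayer_handler])
chat.invoke("What is the meaning of life?")
```

Once the handler is attached, each invocation of the chat model or chain is published to the Openlayer platform as a request.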
Advanced Callback Handler Features
The Openlayer LangChain callback handler supports several advanced features for enhanced observability and control:

Metadata Transformer
You can use a metadata_transformer function to filter, modify, or enrich metadata before it is logged to Openlayer:
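A minimal sketch, assuming metadata_transformer is passed to the handler's constructor and receives the metadata dictionary for a run, returning the dictionary that should actually be logged:

```python
from openlayer.lib.integrations import langchain_callback


def drop_internal_keys(metadata: dict) -> dict:
    """Filter and enrich metadata before it is logged to Openlayer.

    Assumed contract: receives the run's metadata dictionary and returns
    the dictionary to log.
    """
    cleaned = {k: v for k, v in metadata.items() if not k.startswith("internal_")}
    cleaned["environment"] = "production"  # enrich with a custom field
    return cleaned


# Parameter name from the docs; passing it at construction time is an assumption.
openlayer_handler = langchain_callback.OpenlayerHandler(
    metadata_transformer=drop_internal_keys,
)
```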
Streaming Response Support
The callback handler automatically captures usage information from streaming responses when stream_usage=True:
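For example, with langchain_openai's ChatOpenAI, which exposes stream_usage (a sketch; the handler instantiation mirrors the setup snippet above):

```python
from langchain_openai import ChatOpenAI
from openlayer.lib.integrations import langchain_callback

openlayer_handler = langchain_callback.OpenlayerHandler()

# stream_usage=True asks the provider to include token usage in the stream,
# so the handler can capture usage for streamed responses as well.
chat = ChatOpenAI(
    model="gpt-4o-mini",
    stream_usage=True,
    callbacks=[openlayer_handler],
)

for chunk in chat.stream("Summarize the plot of Hamlet in one sentence."):
    print(chunk.content, end="", flush=True)
```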
Context Logging for RAG Systems
The handler automatically logs context from retrieval steps and chains containing source_documents, enabling context-dependent metrics:
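A sketch using a RetrievalQA chain with return_source_documents=True, so the chain's output includes source_documents for the handler to pick up (the tiny FAISS store is just for illustration):

```python
from langchain.chains import RetrievalQA
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from openlayer.lib.integrations import langchain_callback

openlayer_handler = langchain_callback.OpenlayerHandler()

# Tiny in-memory knowledge base for illustration.
vectorstore = FAISS.from_texts(
    ["Employees may work remotely up to three days per week."],
    embedding=OpenAIEmbeddings(),
)

# return_source_documents=True makes the chain output `source_documents`,
# which the handler logs as context for context-dependent metrics.
qa_chain = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-4o-mini"),
    retriever=vectorstore.as_retriever(),
    return_source_documents=True,
)

qa_chain.invoke(
    {"query": "How many days per week can employees work remotely?"},
    config={"callbacks": [openlayer_handler]},
)
```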
See full Python example
The code snippet above uses LangChain's ChatOpenAI. However, the Openlayer Callback Handler works for all LangChain chat models and LLMs.
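For example, the same handler can be attached to ChatMistralAI (a sketch, assuming langchain_mistralai is installed):

```python
from langchain_mistralai import ChatMistralAI
from openlayer.lib.integrations import langchain_callback

openlayer_handler = langchain_callback.OpenlayerHandler()

# Any LangChain chat model or LLM accepts the same `callbacks` argument.
chat = ChatMistralAI(model="mistral-small-latest", callbacks=[openlayer_handler])
chat.invoke("What is the meaning of life?")
```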
If the LangChain LLM/chain invocation is just one of the steps of your AI system, you can use the code snippets above together with tracing. In this case, your LLM/chain invocations are added as steps of a larger trace. You can enrich traces with metadata using update_current_trace(user_id="123", inferenceId="custom_id") and update_current_step(chain_type="retrieval", tokens=200) for enhanced observability and request correlation. Refer to the Tracing guide for details.
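As an illustration, the sketch below wraps a chat-model call in a traced function. The import paths for trace, update_current_trace, and update_current_step are assumptions; confirm them in the Tracing guide.

```python
from langchain_openai import ChatOpenAI
from openlayer.lib import trace  # assumed export of the @trace decorator
from openlayer.lib.integrations import langchain_callback

# Assumed module for the helpers named in this guide.
from openlayer.lib.tracing.tracer import update_current_step, update_current_trace

openlayer_handler = langchain_callback.OpenlayerHandler()
chat = ChatOpenAI(model="gpt-4o-mini", callbacks=[openlayer_handler])


@trace()
def answer_question(user_id: str, question: str) -> str:
    # Attach request-level metadata to the enclosing trace.
    update_current_trace(user_id=user_id, inferenceId=f"req-{user_id}")

    # The LangChain invocation is recorded as a step of this trace.
    response = chat.invoke(question)

    # Attach step-level metadata (illustrative values).
    update_current_step(chain_type="chat")
    return response.content


answer_question("123", "What is the capital of France?")
```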