LangChain
Openlayer integrates with LangChain using LangChain callbacks. Therefore, Openlayer automatically traces every run of your LangChain applications.
This allows you to set up tests, log requests, and analyze your LangChain application with minimal integration effort.
Evaluating LangChain applications
You can set up Openlayer tests to evaluate your LangChain applications in development and monitoring.
Development
You can use the LangChain template to see what a sample app fully set up with Openlayer looks like.
In development mode, Openlayer becomes a step in your CI/CD pipeline, and your tests are evaluated automatically whenever they are triggered, for example, by a commit to your repository.
Openlayer tests often rely on your AI system’s outputs on a validation dataset. As discussed in the Configuring output generation guide, you have two options:
- either provide a way for Openlayer to run your AI system on your datasets, or
- before pushing, generate the model outputs yourself and push them alongside your artifacts (a sketch of this option is shown below).
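If you choose the second option, a minimal sketch of computing the outputs yourself could look like the following. The file and column names ("question" and "output") are hypothetical; adapt them to your dataset and chain.

```python
import pandas as pd
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

# Hypothetical validation set with a "question" column
dataset = pd.read_csv("validation_set.csv")

# Run the LangChain model on every row and store its answers
dataset["output"] = [
    llm.invoke(question).content for question in dataset["question"]
]

# Push this file alongside your artifacts
dataset.to_csv("validation_set_with_outputs.csv", index=False)
```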
For LangChain applications, if you are not computing your system’s outputs yourself, you must provide the required API credentials.
For example, if your application uses LangChain's ChatOpenAI, you must provide an OPENAI_API_KEY; if it uses ChatMistralAI, you must provide a MISTRAL_API_KEY; and so on.
To provide the required API credentials, navigate to “Settings” > “Workspace secrets,” and add the credentials as secrets.
If you do not see a field for the API credential your application needs, click the “Add secret” button at the top to add additional secrets.
If you fail to add the required credentials, you will likely encounter a “Missing API key” error when Openlayer tries to run your AI system to compute its outputs.
Monitoring
To use the monitoring mode, you must set up a way to publish the requests your AI system receives to the Openlayer platform. This process is streamlined for LangChain applications with the Openlayer Callback Handler.
To set it up, you must follow the steps in the code snippet below:
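As a minimal sketch, assuming a ChatOpenAI model (refer to the full Python example below for the exact import paths and environment variable names):

```python
import os

# 1. Set the Openlayer environment variables
os.environ["OPENLAYER_API_KEY"] = "YOUR_OPENLAYER_API_KEY_HERE"
os.environ["OPENLAYER_INFERENCE_PIPELINE_ID"] = "YOUR_INFERENCE_PIPELINE_ID_HERE"

# 2. Instantiate the Openlayer Callback Handler
from openlayer.lib.integrations import langchain_callback

openlayer_handler = langchain_callback.OpenlayerHandler()

# 3. Pass the handler to your LLM/chain invocations
from langchain_openai import ChatOpenAI

chat = ChatOpenAI(max_tokens=250, callbacks=[openlayer_handler])
chat.invoke("What is the meaning of life?")
```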
See full Python example
The code snippet above uses LangChain’s ChatOpenAI. However, the Openlayer Callback Handler works with all LangChain chat models and LLMs.
Once the code is set up, all your invocations are automatically published to Openlayer, along with metadata such as latency, token counts, and cost estimates.
If you navigate to the “Requests” page of your Openlayer inference pipeline, you can see the traces for each request.
If the LangChain LLM/chain invocation is just one of the steps of your AI system, you can use the code snippets above together with tracing. In this case, your LLM/chain invocations get added as a step of a larger trace. Refer to the Tracing guide for details.
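For instance, a sketch of this pattern, assuming Openlayer’s @trace() decorator (from openlayer.lib) and using hypothetical helper functions:

```python
from langchain_openai import ChatOpenAI
from openlayer.lib import trace
from openlayer.lib.integrations import langchain_callback

openlayer_handler = langchain_callback.OpenlayerHandler()
chat = ChatOpenAI(callbacks=[openlayer_handler])

@trace()
def retrieve_context(query: str) -> str:
    # Hypothetical retrieval step -- replace with your own logic
    return f"Some context relevant to: {query}"

@trace()
def answer(query: str) -> str:
    context = retrieve_context(query)
    # The chain invocation gets added as a step within this larger trace
    return chat.invoke(f"Context: {context}\n\nQuestion: {query}").content

answer("What is Openlayer?")
```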
Once your AI system’s requests are continuously published and logged by Openlayer, you can create tests that run on top of them at a regular cadence.
Refer to the Monitoring overview for details on Openlayer’s monitoring mode, to the Publishing data guide for more information on setting it up, or to the Tracing guide to understand how to trace more complex systems.