Semantic Kernel is an open-source SDK from Microsoft that helps you build AI applications using languages like Python, C#, and Java. It comes with built-in OpenTelemetry instrumentation, making it easy to export trace data.

This guide shows how to export Semantic Kernel traces to Openlayer for observability and evaluation.

While this guide shows code snippets in Python, the integration also works with the other languages Semantic Kernel supports, such as C# and Java.

Configuration

The integration works by sending trace data to Openlayer’s OpenTelemetry endpoint.

The full code used in this guide is available here.

To set it up, you need to:

1. Set the environment variables

Set the following environment variables:

OTEL_EXPORTER_OTLP_ENDPOINT="https://api.openlayer.com/v1/otel"
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer YOUR_OPENLAYER_API_KEY_HERE, x-bt-parent=pipeline_id:YOUR_PIPELINE_ID_HERE"
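
Alternatively, you can set these variables from Python before any instrumentation is initialized. A minimal sketch (the API key and pipeline ID are placeholders):

import os

# must be set before openlit.init() so the OTLP exporter picks them up
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://api.openlayer.com/v1/otel"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = (
    "Authorization=Bearer YOUR_OPENLAYER_API_KEY_HERE, "
    "x-bt-parent=pipeline_id:YOUR_PIPELINE_ID_HERE"
)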

2. Initialize OpenLIT and Semantic Kernel

Initialize OpenLIT and Semantic Kernel in your application.

import openlit
from semantic_kernel import Kernel

# disable_batch=True exports each span immediately instead of batching,
# which is handy for short-lived scripts and local debugging
openlit.init(disable_batch=True)
kernel = Kernel()
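
If you keep the variables in a .env file instead, load them before calling openlit.init, since the exporter reads them at initialization time. A sketch assuming the python-dotenv package (an assumption, not a requirement of the integration):

from dotenv import load_dotenv

import openlit
from semantic_kernel import Kernel

load_dotenv()  # loads OTEL_EXPORTER_OTLP_* from .env before instrumentation starts
openlit.init(disable_batch=True)
kernel = Kernel()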

3. Run LLMs and workflows as usual

Once instrumentation is set up, you can run your LLM calls as usual. Trace data will be automatically captured and exported to Openlayer, where you can begin testing and analyzing it.

For example:

from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
from semantic_kernel.prompt_template import InputVariable, PromptTemplateConfig

kernel.add_service(
    # OpenAIChatCompletion reads the OPENAI_API_KEY environment variable by default
    OpenAIChatCompletion(ai_model_id="gpt-4o-mini"),
)


prompt = """{{$input}}
Please provide a concise response to the question above.
"""

prompt_template_config = PromptTemplateConfig(
    template=prompt,
    name="question_answerer",
    template_format="semantic-kernel",
    input_variables=[
        InputVariable(name="input", description="The question from the user", is_required=True),
    ]
)

answer_question = kernel.add_function(
    function_name="answerQuestionFunc",
    plugin_name="questionAnswererPlugin",
    prompt_template_config=prompt_template_config,
)

# top-level await works in notebooks; see the script-friendly variant below
await kernel.invoke(answer_question, input="What's the meaning of life?")
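
kernel.invoke is a coroutine, so the top-level await above only works in environments that provide a running event loop, such as notebooks. In a standalone script, wrap the call in an async entry point:

import asyncio

async def main() -> None:
    result = await kernel.invoke(answer_question, input="What's the meaning of life?")
    print(result)

asyncio.run(main())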