Spring AI is a Spring-based framework for building AI applications. It ships with built-in observability instrumentation (via Micrometer), which makes it straightforward to export trace data over OpenTelemetry.

This guide shows how to export Spring AI traces to Openlayer for observability and evaluation.

Configuration

The integration works by sending trace data to Openlayer’s OpenTelemetry endpoint.

The full code used in this guide is available here.

To set it up, you need to:

1. Add dependencies

Make sure your project includes the following dependencies:

  • Spring Boot Actuator
  • Micrometer tracing with OpenTelemetry support
  • OTLP exporter

For Maven projects, add the required dependencies to your pom.xml. (Gradle users can use the equivalent coordinates.)

<dependencyManagement>
  <dependencies>
      <dependency>
          <groupId>io.opentelemetry.instrumentation</groupId>
          <artifactId>opentelemetry-instrumentation-bom</artifactId>
          <version>2.13.2</version>
          <type>pom</type>
          <scope>import</scope>
      </dependency>
  </dependencies>
</dependencyManagement>

<dependencies>
  <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter</artifactId>
  </dependency>
  <dependency>
      <groupId>org.springframework.ai</groupId>
      <artifactId>spring-ai-openai-spring-boot-starter</artifactId>
      <version>1.0.0-M6</version>
  </dependency>
  <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId>
  </dependency>
  <dependency>
      <groupId>io.opentelemetry.instrumentation</groupId>
      <artifactId>opentelemetry-spring-boot-starter</artifactId>
  </dependency>
  <!-- Spring Boot actuator -->
  <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-actuator</artifactId>
  </dependency>
  <!-- Micrometer Observation-OpenTelemetry bridge -->
  <dependency>
      <groupId>io.micrometer</groupId>
      <artifactId>micrometer-tracing-bridge-otel</artifactId>
  </dependency>
  <!-- OpenTelemetry OTLP exporter -->
  <dependency>
      <groupId>io.opentelemetry</groupId>
      <artifactId>opentelemetry-exporter-otlp</artifactId>
  </dependency>
</dependencies>
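If you use Gradle instead, the equivalent dependency declarations look roughly like this (a sketch mirroring the Maven coordinates and versions above):

```groovy
dependencies {
    // Align OpenTelemetry instrumentation versions, like the Maven BOM import
    implementation platform('io.opentelemetry.instrumentation:opentelemetry-instrumentation-bom:2.13.2')

    implementation 'org.springframework.boot:spring-boot-starter'
    implementation 'org.springframework.boot:spring-boot-starter-web'
    implementation 'org.springframework.boot:spring-boot-starter-actuator'
    implementation 'org.springframework.ai:spring-ai-openai-spring-boot-starter:1.0.0-M6'
    implementation 'io.opentelemetry.instrumentation:opentelemetry-spring-boot-starter'
    implementation 'io.micrometer:micrometer-tracing-bridge-otel'
    implementation 'io.opentelemetry:opentelemetry-exporter-otlp'
}
```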
2. Configure tracing in `application.yml`

With the dependencies in place, Spring Boot will auto-configure OpenTelemetry tracing.

You just need to:

  • Set the OTLP endpoint (pointing to Openlayer)
  • Enable tracing for Spring AI

Example configuration:

spring:
  application:
    name: my-llm-app
  ai:
    chat:
      observations:
        include-prompt: true # Include prompt content in tracing (disabled by default for privacy)
        include-completion: true # Include completion content in tracing (disabled by default)
management:
  tracing:
    sampling:
      probability: 1.0 # Sample 100% of requests for full tracing (adjust in production as needed)
  observations:
    annotations:
      enabled: true # Enable @Observed (if you use observation annotations in code)
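With observation annotations enabled, you can also put your own methods on the trace with `@Observed`. A minimal sketch (the service and method names here are hypothetical):

```java
import io.micrometer.observation.annotation.Observed;
import org.springframework.stereotype.Service;

// Hypothetical service: with annotations enabled in application.yml,
// each call to this method is wrapped in an observation and exported as a span.
@Service
public class SummaryService {

    @Observed(name = "summary.generate", contextualName = "generate-summary")
    public String generate(String text) {
        // ... call the model, post-process the output, etc.
        return text;
    }
}
```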
3. Point to Openlayer's OpenTelemetry endpoint

Next, point your application to Openlayer's OpenTelemetry endpoint via the following environment variables:

OTEL_EXPORTER_OTLP_ENDPOINT="https://api.openlayer.com/v1/otel"
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer YOUR_OPENLAYER_API_KEY_HERE, x-bt-parent=pipeline_id:YOUR_PIPELINE_ID_HERE"
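For example, on Linux or macOS you might export these variables and start the application like this (assuming a Maven wrapper; substitute your own API key and pipeline ID):

```shell
export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.openlayer.com/v1/otel"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer YOUR_OPENLAYER_API_KEY_HERE, x-bt-parent=pipeline_id:YOUR_PIPELINE_ID_HERE"
./mvnw spring-boot:run
```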
4. Run LLMs and workflows as usual

Once instrumentation is set up, you can run your Spring application and LLM calls as usual. Trace data will be automatically captured and exported to Openlayer, where you can begin testing and analyzing it.
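For instance, a minimal controller calling a model through Spring AI's `ChatClient` might look like this (a sketch; the endpoint and class names are illustrative, not part of the guide's sample code). Each request produces chat observation spans that are exported to Openlayer:

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical controller: no tracing code is needed here —
// Spring AI's auto-configured observations capture each chat call.
@RestController
public class ChatController {

    private final ChatClient chatClient;

    public ChatController(ChatClient.Builder builder) {
        this.chatClient = builder.build();
    }

    @GetMapping("/chat")
    public String chat(@RequestParam String message) {
        return chatClient.prompt()
                .user(message)
                .call()
                .content();
    }
}
```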