Openlayer integrates with Microsoft Copilot Studio to help you trace, evaluate, and monitor your Copilot conversations. The integration works by consuming conversation transcripts directly from Dataverse, automatically parsing them into structured trace data that includes LLM calls, tool executions, plan steps, latency, token usage, and RAG citations.

How it works

Copilot Studio stores conversation transcripts in the ConversationTranscript table in Dataverse. Openlayer provides a dedicated API endpoint that accepts these transcripts and automatically:
  • Parses conversations into structured trace data
  • Extracts LLM calls, tool executions, and plan steps
  • Captures latency, token usage, and cost estimates
  • Logs RAG citations and retrieved documents
  • Auto-creates projects and pipelines per bot
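
To make this concrete, the sketch below shows the rough shape of a ConversationTranscript row, using the column names from the Dataverse table (the same columns the batch script later in this guide selects). All values are placeholders for illustration; exact values, and where the bot name and tenant ID appear, depend on your environment and schema version.

# Illustrative ConversationTranscript row (placeholder values only).
# Column names match the Dataverse ConversationTranscript table queried
# later in this guide; "content" holds the serialized conversation activities.
transcript_row = {
    "conversationtranscriptid": "00000000-0000-0000-0000-000000000000",
    "name": "example-conversation",            # placeholder identifier
    "content": '{"activities": []}',           # abbreviated transcript JSON
    "metadata": "{}",                          # placeholder metadata blob
    "conversationstarttime": "2024-01-01T12:00:00Z",
    "createdon": "2024-01-01T12:05:00Z",
    "modifiedon": "2024-01-01T12:05:00Z",
    "schemaversion": "1.0",                    # placeholder
    "schematype": 0,                           # placeholder
    "statecode": 0,                            # 0 = active; the batch query later filters on this
    "statuscode": 1,
}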

Automatic project creation

Openlayer automatically creates a new project for each unique Copilot Studio agent the first time it receives conversation data. Each unique combination of BotName + AADTenantId maps to a dedicated Openlayer project and data source. This means that as your organization creates new Copilot Studio agents, they will automatically appear in Openlayer without any additional configuration—simply send the conversation transcripts to the API endpoint and Openlayer handles the rest.

Manual project configuration

Alternatively, you can create projects directly in the Openlayer platform and configure them for Microsoft Copilot Studio:
  1. Navigate to your workspace and create a new project
  2. Select Monitoring mode
  3. Choose Microsoft Copilot Studio as the data source
  4. Enter your agent details:
    • Schema name (bot name)
    • Tenant ID

Finding your bot name and tenant ID

You can retrieve these values from your Copilot Studio agent:
  1. In Copilot Studio, navigate to Settings > Advanced
  2. Expand the Metadata section
  3. Copy the Schema name (this is your bot name)
  4. Copy the Tenant ID
Microsoft Copilot Studio metadata section showing Schema name and Tenant ID
This approach gives you more control over project settings and organization before data starts flowing in.

Prerequisites

Enable enhanced transcripts

To get the most detailed traces in Openlayer, you must enable Enhanced Transcripts in your Copilot Studio agent settings. This setting allows Openlayer to capture node-level details such as name, type, and start and end times. To enable it:
  1. In Copilot Studio, navigate to Settings > Advanced
  2. Expand the Enhanced Transcripts section
  3. Toggle on Include node-level details in transcripts
  4. Click Save
Microsoft Copilot Studio agent settings showing Enhanced Transcripts option
Without this setting enabled, Openlayer will still capture basic conversation data, but you won’t get the full execution traces showing individual node executions and timing information.

Integration options

There are two ways to send Copilot Studio data to Openlayer:
Option                        | Best for                         | Latency
Azure Logic App (Recommended) | Production, real-time monitoring | Near real-time
Batch API Script              | Historical analysis, backfills   | Scheduled

Option 1: Real-time integration via Azure Logic App (recommended)

Use an Azure Logic App with a Dataverse trigger for near real-time, low-maintenance integration.

Architecture overview

The Logic App integration follows this flow:
Microsoft Copilot Studio to Openlayer integration architecture diagram
  1. User interacts with Copilot Studio workflows/agents
  2. Copilot Studio stores session data in the ConversationTranscript table in Dataverse
  3. A Logic App triggers on Add/Update/Delete events in the ConversationTranscript table
  4. The Logic App’s HTTP action sends the transcript data to Openlayer’s REST API
Copilot Studio only writes conversation data to Dataverse after the session finishes, with an additional delay of up to 30 minutes. This means traces will not appear in Openlayer in real time during an active conversation, but they will be available shortly after the conversation ends.

Step 1: Create the Logic App

  1. In the Azure Portal, create a new Logic App (Consumption).
  2. In the Logic App designer, search for Dataverse in the triggers.
  3. Select the When a row is added, modified or deleted trigger.
  4. Configure the trigger:
    • Table name: ConversationTranscript
    • Scope: Organization
    • Filter rows: (Optional) Add filters to exclude test/design mode conversations
Logic App trigger configuration for ConversationTranscript table

Step 2: Configure the HTTP action

Add an HTTP action after the trigger with the following configuration:
  • Method: POST
  • URI: https://api.openlayer.com/copilot-studio/sessions
  • Headers:
    • Authorization: Bearer <YOUR_OPENLAYER_API_KEY>
    • Content-Type: application/json
  • Body: @{triggerBody()}
Logic App HTTP action configuration for Openlayer API
The Logic App passes the entire Dataverse row directly to Openlayer—no transformation needed.
On-premises deployments: If you’re using a self-hosted Openlayer instance, replace api.openlayer.com with your deployment’s base URL (e.g., openlayer.yourcompany.com).
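
For reference, the HTTP action above is equivalent to the following Python call. This is only a sketch: transcript_row stands in for the row the trigger passes as @{triggerBody()}, and no code is required in the Logic App itself.

import os

import requests

# Placeholder for the ConversationTranscript row the trigger would supply.
transcript_row = {"name": "example", "content": "{}"}

# Same request the Logic App HTTP action performs.
resp = requests.post(
    "https://api.openlayer.com/copilot-studio/sessions",  # or your self-hosted base URL
    headers={
        "Authorization": f"Bearer {os.environ['OPENLAYER_API_KEY']}",
        "Content-Type": "application/json",
    },
    json=transcript_row,
    timeout=30,
)
print(resp.status_code, resp.text)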

Step 3: (Optional) Add error handling

Add a Scope action around the HTTP call and configure Run after settings to handle failures:
  • Send alerts to a Teams channel or email on failure
  • Use a dead-letter queue for failed transcripts

Option 2: Batch integration via Python

For teams that prefer a code-first approach or need scheduled batch syncs.

Prerequisites

pip install PowerPlatform-Dataverse-Client azure-identity requests

Example script

"""Sync Copilot Studio transcripts to Openlayer."""

import os
import requests
from datetime import datetime, timedelta
from PowerPlatform.Dataverse.client import DataverseClient
from azure.identity import InteractiveBrowserCredential

# Configuration
DATAVERSE_URL = os.environ["DATAVERSE_URL"]  # e.g., https://yourorg.crm.dynamics.com
OPENLAYER_API_KEY = os.environ["OPENLAYER_API_KEY"]

# Initialize Dataverse client with authentication
credential = InteractiveBrowserCredential()
client = DataverseClient(DATAVERSE_URL, credential)

# Query recent transcripts from Dataverse using SQL
# Get transcripts modified in the last 24 hours
yesterday = (datetime.utcnow() - timedelta(days=1)).strftime("%Y-%m-%d %H:%M:%S")

query = f"""
SELECT TOP 100
    conversationtranscriptid,
    name,
    content,
    metadata,
    conversationstarttime,
    createdon,
    modifiedon,
    schemaversion,
    schematype,
    statecode,
    statuscode
FROM conversationtranscript
WHERE modifiedon > '{yesterday}'
    AND statecode = 0
ORDER BY modifiedon DESC
"""

transcripts = client.query_sql(query)

# Send each transcript to Openlayer
for transcript in transcripts:
    resp = requests.post(
        "https://api.openlayer.com/copilot-studio/sessions",
        headers={
            "Authorization": f"Bearer {OPENLAYER_API_KEY}",
            "Content-Type": "application/json",
        },
        json=transcript,
    )

    if resp.ok:
        result = resp.json()
        print(f"✓ Processed transcript {transcript['name']}: "
              f"{result['requestsProcessed']} requests")
    else:
        print(f"✗ Error processing {transcript['name']}: "
              f"{resp.status_code} - {resp.text}")

Use cases for batch integration

  • Nightly or hourly jobs that backfill conversations
  • Historical analysis with new evaluation pipelines
  • Environments where Logic Apps are restricted
On-premises deployments: If you’re using a self-hosted Openlayer instance, replace api.openlayer.com in the script with your deployment’s base URL.

Monitoring in Openlayer

Once integrated, Copilot Studio conversations will automatically appear in your Openlayer project.

View conversation traces

Navigate to your project’s Records tab to see detailed conversation traces:
Example of a Microsoft Copilot Studio conversation trace in Openlayer
Each trace captures:
  • User queries and bot responses
  • Latency and token usage per turn
  • Nested execution steps (LLM calls, tool executions, plan steps)
  • RAG citations and knowledge source references

Analyze RAG quality

For bots using knowledge sources:
  • View retrieved citations per response
  • Evaluate relevance of retrieved documents
  • Track knowledge source utilization

Monitor session outcomes

Track conversation-level metrics:
  • Resolution rates (session_outcome)
  • CSAT scores
  • Turn counts and engagement

Run evaluations

Create evaluation pipelines to:
  • Score response quality
  • Detect hallucinations
  • Measure citation accuracy
  • Track safety and compliance