Definition

The context utilization test measures how effectively the LLM uses the provided context when generating its response. It evaluates whether the model appropriately leverages the available contextual information to produce better answers, rather than ignoring it.
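As a rough sketch of how such an LLM-judge evaluation works (this is illustrative, not Openlayer's actual evaluator prompt), the metric can be framed as a single judge call that returns a score between 0 and 1. The call_llm parameter below is a placeholder for whatever chat-completion client you use.

# A minimal sketch of an LLM-judge context-utilization score on a 0-1 scale.
# The prompt wording is illustrative, not Openlayer's actual evaluator prompt.
# `call_llm` is any function that takes a prompt string and returns the
# model's text response.

JUDGE_PROMPT = """You are grading how well an answer uses the provided context.

Question: {question}
Context: {context}
Answer: {answer}

Reply with a single number between 0.0 (the answer ignores the context)
and 1.0 (the answer fully and appropriately uses the context)."""


def context_utilization_score(question: str, context: str, answer: str, call_llm) -> float:
    """Ask an LLM judge to score context utilization on a 0-1 scale."""
    prompt = JUDGE_PROMPT.format(question=question, context=context, answer=answer)
    raw = call_llm(prompt)
    # Clamp to [0, 1] in case the judge drifts outside the requested scale.
    return max(0.0, min(1.0, float(raw.strip())))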

Taxonomy

  • Task types: LLM.
  • Availability: development and monitoring.

Why it matters

  • Context utilization ensures that your LLM is effectively using the retrieved or provided context to improve its responses.
  • This metric helps identify when your model is ignoring relevant context or not incorporating it appropriately into its answers.
  • It’s particularly important for RAG (Retrieval-Augmented Generation) systems where context should enhance the quality of generated responses.

Required columns

To compute this metric, your dataset must contain the following columns (an example row follows the list):
  • Input: The question or prompt given to the LLM
  • Outputs: The generated answer/response from your LLM
  • Context: The provided context or background information
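For instance, a single row might look like the following sketch. The key names here are illustrative; use whatever column names your Openlayer dataset configuration maps to these roles.

# A hypothetical validation row with the three required fields. The key names
# ("input", "output", "context") are illustrative -- use the column names your
# Openlayer dataset configuration maps to these roles.
row = {
    "input": "What is the capital of France?",
    "output": "According to the provided context, the capital of France is Paris.",
    "context": "France is a country in Western Europe. Its capital is Paris.",
}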
This metric relies on an LLM evaluator judging your submission. On Openlayer, you can configure the underlying LLM used to compute it. Check out the OpenAI or Anthropic integration guides for details.

Test configuration examples

If you are writing a tests.json, here is a valid configuration for the context utilization test:
[
  {
    "name": "Context utilization above 0.7",
    "description": "Ensure that the LLM effectively uses provided context with a score above 0.7",
    "type": "performance",
    "subtype": "metricThreshold",
    "thresholds": [
      {
        "insightName": "metrics",
        "insightParameters": null,
        "measurement": "contextUtilization",
        "operator": ">",
        "value": 0.7
      }
    ],
    "subpopulationFilters": null,
    "mode": "development",
    "usesValidationDataset": true,
    "usesTrainingDataset": false,
    "usesMlModel": false,
    "syncId": "b4dee7dc-4f15-48ca-a282-63e2c04e0689"
  }
]
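If you prefer to keep the threshold in one place, the same file can be generated programmatically. The sketch below simply mirrors the schema shown above and makes no assumptions beyond it; the syncId is copied from the example and should be replaced with your own.

# A minimal sketch that writes the tests.json shown above, keeping the
# threshold in a single variable. The schema mirrors the example; no extra
# fields are assumed. SYNC_ID is a placeholder -- replace it with the syncId
# of your own test.
import json

SYNC_ID = "b4dee7dc-4f15-48ca-a282-63e2c04e0689"  # placeholder from the example above
THRESHOLD = 0.7

test = {
    "name": f"Context utilization above {THRESHOLD}",
    "description": f"Ensure that the LLM effectively uses provided context with a score above {THRESHOLD}",
    "type": "performance",
    "subtype": "metricThreshold",
    "thresholds": [
        {
            "insightName": "metrics",
            "insightParameters": None,
            "measurement": "contextUtilization",
            "operator": ">",
            "value": THRESHOLD,
        }
    ],
    "subpopulationFilters": None,
    "mode": "development",
    "usesValidationDataset": True,
    "usesTrainingDataset": False,
    "usesMlModel": False,
    "syncId": SYNC_ID,
}

with open("tests.json", "w") as f:
    json.dump([test], f, indent=2)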