Definition

The precision test measures the accuracy of positive predictions, calculated as TP / (TP + FP), where TP is the number of true positives and FP the number of false positives. For binary classification, it treats class 1 as the “positive” class. For multiclass classification, it uses the macro-average of the per-class precision scores, treating all classes equally.
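
As a minimal sketch of how these values are typically computed (assuming scikit-learn's precision_score and made-up label lists, neither of which is prescribed by the test itself):

from sklearn.metrics import precision_score

# Binary classification: class 1 is the positive class, so this is TP / (TP + FP) for class 1.
y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]
binary_precision = precision_score(y_true, y_pred)

# Multiclass classification: macro-averaging gives each class's precision equal weight.
y_true_mc = [0, 1, 2, 2, 1]
y_pred_mc = [0, 2, 2, 2, 1]
macro_precision = precision_score(y_true_mc, y_pred_mc, average="macro")

print(binary_precision, macro_precision)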

Taxonomy

  • Task types: Tabular classification, text classification.

Why it matters

  • Precision measures how many of the predicted positive cases are actually positive, making it crucial when false positives are costly.
  • It’s particularly important in applications such as spam detection, medical diagnosis, or fraud detection, where incorrect positive predictions can have serious consequences.
  • Higher precision values indicate better model performance, with 1.0 representing no false positives.
  • Precision complements recall to provide a complete picture of model performance on positive class predictions.

Required columns

To compute this metric, your dataset must contain the following columns (see the sketch after this list):
  • Predictions: The predicted class labels from your classification model
  • Ground truths: The actual/true class labels
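
A minimal sketch of such a dataset, assuming pandas and the illustrative column names "predictions" and "ground_truths" (these names are examples, not a requirement of the test):

import pandas as pd
from sklearn.metrics import precision_score

# Illustrative dataset with one column of predicted labels and one of true labels.
df = pd.DataFrame({
    "predictions": [1, 0, 1, 1, 0],
    "ground_truths": [1, 0, 0, 1, 0],
})

# Binary precision computed from the two required columns.
print(precision_score(df["ground_truths"], df["predictions"]))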

Test configuration examples

If you are writing a tests.json, here is a valid configuration for the precision test:
[
  {
    "name": "Precision above 0.8",
    "description": "Ensure that the precision is above 0.8",
    "type": "performance",
    "subtype": "metricThreshold",
    "thresholds": [
      {
        "insightName": "metrics",
        "insightParameters": null,
        "measurement": "precision",
        "operator": ">",
        "value": 0.8
      }
    ],
    "subpopulationFilters": null,
    "mode": "development",
    "usesValidationDataset": true,
    "usesTrainingDataset": false,
    "usesMlModel": true,
    "syncId": "b4dee7dc-4f15-48ca-a282-63e2c04e0689"
  }
]
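
Reading this configuration loosely: "measurement": "precision" selects the metric defined above, and "operator" together with "value" express the passing condition (the precision must be greater than 0.8). The remaining fields suggest that the test runs in development mode, uses the validation dataset rather than the training dataset, and requires model predictions; the syncId shown here is just an example identifier.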