Definition

The false positive rate test measures the ratio of false positives to the total number of actual negatives, calculated as FP / (FP + TN). This metric indicates how often the model incorrectly predicts the positive class when the true class is negative.
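For illustration, here is a minimal sketch of how the metric could be computed from arrays of predicted and true labels. It uses scikit-learn's confusion_matrix; the function and variable names are illustrative, not part of the platform's API.

from sklearn.metrics import confusion_matrix

def false_positive_rate(y_true, y_pred):
    # confusion_matrix returns [[TN, FP], [FN, TP]] for labels [0, 1]
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return fp / (fp + tn)

# One false positive out of three actual negatives -> FPR = 1/3
print(false_positive_rate([0, 0, 0, 1], [1, 0, 0, 1]))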

Taxonomy

  • Task types: Tabular classification, text classification.
  • Availability: binary classification tasks only.

Why it matters

  • False positive rate is crucial for understanding the model’s tendency to make incorrect positive predictions.
  • It’s particularly important in applications where false positives are costly, such as medical diagnosis, fraud detection, or spam filtering.
  • Lower false positive rates indicate better model performance, with 0 representing no false positives.
  • This metric complements precision and recall by focusing specifically on performance on the negative class.

Required columns

To compute this metric, your dataset must contain the following columns (an illustrative layout follows the list):
  • Predictions: The predicted class labels from your binary classification model
  • Ground truths: The actual/true class labels
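
As a sketch, a dataset satisfying these requirements could look like the following. The column names here are illustrative; use whatever names your dataset configuration maps to predictions and ground truths.

import pandas as pd

# Illustrative column names; map them to predictions and ground truths
# in your dataset configuration.
df = pd.DataFrame({
    "prediction": [1, 0, 0, 1],    # predicted class labels (binary)
    "ground_truth": [0, 0, 1, 1],  # actual/true class labels (binary)
})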

Test configuration examples

If you are writing a tests.json, here is a valid configuration for the false positive rate test:
[
  {
    "name": "False positive rate below 0.05",
    "description": "Ensure that the false positive rate is below 0.05",
    "type": "performance",
    "subtype": "metricThreshold",
    "thresholds": [
      {
        "insightName": "metrics",
        "insightParameters": null,
        "measurement": "falsePositiveRate",
        "operator": "<",
        "value": 0.05
      }
    ],
    "subpopulationFilters": null,
    "mode": "development",
    "usesValidationDataset": true,
    "usesTrainingDataset": false,
    "usesMlModel": true,
    "syncId": "b4dee7dc-4f15-48ca-a282-63e2c04e0689"
  }
]
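
Conceptually, this configuration asserts that the false positive rate computed on the validation dataset is below 0.05. The following is a local sketch of the same check, with hypothetical labels standing in for your validation data:

from sklearn.metrics import confusion_matrix

# Hypothetical validation-set labels; replace with your own data.
val_truths = [0, 0, 0, 0, 1, 1]
val_preds = [0, 0, 0, 0, 1, 0]

tn, fp, _, _ = confusion_matrix(val_truths, val_preds, labels=[0, 1]).ravel()
fpr = fp / (fp + tn)
assert fpr < 0.05, f"False positive rate {fpr:.3f} is not below 0.05"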