Definition

The accuracy test measures classification accuracy, defined as the ratio of correctly classified samples to the total number of samples. Accuracy provides an overall measure of how often the classifier makes correct predictions.
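
As a minimal sketch (not the platform's internal implementation), the metric can be computed from a list of predictions and a list of ground-truth labels, either with scikit-learn or with plain Python; the labels below are illustrative:

from sklearn.metrics import accuracy_score

y_true = ["cat", "dog", "dog", "cat", "dog"]   # ground-truth labels
y_pred = ["cat", "dog", "cat", "cat", "dog"]   # model predictions

# Both lines implement: correct predictions / total predictions
accuracy = accuracy_score(y_true, y_pred)                            # 0.8
manual = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)   # 0.8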

Taxonomy

  • Task types: Tabular classification, text classification.
  • Availability:

Why it matters

  • Accuracy is one of the most intuitive and commonly used metrics for evaluating classification performance.
  • It provides a single number that represents the overall correctness of the model across all classes.
  • Higher accuracy values indicate better model performance, with 1.0 representing perfect classification.
  • However, accuracy can be misleading under class imbalance, where metrics such as precision, recall, or F1 score may be more appropriate (see the sketch after this list).
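
To illustrate the class-imbalance caveat with a hypothetical example (not taken from the platform), a classifier that always predicts the majority class can score high accuracy while being useless on the minority class:

from sklearn.metrics import accuracy_score, f1_score

# 95 negatives and 5 positives; the "model" always predicts the majority class
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100

print(accuracy_score(y_true, y_pred))             # 0.95 -- looks strong
print(f1_score(y_true, y_pred, zero_division=0))  # 0.0  -- no positive ever caught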

Required columns

To compute this metric, your dataset must contain the following columns:
  • Predictions: The predicted class labels from your classification model
  • Ground truths: The actual/true class labels
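
For example, a minimal validation dataset with these two roles could look like the following (the column names predictions and ground_truths are placeholders; use whatever names your configuration maps to these roles):

import pandas as pd

validation_df = pd.DataFrame({
    "ground_truths": ["spam", "ham", "ham", "spam"],   # actual class labels
    "predictions":   ["spam", "ham", "spam", "spam"],  # model's predicted labels
})

# Accuracy over this tiny dataset: 3 correct out of 4 -> 0.75
accuracy = (validation_df["predictions"] == validation_df["ground_truths"]).mean()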

Test configuration examples

If you are writing a tests.json, here is an example of a valid configuration for the accuracy test:
[
  {
    "name": "Accuracy above 0.85",
    "description": "Ensure that the classification accuracy is above 0.85",
    "type": "performance",
    "subtype": "metricThreshold",
    "thresholds": [
      {
        "insightName": "metrics",
        "insightParameters": null,
        "measurement": "accuracy",
        "operator": ">",
        "value": 0.85
      }
    ],
    "subpopulationFilters": null,
    "mode": "development",
    "usesValidationDataset": true,
    "usesTrainingDataset": false,
    "usesMlModel": true,
    "syncId": "b4dee7dc-4f15-48ca-a282-63e2c04e0689"
  }
]
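
As a rough sketch of how such a threshold could be checked locally (illustrative only, not the platform's test runner; the operator mapping and the hard-coded measured_accuracy value are assumptions):

import json
import operator

# Map the config's operator strings to Python comparisons (assumed set of operators)
OPS = {">": operator.gt, ">=": operator.ge, "<": operator.lt,
       "<=": operator.le, "==": operator.eq}

with open("tests.json") as f:
    tests = json.load(f)

measured_accuracy = 0.91  # e.g., accuracy computed on the validation dataset

for test in tests:
    for threshold in test["thresholds"]:
        if threshold["measurement"] == "accuracy":
            passed = OPS[threshold["operator"]](measured_accuracy, threshold["value"])
            print(f"{test['name']}: {'PASS' if passed else 'FAIL'}")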