Definition
The accuracy test measures classification accuracy, defined as the ratio of correctly classified samples to the total number of samples. Accuracy provides an overall measure of how often the classifier makes correct predictions.
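For concreteness, here is a minimal Python sketch of that definition (the toy labels below are invented for illustration):

```python
# Accuracy = number of correctly classified samples / total number of samples.
y_true = ["cat", "dog", "dog", "bird"]   # ground-truth labels (toy data)
y_pred = ["cat", "dog", "bird", "bird"]  # model predictions (toy data)

correct = sum(t == p for t, p in zip(y_true, y_pred))
accuracy = correct / len(y_true)
print(accuracy)  # 0.75
```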
Taxonomy
- Task types: Tabular classification, text classification.
Why it matters
- Accuracy is one of the most intuitive and commonly used metrics for evaluating classification performance.
- It provides a single number that represents the overall correctness of the model across all classes.
- Higher accuracy values indicate better model performance, with 1.0 representing perfect classification.
- However, accuracy can be misleading in cases of class imbalance, where other metrics like precision, recall, or F1 might be more appropriate.
Required columns
To compute this metric, your dataset must contain the following columns (a short example follows the list):
- Predictions: The predicted class labels from your classification model
- Ground truths: The actual/true class labels
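A minimal sketch of how these two columns feed the metric, assuming a pandas DataFrame and scikit-learn (the column names `predictions` and `ground_truths` are placeholders, not a required naming scheme):

```python
import pandas as pd
from sklearn.metrics import accuracy_score

# Hypothetical dataset with the two required columns; real column names may differ.
df = pd.DataFrame({
    "predictions": ["spam", "ham", "spam", "ham"],   # predicted class labels
    "ground_truths": ["spam", "ham", "ham", "ham"],  # actual/true class labels
})

# accuracy_score(y_true, y_pred) returns the fraction of samples whose labels match.
print(accuracy_score(df["ground_truths"], df["predictions"]))  # 0.75
```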
Test configuration examples
If you are writing a `tests.json`, here are a few valid configurations for the accuracy test:
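As an illustration only, an accuracy test entry could look roughly like the following; the field names (`name`, `metric`, `operator`, `threshold`) are assumptions rather than the authoritative `tests.json` schema, so adapt them to the actual format:

```json
[
  {
    "name": "Accuracy is at least 80%",
    "metric": "accuracy",
    "operator": ">=",
    "threshold": 0.8
  }
]
```

Here the hypothetical `threshold` field encodes the minimum acceptable accuracy and `operator` the comparison applied when the test is evaluated.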
Related
- Precision test - Measure positive prediction accuracy.
- Recall test - Measure ability to find all positive instances.
- F1 test - Harmonic mean of precision and recall.
- ROC AUC test - Area under the receiver operating characteristic curve.
- Aggregate metrics - Overview of all available metrics.