Definition
The precision test measures the accuracy of positive predictions, calculated as TP / (TP + FP), where TP is the number of true positives and FP is the number of false positives. For binary classification, it treats class 1 as the positive class. For multiclass classification, it uses the macro-average of the per-class precision scores, treating all classes equally.
Taxonomy
- Task types: Tabular classification, text classification.
- Availability:
Why it matters
- Precision measures how many of the predicted positive cases are actually positive, making it crucial when false positives are costly.
- It’s particularly important in applications like spam detection, medical diagnosis, or fraud detection where incorrect positive predictions can have serious consequences.
- Precision ranges from 0 to 1; higher values indicate better model performance, and a value of 1.0 means the model made no false positive predictions.
- Precision complements recall to provide a complete picture of model performance on positive class predictions.
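To make the formula concrete, here is a minimal pure-Python sketch of binary and macro-averaged precision. This is illustrative only; the function names and example data are invented for this sketch, and the platform computes the metric internally:

```python
def precision_binary(y_true, y_pred, positive=1):
    """Precision = TP / (TP + FP) for one positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    return tp / (tp + fp) if tp + fp else 0.0

def precision_macro(y_true, y_pred):
    """Macro-average: per-class precision, each class weighted equally."""
    classes = sorted(set(y_true) | set(y_pred))
    return sum(precision_binary(y_true, y_pred, c) for c in classes) / len(classes)

# Binary: of the two predicted positives, one is correct.
print(precision_binary([0, 1, 1, 0], [1, 1, 0, 0]))  # 0.5

# Multiclass: per-class precisions are 1.0, 0.5, and 2/3, averaged equally.
print(round(precision_macro([0, 1, 1, 2, 2, 2], [0, 1, 2, 2, 2, 1]), 3))  # 0.722
```

Note that the macro average treats a rare class the same as a frequent one, which is why it is the aggregation used for multiclass precision here.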
Required columns
To compute this metric, your dataset must contain the following columns:
- Predictions: The predicted class labels from your classification model
- Ground truths: The actual/true class labels
Test configuration examples
If you are writing a tests.json, here are a few valid configurations for the precision test:
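As an illustrative sketch, a precision test entry might look like the following; the field names used here (`name`, `metric`, `thresholds`, `operator`) are assumptions and may differ from your platform's actual schema:

```json
{
  "name": "Precision test",
  "metric": "precision",
  "thresholds": [
    { "operator": ">=", "value": 0.8 }
  ]
}
```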
Related
- Recall test - Measures the ability to find all positive instances.
- F1 test - Harmonic mean of precision and recall.
- False positive rate test - Measures the rate of incorrect positive predictions.
- Accuracy test - Overall classification correctness.
- Aggregate metrics - Overview of all available metrics.