Model evaluation metrics

This guide explains the model evaluation metrics currently available throughout the Openlayer platform.

🛠️ Reach out: Our range of supported metrics is constantly expanding. If you don't see the specific metric you need, please don't hesitate to contact us, and we will work to make it available for you!

Classification metrics

| Metric | Description | Comments |
| --- | --- | --- |
| Accuracy | The classification accuracy, defined as the ratio of the number of correctly classified samples to the total number of samples. | |
| Precision per class | The precision score for each class, given by TP / (TP + FP). | |
| Recall per class | The recall score for each class, given by TP / (TP + FN). | |
| F1 per class | The F1 score for each class, given by 2 * (Precision * Recall) / (Precision + Recall). | |
| Precision | The macro-average of the per-class precision scores, i.e., treating all classes equally. | |
| Recall | The macro-average of the per-class recall scores, i.e., treating all classes equally. | |
| F1 | The macro-average of the per-class F1 scores, i.e., treating all classes equally. | |
| ROC AUC | The macro-average of the per-class area under the receiver operating characteristic curve scores, i.e., treating all classes equally. For multi-class classification tasks, uses the one-versus-one configuration. | Available only if the class probabilities are uploaded with the model, by specifying a predictionScoresColumnName on the dataset configs. Refer to the API reference for details. |

Where:

  • TP: true positive;
  • TN: true negative;
  • FP: false positive;
  • FN: false negative.
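
These definitions match the standard scikit-learn implementations, so you can reproduce them locally as a sanity check. Below is a minimal sketch under that assumption; the y_true, y_pred, and y_prob arrays are hypothetical placeholders for your own labels, predictions, and class probabilities.

```python
# Minimal sketch reproducing the metrics above with scikit-learn.
# y_true, y_pred, and y_prob are hypothetical placeholder arrays.
import numpy as np
from sklearn.metrics import (
    accuracy_score,
    f1_score,
    precision_score,
    recall_score,
    roc_auc_score,
)

y_true = np.array([0, 1, 2, 2, 1, 0])  # ground-truth labels
y_pred = np.array([0, 1, 2, 1, 1, 0])  # predicted labels
y_prob = np.array([                    # per-class probabilities
    [0.8, 0.1, 0.1],
    [0.2, 0.7, 0.1],
    [0.1, 0.2, 0.7],
    [0.2, 0.5, 0.3],
    [0.1, 0.8, 0.1],
    [0.9, 0.05, 0.05],
])

# Accuracy: correctly classified samples / total samples.
print(accuracy_score(y_true, y_pred))

# Per-class scores: average=None returns one value per class.
print(precision_score(y_true, y_pred, average=None))  # TP / (TP + FP)
print(recall_score(y_true, y_pred, average=None))     # TP / (TP + FN)
print(f1_score(y_true, y_pred, average=None))

# Macro averages: the unweighted mean of the per-class scores,
# i.e., treating all classes equally.
print(precision_score(y_true, y_pred, average="macro"))
print(recall_score(y_true, y_pred, average="macro"))
print(f1_score(y_true, y_pred, average="macro"))

# Macro-averaged ROC AUC in the one-versus-one configuration.
# This needs class probabilities, which is why the platform requires
# them (via predictionScoresColumnName) to show ROC AUC.
print(roc_auc_score(y_true, y_prob, multi_class="ovo", average="macro"))
```

Swap in your own labels, predictions, and probabilities to cross-check the numbers reported in the platform.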

Regression metrics

Coming soon...