Definition

The mean squared error (MSE) test measures the average of the squared differences between the predicted values and the true values. MSE provides a measure of how close predictions are to the actual outcomes, with larger errors being penalized more heavily due to the squaring operation.
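As a reference, the definition above can be sketched in a few lines of NumPy (a minimal illustration; the function and variable names are our own, not part of any particular library):

```python
import numpy as np

def mean_squared_error(y_true, y_pred):
    """Average of the squared differences between true and predicted values."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean((y_true - y_pred) ** 2))

# Squared errors: 0.25, 0.0, 2.25 -> mean = 2.5 / 3
print(mean_squared_error([3.0, 5.0, 2.5], [2.5, 5.0, 4.0]))
```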

Taxonomy

  • Task types: Tabular regression.
  • Availability: Development and monitoring.

Why it matters

  • MSE is one of the most commonly used metrics for evaluating regression model performance.
  • The squaring of errors means that larger prediction errors are penalized more heavily than smaller ones, making MSE sensitive to outliers.
  • Lower MSE values indicate better model performance, with 0 representing perfect predictions.
  • MSE is differentiable, making it suitable for gradient-based optimization algorithms during model training.
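The outlier sensitivity mentioned above is easy to demonstrate numerically. In this sketch (values chosen purely for illustration), a single large error raises the MSE far more than ten moderate errors combined:

```python
import numpy as np

y_true = np.zeros(10)

clean = np.full(10, 1.0)   # every prediction is off by exactly 1
outlier = clean.copy()
outlier[0] = 10.0          # one prediction is off by 10

mse_clean = float(np.mean((y_true - clean) ** 2))      # 1.0
mse_outlier = float(np.mean((y_true - outlier) ** 2))  # (9 * 1 + 100) / 10 = 10.9
```

A single outlier here multiplies the MSE by more than ten, even though nine of the ten errors are unchanged.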

Required columns

To compute this metric, your dataset must contain the following columns:
  • Predictions: The predicted values from your regression model
  • Ground truths: The actual/true target values
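To make the column requirement concrete, here is a minimal sketch of a dataset holding both columns and the metric computed from them (the column names `predictions` and `ground_truths` are illustrative, not mandated):

```python
import pandas as pd

df = pd.DataFrame({
    "predictions": [2.5, 5.0, 4.0],    # outputs from the regression model
    "ground_truths": [3.0, 5.0, 2.5],  # actual target values
})

# MSE over the two required columns
mse = float(((df["ground_truths"] - df["predictions"]) ** 2).mean())
```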

Test configuration examples

If you are writing a tests.json, here is an example of a valid configuration for the MSE test:
[
  {
    "name": "MSE below 100",
    "description": "Ensure that the mean squared error is below 100",
    "type": "performance",
    "subtype": "metricThreshold",
    "thresholds": [
      {
        "insightName": "metrics",
        "insightParameters": null,
        "measurement": "mse",
        "operator": "<",
        "value": 100
      }
    ],
    "subpopulationFilters": null,
    "mode": "development",
    "usesValidationDataset": true,
    "usesTrainingDataset": false,
    "usesMlModel": true,
    "syncId": "b4dee7dc-4f15-48ca-a282-63e2c04e0689"
  }
]