mean_absolute_calibration_error
- openstef_beam.metrics.mean_absolute_calibration_error(y_true: ndarray[tuple[Any, ...], dtype[floating]], y_pred: ndarray[tuple[Any, ...], dtype[floating]], quantiles: ndarray[tuple[Any, ...], dtype[floating]]) → float
Calculate the Mean Absolute Calibration Error (MACE) for probabilistic forecasts.
MACE measures how well the predicted quantiles match their nominal levels by comparing observed probabilities to expected quantile levels. Perfect calibration yields MACE = 0.
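Concretely, a standard formulation (assumed here; the implementation may treat ties at the quantile boundary differently) averages the absolute gap between each nominal level and its empirical coverage:

\mathrm{MACE} \;=\; \frac{1}{Q}\sum_{j=1}^{Q}\bigl|\hat{p}_j - q_j\bigr|,
\qquad
\hat{p}_j \;=\; \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}\{\,y_i \le \hat{y}_{i,j}\,\},

where q_j is the j-th nominal quantile level, ŷ_{i,j} the predicted q_j-quantile for sample i, and p̂_j the observed coverage at that level.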
- Parameters:
y_true (ndarray[tuple[Any, ...], dtype[floating]]) – Observed values with shape (num_samples,).
y_pred (ndarray[tuple[Any, ...], dtype[floating]]) – Predicted quantiles with shape (num_samples, num_quantiles). Each column represents predictions for a specific quantile level.
quantiles (ndarray[tuple[Any, ...], dtype[floating]]) – Nominal quantile levels with shape (num_quantiles,). Must be sorted in ascending order and contain values in [0, 1].
- Returns:
The mean absolute calibration error as a float in [0, 0.5]. Values closer to 0 indicate better calibration.
- Return type:
float
Example
Evaluate calibration of quantile forecasts:
>>> import numpy as np
>>> from openstef_beam.metrics import mean_absolute_calibration_error
>>> y_true = np.array([95, 105, 100, 110, 90, 115, 85, 120])
>>> quantiles = np.array([0.1, 0.5, 0.9])
>>> y_pred = np.array([[90, 95, 100],    # 10%, 50%, 90% quantile predictions
...                    [100, 105, 110],
...                    [95, 100, 105],
...                    [105, 110, 115],
...                    [85, 90, 95],
...                    [110, 115, 120],
...                    [80, 85, 90],
...                    [115, 120, 125]])
>>> mace = mean_absolute_calibration_error(y_true, y_pred, quantiles)
>>> round(mace, 2)
0.23
Note
MACE is a key diagnostic for probabilistic forecasts. High MACE values indicate that the forecast confidence intervals are either too narrow (overconfident) or too wide (underconfident).
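The signed per-quantile gaps that MACE averages make the direction of miscalibration visible. Below is a minimal sketch of that diagnostic; the helper coverage_gaps is illustrative and not part of the openstef_beam API, and it assumes a <= indicator for ties, consistent with the formula above:

import numpy as np

def coverage_gaps(y_true, y_pred, quantiles):
    # Signed gap per quantile: observed coverage minus nominal level.
    # A negative gap means the predicted quantile sits too low (fewer
    # observations fall at or below it than the nominal level implies);
    # a positive gap means it sits too high.
    observed = (y_true[:, None] <= y_pred).mean(axis=0)  # empirical coverage
    return observed - quantiles

# Data from the example above.
y_true = np.array([95, 105, 100, 110, 90, 115, 85, 120])
quantiles = np.array([0.1, 0.5, 0.9])
y_pred = np.array([[90, 95, 100], [100, 105, 110], [95, 100, 105],
                   [105, 110, 115], [85, 90, 95], [110, 115, 120],
                   [80, 85, 90], [115, 120, 125]])

gaps = coverage_gaps(y_true, y_pred, quantiles)
print(gaps)                 # approx [-0.1  0.5  0.1]
print(np.abs(gaps).mean())  # approx 0.233, matching the MACE value above

Here the 0.1 quantile sits slightly too low while the 0.5 and 0.9 quantiles sit too high, and the mean absolute gap reproduces the MACE of roughly 0.23 from the example.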