ExplainableForecaster#

class openstef_models.explainability.ExplainableForecaster[source]#

Bases: ABC

Mixin for forecasters that can explain feature importance.

Provides a standardized interface for accessing and visualizing feature importance scores across different forecasting models.

abstract property feature_importances: DataFrame#

Get feature importance scores for this model.

Returns a DataFrame with feature names as the index and quantiles as columns. Each quantile column summarizes the distribution of importance scores across multiple model training runs or folds.

Returns:

DataFrame with feature names as index and quantile columns. Values represent normalized importance scores summing to 1.0.

Note

The returned DataFrame must have feature names as index and quantile columns in format ‘quantile_PXX’ (e.g., ‘quantile_P50’, ‘quantile_P95’). All quantile values must be between 0 and 1.
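A minimal sketch of a DataFrame satisfying this contract, using pandas only. The feature names and score values are hypothetical; only the shape (feature index, ‘quantile_PXX’ columns, normalized scores) follows the contract above:

```python
import pandas as pd

# Hypothetical feature names; the index must hold the model's feature names.
features = ["load_lag_24h", "temperature", "radiation"]

# One column per quantile, named 'quantile_PXX' as the contract requires.
fi = pd.DataFrame(
    {
        "quantile_P05": [0.55, 0.30, 0.15],
        "quantile_P50": [0.60, 0.30, 0.10],
        "quantile_P95": [0.70, 0.20, 0.10],
    },
    index=features,
)

# Normalized importance scores: each column sums to 1.0,
# and every value lies between 0 and 1.
assert (fi.sum(axis=0).round(6) == 1.0).all()
assert ((fi >= 0) & (fi <= 1)).all().all()
```

A concrete forecaster would compute these scores from its fitted model(s) and expose them through the `feature_importances` property.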

plot_feature_importances(quantile: Quantile = Q(0.5)) → Figure[source]#

Create interactive treemap visualization of feature importances.

Parameters:
  • quantile (Quantile) – Which quantile of importance scores to display. Defaults to median (0.5).

Returns:

Plotly Figure containing treemap with feature importance scores. Color intensity indicates relative importance of each feature.

Return type:

Figure
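The quantile-to-column lookup underlying this method can be sketched with pandas alone. The importance values and feature names below are hypothetical; the ‘quantile_PXX’ column naming is taken from the feature_importances contract above:

```python
import pandas as pd

# Hypothetical importances in the shape feature_importances returns.
fi = pd.DataFrame(
    {"quantile_P50": [0.6, 0.3, 0.1], "quantile_P95": [0.5, 0.35, 0.15]},
    index=["load_lag_24h", "temperature", "radiation"],
)

# Map the requested quantile (here the median, 0.5) to its column,
# assuming the 'quantile_PXX' naming convention.
quantile = 0.5
column = f"quantile_P{int(quantile * 100):02d}"

# The sorted scores drive the treemap: each feature's tile area and
# color intensity reflect its importance at the chosen quantile.
scores = fi[column].sort_values(ascending=False)
```

Passing `scores` to a treemap layout (e.g. via Plotly) then yields the visualization this method returns.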