EvaluationSubsetReport#
- class openstef_beam.evaluation.EvaluationSubsetReport(**data: Any) → None[source]#
Bases: BaseModel

Container for evaluation results on a specific data subset.

Bundles filtering criteria, evaluation subset data, and computed metrics for a particular slice of the evaluation dataset. Enables persistence and retrieval of evaluation results for analysis.

- Parameters:
  data (Any)
- filtering: TypeAliasType#
- subset: ForecastDataset#
- metrics: list[SubsetMetric]#
- to_parquet(path: Path)[source]#
Save the subset report to parquet files in the specified directory.
- Parameters:
  path (Path) – Directory where to save the report data.
- classmethod read_parquet(path: Path) → Self[source]#
Load a subset report from parquet files in the specified directory.
- Parameters:
  path (Path) – Directory containing the saved report data.
- Returns:
  Loaded EvaluationSubsetReport instance.
- Return type:
  Self
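The `to_parquet`/`read_parquet` pair implements a simple save-and-restore round trip keyed on a directory path. A minimal, dependency-free sketch of that pattern is below; it uses JSON in place of parquet and an illustrative `TinyReport` stand-in, since the real `EvaluationSubsetReport` serializes pandas data to parquet files.

```python
import json
import tempfile
from dataclasses import dataclass, asdict
from pathlib import Path


@dataclass
class TinyReport:
    # Stand-in for EvaluationSubsetReport; field names are illustrative only.
    filtering: str
    metrics: list[dict]

    def to_json(self, path: Path) -> None:
        # Mirrors to_parquet(): persist the report into a directory.
        path.mkdir(parents=True, exist_ok=True)
        (path / "report.json").write_text(json.dumps(asdict(self)))

    @classmethod
    def read_json(cls, path: Path) -> "TinyReport":
        # Mirrors the read_parquet() classmethod: rebuild from the directory.
        data = json.loads((path / "report.json").read_text())
        return cls(**data)


with tempfile.TemporaryDirectory() as tmp:
    report = TinyReport(filtering="wind_farms", metrics=[{"window": "global", "value": 0.12}])
    report.to_json(Path(tmp))
    restored = TinyReport.read_json(Path(tmp))
    assert restored == report
```

The directory-based layout matters because the real report spans multiple parquet files (subset data plus metrics), so a single file path would not suffice.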
- get_global_metric() → SubsetMetric | None[source]#
Returns the SubsetMetric with window='global', or None if not found.
- Return type:
  SubsetMetric | None
- get_windowed_metrics() → list[SubsetMetric][source]#
Returns all SubsetMetrics with window != 'global'.
- Return type:
  list[SubsetMetric]
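These two accessors split the report's metrics on the `window` field: one metric computed over the whole subset versus metrics computed over rolling windows. A minimal sketch of that split, using a stand-in `SubsetMetric` with only a `window` field (the real class also carries the metric values):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class SubsetMetric:
    # Stand-in: the real SubsetMetric also holds the computed metric values.
    window: str


def get_global_metric(metrics: list[SubsetMetric]) -> Optional[SubsetMetric]:
    # The single metric evaluated over the entire subset, or None.
    return next((m for m in metrics if m.window == "global"), None)


def get_windowed_metrics(metrics: list[SubsetMetric]) -> list[SubsetMetric]:
    # Everything evaluated over rolling/windowed slices of the subset.
    return [m for m in metrics if m.window != "global"]


metrics = [SubsetMetric("global"), SubsetMetric("7D"), SubsetMetric("30D")]
print(get_global_metric(metrics).window)                  # global
print([m.window for m in get_windowed_metrics(metrics)])  # ['7D', '30D']
```

Note that `get_global_metric` returns `None` rather than raising when no global metric exists, so callers should guard before dereferencing.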
- get_measurements() → Series[source]#
Extract measurements Series from the report for the given target.
- Returns:
  Ground truth measurements as a pandas Series.
- Return type:
  Series
- get_quantile_predictions() → DataFrame[source]#
Extract forecasted quantiles from the report.
- Returns:
  Dataset containing forecasted quantile predictions.
- Return type:
  DataFrame
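Both accessors project pandas objects out of the report's stored forecast data: ground truth as a `Series`, quantile forecasts as a `DataFrame`. The sketch below shows that kind of projection over a hypothetical flat frame; the column names (`load`, `quantile_P10`, …) are assumptions for illustration, not the library's actual schema.

```python
import pandas as pd

# Hypothetical flat forecast frame; the real report holds a ForecastDataset.
df = pd.DataFrame(
    {
        "load": [10.0, 12.5, 11.0],        # ground-truth measurements
        "quantile_P10": [8.0, 10.0, 9.0],  # forecast quantiles
        "quantile_P50": [10.5, 12.0, 11.2],
        "quantile_P90": [13.0, 14.5, 13.5],
    },
    index=pd.date_range("2024-01-01", periods=3, freq="h"),
)


def get_measurements(frame: pd.DataFrame) -> pd.Series:
    # Ground truth as a Series, in the spirit of get_measurements().
    return frame["load"]


def get_quantile_predictions(frame: pd.DataFrame) -> pd.DataFrame:
    # Only the quantile columns, in the spirit of get_quantile_predictions().
    return frame.filter(like="quantile_")


print(get_measurements(df).iloc[0])              # 10.0
print(list(get_quantile_predictions(df).columns))
```

Keeping measurements and quantiles as separate accessors lets downstream metric code consume each in its natural shape (1-D truth vs. one column per quantile).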
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'protected_namespaces': (), 'ser_json_inf_nan': 'null'}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].