EvaluationReport#

class openstef_beam.evaluation.EvaluationReport(**data: Any) → None[source]#

Bases: BaseModel

Complete evaluation report containing results for all data subsets.

Aggregates evaluation results across different filtering criteria, enabling analysis of model performance under various conditions (lead times, data availability, etc.).

Parameters:

data (Any)
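
A minimal construction sketch (hedged: the `subset_reports` list is assumed to have been produced elsewhere by the evaluation pipeline; variable names are illustrative):

    from openstef_beam.evaluation import EvaluationReport

    # Assemble a report from previously computed subset reports
    # (the `subset_reports` list is assumed to come from the evaluation step).
    report = EvaluationReport(subset_reports=subset_reports)

    # One entry per filtering criterion (lead time, data availability, ...).
    print(len(report.subset_reports))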

subset_reports: list[EvaluationSubsetReport]#

get_subset(filtering: Filtering) → EvaluationSubsetReport | None[source]#

Retrieve the subset report for the specified filtering criteria.

Parameters:
  • filtering (Filtering) – The filtering criteria to search for.

Returns:

The matching subset report, or None if not found.

Return type:

EvaluationSubsetReport | None
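
For example (a sketch; `report` is an existing EvaluationReport and `filtering` is one of the filtering criteria used when the report was built):

    # Look up the results for one filtering criterion. `filtering` is assumed
    # to be a Filtering value that was used during evaluation.
    subset = report.get_subset(filtering)
    if subset is None:
        print("No results available for this filtering criterion.")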

to_parquet(path: Path)[source]#

Save the complete evaluation report to parquet files.

Parameters:
  • path (Path) – Directory where to save all subset reports.
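
A short saving sketch (the directory name is illustrative):

    from pathlib import Path

    # Persist all subset reports as parquet files under the given directory.
    report.to_parquet(Path("evaluation_reports/latest"))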

classmethod read_parquet(path: Path) → Self[source]#

Load a complete evaluation report from parquet files.

Parameters:
  • path (Path) – Directory containing all subset report data.

Returns:

Loaded EvaluationReport instance with all subset reports.

Return type:

Self
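
A matching loading sketch (same illustrative directory as in the to_parquet example):

    from pathlib import Path
    from openstef_beam.evaluation import EvaluationReport

    # Reload the report written by to_parquet; the path must point at the
    # directory containing the subset report files.
    report = EvaluationReport.read_parquet(Path("evaluation_reports/latest"))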

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'protected_namespaces': (), 'ser_json_inf_nan': 'null'}#

Configuration for the model, should be a dictionary conforming to pydantic.config.ConfigDict.