AnalysisOutput

class openstef_beam.analysis.AnalysisOutput(**data: Any) → None

Bases: BaseModel

Container for analysis results from the benchmarking pipeline.

Holds all visualizations generated for a specific analysis scope, organized by lead time filtering conditions. This allows comparing model performance across different forecasting horizons (e.g., 1-hour vs 24-hour ahead predictions).

The output structure enables systematic organization of results from benchmark runs, making it easy to generate reports that compare multiple models across various lead times and targets.

Variables:
  • scope – Analysis context defining what was analyzed (targets, runs, aggregation)

  • visualizations – Generated charts and plots grouped by lead time filtering

Parameters:

data (Any)

scope: AnalysisScope
visualizations: dict[TypeAliasType, list[VisualizationOutput]]
model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'protected_namespaces': (), 'ser_json_inf_nan': 'null'}

Configuration for the model; a dictionary conforming to Pydantic's ConfigDict.
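The grouping described above (visualizations keyed by lead-time filter within a scope) can be sketched with plain dataclasses. All names below are hypothetical stand-ins for illustration only; the real AnalysisScope and VisualizationOutput classes live in openstef_beam and carry additional fields:

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for AnalysisScope: what was analyzed.
@dataclass
class Scope:
    targets: list[str]

# Hypothetical stand-in for VisualizationOutput: one generated chart.
@dataclass
class Visualization:
    name: str

# Sketch of the AnalysisOutput structure: one scope, with visualizations
# grouped by lead-time filter (e.g. "1h" vs "24h" ahead forecasts).
@dataclass
class Output:
    scope: Scope
    visualizations: dict[str, list[Visualization]] = field(default_factory=dict)

out = Output(scope=Scope(targets=["station_a"]))
out.visualizations.setdefault("24h", []).append(Visualization(name="mae_plot"))
out.visualizations.setdefault("1h", []).append(Visualization(name="mae_plot"))
print(sorted(out.visualizations))  # the lead-time keys present in this output
```

Organizing results this way lets report generation iterate per lead time and compare the same chart across horizons, which matches the comparison use case described above.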