AnalysisPipeline#
- class openstef_beam.analysis.AnalysisPipeline(config: AnalysisConfig) → None[source]#
Bases: object
Orchestrates the generation of visualizations from evaluation reports.
The pipeline processes evaluation reports at different aggregation levels:
- Individual targets: Creates detailed visualizations for single targets
- Multiple targets: Creates comparative visualizations across target groups
It integrates with the benchmarking framework by being called from BenchmarkPipeline after evaluation is complete, ensuring consistent visualization generation across all benchmark runs.
- Parameters:
config (AnalysisConfig)
- __init__(config: AnalysisConfig) → None[source]#
Initialize the analysis pipeline with configuration.
- Parameters:
config (AnalysisConfig) – Analysis configuration containing visualization providers.
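Example (a minimal construction sketch; the import path for AnalysisConfig and the keyword used to pass its visualization providers are assumptions, not confirmed API):
from openstef_beam.analysis import AnalysisPipeline, AnalysisConfig

# Assumed: AnalysisConfig is importable from the same package and takes its
# visualization providers as a keyword argument; adjust to the actual config API.
config = AnalysisConfig(visualization_providers=[])
pipeline = AnalysisPipeline(config)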
- run_for_subsets(reports: list[tuple[TargetMetadata, EvaluationSubsetReport]], aggregation: AnalysisAggregation) → list[VisualizationOutput][source]#
Generate visualizations for a set of evaluation reports at a specific aggregation level.
Processes the provided evaluation reports through all configured visualization providers that support the requested aggregation level. Only providers that declare support for the aggregation are used.
- Parameters:
reports (list[tuple[TargetMetadata, EvaluationSubsetReport]]) – List of (metadata, evaluation_subset_report) tuples to visualize. The metadata provides context about the target and run, while the evaluation report contains the metrics and predictions to visualize.
aggregation (AnalysisAggregation) – The aggregation level determining how reports are grouped and compared in visualizations.
- Returns:
List of visualization outputs from all applicable providers. Empty if no providers support the requested aggregation level.
- Return type:
list[VisualizationOutput]
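Example (a minimal sketch; the AnalysisAggregation import path and member name TARGET, as well as the origin of metadata and subset_report, are illustrative assumptions):
from openstef_beam.analysis import AnalysisAggregation  # assumed import path

# Pairs of (TargetMetadata, EvaluationSubsetReport) produced by the evaluation step.
pairs = [(metadata, subset_report)]
outputs = pipeline.run_for_subsets(pairs, aggregation=AnalysisAggregation.TARGET)
# outputs is a list[VisualizationOutput]; it is empty if no configured provider
# supports the requested aggregation level.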
- run_for_reports(reports: Sequence[tuple[TargetMetadata, EvaluationReport]], scope: AnalysisScope) → AnalysisOutput[source]#
Generate visualizations for evaluation reports within a specific scope.
Groups reports by lead time filtering conditions and generates visualizations for each group using all configured visualization providers that support the scope’s aggregation level. This enables comparing model performance across different forecasting horizons (e.g., short-term vs long-term predictions).
- Parameters:
reports (Sequence[tuple[TargetMetadata, EvaluationReport]]) – List of (metadata, evaluation_report) tuples to visualize.
scope (AnalysisScope) – Analysis scope defining aggregation level and context.
- Returns:
Analysis output containing all generated visualizations grouped by lead time filtering conditions.
- Return type:
AnalysisOutput
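Example (a minimal sketch; how an AnalysisScope is constructed is not shown on this page, so it is left as a placeholder):
# report_pairs: Sequence[tuple[TargetMetadata, EvaluationReport]] from the benchmark run.
scope = ...  # an AnalysisScope defining the aggregation level and context
analysis_output = pipeline.run_for_reports(report_pairs, scope)
# analysis_output groups the generated visualizations by lead time filtering condition.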
- run_for_groups(reports: Sequence[tuple[TargetMetadata, EvaluationReport]], scope: AnalysisScope) → dict[GroupName, AnalysisOutput][source]#
Generate visualizations for multiple target groups at a specific aggregation level.
This method processes all evaluation reports, grouping them by their target group names and generating visualizations for each group.
- Parameters:
reports (Sequence[tuple[TargetMetadata, EvaluationReport]]) – List of (metadata, evaluation_report) tuples to visualize. The metadata provides context about the target and run, while the evaluation report contains the metrics and predictions to visualize.
scope (AnalysisScope) – The analysis scope defining how reports are grouped and aggregated.
- Returns:
Dictionary mapping group names to their corresponding AnalysisOutput.
- Return type:
dict[GroupName, AnalysisOutput]
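Example (a minimal sketch; report_pairs and scope as in the previous example, and the grouping key is taken from each target's metadata):
grouped = pipeline.run_for_groups(report_pairs, scope)
for group_name, group_output in grouped.items():
    # group_name is a GroupName, group_output the AnalysisOutput
    # generated for that target group.
    ...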