BenchmarkComparisonPipeline#
- class openstef_beam.benchmarking.BenchmarkComparisonPipeline(analysis_config: AnalysisConfig, target_provider: TargetProvider[TypeVar, TypeVar], storage: BenchmarkStorage)[source]#
Bases: Generic
Pipeline for comparing results across multiple benchmark runs.
Enables systematic comparison of forecasting models by analyzing results from multiple benchmark runs side-by-side. Provides aggregated analysis at different levels (global, group, target) to identify performance patterns and improvements.
Use cases:
- Compare model variants (different hyperparameters, algorithms)
- Evaluate performance before/after model updates
- Cross-validation analysis across different time periods
- A/B testing of forecasting approaches
The pipeline operates on existing benchmark results, making it efficient for retrospective analysis without re-running expensive computations.
- Multi-level analysis:
The pipeline automatically generates analysis at three aggregation levels:
- Global: Overall performance across all runs and targets
- Group: Performance comparison within target groups
- Target: Individual target performance across runs
This hierarchical approach helps identify whether improvements are consistent across the entire portfolio or specific to certain target types.
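Each level corresponds to one of the pipeline's public methods (run_global, run_for_groups, run_for_targets, documented below), and run() drives all three from the same set of loaded reports. A minimal sketch of that mapping, with the report-loading step elided:
>>> # reports: list[tuple[TargetMetadata, EvaluationReport]] gathered from every run
>>> # comparison.run_global(reports)       # Global: all runs and targets together
>>> # comparison.run_for_groups(reports)   # Group: comparison within each target group
>>> # comparison.run_for_targets(reports)  # Target: one comparison per individual target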
Example
Comparing three model versions across all targets:
>>> from openstef_beam.benchmarking import BenchmarkComparisonPipeline
>>> from openstef_beam.analysis import AnalysisConfig
>>> from openstef_beam.benchmarking.storage import LocalBenchmarkStorage
>>> from openstef_beam.analysis.visualizations import SummaryTableVisualization
>>> from pathlib import Path
>>>
>>> from openstef_beam.analysis.visualizations import (
...     GroupedTargetMetricVisualization,
...     TimeSeriesVisualization
... )
>>>
>>> # Configure analysis
>>> analysis_config = AnalysisConfig(
...     visualization_providers=[
...         GroupedTargetMetricVisualization(name="model_comparison", metric="rCRPS"),
...         SummaryTableVisualization(name="performance_summary"),
...         TimeSeriesVisualization(name="prediction_quality")
...     ]
... )
>>>
>>> # Set up comparison pipeline
>>> comparison = BenchmarkComparisonPipeline(
...     analysis_config=analysis_config,
...     target_provider=...,
...     storage=...
... )
>>>
>>> # Compare multiple model versions across all targets
>>> run_data = {
...     "baseline_v1": LocalBenchmarkStorage("results/baseline"),
...     "improved_v2": LocalBenchmarkStorage("results/improved"),
...     "experimental_v3": LocalBenchmarkStorage("results/experimental")
... }
>>>
>>> # Generate comparison analysis
>>> # comparison.run(
>>> #     run_data=run_data,
>>> # )
- Parameters:
  - analysis_config (AnalysisConfig)
  - target_provider (TargetProvider[TypeVar, TypeVar])
  - storage (BenchmarkStorage)
- __init__(analysis_config: AnalysisConfig, target_provider: TargetProvider[TypeVar, TypeVar], storage: BenchmarkStorage)[source]#
Initialize the comparison pipeline.
- Parameters:
  - analysis_config (AnalysisConfig) – Configuration for analysis and visualization generation.
  - target_provider (TargetProvider[TypeVar, TypeVar]) – Provider that supplies targets for comparison.
  - storage (BenchmarkStorage) – Storage backend for saving comparison results.
- run(run_data: dict[RunName, BenchmarkStorage], filter_args: F | None = None)[source]#
Execute comparison analysis across multiple benchmark runs.
Orchestrates the complete comparison workflow: loads evaluation reports from all specified runs, then generates comparative analysis at global, group, and target levels.
- Parameters:
  - run_data (dict[RunName, BenchmarkStorage]) – Mapping from run names to their corresponding storage backends. Each storage backend should contain evaluation results for the run.
  - filter_args (F | None) – Optional criteria for filtering targets. Only targets matching these criteria will be included in the comparison.
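A minimal usage sketch: reuse the run_data mapping from the class-level example above and optionally narrow the comparison with filter_args. The concrete filter type is defined by the TargetProvider (the type variable F), so the filter object shown here is purely illustrative:
>>> # comparison.run(run_data=run_data)  # compare every target supplied by the target provider
>>> # comparison.run(run_data=run_data, filter_args=my_filter)  # my_filter: hypothetical criteria accepted by the provider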
- run_global(reports: list[tuple[TargetMetadata, EvaluationReport]])[source]#
Generate global comparison analysis across all runs and targets.
Creates aggregate visualizations comparing performance across all runs and target groups, providing a high-level overview of model improvements.
- Parameters:
  - reports (list[tuple[TargetMetadata, EvaluationReport]]) – List of target metadata and evaluation report pairs from all runs.
- run_for_groups(reports: list[tuple[TargetMetadata, EvaluationReport]])[source]#
Generate group-level comparison analysis for each target group.
Creates comparative visualizations within each target group, showing how different runs perform for similar types of targets.
- Parameters:
  - reports (list[tuple[TargetMetadata, EvaluationReport]]) – List of target metadata and evaluation report pairs from all runs.
- run_for_targets(reports: list[tuple[TargetMetadata, EvaluationReport]])[source]#
Generate target-level comparison analysis for individual targets.
Creates detailed comparative visualizations for each individual target, showing how different runs perform on the same forecasting challenge.
- Parameters:
  - reports (list[tuple[TargetMetadata, EvaluationReport]]) – List of target metadata and evaluation report pairs from all runs.
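A minimal sketch of calling one analysis level directly, for example to regenerate only the group- or target-level comparisons. It assumes you already hold the (TargetMetadata, EvaluationReport) pairs; no public loader is documented here, so the assembly step is left as a placeholder:
>>> # reports = ...  # list[tuple[TargetMetadata, EvaluationReport]] assembled from each run's storage
>>> # comparison.run_for_groups(reports)   # group-level comparisons only
>>> # comparison.run_for_targets(reports)  # per-target comparisons only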