read_evaluation_reports

openstef_beam.benchmarking.read_evaluation_reports(targets: Sequence, storage: BenchmarkStorage, run_name: RunName, *, strict: bool = True) → list[tuple[TargetMetadata, EvaluationReport]]

Load evaluation reports for multiple targets from storage.

Utility function for retrieving evaluation results from benchmark storage and formatting them for analysis workflows.

Parameters:
  • targets (Sequence[T], where T is bound to BenchmarkTarget) – Sequence of benchmark targets to load reports for.

  • storage (BenchmarkStorage) – Storage backend containing the evaluation outputs.

  • run_name (RunName) – Name of the benchmark run to load reports from.

  • strict (bool) – If True, raise an error when any target’s report is missing. If False, silently skip targets whose reports are missing.


Returns:

List of tuples containing target metadata and evaluation reports.

Return type:

list[tuple[TargetMetadata, EvaluationReport]]
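The strict/non-strict behavior described above can be sketched as follows. This is a minimal illustration using stand-in types and a plain dict in place of the real BenchmarkStorage backend; the actual openstef_beam classes and the function’s internals differ.

```python
from dataclasses import dataclass


# Stand-in types for illustration only; not the real openstef_beam classes.
@dataclass
class TargetMetadata:
    name: str


@dataclass
class EvaluationReport:
    score: float


def read_evaluation_reports_sketch(targets, reports_by_name, *, strict=True):
    """Sketch of the strict vs. non-strict loading semantics.

    `reports_by_name` stands in for the storage backend: a mapping
    from target name to its EvaluationReport (or absent if missing).
    """
    results = []
    for target in targets:
        report = reports_by_name.get(target)
        if report is None:
            if strict:
                # strict=True: a missing report is an error.
                raise FileNotFoundError(f"Missing report for target {target!r}")
            # strict=False: skip targets without a report.
            continue
        results.append((TargetMetadata(target), report))
    return results


reports = {"station_a": EvaluationReport(0.91)}

# With strict=False, the missing 'station_b' report is skipped.
loaded = read_evaluation_reports_sketch(
    ["station_a", "station_b"], reports, strict=False
)
```

With strict=True the same call would raise on the first missing report, which is the safer default when a complete benchmark run is expected.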