BenchmarkCallbackManager#
- class openstef_beam.benchmarking.BenchmarkCallbackManager(callbacks: list[BenchmarkCallback] | None = None)[source]#
Bases: BenchmarkCallback

Groups multiple callbacks behind a single BenchmarkCallback interface, so one manager can stand in for the whole set.
- Parameters:
callbacks (list[BenchmarkCallback] | None)
- __init__(callbacks: list[BenchmarkCallback] | None = None)[source]#
Initialize the callback manager.
- Parameters:
callbacks (list[BenchmarkCallback] | None) – List of callbacks to manage. If None, the manager starts with an empty list.
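A minimal construction sketch. It assumes BenchmarkCallback is importable from the same package and provides no-op defaults for hooks a subclass does not override; MyLoggingCallback is a hypothetical name.

```python
# Sketch, assuming BenchmarkCallback lives alongside the manager.
from openstef_beam.benchmarking import BenchmarkCallback, BenchmarkCallbackManager

class MyLoggingCallback(BenchmarkCallback):  # hypothetical subclass
    def on_benchmark_start(self, runner, targets):
        print(f"Starting benchmark over {len(targets)} targets")
        return True  # allow the run to proceed

manager = BenchmarkCallbackManager(callbacks=[MyLoggingCallback()])
empty = BenchmarkCallbackManager()  # callbacks=None starts with an empty list
```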
- add_callback(callback: BenchmarkCallback) → None[source]#
Add a new callback to the manager.
- Parameters:
callback (BenchmarkCallback)
- Return type:
None
- add_callbacks(callbacks: list[BenchmarkCallback]) → None[source]#
Add multiple callbacks to the manager.
- Parameters:
callbacks (list[BenchmarkCallback])
- Return type:
None
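A sketch of incremental registration via both methods; the callback classes are hypothetical, assuming as above that BenchmarkCallback provides no-op hook defaults.

```python
from openstef_beam.benchmarking import BenchmarkCallback, BenchmarkCallbackManager

class LoggingCallback(BenchmarkCallback):  # hypothetical
    def on_target_complete(self, runner, target):
        print(f"Done with {target}")

class TimingCallback(BenchmarkCallback):  # hypothetical
    pass

manager = BenchmarkCallbackManager()
manager.add_callback(LoggingCallback())                       # one at a time
manager.add_callbacks([TimingCallback(), LoggingCallback()])  # several at once
```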
- on_benchmark_start(runner: BenchmarkPipeline[Any, Any], targets: list[BenchmarkTarget]) → bool[source]#
Called when benchmark starts.
- Parameters:
runner (BenchmarkPipeline[Any, Any])
targets (list[BenchmarkTarget])
- Returns:
True if benchmark should start, False to skip.
- Return type:
bool
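The bool return turns this hook into a gate. How the manager combines results from several callbacks is not stated on this page; the sketch below assumes a run proceeds only if every callback agrees, and shows a hypothetical pre-flight check that vetoes the run.

```python
from pathlib import Path
from openstef_beam.benchmarking import BenchmarkCallback

class PreflightCheck(BenchmarkCallback):  # hypothetical
    def __init__(self, output_dir: Path):
        self.output_dir = output_dir

    def on_benchmark_start(self, runner, targets):
        if not self.output_dir.is_dir():
            print(f"{self.output_dir} missing; skipping benchmark")
            return False  # veto the run (combination rule is an assumption)
        return True
```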
- on_target_start(runner: BenchmarkPipeline[Any, Any], target: BenchmarkTarget) → bool[source]#
Called when processing a target begins.
- Parameters:
runner (BenchmarkPipeline[Any, Any])
target (BenchmarkTarget)
- Returns:
True if target processing should start, False to skip.
- Return type:
bool
- on_backtest_start(runner: BenchmarkPipeline[Any, Any], target: BenchmarkTarget) → bool[source]#
Called before backtest execution.
- Parameters:
runner (BenchmarkPipeline[Any, Any])
target (BenchmarkTarget)
- Returns:
True if backtesting should start, False to skip.
- Return type:
bool
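A sketch covering both per-target gates, on_target_start and on_backtest_start. The `name` attribute on BenchmarkTarget is assumed purely for illustration.

```python
from openstef_beam.benchmarking import BenchmarkCallback

class TargetFilter(BenchmarkCallback):  # hypothetical
    def __init__(self, skip: set[str]):
        self.skip = skip

    def on_target_start(self, runner, target):
        # `name` on BenchmarkTarget is an assumption for this sketch.
        return getattr(target, "name", "") not in self.skip

    def on_backtest_start(self, runner, target):
        return True  # no extra gating before the backtest itself
```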
- on_backtest_complete(runner: BenchmarkPipeline[Any, Any], target: BenchmarkTarget, predictions: TimeSeriesDataset) → None[source]#
Called after backtest completes.
- Parameters:
runner (BenchmarkPipeline[Any, Any])
target (BenchmarkTarget)
predictions (TimeSeriesDataset)
- Return type:
None
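A sketch of reacting to finished backtests. The TimeSeriesDataset API is not documented on this page, so the hypothetical callback only retains the object for later inspection.

```python
from openstef_beam.benchmarking import BenchmarkCallback

class PredictionCollector(BenchmarkCallback):  # hypothetical
    def __init__(self):
        self.results = []

    def on_backtest_complete(self, runner, target, predictions):
        # TimeSeriesDataset's interface is unspecified here; just keep it.
        self.results.append((target, predictions))
```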
- on_evaluation_start(runner: BenchmarkPipeline[Any, Any], target: BenchmarkTarget) → bool[source]#
Called before evaluation starts.
- Parameters:
runner (BenchmarkPipeline[Any, Any])
target (BenchmarkTarget)
- Returns:
True if evaluation should start, False to skip.
- Return type:
bool
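The same gating pattern applied to evaluation: returning False here would give a "backtest only" run, assuming the pipeline honors the skip as described above.

```python
from openstef_beam.benchmarking import BenchmarkCallback

class BacktestOnly(BenchmarkCallback):  # hypothetical
    def on_evaluation_start(self, runner, target):
        return False  # produce predictions, but skip the evaluation stage
```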
- on_evaluation_complete(runner: BenchmarkPipeline[Any, Any], target: BenchmarkTarget, report: EvaluationReport) → None[source]#
Called after evaluation completes.
- Parameters:
runner (BenchmarkPipeline[Any, Any])
target (BenchmarkTarget)
report (EvaluationReport)
- Return type:
None
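A sketch of collecting EvaluationReport objects as they are produced; the class name is hypothetical and the report's fields are not documented here.

```python
from openstef_beam.benchmarking import BenchmarkCallback

class ReportCollector(BenchmarkCallback):  # hypothetical
    def __init__(self):
        self.reports = []

    def on_evaluation_complete(self, runner, target, report):
        # EvaluationReport's fields are unspecified here; just retain it.
        self.reports.append((target, report))
```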
- on_analysis_complete(runner: BenchmarkPipeline[Any, Any], target: BenchmarkTarget | None, output: AnalysisOutput) → None[source]#
Called after analysis (visualization) completes for a target.
- Parameters:
runner (BenchmarkPipeline[Any, Any])
target (BenchmarkTarget | None)
output (AnalysisOutput)
- Return type:
None
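Since target is Optional for this hook, a handler should branch on it; None presumably marks analysis that is not tied to a single target. A hedged sketch:

```python
from openstef_beam.benchmarking import BenchmarkCallback

class AnalysisLogger(BenchmarkCallback):  # hypothetical
    def on_analysis_complete(self, runner, target, output):
        # target=None is assumed to mean benchmark-wide analysis.
        scope = "benchmark-wide" if target is None else f"target {target}"
        print(f"Analysis complete for {scope}")
```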
- on_target_complete(runner: BenchmarkPipeline[Any, Any], target: BenchmarkTarget) → None[source]#
Called when target processing finishes.
- Parameters:
runner (BenchmarkPipeline[Any, Any])
target (BenchmarkTarget)
- Return type:
None
- on_benchmark_complete(runner: BenchmarkPipeline[Any, Any], targets: list[BenchmarkTarget]) → None[source]#
Called when entire benchmark finishes.
- Parameters:
runner (BenchmarkPipeline[Any, Any])
targets (list[BenchmarkTarget])
- Return type:
None
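A sketch combining this hook with on_target_complete above for simple progress reporting; the class name is hypothetical.

```python
from openstef_beam.benchmarking import BenchmarkCallback

class ProgressReporter(BenchmarkCallback):  # hypothetical
    def __init__(self):
        self.done = 0

    def on_target_complete(self, runner, target):
        self.done += 1
        print(f"Completed {self.done} target(s)")

    def on_benchmark_complete(self, runner, targets):
        print(f"Benchmark finished: {self.done}/{len(targets)} targets")
```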
- on_error(runner: BenchmarkPipeline[Any, Any], target: BenchmarkTarget, error: Exception) → None[source]#
Called when an error occurs.
- Parameters:
runner (BenchmarkPipeline[Any, Any])
target (BenchmarkTarget)
error (Exception)
- Return type:
None
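A sketch of failure handling. Whether the pipeline continues with the remaining targets after on_error fires is not stated on this page, so the hypothetical callback only records the exception.

```python
import logging
from openstef_beam.benchmarking import BenchmarkCallback

class ErrorLogger(BenchmarkCallback):  # hypothetical
    def on_error(self, runner, target, error):
        # Record the failure with its traceback; do not re-raise.
        logging.getLogger(__name__).error(
            "Benchmark failed for %s", target, exc_info=error
        )
```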