InMemoryBenchmarkStorage#

class openstef_beam.benchmarking.InMemoryBenchmarkStorage[source]#

Bases: BenchmarkStorage

In-memory implementation of BenchmarkStorage for testing and temporary use.

Stores all benchmark data in memory using dictionaries. All data is lost when the instance is destroyed; nothing is persisted to disk.

Note

This implementation is not suitable for production use with large datasets or when persistence across sessions is required.
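
Because it is a drop-in BenchmarkStorage, an instance can be handed to any code that expects one; a minimal sketch for a test setup:

    from openstef_beam.benchmarking import InMemoryBenchmarkStorage

    # Fresh, empty storage: no files, databases, or cleanup involved.
    storage = InMemoryBenchmarkStorage()

The examples for the individual methods below build on this `storage` instance.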

__init__()[source]#

Initialize empty in-memory storage containers.

save_backtest_output(target: BenchmarkTarget, output: TimeSeriesDataset) None[source]#

Save the backtest output for a specific benchmark target.

Stores the results of a backtest execution, keyed by the target configuration. Saving again for the same target overwrites the previously stored output.

Parameters:

  • target (BenchmarkTarget)

  • output (TimeSeriesDataset)

Return type:

None
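
A hedged example of saving and overwriting (`target` and `forecast_dataset` are assumed to be a BenchmarkTarget and a TimeSeriesDataset produced by the benchmark run; they are not constructed here):

    # `target`: BenchmarkTarget, `forecast_dataset`: TimeSeriesDataset (assumed to exist).
    storage.save_backtest_output(target, forecast_dataset)

    # Saving again for the same target replaces the previously stored dataset.
    storage.save_backtest_output(target, forecast_dataset)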

load_backtest_output(target: BenchmarkTarget) TimeSeriesDataset[source]#

Load previously saved backtest output for a benchmark target.

Parameters:

target (BenchmarkTarget)

Returns:

The stored backtest results as a TimeSeriesDataset.

Raises:

KeyError – When no backtest output exists for the given target.

Return type:

TimeSeriesDataset

has_backtest_output(target: BenchmarkTarget) bool[source]#

Check if backtest output exists for the given benchmark target.

Parameters:

target (BenchmarkTarget)

Returns:

True if backtest output is stored for the target, False otherwise.

Return type:

bool
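
Together with load_backtest_output, this supports a check-before-load pattern; a sketch, assuming `target` is the same BenchmarkTarget used when saving:

    if storage.has_backtest_output(target):
        backtest = storage.load_backtest_output(target)
    else:
        # load_backtest_output would raise KeyError here, so run the backtest
        # for this target (or skip it) instead of loading.
        backtest = None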

save_evaluation_output(target: BenchmarkTarget, output: EvaluationReport) None[source]#

Save the evaluation report for a specific benchmark target.

Stores the evaluation metrics and analysis results, keyed by the target configuration. Saving again for the same target overwrites the previously stored report.

Parameters:

  • target (BenchmarkTarget)

  • output (EvaluationReport)

Return type:

None

load_evaluation_output(target: BenchmarkTarget) EvaluationReport[source]#

Load previously saved evaluation report for a benchmark target.

Parameters:

target (BenchmarkTarget)

Returns:

The stored evaluation report containing metrics and analysis results.

Raises:

KeyError – When no evaluation output exists for the given target.

Return type:

EvaluationReport

has_evaluation_output(target: BenchmarkTarget) bool[source]#

Check if evaluation output exists for the given benchmark target.

Parameters:

target (BenchmarkTarget)

Returns:

True if evaluation output is stored for the target, False otherwise.

Return type:

bool
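
The evaluation methods follow the same pattern as the backtest methods; a sketch, assuming `report` is an EvaluationReport produced by the evaluation step:

    storage.save_evaluation_output(target, report)

    if storage.has_evaluation_output(target):
        stored_report = storage.load_evaluation_output(target)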

save_analysis_output(output: AnalysisOutput) None[source]#

Save analysis output, optionally associated with a benchmark target.

Parameters:

output (AnalysisOutput) – The analysis results to store.

Return type:

None

has_analysis_output(scope: AnalysisScope) bool[source]#

Check if analysis output exists for the given scope.

Parameters:

scope (AnalysisScope)

Returns:

True if analysis output exists for the scope, False otherwise.

Return type:

bool
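
A sketch of the analysis methods, assuming `analysis` is an AnalysisOutput and `scope` is the AnalysisScope it was produced for (both created elsewhere):

    storage.save_analysis_output(analysis)

    if storage.has_analysis_output(scope):
        ...  # analysis results are available for this scope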

property analysis_data: dict[AnalysisScope, AnalysisOutput]#

Get all stored analysis outputs.

Returns:

Mapping from analysis scopes to their corresponding outputs.

Return type:

dict
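
The property is convenient for asserting on everything that was stored during a test; a sketch:

    # Iterate over every stored AnalysisOutput, keyed by its AnalysisScope.
    for scope, analysis_output in storage.analysis_data.items():
        print(scope, analysis_output)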