SubsetMetric#
- class openstef_beam.evaluation.SubsetMetric(**data: Any) → None[source]#
Bases: BaseModel

Container for evaluation metrics computed on a data subset.
Stores performance metrics organized by quantile and window, enabling detailed analysis of forecast quality across different probability levels and temporal periods.
- Parameters:
data (Any)
- timestamp: datetime#
- metrics: dict[Union[Quantile, Literal['global']], dict[str, Annotated[float]]]#
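A minimal construction sketch (not taken from the package's documentation). It assumes Quantile validates plain float probability levels such as 0.5 and that metric values are ordinary floats; check the openstef_beam source for the exact Quantile semantics.

```python
from datetime import datetime, timezone

from openstef_beam.evaluation import SubsetMetric

# Hypothetical metric names and values, for illustration only.
sm = SubsetMetric(
    timestamp=datetime(2025, 1, 1, tzinfo=timezone.utc),
    metrics={
        0.1: {"pinball": 0.42, "mae": 1.3},  # assumes Quantile coerces from float
        0.5: {"pinball": 0.35, "mae": 1.1},
        0.9: {"pinball": 0.47, "mae": 1.5},
        "global": {"rmse": 2.0},             # subset-wide metrics live under 'global'
    },
)
```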
- get_quantiles() → list[Quantile][source]#
Return a list of quantiles present in the metrics.
- Return type: list[Quantile]
- Returns: Sorted list of quantile values (excluding 'global').
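A short usage sketch, continuing from the hypothetical sm instance constructed above:

```python
# Only quantile keys are returned; the 'global' entry is excluded and the list is sorted.
quantiles = sm.get_quantiles()
print(quantiles)  # [0.1, 0.5, 0.9] with the example data above
```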
- to_dataframe() → DataFrame[source]#
Convert the metrics to a pandas DataFrame.
- Return type: DataFrame
- Returns: DataFrame with quantiles as index and metric names as columns.
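Sketch of inspecting the same hypothetical instance as a DataFrame:

```python
df = sm.to_dataframe()      # quantiles as index, metric names as columns
print(df.loc[0.5, "mae"])   # 1.1 with the example data above
```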
- get_metric(quantile: Quantile | Literal['global'], metric_name: str) → Annotated[float, BeforeValidator(func=_convert_none_to_nan, json_schema_input_type=PydanticUndefined)] | None[source]#
Retrieve a specific metric value for a given quantile.
- Parameters:
quantile (Union[Quantile, Literal['global']]) – The quantile level or 'global'.
metric_name (str) – The name of the metric to retrieve.
- Returns: The metric value if it exists, otherwise None.
- Return type: Optional[float]
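Point lookups with get_metric, again on the hypothetical instance above; a missing quantile/metric combination yields None:

```python
print(sm.get_metric(0.5, "mae"))        # 1.1 with the example data
print(sm.get_metric("global", "rmse"))  # 2.0
print(sm.get_metric(0.5, "crps"))       # None: metric not present for this quantile
```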
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'protected_namespaces': (), 'ser_json_inf_nan': 'null'}#
Configuration for the model; should be a dictionary conforming to Pydantic's ConfigDict (pydantic.config.ConfigDict).