Schemas
This module defines the pydantic schemas used to validate the configuration before a training run is started. The top-level config YAML matches the BaseSchema.
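For example, a run configuration composed into a single YAML file can be validated in one step. This is a minimal sketch, assuming the config is loaded with OmegaConf; the path is a placeholder, and this is not necessarily the exact entry point used by the training CLI:

```python
from omegaconf import OmegaConf

from anemoi.training.schemas.base_schema import BaseSchema

# Load the composed run configuration (path is a placeholder).
cfg = OmegaConf.load("config.yaml")

# Validation happens at construction time: a mistyped key or a wrong type
# raises a pydantic ValidationError, since extra keys are forbidden.
config = BaseSchema(**OmegaConf.to_container(cfg, resolve=True))
```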
- class anemoi.training.schemas.base_schema.BaseSchema(*, data: DataSchema, dataloader: DataLoaderSchema, datamodule: DataModuleSchema, diagnostics: DiagnosticsSchema, hardware: HardwareSchema, graph: BaseGraphSchema, model: BaseModelSchema | EnsModelSchema, training: ForecasterSchema | ForecasterEnsSchema | InterpolationSchema, config_validation: bool = True)
Bases:
BaseModel
Top-level schema for the training configuration.
- data: DataSchema
Data configuration.
- dataloader: DataLoaderSchema
Dataloader configuration.
- datamodule: DataModuleSchema
Datamodule configuration.
- diagnostics: DiagnosticsSchema
Diagnostics configuration such as logging, plots and metrics.
- hardware: HardwareSchema
Hardware configuration.
- graph: BaseGraphSchema
Graph configuration.
- model: ModelSchema
Model configuration.
- training: TrainingSchema
Training configuration.
- model_dump(by_alias: bool = False) → dict
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
- Parameters:
mode – The mode in which to_python should run. If mode is ‘json’, the output will only contain JSON serializable types. If mode is ‘python’, the output may contain non-JSON-serializable Python objects.
include – A set of fields to include in the output.
exclude – A set of fields to exclude from the output.
context – Additional context to pass to the serializer.
by_alias – Whether to use the field’s alias in the dictionary key if defined.
exclude_unset – Whether to exclude fields that have not been explicitly set.
exclude_defaults – Whether to exclude fields that are set to their default value.
exclude_none – Whether to exclude fields that have a value of None.
round_trip – If True, dumped values should be valid as input for non-idempotent types such as Json[T].
warnings – How to handle serialization errors. False/”none” ignores them, True/”warn” logs errors, “error” raises a [PydanticSerializationError][pydantic_core.PydanticSerializationError].
fallback – A function to call when an unknown value is encountered. If not provided, a [PydanticSerializationError][pydantic_core.PydanticSerializationError] error is raised.
serialize_as_any – Whether to serialize fields with duck-typing serialization behavior.
- Returns:
A dictionary representation of the model.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
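Continuing the sketch above, model_dump turns the validated schema back into a plain dictionary, for example to hand it to OmegaConf or to log the resolved hyperparameters; by_alias=True keeps alias keys such as _target_:

```python
# `config` is the validated BaseSchema instance from the earlier sketch.
config_dict = config.model_dump(by_alias=True)
assert isinstance(config_dict, dict)  # plain dict, safe to serialise or log
```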
- class anemoi.training.schemas.base_schema.UnvalidatedBaseSchema(*, data: Any, dataloader: Any, datamodule: Any, diagnostics: Any, hardware: Any, graph: Any, model: Any, training: Any, config_validation: bool = False)
Bases:
BaseModel
- data: Any
Data configuration.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- dataloader: Any
Dataloader configuration.
- datamodule: Any
Datamodule configuration.
- diagnostics: Any
Diagnostics configuration such as logging, plots and metrics.
- hardware: Any
Hardware configuration.
- graph: Any
Graph configuration.
- model: Any
Model configuration.
- training: Any
Training configuration.
- model_dump(by_alias: bool = False) → dict
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
- Parameters:
mode – The mode in which to_python should run. If mode is ‘json’, the output will only contain JSON serializable types. If mode is ‘python’, the output may contain non-JSON-serializable Python objects.
include – A set of fields to include in the output.
exclude – A set of fields to exclude from the output.
context – Additional context to pass to the serializer.
by_alias – Whether to use the field’s alias in the dictionary key if defined.
exclude_unset – Whether to exclude fields that have not been explicitly set.
exclude_defaults – Whether to exclude fields that are set to their default value.
exclude_none – Whether to exclude fields that have a value of None.
round_trip – If True, dumped values should be valid as input for non-idempotent types such as Json[T].
warnings – How to handle serialization errors. False/”none” ignores them, True/”warn” logs errors, “error” raises a [PydanticSerializationError][pydantic_core.PydanticSerializationError].
fallback – A function to call when an unknown value is encountered. If not provided, a [PydanticSerializationError][pydantic_core.PydanticSerializationError] error is raised.
serialize_as_any – Whether to serialize fields with duck-typing serialization behavior.
- Returns:
A dictionary representation of the model.
The schemas below are organised identically to the training config files.
Data
- class anemoi.training.schemas.data.NormalizerSchema(*, default: str | None, remap: dict[str, str] | None = <factory>, std: list[str] | None = <factory>, mean_std: list[str] | None = <factory>, min_max: list[str] | None = <factory>, max: list[str] | None = <factory>, none: list[str] | None = <factory>)
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.data.ImputerSchema(*, default: str, maximum: list[str] | None, minimum: list[str] | None, none: list[str] | None = <factory>)
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.data.RemapperSchema(*, default: str, none: list[str] | None = <factory>)
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.data.PreprocessorTarget(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
- class anemoi.training.schemas.data.PreprocessorSchema(*, _target_: PreprocessorTarget, config: NormalizerSchema | ImputerSchema | RemapperSchema)
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- target_: PreprocessorTarget
Processor object from anemoi.models.preprocessing.[normalizer|imputer|remapper].
- config: NormalizerSchema | ImputerSchema | RemapperSchema
Target schema containing processor methods.
- class anemoi.training.schemas.data.DataSchema(*, format: str, frequency: str, timestep: str, processors: dict[str, PreprocessorSchema], forcing: list[str], diagnostic: list[str], remapped: dict | None, num_features: int | None)
Bases:
BaseModel
A class used to represent the overall configuration of the dataset.
- num_features
The number of features in the forecast state. To be set in the code.
- Type:
Optional[int]
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- processors: dict[str, PreprocessorSchema]
Preprocessing steps applied to the data. Processors, including imputers and normalizers, are applied in order of definition.
- forcing: list[str]
Features that are not part of the forecast state but are used as forcing to generate the forecast state.
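As an illustration, a normalizer entry can be built directly; the method name "mean-std" and the variable name below are assumptions, not a canonical configuration:

```python
from anemoi.training.schemas.data import NormalizerSchema

# Only `default` is required; the per-method variable lists default to empty.
norm = NormalizerSchema(default="mean-std", none=["cos_latitude"])
```

This object corresponds to the config field of a PreprocessorSchema entry under processors.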
Dataloader
- class anemoi.training.schemas.dataloader.Frequency(root: RootModelRootType = PydanticUndefined)
Bases:
RootModel
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.dataloader.DatasetSchema(*, dataset: str | dict | Path | list[dict] | None = None, start: str | int | None = None, end: str | int | None = None, frequency: Frequency, drop: list | None = None)
Bases:
BaseModel
Dataset configuration schema.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.dataloader.LoaderSet(*, training: Annotated[int, Gt(gt=0)] | None, validation: Annotated[int, Gt(gt=0)] | None, test: Annotated[int, Gt(gt=0)] | None)
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.dataloader.FullGridIndicesSchema(*, _target_: Literal['anemoi.training.data.grid_indices.FullGrid'] = 'anemoi.training.data.grid_indices.FullGrid', nodes_name: str)
Bases:
BaseModel
- target_: Literal['anemoi.training.data.grid_indices.FullGrid']
Grid indices for full grid class implementation from anemoi.training.data.grid_indices.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.dataloader.MaskedGridIndicesSchema(*, _target_: Literal['anemoi.training.data.grid_indices.MaskedGrid'] = 'anemoi.training.data.grid_indices.MaskedGrid', nodes_name: str, node_attribute_name: str)
Bases:
BaseModel
- target_: Literal['anemoi.training.data.grid_indices.MaskedGrid']
Grid indices for masked grid class implementation from anemoi.training.data.grid_indices.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.dataloader.DataLoaderSchema(*, prefetch_factor: Annotated[int, Ge(ge=0)], pin_memory: bool, num_workers: LoaderSet, batch_size: LoaderSet, limit_batches: LoaderSet, training: DatasetSchema | DictConfig, validation: DatasetSchema | DictConfig, test: DatasetSchema | DictConfig, validation_rollout: Annotated[int, Gt(gt=0)], read_group_size: Annotated[int, Gt(gt=0)], grid_indices: FullGridIndicesSchema | MaskedGridIndicesSchema, **extra_data: Any)
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'allow'}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- pin_memory: bool
If True, the data loader will copy Tensors into device/CUDA pinned memory before returning them.
- limit_batches: LoaderSet
Limit the number of batches to run. Defaults to null, in which case all batches are run.
- training: DatasetSchema | DictConfig
Training DatasetSchema.
- validation: DatasetSchema | DictConfig
Validation DatasetSchema.
- test: DatasetSchema | DictConfig
Test DatasetSchema.
- validation_rollout: PositiveInt
Number of rollouts to use for validation; must be greater than or equal to the rollout expected by callbacks.
- read_group_size: PositiveInt
Number of GPUs per reader group. Defaults to number of GPUs (see BaseSchema validators).
- grid_indices: FullGridIndicesSchema | MaskedGridIndicesSchema
Grid indices schema.
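The per-stage integers (num_workers, batch_size, limit_batches) are grouped in LoaderSet; a hedged sketch with assumed values:

```python
from anemoi.training.schemas.dataloader import LoaderSet

# One positive value per stage; all three fields are required.
num_workers = LoaderSet(training=8, validation=4, test=1)
batch_size = LoaderSet(training=2, validation=4, test=4)
```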
Diagnostics
- class anemoi.training.schemas.diagnostics.LongRolloutPlotsSchema
Bases:
BaseModel
- target_: Literal['anemoi.training.diagnostics.callbacks.plot.LongRolloutPlots']
LongRolloutPlots object from anemoi training diagnostics callbacks.
- cmap_accumulation: list[str] | None
Colors of the accumulation levels. Defaults to None; kept for backward compatibility.
- animation_interval: int | None
Delay between frames in the animation in milliseconds, by default 400.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.diagnostics.GraphTrainableFeaturesPlotSchema(*, _target_: Literal['anemoi.training.diagnostics.callbacks.plot.GraphTrainableFeaturesPlot'], every_n_epochs: int | None)
Bases:
BaseModel
- target_: Literal['anemoi.training.diagnostics.callbacks.plot.GraphTrainableFeaturesPlot']
GraphTrainableFeaturesPlot object from anemoi training diagnostics callbacks.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.diagnostics.PlotLossSchema(*, _target_: Literal['anemoi.training.diagnostics.callbacks.plot.PlotLoss'], parameter_groups: dict[str, list[str]], every_n_batches: int | None = None)
Bases:
BaseModel
- target_: Literal['anemoi.training.diagnostics.callbacks.plot.PlotLoss']
PlotLoss object from anemoi training diagnostics callbacks.
- parameter_groups: dict[str, list[str]]
Dictionary with parameter groups with parameter names as key.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.diagnostics.MatplotlibColormapSchema(*, _target_: Literal['anemoi.training.utils.custom_colormaps.MatplotlibColormap'], name: str, variables: list[str] | None = None)
Bases:
BaseModel
- target_: Literal['anemoi.training.utils.custom_colormaps.MatplotlibColormap']
CustomColormap object from anemoi training utils.
- variables: list[str] | None
A list of strings representing the variables for which the colormap is used, by default None.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.diagnostics.MatplotlibColormapClevelsSchema(*, _target_: Literal['anemoi.training.utils.custom_colormaps.MatplotlibColormapClevels'], clevels: list, variables: list[str] | None = None)
Bases:
BaseModel
- target_: Literal['anemoi.training.utils.custom_colormaps.MatplotlibColormapClevels']
CustomColormap object from anemoi training utils.
- variables: list[str] | None
A list of strings representing the variables for which the colormap is used, by default None.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.diagnostics.DistinctipyColormapSchema(*, _target_: Literal['anemoi.training.utils.custom_colormaps.DistinctipyColormap'], n_colors: int, variables: list[str] | None = None, colorblind_type: str | None = None)
Bases:
BaseModel
- target_: Literal['anemoi.training.utils.custom_colormaps.DistinctipyColormap']
CustomColormap object from anemoi training utils.
- variables: list[str] | None
A list of strings representing the variables for which the colormap is used, by default None.
- colorblind_type: str | None
The type of colorblindness to simulate. If None, the default colorblindness from distinctipy is applied.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.diagnostics.PlotSampleSchema(*, _target_: Literal['anemoi.training.diagnostics.callbacks.plot.PlotSample'], sample_idx: int, parameters: list[str], accumulation_levels_plot: list[float], cmap_accumulation: list[str] | None = None, precip_and_related_fields: list[str] | None = None, per_sample: int, every_n_batches: int | None = None, colormaps: dict[str, Annotated[MatplotlibColormapSchema | MatplotlibColormapClevelsSchema | DistinctipyColormapSchema, FieldInfo(annotation=NoneType, required=True, discriminator='target_')]] | None = None)
Bases:
BaseModel
- target_: Literal['anemoi.training.diagnostics.callbacks.plot.PlotSample']
PlotSample object from anemoi training diagnostics callbacks.
- cmap_accumulation: list[str] | None
Colors of the accumulation levels. Defaults to None; kept for backward compatibility.
- precip_and_related_fields: list[str] | None
List of precipitation-related fields, by default None.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.diagnostics.PlotSpectrumSchema(*, _target_: Literal['anemoi.training.diagnostics.callbacks.plot.PlotSpectrum'], sample_idx: int, parameters: list[str], every_n_batches: int | None = None)
Bases:
BaseModel
- target_: Literal['anemoi.training.diagnostics.callbacks.plot.PlotSpectrum']
PlotSpectrum object from anemoi training diagnostics callbacks.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.diagnostics.PlotHistogramSchema(*, _target_: Literal['anemoi.training.diagnostics.callbacks.plot.PlotHistogram'], sample_idx: int, parameters: list[str], precip_and_related_fields: list[str] | None = None, every_n_batches: int | None = None)
Bases:
BaseModel
- target_: Literal['anemoi.training.diagnostics.callbacks.plot.PlotHistogram']
PlotHistogram object from anemoi training diagnostics callbacks.
- precip_and_related_fields: list[str] | None
List of precipitation-related fields, by default None.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.diagnostics.PlotSchema
Bases:
BaseModel
- frequency: PlottingFrequency
Frequency of the plotting.
- precip_and_related_fields: list[str] | None
List of precipitation-related fields from the parameters list.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.diagnostics.PlottingFrequency(*, batch: Annotated[int, Gt(gt=0)], epoch: Annotated[int, Gt(gt=0)])
Bases:
BaseModel
- batch: PositiveInt
Frequency of the plotting in number of batches.
- epoch: PositiveInt
Frequency of the plotting in number of epochs.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.diagnostics.TimeLimitSchema(*, _target_: Literal['anemoi.training.diagnostics.callbacks.stopping.TimeLimit'], limit: int | str, record_file: str | None = None)
Bases:
BaseModel
- target_: Literal['anemoi.training.diagnostics.callbacks.stopping.TimeLimit']
TimeLimit object from anemoi training diagnostics callbacks.
- limit: int | str
Time limit, if int, assumed to be hours, otherwise must be a string with units (e.g. ‘1h’, ‘30m’).
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.diagnostics.EarlyStoppingSchema(*, _target_: Literal['anemoi.training.diagnostics.callbacks.stopping.EarlyStopping'], monitor: str, min_delta: float = 0.0, patience: int = 3, verbose: bool = False, mode: Literal['min', 'max'] = 'min', strict: bool = True, check_finite: bool = True, stopping_threshold: float | None = None, divergence_threshold: float | None = None, check_on_train_epoch_end: bool | None = None)
Bases:
BaseModel
- mode: Literal['min', 'max']
One of {'min', 'max'}; determines whether minimisation or maximisation of the metric counts as an improvement.
- stopping_threshold: float | None
Stop training immediately once the monitored quantity reaches this threshold.
- divergence_threshold: float | None
Stop training as soon as the monitored quantity becomes worse than this threshold.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
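A hedged construction example for early stopping; the monitored metric name is an assumption for illustration, and the _target_ alias is used as shown in the rendered signature:

```python
from anemoi.training.schemas.diagnostics import EarlyStoppingSchema

early_stop = EarlyStoppingSchema(
    _target_="anemoi.training.diagnostics.callbacks.stopping.EarlyStopping",
    monitor="val_wmse_epoch",  # hypothetical metric name
    patience=5,
    mode="min",  # stop when the metric stops decreasing
)
```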
- class anemoi.training.schemas.diagnostics.Debug(*, anomaly_detection: bool)
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.diagnostics.CheckpointSchema(*, save_frequency: int | None, num_models_saved: int)
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.diagnostics.WandbSchema(*, enabled: bool, offline: bool, log_model: bool | Literal['all'], project: str, gradients: bool, parameters: bool, entity: str | None = None)
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.diagnostics.MlflowSchema(*, enabled: bool, offline: bool, authentication: bool, log_model: bool | ~typing.Literal['all'], tracking_uri: str | None, experiment_name: str, project_name: str, system: bool, terminal: bool, run_name: str | None, on_resume_create_child: bool, expand_hyperparams: list[str] = <factory>, http_max_retries: ~typing.Annotated[int, ~annotated_types.Gt(gt=0)])
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- log_model: bool | Literal['all']
Log checkpoints created by ModelCheckpoint as MLFlow artifacts. If True, checkpoints are logged at the end of training. If ‘all’, checkpoints are logged during training.
- expand_hyperparams: list[str]
Keys to expand within params. Any key being expanded will have lists converted according to expand_iterables.
- http_max_retries: PositiveInt
Specifies the maximum number of retries for MLflow HTTP requests, default 35.
- class anemoi.training.schemas.diagnostics.TensorboardSchema(*, enabled: bool)
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.diagnostics.LoggingSchema(*, wandb: WandbSchema, tensorboard: TensorboardSchema, mlflow: MlflowSchema, interval: Annotated[int, Gt(gt=0)])
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- wandb: WandbSchema
W&B logging schema.
- tensorboard: TensorboardSchema
TensorBoard logging schema.
- mlflow: MlflowSchema
MLflow logging schema.
- interval: PositiveInt
Logging frequency in batches.
- class anemoi.training.schemas.diagnostics.MemorySchema(*, enabled: bool, steps: Annotated[int, Gt(gt=0)], warmup: Annotated[int, Ge(ge=0)], extra_plots: bool, trace_rank0_only: bool)
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- steps: PositiveInt
Frequency of memory profiling. Defaults to 5.
- warmup: NonNegativeInt
Number of steps to discard before the profiler starts recording traces. Defaults to 2.
- class anemoi.training.schemas.diagnostics.Snapshot(*, enabled: bool, steps: Annotated[int, Gt(gt=0)], warmup: Annotated[int, Ge(ge=0)])
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- steps: PositiveInt
Frequency of snapshots. Defaults to 4.
- warmup: NonNegativeInt
Number of steps to discard before the profiler starts recording traces. Defaults to 0.
- class anemoi.training.schemas.diagnostics.Profiling(*, enabled: bool, verbose: bool | None = None)
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.diagnostics.BenchmarkProfilerSchema(*, memory: ~anemoi.training.schemas.diagnostics.MemorySchema = <factory>, time: ~anemoi.training.schemas.diagnostics.Profiling = <factory>, speed: ~anemoi.training.schemas.diagnostics.Profiling = <factory>, system: ~anemoi.training.schemas.diagnostics.Profiling = <factory>, model_summary: ~anemoi.training.schemas.diagnostics.Profiling = <factory>, snapshot: ~anemoi.training.schemas.diagnostics.Snapshot = <factory>)
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- memory: MemorySchema
Schema for memory report containing metrics associated with CPU and GPU memory allocation.
- class anemoi.training.schemas.diagnostics.DiagnosticsSchema(*, plot: ~anemoi.training.schemas.diagnostics.PlotSchema | None = None, callbacks: list = <factory>, benchmark_profiler: ~anemoi.training.schemas.diagnostics.BenchmarkProfilerSchema, debug: ~anemoi.training.schemas.diagnostics.Debug, profiler: bool, log: ~anemoi.training.schemas.diagnostics.LoggingSchema, enable_progress_bar: bool, print_memory_summary: bool, enable_checkpointing: bool, checkpoint: dict[str, ~anemoi.training.schemas.diagnostics.CheckpointSchema] = <factory>)
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- plot: PlotSchema | None
Plot schema.
- benchmark_profiler: BenchmarkProfilerSchema
Benchmark profiler schema for profile command.
- log: LoggingSchema
Log schema.
- checkpoint: dict[str, CheckpointSchema]
Checkpoint schema for defined frequency (every_n_minutes, every_n_epochs, …).
Hardware
- class anemoi.training.schemas.hardware.Checkpoint(*, every_n_epochs: str = 'anemoi-by_epoch-epoch_{epoch:03d}-step_{step:06d}', every_n_train_steps: str = 'anemoi-by_step-epoch_{epoch:03d}-step_{step:06d}', every_n_minutes: str = 'anemoi-by_time-epoch_{epoch:03d}-step_{step:06d}')
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.hardware.FilesSchema(*, dataset: Path | dict[str, Path] | None = None, graph: Path | None = None, truncation: Path | None = None, truncation_inv: Path | None = None, checkpoint: dict[str, str], warm_start: str | None = None)
Bases:
BaseModel
- checkpoint: dict[str, str]
Each dictionary key is a checkpoint name, and the value is the path to the checkpoint file.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.hardware.Logs(*, wandb: Path | None = None, mlflow: Path | None = None, tensorboard: Path | None = None)
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.hardware.PathsSchema(*, data: Path | dict[str, Path] | None = None, graph: Path | None = None, truncation: Path | None = None, output: Path | None = None, logs: Logs | None = None, checkpoints: Path, plots: Path | None = None, profiler: Path | None)
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- checkpoints: Path
Path to the checkpoints directory.
- class anemoi.training.schemas.hardware.HardwareSchema(*, accelerator: ~typing.Annotated[str, ~pydantic.functional_validators.AfterValidator(func=functools.partial(allowed_values, values=['cpu', 'gpu', 'auto', 'cuda', 'tpu']))] = 'auto', num_gpus_per_node: ~typing.Annotated[int, ~annotated_types.Ge(ge=0)] = 1, num_nodes: ~typing.Annotated[int, ~annotated_types.Ge(ge=0)] = 1, num_gpus_per_model: ~typing.Annotated[int, ~annotated_types.Ge(ge=0)] = 1, num_gpus_per_ensemble: ~typing.Annotated[int, ~annotated_types.Ge(ge=0)] = 1, files: ~anemoi.training.schemas.hardware.FilesSchema, paths: ~anemoi.training.schemas.hardware.PathsSchema)
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- accelerator: Annotated[str, AfterValidator(partial(allowed_values, values=['cpu', 'gpu', 'auto', 'cuda', 'tpu']))]
Accelerator to use for training.
- num_gpus_per_node: NonNegativeInt
Number of GPUs per node.
- num_nodes: NonNegativeInt
Number of nodes.
- num_gpus_per_model: NonNegativeInt
Number of GPUs per model.
- num_gpus_per_ensemble: NonNegativeInt
Number of GPUs per ensemble.
- files: FilesSchema
Files schema.
- paths: PathsSchema
Paths schema.
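A minimal sketch of a hardware section built in Python; all paths and the checkpoint file pattern are placeholders:

```python
from pathlib import Path

from anemoi.training.schemas.hardware import FilesSchema, HardwareSchema, PathsSchema

hardware = HardwareSchema(
    accelerator="gpu",
    num_gpus_per_node=4,
    num_nodes=1,
    files=FilesSchema(checkpoint={"every_n_epochs": "anemoi-by_epoch-epoch_{epoch:03d}"}),
    paths=PathsSchema(checkpoints=Path("/path/to/checkpoints"), profiler=None),
)
```

Note that the rendered signature of PathsSchema declares profiler without a default, so it is passed explicitly here (as None).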
Graph
- class anemoi.graphs.schemas.base_graph.NodeSchema(*, node_builder: ZarrNodeSchema | NPZnodeSchema | TextNodeSchema | ICONNodeSchema | ICONMeshNodeSchema | LimitedAreaNPZFileNodesSchema | ReducedGaussianGridNodeSchema | IcosahedralandHealPixNodeSchema | LimitedAreaIcosahedralandHealPixNodeSchema | StretchedIcosahdralNodeSchema, attributes: dict[str, PlanarAreaWeightSchema | SphericalAreaWeightSchema | CutOutMaskSchema | NonmissingAnemoiDatasetVariableSchema | BooleanOperationSchema] | None = None)
Bases:
BaseModel
- node_builder: NodeBuilderSchemas
Node builder schema.
- attributes: dict[str, NodeAttributeSchemas] | None
Dictionary of attributes with names as keys and anemoi.graphs.nodes.attributes objects as values.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.graphs.schemas.base_graph.EdgeSchema(*, source_name: str, target_name: str, edge_builders: list[Annotated[KNNEdgeSchema | CutoffEdgeSchema | MultiScaleEdgeSchema | ICONTopologicalEdgeSchema, FieldInfo(annotation=NoneType, required=True, discriminator='target_')]], attributes: dict[str, BaseEdgeAttributeSchema | EdgeAttributeFromNodeSchema])
Bases:
BaseModel
- attributes: dict[str, EdgeAttributeSchema]
Dictionary of attributes with names as keys and anemoi.graphs.edges.attributes objects as values.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.graphs.schemas.base_graph.BaseGraphSchema(*, nodes: dict[str, ~anemoi.graphs.schemas.base_graph.NodeSchema] | None = None, edges: list[~anemoi.graphs.schemas.base_graph.EdgeSchema] | None = None, overwrite: bool, post_processors: list[~typing.Annotated[~anemoi.graphs.schemas.post_processors.RemoveUnconnectedNodesSchema | ~anemoi.graphs.schemas.post_processors.RestrictEdgeLengthSchema, FieldInfo(annotation=NoneType, required=True, discriminator='target_')]] = <factory>, data: str, hidden: str | list[str])
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- nodes: dict[str, NodeSchema] | None
Nodes schema for all types of nodes (e.g. data, hidden).
- edges: list[EdgeSchema] | None
List of edges schema.
- hidden: str | list[str]
Key name for the hidden nodes. Defaults to 'hidden'.
Model
- class anemoi.models.schemas.models.DefinedModels(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
- class anemoi.models.schemas.models.Model(*, _target_: DefinedModels, _convert_: str = 'all')
Bases:
BaseModel
- target_: DefinedModels
Model object defined in anemoi.models.model.
- convert_: str
The target's parameters to convert to primitive containers. Other parameters will use OmegaConf. Defaults to 'all'.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.models.schemas.models.TrainableParameters(*, data: Annotated[int, Ge(ge=0)], hidden: Annotated[int, Ge(ge=0)])
Bases:
BaseModel
- data: NonNegativeInt
Size of the learnable data node tensor. Defaults to 8.
- hidden: NonNegativeInt
Size of the learnable hidden node tensor. Defaults to 8.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.models.schemas.models.ReluBoundingSchema(*, _target_: Literal['anemoi.models.layers.bounding.ReluBounding'], variables: list[str])
Bases:
BaseModel
- target_: Literal['anemoi.models.layers.bounding.ReluBounding']
Relu bounding object defined in anemoi.models.layers.bounding.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.models.schemas.models.LeakyReluBoundingSchema(*, _target_: Literal['anemoi.models.layers.bounding.LeakyReluBounding'], variables: list[str])
Bases:
ReluBoundingSchema
- target_: Literal['anemoi.models.layers.bounding.LeakyReluBounding']
Leaky Relu bounding object defined in anemoi.models.layers.bounding.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.models.schemas.models.FractionBoundingSchema(*, _target_: Literal['anemoi.models.layers.bounding.FractionBounding'], variables: list[str], min_val: float, max_val: float, total_var: str)
Bases:
BaseModel
- target_: Literal['anemoi.models.layers.bounding.FractionBounding']
Fraction bounding object defined in anemoi.models.layers.bounding.
- min_val: float
The minimum value for the HardTanh activation. Corresponds to the minimum fraction of the total_var.
- max_val: float
The maximum value for the HardTanh activation. Corresponds to the maximum fraction of the total_var.
- total_var: str
Variable from which the secondary variables are derived. For example, convective precipitation should be a fraction of total precipitation.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.models.schemas.models.LeakyFractionBoundingSchema(*, _target_: Literal['anemoi.models.layers.bounding.LeakyFractionBounding'], variables: list[str], min_val: float, max_val: float, total_var: str)
Bases:
FractionBoundingSchema
- target_: Literal['anemoi.models.layers.bounding.LeakyFractionBounding']
Leaky fraction bounding object defined in anemoi.models.layers.bounding.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.models.schemas.models.HardtanhBoundingSchema(*, _target_: Literal['anemoi.models.layers.bounding.HardtanhBounding'], variables: list[str], min_val: float, max_val: float)
Bases:
BaseModel
- target_: Literal['anemoi.models.layers.bounding.HardtanhBounding']
Hard tanh bounding object defined in anemoi.models.layers.bounding.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.models.schemas.models.LeakyHardtanhBoundingSchema(*, _target_: Literal['anemoi.models.layers.bounding.LeakyHardtanhBounding'], variables: list[str], min_val: float, max_val: float)
Bases:
HardtanhBoundingSchema
- target_: Literal['anemoi.models.layers.bounding.LeakyHardtanhBounding']
Leaky hard tanh bounding object defined in anemoi.models.layers.bounding.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.models.schemas.models.NormalizedReluBoundingSchema(*, _target_: Literal['anemoi.models.layers.bounding.NormalizedReluBounding'], variables: list[str], min_val: list[float], normalizer: list[str])
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.models.schemas.models.LeakyNormalizedReluBoundingSchema(*, _target_: Literal['anemoi.models.layers.bounding.LeakyNormalizedReluBounding'], variables: list[str], min_val: list[float], normalizer: list[str])
Bases:
NormalizedReluBoundingSchema
- target_: Literal['anemoi.models.layers.bounding.LeakyNormalizedReluBounding']
Leaky normalized Relu bounding object defined in anemoi.models.layers.bounding.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.models.schemas.models.BaseModelSchema(*, num_channels: ~typing.Annotated[int, ~annotated_types.Ge(ge=0)], model: ~anemoi.models.schemas.models.Model = <factory>, layer_kernels: dict[str, dict] | None = <factory>, trainable_parameters: ~anemoi.models.schemas.models.TrainableParameters = <factory>, bounding: list[~typing.Annotated[~anemoi.models.schemas.models.ReluBoundingSchema | ~anemoi.models.schemas.models.LeakyReluBoundingSchema | ~anemoi.models.schemas.models.FractionBoundingSchema | ~anemoi.models.schemas.models.LeakyFractionBoundingSchema | ~anemoi.models.schemas.models.HardtanhBoundingSchema | ~anemoi.models.schemas.models.LeakyHardtanhBoundingSchema | ~anemoi.models.schemas.models.NormalizedReluBoundingSchema | ~anemoi.models.schemas.models.LeakyNormalizedReluBoundingSchema, FieldInfo(annotation=NoneType, required=True, discriminator='target_')]], output_mask: str | None, latent_skip: bool = True, grid_skip: int | None = 0, processor: ~anemoi.models.schemas.processor.GNNProcessorSchema | ~anemoi.models.schemas.processor.GraphTransformerProcessorSchema | ~anemoi.models.schemas.processor.TransformerProcessorSchema, encoder: ~anemoi.models.schemas.encoder.GNNEncoderSchema | ~anemoi.models.schemas.encoder.GraphTransformerEncoderSchema, decoder: ~anemoi.models.schemas.decoder.GNNDecoderSchema | ~anemoi.models.schemas.decoder.GraphTransformerDecoderSchema)
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- num_channels: NonNegativeInt
Feature tensor size in the hidden space.
- layer_kernels: dict[str, dict] | None
Settings related to custom kernels for encoder, processor and decoder blocks.
- trainable_parameters: TrainableParameters
Learnable node and edge parameters.
- bounding: list[Bounding]
List of bounding configurations applied in order to the specified variables.
- latent_skip: bool
Add skip connection in latent space before/after processor. Currently only in interpolator.
- grid_skip: int | None
Index of the grid residual connection, or None to disable. Currently only in interpolator.
- processor: GNNProcessorSchema | GraphTransformerProcessorSchema | TransformerProcessorSchema
GNN processor schema.
- encoder: GNNEncoderSchema | GraphTransformerEncoderSchema
GNN encoder schema.
- decoder: GNNDecoderSchema | GraphTransformerDecoderSchema
GNN decoder schema.
- class anemoi.models.schemas.models.NoiseInjectorSchema(*, _target_: Literal['anemoi.models.layers.ensemble.NoiseConditioning'], noise_std: Annotated[int, Ge(ge=0)], noise_channels_dim: Annotated[int, Ge(ge=0)], noise_mlp_hidden_dim: Annotated[int, Ge(ge=0)], inject_noise: bool = True)
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- target_: Literal['anemoi.models.layers.ensemble.NoiseConditioning']
Noise injection layer class.
- noise_std: NonNegativeInt
Standard deviation of the noise to be injected.
- noise_channels_dim: NonNegativeInt
Number of channels in the noise tensor.
- noise_mlp_hidden_dim: NonNegativeInt
Hidden dimension of the MLP used to process the noise.
- class anemoi.models.schemas.models.EnsModelSchema(*, num_channels: ~typing.Annotated[int, ~annotated_types.Ge(ge=0)], model: ~anemoi.models.schemas.models.Model = <factory>, layer_kernels: dict[str, dict] | None = <factory>, trainable_parameters: ~anemoi.models.schemas.models.TrainableParameters = <factory>, bounding: list[~typing.Annotated[~anemoi.models.schemas.models.ReluBoundingSchema | ~anemoi.models.schemas.models.LeakyReluBoundingSchema | ~anemoi.models.schemas.models.FractionBoundingSchema | ~anemoi.models.schemas.models.LeakyFractionBoundingSchema | ~anemoi.models.schemas.models.HardtanhBoundingSchema | ~anemoi.models.schemas.models.LeakyHardtanhBoundingSchema | ~anemoi.models.schemas.models.NormalizedReluBoundingSchema | ~anemoi.models.schemas.models.LeakyNormalizedReluBoundingSchema, FieldInfo(annotation=NoneType, required=True, discriminator='target_')]], output_mask: str | None, latent_skip: bool = True, grid_skip: int | None = 0, processor: ~anemoi.models.schemas.processor.GNNProcessorSchema | ~anemoi.models.schemas.processor.GraphTransformerProcessorSchema | ~anemoi.models.schemas.processor.TransformerProcessorSchema, encoder: ~anemoi.models.schemas.encoder.GNNEncoderSchema | ~anemoi.models.schemas.encoder.GraphTransformerEncoderSchema, decoder: ~anemoi.models.schemas.decoder.GNNDecoderSchema | ~anemoi.models.schemas.decoder.GraphTransformerDecoderSchema, noise_injector: ~anemoi.models.schemas.models.NoiseInjectorSchema = <factory>)
Bases:
BaseModelSchema
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- noise_injector: NoiseInjectorSchema
Settings for the noise injection layer (see NoiseInjectorSchema).
Training
- class anemoi.training.schemas.training.GradientClip(*, val: float = 32.0, algorithm: ~typing.Annotated[str, ~pydantic.functional_validators.AfterValidator(func=functools.partial(allowed_values, values=['value', 'norm']))])
Bases:
BaseModel
Gradient clipping configuration.
- algorithm: Annotated[str, AfterValidator(partial(allowed_values, values=['value', 'norm']))]
The gradient clipping algorithm to use.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
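For instance (values are assumptions):

```python
from anemoi.training.schemas.training import GradientClip

# `algorithm` is validated against the allowed values 'value' and 'norm'.
clip = GradientClip(val=32.0, algorithm="norm")
```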
- class anemoi.training.schemas.training.SWA(*, enabled: bool, lr: Annotated[float, Ge(ge=0)])
Bases:
BaseModel
Stochastic weight averaging configuration.
See https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/
- lr: NonNegativeFloat
Learning rate for SWA.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.training.Rollout(*, start: Annotated[int, Gt(gt=0)], epoch_increment: Annotated[int, Ge(ge=0)], max: Annotated[int, Gt(gt=0)])
Bases:
BaseModel
Rollout configuration.
- start: PositiveInt
Number of rollouts to start with.
- epoch_increment: NonNegativeInt
Number of epochs to increment the rollout.
- max: PositiveInt
Maximum number of rollouts.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
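A hedged example of an incremental rollout schedule:

```python
from anemoi.training.schemas.training import Rollout

# Start with a 1-step rollout, add one step each epoch, cap at 12 steps.
rollout = Rollout(start=1, epoch_increment=1, max=12)
```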
- class anemoi.training.schemas.training.LR(*, rate: Annotated[float, Ge(ge=0)], iterations: Annotated[int, Ge(ge=0)], min: Annotated[float, Ge(ge=0)], warmup: Annotated[int, Ge(ge=0)])
Bases:
BaseModel
Learning rate configuration.
Changes in per-GPU batch_size should come with a rescaling of the local_lr in order to keep the global learning rate constant: global_lr = local_lr * num_gpus_per_node * num_nodes / gpus_per_model.
- rate: NonNegativeFloat
Initial learning rate. It is adjusted according to the hardware configuration.
- iterations: NonNegativeInt
Number of iterations.
- min: NonNegativeFloat
Minimum learning rate.
- warmup: NonNegativeInt
Number of warm-up iterations. Defaults to 1000.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
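A worked instance of the relation above, with assumed hardware numbers:

```python
local_lr = 6.25e-5
num_gpus_per_node, num_nodes, gpus_per_model = 4, 2, 1

# Keeping the global learning rate constant when scaling out:
global_lr = local_lr * num_gpus_per_node * num_nodes / gpus_per_model
print(global_lr)  # 0.0005
```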
- class anemoi.training.schemas.training.OptimizerSchema(*, zero: bool, kwargs: dict[str, ~typing.Any] = <factory>)
Bases:
BaseModel
Optimizer configuration.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.training.ExplicitTimes(*, input: list[Annotated[int, Ge(ge=0)]], target: list[Annotated[int, Ge(ge=0)]])
Bases:
BaseModel
Time indices for input and output.
Starts at index 0. Input and output can overlap.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
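For example, an interpolation-style setup might use the boundary times as input and the interior times as targets (the indices are assumptions for illustration):

```python
from anemoi.training.schemas.training import ExplicitTimes

# Indices start at 0; input and target are allowed to overlap.
times = ExplicitTimes(input=[0, 3], target=[1, 2])
```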
- class anemoi.training.schemas.training.TargetForcing(*, data: list[str], time_fraction: bool)
Bases:
BaseModel
Forcing parameters for target output times.
Extra forcing parameters to use as input to distinguish between different target times.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.training.LossScalingSchema(*, default: int = 1, pl: dict[str, Annotated[float, Ge(ge=0)]], sfc: dict[str, Annotated[float, Ge(ge=0)]])
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.training.PressureLevelScalerTargets(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
- class anemoi.training.schemas.training.PressureLevelScalerSchema(*, _target_: PressureLevelScalerTargets, minimum: float, slope: float = 0.001)
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.training.ImplementedLossesUsingBaseLossSchema(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
- class anemoi.training.schemas.training.BaseLossSchema(*, _target_: ~anemoi.training.schemas.training.ImplementedLossesUsingBaseLossSchema, scalars: list[~typing.Annotated[str, ~pydantic.functional_validators.AfterValidator(func=functools.partial(allowed_values, values=['limited_area_mask', 'variable', 'loss_weights_mask', '*']))]], ignore_nans: bool = False)
Bases:
BaseModel
- target_: ImplementedLossesUsingBaseLossSchema
Loss function object from anemoi.training.losses.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.training.KernelCRPSSchema(*, _target_: ~anemoi.training.schemas.training.ImplementedLossesUsingBaseLossSchema, scalars: list[~typing.Annotated[str, ~pydantic.functional_validators.AfterValidator(func=functools.partial(allowed_values, values=['limited_area_mask', 'variable', 'loss_weights_mask', '*']))]], ignore_nans: bool = False, fair: bool = True)
Bases:
BaseLossSchema
- fair: bool
Calculate a 'fair' (unbiased) score, with the ensemble variance component weighted by (ens_size - 1)^-1.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.training.AlmostFairKernelCRPSSchema(*, _target_: ~anemoi.training.schemas.training.ImplementedLossesUsingBaseLossSchema, scalars: list[~typing.Annotated[str, ~pydantic.functional_validators.AfterValidator(func=functools.partial(allowed_values, values=['limited_area_mask', 'variable', 'loss_weights_mask', '*']))]], ignore_nans: bool = False, alpha: float = 1.0, no_autocast: bool = True)
Bases:
BaseLossSchema
- alpha: float
Factor for the linear combination of fair (unbiased, ensemble variance component weighted by (ens_size - 1)^-1) and standard CRPS (1.0 = fully fair, 0.0 = fully unfair).
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.training.HuberLossSchema(*, _target_: ~anemoi.training.schemas.training.ImplementedLossesUsingBaseLossSchema, scalars: list[~typing.Annotated[str, ~pydantic.functional_validators.AfterValidator(func=functools.partial(allowed_values, values=['limited_area_mask', 'variable', 'loss_weights_mask', '*']))]], ignore_nans: bool = False, delta: float = 1.0)
Bases:
BaseLossSchema
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class anemoi.training.schemas.training.WeightedMSELossLimitedAreaSchema(*, _target_: ~anemoi.training.schemas.training.ImplementedLossesUsingBaseLossSchema, scalars: list[~typing.Annotated[str, ~pydantic.functional_validators.AfterValidator(func=functools.partial(allowed_values, values=['limited_area_mask', 'variable', 'loss_weights_mask', '*']))]], ignore_nans: bool = False, inside_lam: bool = True, wmse_contribution: bool = False)
Bases:
BaseLossSchema
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
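A limited-area weighted MSE sketch; the _target_ path is assumed:

    training_loss:
      _target_: anemoi.training.losses.limitedarea.WeightedMSELossLimitedArea  # assumed path
      scalars: ['limited_area_mask', 'variable']
      inside_lam: true          # evaluate the loss inside the limited-area domain
      wmse_contribution: false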
- class anemoi.training.schemas.training.CombinedLossSchema(*, _target_: Literal['anemoi.training.losses.combined.CombinedLoss'], scalars: list[Annotated[str, AfterValidator(partial(allowed_values, values=['limited_area_mask', 'variable', 'loss_weights_mask', '*']))]], ignore_nans: bool = False, losses: Annotated[list[BaseLossSchema], MinLen(min_length=1)], loss_weights: list[int | float] | None = None)
Bases:
BaseLossSchema
- target_: Literal['anemoi.training.losses.combined.CombinedLoss']
Loss function object from anemoi.training.losses.
- losses: list[BaseLossSchema]
Losses to combine; any of the regular loss schemas can be used.
- loss_weights: list[int | float] | None
Weightings of the losses. If not set, all losses are weighted equally.
- classmethod add_empty_scalars(losses: Any) → Any
Add empty scalars to the loss functions, as scalars can be set at the top level.
- check_length_of_weights_and_losses() → CombinedLossSchema
Check that the numbers of losses and weights match; the check is skipped if loss_weights is not set.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
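The _target_ literal is fixed by the schema, so only the inner losses need hedging; their paths below are assumed examples:

    training_loss:
      _target_: anemoi.training.losses.combined.CombinedLoss
      scalars: ['variable']  # set at the top level; see add_empty_scalars
      losses:
        - _target_: anemoi.training.losses.mse.WeightedMSELoss     # assumed path
        - _target_: anemoi.training.losses.huber.WeightedHuberLoss # assumed path
      loss_weights: [1.0, 0.5]  # must match the number of losses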
- class anemoi.training.schemas.training.ImplementedStrategiesUsingBaseDDPStrategySchema(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
Enum of the implemented distributed strategies; its values are the valid _target_ entries for the strategy configuration.
- class anemoi.training.schemas.training.BaseDDPStrategySchema(*, _target_: ImplementedStrategiesUsingBaseDDPStrategySchema, num_gpus_per_model: Annotated[int, Gt(gt=0)], read_group_size: Annotated[int, Gt(gt=0)])
Bases:
BaseModel
Strategy configuration.
- num_gpus_per_model: PositiveInt
Number of GPUs per model.
- read_group_size: PositiveInt
Number of GPUs per reader group. Defaults to number of GPUs.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
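A minimal strategy block; the _target_ path below is an assumed example of an implemented strategy:

    strategy:
      _target_: anemoi.training.distributed.strategy.DDPGroupStrategy  # assumed path
      num_gpus_per_model: 1
      read_group_size: 1  # defaults to the number of GPUs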
- class anemoi.training.schemas.training.DDPEnsGroupStrategyStrategySchema(*, _target_: ImplementedStrategiesUsingBaseDDPStrategySchema, num_gpus_per_model: Annotated[int, Gt(gt=0)], read_group_size: Annotated[int, Gt(gt=0)], num_gpus_per_ensemble: Annotated[int, Gt(gt=0)])
Bases:
BaseDDPStrategySchema
Strategy object from anemoi.training.strategy.
- num_gpus_per_ensemble: PositiveInt
Number of GPUs per ensemble.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
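The ensemble variant adds num_gpus_per_ensemble; the _target_ path is again assumed:

    strategy:
      _target_: anemoi.training.distributed.strategy.DDPEnsGroupStrategy  # assumed path
      num_gpus_per_model: 1
      read_group_size: 1
      num_gpus_per_ensemble: 4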
- class anemoi.training.schemas.training.GraphNodeAttributeSchema(*, _target_: Literal['anemoi.training.losses.nodeweights.GraphNodeAttribute'], target_nodes: str, node_attribute: str)
Bases:
BaseModel
- target_: Literal['anemoi.training.losses.nodeweights.GraphNodeAttribute']
Node loss weights object from anemoi.training.losses.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
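The _target_ literal is fixed; the node-set and attribute names below are assumed examples:

    node_loss_weights:
      _target_: anemoi.training.losses.nodeweights.GraphNodeAttribute
      target_nodes: data           # assumed node-set name
      node_attribute: area_weight  # assumed attribute name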
- class anemoi.training.schemas.training.ReweightedGraphNodeAttributeSchema(*, _target_: Literal['anemoi.training.losses.nodeweights.ReweightedGraphNodeAttribute'], target_nodes: str, node_attribute: str, scaled_attribute: str, weight_frac_of_total: Annotated[float, Ge(ge=0), Le(le=1)])
Bases:
BaseModel
- target_: Literal['anemoi.training.losses.nodeweights.ReweightedGraphNodeAttribute']
Node loss weights object from anemoi.training.losses.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
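As above, with a rescaled attribute and its fraction of the total weight, which must lie in [0, 1]; attribute names are assumed examples:

    node_loss_weights:
      _target_: anemoi.training.losses.nodeweights.ReweightedGraphNodeAttribute
      target_nodes: data            # assumed node-set name
      node_attribute: area_weight   # assumed attribute name
      scaled_attribute: cutout      # assumed attribute to rescale
      weight_frac_of_total: 0.25    # must lie in [0, 1]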
- class anemoi.training.schemas.training.ScaleValidationMetrics(*, scalars_to_apply: list[str], metrics: list[str])
Bases:
BaseModel
Configuration for scaling validation metrics.
Variable scaling is possible here because the metrics are calculated in the same way as the training loss, within the internal model space.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
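A sketch of this block; whether '*' selects all validation metrics is an assumption here:

    scale_validation_metrics:
      scalars_to_apply: ['variable']
      metrics: ['*']  # assumed convention for applying to all metrics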
- class anemoi.training.schemas.training.BaseTrainingSchema(*, run_id: str | None, fork_run_id: str | None, load_weights_only: bool, transfer_learning: bool, submodules_to_freeze: list[str], deterministic: bool = False, precision: str = '16-mixed', multistep_input: PositiveInt, accum_grad_batches: PositiveInt = 1, num_sanity_val_steps: NonNegativeInt, gradient_clip: GradientClip, strategy: BaseDDPStrategySchema | DDPEnsGroupStrategyStrategySchema, swa: SWA = <factory>, training_loss: BaseLossSchema | HuberLossSchema | WeightedMSELossLimitedAreaSchema | CombinedLossSchema | KernelCRPSSchema | AlmostFairKernelCRPSSchema, loss_gradient_scaling: bool = False, validation_metrics: list[BaseLossSchema | HuberLossSchema | WeightedMSELossLimitedAreaSchema | CombinedLossSchema | KernelCRPSSchema | AlmostFairKernelCRPSSchema], scale_validation_metrics: ScaleValidationMetrics, rollout: Rollout = <factory>, max_epochs: PositiveInt | None = None, max_steps: PositiveInt = 150000, lr: LR = <factory>, optimizer: OptimizerSchema = <factory>, variable_loss_scaling: LossScalingSchema, pressure_level_scaler: PressureLevelScalerSchema, metrics: list[str], node_loss_weights: GraphNodeAttributeSchema | ReweightedGraphNodeAttributeSchema)
Bases:
BaseModel
Training configuration.
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- run_id: str | None
Run ID used to resume a run from a checkpoint, either last.ckpt or the checkpoint specified in hardware.files.warm_start.
- fork_run_id: str | None
Run ID to fork from, either last.ckpt or specified in hardware.files.warm_start.
- deterministic: bool
This flag sets the torch.backends.cudnn.deterministic flag. Might be slower, but ensures reproducibility.
- multistep_input: PositiveInt
Number of input steps for the model. E.g. 1 = single-step scheme: X(t-1) is used to predict X(t); k > 1 = multistep scheme: [X(t-k), X(t-k+1), … X(t-1)] are used to predict X(t).
- accum_grad_batches: PositiveInt
Accumulates gradients over k batches before stepping the optimizer; k >= 1 (k == 1 means no accumulation). The effective batch size becomes num_devices * k.
- num_sanity_val_steps: NonNegativeInt
Sanity check runs n validation batches before starting the training routine.
- gradient_clip: GradientClip
Config for gradient clipping.
- strategy: StrategySchemas
Strategy to use.
- training_loss: LossSchemas
Training loss configuration.
- scale_validation_metrics: ScaleValidationMetrics
Configuration for scaling validation metrics.
- max_epochs: PositiveInt | None
Maximum number of epochs, stops earlier if max_steps is reached first.
- max_steps: PositiveInt
Maximum number of steps, stops earlier if max_epochs is reached first.
- optimizer: OptimizerSchema
Optimizer configuration.
- variable_loss_scaling: LossScalingSchema
Configuration of the variable scaling used in the loss computation.
- pressure_level_scaler: PressureLevelScalerSchema
Configuration of the pressure level scaler applied in the loss computation.
- node_loss_weights: NodeLossWeightsSchema
Node loss weights configuration.
- class anemoi.training.schemas.training.ForecasterSchema(*, run_id: str | None, fork_run_id: str | None, load_weights_only: bool, transfer_learning: bool, submodules_to_freeze: list[str], deterministic: bool = False, precision: str = '16-mixed', multistep_input: PositiveInt, accum_grad_batches: PositiveInt = 1, num_sanity_val_steps: NonNegativeInt, gradient_clip: GradientClip, strategy: BaseDDPStrategySchema | DDPEnsGroupStrategyStrategySchema, swa: SWA = <factory>, training_loss: BaseLossSchema | HuberLossSchema | WeightedMSELossLimitedAreaSchema | CombinedLossSchema | KernelCRPSSchema | AlmostFairKernelCRPSSchema, loss_gradient_scaling: bool = False, validation_metrics: list[BaseLossSchema | HuberLossSchema | WeightedMSELossLimitedAreaSchema | CombinedLossSchema | KernelCRPSSchema | AlmostFairKernelCRPSSchema], scale_validation_metrics: ScaleValidationMetrics, rollout: Rollout = <factory>, max_epochs: PositiveInt | None = None, max_steps: PositiveInt = 150000, lr: LR = <factory>, optimizer: OptimizerSchema = <factory>, variable_loss_scaling: LossScalingSchema, pressure_level_scaler: PressureLevelScalerSchema, metrics: list[str], node_loss_weights: GraphNodeAttributeSchema | ReweightedGraphNodeAttributeSchema, model_task: Literal['anemoi.training.train.forecaster.GraphForecaster'])
Bases:
BaseTrainingSchema
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_task: Literal['anemoi.training.train.forecaster.GraphForecaster']
Training objective.
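An abbreviated training block for the forecaster task; all other required BaseTrainingSchema fields still have to be present and are elided here:

    training:
      model_task: anemoi.training.train.forecaster.GraphForecaster
      run_id: null      # start a fresh run rather than resuming
      max_steps: 150000
      # ... remaining required BaseTrainingSchema fields omitted for brevity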
- class anemoi.training.schemas.training.ForecasterEnsSchema(*, run_id: str | None, fork_run_id: str | None, load_weights_only: bool, transfer_learning: bool, submodules_to_freeze: list[str], deterministic: bool = False, precision: str = '16-mixed', multistep_input: PositiveInt, accum_grad_batches: PositiveInt = 1, num_sanity_val_steps: NonNegativeInt, gradient_clip: GradientClip, strategy: BaseDDPStrategySchema | DDPEnsGroupStrategyStrategySchema, swa: SWA = <factory>, training_loss: BaseLossSchema | HuberLossSchema | WeightedMSELossLimitedAreaSchema | CombinedLossSchema | KernelCRPSSchema | AlmostFairKernelCRPSSchema, loss_gradient_scaling: bool = False, validation_metrics: list[BaseLossSchema | HuberLossSchema | WeightedMSELossLimitedAreaSchema | CombinedLossSchema | KernelCRPSSchema | AlmostFairKernelCRPSSchema], scale_validation_metrics: ScaleValidationMetrics, rollout: Rollout = <factory>, max_epochs: PositiveInt | None = None, max_steps: PositiveInt = 150000, lr: LR = <factory>, optimizer: OptimizerSchema = <factory>, variable_loss_scaling: LossScalingSchema, pressure_level_scaler: PressureLevelScalerSchema, metrics: list[str], node_loss_weights: GraphNodeAttributeSchema | ReweightedGraphNodeAttributeSchema, model_task: Literal['anemoi.training.train.forecaster.GraphEnsForecaster'], ensemble_size_per_device: PositiveInt)
Bases:
BaseTrainingSchema
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_task: Literal['anemoi.training.train.forecaster.GraphEnsForecaster']
Training objective.
- ensemble_size_per_device: PositiveInt
Number of ensemble members per device.
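The ensemble task adds ensemble_size_per_device on top of the base training fields:

    training:
      model_task: anemoi.training.train.forecaster.GraphEnsForecaster
      ensemble_size_per_device: 4
      # ... remaining required BaseTrainingSchema fields omitted for brevity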
- class anemoi.training.schemas.training.InterpolationSchema(*, run_id: str | None, fork_run_id: str | None, load_weights_only: bool, transfer_learning: bool, submodules_to_freeze: list[str], deterministic: bool = False, precision: str = '16-mixed', multistep_input: PositiveInt, accum_grad_batches: PositiveInt = 1, num_sanity_val_steps: NonNegativeInt, gradient_clip: GradientClip, strategy: BaseDDPStrategySchema | DDPEnsGroupStrategyStrategySchema, swa: SWA = <factory>, training_loss: BaseLossSchema | HuberLossSchema | WeightedMSELossLimitedAreaSchema | CombinedLossSchema | KernelCRPSSchema | AlmostFairKernelCRPSSchema, loss_gradient_scaling: bool = False, validation_metrics: list[BaseLossSchema | HuberLossSchema | WeightedMSELossLimitedAreaSchema | CombinedLossSchema | KernelCRPSSchema | AlmostFairKernelCRPSSchema], scale_validation_metrics: ScaleValidationMetrics, rollout: Rollout = <factory>, max_epochs: PositiveInt | None = None, max_steps: PositiveInt = 150000, lr: LR = <factory>, optimizer: OptimizerSchema = <factory>, variable_loss_scaling: LossScalingSchema, pressure_level_scaler: PressureLevelScalerSchema, metrics: list[str], node_loss_weights: GraphNodeAttributeSchema | ReweightedGraphNodeAttributeSchema, model_task: Literal['anemoi.training.train.forecaster.GraphInterpolator'], explicit_times: ExplicitTimes, target_forcing: TargetForcing)
Bases:
BaseTrainingSchema
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'use_attribute_docstrings': True, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_task: Literal['anemoi.training.train.forecaster.GraphInterpolator']
Training objective.
- explicit_times: ExplicitTimes
Time indices for input and output.
- target_forcing: TargetForcing
Forcing parameters for target output times.
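A sketch of the interpolation task block; the explicit_times and target_forcing field names and values are assumptions, to be checked against the ExplicitTimes and TargetForcing schemas documented earlier:

    training:
      model_task: anemoi.training.train.forecaster.GraphInterpolator
      explicit_times:
        input: [0, 6]            # assumed: boundary input time indices
        target: [1, 2, 3, 4, 5]  # assumed: interpolated output time indices
      target_forcing:
        data: ['insolation']     # assumed example forcing variable
      # ... remaining required BaseTrainingSchema fields omitted for brevity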