SupervisedSolver#
- class SupervisedSolver(problem, model, extra_features=None, loss=MSELoss(), optimizer=<class 'torch.optim.adam.Adam'>, optimizer_kwargs={'lr': 0.001}, scheduler=<class 'torch.optim.lr_scheduler.ConstantLR'>, scheduler_kwargs={'factor': 1, 'total_iters': 0})[source]#
Bases:
SolverInterface
SupervisedSolver solver class. This class implements a SupervisedSolver, using a user-specified model to solve a specific problem. The SupervisedSolver class aims to find a map between the input \(\mathbf{s}:\Omega\rightarrow\mathbb{R}^m\) and the output \(\mathbf{u}:\Omega\rightarrow\mathbb{R}^m\). The input can be discretised in space (as in ROMe2eSolver), or not (e.g. when training Neural Operators).
Given a model \(\mathcal{M}\), the following loss function is minimized during training:
\[\mathcal{L}_{\rm{problem}} = \frac{1}{N}\sum_{i=1}^N \mathcal{L}(\mathbf{u}_i - \mathcal{M}(\mathbf{v}_i)),\]
where \(\mathcal{L}\) is a specific loss function, by default the mean squared error:
\[\mathcal{L}(v) = \| v \|^2_2.\]
Here the notation \(\mathbf{u}_i\) and \(\mathbf{v}_i\) indicates that we seek to approximate multiple (discretised) functions given multiple (discretised) input functions. A minimal usage sketch is given after the parameter list below.
- Parameters:
problem (AbstractProblem) – The formulation of the problem.
model (torch.nn.Module) – The neural network model to use.
loss (torch.nn.Module) – The loss function to be minimized; default is torch.nn.MSELoss.
extra_features (torch.nn.Module) – The additional input features to use as augmented input.
optimizer (torch.optim.Optimizer) – The neural network optimizer to use; default is torch.optim.Adam.
optimizer_kwargs (dict) – Optimizer constructor keyword args.
lr (float) – The learning rate; default is 0.001.
scheduler (torch.optim.LRScheduler) – Learning rate scheduler.
scheduler_kwargs (dict) – LR scheduler constructor keyword args.
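As a rough illustration of the objective described above, the following plain-PyTorch sketch reproduces the documented defaults (torch.nn.MSELoss, Adam with lr=0.001, and ConstantLR with factor=1 and total_iters=0, i.e. an effectively constant learning rate). It is not PINA's internal implementation; the model, data, and training loop are placeholders.

import torch

# Placeholder data: N input functions v_i and target functions u_i,
# each discretised on m points (shapes are illustrative only).
N, m = 128, 32
v = torch.randn(N, m)  # inputs  v_i
u = torch.randn(N, m)  # targets u_i

# Placeholder model M: any torch.nn.Module mapping inputs to outputs.
model = torch.nn.Sequential(
    torch.nn.Linear(m, 64), torch.nn.Tanh(), torch.nn.Linear(64, m)
)

# Documented defaults: MSE loss, Adam(lr=0.001), ConstantLR(factor=1, total_iters=0).
loss_fn = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
scheduler = torch.optim.lr_scheduler.ConstantLR(optimizer, factor=1, total_iters=0)

for epoch in range(100):
    optimizer.zero_grad()
    # L_problem = (1/N) * sum_i L(u_i - M(v_i)); MSELoss averages over the batch.
    loss = loss_fn(model(v), u)
    loss.backward()
    optimizer.step()
    scheduler.step()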
- forward(x)[source]#
Forward pass implementation for the solver.
- Parameters:
x (torch.Tensor) – Input tensor.
- Returns:
Solver solution.
- Return type:
- training_step(batch, batch_idx)[source]#
Solver training step.
- Parameters:
batch – The current training batch produced by the data loader.
batch_idx (int) – The index of the current batch.
- Returns:
The sum of the loss functions.
- Return type:
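Conceptually, a supervised training step accumulates the data loss over the conditions contained in the batch. The sketch below is only an illustration, not PINA's actual implementation; in particular, the assumed batch layout (a mapping from condition names to input/output point pairs) may differ from the real data structure.

# Conceptual sketch only; the real batch structure and any logging may differ.
def training_step(self, batch, batch_idx):
    total_loss = 0.0
    for condition_name, (input_pts, output_pts) in batch.items():  # assumed layout
        total_loss = total_loss + self.loss_data(input_pts, output_pts)
    return total_loss  # the sum of the loss functions, as documented above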
- loss_data(input_pts, output_pts)[source]#
The data loss for the Supervised solver. It computes the loss between the network output and the true solution. This function should not be overridden unless done intentionally.
- Parameters:
input_pts (LabelTensor) – The input to the neural network.
output_pts (LabelTensor) – The true solution against which the network output is compared.
- Returns:
The residual loss averaged over the input coordinates.
- Return type:
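In spirit, this computation reduces to evaluating the stored loss on the network prediction against the ground truth, roughly as in the sketch below (solver, input_pts and output_pts are placeholder names; the actual method may additionally handle extra features and labelled tensors).

prediction = solver.forward(input_pts)           # documented forward pass of the solver
data_loss = solver.loss(prediction, output_pts)  # stored loss, e.g. torch.nn.MSELoss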
- property scheduler#
Scheduler for training.
- property neural_net#
Neural network for training.
- property loss#
Loss for training.