SupervisedSolver#

class SupervisedSolver(problem, model, loss=None, optimizer=None, scheduler=None, weighting=None, use_lt=True)[source]#

Bases: SingleSolverInterface

Supervised solver class. This class implements a supervised solver that uses a user-specified model to solve a specific problem.

The Supervised Solver class aims to find a map between the input \(\mathbf{v}:\Omega\rightarrow\mathbb{R}^m\) and the output \(\mathbf{u}:\Omega\rightarrow\mathbb{R}^m\).

Given a model \(\mathcal{M}\), the following loss function is minimized during training:

\[\mathcal{L}_{\rm{problem}} = \frac{1}{N}\sum_{i=1}^N \mathcal{L}(\mathbf{u}_i - \mathcal{M}(\mathbf{v}_i)),\]

where \(\mathcal{L}\) is a specific loss function, typically the MSE:

\[\mathcal{L}(v) = \| v \|^2_2.\]

In this context, \(\mathbf{u}_i\) and \(\mathbf{v}_i\) denote the \(i\)-th (discretised) output and input functions, respectively: the solver approximates multiple discretised functions given multiple discretised input functions.
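The objective above can be sketched in plain PyTorch. This is a minimal illustration, not the solver's implementation: the names `N`, `m`, `v`, `u`, and the stand-in network mirror the formula and are assumptions for the example.

```python
import torch

# Illustrative sketch of the training objective; N, m, v, u mirror the
# formula above and are chosen arbitrarily for the example.
N, m = 32, 10
v = torch.randn(N, m)        # N discretised input functions v_i
u = torch.sin(v)             # N discretised target functions u_i

model = torch.nn.Sequential( # a stand-in for the user-specified model M
    torch.nn.Linear(m, 64),
    torch.nn.Tanh(),
    torch.nn.Linear(64, m),
)

# With L the squared 2-norm, the averaged loss is the mean squared error.
loss = torch.nn.MSELoss()(model(v), u)
```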

Initialization of the SupervisedSolver class.

Parameters:

problem – The problem to be solved.

model – The neural network model to be used.

loss – The loss function to be minimized. If None, the mean squared error is used. Default is None.

optimizer – The optimizer to be used. If None, a default optimizer is used. Default is None.

scheduler – The learning-rate scheduler to be used. If None, a default scheduler is used. Default is None.

weighting – The weighting schema to be used for the conditions. If None, no weighting is applied. Default is None.

use_lt (bool) – If True, the solver uses LabelTensors as input. Default is True.
accepted_conditions_types#

alias of InputTargetCondition

optimization_cycle(batch)[source]#

The optimization cycle for the solvers.

Parameters:

batch (list[tuple[str, dict]]) – A batch of data. Each element is a tuple containing a condition name and a dictionary of points.

Returns:

The losses computed for all conditions in the batch, cast to a subclass of torch.Tensor. The result is a dict mapping each condition name to its scalar loss.

Return type:

dict
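A sketch of such a cycle, under the assumption that each points dictionary holds `'input'` and `'target'` tensors (the keys and the free-standing function are hypothetical, for illustration only):

```python
import torch

def optimization_cycle(model, loss_fn, batch):
    """Sketch: compute one scalar loss per condition.

    `batch` is a list of (condition_name, points) tuples; each hypothetical
    `points` dict is assumed to hold 'input' and 'target' tensors.
    """
    losses = {}
    for condition_name, points in batch:
        losses[condition_name] = loss_fn(model(points["input"]), points["target"])
    return losses

model = torch.nn.Linear(3, 1)
batch = [
    ("data", {"input": torch.randn(8, 3), "target": torch.randn(8, 1)}),
    ("extra", {"input": torch.randn(4, 3), "target": torch.randn(4, 1)}),
]
losses = optimization_cycle(model, torch.nn.MSELoss(), batch)
```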

loss_data(input_pts, output_pts)[source]#

Compute the data loss for the Supervised solver by evaluating the loss between the network's output and the true solution. This method should not be overridden unless intentionally.

Parameters:

input_pts – The input points to the neural network.

output_pts – The true solution to compare with the network's output.
Returns:

The supervised loss, averaged over the number of observations.

Return type:

torch.Tensor
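The averaging behaviour can be seen in a small sketch; the tensors and model here are illustrative stand-ins, not the solver's own data:

```python
import torch

# Sketch: the data loss is the loss between the network output and the true
# solution; MSELoss with reduction='mean' (the default) averages over all
# observations, as the explicit mean below confirms.
torch.manual_seed(0)
model = torch.nn.Linear(2, 1)
input_pts = torch.randn(16, 2)   # stand-in for the input points
output_pts = torch.randn(16, 1)  # stand-in for the true solution

value = torch.nn.MSELoss()(model(input_pts), output_pts)
manual = ((model(input_pts) - output_pts) ** 2).mean()  # explicit average
```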

property loss#

The loss function to be minimized.

Returns:

The loss function to be minimized.

Return type:

torch.nn.Module
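Since the loss is a torch.nn.Module, any module mapping (prediction, target) to a scalar can be minimized. A small comparison of two common choices (the values are hand-checked from the residuals):

```python
import torch

# Two common loss modules applied to the same residuals [0, 0, 2].
pred = torch.tensor([1.0, 2.0, 4.0])
target = torch.tensor([1.0, 2.0, 2.0])

mse = torch.nn.MSELoss()(pred, target)  # mean of squared residuals -> 4/3
l1 = torch.nn.L1Loss()(pred, target)    # mean of absolute residuals -> 2/3
```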