ReducedOrderModelSolver#

class ReducedOrderModelSolver(problem, reduction_network, interpolation_network, loss=None, optimizer=None, scheduler=None, weighting=None, use_lt=True)[source]#

Bases: SupervisedSolver

Reduced Order Model solver class. This class implements the Reduced Order Model solver, which uses a user-specified reduction_network and interpolation_network to solve a specific problem.

The Reduced Order Model solver aims to find the solution \(\mathbf{u}:\Omega\rightarrow\mathbb{R}^m\) of a differential problem:

\[\begin{split}\begin{cases} \mathcal{A}[\mathbf{u}(\mu)](\mathbf{x})=0\quad,\mathbf{x}\in\Omega\\ \mathcal{B}[\mathbf{u}(\mu)](\mathbf{x})=0\quad, \mathbf{x}\in\partial\Omega \end{cases}\end{split}\]

This is done by means of two neural networks: the reduction_network, which defines an encoder \(\mathcal{E}_{\rm{net}}\), and a decoder \(\mathcal{D}_{\rm{net}}\); and the interpolation_network \(\mathcal{I}_{\rm{net}}\). The input is assumed to be discretised in the spatial dimensions.

The following loss function is minimized during training:

\[\mathcal{L}_{\rm{problem}} = \frac{1}{N}\sum_{i=1}^N \mathcal{L}(\mathcal{E}_{\rm{net}}[\mathbf{u}(\mu_i)] - \mathcal{I}_{\rm{net}}[\mu_i]) + \mathcal{L}( \mathcal{D}_{\rm{net}}[\mathcal{E}_{\rm{net}}[\mathbf{u}(\mu_i)]] - \mathbf{u}(\mu_i))\]

where \(\mathcal{L}\) is a specific loss function, typically the MSE:

\[\mathcal{L}(v) = \| v \|^2_2.\]

See also

Original reference: Hesthaven, Jan S., and Stefano Ubbiali. Non-intrusive reduced order modeling of nonlinear problems using neural networks. Journal of Computational Physics 363 (2018): 55-78. DOI 10.1016/j.jcp.2018.02.037.

Note

The specified reduction_network must contain two methods, namely encode for input encoding, and decode for decoding the former result. The interpolation_network network forward output represents the interpolation of the latent space obtained with reduction_network.encode.
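Under these assumptions, a minimal reduction_network could look like the following sketch. The class name, layer sizes, and activations are illustrative choices, not part of the library; the only requirement stated above is the encode/decode interface.

```python
import torch

class AutoencoderReduction(torch.nn.Module):
    """Hypothetical reduction network exposing ``encode`` and ``decode``."""

    def __init__(self, field_dim=100, latent_dim=10):
        super().__init__()
        self.encoder = torch.nn.Sequential(
            torch.nn.Linear(field_dim, 40), torch.nn.Tanh(),
            torch.nn.Linear(40, latent_dim),
        )
        self.decoder = torch.nn.Sequential(
            torch.nn.Linear(latent_dim, 40), torch.nn.Tanh(),
            torch.nn.Linear(40, field_dim),
        )

    def encode(self, x):
        # Map the discretised field to the latent space.
        return self.encoder(x)

    def decode(self, z):
        # Map a latent vector back to the discretised field.
        return self.decoder(z)
```

The interpolation_network can then be any module whose output dimension matches `latent_dim`, since its forward output is compared against `encode`'s result.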

Note

This solver uses the end-to-end training strategy, i.e. the reduction_network and the interpolation_network are trained simultaneously. For reference on this training strategy, see:

See also

Original reference: Pichi, Federico, Beatriz Moya, and Jan S. Hesthaven. A graph convolutional autoencoder approach to model order reduction for parametrized PDEs. Journal of Computational Physics 501 (2024): 112762. DOI 10.1016/j.jcp.2024.112762.
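A single end-to-end training step can be sketched in plain torch as follows. The stand-in networks, sizes, and data below are assumptions for illustration; the point is that one optimizer updates both networks against the two-term loss defined above.

```python
import torch

# Minimal stand-ins (assumptions, not library classes): a linear autoencoder
# exposing encode/decode, and a linear map from parameters to the latent space.
class Reduction(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = torch.nn.Linear(100, 8)
        self.dec = torch.nn.Linear(8, 100)

    def encode(self, u):
        return self.enc(u)

    def decode(self, z):
        return self.dec(z)

reduction = Reduction()
interpolation = torch.nn.Linear(2, 8)  # 2 parameters -> 8 latent coordinates
mse = torch.nn.MSELoss()

# One optimizer over BOTH networks: this is what makes the training end-to-end.
optimizer = torch.optim.Adam(
    list(reduction.parameters()) + list(interpolation.parameters()), lr=1e-3
)

mu = torch.rand(16, 2)    # parameter samples
u = torch.rand(16, 100)   # corresponding discretised snapshots

optimizer.zero_grad()
latent = reduction.encode(u)
# Two terms: latent interpolation error + reconstruction error.
loss = mse(interpolation(mu), latent) + mse(reduction.decode(latent), u)
loss.backward()
optimizer.step()
```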

Warning

This solver works only for data-driven models. Hence, in the problem definition, the conditions must contain only input (e.g. coefficient parameters, time parameters) and target data.

Initialization of the ReducedOrderModelSolver class.

Parameters:
  • problem (AbstractProblem) – The formulation of the problem.

  • reduction_network (torch.nn.Module) – The reduction network used for reducing the input space. It must contain two methods, namely encode for input encoding, and decode for decoding the former result.

  • interpolation_network (torch.nn.Module) – The interpolation network for interpolating the control parameters to latent space obtained by the reduction_network encoding.

  • loss (torch.nn.Module) – The loss function to be minimized. If None, the torch.nn.MSELoss loss is used. Default is None.

  • optimizer (Optimizer) – The optimizer to be used. If None, the torch.optim.Adam optimizer is used. Default is None.

  • scheduler (Scheduler) – Learning rate scheduler. If None, the torch.optim.lr_scheduler.ConstantLR scheduler is used. Default is None.

  • weighting (WeightingInterface) – The weighting schema to be used. If None, no weighting schema is used. Default is None.

  • use_lt (bool) – If True, the solver uses LabelTensors as input. Default is True.

forward(x)[source]#

Forward pass implementation. It computes the latent representation by calling the forward method of the interpolation_network on the input, and maps it to the output space by calling the decode method of the reduction_network.

Parameters:

x (torch.Tensor | LabelTensor) – Input tensor.

Returns:

Solver solution.

Return type:

torch.Tensor | LabelTensor
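Conceptually, the forward pass is the composition sketched below. This is a hypothetical helper written for clarity, not the solver's actual implementation.

```python
import torch

def rom_forward(reduction_network, interpolation_network, x):
    # Interpolate the input parameters to the latent space, then decode
    # the latent coordinates back to the full discretised field.
    latent = interpolation_network(x)
    return reduction_network.decode(latent)
```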

loss_data(input_pts, output_pts)[source]#

Compute the data loss by evaluating the loss between the network’s output and the true solution. This method should not be overridden unless intentionally.

Parameters:
  • input_pts (LabelTensor) – The input points to the neural network.

  • output_pts (LabelTensor) – The true solution to compare with the network’s output.

Returns:

The supervised loss, averaged over the number of observations.

Return type:

torch.Tensor
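The loss being evaluated corresponds to the two-term formula defined earlier. The helper below is a sketch of that computation, not the library's code; the function name and signature are assumptions.

```python
import torch

def rom_data_loss(reduction_network, interpolation_network,
                  input_pts, output_pts, loss_fn=torch.nn.MSELoss()):
    """Sketch of the two-term ROM loss (hypothetical helper)."""
    latent_true = reduction_network.encode(output_pts)      # E[u(mu)]
    latent_pred = interpolation_network(input_pts)          # I[mu]
    reconstruction = reduction_network.decode(latent_true)  # D[E[u(mu)]]
    # Latent interpolation error + reconstruction error.
    return loss_fn(latent_pred, latent_true) + loss_fn(reconstruction, output_pts)
```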