ReducedOrderModelSolver#

class ReducedOrderModelSolver(problem, reduction_network, interpolation_network, loss=MSELoss(), optimizer=<class 'torch.optim.adam.Adam'>, optimizer_kwargs={'lr': 0.001}, scheduler=<class 'torch.optim.lr_scheduler.ConstantLR'>, scheduler_kwargs={'factor': 1, 'total_iters': 0})[source]#

Bases: SupervisedSolver

ReducedOrderModelSolver solver class. This class implements a Reduced Order Model solver, using a user-specified reduction_network and interpolation_network to solve a specific problem.

The Reduced Order Model approach aims to find the solution \(\mathbf{u}:\Omega\rightarrow\mathbb{R}^m\) of the differential problem:

\[\begin{split}\begin{cases} \mathcal{A}[\mathbf{u}(\mu)](\mathbf{x})=0\quad,\mathbf{x}\in\Omega\\ \mathcal{B}[\mathbf{u}(\mu)](\mathbf{x})=0\quad, \mathbf{x}\in\partial\Omega \end{cases}\end{split}\]

This is done with two neural networks: the reduction_network, which contains an encoder \(\mathcal{E}_{\rm{net}}\) and a decoder \(\mathcal{D}_{\rm{net}}\), and the interpolation_network \(\mathcal{I}_{\rm{net}}\). The input is assumed to be discretised in the spatial dimensions.

The following loss function is minimized during training:

\[\mathcal{L}_{\rm{problem}} = \frac{1}{N}\sum_{i=1}^N \mathcal{L}(\mathcal{E}_{\rm{net}}[\mathbf{u}(\mu_i)] - \mathcal{I}_{\rm{net}}[\mu_i]) + \mathcal{L}( \mathcal{D}_{\rm{net}}[\mathcal{E}_{\rm{net}}[\mathbf{u}(\mu_i)]] - \mathbf{u}(\mu_i))\]

where \(\mathcal{L}\) is a given loss function, by default the Mean Squared Error:

\[\mathcal{L}(v) = \| v \|^2_2.\]
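The two-term loss above can be sketched in a few lines of PyTorch. This is an illustration, not the solver's actual implementation: `encode`, `decode` and `interpolate` are placeholders standing in for reduction_network.encode, reduction_network.decode and interpolation_network.forward.

```python
import torch

# Hypothetical sketch of the training loss: a latent-matching term plus an
# autoencoder reconstruction term, both measured with MSE by default.
mse = torch.nn.MSELoss()

def rom_loss(encode, decode, interpolate, mu, u):
    z = encode(u)                          # E_net[u(mu)]
    latent_term = mse(z, interpolate(mu))  # E_net[u(mu)] vs I_net[mu]
    recon_term = mse(decode(z), u)         # D_net[E_net[u(mu)]] vs u(mu)
    return latent_term + recon_term
```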

See also

Original reference: Hesthaven, Jan S., and Stefano Ubbiali. “Non-intrusive reduced order modeling of nonlinear problems using neural networks.” Journal of Computational Physics 363 (2018): 55-78. DOI 10.1016/j.jcp.2018.02.037.

Note

The specified reduction_network must contain two methods, namely encode for encoding the input and decode for decoding the former result. The forward output of the interpolation_network represents the interpolation of the latent space obtained with reduction_network.encode.
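A minimal sketch of a module satisfying this interface, assuming a plain fully connected autoencoder (the layer sizes and architecture are purely illustrative):

```python
import torch

class ReductionNetwork(torch.nn.Module):
    """Hypothetical reduction_network exposing `encode` and `decode`."""

    def __init__(self, field_dim=100, latent_dim=10):
        super().__init__()
        self.encoder = torch.nn.Sequential(
            torch.nn.Linear(field_dim, latent_dim), torch.nn.Tanh()
        )
        self.decoder = torch.nn.Linear(latent_dim, field_dim)

    def encode(self, x):
        # Map the spatially discretised field to the latent space.
        return self.encoder(x)

    def decode(self, z):
        # Map a latent vector back to the full discretised field.
        return self.decoder(z)
```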

Note

This solver uses an end-to-end training strategy, i.e. the reduction_network and interpolation_network are trained simultaneously. For a reference on this training strategy see: Pichi, Federico, Beatriz Moya, and Jan S. Hesthaven. “A graph convolutional autoencoder approach to model order reduction for parametrized PDEs.” Journal of Computational Physics 501 (2024): 112762. DOI 10.1016/j.jcp.2024.112762.

Warning

This solver works only for data-driven models. Hence, in the problem definition, the conditions must contain only input_points (e.g. coefficient parameters, time parameters) and output_points.

Warning

This solver does not currently support passing extra_features.

Parameters:
  • problem (AbstractProblem) – The formulation of the problem.

  • reduction_network (torch.nn.Module) – The reduction network used for reducing the input space. It must contain two methods, namely encode for input encoding and decode for decoding the former result.

  • interpolation_network (torch.nn.Module) – The interpolation network for interpolating the control parameters to the latent space obtained by the reduction_network encoding.

  • loss (torch.nn.Module) – The loss function to be minimized; default is torch.nn.MSELoss.

  • optimizer (torch.optim.Optimizer) – The neural network optimizer to use; default is torch.optim.Adam.

  • optimizer_kwargs (dict) – Optimizer constructor keyword arguments; default is {'lr': 0.001}.

  • scheduler (torch.optim.LRScheduler) – Learning rate scheduler.

  • scheduler_kwargs (dict) – LR scheduler constructor keyword args.
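A companion sketch of an interpolation_network, i.e. a module mapping the control parameters \(\mu\) to the latent space produced by reduction_network.encode. The architecture and sizes below are assumptions for illustration only:

```python
import torch

class InterpolationNetwork(torch.nn.Module):
    """Hypothetical interpolation_network: parameters -> latent space."""

    def __init__(self, param_dim=2, latent_dim=10):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(param_dim, 32),
            torch.nn.Tanh(),
            torch.nn.Linear(32, latent_dim),
        )

    def forward(self, mu):
        # The forward output is compared against encode(u(mu)) during training.
        return self.net(mu)
```

Its output dimension must match the latent dimension of the reduction_network, since the two are compared directly in the loss.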

forward(x)[source]#

Forward pass implementation for the solver. It computes the latent representation by calling interpolation_network.forward on the input, and maps this representation to the output space by calling reduction_network.decode.

Parameters:

x (torch.Tensor) – Input tensor.

Returns:

Solver solution.

Return type:

torch.Tensor
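The forward pass described above amounts to composing the two networks; a sketch, assuming the networks follow the interface documented on this page:

```python
import torch

def rom_forward(reduction_network, interpolation_network, x):
    # Interpolate the input parameters to the latent space, ...
    latent = interpolation_network(x)
    # ... then decode the latent vector to the full solution field.
    return reduction_network.decode(latent)
```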

loss_data(input_pts, output_pts)[source]#

The data loss for the ReducedOrderModelSolver solver. It computes the loss between the network output and the true solution. This function should not be overridden unless intentionally.

Parameters:
  • input_pts (LabelTensor) – The input to the neural networks.

  • output_pts (LabelTensor) – The true solution to compare with the network solution.

Returns:

The residual loss averaged on the input coordinates.

Return type:

torch.Tensor

property neural_net#

Neural network for training. It returns a ModuleDict containing the reduction_network and interpolation_network.
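The returned structure can be pictured as follows; the submodules here are stand-ins, and the dictionary keys are an assumption based on the constructor argument names:

```python
import torch

# Sketch of a ModuleDict bundling the two trainable networks, so both are
# registered and optimized together (the end-to-end strategy noted above).
nets = torch.nn.ModuleDict({
    "reduction_network": torch.nn.Linear(10, 100),
    "interpolation_network": torch.nn.Linear(2, 10),
})
```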