ReducedOrderModelSolver

class ReducedOrderModelSolver(problem, reduction_network, interpolation_network, loss=None, optimizer=None, scheduler=None, weighting=None, use_lt=True)[source]

Bases: SupervisedSolver
Reduced Order Model solver class. This class implements the Reduced Order Model solver, using the user-specified reduction_network and interpolation_network to solve a specific problem.

The Reduced Order Model solver aims to find the solution \(\mathbf{u}:\Omega\rightarrow\mathbb{R}^m\) of a differential problem:
\[\begin{split}\begin{cases} \mathcal{A}[\mathbf{u}(\mu)](\mathbf{x})=0\quad,\mathbf{x}\in\Omega\\ \mathcal{B}[\mathbf{u}(\mu)](\mathbf{x})=0\quad, \mathbf{x}\in\partial\Omega \end{cases}\end{split}\]

This is done by means of two neural networks: the reduction_network, which defines an encoder \(\mathcal{E}_{\rm{net}}\) and a decoder \(\mathcal{D}_{\rm{net}}\); and the interpolation_network \(\mathcal{I}_{\rm{net}}\). The input is assumed to be discretised in the spatial dimensions.

The following loss function is minimized during training:

\[\mathcal{L}_{\rm{problem}} = \frac{1}{N}\sum_{i=1}^N \mathcal{L}(\mathcal{E}_{\rm{net}}[\mathbf{u}(\mu_i)] - \mathcal{I}_{\rm{net}}[\mu_i]) + \mathcal{L}( \mathcal{D}_{\rm{net}}[\mathcal{E}_{\rm{net}}[\mathbf{u}(\mu_i)]] - \mathbf{u}(\mu_i))\]

where \(\mathcal{L}\) is a specific loss function, typically the MSE:

\[\mathcal{L}(v) = \| v \|^2_2.\]

See also
Original reference: Hesthaven, Jan S., and Stefano Ubbiali. Non-intrusive reduced order modeling of nonlinear problems using neural networks. Journal of Computational Physics 363 (2018): 55-78. DOI 10.1016/j.jcp.2018.02.037.
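The two-network structure above can be sketched in plain PyTorch. The class names and layer sizes below are illustrative assumptions, not part of the PINA API; the only contract the solver documents is that the reduction network exposes encode and decode methods and the interpolation network maps parameters to the latent space.

```python
import torch

# Hypothetical reduction network: an autoencoder exposing the `encode` and
# `decode` methods required by the solver (E_net and D_net in the formulas).
class ReductionNetwork(torch.nn.Module):
    def __init__(self, field_dim=100, latent_dim=8):
        super().__init__()
        self.encoder = torch.nn.Sequential(
            torch.nn.Linear(field_dim, 32), torch.nn.Tanh(),
            torch.nn.Linear(32, latent_dim),
        )
        self.decoder = torch.nn.Sequential(
            torch.nn.Linear(latent_dim, 32), torch.nn.Tanh(),
            torch.nn.Linear(32, field_dim),
        )

    def encode(self, u):  # E_net: discretised snapshot -> latent coordinates
        return self.encoder(u)

    def decode(self, z):  # D_net: latent coordinates -> discretised snapshot
        return self.decoder(z)

# Hypothetical interpolation network: parameters mu -> latent coordinates
# (I_net in the formulas); its plain forward output is the interpolation.
class InterpolationNetwork(torch.nn.Module):
    def __init__(self, param_dim=2, latent_dim=8):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(param_dim, 32), torch.nn.Tanh(),
            torch.nn.Linear(32, latent_dim),
        )

    def forward(self, mu):
        return self.net(mu)
```

Both networks are trained jointly by the solver, so the latent space discovered by the encoder and the one targeted by the interpolation network are forced to coincide.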
Note

The specified reduction_network must contain two methods, namely encode for input encoding and decode for decoding the former result. The forward output of the interpolation_network represents the interpolation of the latent space obtained with reduction_network.encode.

Note
This solver uses the end-to-end training strategy, i.e. the reduction_network and interpolation_network are trained simultaneously. For a reference on this training strategy, see:

See also

Original reference: Pichi, Federico, Beatriz Moya, and Jan S. Hesthaven. A graph convolutional autoencoder approach to model order reduction for parametrized PDEs. Journal of Computational Physics 501 (2024): 112762. DOI 10.1016/j.jcp.2024.112762.
Warning

This solver works only for data-driven models. Hence, in the problem definition the conditions must only contain input (e.g. coefficient parameters, time parameters) and target.

Initialization of the ReducedOrderModelSolver class.

- Parameters:
problem (AbstractProblem) – The formulation of the problem.
reduction_network (torch.nn.Module) – The reduction network used for reducing the input space. It must contain two methods, namely encode for input encoding and decode for decoding the former result.
interpolation_network (torch.nn.Module) – The interpolation network for interpolating the control parameters to the latent space obtained by the reduction_network encoding.
loss (torch.nn.Module) – The loss function to be minimized. If None, the torch.nn.MSELoss loss is used. Default is None.
optimizer (Optimizer) – The optimizer to be used. If None, the torch.optim.Adam optimizer is used. Default is None.
scheduler (Scheduler) – Learning rate scheduler. If None, the torch.optim.lr_scheduler.ConstantLR scheduler is used. Default is None.
weighting (WeightingInterface) – The weighting schema to be used. If None, no weighting schema is used. Default is None.
use_lt (bool) – If True, the solver uses LabelTensors as input. Default is True.
- forward(x)[source]

Forward pass implementation. It computes the latent representation by calling the forward method of the interpolation_network on the input, and maps it to the output space by calling the decode method of the reduction_network.

- Parameters:
x (torch.Tensor | LabelTensor) – Input tensor.
- Returns:
Solver solution.
- Return type:
torch.Tensor | LabelTensor
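The forward pass described above amounts to composing the two networks; the encoder is not used at inference time. The stand-in modules below are illustrative assumptions, not PINA internals.

```python
import torch

# Minimal stand-ins (illustrative, not the PINA API): `decode` maps latent
# coordinates to the discretised field, `interpolation_network` maps the
# control parameters to latent coordinates.
latent_dim, field_dim, param_dim = 8, 100, 2
decode = torch.nn.Linear(latent_dim, field_dim)
interpolation_network = torch.nn.Linear(param_dim, latent_dim)

def forward(x):
    # Equivalent to reduction_network.decode(interpolation_network(x)):
    # the solver solution is the decoded interpolation of the latent space.
    return decode(interpolation_network(x))

mu = torch.randn(5, param_dim)   # batch of 5 parameter samples
solution = forward(mu)           # batch of 5 discretised fields
```

Note the asymmetry with training: encode appears only in the loss, while prediction needs just the interpolation and the decoder.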
- loss_data(input_pts, output_pts)[source]

Compute the data loss by evaluating the loss between the network’s output and the true solution. This method should not be overridden unless intentionally.
- Parameters:
input_pts (LabelTensor) – The input points to the neural network.
output_pts (LabelTensor) – The true solution to compare with the network’s output.
- Returns:
The supervised loss, averaged over the number of observations.
- Return type:
torch.Tensor
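The data loss combines the two terms of the training objective given earlier: a latent-space mismatch and a reconstruction error. The sketch below is a self-contained illustration with hypothetical stand-in modules, not the solver's actual implementation.

```python
import torch

# Illustrative stand-ins (assumptions, not PINA internals).
mse = torch.nn.MSELoss()
encode = torch.nn.Linear(100, 8)        # E_net: snapshot -> latent
decode = torch.nn.Linear(8, 100)        # D_net: latent -> snapshot
interpolation = torch.nn.Linear(2, 8)   # I_net: parameters -> latent

def loss_data(input_pts, output_pts):
    # First term: latent mismatch  L(E_net[u(mu)] - I_net[mu]).
    latent_true = encode(output_pts)
    latent_pred = interpolation(input_pts)
    # Second term: reconstruction  L(D_net[E_net[u(mu)]] - u(mu)).
    reconstruction = decode(latent_true)
    return mse(latent_pred, latent_true) + mse(reconstruction, output_pts)

mu = torch.randn(16, 2)    # input_pts: control parameters
u = torch.randn(16, 100)   # output_pts: true discretised snapshots
loss = loss_data(mu, u)    # scalar, averaged over the batch
```

Minimizing the two terms jointly is what makes the training end-to-end: the encoder, decoder, and interpolation network all receive gradients from the same objective.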