RBAPINN#

class RBAPINN(problem, model, extra_features=None, loss=MSELoss(), optimizer=<class 'torch.optim.adam.Adam'>, optimizer_kwargs={'lr': 0.001}, scheduler=<class 'torch.optim.lr_scheduler.ConstantLR'>, scheduler_kwargs={'factor': 1, 'total_iters': 0}, eta=0.001, gamma=0.999)[source]#

Bases: PINN

Residual-based Attention PINN (RBAPINN) solver class. This class implements the Residual-based Attention Physics Informed Neural Network solver, using a user-specified model to solve a specific problem. It can be used to solve both forward and inverse problems.

The Residual-based Attention Physics Informed Neural Network aims to find the solution \(\mathbf{u}:\Omega\rightarrow\mathbb{R}^m\) of the differential problem:

\[\begin{split}\begin{cases} \mathcal{A}[\mathbf{u}](\mathbf{x})=0\quad,\mathbf{x}\in\Omega\\ \mathcal{B}[\mathbf{u}](\mathbf{x})=0\quad, \mathbf{x}\in\partial\Omega \end{cases}\end{split}\]

minimizing the loss function

\[\mathcal{L}_{\rm{problem}} = \frac{1}{N_\Omega} \sum_{i=1}^{N_\Omega} \lambda_{\Omega}^{i} \mathcal{L} \left( \mathcal{A}[\mathbf{u}](\mathbf{x}_i) \right) + \frac{1}{N_{\partial\Omega}} \sum_{i=1}^{N_{\partial\Omega}} \lambda_{\partial\Omega}^{i} \mathcal{L} \left( \mathcal{B}[\mathbf{u}](\mathbf{x}_i) \right),\]

denoting the weights as \(\lambda_{\Omega}^1, \dots, \lambda_{\Omega}^{N_\Omega}\) and \(\lambda_{\partial\Omega}^1, \dots, \lambda_{\partial\Omega}^{N_{\partial\Omega}}\) for \(\Omega\) and \(\partial\Omega\), respectively.

The Residual-based Attention Physics Informed Neural Network updates the weights at every epoch according to the rule

\[\lambda_i^{k+1} \leftarrow \gamma\lambda_i^{k} + \eta\frac{\lvert r_i\rvert}{\max_j \lvert r_j\rvert},\]

where \(r_i\) denotes the residual at point \(i\), \(\gamma\) denotes the decay rate, and \(\eta\) is the learning rate for the weights’ update.
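The update rule can be sketched in plain PyTorch (a minimal standalone illustration, not PINA's internal implementation; the function name `update_weights` is hypothetical):

```python
import torch

def update_weights(weights, residuals, eta=0.001, gamma=0.999):
    # lambda_i^{k+1} = gamma * lambda_i^k + eta * |r_i| / max_j |r_j|
    r = residuals.abs()
    return gamma * weights + eta * r / r.max()

# Points with persistently large residuals accumulate larger weights;
# with 0 < gamma < 1, each weight stays bounded by eta / (1 - gamma).
weights = torch.zeros(4)
residuals = torch.tensor([0.1, 0.2, 0.4, 0.8])
for _ in range(100):
    weights = update_weights(weights, residuals)
```

With the default `eta=0.001` and `gamma=0.999`, the fixed point of the update is \(\eta\lvert r_i\rvert / \left((1-\gamma)\max_j \lvert r_j\rvert\right)\), so for constant residuals the weights saturate at 1 for the worst-resolved point.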

See also

Original reference: Sokratis J. Anagnostopoulos, Juan D. Toscano, Nikolaos Stergiopulos, and George E. Karniadakis. “Residual-based attention and connection to information bottleneck theory in PINNs”. Computer Methods in Applied Mechanics and Engineering 421 (2024): 116805. DOI: 10.1016/j.cma.2024.116805.

Parameters:
  • problem (AbstractProblem) – The formulation of the problem.

  • model (torch.nn.Module) – The neural network model to use.

  • extra_features (torch.nn.Module) – The additional input features to use as augmented input.

  • loss (torch.nn.Module) – The loss function to be minimized; default is torch.nn.MSELoss.

  • optimizer (torch.optim.Optimizer) – The neural network optimizer to use; default is torch.optim.Adam.

  • optimizer_kwargs (dict) – Optimizer constructor keyword args.

  • scheduler (torch.optim.LRScheduler) – Learning rate scheduler.

  • scheduler_kwargs (dict) – LR scheduler constructor keyword args.

  • eta (float | int) – The learning rate for the weights of the residual.

  • gamma (float) – The decay parameter in the update of the weights of the residual.

loss_phys(samples, equation)[source]#

Computes the physics loss for the residual-based attention PINN solver, given the samples and the equation.

Parameters:
  • samples (LabelTensor) – The samples to evaluate the physics loss.

  • equation (EquationInterface) – The governing equation representing the physics.

Returns:

The physics loss computed from the given samples and equation.

Return type:

LabelTensor
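As a sketch, the weighted physics loss that this method evaluates can be approximated in plain PyTorch (a standalone illustration of the loss formula above, not PINA's actual loss_phys; the helper name and exact weighting are assumptions based on this page):

```python
import torch

def rba_physics_loss(residuals, weights, eta=0.001, gamma=0.999):
    # Update the per-point weights from the current residuals (detached so
    # the weight update does not enter the computational graph), then return
    # the weighted mean of the squared residuals, matching the loss formula
    # with L = MSELoss.
    r = residuals.detach().abs()
    weights = gamma * weights + eta * r / r.max()
    loss = (weights * residuals ** 2).mean()
    return loss, weights

# Residuals at two sample points, weights initialized to zero.
residuals = torch.tensor([1.0, 2.0], requires_grad=True)
loss, weights = rba_physics_loss(residuals, torch.zeros(2))
```

Keeping `residuals` differentiable while detaching them inside the weight update means gradients flow to the model only through the weighted loss term, not through the attention weights themselves.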