SAPINN#

class SAPINN(problem, model, weights_function=Sigmoid(), extra_features=None, loss=MSELoss(), optimizer_model=<class 'torch.optim.adam.Adam'>, optimizer_model_kwargs={'lr': 0.001}, optimizer_weights=<class 'torch.optim.adam.Adam'>, optimizer_weights_kwargs={'lr': 0.001}, scheduler_model=<class 'torch.optim.lr_scheduler.ConstantLR'>, scheduler_model_kwargs={'factor': 1, 'total_iters': 0}, scheduler_weights=<class 'torch.optim.lr_scheduler.ConstantLR'>, scheduler_weights_kwargs={'factor': 1, 'total_iters': 0})[source]#

Bases: PINNInterface

Self Adaptive Physics Informed Neural Network (SAPINN) solver class. This class implements Self-Adaptive Physics Informed Neural Network solvers, using a user specified model to solve a specific problem. It can be used for solving both forward and inverse problems.

The Self-Adaptive Physics Informed Neural Network aims to find the solution \(\mathbf{u}:\Omega\rightarrow\mathbb{R}^m\) of the differential problem:

\[\begin{split}\begin{cases} \mathcal{A}[\mathbf{u}](\mathbf{x})=0\quad,\mathbf{x}\in\Omega\\ \mathcal{B}[\mathbf{u}](\mathbf{x})=0\quad, \mathbf{x}\in\partial\Omega \end{cases}\end{split}\]

weighting the pointwise loss evaluation through a mask \(m\) and self-adaptive weights that allow the loss function to focus on specific training samples. The loss function used to solve the problem is

\[\mathcal{L}_{\rm{problem}} = \frac{1}{N_{\Omega}} \sum_{i=1}^{N_{\Omega}} m \left( \lambda_{\Omega}^{i} \right) \mathcal{L} \left( \mathcal{A} [\mathbf{u}](\mathbf{x}_i) \right) + \frac{1}{N_{\partial\Omega}} \sum_{i=1}^{N_{\partial\Omega}} m \left( \lambda_{\partial\Omega}^{i} \right) \mathcal{L} \left( \mathcal{B}[\mathbf{u}](\mathbf{x}_i) \right),\]

denoting the self-adaptive weights as \(\lambda_{\Omega}^1, \dots, \lambda_{\Omega}^{N_\Omega}\) and \(\lambda_{\partial\Omega}^1, \dots, \lambda_{\partial\Omega}^{N_{\partial\Omega}}\) for \(\Omega\) and \(\partial\Omega\), respectively.

The Self-Adaptive Physics Informed Neural Network identifies the solution and appropriate self-adaptive weights by solving the following problem:

\[\min_{w} \max_{\lambda_{\Omega}^k, \lambda_{\partial \Omega}^s} \mathcal{L} ,\]

where \(w\) denotes the network parameters, and \(\mathcal{L}\) is a specific loss function, by default the Mean Squared Error:

\[\mathcal{L}(v) = \| v \|^2_2.\]
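The min-max problem above can be illustrated with a minimal sketch (not PINA's actual implementation): gradient descent on the network parameters \(w\) and gradient ascent on the per-point self-adaptive weights \(\lambda\), with a sigmoid mask \(m\). The residual target, layer sizes, and learning rates below are illustrative placeholders.

```python
import torch

# Toy sketch of the SAPINN min-max update (illustrative only):
# descent on network parameters, ascent on the per-point
# self-adaptive weights, with a sigmoid mask m.
torch.manual_seed(0)
x = torch.linspace(0, 1, 20).unsqueeze(1)            # collocation points
model = torch.nn.Sequential(
    torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1)
)
lam = torch.nn.Parameter(torch.zeros(20, 1))         # self-adaptive weights
mask = torch.nn.Sigmoid()                            # mask m

opt_model = torch.optim.Adam(model.parameters(), lr=1e-3)
opt_weights = torch.optim.Adam([lam], lr=1e-3, maximize=True)  # gradient ascent

for _ in range(100):
    opt_model.zero_grad()
    opt_weights.zero_grad()
    residual = model(x) - torch.sin(torch.pi * x)    # stand-in for A[u](x)
    loss = (mask(lam) * residual.pow(2)).mean()      # masked pointwise loss
    loss.backward()
    opt_model.step()                                  # min over w
    opt_weights.step()                                # max over lambda
```

The ascent step drives the weights up wherever the residual remains large, so the mask progressively emphasizes the hardest training samples.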

See also

Original reference: McClenny, Levi D., and Ulisses M. Braga-Neto. “Self-adaptive physics-informed neural networks.” Journal of Computational Physics 474 (2023): 111722. DOI: 10.1016/j.jcp.2022.111722.

Parameters:
  • problem (AbstractProblem) – The formulation of the problem.

  • model (torch.nn.Module) – The neural network model to use for approximating the solution.

  • weights_function (torch.nn.Module) – The neural network model used as the mask of SAPINN; default is Sigmoid.

  • extra_features (list(torch.nn.Module)) – The additional input features to use as augmented input. If None no extra features are passed. If it is a list of torch.nn.Module, the extra feature list is passed to all models. If it is a list of extra features’ lists, each single list of extra feature is passed to a model.

  • loss (torch.nn.Module) – The loss function used as minimizer, default torch.nn.MSELoss.

  • optimizer_model (torch.optim.Optimizer) – The optimizer to use for the model network; default is torch.optim.Adam.

  • optimizer_model_kwargs (dict) – Optimizer constructor keyword arguments for the model.

  • optimizer_weights (torch.optim.Optimizer) – The optimizer to use for the mask model; default is torch.optim.Adam.

  • optimizer_weights_kwargs (dict) – Optimizer constructor keyword arguments for the mask module.

  • scheduler_model (torch.optim.LRScheduler) – Learning rate scheduler for the model.

  • scheduler_model_kwargs (dict) – Learning rate scheduler constructor keyword arguments for the model.

  • scheduler_weights (torch.optim.LRScheduler) – Learning rate scheduler for the mask model.

  • scheduler_weights_kwargs (dict) – Learning rate scheduler constructor keyword arguments for the mask model.

forward(x)[source]#

Forward pass implementation for the PINN solver. It returns the function evaluation \(\mathbf{u}(\mathbf{x})\) at the control points \(\mathbf{x}\).

Parameters:

x (LabelTensor) – Input tensor for the SAPINN solver. It expects a tensor \(N \times D\), where \(N\) is the number of points in the mesh and \(D\) is the dimension of the problem.

Returns:

PINN solution.

Return type:

LabelTensor

loss_phys(samples, equation)[source]#

Computes the physics loss for the SAPINN solver based on given samples and equation.

Parameters:
  • samples (LabelTensor) – The samples to evaluate the physics loss.

  • equation (EquationInterface) – The governing equation representing the physics.

Returns:

The physics loss calculated based on given samples and equation.

Return type:

torch.Tensor

loss_data(input_tensor, output_tensor)[source]#

Computes the data loss for the SAPINN solver based on input and output. It computes the loss between the network output against the true solution.

Parameters:
  • input_tensor (LabelTensor) – The input to the neural networks.

  • output_tensor (LabelTensor) – The true solution to compare the network solution.

Returns:

The computed data loss.

Return type:

torch.Tensor

configure_optimizers()[source]#

Optimizer configuration for the SAPINN solver.

Returns:

The optimizers and the schedulers.

Return type:

tuple(list, list)
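The return shape documented above, a pair of lists pairing each optimizer with its scheduler, can be sketched as follows; the stand-in modules and the hyperparameters (mirroring the constructor defaults) are placeholders, not PINA's actual internals.

```python
import torch

# Sketch of the (optimizers, schedulers) pair that configure_optimizers
# is documented to return; modules and hyperparameters are placeholders.
net = torch.nn.Linear(1, 1)                       # stand-in model network
lam = torch.nn.Parameter(torch.zeros(10, 1))      # stand-in adaptive weights

opt_model = torch.optim.Adam(net.parameters(), lr=0.001)
opt_weights = torch.optim.Adam([lam], lr=0.001)
sched_model = torch.optim.lr_scheduler.ConstantLR(
    opt_model, factor=1, total_iters=0
)
sched_weights = torch.optim.lr_scheduler.ConstantLR(
    opt_weights, factor=1, total_iters=0
)

optimizers = [opt_model, opt_weights]
schedulers = [sched_model, sched_weights]
```

Returning two optimizers this way lets the Lightning training loop step the model and the mask weights separately.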

on_train_batch_end(outputs, batch, batch_idx)[source]#

This method is called at the end of each training batch, and overrides the PyTorch Lightning implementation for logging the checkpoints.

Parameters:
  • outputs (torch.Tensor) – The output from the model for the current batch.

  • batch (tuple) – The current batch of data.

  • batch_idx (int) – The index of the current batch.

Returns:

Whatever is returned by the parent method on_train_batch_end.

Return type:

Any

on_train_start()[source]#

This method is called at the start of training to set the self-adaptive weights as parameters of the mask model.

Returns:

Whatever is returned by the parent method on_train_start.

Return type:

Any

on_load_checkpoint(checkpoint)[source]#

Overrides the PyTorch Lightning on_load_checkpoint to handle checkpoints for the self-adaptive weights. This method should not be overridden unless done intentionally.

Parameters:

checkpoint (dict) – Pytorch Lightning checkpoint dict.

property neural_net#

Returns the neural network model.

Returns:

The neural network model.

Return type:

torch.nn.Module

property weights_dict#

Returns the mask models that apply the mask to the self-adaptive weights, one for each loss term that composes the global loss of the problem.

Returns:

The ModuleDict for mask models.

Return type:

torch.nn.ModuleDict
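For intuition, such a dictionary might look like the sketch below, with one mask module per problem condition; the condition names are hypothetical, not keys PINA guarantees.

```python
import torch

# Hypothetical ModuleDict of mask models, one per loss term;
# the condition names "laplace_equation" and "boundary" are made up.
weights_dict = torch.nn.ModuleDict({
    "laplace_equation": torch.nn.Sigmoid(),
    "boundary": torch.nn.Sigmoid(),
})
```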

property scheduler_model#

Returns the scheduler associated with the neural network model.

Returns:

The scheduler for the neural network model.

Return type:

torch.optim.lr_scheduler._LRScheduler

property scheduler_weights#

Returns the scheduler associated with the mask model (if applicable).

Returns:

The scheduler for the mask model.

Return type:

torch.optim.lr_scheduler._LRScheduler

property optimizer_model#

Returns the optimizer associated with the neural network model.

Returns:

The optimizer for the neural network model.

Return type:

torch.optim.Optimizer

property optimizer_weights#

Returns the optimizer associated with the mask model (if applicable).

Returns:

The optimizer for the mask model.

Return type:

torch.optim.Optimizer