SelfAdaptivePINN#

class SelfAdaptivePINN(problem, model, weight_function=Sigmoid(), optimizer_model=None, optimizer_weights=None, scheduler_model=None, scheduler_weights=None, weighting=None, loss=None)[source]#

Bases: PINNInterface, MultiSolverInterface

Self-Adaptive Physics-Informed Neural Network (SelfAdaptivePINN) solver class. This class implements the Self-Adaptive Physics-Informed Neural Network solver, using a user-specified model to solve a specific differential problem. It can be used to solve both forward and inverse problems.

The Self-Adaptive Physics-Informed Neural Network solver aims to find the solution \(\mathbf{u}:\Omega\rightarrow\mathbb{R}^m\) of a differential problem:

\[\begin{split}\begin{cases} \mathcal{A}[\mathbf{u}](\mathbf{x})=0\quad,\mathbf{x}\in\Omega\\ \mathcal{B}[\mathbf{u}](\mathbf{x})=0\quad, \mathbf{x}\in\partial\Omega \end{cases}\end{split}\]

integrating pointwise loss evaluation using a mask \(m\) and self-adaptive weights, which allow the model to focus on regions of the domain where the residual is higher.

The loss function to solve the problem is

\[\mathcal{L}_{\rm{problem}} = \frac{1}{N_\Omega} \sum_{i=1}^{N_\Omega} m \left( \lambda_{\Omega}^{i} \right) \mathcal{L} \left( \mathcal{A} [\mathbf{u}](\mathbf{x}_i) \right) + \frac{1}{N_{\partial\Omega}} \sum_{i=1}^{N_{\partial\Omega}} m \left( \lambda_{\partial\Omega}^{i} \right) \mathcal{L} \left( \mathcal{B}[\mathbf{u}](\mathbf{x}_i) \right),\]

denoting the self-adaptive weights as \(\lambda_{\Omega}^1, \dots, \lambda_{\Omega}^{N_\Omega}\) and \(\lambda_{\partial \Omega}^1, \dots, \lambda_{\partial \Omega}^{N_{\partial \Omega}}\) for \(\Omega\) and \(\partial \Omega\), respectively.

The Self-Adaptive Physics-Informed Neural Network solver identifies the solution and the appropriate self-adaptive weights by solving the following optimization problem:

\[\min_{w} \max_{\lambda_{\Omega}^{i}, \lambda_{\partial \Omega}^{j}} \mathcal{L}_{\rm{problem}},\]

where \(w\) denotes the network parameters, and \(\mathcal{L}\) is a specific loss function, typically the MSE:

\[\mathcal{L}(v) = \| v \|^2_2.\]
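As a conceptual sketch only (not the internals of this class), the saddle-point problem is typically handled by gradient descent on the network parameters and gradient ascent on the self-adaptive weights, e.g. by flipping the sign of the weights' gradient before their optimizer step. All names below are illustrative.

import torch

def sa_step(model, sa_weights, mask, residual_fn, points, opt_model, opt_weights):
    # residual_fn returns the pointwise residuals A[u](x_i).
    residuals = residual_fn(model, points)
    # (1/N) * sum_i m(lambda_i) * |A[u](x_i)|^2
    loss = (mask(sa_weights) * residuals.pow(2)).mean()

    opt_model.zero_grad()
    opt_weights.zero_grad()
    loss.backward()

    opt_model.step()        # descent on the network parameters w
    sa_weights.grad.neg_()  # flip the sign: ascent on the self-adaptive weights
    opt_weights.step()
    return loss.detach()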

See also

Original reference: McClenny, Levi D., and Ulisses M. Braga-Neto. Self-adaptive physics-informed neural networks. Journal of Computational Physics 474 (2023): 111722. DOI: 10.1016/j.jcp.2022.111722.

Initialization of the SelfAdaptivePINN class.

Parameters:
  • problem (AbstractProblem) – The problem to be solved.

  • model (torch.nn.Module) – The neural network model to be used.

  • weight_function (torch.nn.Module) – The mask model \(m\) applied to the self-adaptive weights. Default is Sigmoid().

  • optimizer_model (Optimizer) – The optimizer for the model. Default is None.

  • optimizer_weights (Optimizer) – The optimizer for the self-adaptive weights. Default is None.

  • scheduler_model (Scheduler) – The learning-rate scheduler for the model. Default is None.

  • scheduler_weights (Scheduler) – The learning-rate scheduler for the self-adaptive weights. Default is None.

  • weighting (WeightingInterface) – The weighting schema used to aggregate the losses of the conditions. Default is None.

  • loss (torch.nn.Module) – The loss function used to compute the residuals. Default is None.

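A minimal construction sketch. The import paths, the Poisson2DSquareProblem benchmark, and the FeedForward model below are assumptions not documented on this page; any PINA problem and torch.nn.Module can take their place.

import torch
from pina.model import FeedForward                    # assumed import path
from pina.solver import SelfAdaptivePINN              # assumed import path
from pina.problem.zoo import Poisson2DSquareProblem   # assumed benchmark problem

# Benchmark problem with its domain discretised into collocation points.
problem = Poisson2DSquareProblem()
problem.discretise_domain(100)

# Simple feed-forward network mapping the problem inputs to its outputs.
model = FeedForward(
    input_dimensions=len(problem.input_variables),
    output_dimensions=len(problem.output_variables),
)

solver = SelfAdaptivePINN(
    problem=problem,
    model=model,
    weight_function=torch.nn.Sigmoid(),  # mask m applied to the self-adaptive weights
)
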
training_step(batch, batch_idx, **kwargs)[source]#

Solver training step. It runs the optimization cycle and aggregates the losses using the weighting attribute.

Parameters:
  • batch (list[tuple[str, dict]]) – A batch of data. Each element is a tuple containing a condition name and a dictionary of points.

  • batch_idx (int) – The index of the current batch.

  • kwargs (dict) – Additional keyword arguments passed to optimization_cycle.

Returns:

The loss of the training step.

Return type:

torch.Tensor

validation_step(batch, **kwargs)[source]#

The validation step for the Self-Adaptive PINN solver. It returns the average residual computed with the non-aggregated loss function.

Parameters:
  • batch (list[tuple[str, dict]]) – A batch of data. Each element is a tuple containing a condition name and a dictionary of points.

  • kwargs (dict) – Additional keyword arguments passed to optimization_cycle.

Returns:

The loss of the validation step.

Return type:

torch.Tensor

test_step(batch, **kwargs)[source]#

The test step for the Self-Adaptive PINN solver. It returns the average residual computed with the non-aggregated loss function.

Parameters:
  • batch (list[tuple[str, dict]]) – A batch of data. Each element is a tuple containing a condition name and a dictionary of points.

  • kwargs (dict) – Additional keyword arguments passed to optimization_cycle.

Returns:

The loss of the test step.

Return type:

torch.Tensor
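training_step, validation_step, and test_step are hooks driven by a trainer rather than called directly. A minimal sketch, assuming PINA's Trainer and the solver constructed above:

from pina import Trainer  # assumed import path

trainer = Trainer(solver=solver, max_epochs=1000, accelerator="cpu")
trainer.train()  # dispatches batches to training_step / validation_step internally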

loss_phys(samples, equation)[source]#

Computes the physics loss for the physics-informed solver based on the provided samples and equation.

Parameters:
  • samples (LabelTensor) – The samples on which to evaluate the physics loss.

  • equation (EquationInterface) – The governing equation.

Returns:

The computed physics loss.

Return type:

LabelTensor

loss_data(input, target)[source]#

Compute the data loss for the Self-Adaptive PINN solver by evaluating the loss between the network's output and the true solution. This method should not be overridden unless intentionally.

Parameters:
  • input (LabelTensor) – The input points at which the network is evaluated.

  • target (LabelTensor) – The target values compared against the network's output.

Returns:

The supervised loss, averaged over the number of observations.

Return type:

LabelTensor | torch.Tensor
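A hedged sketch of the documented signature; this method is normally invoked by the solver's optimization cycle, and the tensors below are purely illustrative.

import torch
from pina import LabelTensor  # assumed import path

# Illustrative supervised data: input points and the corresponding target values.
pts = LabelTensor(torch.rand(50, 2), labels=["x", "y"])
true_u = LabelTensor(torch.rand(50, 1), labels=["u"])

data_loss = solver.loss_data(input=pts, target=true_u)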

forward(x)[source]#

Forward pass.

Parameters:

x (torch.Tensor | LabelTensor) – Input tensor.

Returns:

The output of the neural network.

Return type:

torch.Tensor | LabelTensor
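A minimal call sketch, assuming the solver constructed above and a LabelTensor of input points; the labels are illustrative and must match the problem's input variables.

import torch
from pina import LabelTensor  # assumed import path

pts = LabelTensor(torch.rand(10, 2), labels=["x", "y"])
u = solver.forward(pts)  # network prediction at the sampled points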

configure_optimizers()[source]#

Optimizer configuration.

Returns:

The optimizers and the schedulers.

Return type:

tuple[list[Optimizer], list[Scheduler]]
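A hedged inspection sketch: the returned lists are expected to hold one entry for the network parameters and one for the self-adaptive weights, matching the optimizer_* and scheduler_* properties below.

optimizers, schedulers = solver.configure_optimizers()
print(len(optimizers), len(schedulers))  # one entry each for the model and the weights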

property model#

The model.

Returns:

The model.

Return type:

torch.nn.Module

property weights#

The self-adaptive weights.

Returns:

The self-adaptive weights.

Return type:

torch.nn.Module

property scheduler_model#

The scheduler associated with the model.

Returns:

The scheduler for the model.

Return type:

Scheduler

property scheduler_weights#

The scheduler associated with the mask model (the self-adaptive weights).

Returns:

The scheduler for the mask model.

Return type:

Scheduler

property optimizer_model#

The optimizer associated with the model.

Returns:

The optimizer for the model.

Return type:

Optimizer

property optimizer_weights#

The optimizer associated with the mask model (the self-adaptive weights).

Returns:

The optimizer for the mask model.

Return type:

Optimizer
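A short access sketch tying the properties above together (attribute names as documented on this page):

net = solver.model                 # the underlying torch.nn.Module
sa_weights = solver.weights        # the self-adaptive weights, stored as a torch.nn.Module
opt_net = solver.optimizer_model   # optimizer performing descent on the network parameters
opt_sa = solver.optimizer_weights  # optimizer performing ascent on the self-adaptive weights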