CompetitivePINN#

class CompetitivePINN(problem, model, discriminator=None, optimizer_model=None, optimizer_discriminator=None, scheduler_model=None, scheduler_discriminator=None, weighting=None, loss=None)[source]#

Bases: PINNInterface, MultiSolverInterface

Competitive Physics-Informed Neural Network (CompetitivePINN) solver class. This class implements the Competitive Physics-Informed Neural Network solver, using a user-specified model to solve a specific problem. It can be used to solve both forward and inverse problems.

The Competitive Physics-Informed Neural Network solver aims to find the solution \(\mathbf{u}:\Omega\rightarrow\mathbb{R}^m\) of a differential problem:

\[\begin{split}\begin{cases} \mathcal{A}[\mathbf{u}](\mathbf{x})=0\quad,\mathbf{x}\in\Omega\\ \mathcal{B}[\mathbf{u}](\mathbf{x})=0\quad, \mathbf{x}\in\partial\Omega \end{cases}\end{split}\]

minimizing the loss function with respect to the model parameters, while maximizing it with respect to the discriminator parameters:

\[\mathcal{L}_{\rm{problem}} = \frac{1}{N}\sum_{i=1}^N \mathcal{L}(D(\mathbf{x}_i)\mathcal{A}[\mathbf{u}](\mathbf{x}_i))+ \frac{1}{N}\sum_{i=1}^N \mathcal{L}(D(\mathbf{x}_i)\mathcal{B}[\mathbf{u}](\mathbf{x}_i)),\]

where \(D\) is the discriminator network, which identifies the points where the model performs worst, and \(\mathcal{L}\) is a specific loss function, typically the MSE:

\[\mathcal{L}(v) = \| v \|^2_2.\]
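
In the loss above, the discriminator's outputs act as per-point multipliers on the residuals before the squared norm is taken. A minimal numerical sketch of this weighting (the residual and multiplier values below are invented purely for illustration):

```python
import numpy as np

# Hypothetical values for illustration only:
# residuals A[u](x_i) at N = 4 collocation points,
# and the discriminator's per-point outputs D(x_i).
residuals = np.array([0.1, -0.4, 0.05, 0.3])
d_weights = np.array([1.0, 2.5, 0.8, 1.7])

# L(v) = ||v||_2^2 applied point-wise, averaged over the batch:
# (1/N) * sum_i (D(x_i) * A[u](x_i))**2
loss = np.mean((d_weights * residuals) ** 2)
```

A larger \(D(\mathbf{x}_i)\) amplifies the penalty at point \(\mathbf{x}_i\), so maximizing the loss over the discriminator concentrates the training signal where the residual is worst.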

See also

Original reference: Zeng, Qi, et al. "Competitive Physics Informed Networks." International Conference on Learning Representations (ICLR), 2022. OpenReview preprint.

Initialization of the CompetitivePINN class.

Parameters:
  • problem (AbstractProblem) – The problem to be solved.

  • model (torch.nn.Module) – The neural network model to be used.

  • discriminator (torch.nn.Module) – The discriminator network. Default is None.

  • optimizer_model (Optimizer) – The optimizer for the model. Default is None.

  • optimizer_discriminator (Optimizer) – The optimizer for the discriminator. Default is None.

  • scheduler_model (Scheduler) – The learning rate scheduler for the model. Default is None.

  • scheduler_discriminator (Scheduler) – The learning rate scheduler for the discriminator. Default is None.

  • weighting (WeightingInterface) – The weighting schema to be used. Default is None.

  • loss (torch.nn.Module) – The loss function to be minimized. Default is None.
forward(x)[source]#

Forward pass.

Parameters:

x (LabelTensor) – Input tensor.

Returns:

The output of the neural network.

Return type:

LabelTensor

training_step(batch)[source]#

Solver training step, overridden to perform manual optimization.

Parameters:

batch (list[tuple[str, dict]]) – A batch of data. Each element is a tuple containing a condition name and a dictionary of points.

Returns:

The aggregated loss.

Return type:

LabelTensor
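
The manual optimization in this step alternates a gradient-descent update on the model parameters with a gradient-ascent update on the discriminator parameters. A toy one-parameter sketch of this min-max scheme (the residual function, initial values, and learning rates are invented for illustration):

```python
# Toy stand-in for the min-max game: residual r(u) = u - 1 plays the role
# of A[u](x), the scalar d plays the role of D(x), and the loss is
# (d * r(u))**2. The model parameter u descends the loss, while the
# discriminator parameter d ascends it, mirroring the two optimizers
# used in training_step. All values here are illustrative assumptions.
u, d = 0.0, 1.0          # initial parameters (assumed)
lr_u, lr_d = 0.1, 0.05   # separate learning rates for the two players

for _ in range(200):
    r = u - 1.0
    grad_u = 2.0 * d ** 2 * r   # d(loss)/du
    grad_d = 2.0 * d * r ** 2   # d(loss)/dd
    u -= lr_u * grad_u          # model: gradient descent (minimize)
    d += lr_d * grad_d          # discriminator: gradient ascent (maximize)

# u is driven toward the residual's root at u = 1, while d grows most
# while the residual is still large and then settles.
```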

loss_phys(samples, equation)[source]#

Computes the physics loss for the physics-informed solver based on the provided samples and equation.

Parameters:
  • samples (LabelTensor) – The samples to evaluate the physics loss.

  • equation (EquationInterface) – The governing equation.

Returns:

The computed physics loss.

Return type:

LabelTensor

configure_optimizers()[source]#

Optimizer configuration.

Returns:

The optimizers and the schedulers.

Return type:

tuple[list[Optimizer], list[Scheduler]]
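
For orientation, the returned tuple follows the usual PyTorch Lightning convention for multiple optimizers: a list of optimizers paired with a list of schedulers. The ordering shown below (model first, discriminator second) is an assumption for illustration, and the placeholder strings stand in for real Optimizer and Scheduler objects:

```python
# Hypothetical sketch of the return shape only.
def configure_optimizers(optimizer_model, optimizer_discriminator,
                         scheduler_model, scheduler_discriminator):
    # tuple[list[Optimizer], list[Scheduler]], as documented above
    return (
        [optimizer_model, optimizer_discriminator],
        [scheduler_model, scheduler_discriminator],
    )

optimizers, schedulers = configure_optimizers(
    "opt_model", "opt_disc", "sched_model", "sched_disc"
)
```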

on_train_batch_end(outputs, batch, batch_idx)[source]#

This method is called at the end of each training batch and overrides the PyTorch Lightning implementation to log checkpoints.

Parameters:
  • outputs (torch.Tensor) – The model’s output for the current batch.

  • batch (list[tuple[str, dict]]) – A batch of data. Each element is a tuple containing a condition name and a dictionary of points.

  • batch_idx (int) – The index of the current batch.

property neural_net#

The model.

Returns:

The model.

Return type:

torch.nn.Module

property discriminator#

The discriminator.

Returns:

The discriminator.

Return type:

torch.nn.Module

property optimizer_model#

The optimizer associated with the model.

Returns:

The optimizer for the model.

Return type:

Optimizer

property optimizer_discriminator#

The optimizer associated with the discriminator.

Returns:

The optimizer for the discriminator.

Return type:

Optimizer

property scheduler_model#

The scheduler associated with the model.

Returns:

The scheduler for the model.

Return type:

Scheduler

property scheduler_discriminator#

The scheduler associated with the discriminator.

Returns:

The scheduler for the discriminator.

Return type:

Scheduler