CompetitivePINN#
- class CompetitivePINN(problem, model, discriminator=None, optimizer_model=None, optimizer_discriminator=None, scheduler_model=None, scheduler_discriminator=None, weighting=None, loss=None)[source]#
Bases: PINNInterface, MultiSolverInterface
Competitive Physics-Informed Neural Network (CompetitivePINN) solver class. This class implements the Competitive Physics-Informed Neural Network solver, using a user-specified model to solve a specific problem. It can be used to solve both forward and inverse problems.

The Competitive Physics-Informed Neural Network solver aims to find the solution \(\mathbf{u}:\Omega\rightarrow\mathbb{R}^m\) of a differential problem:
\[\begin{split}\begin{cases} \mathcal{A}[\mathbf{u}](\mathbf{x})=0\quad,\mathbf{x}\in\Omega\\ \mathcal{B}[\mathbf{u}](\mathbf{x})=0\quad, \mathbf{x}\in\partial\Omega \end{cases}\end{split}\]

minimizing the loss function with respect to the model parameters, while maximizing it with respect to the discriminator parameters:
\[\mathcal{L}_{\rm{problem}} = \frac{1}{N}\sum_{i=1}^N \mathcal{L}(D(\mathbf{x}_i)\mathcal{A}[\mathbf{u}](\mathbf{x}_i)) + \frac{1}{N}\sum_{i=1}^N \mathcal{L}(D(\mathbf{x}_i)\mathcal{B}[\mathbf{u}](\mathbf{x}_i)),\]

where \(D\) is the discriminator network, which identifies the points where the model performs worst, and \(\mathcal{L}\) is a specific loss function, typically the MSE:
\[\mathcal{L}(v) = \| v \|^2_2.\]

See also
Original reference: Zeng, Qi, et al. Competitive physics informed networks. International Conference on Learning Representations, ICLR 2022 OpenReview Preprint.
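The role of the discriminator in the objective above can be illustrated with a minimal sketch (plain Python with hypothetical names; the actual solver operates on torch tensors): the discriminator output \(D(\mathbf{x}_i)\) rescales each residual before the squared loss is taken, so points where the model performs poorly can be amplified.

```python
def competitive_loss(residuals, d_values):
    """Mean squared discriminator-weighted residual:
    (1/N) * sum_i (D(x_i) * A[u](x_i))**2 -- a sketch of the
    physics term in the competitive objective above."""
    n = len(residuals)
    return sum((d * r) ** 2 for d, r in zip(d_values, residuals)) / n

residuals = [0.1, -0.5, 0.2]       # stand-ins for A[u](x_i) at sample points
uniform = competitive_loss(residuals, [1.0, 1.0, 1.0])
focused = competitive_loss(residuals, [1.0, 2.0, 1.0])  # D emphasizes the worst point
# amplifying the largest residual increases the loss the model must minimize
```

Maximizing over \(D\) thus steers training effort toward the worst-approximated regions of the domain.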
Initialization of the CompetitivePINN class.

- Parameters:
problem (AbstractProblem) – The problem to be solved.
model (torch.nn.Module) – The neural network model to be used.
discriminator (torch.nn.Module) – The discriminator to be used. If None, the discriminator is a deep copy of the model. Default is None.
optimizer_model (torch.optim.Optimizer) – The optimizer of the model. If None, the torch.optim.Adam optimizer is used. Default is None.
optimizer_discriminator (torch.optim.Optimizer) – The optimizer of the discriminator. If None, the torch.optim.Adam optimizer is used. Default is None.
scheduler_model (Scheduler) – Learning rate scheduler for the model. If None, the torch.optim.lr_scheduler.ConstantLR scheduler is used. Default is None.
scheduler_discriminator (Scheduler) – Learning rate scheduler for the discriminator. If None, the torch.optim.lr_scheduler.ConstantLR scheduler is used. Default is None.
weighting (WeightingInterface) – The weighting schema to be used. If None, no weighting schema is used. Default is None.
loss (torch.nn.Module) – The loss function to be minimized. If None, the torch.nn.MSELoss loss is used. Default is None.
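The default for discriminator can be pictured with a small sketch (a hypothetical helper, not the library's actual code): when no discriminator is supplied, an independent deep copy of the model is used, so the two networks start identical but train with separate parameters.

```python
from copy import deepcopy

def resolve_discriminator(model, discriminator=None):
    # Per the parameter description above: if no discriminator is
    # supplied, fall back to a deep copy of the model.
    return deepcopy(model) if discriminator is None else discriminator

class TinyModel:
    """Stand-in for a torch.nn.Module with mutable parameters."""
    def __init__(self):
        self.weights = [0.0, 1.0]

m = TinyModel()
d = resolve_discriminator(m)
d.weights[0] = 5.0   # mutating the copy leaves the model untouched
```

A deep copy matters here: a shallow reference would tie the discriminator's parameters to the model's, collapsing the minimax game.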
- forward(x)[source]#
Forward pass.
- Parameters:
x (LabelTensor) – Input tensor.
- Returns:
The output of the neural network.
- Return type:
LabelTensor
- training_step(batch)[source]#
Solver training step, overridden to perform manual optimization.
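Because optimization is manual, each training step updates the model and the discriminator in opposite directions on the same loss. A toy one-parameter version of that minimax loop (pure Python with analytic gradients and hypothetical learning rates, not the solver's actual torch code) is:

```python
# Toy loss L(m, d) = (d * (m - target))**2: the model parameter m
# descends the gradient while the discriminator parameter d ascends it.
target, m, d = 1.0, 0.0, 1.0
lr_model, lr_disc = 0.1, 0.01
for _ in range(200):
    r = m - target                 # stand-in for the residual A[u](x)
    grad_m = 2 * d * d * r         # dL/dm
    grad_d = 2 * d * r * r         # dL/dd
    m -= lr_model * grad_m         # model step: minimize the loss
    d += lr_disc * grad_d          # discriminator step: maximize the loss
```

The discriminator grows while the residual is large and stops moving as the residual vanishes, which is the one-dimensional shadow of the competitive dynamics.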
- loss_phys(samples, equation)[source]#
Computes the physics loss for the physics-informed solver based on the provided samples and equation.
- Parameters:
samples (LabelTensor) – The samples to evaluate the physics loss.
equation (EquationInterface) – The governing equation.
- Returns:
The computed physics loss.
- Return type:
LabelTensor
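What the physics loss evaluates can be pictured with a minimal example (plain Python with a finite-difference derivative; the real solver differentiates LabelTensor inputs automatically): for the equation \(u'(x) - u(x) = 0\), the residual at a sample point is

```python
import math

def residual(u, x, h=1e-5):
    # Finite-difference stand-in for the governing equation
    # A[u](x) = u'(x) - u(x), evaluated at a sample point x.
    du = (u(x + h) - u(x - h)) / (2 * h)
    return du - u(x)

# exp solves u' - u = 0 exactly, so its residual vanishes;
# a wrong candidate such as sin leaves a visibly nonzero residual.
good = abs(residual(math.exp, 0.7))
bad = abs(residual(math.sin, 0.7))
```

The physics loss aggregates such residuals over the sample points, weighted by the discriminator as in the objective above.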
- on_train_batch_end(outputs, batch, batch_idx)[source]#
This method is called at the end of each training batch and overrides the PyTorch Lightning implementation to log checkpoints.
- property neural_net#
The model.
- Returns:
The model.
- Return type:
torch.nn.Module
- property discriminator#
The discriminator.
- Returns:
The discriminator.
- Return type:
torch.nn.Module
- property optimizer_model#
The optimizer associated to the model.
- Returns:
The optimizer for the model.
- Return type:
Optimizer
- property optimizer_discriminator#
The optimizer associated to the discriminator.
- Returns:
The optimizer for the discriminator.
- Return type:
Optimizer
- property scheduler_model#
The scheduler associated to the model.
- Returns:
The scheduler for the model.
- Return type:
Scheduler