CompetitivePINN#
- class CompetitivePINN(problem, model, discriminator=None, loss=MSELoss(), optimizer_model=<class 'torch.optim.adam.Adam'>, optimizer_model_kwargs={'lr': 0.001}, optimizer_discriminator=<class 'torch.optim.adam.Adam'>, optimizer_discriminator_kwargs={'lr': 0.001}, scheduler_model=<class 'torch.optim.lr_scheduler.ConstantLR'>, scheduler_model_kwargs={'factor': 1, 'total_iters': 0}, scheduler_discriminator=<class 'torch.optim.lr_scheduler.ConstantLR'>, scheduler_discriminator_kwargs={'factor': 1, 'total_iters': 0})[source]#
Bases: PINNInterface
Competitive Physics Informed Neural Network (PINN) solver class. This class implements Competitive Physics Informed Neural Network solvers, using a user-specified model to solve a specific problem. It can be used for solving both forward and inverse problems.

The Competitive Physics Informed Neural Network aims to find the solution \(\mathbf{u}:\Omega\rightarrow\mathbb{R}^m\) of the differential problem:
\[\begin{split}\begin{cases}
\mathcal{A}[\mathbf{u}](\mathbf{x})=0\quad,\mathbf{x}\in\Omega\\
\mathcal{B}[\mathbf{u}](\mathbf{x})=0\quad,\mathbf{x}\in\partial\Omega
\end{cases}\end{split}\]

with a minimization (on model parameters) and maximization (on discriminator parameters) of the loss function

\[\mathcal{L}_{\rm{problem}} = \frac{1}{N}\sum_{i=1}^N \mathcal{L}(D(\mathbf{x}_i)\mathcal{A}[\mathbf{u}](\mathbf{x}_i)) + \frac{1}{N}\sum_{i=1}^N \mathcal{L}(D(\mathbf{x}_i)\mathcal{B}[\mathbf{u}](\mathbf{x}_i))\]

where \(D\) is the discriminator network, which tries to find the points where the network performs worst, and \(\mathcal{L}\) is a specific loss function, with the default being the Mean Square Error:
\[\mathcal{L}(v) = \| v \|^2_2.\]

See also
Original reference: Zeng, Qi, et al. “Competitive physics informed networks.” International Conference on Learning Representations, ICLR 2022 OpenReview Preprint.
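To make the min-max structure concrete, the following is a minimal, self-contained PyTorch sketch of one competitive update step on a toy 1D residual \(\mathcal{A}[\mathbf{u}](x) = u''(x) - \sin(x)\). It is illustrative only and does not reproduce PINA's internal implementation; the networks, residual, and learning rates are arbitrary placeholders, and the boundary term is omitted.

```python
import torch

# Illustrative sketch only (not PINA's internal implementation):
# one competitive update step on a toy residual A[u](x) = u''(x) - sin(x).
torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Linear(1, 20), torch.nn.Tanh(), torch.nn.Linear(20, 1))
disc = torch.nn.Sequential(torch.nn.Linear(1, 20), torch.nn.Tanh(), torch.nn.Linear(20, 1))
opt_model = torch.optim.Adam(model.parameters(), lr=1e-3)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-3)

x = torch.rand(128, 1, requires_grad=True)                    # collocation points
u = model(x)
du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]    # u'(x)
d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]  # u''(x)
residual = d2u - torch.sin(x)                                 # A[u](x)

# L_problem = mean || D(x) * A[u](x) ||^2, minimised w.r.t. the model
# parameters and maximised w.r.t. the discriminator parameters.
loss = torch.mean((disc(x) * residual) ** 2)

opt_model.zero_grad()
loss.backward(retain_graph=True)   # gradient-descent step for the model
opt_model.step()

opt_disc.zero_grad()
(-loss).backward()                 # gradient-ascent step for the discriminator
opt_disc.step()
```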
Warning
This solver does not currently support passing extra_feature.

- Parameters:
problem (AbstractProblem) – The formulation of the problem.
model (torch.nn.Module) – The neural network model to use for the model network.
discriminator (torch.nn.Module) – The neural network model to use for the discriminator. If None, the discriminator network will have the same architecture as the model network.
loss (torch.nn.Module) – The loss function used as minimizer, default torch.nn.MSELoss.
optimizer_model (torch.optim.Optimizer) – The neural network optimizer to use for the model network, default is torch.optim.Adam.
optimizer_model_kwargs (dict) – Optimizer constructor keyword args. for the model.
optimizer_discriminator (torch.optim.Optimizer) – The neural network optimizer to use for the discriminator network, default is torch.optim.Adam.
optimizer_discriminator_kwargs (dict) – Optimizer constructor keyword args. for the discriminator.
scheduler_model (torch.optim.LRScheduler) – Learning rate scheduler for the model.
scheduler_model_kwargs (dict) – LR scheduler constructor keyword args.
scheduler_discriminator (torch.optim.LRScheduler) – Learning rate scheduler for the discriminator.
scheduler_discriminator_kwargs (dict) – LR scheduler constructor keyword args.
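As a usage sketch (hypothetical: the import paths, the Trainer interface, and the problem object are assumptions that may differ between PINA releases), the solver can be constructed with plain torch.nn.Module networks and per-network optimizer settings:

```python
import torch
from pina.solvers import CompetitivePINN   # assumed import path
from pina.trainer import Trainer           # assumed Lightning-based trainer

# `problem` is a placeholder for an already defined and discretised
# AbstractProblem (e.g. a 2D problem with sampled collocation points).
model = torch.nn.Sequential(
    torch.nn.Linear(2, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 1),
)
discriminator = torch.nn.Sequential(
    torch.nn.Linear(2, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 1),
)

solver = CompetitivePINN(
    problem=problem,
    model=model,
    discriminator=discriminator,            # None -> same architecture as `model`
    loss=torch.nn.MSELoss(),
    optimizer_model_kwargs={"lr": 1e-3},
    optimizer_discriminator_kwargs={"lr": 1e-3},
)

trainer = Trainer(solver=solver, max_epochs=1000)
trainer.train()
```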
- forward(x)[source]#
Forward pass implementation for the PINN solver. It returns the function evaluation \(\mathbf{u}(\mathbf{x})\) at the control points \(\mathbf{x}\).
- Parameters:
x (LabelTensor) – Input tensor for the PINN solver. It expects a tensor \(N \times D\), where \(N\) is the number of points in the mesh and \(D\) is the dimension of the problem.
- Returns:
PINN solution evaluated at control points.
- Return type:
LabelTensor
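Continuing the hypothetical sketch above, the trained solution can be evaluated by calling the solver on a LabelTensor whose labels match the problem's input variables (here assumed to be x and y):

```python
import torch
from pina import LabelTensor   # assumed import path

# Hypothetical evaluation points; labels must match the problem's inputs.
pts = LabelTensor(torch.rand(10, 2), ["x", "y"])
u = solver.forward(pts)        # equivalent to solver(pts); shape (10, m)
```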
- loss_phys(samples, equation)[source]#
Computes the physics loss for the Competitive PINN solver based on given samples and equation.
- Parameters:
samples (LabelTensor) – The samples to evaluate the physics loss.
equation (EquationInterface) – The governing equation representing the physics.
- Returns:
The physics loss calculated based on given samples and equation.
- Return type:
LabelTensor
- loss_data(input_tensor, output_tensor)[source]#
The data loss for the PINN solver. It computes the loss between the network output and the true solution.
- Parameters:
input_tensor (LabelTensor) – The input to the neural networks.
output_tensor (LabelTensor) – The true solution against which the network output is compared.
- Returns:
The computed data loss.
- Return type:
LabelTensor
- on_train_batch_end(outputs, batch, batch_idx)[source]#
This method is called at the end of each training batch, and overrides the PyTorch Lightning implementation to log the checkpoints.
- Parameters:
outputs (torch.Tensor) – The output from the model for the current batch.
batch (tuple) – The current batch of data.
batch_idx (int) – The index of the current batch.
- Returns:
Whatever is returned by the parent method on_train_batch_end.
- Return type:
Any
- property neural_net#
Returns the neural network model.
- Returns:
The neural network model.
- Return type:
torch.nn.Module
- property discriminator#
Returns the discriminator model (if applicable).
- Returns:
The discriminator model.
- Return type:
torch.nn.Module
- property optimizer_model#
Returns the optimizer associated with the neural network model.
- Returns:
The optimizer for the neural network model.
- Return type:
torch.optim.Optimizer
- property optimizer_discriminator#
Returns the optimizer associated with the discriminator (if applicable).
- Returns:
The optimizer for the discriminator.
- Return type:
torch.optim.Optimizer
- property scheduler_model#
Returns the scheduler associated with the neural network model.
- Returns:
The scheduler for the neural network model.
- Return type:
torch.optim.lr_scheduler._LRScheduler
- property scheduler_discriminator#
Returns the scheduler associated with the discriminator (if applicable).
- Returns:
The scheduler for the discriminator.
- Return type:
torch.optim.lr_scheduler._LRScheduler