ResidualFeedForward#
- class ResidualFeedForward(input_dimensions, output_dimensions, inner_size=20, n_layers=2, func=<class 'torch.nn.modules.activation.Tanh'>, bias=True, transformer_nets=None)[source]#
Bases:
Module
The PINA implementation of a feedforward network with skip connections and transformer networks, as presented in Understanding and Mitigating Gradient Flow Pathologies in Physics-Informed Neural Networks.
See also
Original reference: Wang, Sifan, Yujun Teng, and Paris Perdikaris. Understanding and mitigating gradient flow pathologies in physics-informed neural networks. SIAM Journal on Scientific Computing 43.5 (2021): A3055-A3081. DOI: 10.1137/20M1318043
- Parameters:
input_dimensions (int) – The number of input components of the model. Expected tensor shape of the form \((*, d)\), where * means any number of dimensions including none, and \(d\) the input_dimensions.
output_dimensions (int) – The number of output components of the model. Expected tensor shape of the form \((*, d)\), where * means any number of dimensions including none, and \(d\) the output_dimensions.
inner_size (int) – The number of neurons in the hidden layer(s). Default is 20.
n_layers (int) – The number of hidden layers. Default is 2.
func (torch.nn.Module) – The activation function to use. If a single torch.nn.Module is passed, it is used as the activation function after every layer except the last one. If a list of modules is passed, they are used as the activation functions at each layer, in order.
bias (bool) – If True, the MLP layers include a bias term.
transformer_nets (list | tuple) – A list or tuple containing the two torch.nn.Module objects that act as transformer networks. Their input dimension must be the same as input_dimensions, and their output dimension must be the same as inner_size.
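To make the role of the two transformer networks concrete, here is a minimal sketch of the architecture from Wang et al. written in plain PyTorch. This is an illustrative reimplementation, not PINA's actual code: the class name ResidualFFSketch, the fixed Tanh activation, and the internal structure are assumptions based on the description above. In the sketch, each hidden layer output gates a mix of the two transformer-network embeddings, which is the skip-connection scheme the class implements.

```python
import torch


class ResidualFFSketch(torch.nn.Module):
    """Illustrative sketch of the residual feedforward architecture
    of Wang et al. (2021); not PINA's actual implementation."""

    def __init__(self, input_dimensions, output_dimensions,
                 inner_size=20, n_layers=2):
        super().__init__()
        # Two "transformer" networks mapping the input to the hidden size,
        # matching the dimension constraints stated for transformer_nets.
        self.u_net = torch.nn.Sequential(
            torch.nn.Linear(input_dimensions, inner_size), torch.nn.Tanh())
        self.v_net = torch.nn.Sequential(
            torch.nn.Linear(input_dimensions, inner_size), torch.nn.Tanh())
        # Hidden layers: the first maps input -> inner_size, the rest
        # inner_size -> inner_size.
        self.layers = torch.nn.ModuleList(
            torch.nn.Linear(input_dimensions if i == 0 else inner_size,
                            inner_size)
            for i in range(n_layers))
        self.last = torch.nn.Linear(inner_size, output_dimensions)

    def forward(self, x):
        u, v = self.u_net(x), self.v_net(x)
        h = x
        for layer in self.layers:
            z = torch.tanh(layer(h))
            # Skip connection: gate between the two transformer embeddings.
            h = (1 - z) * u + z * v
        return self.last(h)
```

A tensor of shape \((*, \text{input\_dimensions})\) passed through the sketch yields a tensor of shape \((*, \text{output\_dimensions})\), consistent with the shapes documented above.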
- forward(x)[source]#
Defines the computation performed at every call.
- Parameters:
x (torch.Tensor) – The input tensor on which the forward pass is computed.
- Returns:
the output computed by the model.
- Return type:
torch.Tensor