Averaging layers#
- class AVNOBlock(hidden_size=100, func=torch.nn.GELU)[source]
Bases: Module
The PINA implementation of the inner layer of the Averaging Neural Operator.
The operator layer performs an affine transformation in which the convolution is approximated by a local average. Given the input function \(v(x)\in\mathbb{R}^{\rm{emb}}\), the layer computes the operator update \(K(v)\) as:
\[K(v) = \sigma\left(Wv(x) + b + \frac{1}{|\mathcal{A}|}\int_{\mathcal{A}} v(y)\,dy\right)\]
where:
- \(\mathbb{R}^{\rm{emb}}\) is the embedding (hidden) size, corresponding to the hidden_size object;
- \(\sigma\) is a non-linear activation, corresponding to the func object;
- \(W\in\mathbb{R}^{\rm{emb}\times\rm{emb}}\) is a tunable matrix;
- \(b\in\mathbb{R}^{\rm{emb}}\) is a tunable bias.
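To make the update rule concrete, here is a minimal re-implementation sketch in plain PyTorch. This is an illustration of the formula above, not PINA's actual source code: the class name `AVNOBlockSketch` is hypothetical, and the integral is approximated by a mean over the mesh points.

```python
import torch
import torch.nn as nn


class AVNOBlockSketch(nn.Module):
    """Sketch of one averaging-operator layer (illustrative, not PINA's code)."""

    def __init__(self, hidden_size=100, func=nn.GELU):
        super().__init__()
        # Affine part: W v(x) + b
        self.linear = nn.Linear(hidden_size, hidden_size)
        # Non-linear activation sigma
        self.func = func()

    def forward(self, x):
        # x has shape (B, N, hidden_size); the mean over the N mesh points
        # approximates (1/|A|) * integral of v(y) dy
        avg = x.mean(dim=1, keepdim=True)  # shape (B, 1, hidden_size)
        return self.func(self.linear(x) + avg)
```

Note that the average is shared across all mesh points, so the layer mixes global information into every local update at O(N) cost, unlike a full kernel integral.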
See also
Original reference: Lanthaler, S., Li, Z., Kovachki, N. B., Stuart, A. M. (2023). The Nonlocal Neural Operator: Universal Approximation. arXiv preprint arXiv:2304.13221.
- Parameters:
hidden_size (int) – Size of the hidden layer, defaults to 100.
func – The activation function, defaults to torch.nn.GELU.
- forward(x)[source]
Forward pass of the layer: it sums a local average of the field with an affine transformation of the field.
- Parameters:
x (torch.Tensor) – The input tensor for performing the computation. It expects a tensor of shape \(B \times N \times D\), where \(B\) is the batch size, \(N\) the number of points in the mesh, and \(D\) the dimension of the problem. In particular, \(D\) is the codomain dimension of the function \(v\); for example, a scalar function has \(D=1\), while a 4-dimensional vector function has \(D=4\).
- Returns:
The output tensor obtained from the Averaging Neural Operator block.
- Return type:
torch.Tensor
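As an illustration of the shape convention above (the sizes chosen here are arbitrary), the averaging term reduces to a mean over the mesh dimension, which then broadcasts back onto every point:

```python
import torch

# Hypothetical sampled field: B=2 batches, N=4 mesh points, D=3 components
v = torch.randn(2, 4, 3)

# The term (1/|A|) * integral of v(y) dy is approximated by the mean
# over the mesh dimension N
avg = v.mean(dim=1, keepdim=True)  # shape (2, 1, 3)

# Broadcasting adds the same average vector to every mesh point
out = v + avg  # shape (2, 4, 3)
```

Because `avg` keeps a singleton mesh dimension, the addition broadcasts without an explicit `expand`, which is the idiomatic PyTorch way to share a per-batch statistic across points.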