Low Rank layer

class LowRankBlock(input_dimensions, embedding_dimenion, rank, inner_size=20, n_layers=2, func=<class 'torch.nn.modules.activation.Tanh'>, bias=True)[source]

Bases: Module

The PINA implementation of the inner layer of the Low Rank Neural Operator.

The operator layer performs an affine transformation in which the integral kernel is approximated with a low-rank decomposition. Given the input function \(v(x)\in\mathbb{R}^{\rm{emb}}\), the layer computes the operator update \(K(v)\) (sketched in code after the list of symbols below) as:

\[K(v) = \sigma\left(Wv(x) + b + \sum_{i=1}^r \langle \psi^{(i)} , v(x) \rangle \phi^{(i)} \right)\]

where:

  • \(\mathbb{R}^{\rm{emb}}\) is the embedding (hidden) size corresponding to the embedding_dimenion object

  • \(\sigma\) is a non-linear activation, corresponding to the func object

  • \(W\in\mathbb{R}^{\rm{emb}\times\rm{emb}}\) is a tunable matrix.

  • \(b\in\mathbb{R}^{\rm{emb}}\) is a tunable bias.

  • \(\psi^{(i)}\in\mathbb{R}^{\rm{emb}}\) and \(\phi^{(i)}\in\mathbb{R}^{\rm{emb}}\) are the \(r\) low-rank basis functions, parametrized by a feed-forward network evaluated at the coordinates.
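
To make the update above concrete, the following is a minimal sketch of the computation in PyTorch, assuming the basis functions \(\psi\) and \(\phi\) have already been evaluated at the coordinates. The function name and tensor layout are illustrative, not the PINA internals:

    import torch

    def low_rank_update(v, psi, phi, W, b, sigma=torch.tanh):
        """Sketch of K(v); v: (B, N, emb), psi/phi: (B, N, r, emb)."""
        # Affine part W v(x) + b, with W: (emb, emb) and b: (emb,)
        affine = v @ W.T + b
        # Coefficients <psi^(i), v(x)>: contract the embedding axis -> (B, N, r)
        coeff = torch.einsum("bnre,bne->bnr", psi, v)
        # Expansion sum_i <psi^(i), v> phi^(i)(x) -> (B, N, emb)
        low_rank = torch.einsum("bnr,bnre->bne", coeff, phi)
        return sigma(affine + low_rank)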

See also

Original reference: Kovachki, N., Li, Z., Liu, B., Azizzadenesheli, K., Bhattacharya, K., Stuart, A., & Anandkumar, A. (2023). Neural operator: Learning maps between function spaces with applications to PDEs. Journal of Machine Learning Research, 24(89), 1-97.

Parameters:
  • input_dimensions (int) – The number of input components of the model. Expected tensor shape of the form \((*, d)\), where * means any number of dimensions including none, and \(d\) the input_dimensions.

  • embedding_dimenion (int) – Size of the embedding dimension of the field.

  • rank (int) – The rank \(r\) of the low-rank basis approximation. The basis function network produces tensors of the form \((*, 2r)\), where * means any number of dimensions including none, and \(2r\) accounts for the rank of both basis function sets \(\psi\) and \(\phi\).

  • inner_size (int) – Number of neurons in the hidden layer(s) for the basis function network. Default is 20.

  • n_layers (int) – Number of hidden layers for the basis function network. Default is 2.

  • func – The activation function to use for the basis function network. If a single torch.nn.Module is passed, it is used as the activation function after every layer except the last one. If a list of Modules is passed, they are used as the activation functions for each layer, in order.

  • bias (bool) – If True, a bias term is included in the MLP of the basis function network.
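
A hedged instantiation with small, arbitrary sizes may help to fix ideas; the import path follows this documentation and may differ between PINA releases:

    import torch
    from pina.model.layers import LowRankBlock

    block = LowRankBlock(
        input_dimensions=2,      # coordinates live in a 2D domain
        embedding_dimenion=24,   # embedding (hidden) size of the field
        rank=4,                  # number of low-rank basis pairs
        inner_size=20,           # neurons per hidden layer of the basis network
        n_layers=2,              # hidden layers of the basis network
        func=torch.nn.Tanh,
        bias=True,
    )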

forward(x, coords)[source]

Forward pass of the layer. It performs an affine transformation of the field and a low-rank approximation by taking the dot product of the basis \(\psi^{(i)}\) with the field vector \(v\), and uses the resulting coefficients to expand \(\phi^{(i)}\) evaluated at the spatial input \(x\).

Parameters:
  • x (torch.Tensor) – The input tensor for performing the computation. It expects a tensor \(B \times N \times D\), where \(B\) is the batch_size, \(N\) the number of points in the mesh, and \(D\) the dimension of the problem. In particular, \(D\) is the codomain of the function \(v\). For example, a scalar function has \(D=1\), while a 4-dimensional vector function has \(D=4\).

  • coords (torch.Tensor) – The coordinates at which the field is evaluated for performing the computation. It expects a tensor \(B \times N \times d\), where \(B\) is the batch_size, \(N\) the number of points in the mesh, and \(d\) the dimension of the domain.

Returns:

The output tensor obtained from the Low Rank Neural Operator Block.

Return type:

torch.Tensor
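
As a usage sketch, the expected shapes can be checked as below; the sizes are hypothetical, and the field is assumed to be already lifted to the embedding dimension so that the affine map \(W\) applies:

    import torch
    from pina.model.layers import LowRankBlock  # path as in this documentation

    block = LowRankBlock(input_dimensions=2, embedding_dimenion=24, rank=4)
    x = torch.rand(8, 100, 24)      # (B, N, D): field values v on the mesh
    coords = torch.rand(8, 100, 2)  # (B, N, d): spatial coordinates
    out = block(x, coords)
    print(out.shape)                # expected: torch.Size([8, 100, 24])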

property rank

The basis rank.