Continuous Convolution Interface#

class BaseContinuousConv(input_numb_field, output_numb_field, filter_dim, stride, model=None, optimize=False, no_overlap=False)[source]

Bases: Module

Base Class for Continuous Convolution.

The class expects the input to be in the form: \([B \times N_{in} \times N \times D]\), where \(B\) is the batch_size, \(N_{in}\) is the number of input fields, \(N\) the number of points in the mesh, \(D\) the dimension of the problem. In particular:

  • \(D\) is the number of spatial variables + 1. The last column must contain the field value.

  • \(N_{in}\) represents the number of function components. For instance, a vectorial function \(f = [f_1, f_2]\) has \(N_{in}=2\).

Note

A 2-component vector-valued function defined on 3 spatial variables, evaluated on a mesh of 100 points with a batch size of 8, is represented as a tensor of shape [8, 2, 100, 4], where the slices [:, 0, :, -1] and [:, 1, :, -1] hold the values of the first and second components of the function, respectively.

The algorithm returns a tensor of shape: \([B \times N_{out} \times N \times D]\), where \(B\) is the batch_size, \(N_{out}\) is the number of output fields, \(N\) the number of points in the mesh, \(D\) the dimension of the problem.
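The tensor layout above can be sketched as follows (shapes only; the values here are random placeholders):

```python
import torch

# Illustration of the expected layout: batch of 8, a 2-component
# vector-valued function on 3 spatial variables, sampled on a mesh of
# 100 points. The last column of the D axis holds the field value.
B, N_in, N, spatial = 8, 2, 100, 3
D = spatial + 1                       # spatial variables + 1 field column

data = torch.rand(B, N_in, N, D)      # shape [8, 2, 100, 4]
coords = data[..., :-1]               # spatial coordinates, shape [8, 2, 100, 3]
first_component = data[:, 0, :, -1]   # values of f_1, shape [8, 100]
second_component = data[:, 1, :, -1]  # values of f_2, shape [8, 100]
```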

Initialization of the BaseContinuousConv class.

Parameters:
  • input_numb_field (int) – The number of input fields.

  • output_numb_field (int) – The number of output fields.

  • filter_dim (list[int] | tuple[int]) – The shape of the filter.

  • stride (dict) – The stride of the filter.

  • model (torch.nn.Module) – The neural network for inner parametrization. Default is None.

  • optimize (bool) – If True, optimization is performed on the continuous filter. It should be used only when the training points are fixed. If the model is in evaluation mode, the flag is reset to False. Default is False.

  • no_overlap (bool) – If True, optimization is performed on the transposed continuous filter. It should be used only when the filter positions do not overlap for different strides. Default is False.

class DefaultKernel(input_dim, output_dim)[source]

Bases: Module

The default kernel.

Initialization of the DefaultKernel class.

Parameters:
  • input_dim (int) – The input dimension.

  • output_dim (int) – The output dimension.

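To illustrate the role of such a kernel, here is a minimal stand-in: a small feed-forward network that maps a point of the filter domain to a filter weight. This is a sketch of the general idea only, not the actual DefaultKernel architecture (the layer sizes and activation are illustrative assumptions).

```python
import torch

# Illustrative stand-in for a continuous-filter kernel (NOT the actual
# DefaultKernel architecture): a small MLP mapping a point inside the
# filter domain (input_dim coordinates) to filter weights (output_dim).
class ToyKernel(torch.nn.Module):
    def __init__(self, input_dim, output_dim):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(input_dim, 20),  # hidden width is arbitrary
            torch.nn.Tanh(),
            torch.nn.Linear(20, output_dim),
        )

    def forward(self, x):
        return self.net(x)

# Evaluate the kernel at 50 random points of a 2D filter domain.
kernel = ToyKernel(input_dim=2, output_dim=1)
weights = kernel(torch.rand(50, 2))  # shape [50, 1]
```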
forward(x)[source]

Forward pass.

Parameters:

x (torch.Tensor) – The input data.

Returns:

The output data.

Return type:

torch.Tensor

property net

The neural network for inner parametrization.

Returns:

The neural network.

Return type:

torch.nn.Module

property stride

The stride of the filter.

Returns:

The stride of the filter.

Return type:

dict

property filter_dim

The shape of the filter.

Returns:

The shape of the filter.

Return type:

torch.Tensor

property input_numb_field

The number of input fields.

Returns:

The number of input fields.

Return type:

int

property output_numb_field

The number of output fields.

Returns:

The number of output fields.

Return type:

int

abstract forward(X)[source]

Forward pass.

Parameters:

X (torch.Tensor) – The input data.

abstract transpose_overlap(X)[source]

Transpose the convolution with overlap.

Parameters:

X (torch.Tensor) – The input data.

abstract transpose_no_overlap(X)[source]

Transpose the convolution without overlap.

Parameters:

X (torch.Tensor) – The input data.