Continuous Convolution Block
- class ContinuousConvBlock(input_numb_field, output_numb_field, filter_dim, stride, model=None, optimize=False, no_overlap=False)[source]
Bases: BaseContinuousConv
Continuous Convolutional block.
The class expects the input to be in the form: \([B \times N_{in} \times N \times D]\), where \(B\) is the batch_size, \(N_{in}\) is the number of input fields, \(N\) the number of points in the mesh, \(D\) the dimension of the problem. In particular:
\(D\) is the number of spatial variables + 1. The last column must contain the field value. For example, for 2D problems \(D=3\) and each mesh point is stored as
[first coordinate, second coordinate, field value]
\(N_{in}\) is the number of components of the input function. For example, a vector-valued function \(f = [f_1, f_2]\) has \(N_{in}=2\).
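The expected input layout can be sketched as follows, a minimal example assuming hypothetical sizes (batch \(B=4\), one scalar field so \(N_{in}=1\), \(N=100\) mesh points, and \(D=3\) for a 2D problem):

```python
import torch

# Hypothetical sizes: batch B=4, one scalar field (N_in=1),
# N=100 mesh points, D=3 (two spatial coordinates + the field value).
B, N_in, N = 4, 1, 100

coords = torch.rand(B, N_in, N, 2)                   # spatial coordinates
field = torch.sin(coords.sum(dim=-1, keepdim=True))  # illustrative field values

# Last column holds the field value, as required by the block.
data = torch.cat([coords, field], dim=-1)            # shape [B, N_in, N, D]
print(data.shape)  # torch.Size([4, 1, 100, 3])
```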
See also
Original reference: Coscia, D., Meneghetti, L., Demo, N. et al. A continuous convolutional trainable filter for modelling unstructured data. Comput Mech 72, 253-265 (2023). DOI https://doi.org/10.1007/s00466-023-02291-1
Initialization of the ContinuousConvBlock class.
- Parameters:
input_numb_field (int) – The number of input fields.
output_numb_field (int) – The number of output fields.
filter_dim (list[int] | tuple[int]) – The shape of the filter.
stride (dict) – The stride of the filter.
model (torch.nn.Module) – The neural network for inner parametrization. Default is None.
optimize (bool) – If True, optimization is performed on the continuous filter. It should be used only when the training points are fixed. If model is in eval mode, it is reset to False. Default is False.
no_overlap (bool) – If True, optimization is performed on the transposed continuous filter. It should be used only when the filter positions do not overlap for different strides. Default is False.
Note
If optimize=True, the filter can be used either in forward or in transpose mode, not both.
- Example:
>>> class MLP(torch.nn.Module):
...     def __init__(self) -> None:
...         super().__init__()
...         self.model = torch.nn.Sequential(
...             torch.nn.Linear(2, 8),
...             torch.nn.ReLU(),
...             torch.nn.Linear(8, 8),
...             torch.nn.ReLU(),
...             torch.nn.Linear(8, 1)
...         )
...     def forward(self, x):
...         return self.model(x)
>>> dim = [3, 3]
>>> stride = {
...     "domain": [10, 10],
...     "start": [0, 0],
...     "jumps": [3, 3],
...     "direction": [1, 1.]
... }
>>> conv = ContinuousConv2D(1, 2, dim, stride, MLP)
>>> conv
ContinuousConv2D(
  (_net): ModuleList(
    (0): MLP(
      (model): Sequential(
        (0): Linear(in_features=2, out_features=8, bias=True)
        (1): ReLU()
        (2): Linear(in_features=8, out_features=8, bias=True)
        (3): ReLU()
        (4): Linear(in_features=8, out_features=1, bias=True)
      )
    )
    (1): MLP(
      (model): Sequential(
        (0): Linear(in_features=2, out_features=8, bias=True)
        (1): ReLU()
        (2): Linear(in_features=8, out_features=8, bias=True)
        (3): ReLU()
        (4): Linear(in_features=8, out_features=1, bias=True)
      )
    )
  )
)
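One way to read the stride dictionary in the example above is as a grid of filter placements over the domain. The sketch below is an assumption for illustration only (the placement rule shown — start at "start", step by "jumps" across "domain" — is not taken from the library's implementation):

```python
# Hypothetical sketch: enumerate the filter placements implied by a
# stride dictionary like the one in the example above. The placement
# rule used here is an assumption for illustration, not PINA's code.
stride = {
    "domain": [10, 10],    # extent of the domain along each axis
    "start": [0, 0],       # first filter position
    "jumps": [3, 3],       # step between consecutive positions
    "direction": [1, 1.],  # movement direction along each axis
}

positions = [
    (x, y)
    for x in range(stride["start"][0], stride["domain"][0], stride["jumps"][0])
    for y in range(stride["start"][1], stride["domain"][1], stride["jumps"][1])
]
print(len(positions))  # 16 placements on a 4 x 4 grid
```

Under this reading, setting no_overlap=True would be appropriate when "jumps" is at least as large as the filter size along each axis, so consecutive placements never share mesh points.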
- forward(X)[source]
Forward pass.
- Parameters:
X (torch.Tensor) – The input tensor.
- Returns:
The output tensor.
- Return type:
torch.Tensor
- transpose_no_overlap(integrals, X)[source]
Transpose pass in the layer for no-overlapping filters.
- Parameters:
integrals (torch.Tensor) – The weights for the transpose convolution. Expected shape \([B, N_{in}, N]\).
X (torch.Tensor) – The input data. Expected shape \([B, N_{in}, M, D]\).
- Returns:
Feed forward transpose convolution. Expected shape: \([B, N_{out}, M, D]\).
- Return type:
torch.Tensor
Note
This function is automatically called when the .transpose() method is used and no_overlap=True.
- transpose_overlap(integrals, X)[source]
Transpose pass in the layer for overlapping filters.
- Parameters:
integrals (torch.Tensor) – The weights for the transpose convolution. Expected shape \([B, N_{in}, N]\).
X (torch.Tensor) – The input data. Expected shape \([B, N_{in}, M, D]\).
- Returns:
Feed forward transpose convolution. Expected shape: \([B, N_{out}, M, D]\).
- Return type:
torch.Tensor
Note
This function is automatically called when the .transpose() method is used and no_overlap=False.