AE

Module for FNN-Autoencoders.

class AE(layers_encoder, layers_decoder, function_encoder, function_decoder, stop_training, loss=None, optimizer=torch.optim.Adam, lr=0.001, l2_regularization=0, frequency_print=10, last_identity=True)[source]

Bases: Reduction, ANN

Feed-Forward AutoEncoder class (AE)

Parameters:
  • layers_encoder (list) – ordered list with the number of neurons of each hidden layer for the encoder

  • layers_decoder (list) – ordered list with the number of neurons of each hidden layer for the decoder

  • function_encoder (torch.nn.modules.activation) – activation function at each layer of the encoder, except for the output layer, at which Identity is used by default. A single activation function can be passed, or a list of them with length equal to the number of hidden layers.

  • function_decoder (torch.nn.modules.activation) – activation function at each layer of the decoder, except for the output layer, at which Identity is used by default. A single activation function can be passed, or a list of them with length equal to the number of hidden layers.

  • stop_training (list) – list with the maximum number of training iterations (int) and/or the desired tolerance on the training loss (float).

  • loss (torch.nn.Module) – loss definition (Mean Squared if not given).

  • optimizer (torch.optim) – the torch class implementing optimizer. Default value is Adam optimizer.

  • lr (float) – the learning rate. Default is 0.001.

  • l2_regularization (float) – the L2 regularization coefficient, it corresponds to the “weight_decay”. Default is 0 (no regularization).

  • frequency_print (int) – the number of epochs between printouts of the training loss during training of the network. Default is 10.

  • last_identity (boolean) – flag specifying whether the last activation function is the identity. If the user provides the entire list of activation functions, this attribute is ignored. Default is True.

Example:
>>> from ezyrb import AE
>>> import torch
>>> f = torch.nn.Softplus
>>> low_dim = 5
>>> optim = torch.optim.Adam
>>> ae = AE([400, low_dim], [low_dim, 400], f(), f(), 2000)
>>> # or ...
>>> ae = AE([400, 10, 10, low_dim], [low_dim, 400], f(), f(), 1e-5,
...          optimizer=optim)
>>> ae.fit(snapshots)
>>> reduced_snapshots = ae.reduce(snapshots)
>>> expanded_snapshots = ae.expand(reduced_snapshots)
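
As noted in the stop_training description above, the stopping criterion can also combine a maximum number of iterations with a loss tolerance by passing both in a list; a minimal sketch in the same spirit as the example above (layer sizes are illustrative):
>>> ae = AE([400, low_dim], [low_dim, 400], f(), f(), [2000, 1e-5])
>>> ae.fit(snapshots)
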
_build_model(values)[source]

Build the torch model.

Considering the number of neurons per layer (self.layers), a feed-forward NN is defined with:

  • activation function from layer i >= 0 to layer i+1: self.function[i];

  • activation function at the output layer: Identity (by default).

Parameters:

values (numpy.ndarray) – the set of values one wants to reduce.
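
For illustration only, a hypothetical sketch of how such a feed-forward stack could be assembled with torch.nn; the names build_stack, sizes and activations are not part of the library, and this is not the actual implementation:

import torch.nn as nn

def build_stack(sizes, activations):
    """Assemble Linear layers with the given activations; Identity on the output layer."""
    modules = []
    for i in range(len(sizes) - 1):
        modules.append(nn.Linear(sizes[i], sizes[i + 1]))
        # self.function[i] between layer i and i+1, Identity after the last layer
        modules.append(activations[i] if i < len(activations) else nn.Identity())
    return nn.Sequential(*modules)

encoder = build_stack([400, 64, 5], [nn.Softplus()])  # 400 -> 64 -> 5
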

expand(g)[source]

Projects a reduced-order solution back to the full-order space.

Parameters:

g (numpy.ndarray) – the latent variables.

Note

Same as inverse_transform. Kept for backward compatibility.

fit(values)[source]

Build the AE given ‘values’ and perform training.

Training procedure information:
  • optimizer: self.optimizer (Adam's method with default parameters if not specified; see, e.g., https://pytorch.org/docs/stable/optim.html);

  • loss: self.loss (if none, the Mean Squared Loss is set by default).

  • stopping criterion: if self.stop_training is a list, training stops once the requested tolerance on the training loss is met, within the prescribed budget of training iterations; if self.stop_training is an int or a float, only the maximum number of iterations or the tolerance on the training loss is used as the stopping rule, respectively (see the sketch below).

Parameters:

values (numpy.ndarray) – the (training) values in the points.
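
A brief usage sketch, assuming the snapshots are stored by column as in reduce below (the data here is synthetic and purely illustrative):
>>> import numpy as np
>>> import torch
>>> snapshots = np.random.rand(400, 100)  # hypothetical: 400 dofs, 100 snapshots
>>> ae = AE([400, 5], [5, 400], torch.nn.Softplus(), torch.nn.Softplus(), [3000, 1e-6])
>>> ae.fit(snapshots)  # stops after 3000 iterations or once the loss drops below 1e-6
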

inverse_transform(g)[source]

Projects a reduced-order solution back to the full-order space.

Parameters:

g (numpy.ndarray) – the latent variables.

reduce(X)[source]

Reduces the given snapshots.

Parameters:

X (numpy.ndarray) – the input snapshots matrix (stored by column).

Note

Same as transform. Kept for backward compatibility.

transform(X)[source]

Reduces the given snapshots.

Parameters:

X (numpy.ndarray) – the input snapshots matrix (stored by column).
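
A short usage sketch of the transform / inverse_transform pair on a fitted model (equivalent to reduce and expand above; ae and snapshots as in the class example):
>>> latent = ae.transform(snapshots)               # reduced (latent) coordinates
>>> reconstruction = ae.inverse_transform(latent)  # back to the full-order space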