ezyrb.reduction.ae.AE
- class AE(layers_encoder, layers_decoder, function_encoder, function_decoder, stop_training, loss=None, optimizer=<class 'torch.optim.adam.Adam'>, lr=0.001, l2_regularization=0, frequency_print=10, last_identity=True)
Feed-Forward AutoEncoder class (AE)
- Parameters:
layers_encoder (list) – ordered list with the number of neurons of each hidden layer for the encoder.
layers_decoder (list) – ordered list with the number of neurons of each hidden layer for the decoder.
function_encoder (torch.nn.modules.activation) – activation function at each hidden layer of the encoder; for the output layer, the Identity function is used by default. Either a single activation function or a list of them, with length equal to the number of hidden layers, can be passed.
function_decoder (torch.nn.modules.activation) – activation function at each hidden layer of the decoder; for the output layer, the Identity function is used by default. Either a single activation function or a list of them, with length equal to the number of hidden layers, can be passed.
stop_training (list) – list with the maximum number of training iterations (int) and/or the desired tolerance on the training loss (float); see the sketch after this parameter list.
loss (torch.nn.Module) – loss definition (Mean Squared Error if not given).
optimizer (torch.optim) – the torch class implementing the optimizer. Default is the Adam optimizer.
lr (float) – the learning rate. Default is 0.001.
l2_regularization (float) – the L2 regularization coefficient; it corresponds to the optimizer's "weight_decay" argument. Default is 0 (no regularization).
frequency_print (int) – the number of epochs between loss printouts during training. Default is 10.
last_identity (boolean) – flag specifying whether the last activation function is the identity function. If the user provides the entire list of activation functions, this attribute is ignored. Default value is True.
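A minimal sketch of how these arguments combine, in the doctest style of the example below. The layer sizes and the Tanh activation are arbitrary illustrative choices, not library defaults; the keyword form follows the signature above:
>>> import torch
>>> from ezyrb import AE
>>> act = torch.nn.Tanh()  # a single activation, reused at every hidden layer
>>> # Per the stop_training description: an int caps the number of epochs, a
>>> # float sets a loss tolerance; a list combines both criteria (training
>>> # presumably stops at whichever is met first).
>>> ae = AE([200, 20, 5], [5, 20, 200], act, act,
...         stop_training=[3000, 1e-6],
...         lr=1e-3,
...         l2_regularization=1e-5,  # forwarded to the optimizer as weight_decay
...         frequency_print=100)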
- Example:
>>> from ezyrb import AE
>>> import torch
>>> f = torch.nn.Softplus
>>> low_dim = 5
>>> optim = torch.optim.Adam
>>> ae = AE([400, low_dim], [low_dim, 400], f(), f(), 2000)
>>> # or ...
>>> ae = AE([400, 10, 10, low_dim], [low_dim, 400], f(), f(), 1e-5,
...         optimizer=optim)
>>> ae.fit(snapshots)
>>> reduced_snapshots = ae.reduce(snapshots)
>>> expanded_snapshots = ae.expand(reduced_snapshots)
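The example above assumes a snapshots array is already in scope. Below is a self-contained variant with random data standing in for real snapshots; the by-column storage with 400 degrees of freedom per snapshot is an assumption, based on the 400-neuron first encoder layer and on the convention of the other ezyrb reduction classes:
>>> import numpy as np
>>> import torch
>>> from ezyrb import AE
>>> # 100 random snapshots with 400 degrees of freedom each (assumed stored
>>> # by column), standing in for real simulation data.
>>> snapshots = np.random.rand(400, 100)
>>> f = torch.nn.Softplus
>>> ae = AE([400, 5], [5, 400], f(), f(), 500)  # stop after 500 epochs
>>> ae.fit(snapshots)
>>> reduced_snapshots = ae.reduce(snapshots)
>>> expanded_snapshots = ae.expand(reduced_snapshots)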
- __init__(layers_encoder, layers_decoder, function_encoder, function_decoder, stop_training, loss=None, optimizer=<class 'torch.optim.adam.Adam'>, lr=0.001, l2_regularization=0, frequency_print=10, last_identity=True)
Methods
- __init__(layers_encoder, layers_decoder, ...)
- expand(g): Projects a reduced solution back to the full order one.
- fit(values): Build the AE given 'values' and perform training.
- inverse_transform(g): Projects a reduced solution back to the full order one.
- predict(new_point): Evaluate the ANN at the given 'new_points'.
- reduce(X): Reduces the given snapshots.
- transform(X): Reduces the given snapshots.
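Continuing from the self-contained example above: since transform/inverse_transform carry the same summaries as reduce/expand, they are presumably interchangeable aliases. A sketch of a round-trip reconstruction check follows; the relative-error computation is illustrative, not part of the API:
>>> import numpy as np
>>> reduced = ae.transform(snapshots)
>>> reconstructed = ae.inverse_transform(reduced)
>>> # Relative reconstruction error of the encode/decode round trip.
>>> error = np.linalg.norm(snapshots - reconstructed) / np.linalg.norm(snapshots)
>>> print(f"relative reconstruction error: {error:.2e}")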