Nonlinear Level Set Learning

Module for Nonlinear Level-set Learning (NLL).

References
  • Guannan Zhang, Jiaxin Zhang, Jacob Hinkle. Learning nonlinear level sets for dimensionality reduction in function approximation. In Advances in Neural Information Processing Systems (NeurIPS 2019), pages 13199-13208. arXiv: https://arxiv.org/abs/1902.10652

NonlinearLevelSet.load_backward

Load the backward map for inference.

NonlinearLevelSet.load_forward

Load the forward map for inference.

NonlinearLevelSet.plot_loss

Plot the loss function decay.

NonlinearLevelSet.plot_sufficient_summary

Plot the sufficient summary.

NonlinearLevelSet.save_backward

Save the backward map for future inference.

NonlinearLevelSet.save_forward

Save the forward map for future inference.

NonlinearLevelSet.train

Train the whole RevNet.

ForwardNet.customized_loss

Custom loss function.

ForwardNet.forward

Maps original inputs to transformed inputs.

BackwardNet.forward

Maps transformed inputs to original inputs.

class NonlinearLevelSet(n_layers, active_dim, lr, epochs, dh=0.25, optimizer=torch.optim.Adam, scheduler=None)[source]

Bases: object

Nonlinear Level Set class, implemented as a reversible neural network (RevNet).

Parameters
  • n_layers (int) – number of layers of the RevNet.

  • active_dim (int) – number of active dimensions.

  • lr (float) – learning rate.

  • epochs (int) – number of epochs.

  • dh (float) – so-called time step of the RevNet. Default is 0.25.

  • optimizer (torch.optim.Optimizer) – optimizer used in the training of the RevNet. Its arguments are passed in the dict optim_args when train() is called.

  • scheduler (torch.optim.lr_scheduler._LRScheduler) – scheduler used in the training of the RevNet. Its arguments are passed in the dict scheduler_args when train() is called. Default is None.

Variables
  • backward (BackwardNet) – backward net of the RevNet. See BackwardNet class in nll module.

  • forward (ForwardNet) – forward net of the RevNet. See ForwardNet class in nll module.

  • loss_vec (list) – list containing the loss at every epoch.
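
A minimal construction sketch (the import path and all hyperparameter values below are illustrative assumptions, not recommendations):

    import torch

    from nll import NonlinearLevelSet  # adjust to the actual package path of the nll module

    # illustrative hyperparameters for a 2-parameter problem
    nls = NonlinearLevelSet(n_layers=10,
                            active_dim=1,
                            lr=1e-2,
                            epochs=1000,
                            dh=0.25,
                            optimizer=torch.optim.Adam)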

load_backward(infile, n_params)[source]

Load the backward map for inference.

Parameters
  • infile (str) – filename of the saved net to load. See notes below.

  • n_params (int) – number of input parameters.

Note

A common PyTorch convention is to save models using either a .pt or .pth file extension.

load_forward(infile, n_params)[source]

Load the forward map for inference.

Parameters
  • infile (str) – filename of the saved net to load. See notes below.

  • n_params (int) – number of input parameters.

Note

A common PyTorch convention is to save models using either a .pt or .pth file extension.

plot_loss(filename=None, figsize=(10, 8), title='')[source]

Plot the loss function decay.

Parameters
  • filename (str) – if specified, the plot is saved at filename.

  • figsize (tuple(int,int)) – tuple in inches defining the figure size. Defaults to (10, 8).

  • title (str) – title of the plot.

plot_sufficient_summary(inputs, outputs, filename=None, figsize=(10, 8), title='')[source]

Plot the sufficient summary.

Parameters
  • inputs (torch.Tensor) – DoubleTensor n_samples-by-n_params containing the points in the full input space.

  • outputs (numpy.ndarray) – array n_samples-by-1 containing the corresponding function evaluations.

  • filename (str) – if specified, the plot is saved at filename.

  • figsize (tuple(int,int)) – tuple in inches defining the figure size. Defaults to (10, 8).

  • title (str) – title of the plot.

Raises

ValueError – raised when the plot cannot be produced; see the warning below.

Warning

The plot is available only when active_dim is equal to 1.
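
Usage sketch, assuming nls is a trained NonlinearLevelSet with active_dim equal to 1 and inputs shaped as documented (toy values for illustration):

    # inputs: DoubleTensor (n_samples, n_params); outputs: numpy array (n_samples, 1)
    outputs = (inputs ** 2).sum(dim=1, keepdim=True).numpy()
    nls.plot_sufficient_summary(inputs, outputs,
                                filename='sufficient_summary.png',
                                title='Sufficient summary')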

save_backward(outfile)[source]

Save the backward map for future inference.

Parameters

outfile (str) – filename of the net to save. Use either .pt or .pth. See notes below.

Note

A common PyTorch convention is to save models using either a .pt or .pth file extension.

save_forward(outfile)[source]

Save the forward map for future inference.

Parameters

outfile (str) – filename of the net to save. Use either .pt or .pth. See notes below.

Note

A common PyTorch convention is to save models using either a .pt or .pth file extension.
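
A save/load round trip using the documented signatures (filenames and hyperparameters are illustrative):

    nls.save_forward('forward_map.pth')
    nls.save_backward('backward_map.pth')

    # later: rebuild an instance with the same architecture, then load for inference
    nls2 = NonlinearLevelSet(n_layers=10, active_dim=1, lr=1e-2, epochs=1000)
    nls2.load_forward('forward_map.pth', n_params=2)
    nls2.load_backward('backward_map.pth', n_params=2)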

train(inputs, gradients, outputs=None, interactive=False, target_loss=0.0001, optim_args=None, scheduler_args=None)[source]

Train the whole RevNet.

Parameters
  • inputs (torch.Tensor) – DoubleTensor n_samples-by-n_params containing the points in the full input space.

  • gradients (torch.Tensor) – DoubleTensor n_samples-by-n_params containing the gradient samples wrt the input parameters.

  • outputs (numpy.ndarray) – array n_samples-by-1 containing the corresponding function evaluations. Needed only for the interactive mode. Default is None.

  • interactive (bool) – if True, the loss decay plot and the sufficient summary plot are shown and updated every 10 epochs, and at the last epoch. Default is False.

  • target_loss (float) – loss threshold. Default is 0.0001.

  • optim_args (dict) – dictionary passed to the optimizer.

  • scheduler_args (dict) – dictionary passed to the scheduler.

Raises

ValueError – in interactive mode, outputs must be provided for the sufficient summary plot.
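
A training sketch on toy data (the quadratic model is an assumption for illustration): for f(x) = x_1^2 + x_2^2 the exact gradient is 2x, so the gradient samples can be generated analytically:

    # toy dataset: f(x) = x_1^2 + x_2^2, so grad f(x) = 2 x
    inputs = torch.rand(100, 2, dtype=torch.double)
    gradients = 2 * inputs
    outputs = (inputs ** 2).sum(dim=1, keepdim=True).numpy()

    nls.train(inputs=inputs,
              gradients=gradients,
              outputs=outputs,      # only required when interactive=True
              interactive=False,
              target_loss=1e-4)
    nls.plot_loss(title='Loss decay')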

class ForwardNet(n_params, n_layers, dh, active_dim)[source]

Bases: torch.nn.modules.module.Module

Forward net class. It is part of the RevNet.

Parameters
  • n_params (int) – number of input parameters.

  • n_layers (int) – number of layers of the RevNet.

  • dh (float) – so-called time step of the RevNet.

  • active_dim (int) – number of active dimensions.

Variables

omega (slice) – a slice object indicating the active dimensions to keep. For example, to keep the first two dimensions, omega = slice(2). It is set automatically from active_dim.
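
A small illustration of how such a slice selects the active coordinates:

    import torch

    omega = slice(2)                          # keep the first two dimensions
    z = torch.rand(5, 4, dtype=torch.double)  # five samples, four dimensions
    z_active = z[:, omega]                    # shape (5, 2)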

customized_loss(inputs, mapped_inputs, gradients)[source]

Custom loss function.

Parameters
  • inputs (torch.Tensor) – DoubleTensor n_samples-by-n_params containing the points in the full input space.

  • mapped_inputs (torch.Tensor) – DoubleTensor n_samples-by-n_params containing the mapped points, i.e. the result of applying the forward map.

  • gradients (torch.Tensor) – DoubleTensor n_samples-by-n_params containing the gradient samples wrt the input parameters.
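
The exact form of the loss is given in the reference above. Purely as a hypothetical sketch of the underlying idea, not this module's implementation: the objective penalizes the derivatives of f along the inactive transformed coordinates, which can be obtained with a single vector-Jacobian product through the backward map:

    import torch

    def inactive_derivative_penalty(mapped_inputs, backward_net, gradients, active_dim):
        # hypothetical helper, for illustration only
        z = mapped_inputs.detach().clone().requires_grad_(True)
        x = backward_net(z)  # inverse map g: z -> x
        # chain rule: df/dz = (dx/dz)^T grad_x f, via one vector-Jacobian product
        df_dz = torch.autograd.grad((x * gradients).sum(), z, create_graph=True)[0]
        # penalize dependence of f on the inactive coordinates z_j, j >= active_dim
        return (df_dz[:, active_dim:] ** 2).sum()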

forward(inputs)[source]

Maps original inputs to transformed inputs.

Parameters

inputs (torch.Tensor) – DoubleTensor n_samples-by-n_params containing the points in the original full input space.

Return mapped_inputs

DoubleTensor n_samples-by-n_params containing the nonlinearly transformed inputs.

Return type

torch.Tensor
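
Usage sketch: the forward net is exposed as the forward attribute of a NonlinearLevelSet instance, so the map can be applied directly (nls as constructed above):

    x = torch.rand(100, 2, dtype=torch.double)
    z = nls.forward(x)      # transformed inputs, shape (100, 2)
    z_active = z[:, :1]     # leading active coordinate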

class BackwardNet(n_params, n_layers, dh)[source]

Bases: torch.nn.modules.module.Module

Backward Net class. It is part of the RevNet.

Parameters
  • n_params (int) – number of input parameters.

  • n_layers (int) – number of layers of the RevNet.

  • dh (float) – so-called time step of the RevNet.

forward(mapped_inputs)[source]

Maps transformed inputs to original inputs.

Parameters

mapped_inputs (torch.Tensor) – DoubleTensor n_samples-by-n_params containing the nonlinearly transformed inputs.

Return inputs

DoubleTensor n_samples-by-n_params with the points in the original full input space.

Return type

torch.Tensor
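
Since the RevNet is invertible by construction, composing the two maps should reconstruct the original points up to numerical error; a minimal check (assuming the forward and backward nets of nls share the trained weights):

    x = torch.rand(100, 2, dtype=torch.double)
    z = nls.forward(x)
    x_rec = nls.backward(z)
    print(torch.allclose(x, x_rec, atol=1e-6))  # expected: True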
