GAROM#

class GAROM(problem, generator, discriminator, loss=None, optimizer_generator=torch.optim.Adam, optimizer_generator_kwargs={'lr': 0.001}, optimizer_discriminator=torch.optim.Adam, optimizer_discriminator_kwargs={'lr': 0.001}, scheduler_generator=torch.optim.lr_scheduler.ConstantLR, scheduler_generator_kwargs={'factor': 1, 'total_iters': 0}, scheduler_discriminator=torch.optim.lr_scheduler.ConstantLR, scheduler_discriminator_kwargs={'factor': 1, 'total_iters': 0}, gamma=0.3, lambda_k=0.001, regularizer=False)[source]#

Bases: SolverInterface

GAROM solver class. This class implements the Generative Adversarial Reduced Order Model solver, using user-specified models to solve a specific reduced order modelling ``problem``.

See also

Original reference: Coscia, D., Demo, N., & Rozza, G. (2023). Generative Adversarial Reduced Order Modelling. arXiv preprint arXiv:2305.15881.

Parameters:
  • problem (AbstractProblem) – The formulation of the problem.

  • generator (torch.nn.Module) – The neural network model to use for the generator.

  • discriminator (torch.nn.Module) – The neural network model to use for the discriminator.

  • loss (torch.nn.Module) – The loss function to be minimized, defaults to None. If loss is None, the default PowerLoss(p=1) is used, as in the original paper.

  • optimizer_generator (torch.optim.Optimizer) – The neural network optimizer to use for the generator network, defaults to torch.optim.Adam.

  • optimizer_generator_kwargs (dict) – Optimizer constructor keyword arguments for the generator.

  • optimizer_discriminator (torch.optim.Optimizer) – The neural network optimizer to use for the discriminator network, defaults to torch.optim.Adam.

  • optimizer_discriminator_kwargs (dict) – Optimizer constructor keyword arguments for the discriminator.

  • scheduler_generator (torch.optim.LRScheduler) – Learning rate scheduler for the generator.

  • scheduler_generator_kwargs (dict) – Learning rate scheduler constructor keyword arguments for the generator.

  • scheduler_discriminator (torch.optim.LRScheduler) – Learning rate scheduler for the discriminator.

  • scheduler_discriminator_kwargs (dict) – Learning rate scheduler constructor keyword arguments for the discriminator.

  • gamma (float) – Ratio of expected loss for generator and discriminator, defaults to 0.3.

  • lambda_k (float) – Learning rate for control theory optimization, defaults to 0.001.

  • regularizer (bool) – Whether to include the regularization term in the GAROM loss, defaults to False.

Warning

The algorithm works only for data-driven models. Hence, in the problem definition, the conditions must contain only input_points (e.g. coefficient parameters, time parameters) and output_points.
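A minimal setup sketch is shown below. The network architectures, layer sizes, and the way noise is injected into the generator are illustrative assumptions, not an interface prescribed by the library; any pair of torch.nn.Module objects with shapes compatible with your problem can be used.

```python
import torch
from pina.solvers import GAROM


class Generator(torch.nn.Module):
    """Hypothetical generator: maps input parameters plus Gaussian noise
    to the solution field, so repeated calls sample a distribution."""

    def __init__(self, input_dim=2, latent_dim=12, output_dim=100):
        super().__init__()
        self.latent_dim = latent_dim
        self.net = torch.nn.Sequential(
            torch.nn.Linear(input_dim + latent_dim, 64),
            torch.nn.ReLU(),
            torch.nn.Linear(64, output_dim),
        )

    def forward(self, x):
        # Draw a fresh latent code on every forward pass.
        z = torch.randn(x.shape[0], self.latent_dim, device=x.device)
        return self.net(torch.cat([x, z], dim=-1))


class Discriminator(torch.nn.Module):
    """Hypothetical autoencoder-style critic, in the spirit of the
    original paper; the exact architecture is an assumption."""

    def __init__(self, field_dim=100, hidden_dim=32):
        super().__init__()
        self.encoder = torch.nn.Sequential(
            torch.nn.Linear(field_dim, hidden_dim), torch.nn.ReLU()
        )
        self.decoder = torch.nn.Linear(hidden_dim, field_dim)

    def forward(self, x):
        return self.decoder(self.encoder(x))


# `problem` is assumed to be a user-defined AbstractProblem whose
# conditions contain only input_points and output_points (omitted here).
solver = GAROM(
    problem=problem,
    generator=Generator(),
    discriminator=Discriminator(),
    gamma=0.3,
    lambda_k=0.001,
)
```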

forward(x, mc_steps=20, variance=False)[source]#

Forward step for the GAROM solver.

Parameters:
  • x (torch.Tensor) – The input tensor.

  • mc_steps (int) – Number of Monte Carlo samples used to approximate the expected value, defaults to 20.

  • variance (bool) – If True, the sample variance of the solution is also returned, defaults to False.

Returns:

The expected value of the generator distribution. If variance=True, the sample variance is also returned.

Return type:

torch.Tensor | tuple(torch.Tensor, torch.Tensor)
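A brief usage sketch, continuing the setup above; `params` is a hypothetical tensor of input parameters with one row per query point. Since the solver is a torch module, calling it directly dispatches to forward.

```python
import torch

# Hedged sketch: `params` and its shape are assumptions for illustration.
params = torch.rand(10, 2)  # 10 query points, 2 input parameters each

mean = solver(params)  # expected value over the default 20 MC samples
mean, var = solver.forward(params, mc_steps=50, variance=True)
```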

configure_optimizers()[source]#

Optimizer configuration for the GAROM solver.

Returns:

The optimizers and the schedulers.

Return type:

tuple(list, list)
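This hook is normally invoked by the underlying Lightning training loop rather than by users. A sketch of the return structure follows, assuming the generator's optimizer and scheduler are listed before the discriminator's:

```python
# Assumption: generator entries precede discriminator entries.
optimizers, schedulers = solver.configure_optimizers()
optimizer_generator, optimizer_discriminator = optimizers
scheduler_generator, scheduler_discriminator = schedulers
```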

training_step(batch, batch_idx)[source]#

GAROM solver training step.

Parameters:
  • batch (tuple) – The batch element in the dataloader.

  • batch_idx (int) – The batch index.

Returns:

The sum of the loss functions.

Return type:

LabelTensor
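Like configure_optimizers, training_step is called internally, once per batch, by the training loop; users rarely invoke it directly. A minimal training sketch using the PINA Trainer (the keyword names are assumptions based on the library's usual interface):

```python
from pina import Trainer

# The Trainer drives the optimization and calls training_step per batch.
trainer = Trainer(solver=solver, max_epochs=1000)
trainer.train()
```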