NeuralTangentKernelWeighting#

Module for the Neural Tangent Kernel weighting class.

class NeuralTangentKernelWeighting(update_every_n_epochs=1, alpha=0.5)[source]#

Bases: WeightingInterface

A Neural Tangent Kernel (NTK) scheme for weighting the different losses to boost convergence.

See also

Original reference: Wang, Sifan, Xinling Yu, and Paris Perdikaris. "When and why PINNs fail to train: A neural tangent kernel perspective." Journal of Computational Physics 449 (2022): 110768. DOI: 10.1016/j.jcp.2021.110768.
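
Note

As a sketch of the idea in the reference above (notation taken from the paper, not from this class's API): given loss terms whose NTK blocks are K_ii, each term i is assigned the weight

   \lambda_i = \frac{\operatorname{Tr}(K)}{\operatorname{Tr}(K_{ii})},
   \qquad \operatorname{Tr}(K) = \sum_j \operatorname{Tr}(K_{jj}),

so that terms with a small NTK trace (i.e. the slowly converging ones) receive larger weights.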

Initialization of the NeuralTangentKernelWeighting class.

Parameters:
  • update_every_n_epochs (int) – The number of training epochs between weight updates. If set to 1, the weights are updated at every epoch. Default is 1.

  • alpha (float) – The alpha parameter of the weighting scheme; must be between 0 and 1 (inclusive). Default is 0.5.

Raises:

ValueError – If alpha is not between 0 and 1 (inclusive).
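
Example

A minimal usage sketch. Only the constructor arguments come from this page; the import path and the solver's weighting argument are assumptions about the surrounding library, not part of this class's documented API.

>>> from pina.loss import NeuralTangentKernelWeighting  # assumed import path
>>> weighting = NeuralTangentKernelWeighting(update_every_n_epochs=5, alpha=0.5)
>>> # Hypothetical: hand the scheme to a solver that accepts a ``weighting`` argument.
>>> # solver = PINN(problem=problem, model=model, weighting=weighting)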

weights_update(losses)[source]#

Update the weighting scheme based on the given losses.

Parameters:

losses (dict) – The dictionary of losses.

Returns:

The updated weights.

Return type:

dict
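
For intuition only, the sketch below shows one way a dictionary of scalar losses could be mapped to a dictionary of weights in an NTK-trace style. It is a plain PyTorch illustration, not this class's actual implementation: the model argument, the squared-gradient-norm proxy for the NTK trace, and the blending of old and new weights through alpha are all assumptions.

import torch

def ntk_style_weights(losses, model, old_weights, alpha=0.5):
    """Illustrative NTK-trace-style reweighting (not the library's code)."""
    params = [p for p in model.parameters() if p.requires_grad]
    # Proxy for Tr(K_ii): squared gradient norm of each loss w.r.t. the parameters.
    traces = {}
    for name, loss in losses.items():
        grads = torch.autograd.grad(loss, params, retain_graph=True, allow_unused=True)
        traces[name] = sum(float(g.pow(2).sum()) for g in grads if g is not None)
    total = sum(traces.values())
    # Larger weight for terms with a smaller trace, blended with the old weights via alpha.
    return {
        name: alpha * old_weights.get(name, 1.0)
        + (1 - alpha) * total / max(traces[name], 1e-12)
        for name in losses
    }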