LpLoss#

class LpLoss(p=2, reduction='mean', relative=False)[source]#

Bases: LossInterface

Implementation of the Lp Loss. It defines a criterion that measures the pointwise Lp error between values in the input \(x\) and values in the target \(y\).

If reduction is set to none, the loss can be written as:

\[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = \left[\sum_{i=1}^{D} \left| x_n^i - y_n^i \right|^p \right],\]

If relative is set to True, the relative Lp error is computed:

\[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = \frac{ [\sum_{i=1}^{D} | x_n^i - y_n^i|^p] } {[\sum_{i=1}^{D}|y_n^i|^p]},\]

where \(N\) is the batch size.

If reduction is not none, then:

\[\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{`mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.} \end{cases}\end{split}\]
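As a reading aid, the definitions above can be reproduced in plain PyTorch. The snippet below is only a sketch of the documented behaviour, not the class implementation; the function name lp_loss and the assumption that the feature dimension \(D\) is the last tensor dimension are ours.

import torch

def lp_loss(x, y, p=2, reduction="mean", relative=False):
    # l_n = sum_i |x_n^i - y_n^i|^p, one value per batch element
    # (the feature dimension D is assumed to be the last one).
    l = torch.sum(torch.abs(x - y) ** p, dim=-1)
    if relative:
        # Relative variant: normalize by sum_i |y_n^i|^p.
        l = l / torch.sum(torch.abs(y) ** p, dim=-1)
    if reduction == "mean":
        return l.mean()
    if reduction == "sum":
        return l.sum()
    return l  # reduction == 'none': the vector {l_1, ..., l_N}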

Initialization of the LpLoss class.

Parameters:
  • p (int) – Degree of the Lp norm. It specifies the norm to be computed. Default is 2 (Euclidean norm).

  • reduction (str) – The reduction method for the loss. Available options: none, mean, sum. If none, no reduction is applied. If mean, the sum of the loss values is divided by the number of values. If sum, the loss values are summed. Default is mean.

  • relative (bool) – If True, the relative error is computed. Default is False.

forward(input, target)[source]#

Forward method of the loss function.

Parameters:
  • input – The input tensor \(x\).

  • target – The target tensor \(y\).

Returns:

Loss evaluation.

Return type:

torch.Tensor
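A minimal usage sketch follows, assuming LpLoss is importable from the package's loss module; the import path is a guess and may differ in your installation.

import torch
from pina.loss import LpLoss  # assumed import path; adjust as needed

x = torch.rand(8, 100)  # input: batch of N = 8 samples, each with D = 100 values
y = torch.rand(8, 100)  # target with the same shape

loss = LpLoss(p=2, reduction="mean", relative=True)
value = loss.forward(x, y)  # scalar tensor, since reduction='mean'

# With reduction='none', the documented formula gives one l_n per sample,
# i.e. a tensor of shape (N,) = (8,).
per_sample = LpLoss(p=2, reduction="none").forward(x, y)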