Distance-based Losses

Loss functions in the "distance-based" category are primarily used in regression problems. They quantify the quality of an individual prediction through the numeric difference between the predicted output and the true target, i.e. the residual r = ŷ - y.

This section lists all the subtypes of DistanceLoss that are implemented in this package.
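
To make the role of the residual concrete, here is a minimal standalone sketch (not the package's implementation; the helper name eval_distance_loss is hypothetical) showing that every distance-based loss depends on the prediction and the target only through r = ŷ - y:

    # A distance-based loss is a univariate function of the residual r = ŷ - y.
    eval_distance_loss(L, ŷ, y) = L(ŷ - y)

    ŷ, y = 2.5, 1.0
    eval_distance_loss(abs,  ŷ, y)   # absolute distance: 1.5
    eval_distance_loss(abs2, ŷ, y)   # squared distance:  2.25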

LPDistLoss

LossFunctions.LPDistLoss
LPDistLoss{P} <: DistanceLoss

The P-th power absolute distance loss. It is Lipschitz continuous if and only if P = 1, convex if and only if P ≥ 1, and strictly convex if and only if P > 1.

\[L(r) = |r|^P\]

source
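
As a standalone illustration of the formula above (the helper name lp_dist is hypothetical and not the package's implementation):

    # |r|^P for a chosen power P.
    lp_dist(r, P) = abs(r)^P

    lp_dist(-2.0, 1)   # 2.0, the absolute distance (cf. L1DistLoss)
    lp_dist(-2.0, 2)   # 4.0, the squared distance (cf. L2DistLoss)
    lp_dist(-2.0, 3)   # 8.0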

L1DistLoss

LossFunctions.L1DistLoss
L1DistLoss <: DistanceLoss

The absolute distance loss. Special case of the LPDistLoss with P=1. It is Lipschitz continuous and convex, but not strictly convex.

\[L(r) = |r|\]


              Lossfunction                     Derivative
      ┌────────────┬────────────┐      ┌────────────┬────────────┐
    3 │\.                     ./│    1 │            ┌------------│
      │ '\.                 ./' │      │            |            │
      │   \.               ./   │      │            |            │
      │    '\.           ./'    │      │_           |           _│
    L │      \.         ./      │   L' │            |            │
      │       '\.     ./'       │      │            |            │
      │         \.   ./         │      │            |            │
    0 │          '\./'          │   -1 │------------┘            │
      └────────────┴────────────┘      └────────────┴────────────┘
      -3                        3      -3                        3
                 ŷ - y                            ŷ - y
source
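
A standalone sketch of the loss and of the step-shaped derivative shown in the right panel (hypothetical names, not package code):

    l1_dist(r)       = abs(r)
    l1_dist_deriv(r) = sign(r)   # subgradient; returns 0 at the kink r = 0

    l1_dist(-1.5), l1_dist_deriv(-1.5)   # (1.5, -1.0)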

L2DistLoss

LossFunctions.L2DistLoss
L2DistLoss <: DistanceLoss

The least squares loss. Special case of the LPDistLoss with P=2. It is strictly convex.

\[L(r) = |r|^2\]


              Lossfunction                     Derivative
      ┌────────────┬────────────┐      ┌────────────┬────────────┐
    9 │\                       /│    3 │                   .r/   │
      │".                     ."│      │                 .r'     │
      │ ".                   ." │      │              _./'       │
      │  ".                 ."  │      │_           .r/         _│
    L │   ".               ."   │   L' │         _:/'            │
      │    '\.           ./'    │      │       .r'               │
      │      \.         ./      │      │     .r'                 │
    0 │        "-.___.-"        │   -3 │  _/r'                   │
      └────────────┴────────────┘      └────────────┴────────────┘
      -3                        3      -2                        2
                 ŷ - y                            ŷ - y
source
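
The corresponding standalone sketch (hypothetical names, not package code); the derivative is the straight line shown in the right panel:

    l2_dist(r)       = abs2(r)   # |r|^2
    l2_dist_deriv(r) = 2r

    l2_dist(1.5), l2_dist_deriv(1.5)   # (2.25, 3.0)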

LogitDistLoss

LossFunctions.LogitDistLoss
LogitDistLoss <: DistanceLoss

The distance-based logistic loss for regression. It is strictly convex and Lipschitz continuous.

\[L(r) = - \ln \frac{4 e^r}{(1 + e^r)^2}\]


              Lossfunction                     Derivative
      ┌────────────┬────────────┐      ┌────────────┬────────────┐
    2 │                         │    1 │                   _--'''│
      │\                       /│      │                ./'      │
      │ \.                   ./ │      │              ./         │
      │  '.                 .'  │      │_           ./          _│
    L │   '.               .'   │   L' │           ./            │
      │     \.           ./     │      │         ./              │
      │      '.         .'      │      │       ./                │
    0 │        '-.___.-'        │   -1 │___.-''                  │
      └────────────┴────────────┘      └────────────┴────────────┘
      -3                        3      -4                        4
                 ŷ - y                            ŷ - y
source
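
A standalone sketch of the formula (the name logit_dist is hypothetical, not the package's implementation); the loss is zero at r = 0 and grows roughly linearly for large |r|:

    logit_dist(r) = -log(4 * exp(r) / (1 + exp(r))^2)
    # algebraically equivalent: 2 * log1p(exp(r)) - r - log(4)

    logit_dist(0.0)   # 0.0 (the minimum)
    logit_dist(2.0)   # ≈ 0.87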

HuberLoss

LossFunctions.HuberLoss
HuberLoss <: DistanceLoss

Loss function commonly used for robustness to outliers. For residuals larger than the threshold α (shown as d = 1 in the plot below) it behaves like the L1DistLoss, while for smaller residuals it resembles the L2DistLoss. It is Lipschitz continuous and convex, but not strictly convex.

\[L(r) = \begin{cases} \frac{r^2}{2} & \quad \text{if } | r | \le \alpha \\ \alpha | r | - \frac{\alpha^2}{2} & \quad \text{otherwise}\\ \end{cases}\]


              Lossfunction (d=1)               Derivative
      ┌────────────┬────────────┐      ┌────────────┬────────────┐
    2 │                         │    1 │                .+-------│
      │                         │      │              ./'        │
      │\.                     ./│      │             ./          │
      │ '.                   .' │      │_           ./          _│
    L │   \.               ./   │   L' │           /'            │
      │     \.           ./     │      │          /'             │
      │      '.         .'      │      │        ./'              │
    0 │        '-.___.-'        │   -1 │-------+'                │
      └────────────┴────────────┘      └────────────┴────────────┘
      -2                        2      -2                        2
                 ŷ - y                            ŷ - y
source
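
A standalone sketch of the piecewise definition (the name huber is hypothetical, not the package's implementation):

    # Quadratic for |r| ≤ α, linear beyond the threshold.
    huber(r, α) = abs(r) <= α ? r^2 / 2 : α * abs(r) - α^2 / 2

    huber(0.5, 1.0)   # 0.125 (quadratic branch)
    huber(3.0, 1.0)   # 2.5   (linear branch)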

L1EpsilonInsLoss

LossFunctions.L1EpsilonInsLoss
L1EpsilonInsLoss <: DistanceLoss

The $ϵ$-insensitive loss. Typically used in linear support vector regression. It ignores deviations smaller than $ϵ$, but penalizes larger deviations linearly. It is Lipschitz continuous and convex, but not strictly convex.

\[L(r) = \max \{ 0, | r | - \epsilon \}\]


              Lossfunction (ϵ=1)               Derivative
      ┌────────────┬────────────┐      ┌────────────┬────────────┐
    2 │\                       /│    1 │                  ┌------│
      │ \                     / │      │                  |      │
      │  \                   /  │      │                  |      │
      │   \                 /   │      │_      ___________!     _│
    L │    \               /    │   L' │      |                  │
      │     \             /     │      │      |                  │
      │      \           /      │      │      |                  │
    0 │       \_________/       │   -1 │------┘                  │
      └────────────┴────────────┘      └────────────┴────────────┘
      -3                        3      -2                        2
                 ŷ - y                            ŷ - y
source
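
A standalone sketch of the formula (the name eps_insensitive is hypothetical, not the package's implementation):

    eps_insensitive(r, ε) = max(0, abs(r) - ε)

    eps_insensitive(0.4, 0.5)   # 0.0, inside the insensitive band
    eps_insensitive(2.0, 0.5)   # 1.5, penalized linearly beyond it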

L2EpsilonInsLoss

LossFunctions.L2EpsilonInsLoss
L2EpsilonInsLoss <: DistanceLoss

The quadratic $ϵ$-insensitive loss. Typically used in linear support vector regression. It ignores deviations smaller than $ϵ$, but penalizes larger deviations quadratically. It is convex, but not strictly convex.

\[L(r) = \max \{ 0, | r | - \epsilon \}^2\]


              Lossfunction (ϵ=0.5)             Derivative
      ┌────────────┬────────────┐      ┌────────────┬────────────┐
    8 │                         │    1 │                  /      │
      │:                       :│      │                 /       │
      │'.                     .'│      │                /        │
      │ \.                   ./ │      │_         _____/        _│
    L │  \.                 ./  │   L' │         /               │
      │   \.               ./   │      │        /                │
      │    '\.           ./'    │      │       /                 │
    0 │      '-._______.-'      │   -1 │      /                  │
      └────────────┴────────────┘      └────────────┴────────────┘
      -3                        3      -2                        2
                 ŷ - y                            ŷ - y
source
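
The quadratic variant as a standalone sketch (hypothetical name, not package code):

    eps_insensitive2(r, ε) = max(0, abs(r) - ε)^2

    eps_insensitive2(0.4, 0.5)   # 0.0, inside the insensitive band
    eps_insensitive2(2.0, 0.5)   # 2.25, penalized quadratically beyond it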

PeriodicLoss

LossFunctions.PeriodicLoss
PeriodicLoss <: DistanceLoss

Measures distance on a circle of specified circumference c.

\[L(r) = 1 - \cos \left( \frac{2 r \pi}{c} \right)\]

source
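
A standalone sketch of the formula (the name periodic is hypothetical, not the package's implementation); residuals that differ by a multiple of the circumference c incur the same loss:

    periodic(r, c) = 1 - cos(2π * r / c)

    periodic(0.0, 1.0)   # 0.0
    periodic(1.0, 1.0)   # ≈ 0.0, a full period away counts as no deviation
    periodic(0.5, 1.0)   # 2.0, maximal at half a period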

QuantileLoss

LossFunctions.QuantileLoss
QuantileLoss <: DistanceLoss

The distance-based quantile loss, also known as pinball loss, can be used to estimate conditional τ-quantiles. It is Lipschitz continuous and convex, but not strictly convex. Furthermore it is symmetric if and only if τ = 1/2.

\[L(r) = \begin{cases} -\tau r & \quad \text{if } r < 0 \\ \left( 1 - \tau \right) r & \quad \text{if } r \ge 0 \\ \end{cases}\]


              Lossfunction (τ=0.7)             Derivative
      ┌────────────┬────────────┐      ┌────────────┬────────────┐
    2 │'\                       │  0.3 │            ┌------------│
      │  \.                     │      │            |            │
      │   '\                    │      │_           |           _│
      │     \.                  │      │            |            │
    L │      '\              ._-│   L' │            |            │
      │        \.         ..-'  │      │            |            │
      │         '.     _r/'     │      │            |            │
    0 │           '_./'         │ -0.7 │------------┘            │
      └────────────┴────────────┘      └────────────┴────────────┘
      -3                        3      -3                        3
                 ŷ - y                            ŷ - y
source
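
A standalone sketch of the pinball loss under the r = ŷ - y convention used on this page (hypothetical name, not the package's implementation; see also the note at the end of this section):

    quantile_loss(r, τ) = r < 0 ? -τ * r : (1 - τ) * r

    quantile_loss(-1.0, 0.7)   # 0.7, under-prediction weighted by τ
    quantile_loss( 1.0, 0.7)   # 0.3, over-prediction weighted by 1 - τ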

LogCoshLoss

LossFunctions.LogCoshLoss
LogCoshLoss <: DistanceLoss

The log cosh loss is a twice differentiable, strictly convex, and Lipschitz continuous function.

\[L(r) = \ln \left( \cosh (r) \right)\]


              Lossfunction                     Derivative
      ┌────────────┬────────────┐      ┌────────────┬────────────┐
  2.5 │\                       /│    1 │                 .-------│
      │".                     ."│      │                |        │
      │ ".                   ." │      │               /         │
      │  ".                 ."  │      │_           . "         _│
    L │   ".               ."   │   L' │         /"              │
      │    '\.           ./'    │      │       ."                │
      │      \.         ./      │      │       |                 │
    0 │        "-. _ .-"        │   -1 │------"                  │
      └────────────┴────────────┘      └────────────┴────────────┘
      -3                        3      -3                        3
                 ŷ - y                            ŷ - y
source
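
A standalone sketch of the formula (the name log_cosh is hypothetical, not the package's implementation); for small residuals it behaves like half the squared loss, for large residuals approximately like |r| - log(2):

    log_cosh(r) = log(cosh(r))

    log_cosh(0.1)   # ≈ 0.005, close to r^2 / 2
    log_cosh(5.0)   # ≈ 4.31, close to |r| - log(2)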
Note

You may note that our definition of the QuantileLoss looks different from what one usually sees in other literature. The reason is that we have to correct for the fact that in our case $r = \hat{y} - y$ instead of $r_{\textrm{usual}} = y - \hat{y}$, which means that the two conventions are related by $r = -r_{\textrm{usual}}$.