Loss functions for classification

Figure: Bayes consistent loss functions: zero-one loss (gray), Savage loss (green), logistic loss (orange), exponential loss (purple), tangent loss (brown), square loss (blue).

In machine learning and mathematical optimization, loss functions for classification are computationally feasible loss functions representing the price paid for inaccuracy of predictions in classification problems (problems of identifying which category a particular observation belongs to).[1] Given $\mathcal{X}$ as the space of all possible inputs (usually $\mathcal{X} \subset \mathbb{R}^d$), and $\mathcal{Y} = \{-1, 1\}$ as the set of labels (possible outputs), a typical goal of classification algorithms is to find a function $f : \mathcal{X} \to \mathcal{Y}$ which best predicts a label $y$ for a given input $\vec{x}$.[2] However, because of incomplete information, noise in the measurement, or probabilistic components in the underlying process, it is possible for the same $\vec{x}$ to generate different $y$.[3] As a result, the goal of the learning problem is to minimize expected loss (also known as the risk), defined as

$$I[f] = \int_{\mathcal{X} \times \mathcal{Y}} V(f(\vec{x}), y)\, p(\vec{x}, y)\, d\vec{x}\, dy$$
where $V(f(\vec{x}), y)$ is a given loss function, and $p(\vec{x}, y)$ is the probability density function of the process that generated the data, which can equivalently be written as

$$p(\vec{x}, y) = p(y \mid \vec{x})\, p(\vec{x}).$$
Within classification, several commonly used loss functions are written solely in terms of the product of the true label $y$ and the predicted label $f(\vec{x})$. Therefore, they can be defined as functions of only one variable $\upsilon = y f(\vec{x})$ (the margin), so that $V(f(\vec{x}), y) = \phi(y f(\vec{x})) = \phi(\upsilon)$ for a suitably chosen function $\phi : \mathbb{R} \to \mathbb{R}$. These are called margin-based loss functions. Choosing a margin-based loss function amounts to choosing $\phi$. Selection of a loss function within this framework impacts the optimal $f^{*}_{\phi}$ which minimizes the expected risk; see empirical risk minimization.
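As an illustrative sketch (my own, not taken from the cited sources), a few of the margin-based losses named in the figure caption can be written directly as functions $\phi(\upsilon)$ of the margin $\upsilon = y f(\vec{x})$; the scaling constants used below are one common convention among several:

```python
import numpy as np

# Margin-based losses phi(v), with v = y * f(x).
# Scaling conventions are assumptions; texts differ on the constants.

def phi_zero_one(v):
    """0-1 loss: 1 when the sign of f(x) disagrees with y (ties counted as errors)."""
    return np.where(v <= 0, 1.0, 0.0)

def phi_square(v):
    """Square loss in margin form: (1 - v)^2, since (f(x) - y)^2 = (1 - y f(x))^2 for y in {-1, 1}."""
    return (1.0 - v) ** 2

def phi_logistic(v):
    """Logistic loss, here scaled by 1/ln(2) so that phi(0) = 1."""
    return np.log1p(np.exp(-v)) / np.log(2.0)

def phi_exponential(v):
    """Exponential loss, as used by AdaBoost."""
    return np.exp(-v)

v = np.linspace(-2.0, 2.0, 5)
for phi in (phi_zero_one, phi_square, phi_logistic, phi_exponential):
    print(phi.__name__, phi(v))
```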

In the case of binary classification, it is possible to simplify the calculation of expected risk from the integral specified above. Specifically,

$$\begin{aligned}
I[f] &= \int_{\mathcal{X} \times \mathcal{Y}} V(f(\vec{x}), y)\, p(\vec{x}, y)\, d\vec{x}\, dy \\
&= \int_{\mathcal{X}} \int_{\mathcal{Y}} \phi(y f(\vec{x}))\, p(y \mid \vec{x})\, p(\vec{x})\, dy\, d\vec{x} \\
&= \int_{\mathcal{X}} \left[ \phi(f(\vec{x}))\, p(1 \mid \vec{x}) + \phi(-f(\vec{x}))\, p(-1 \mid \vec{x}) \right] p(\vec{x})\, d\vec{x} \\
&= \int_{\mathcal{X}} \left[ \phi(f(\vec{x}))\, p(1 \mid \vec{x}) + \phi(-f(\vec{x}))\, (1 - p(1 \mid \vec{x})) \right] p(\vec{x})\, d\vec{x}
\end{aligned}$$

The second equality follows from the properties described above. The third equality follows from the fact that 1 and −1 are the only possible values for $y$, and the fourth because $p(-1 \mid \vec{x}) = 1 - p(1 \mid \vec{x})$. The term within brackets, $\phi(f(\vec{x}))\, p(1 \mid \vec{x}) + \phi(-f(\vec{x}))\, (1 - p(1 \mid \vec{x}))$, is known as the conditional risk.
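As a small numerical illustration (my own sketch, not from the cited sources), the bracketed conditional risk at a single point $\vec{x}$ can be evaluated once $\eta = p(1 \mid \vec{x})$ and the surrogate $\phi$ are fixed; the value $\eta = 0.8$ below is made up for the example:

```python
import numpy as np

def conditional_risk(f_x, eta, phi):
    """Conditional risk phi(f(x)) * eta + phi(-f(x)) * (1 - eta) at a point x,
    where eta = p(1 | x) and phi is a margin-based loss."""
    return phi(f_x) * eta + phi(-f_x) * (1.0 - eta)

# Example with the exponential loss phi(v) = exp(-v) and eta = 0.8.
phi = lambda v: np.exp(-v)
for f_x in (-1.0, 0.0, 0.5 * np.log(0.8 / 0.2)):
    print(f_x, conditional_risk(f_x, 0.8, phi))
# The last candidate, f(x) = (1/2) ln(eta / (1 - eta)), gives the smallest value;
# it is the minimizer of the conditional exponential risk (see the next paragraph).
```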

One can solve for the minimizer of $I[f]$ by taking the functional derivative of the last equality with respect to $f$ and setting the derivative equal to 0. Writing $\eta = p(1 \mid \vec{x})$ for the conditional probability of the positive class, this yields the following equation

$$\frac{\partial \phi(f)}{\partial f}\, \eta + \frac{\partial \phi(-f)}{\partial f}\, (1 - \eta) = 0,$$

which is also equivalent to setting the derivative of the conditional risk equal to zero.
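As a brief worked example (a standard computation, not spelled out in this excerpt's sources): for the exponential loss $\phi(\upsilon) = e^{-\upsilon}$, the condition above becomes

$$-e^{-f}\,\eta + e^{f}\,(1 - \eta) = 0 \quad\Longrightarrow\quad f^{*}(\vec{x}) = \frac{1}{2}\ln\frac{\eta}{1 - \eta},$$

so the risk-minimizing score is half the log-odds of the positive class.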

Given the binary nature of classification, a natural selection for a loss function (assuming equal cost for false positives and false negatives) would be the 0-1 loss function (0-1 indicator function), which takes the value 0 if the predicted classification equals the true class and 1 if it does not. This selection is modeled by

$$V(f(\vec{x}), y) = H(-y f(\vec{x}))$$

where $H$ indicates the Heaviside step function. However, this loss function is non-convex and non-smooth, and solving for the optimal solution is an NP-hard combinatorial optimization problem.[4] As a result, it is better to substitute surrogate loss functions which are tractable for commonly used learning algorithms, as they have convenient properties such as convexity and smoothness. In addition to their computational tractability, one can show that the solutions to the learning problem using these loss surrogates allow for the recovery of the actual solution to the original classification problem.[5] Some of these surrogates are described below.
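A minimal sketch (my own, not from the cited sources) of the 0-1 loss in this Heaviside form, alongside one convex surrogate; the tie-breaking convention $H(0) = 1$ is an assumption:

```python
import numpy as np

def heaviside(t):
    """Heaviside step function; the convention H(0) = 1 counts a zero score as an error."""
    return np.where(t >= 0, 1.0, 0.0)

def zero_one_loss(f_x, y):
    """0-1 loss V(f(x), y) = H(-y f(x)): 0 on a correct sign, 1 otherwise."""
    return heaviside(-y * f_x)

def exponential_surrogate(f_x, y):
    """A convex, smooth surrogate, exp(-y f(x)), which upper-bounds the 0-1 loss."""
    return np.exp(-y * f_x)

y = np.array([1.0, -1.0, 1.0, -1.0])
f_x = np.array([0.7, 0.3, -1.2, -2.0])
print(zero_one_loss(f_x, y))          # [0. 1. 1. 0.]
print(exponential_surrogate(f_x, y))  # pointwise >= the 0-1 loss
```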

In practice, the probability distribution $p(\vec{x}, y)$ is unknown. Consequently, utilizing a training set of $n$ independently and identically distributed sample points

$$S = \{(\vec{x}_1, y_1), \dots, (\vec{x}_n, y_n)\}$$

drawn from the data sample space, one seeks to minimize empirical risk

$$I_S[f] = \frac{1}{n} \sum_{i=1}^{n} V(f(\vec{x}_i), y_i)$$
as a proxy for expected risk.[3] (See statistical learning theory for a more detailed description.)
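A minimal sketch of empirical risk minimization (my own illustration, not from the cited references), assuming a linear score $f(\vec{x}) = \vec{w} \cdot \vec{x}$, the logistic surrogate, and plain gradient descent; the helper names, the synthetic data, the step size, and the iteration count are all made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic i.i.d. sample S = {(x_i, y_i)}, y in {-1, +1} (toy data, illustration only).
n, d = 200, 2
X = rng.normal(size=(n, d))
y = np.where(X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=n) > 0, 1.0, -1.0)

def empirical_risk(w, X, y):
    """Empirical risk I_S[f] = (1/n) sum_i phi(y_i f(x_i)) with the logistic
    surrogate phi(v) = ln(1 + exp(-v)) and a linear score f(x) = w . x."""
    margins = y * (X @ w)
    return np.mean(np.log1p(np.exp(-margins)))

def gradient(w, X, y):
    """Gradient of the empirical logistic risk with respect to w."""
    margins = y * (X @ w)
    coeff = -y / (1.0 + np.exp(margins))   # d phi / d margin, times y
    return (X * coeff[:, None]).mean(axis=0)

# Plain gradient descent as a stand-in for any empirical risk minimizer.
w = np.zeros(d)
for _ in range(500):
    w -= 0.5 * gradient(w, X, y)

print("empirical risk:", empirical_risk(w, X, y))
print("training 0-1 error:", np.mean(np.sign(X @ w) != y))
```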

  1. ^ Rosasco, L.; De Vito, E. D.; Caponnetto, A.; Piana, M.; Verri, A. (2004). "Are Loss Functions All the Same?" (PDF). Neural Computation. 16 (5): 1063–1076. CiteSeerX 10.1.1.109.6786. doi:10.1162/089976604773135104. PMID 15070510. S2CID 11845688.
  2. ^ Shen, Yi (2005), Loss Functions For Binary Classification and Class Probability Estimation (PDF), University of Pennsylvania, retrieved 6 December 2014
  3. ^ a b Rosasco, Lorenzo; Poggio, Tomaso (2014), A Regularization Tour of Machine Learning, MIT-9.520 Lecture Notes (manuscript)
  4. ^ Piyush, Rai (13 September 2011), Support Vector Machines (Contd.), Classification Loss Functions and Regularizers (PDF), Utah CS5350/6350: Machine Learning, retrieved 4 May 2021
  5. ^ Ramanan, Deva (27 February 2008), Lecture 14 (PDF), UCI ICS273A: Machine Learning, retrieved 6 December 2014
