Cantelli's inequality

In probability theory, Cantelli's inequality (also called the Chebyshev–Cantelli inequality and the one-sided Chebyshev inequality) is an improved version of Chebyshev's inequality for one-sided tail bounds.[1][2][3] The inequality states that, for \lambda > 0,

    \Pr(X - \mathbb{E}[X] \ge \lambda) \le \frac{\sigma^2}{\sigma^2 + \lambda^2},

where

X is a real-valued random variable,
\Pr is the probability measure,
\mathbb{E}[X] is the expected value of X,
\sigma^2 is the variance of X.
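
A minimal sketch of the standard derivation may clarify where the constant comes from; it applies Markov's inequality, and \mu here abbreviates \mathbb{E}[X] (notation introduced for brevity):

    \Pr(X - \mu \ge \lambda)
        = \Pr\big(X - \mu + u \ge \lambda + u\big)
        \le \frac{\mathbb{E}\big[(X - \mu + u)^2\big]}{(\lambda + u)^2}
        = \frac{\sigma^2 + u^2}{(\lambda + u)^2}
        \qquad \text{for any } u \ge 0,

where the middle step applies Markov's inequality to the nonnegative random variable (X - \mu + u)^2. Minimizing the right-hand side over u \ge 0 gives the optimal choice u = \sigma^2/\lambda, and substituting this value yields \sigma^2/(\sigma^2 + \lambda^2).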

Applying the Cantelli inequality to -X gives a bound on the lower tail,

    \Pr(X - \mathbb{E}[X] \le -\lambda) \le \frac{\sigma^2}{\sigma^2 + \lambda^2}, \qquad \lambda > 0.
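
As an illustrative sanity check (not part of the original article), the following Python sketch estimates both one-sided tails for an exponential sample and compares them against the Cantelli bound; the distribution, sample size, and deviation values are arbitrary choices made here:

    import numpy as np

    # Monte Carlo check of Cantelli's bound (illustrative assumptions:
    # exponential sample with mean 1 and variance 1; any distribution
    # with finite variance would do).
    rng = np.random.default_rng(0)
    x = rng.exponential(scale=1.0, size=1_000_000)
    mu, var = x.mean(), x.var()

    for lam in (0.5, 1.0, 2.0):
        bound = var / (var + lam ** 2)     # Cantelli bound for one tail
        upper = np.mean(x - mu >= lam)     # Pr(X - E[X] >= lambda)
        lower = np.mean(x - mu <= -lam)    # Pr(X - E[X] <= -lambda)
        print(f"lambda={lam}: upper tail {upper:.4f}, "
              f"lower tail {lower:.4f}, bound {bound:.4f}")

Both empirical tail frequencies fall below the bound, as the inequality guarantees.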

While the inequality is often attributed to Francesco Paolo Cantelli, who published it in 1928,[4] it originates in Chebyshev's work of 1874.[5] When bounding the event that a random variable deviates from its mean in only one direction (positive or negative), Cantelli's inequality gives an improvement over Chebyshev's inequality, as the comparison below illustrates. The Chebyshev inequality has "higher moments versions" and "vector versions", and so does the Cantelli inequality.
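
A worked numerical comparison (added here for illustration) makes the improvement concrete. For a deviation of \lambda = 2\sigma,

    \underbrace{\frac{\sigma^2}{\sigma^2 + 4\sigma^2}}_{\text{Cantelli}} = \frac{1}{5}
    \qquad \text{versus} \qquad
    \underbrace{\frac{\sigma^2}{4\sigma^2}}_{\text{Chebyshev}} = \frac{1}{4},

and at \lambda = \sigma Cantelli gives 1/2 while Chebyshev's bound equals 1 and is vacuous.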

  1. ^ Boucheron, Stéphane; Lugosi, Gábor; Massart, Pascal (2013). Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford: Oxford University Press. ISBN 978-0-19-953525-5. OCLC 829910957.
  2. ^ Ngo, Hung Q. "Tail and Concentration Inequalities".
  3. ^ Lugosi, Gábor. "Concentration-of-measure inequalities".
  4. ^ Cantelli, F. P. (1928). "Sui confini della probabilità". Atti del Congresso Internazionale dei Matematici, Bologna, 6, 47-5.
  5. ^ Ghosh, B. K. (2002). "Probability inequalities related to Markov's theorem". The American Statistician, 56(3), pp. 186–190.
