Empirical distribution function

Distribution function associated with the empirical measure of a sample

The green curve, which asymptotically approaches heights of 0 and 1 without reaching them, is the true cumulative distribution function of the standard normal distribution. The grey hash marks represent the observations in a particular sample drawn from that distribution, and the horizontal steps of the blue step function (each step including its leftmost point but not its rightmost point) form the empirical distribution function of that sample.

In statistics, an empirical distribution function (commonly also called an empirical cumulative distribution function, eCDF) is the distribution function associated with the empirical measure of a sample.[1] This cumulative distribution function is a step function that jumps up by 1/n at each of the n data points. Its value at any specified value of the measured variable is the fraction of observations of the measured variable that are less than or equal to the specified value.

The empirical distribution function is an estimate of the cumulative distribution function that generated the points in the sample. It converges with probability 1 to that underlying distribution, according to the Glivenko–Cantelli theorem. A number of results exist to quantify the rate of convergence of the empirical distribution function to the underlying cumulative distribution function.

Definition

Let (X_1, …, X_n) be independent, identically distributed real random variables with the common cumulative distribution function F(t). Then the empirical distribution function is defined as[2][3]

\widehat{F}_{n}(t) = \frac{\text{number of elements in the sample} \le t}{n} = \frac{1}{n}\sum_{i=1}^{n}\mathbf{1}_{X_{i}\le t},

where 1_A is the indicator of the event A. For a fixed t, the indicator 1_{X_i ≤ t} is a Bernoulli random variable with parameter p = F(t); hence nF̂_n(t) is a binomial random variable with mean nF(t) and variance nF(t)(1 − F(t)). This implies that F̂_n(t) is an unbiased estimator for F(t).

However, in some textbooks the definition is instead given as

\widehat{F}_{n}(t) = \frac{1}{n+1}\sum_{i=1}^{n}\mathbf{1}_{X_{i}\le t}.[4][5]
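As a concrete illustration of the first definition, here is a minimal sketch in Python; the sample (100 standard-normal draws) and the evaluation point t = 0 are illustrative choices, not part of the definition:

    import numpy as np

    def ecdf(sample, t):
        """Empirical distribution function: the fraction of observations <= t."""
        sample = np.asarray(sample)
        return np.mean(sample <= t)   # (1/n) * sum of indicators 1{X_i <= t}

    rng = np.random.default_rng(0)
    x = rng.standard_normal(100)      # n = 100 draws from the standard normal
    print(ecdf(x, 0.0))               # should be close to the true value F(0) = 0.5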

Mean

The mean of the empirical distribution is an unbiased estimator of the mean of the population distribution.

E_{n}(X) = \frac{1}{n}\sum_{i=1}^{n}x_{i},

which is more commonly denoted x̄.

Variance

The variance of the empirical distribution times n/(n − 1) is an unbiased estimator of the variance of the population distribution, for any distribution of X that has a finite variance.

\operatorname{Var}(X) = \operatorname{E}\left[(X-\operatorname{E}[X])^{2}\right] = \operatorname{E}\left[(X-\bar{x})^{2}\right] = \frac{1}{n}\sum_{i=1}^{n}(x_{i}-\bar{x})^{2}
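A short sketch of the mean and variance statements above, using an illustrative normal sample with known mean and variance:

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.normal(loc=2.0, scale=3.0, size=50)   # true mean 2, true variance 9
    n = len(x)

    mean_emp = x.mean()                           # (1/n) * sum(x_i), i.e. the sample mean
    var_emp = np.mean((x - mean_emp) ** 2)        # variance of the empirical distribution
    var_unbiased = var_emp * n / (n - 1)          # unbiased estimator; equals np.var(x, ddof=1)

    print(mean_emp, var_emp, var_unbiased)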

Mean squared error

The mean squared error for the empirical distribution is as follows.

\operatorname{MSE} = \frac{1}{n}\sum_{i=1}^{n}(Y_{i}-\hat{Y}_{i})^{2} = \operatorname{Var}_{\hat{\theta}}(\hat{\theta}) + \operatorname{Bias}(\hat{\theta},\theta)^{2},

where θ̂ is an estimator and θ an unknown parameter.

Quantiles

For any real number a, the notation ⌈a⌉ (read "ceiling of a") denotes the least integer greater than or equal to a. For any real number a, the notation ⌊a⌋ (read "floor of a") denotes the greatest integer less than or equal to a.

If nq is not an integer, then the q-th quantile is unique and is equal to x_(⌈nq⌉).

If nq is an integer, then the q-th quantile is not unique and is any real number x such that

x_{(nq)} < x < x_{(nq+1)}.

Empirical median

If n is odd, then the empirical median is the number

\tilde{x} = x_{(\lceil n/2 \rceil)}.

If n is even, then the empirical median is the number

\tilde{x} = \frac{x_{(n/2)} + x_{(n/2+1)}}{2}.
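A minimal sketch of the quantile rule above (the order statistics in the formulas are 1-based, so indices are shifted by one for Python's 0-based arrays; when nq is an integer, taking the midpoint of the admissible interval is one convenient convention):

    import math
    import numpy as np

    def empirical_quantile(sample, q):
        x = np.sort(np.asarray(sample))          # order statistics x_(1) <= ... <= x_(n)
        n = len(x)
        nq = n * q
        if not float(nq).is_integer():
            return x[math.ceil(nq) - 1]          # unique quantile x_(ceil(nq))
        k = int(nq)
        return 0.5 * (x[k - 1] + x[k])           # any value in (x_(nq), x_(nq+1)) works

    rng = np.random.default_rng(2)
    sample = rng.standard_normal(101)            # odd n, so the median is x_(ceil(n/2))
    print(empirical_quantile(sample, 0.5))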

Asymptotic properties

Since the ratio (n + 1)/n approaches 1 as n goes to infinity, the asymptotic properties of the two definitions that are given above are the same.

By the strong law of large numbers, the estimator F̂_n(t) converges to F(t) as n → ∞ almost surely, for every value of t:[2]

\widehat{F}_{n}(t)\ \xrightarrow{\text{a.s.}}\ F(t);

thus the estimator F̂_n(t) is consistent. This expression asserts the pointwise convergence of the empirical distribution function to the true cumulative distribution function. There is a stronger result, called the Glivenko–Cantelli theorem, which states that the convergence in fact happens uniformly over t:[6]

\|\widehat{F}_{n}-F\|_{\infty} \equiv \sup_{t\in\mathbb{R}}\big|\widehat{F}_{n}(t)-F(t)\big|\ \xrightarrow{\text{a.s.}}\ 0.

The sup-norm in this expression is called the Kolmogorov–Smirnov statistic for testing the goodness of fit between the empirical distribution F̂_n(t) and the assumed true cumulative distribution function F. Other norm functions may reasonably be used here instead of the sup-norm. For example, the L²-norm gives rise to the Cramér–von Mises statistic.
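For a continuous F the supremum is attained at the sample points, so the Kolmogorov–Smirnov statistic can be computed directly from the order statistics. A small sketch, using a standard-normal sample as the illustrative choice:

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(3)
    x = np.sort(rng.standard_normal(1000))
    n = len(x)
    F = norm.cdf(x)                              # true CDF at the order statistics

    # F_hat_n jumps from (i-1)/n to i/n at x_(i), so check both sides of each jump.
    d_plus = np.max(np.arange(1, n + 1) / n - F)
    d_minus = np.max(F - np.arange(0, n) / n)
    ks_stat = max(d_plus, d_minus)
    print(ks_stat)                               # shrinks roughly like 1/sqrt(n)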

The asymptotic distribution can be further characterized in several different ways. First, the central limit theorem states that pointwise, F̂_n(t) has an asymptotically normal distribution with the standard √n rate of convergence:[2]

\sqrt{n}\big(\widehat{F}_{n}(t)-F(t)\big)\ \xrightarrow{d}\ \mathcal{N}\Big(0,\,F(t)\big(1-F(t)\big)\Big).
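This pointwise normality is easy to check by simulation; in the sketch below the standard normal distribution, the point t = 0.5, the sample size and the number of replications are all illustrative choices:

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(4)
    t, n, reps = 0.5, 500, 2000
    Ft = norm.cdf(t)

    samples = rng.standard_normal((reps, n))     # reps independent samples of size n
    F_hat = np.mean(samples <= t, axis=1)        # F_hat_n(t) for each replication
    scaled = np.sqrt(n) * (F_hat - Ft)

    print(scaled.var(), Ft * (1 - Ft))           # the two values should be close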

This result is extended by Donsker's theorem, which asserts that the empirical process √n(F̂_n − F), viewed as a function indexed by t ∈ ℝ, converges in distribution in the Skorokhod space D[−∞, +∞] to the mean-zero Gaussian process G_F = B ∘ F, where B is the standard Brownian bridge.[6] The covariance structure of this Gaussian process is

\operatorname{E}\big[\,G_{F}(t_{1})\,G_{F}(t_{2})\,\big] = F(t_{1}\wedge t_{2}) - F(t_{1})\,F(t_{2}).

The uniform rate of convergence in Donsker's theorem can be quantified by the result known as the Hungarian embedding:[7]

\limsup_{n\to\infty}\frac{\sqrt{n}}{\ln^{2}n}\,\big\|\sqrt{n}\,(\widehat{F}_{n}-F)-G_{F,n}\big\|_{\infty} < \infty, \quad \text{a.s.}

Alternatively, the rate of convergence of √n(F̂_n − F) can also be quantified in terms of the asymptotic behavior of the sup-norm of this expression. A number of results exist in this vein; for example, the Dvoretzky–Kiefer–Wolfowitz inequality provides a bound on the tail probabilities of √n‖F̂_n − F‖_∞:[7]

\Pr\Big(\sqrt{n}\,\|\widehat{F}_{n}-F\|_{\infty} > z\Big) \le 2e^{-2z^{2}}.

In fact, Kolmogorov has shown that if the cumulative distribution function F is continuous, then the expression √n‖F̂_n − F‖_∞ converges in distribution to ‖B‖_∞, which has the Kolmogorov distribution that does not depend on the form of F.

Another result, which follows from the law of the iterated logarithm, is that[7]

\limsup_{n\to\infty}\frac{\sqrt{n}\,\|\widehat{F}_{n}-F\|_{\infty}}{\sqrt{2\ln\ln n}} \le \frac{1}{2}, \quad \text{a.s.}

and

\liminf_{n\to\infty}\sqrt{2n\ln\ln n}\,\|\widehat{F}_{n}-F\|_{\infty} = \frac{\pi}{2}, \quad \text{a.s.}

Confidence intervals

Figure: empirical CDF, true CDF and confidence-interval plots for various sample sizes of the normal distribution.

By the Dvoretzky–Kiefer–Wolfowitz inequality, an interval that contains the true CDF, F(x), with probability 1 − α is specified as

Figure: empirical CDF, true CDF and confidence-interval plots for various sample sizes of the Cauchy distribution.

F_{n}(x) - \varepsilon \le F(x) \le F_{n}(x) + \varepsilon \quad \text{where } \varepsilon = \sqrt{\frac{\ln\frac{2}{\alpha}}{2n}}.

Using these bounds, the empirical CDF, the true CDF and the confidence band can be plotted for different distributions with any of the statistical implementations listed below; a sketch based on the statsmodels ECDF class follows.
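This is a minimal sketch (the standard-normal sample, the sample size and α = 0.05 are illustrative choices):

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.stats import norm
    from statsmodels.distributions.empirical_distribution import ECDF

    rng = np.random.default_rng(5)
    sample = rng.standard_normal(100)
    n, alpha = len(sample), 0.05
    eps = np.sqrt(np.log(2 / alpha) / (2 * n))   # DKW half-width

    ecdf = ECDF(sample)
    grid = np.linspace(-4, 4, 400)

    plt.step(grid, ecdf(grid), where="post", label="empirical CDF")
    plt.plot(grid, norm.cdf(grid), label="true CDF")
    plt.fill_between(grid,
                     np.clip(ecdf(grid) - eps, 0, 1),
                     np.clip(ecdf(grid) + eps, 0, 1),
                     alpha=0.3, label="DKW confidence band")
    plt.legend()
    plt.show()

The band is clipped to [0, 1] because the raw DKW bounds can extend outside the range of a distribution function near the tails.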

Figure: empirical CDF, true CDF and confidence-interval plots for various sample sizes of the triangular distribution.

Statistical implementation

A non-exhaustive list of software implementations of the empirical distribution function includes:

  • In R, the ecdf function computes an empirical cumulative distribution function, with several methods for plotting, printing and computing with the resulting "ecdf" object.
  • In MATLAB, an empirical cumulative distribution function (cdf) plot can be produced.
  • In JMP from SAS, the CDF plot creates a plot of the empirical cumulative distribution function.
  • In Minitab, an empirical CDF can be created.
  • In Mathwave, a probability distribution can be fitted to the data.
  • In Dataplot, an empirical CDF can be plotted.
  • In SciPy, scipy.stats can be used to plot the distribution.
  • In Statsmodels, statsmodels.distributions.empirical_distribution.ECDF can be used.
  • In Matplotlib, histograms can be used to plot a cumulative distribution.
  • In Seaborn, the seaborn.ecdfplot function can be used.
  • In Plotly, the plotly.express.ecdf function can be used.
  • In Excel, an empirical CDF can be plotted.

See also

  • Càdlàg functions
  • Count data
  • Distribution fitting
  • Dvoretzky–Kiefer–Wolfowitz inequality
  • Empirical probability
  • Empirical process
  • Estimating quantiles from a sample
  • Frequency (statistics)
  • Kaplan–Meier estimator for censored processes
  • Survival function
  • Q–Q plot

References

  1. Dekking, Michel; et al. (2005). A Modern Introduction to Probability and Statistics: Understanding Why and How. London: Springer. p. 219. ISBN 978-1-85233-896-1. OCLC 262680588.
  2. van der Vaart, A.W. (1998). Asymptotic Statistics. Cambridge University Press. p. 265. ISBN 0-521-78450-6.
  3. PlanetMath. Archived May 9, 2013, at the Wayback Machine.
  4. Coles, S. (2001). An Introduction to Statistical Modeling of Extreme Values. Springer. p. 36, Definition 2.4. ISBN 978-1-4471-3675-0.
  5. Madsen, H.O.; Krenk, S.; Lind, S.C. (2006). Methods of Structural Safety. Dover Publications. pp. 148–149. ISBN 0486445976.
  6. van der Vaart, A.W. (1998). Asymptotic Statistics. Cambridge University Press. p. 266. ISBN 0-521-78450-6.
  7. van der Vaart, A.W. (1998). Asymptotic Statistics. Cambridge University Press. p. 268. ISBN 0-521-78450-6.

Further reading

  • Shorack, G.R.; Wellner, J.A. (1986). Empirical Processes with Applications to Statistics. New York: Wiley. ISBN 0-471-86725-X.

External links

  • Media related to Empirical distribution functions at Wikimedia Commons


Source: https://en.wikipedia.org/wiki/Empirical_distribution_function
