Standard Errors

Standard errors are the standard deviations of estimated coefficients.


Description

In the classical OLS model, estimated coefficients are:

\mathbf{b} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}

Standard errors are the standard deviations of these coefficients.


Classical

Univariate

In the univariate case, standard errors are classically specified as:

Var(\hat{\beta}_1) = \frac{\sum_{i=1}^{n}(x_i-\bar{x})^2\,Var(\epsilon_i)}{\bigl(\sum_{i=1}^{n}(x_i-\bar{x})^2\bigr)^2}

Supposing the population Var(ε) is known and is constant across observations, this can be simplified:

Var(\hat{\beta}_1) = \frac{Var(\epsilon)}{\sum_{i=1}^{n}(x_i-\bar{x})^2}

Lastly, rewrite the denominator in terms of Var(X):

Var(\hat{\beta}_1) = \frac{Var(\epsilon)}{n\,Var(X)}

Var(ε) is unknown, so this term is estimated from the residuals:

\hat{\epsilon}_i = y_i - \hat{y}_i

\widehat{Var}(\epsilon) = \frac{1}{n-1-k}\sum_{i=1}^{n}\hat{\epsilon}_i^2

1 degree of freedom is lost to estimating the intercept, which constrains the residuals to sum to zero, i.e. \sum_{i=1}^{n}\hat{\epsilon}_i = 0

k degrees of freedom are lost to estimating the k slope coefficients, where k is necessarily 1 in the univariate case; each constrains the residuals to be orthogonal to one regressor, i.e. \sum_{i=1}^{n}x_{ij}\hat{\epsilon}_i = 0

This arrives at the estimate:

\widehat{Var}(\hat{\beta}_1) = \frac{\frac{1}{n-2}\sum_{i=1}^{n}\hat{\epsilon}_i^2}{\sum_{i=1}^{n}(x_i-\bar{x})^2}
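As an illustrative sketch (simulated data; NumPy assumed available), the univariate estimate can be computed directly:

```python
import numpy as np

# Simulated illustrative data: y = 2 + 3x + noise.
rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 2.0 + 3.0 * x + rng.normal(size=n)

# OLS slope and intercept.
x_bar, y_bar = x.mean(), y.mean()
b1 = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
b0 = y_bar - b1 * x_bar

# Residuals and the degrees-of-freedom-corrected variance estimate (n - 2).
resid = y - (b0 + b1 * x)
var_eps = np.sum(resid ** 2) / (n - 2)

# Classical standard error of the slope.
se_b1 = np.sqrt(var_eps / np.sum((x - x_bar) ** 2))
print(b1, se_b1)
```

With homoskedastic unit-variance noise, the printed standard error should be close to 1/sqrt(n · Var(X)), matching the population formula above.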

Multivariate

The classical multivariate specification is expressed in terms of (b-β), as:

Var(\mathbf{b} | \mathbf{X}) = E\Bigl[(\mathbf{b}-\mathbf{\beta})(\mathbf{b}-\mathbf{\beta})^T \Big| \mathbf{X}\Bigr]

Since \mathbf{b} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y} and \mathbf{y} = \mathbf{X}\mathbf{\beta} + \mathbf{u}, the term (\mathbf{b}-\mathbf{\beta}) is rewritten as (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{u}.

Var(\mathbf{b} | \mathbf{X}) = E\Bigl[\bigl((\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{u}\bigr)\bigl((\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{u}\bigr)^{T} \Big| \mathbf{X}\Bigr] = E\Bigl[(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{u}\mathbf{u}^T\mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1} \Big| \mathbf{X}\Bigr]

Var(\mathbf{b} | \mathbf{X}) = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T E\bigl[\mathbf{u}\mathbf{u}^T\big|\mathbf{X}\bigr]\mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1}

E[\mathbf{u}\mathbf{u}^T|\mathbf{X}] is not a practical matrix to work with. But if homoskedasticity and independence of errors are assumed, i.e.:

E\bigl[\mathbf{u}\mathbf{u}^T\big|\mathbf{X}\bigr] = Var(\epsilon)\mathbf{I}_n

...then this simplifies to:

Var(\mathbf{b} | \mathbf{X}) = Var(\epsilon)(\mathbf{X}^T\mathbf{X})^{-1}

Var(ε) is unknown, so the estimate is:

\widehat{Var}(\mathbf{b} | \mathbf{X}) = \frac{1}{n-1-k}\,\hat{\mathbf{u}}^T\hat{\mathbf{u}}\,(\mathbf{X}^T\mathbf{X})^{-1}
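A minimal numerical sketch of the classical multivariate estimate (simulated data; NumPy assumed; the design matrix carries an intercept column plus k = 2 regressors):

```python
import numpy as np

# Simulated illustrative data with an intercept column and k = 2 regressors.
rng = np.random.default_rng(1)
n, k = 200, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
beta = np.array([1.0, 2.0, -1.0])
y = X @ beta + rng.normal(size=n)

# b = (X^T X)^{-1} X^T y
XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y

# Residuals and degrees-of-freedom-corrected error variance (n - 1 - k).
u_hat = y - X @ b
var_eps = (u_hat @ u_hat) / (n - 1 - k)

# Classical covariance matrix; standard errors are its diagonal's square roots.
cov_b = var_eps * XtX_inv
se_b = np.sqrt(np.diag(cov_b))
print(se_b)
```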


Robust

In the presence of heteroskedasticity of errors, the above simplifications cannot apply. In the univariate case, estimate the original specification directly, substituting the squared residuals \hat{\epsilon}_i^2 for Var(\epsilon_i).

This is mostly interesting in the multivariate case, where E[\mathbf{u}\mathbf{u}^T|\mathbf{X}] is still not practical to work with. The simplifying assumptions made above, when incorrect, lead to biased estimates of the standard errors.

Eicker-Huber-White heteroskedasticity-consistent errors (HCE) assume that errors are still independent but allowed to vary in magnitude, i.e. \mathbf{\Sigma} = diag(\hat{\epsilon}_1^2, \ldots, \hat{\epsilon}_n^2). Importantly, this is not a function of X, so the standard errors can be estimated as:

\widehat{Var}(\mathbf{b} | \mathbf{X}) = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T \mathbf{\Sigma} \mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1}
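A sketch of the sandwich estimator on simulated heteroskedastic data (NumPy assumed; the error scale here is made to grow with |x| purely for illustration). Broadcasting is used in place of forming the full n × n diagonal matrix:

```python
import numpy as np

# Simulated heteroskedastic data: error scale grows with |x|.
rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(size=n) * (0.5 + np.abs(x))

# OLS fit and residuals.
XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y
u_hat = y - X @ b

# Sandwich: (X^T X)^{-1} X^T diag(u_hat^2) X (X^T X)^{-1}.
meat = (X * u_hat[:, None] ** 2).T @ X
cov_robust = XtX_inv @ meat @ XtX_inv
se_robust = np.sqrt(np.diag(cov_robust))

# Classical standard errors for comparison (n - 2 degrees of freedom).
var_eps = (u_hat @ u_hat) / (n - 2)
se_classical = np.sqrt(np.diag(var_eps * XtX_inv))
print(se_robust, se_classical)
```

Because the error variance rises with |x| in this simulation, the classical formula understates the slope's standard error while the robust estimate does not.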

Note however that heteroskedasticity-consistent errors are not always appropriate. To reiterate, for OLS, the classical coefficient estimators are not biased even given heteroskedasticity; if substantive conclusions change with the introduction of robust standard errors, there must be a specification error. Furthermore, heteroskedasticity-consistent errors are only asymptotically unbiased; they can be biased for small n.