Mahalanobis Distance
Mahalanobis distance is a Euclidean distance that is transformed through a change of basis to normalize variance.
Description
Mahalanobis distance is equivalent to Euclidean distance with a change in basis.
The squared Euclidean distance is commonly formulated as (x-y)^T(x-y) (or, if the reference point is the origin, just x^T x), but an equivalent formulation is x^T I^T I x, where I is the identity matrix.
A change of basis can be effected by swapping the identity matrix for some other matrix A: x^T A^T A x.
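The equivalence above can be sketched numerically. This is a minimal illustration in NumPy; the vector x and the matrix A are arbitrary example values, not from the original text.

```python
import numpy as np

x = np.array([3.0, 4.0])

# Euclidean distance from the origin: sqrt(x^T I^T I x) = sqrt(x^T x)
I = np.eye(2)
d_euclid = np.sqrt(x @ I.T @ I @ x)

# Swap the identity for some other matrix A to change the basis.
A = np.array([[2.0, 0.0],
              [0.0, 0.5]])
d_transformed = np.sqrt(x @ A.T @ A @ x)

print(d_euclid)       # 5.0
print(d_transformed)  # sqrt(40): the same point measured in the new basis
```

Note that sqrt(x^T A^T A x) is just the Euclidean norm of A x, i.e., the distance after mapping the point into the new basis.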
Graphing
In a two-dimensional graph, plotting the points with a Euclidean distance of 1 around the origin traces the unit circle. The change of basis described by A transforms the circle into an ellipse (an ellipsoid in higher dimensions).
Note that if A is diagonal, the ellipse will be axis-aligned (i.e., appear to be stretched along the x or y axes).
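A quick sketch of this picture, using a hypothetical diagonal A: the points whose transformed distance is 1 are exactly the unit circle pulled back through A, which stretches into an axis-aligned ellipse.

```python
import numpy as np

# Points at Euclidean distance 1 from the origin: the unit circle.
theta = np.linspace(0.0, 2.0 * np.pi, 100)
circle = np.stack([np.cos(theta), np.sin(theta)])  # shape (2, 100)

# With distance sqrt(x^T A^T A x) = ||Ax||, the distance-1 points
# are x = A^-1 u for each unit vector u, i.e., an ellipse.
A = np.diag([2.0, 0.5])
ellipse = np.linalg.inv(A) @ circle

# Every point on the ellipse still has transformed distance 1...
d = np.linalg.norm(A @ ellipse, axis=0)
print(np.allclose(d, 1.0))  # True

# ...but it is axis-aligned: semi-axis 1/2 along x, 2 along y.
print(ellipse[0].max(), ellipse[1].max())
```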
Usage
Mahalanobis distances are appropriate for calculating variance-normalized distances, as in test statistics. The change of basis is established by the covariance matrix, notated Σ; more specifically, by its matrix square root, the standard deviation matrix (√Σ = Σ^(1/2)).
The variance-normalized distance from a distribution to an estimate in a single dimension can be calculated with, e.g., the Z-statistic: (x̂-μ_X)/σ_X. (Henceforward measurements are mean-centered: x = x̂-μ_X.) This can be repeated for any number of dimensions. If variance is unit and independent across dimensions, then the joint distance from the multivariate distribution is the Euclidean norm of the stacked, normalized coordinates: for two dimensions, √(x² + y²) = √(z^T z), where z is the vector (x, y). But variances are not unit and dimensions do correlate, as described by the covariance matrix. The change of basis must 'undo' this scaling and correlation, ergo the inverse of the standard deviation matrix (√(Σ^(-1)) = Σ^(-1/2)) should be used for A.
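The per-dimension normalization above can be sketched as follows; the distribution parameters and the measurement are hypothetical values chosen for illustration.

```python
import numpy as np

# Hypothetical distribution parameters (illustrative values).
mu = np.array([10.0, -2.0])       # per-dimension means
sigma = np.array([3.0, 0.5])      # per-dimension standard deviations

x_hat = np.array([13.0, -1.0])    # a measurement

# Per-dimension Z-statistics: (x_hat - mu) / sigma
z = (x_hat - mu) / sigma

# Assuming unit, independent variance after normalization, the joint
# distance is the Euclidean norm of the stacked z-scores: sqrt(z^T z).
d = np.sqrt(z @ z)
print(z)  # [1. 2.]
print(d)  # sqrt(5)
```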
Note that a covariance matrix is...
- always square symmetric, so Σ^T = Σ
- always positive semi-definite, so...
  - Σ^(1/2) can always be evaluated
  - the determinant is bounded by |Σ| >= 0, so either |Σ| = 0 or Σ is invertible
After substituting A = Σ^(-1/2), using the symmetric rule (Σ^(-1/2) is itself symmetric) and simplifying exponents through the product rule, A^T A = (Σ^(-1/2))^T Σ^(-1/2) = Σ^(-1/2) Σ^(-1/2) = Σ^(-1). In summary, the variance-normalized distance is calculated like: √((x-y)^T Σ^(-1) (x-y))
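The full calculation can be sketched end to end in NumPy. The covariance matrix and the two points are hypothetical example values; the square root Σ^(-1/2) is computed via an eigendecomposition, which is always possible because Σ is positive semi-definite.

```python
import numpy as np

# A hypothetical covariance matrix (symmetric positive definite).
Sigma = np.array([[4.0, 1.2],
                  [1.2, 1.0]])

# A = Sigma^(-1/2) via eigendecomposition of the symmetric Sigma.
w, V = np.linalg.eigh(Sigma)
A = V @ np.diag(w ** -0.5) @ V.T

# By symmetry, A^T A = Sigma^(-1/2) Sigma^(-1/2) = Sigma^(-1).
print(np.allclose(A.T @ A, np.linalg.inv(Sigma)))  # True

# Mahalanobis distance between two points.
x = np.array([1.0, 2.0])
y = np.array([3.0, 1.0])
diff = x - y
d = np.sqrt(diff @ np.linalg.inv(Sigma) @ diff)

# Equivalently: the Euclidean distance after the change of basis A.
d_basis = np.linalg.norm(A @ diff)
print(np.isclose(d, d_basis))  # True
```

SciPy users could compare against `scipy.spatial.distance.mahalanobis(x, y, VI)`, which takes the inverse covariance matrix as its third argument.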
