Eigenvalues and Eigenvectors

Matrices are characterized by their eigenvalues and eigenvectors.


Introduction

The square matrix A is a mapping of vectors. There are certain input vectors that map to a scalar multiple of themselves, i.e. A: a -> λa.

Another way to think of this is that there are certain input vectors which maintain their direction through a transformation. For vectors in these directions, the transformation only involves some stretching factor λ.

These certain input vectors are called eigenvectors. Each eigenvector has a corresponding scaling factor λ, called an eigenvalue.

Consider the matrix representing rotation about the y-axis:

┌                 ┐
│  cos(θ) 0 sin(θ)│
│       0 1      0│
│ -sin(θ) 0 cos(θ)│
└                 ┘

Given this as A, the vector [0 1 0] (and any scalar multiple of it) maps to itself. The members of this infinite set of vectors are all eigenvectors of A. (And because there is no scaling, their corresponding eigenvalues λ are all 1.)
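A quick numerical check of this claim (a Python sketch rather than the Julia used elsewhere on this page; the matvec helper is ad hoc):

```python
import math

# Multiply a 3x3 matrix by a 3-vector (ad hoc helper).
def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(3)) for i in range(3)]

theta = math.radians(30)   # any angle behaves the same
A = [[ math.cos(theta), 0, math.sin(theta)],
     [ 0,               1, 0              ],
     [-math.sin(theta), 0, math.cos(theta)]]

# Any vector on the y axis maps to itself: an eigenvector with λ = 1.
assert matvec(A, [0, 5, 0]) == [0.0, 5.0, 0.0]
```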

Eigenvectors and eigenvalues often include complex numbers.

julia> using LinearAlgebra

julia> A = [0 0 1; 0 1 0; -1 0 0]
3×3 Matrix{Int64}:
  0  0  1
  0  1  0
 -1  0  0

julia> eigvals(A)
3-element Vector{ComplexF64}:
 0.0 - 1.0im
 0.0 + 1.0im
 1.0 + 0.0im

julia> eigvecs(A)
3×3 Matrix{ComplexF64}:
 0.707107-0.0im       0.707107+0.0im       0.0+0.0im
      0.0-0.0im            0.0+0.0im       1.0+0.0im
      0.0-0.707107im       0.0+0.707107im  0.0+0.0im

Note that in the above, eigenvectors are returned as an eigenvector matrix. This is usually notated as S.


Description

Eigenvalues and eigenvectors are the pairs of λ and x that satisfy Ax = λx; equivalently, the eigenvalues λ are the solutions of |A - λI| = 0.

Given a matrix of size n x n, either there are n linearly independent eigenvectors (each with a corresponding eigenvalue), or the matrix is defective.

Properties

Only square matrices have eigenvectors.

Adding nI to A does not change its eigenvectors and adds n to the eigenvalues.

The trace is the sum of the eigenvalues. The determinant is the product of the eigenvalues.
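A quick check of both properties on a 2 x 2 example (a Python sketch; the eigenvalues are worked out by hand, not computed):

```python
# For A = [[2, 1], [1, 2]], the characteristic polynomial
# (2-λ)(2-λ) - 1 = 0 gives eigenvalues 1 and 3.
A = [[2, 1], [1, 2]]
trace = A[0][0] + A[1][1]                     # 4
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]       # 3
eigenvalues = [1, 3]

assert trace == sum(eigenvalues)              # 4 == 1 + 3
assert det == eigenvalues[0] * eigenvalues[1] # 3 == 1 * 3
```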

A diagonal matrix is a trivial case because...

  • the columns are its eigenvectors
  • the numbers in the diagonal are its eigenvalues

This also means that any diagonalizable matrix of size n x n must have n linearly independent eigenvectors, and cannot be defective.


Calculation

Conventional Method

Because eigenvalues are characterized by |A - λI| = 0, they can be solved for by:

  • subtracting λ from each value on the diagonal
  • formulating the determinant for this difference
  • setting the determinant equal to 0
  • solving for λ

In a simple 2 x 2 matrix, this looks like:

|    A   -  λI   | = 0

│ ┌    ┐  ┌    ┐ │
│ │ a b│ -│ λ 0│ │ = 0
│ │ c d│  │ 0 λ│ │
│ └    ┘  └    ┘ │

│ ┌        ┐ │
│ │ a-λ   b│ │ = 0
│ │   c d-λ│ │
│ └        ┘ │

(a-λ)(d-λ) - bc = 0

This leads to the characteristic polynomial of A; solving for the roots, as through either factorization or the quadratic formula, gives the eigenvalues.
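The steps above can be sketched in Python (function name illustrative; cmath.sqrt keeps complex eigenvalues working):

```python
import cmath

# For a 2x2 matrix [[a, b], [c, d]], expanding |A - λI| gives
# (a-λ)(d-λ) - bc = λ·λ - (a+d)·λ + (ad - bc) = 0,
# a quadratic solved by the quadratic formula.
def eigvals_2x2(a, b, c, d):
    tr = a + d                          # -1 × coefficient of λ
    det = a*d - b*c                     # constant term
    disc = cmath.sqrt(tr*tr - 4*det)    # complex-safe square root
    return (tr - disc) / 2, (tr + disc) / 2

# [[2, 1], [1, 2]] has eigenvalues 1 and 3; a 2D rotation matrix
# [[0, -1], [1, 0]] has the complex pair -i and i.
assert eigvals_2x2(2, 1, 1, 2) == (1, 3)
assert eigvals_2x2(0, -1, 1, 0) == (-1j, 1j)
```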

Because eigenvectors are characterized by...

Ax = λx

Ax - λx = 0

(A - λI)x = 0

...eigenvectors can be solved for given eigenvalues using substitution.

In a simple 2 x 2 matrix, this looks like:

(   A    -  λI   )   x  =   0

/ ┌    ┐  ┌    ┐ \ ┌  ┐   ┌  ┐
│ │ a b│ -│ λ 0│ │ │ u│ = │ 0│
│ │ c d│  │ 0 λ│ │ │ v│   │ 0│
\ └    ┘  └    ┘ / └  ┘   └  ┘

┌        ┐ ┌  ┐   ┌  ┐
│ a-λ   b│ │ u│ = │ 0│
│   c d-λ│ │ v│   │ 0│
└        ┘ └  ┘   └  ┘

(a-λ)u + (b)v = 0
(c)u + (d-λ)v = 0
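The substitution step can be sketched as (a hypothetical helper for the 2 x 2 case; any scalar multiple of the returned vector is equally valid):

```python
# Back-substitute a known eigenvalue λ into (A - λI)x = 0.
# From the first row, (a-λ)u + b·v = 0, so (u, v) = (b, λ-a)
# is a solution whenever b != 0.
def eigvec_2x2(a, b, c, d, lam):
    if b != 0:
        return (b, lam - a)
    if c != 0:
        return (lam - d, c)   # same idea using the second row
    return (1, 0)             # b = c = 0: diagonal, works when λ == a

# [[2, 1], [1, 2]] with λ = 3 gives the eigenvector (1, 1);
# check A x == λ x directly.
u, v = eigvec_2x2(2, 1, 1, 2, 3)
assert (2*u + 1*v, 1*u + 2*v) == (3*u, 3*v)
```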

Shortcut

This method is only applicable to 2 x 2 matrices.

  • Because the trace is equal to the sum of eigenvalues, it follows that 1/2 of the trace is also the mean of the eigenvalues.
  • Because the characteristic polynomial must be quadratic, the eigenvalues must be evenly spaced from the center (i.e., the mean). Given the above mean as m and an unknown distance as d, the eigenvalues must be (m-d) and (m+d).
  • By definition of the determinant, |A| = (m-d)(m+d) = m² - d². This can be solved for d.
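The three steps can be sketched in Python (function name illustrative; note the parameter d is the matrix entry, dist is the distance from the mean):

```python
import cmath

# 2x2 shortcut: mean m = trace/2; from |A| = m² - dist²,
# dist = sqrt(m² - |A|); the eigenvalues are m ± dist.
def eigvals_shortcut(a, b, c, d):
    m = (a + d) / 2
    det = a*d - b*c
    dist = cmath.sqrt(m*m - det)   # complex-safe
    return m - dist, m + dist

# [[2, 1], [1, 2]]: m = 2, |A| = 3, dist = 1 → eigenvalues 1 and 3.
assert eigvals_shortcut(2, 1, 1, 2) == (1, 3)
```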

Altogether,

shortcut.svg


CategoryRicottone

LinearAlgebra/EigenvaluesAndEigenvectors (last edited 2026-02-03 23:52:07 by DominicRicottone)