
linear algebra - chapter 6 - eigenvalue and eigenvector

prologue

What we will learn here is divided into two parts:

  1. eigenvalues and eigenvectors describe steady states and equilibria, because \(Ax=\lambda x\);
  2. change over time enters the picture through differential and difference equations.

1.eigenvector and eigenvalue

The easiest place to start is with an eigenvector.

key point: the direction of the product \(Ax\) stays the same as the direction of the eigenvector \(x\) itself.

Algebraic expression: \(Ax=\lambda x\)

1.1 obtaining eigenvalues and eigenvectors

\[Ax = \lambda x \]

\[(A-\lambda I)x = 0 \]

Eigenvectors fill out the nullspace of \(A-\lambda I\).

We also find that if there is a non-zero solution in \(N(A-\lambda I)\), then \(A-\lambda I\) must be singular, which is equivalent to \(\det(A-\lambda I)=0\); \(\det(A-\lambda I)\) is called the "characteristic polynomial."

After finding the eigenvalues, solving \((A-\lambda I)x=0\) gives us the eigenvectors.
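
A minimal numerical sketch of this procedure (the 2 by 2 matrix below is my own example, not from the notes): `numpy.linalg.eig` returns the eigenvalues together with the eigenvectors as columns.

```python
import numpy as np

# an illustrative 2x2 matrix (my own example)
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# eig solves det(A - lambda*I) = 0, then (A - lambda*I)x = 0
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)   # 5 and 2 (order may vary)

# check A x = lambda x for each eigenpair (eigenvectors are the columns)
for lam, x in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ x, lam * x)
```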

1.2 determinant and trace

For an n by n matrix \(A\), its eigenvalues are written as \(\lambda_1,\dots,\lambda_n\).

  1. \(\lambda_1+\dots+\lambda_n= \sum_{i=1}^{n}a_{ii}\), the sum of all the diagonal entries (called the trace).
  2. \(\lambda_1 \cdots \lambda_n=\det(A)\)

These two rules won't help much while we find the eigenvalues, but they are useful for checking the eigenvalues we obtain.
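
Both rules are easy to check numerically (a quick sketch, reusing the same illustrative matrix as above):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigenvalues = np.linalg.eigvals(A)

# rule 1: sum of eigenvalues = trace (sum of diagonal entries)
assert np.isclose(eigenvalues.sum(), np.trace(A))

# rule 2: product of eigenvalues = determinant
assert np.isclose(eigenvalues.prod(), np.linalg.det(A))
```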

1.3 imaginary eigenvalues
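
A standard illustration of imaginary eigenvalues (my own example, not from these notes) is the 90-degree rotation matrix: no real vector keeps its direction under a rotation, so the eigenvalues come out as the purely imaginary pair \(\pm i\).

```python
import numpy as np

# 90-degree rotation: no real vector keeps its direction
Q = np.array([[0.0, -1.0],
              [1.0,  0.0]])
print(np.linalg.eigvals(Q))   # the complex pair i and -i
```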

1.4 eigenvalue and eigenvector of AB and A+B

The eigenvalues of \(AB\) are the products \(\lambda_a \cdot \lambda_b\) if and only if A and B share all the same eigenvectors, which is equivalent to \(AB=BA\).

The same holds for \(A+B\): its eigenvalues are the sums \(\lambda_a + \lambda_b\) if and only if \(AB = BA\).
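
A small numerical check (the matrices below are my own, chosen so that \(AB=BA\)): both share the eigenvectors \((1,1)\) and \((1,-1)\), and the eigenvalues of \(AB\) and \(A+B\) come out as the products and sums of the individual eigenvalues.

```python
import numpy as np

# A and B commute, so they share the eigenvectors (1,1) and (1,-1)
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # eigenvalues 3 and 1
B = np.array([[3.0, 1.0],
              [1.0, 3.0]])   # eigenvalues 4 and 2

assert np.allclose(A @ B, B @ A)

print(np.sort(np.linalg.eigvals(A @ B)))   # [ 2. 12.] = 1*2 and 3*4
print(np.sort(np.linalg.eigvals(A + B)))   # [3. 7.]   = 1+2 and 3+4
```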

2.diagonalization

Given an n by n matrix A with n different eigenvalues and n independent eigenvectors, build the n by n matrix X whose columns are the eigenvectors, and the n by n diagonal matrix \(\Lambda\) whose diagonal entries are the eigenvalues, in the same order as their corresponding eigenvectors in X.

\(A = X\Lambda X^{-1}\)
-> we factor \(A\) into \(X\) and \(\Lambda\).
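
A numerical sketch of this factorization (reusing the earlier illustrative matrix; `numpy.linalg.eig` returns X with the eigenvectors as its columns):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, X = np.linalg.eig(A)   # columns of X are the eigenvectors
Lam = np.diag(eigenvalues)          # Lambda: eigenvalues on the diagonal, same order as X

# A = X Lambda X^{-1}
assert np.allclose(A, X @ Lam @ np.linalg.inv(X))
```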

2.1 proof of diagonalization

Key point: everything here comes from the definition of eigenvectors and eigenvalues, \(Ax=\lambda x\).

\(AX\): every column of X is an eigenvector of A, so \(AX\) can be written column by column as

\([Ax_1 \dots Ax_i \dots Ax_n] \to [\lambda_1 x_1 \dots \lambda_i x_i \dots \lambda_n x_n]\)

\(X\Lambda\): the diagonal entries of \(\Lambda\) are the eigenvalues \(\lambda_1,\dots,\lambda_n\) and the off-diagonal entries are all zero, so multiplying X on the right by \(\Lambda\) scales each column of X by its eigenvalue:
\([x_1 \dots x_i \dots x_n]\Lambda \to [\lambda_1 x_1 \dots \lambda_i x_i \dots \lambda_n x_n]\)

So \(AX = X\Lambda\), and multiplying on the right by \(X^{-1}\) gives \(A = X\Lambda X^{-1}\).

\(A\) can be diagonalized when it has n independent eigenvectors (which is guaranteed if its n eigenvalues are all different); in that case X is invertible.

conclusion:
Any matrix A
(1) without any repeated eigenvalues, i.e.
(2) with n different eigenvalues,
has (3) n independent eigenvectors, so
(4) the eigenvector matrix X is invertible,
and A can be diagonalized.

2.2 invertibility and diagonalizability

invertibility: depends on whether zero is an eigenvalue (A is invertible exactly when no eigenvalue equals zero).

diagonalizability: depends on whether there are n independent eigenvectors (guaranteed when the n eigenvalues are all different).

3. similar matrix

If A can be factored into \(X\Lambda X^{-1}\), then changing \(X\) while keeping \(\Lambda\) gives a whole family of matrices similar to A.

=> So these matrices are similar because they share the same n eigenvalues.

More generally, take two matrices B and C, where B is invertible and C may not be diagonalizable.

All the matrices \(A=BCB^{-1}\) are similar. They share the eigenvalues of \(C\).

Proof:
Assume \(Cx=\lambda x\); the new eigenvector is \(Bx\):

\(BCB^{-1} \cdot (Bx) = BC(B^{-1}B)x = BCx = B(\lambda x) = \lambda (Bx)\)
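
A quick numerical check of this argument (B and C below are my own examples): \(BCB^{-1}\) keeps the eigenvalues of C, and each eigenvector \(x\) of C turns into the eigenvector \(Bx\).

```python
import numpy as np

C = np.array([[4.0, 1.0],
              [2.0, 3.0]])
B = np.array([[1.0, 2.0],
              [0.0, 1.0]])      # any invertible B works

A = B @ C @ np.linalg.inv(B)    # A is similar to C

# similar matrices share eigenvalues ...
assert np.allclose(np.sort(np.linalg.eigvals(A)), np.sort(np.linalg.eigvals(C)))

# ... and if Cx = lambda x, then Bx is an eigenvector of A with the same lambda
lams, xs = np.linalg.eig(C)
for lam, x in zip(lams, xs.T):
    assert np.allclose(A @ (B @ x), lam * (B @ x))
```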

4.Fibonacci number

Difference equations and the powers of a matrix are covered here!

As we all know, \(F_{k+1}=F_k+F_{k-1}\) for \(k \ge 1\), with \(F_0=0\) and \(F_1=1\), so \(F_2=F_1+F_0=1+0=1\).

\(u_k=(F_{k+1},F_k)\)

\(A= \begin{bmatrix} 1 & 1\\ 1 & 0\\ \end{bmatrix} \implies\)

\[u_{k+1}=Au_k\tag{1} \]

\[u_k=A^k\cdot u_0 \]

Here we use the power of matrix A: \(A^k=X\Lambda^kX^{-1}\).
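
A minimal sketch of this recurrence in code, computing \(u_k=A^k u_0\) with \(u_0=(F_1,F_0)=(1,0)\) (the helper name `fib_matrix_power` is my own):

```python
import numpy as np

A = np.array([[1, 1],
              [1, 0]])

def fib_matrix_power(k):
    """Return F_k from u_k = A^k u_0, where u_k = (F_{k+1}, F_k)."""
    u0 = np.array([1, 0])                     # u_0 = (F_1, F_0)
    uk = np.linalg.matrix_power(A, k) @ u0    # u_k = (F_{k+1}, F_k)
    return int(uk[1])

print([fib_matrix_power(k) for k in range(8)])   # [0, 1, 1, 2, 3, 5, 8, 13]
```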

4.1 fast fibonacci

A convenient and fast approach arises here: represent \(u_0\) as a linear combination of the eigenvectors.

For A
\(\lambda_1=\frac{1+\sqrt{5}}{2},\lambda_2=\frac{1-\sqrt{5}}{2}\)
\(x_1=(\lambda_1,1),x_2=(\lambda_2,1)\)

\[\implies u_0=\frac{x_1-x_2}{\lambda_1-\lambda_2} \]

\[u_k =A^k \cdot u_0=\frac{\lambda_1^k\cdot x_1-\lambda_2^k\cdot x_2}{\lambda_1-\lambda_2} \]
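
A short check of this closed form (the second component of \(u_k\) gives \(F_k\); the helper name is mine, and `round` only cleans up floating-point error):

```python
import numpy as np

lam1 = (1 + np.sqrt(5)) / 2
lam2 = (1 - np.sqrt(5)) / 2

def fib_closed_form(k):
    """F_k is the second component of (lam1^k * x1 - lam2^k * x2) / (lam1 - lam2)."""
    return round((lam1**k - lam2**k) / (lam1 - lam2))

print([fib_closed_form(k) for k in range(8)])   # [0, 1, 1, 2, 3, 5, 8, 13]
```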

From: https://www.cnblogs.com/UQ-44636346/p/16942696.html
