Generalized eigenvectors

Generalized eigenvectors were not covered in the course material, but were discussed in a supplemental resource, a video from Serrano.Academy.
I have two questions:

  1. How do we find generalized eigenvectors?
  2. Is it always true that the image of a generalized eigenvector consists of a stretch (which eigenvalue corresponds to the stretch in high-dimensional space?) plus each of the ordinary eigenvectors (without any stretch applied to them)? If it’s not true, what is?

Generalized eigenvectors are connected to the Jordan decomposition of a matrix.

For question 1: a generalized eigenvector of an n\times n matrix A corresponding to an eigenvalue \lambda is a nonzero vector \mathbf{x} satisfying (A - \lambda I)^p \mathbf{x} = \mathbf{0} for some positive integer p.
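As a quick numerical sketch of this definition (the matrix below is a made-up defective example, not one from the video):

```python
import numpy as np

# Hypothetical defective matrix: eigenvalue 2 with algebraic multiplicity 2,
# but only one linearly independent ordinary eigenvector.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
lam = 2.0
I = np.eye(2)

v = np.array([1.0, 0.0])  # ordinary eigenvector: (A - 2I) v = 0
u = np.array([0.0, 1.0])  # generalized eigenvector with p = 2

print((A - lam * I) @ v)                           # zero vector
print((A - lam * I) @ u)                           # nonzero, so u is NOT an ordinary eigenvector
print(np.linalg.matrix_power(A - lam * I, 2) @ u)  # zero vector, so u IS a generalized eigenvector
```

So p measures how many times (A - \lambda I) must be applied before the vector is killed; p = 1 recovers the ordinary eigenvector definition.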

Regarding question 2: for a complex matrix, if an eigenvalue \lambda has algebraic multiplicity k, then there are k linearly independent generalized eigenvectors for \lambda. In particular, there exists a basis for \mathbb{C}^n consisting of generalized eigenvectors of A. So, for a complex matrix, while a basis of eigenvectors might not always exist, you can always find a basis of generalized eigenvectors.
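A small sketch of this fact using SymPy's `jordan_form`, on a hypothetical defective matrix (eigenvalue 3 with algebraic multiplicity 2 but only a 1-dimensional eigenspace):

```python
import sympy as sp

# Hypothetical defective matrix: characteristic polynomial (lambda - 3)^2,
# but the eigenspace for lambda = 3 is only 1-dimensional.
A = sp.Matrix([[2, 1],
               [-1, 4]])
print(A.eigenvects())  # a single eigenvector for lambda = 3

# jordan_form returns P and J with A = P J P^{-1}; the columns of P are
# generalized eigenvectors and form a basis of the whole space.
P, J = A.jordan_form()
print(P)  # invertible, so its columns are a basis
print(J)  # the Jordan matrix, with 3's on the diagonal
```

Even though A has too few ordinary eigenvectors to diagonalize, the columns of P still span the space.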


Thanks! Please check if I understand correctly:

  1. Here we solve for both the vector \mathbf{x} and some positive integer p. If that’s not correct, what does p mean?
  2. Assume we only deal with real matrices. Is it the case that a defective real matrix has only one eigenvalue, and that its integer value corresponds to the number of generalized eigenvectors? E.g. does \lambda = 123 mean there are 123 linearly independent generalized eigenvectors for a given defective real matrix? If so, that contradicts what I took from the supplemental video (with a timecode): Eigenvectors and Generalized Eigenspaces - YouTube. There we find \lambda = 2 and one basis eigenvector plus one generalized eigenvector, so I’d assume the number of generalized eigenvectors is \lambda - 1. Where am I going wrong?
    My better guess would be that the number of generalized eigenvectors is the matrix rank minus the number of basis eigenvectors. So if we have a 3x3 fully defective matrix and find one basis eigenvector for it, then there are also 2 generalized eigenvectors.
    However, I wonder how the generalized eigenvectors work in the transformed space: in the video we applied a stretch to a generalized eigenvector and added the basis eigenvector that we had (u \cdot \lambda + v), where u is a generalized eigenvector, \lambda is the eigenvalue, and v is the basis eigenvector. If we have an eigenvalue of 123, does that mean we add some number of unstretched eigenvectors to the generalized eigenvector stretched by the eigenvalue, to find where the generalized eigenvector points in the transformed space? (u \cdot \lambda + v_1 + v_2 + \ldots + v_n)

It’s not true that the eigenvalue \lambda must be an integer. What is true is that the “multiplicity” of the eigenvalue is a positive integer m. So the basis will consist of an eigenvector plus m-1 generalized eigenvectors. The elements of the basis form a chain of vectors u_1, u_2, \ldots, u_m that satisfy

Au_1=\lambda u_1 (meaning that u_1 is the eigenvector) (u_1 is stretched by \lambda)
Au_2=\lambda u_2 + u_1 (meaning u_2 is a generalized eigenvector) (u_2 is stretched by \lambda and sheared by u_1)
Au_3=\lambda u_3 + u_2 (meaning u_3 is a generalized eigenvector) (u_3 is stretched by \lambda and sheared by u_2)
\vdots
Au_m=\lambda u_m + u_{m-1} (meaning u_m is a generalized eigenvector) (u_m is stretched by \lambda and sheared by u_{m-1})
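These chain relations can be checked numerically. As a sketch (the matrices here are made-up examples): build A = P J P^{-1} from a Jordan block J and any invertible P, and the columns of P are exactly such a chain:

```python
import numpy as np

lam = 2.0
# Jordan block J(lam, 3): lam on the diagonal, 1's just above it
J = np.array([[lam, 1.0, 0.0],
              [0.0, lam, 1.0],
              [0.0, 0.0, lam]])
# Any invertible change-of-basis matrix P (chosen arbitrarily here)
P = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
A = P @ J @ np.linalg.inv(P)

u1, u2, u3 = P[:, 0], P[:, 1], P[:, 2]
# The columns of P satisfy the chain relations:
print(np.allclose(A @ u1, lam * u1))       # A u1 = lam u1
print(np.allclose(A @ u2, lam * u2 + u1))  # A u2 = lam u2 + u1
print(np.allclose(A @ u3, lam * u3 + u2))  # A u3 = lam u3 + u2
```

This also answers the "where does it point" question: each generalized eigenvector is stretched by \lambda and then sheared by only the *previous* vector in the chain, not by a sum of all eigenvectors.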

When you use the basis u_1, u_2, \ldots, u_m, the matrix of A is the Jordan matrix J(\lambda, m). For example, when m=4, the matrix J(\lambda, 4) is

\begin{pmatrix} \lambda & 1 & 0 & 0 \\ 0 & \lambda & 1 & 0 \\ 0 & 0 & \lambda & 1 \\ 0 & 0 & 0 & \lambda \end{pmatrix}

with \lambda's along the diagonal, 1’s above the diagonal, and 0’s everywhere else.

You can also have blocks of Jordan matrices. For example, for n=4, you could have

J(\lambda,1)\oplus J(\lambda,1) \oplus J(\lambda,1) \oplus J(\lambda,1), which is \begin{pmatrix} \lambda & 0 & 0 & 0 \\ 0 & \lambda & 0 & 0 \\ 0 & 0 & \lambda & 0 \\ 0 & 0 & 0 & \lambda \end{pmatrix}

or J(\lambda,1)\oplus J(\lambda,3), which is \begin{pmatrix} \lambda & 0 & 0 & 0 \\ 0 & \lambda & 1 & 0 \\ 0 & 0 & \lambda & 1 \\ 0 & 0 & 0 & \lambda \end{pmatrix}

or J(\lambda,2)\oplus J(\lambda,2), which is \begin{pmatrix} \lambda & 1 & 0 & 0 \\ 0 & \lambda & 0 & 0 \\ 0 & 0 & \lambda & 1 \\ 0 & 0 & 0 & \lambda \end{pmatrix}

or J(\lambda,4), which is \begin{pmatrix} \lambda & 1 & 0 & 0 \\ 0 & \lambda & 1 & 0 \\ 0 & 0 & \lambda & 1 \\ 0 & 0 & 0 & \lambda \end{pmatrix}
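These block structures can be assembled programmatically; a sketch in SymPy (the helper `J` below is my own definition, not a library function):

```python
import sympy as sp

lam = sp.symbols('lambda')

def J(lam, m):
    # Jordan block J(lam, m): lam on the diagonal, 1's on the superdiagonal
    return sp.Matrix(m, m, lambda i, j: lam if i == j else (1 if j == i + 1 else 0))

# The direct sums listed above, built with sp.diag (block-diagonal):
print(sp.diag(J(lam, 1), J(lam, 1), J(lam, 1), J(lam, 1)))
print(sp.diag(J(lam, 1), J(lam, 3)))
print(sp.diag(J(lam, 2), J(lam, 2)))
print(sp.diag(J(lam, 4)))
```

Each choice of block sizes gives a genuinely different matrix, even though all four share the single eigenvalue \lambda.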

Again all this is for complex matrices. The story is a bit more complicated for real matrices.


Woah, thank you very much! I guess I just don’t have to worry about it for now :sweat_smile: