- In this example, when the eigenvalue is 0, the eigenvector is [0,1]. So why does the lab say, “There is nothing wrong with this, 𝜆 can be equal to 0 ! In this case, this just means that anything that lies on the y-axis will be sent to zero, since it has no component in the x-direction.”?
- Also, the original eigenvectors [1,0] and [0,1] didn’t change after the transformation. Is the graph wrong, since the yellow line doesn’t cover [0,1]?
I would say that the graph is a bit hard to interpret: it’s not exactly clear from the graph what they are saying about T(v_2). It kind of looks like they are saying that T(v_2) = T(v_1) = (1, 0), but what they are really trying to say is:
T(v_2) = (0,0)
Maybe you could interpret the graph to say that, but it’s not so clear.
But the eigenvectors didn’t change: they are still [1,0] and [0,1]. So why is T(v_2) = (0,0) instead of [0,1]?
Because T(v_2) is the result of applying the transformation to v_2, which is [0,1], right?
What do you get if you do this multiplication:
A_projection \cdot v_2
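You can also check that multiplication numerically. A minimal NumPy sketch, assuming the lab’s projection-onto-the-x-axis matrix is [[1,0],[0,0]] (the name A_projection follows the lab’s naming):

```python
import numpy as np

# Assumed projection matrix from the lab: projects any vector onto the x-axis.
A_projection = np.array([[1, 0],
                         [0, 0]])

# v_2 is the eigenvector [0, 1], which lies entirely on the y-axis.
v_2 = np.array([0, 1])

result = A_projection @ v_2
print(result)  # [0 0] -- v_2 is sent to the zero vector
```

That product is exactly what the quoted lab text means: a vector with no x-component is sent to zero.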
At the bottom of the screenshot, it says “the eigenvectors of matrix A_projection are [[1,0],[0,1]]”. So I thought that meant T(v_2) is [0,1].
No, it means that v_2 is [0,1]. And that v_1 is [1,0].
T is a function: the “transformation” function expressed by the original “projection” matrix, meaning that:
T(v) = A_projection \cdot v
for any vector v. So if you apply that idea, you get:
T(v_1) = [1,0]
T(v_2) = [0,0]
Which is the point of the graph they show there.
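To tie it all together, here is a short NumPy sketch (again assuming the projection matrix [[1,0],[0,0]] from the lab) that applies T to both eigenvectors and recovers the eigenvalues and eigenvectors:

```python
import numpy as np

# Assumed projection matrix from the lab.
A_projection = np.array([[1, 0],
                         [0, 0]])

def T(v):
    # The transformation function: multiply by the projection matrix.
    return A_projection @ v

v_1 = np.array([1, 0])  # eigenvector for eigenvalue 1
v_2 = np.array([0, 1])  # eigenvector for eigenvalue 0

print(T(v_1))  # [1 0] -- unchanged, so eigenvalue 1
print(T(v_2))  # [0 0] -- sent to zero, so eigenvalue 0

# np.linalg.eig returns the eigenvectors as the *columns* of its second
# output, which is why the screenshot shows [[1,0],[0,1]]: those columns
# are v_1 and v_2, not the values of T(v_1) and T(v_2).
eigenvalues, eigenvectors = np.linalg.eig(A_projection)
print(eigenvalues)   # the eigenvalues 1 and 0 (order may vary)
print(eigenvectors)  # identity matrix: columns are v_1 and v_2
```

The key distinction: the eig output tells you what v_1 and v_2 *are*; T(v_1) and T(v_2) tell you where the transformation *sends* them.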