To add comments or start new threads please go to the full version of: Eigenvalue Decomposition

zach
Hello all. This is my first post here. Hope someone can help. Thank you guys in advance.

Here is the question:

I have an n-by-n matrix A whose eigenvalues are all real and distinct, and the matrix is positive semi-definite. It has linearly independent eigenvectors V1...Vn. Suppose I already know some of them, say V1...V3. How can I get a basis for span{V4...Vn} without calculating V4...Vn?

Sincerely,
Zach
rpenner
Will this work? (It's math, so you usually have to prove each step is valid and the result is as desired)

A. Start with any orthonormal basis
B. For each eigenvector, remove from each member of the basis the component parallel to that eigenvector:
B_k := B_k - [ (V_j dot B_k) / norm(V_j)^2 ] V_j (unless B_k was already 0)
C. Replace the basis elements with the smallest norms by the eigenvectors
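In case it helps to see the outline in code, here is a minimal NumPy sketch of steps A-C. The QR orthonormalization of the known eigenvectors is an addition not in the outline above (sequentially removing components along non-orthogonal vectors is not an exact projection); the function name and structure are purely illustrative.

```python
import numpy as np

def complement_basis(V, n):
    """Sketch of steps A-C. V is n-by-m; its columns are the known eigenvectors."""
    m = V.shape[1]
    # The known eigenvectors need not be orthogonal to each other, so
    # orthonormalize them first (QR); this makes the removal in step B
    # an exact projection onto the orthogonal complement of span{V1..Vm}.
    Q, _ = np.linalg.qr(V)
    B = np.eye(n)                  # step A: an orthonormal basis
    B = B - Q @ (Q.T @ B)          # step B: strip the span{V1..Vm} part
    norms = np.linalg.norm(B, axis=0)
    # step C: drop the m most-depleted vectors; what survives, together
    # with V1..Vm, gives n linearly independent vectors.
    return B[:, np.argsort(norms)[m:]]
```

As the caveat above says, each step still has to be proved valid: the surviving columns span a complement of span{V1..Vm}, but whether that complement equals span{V_{m+1}..V_n} is exactly what needs proof (for non-symmetric A it generally does not).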
zach
QUOTE (rpenner+Aug 9 2011, 12:25 AM)
Will this work? (It's math, so you usually have to prove each step is valid and the result is as desired)

A. Start with any orthonormal basis
B. For each eigenvector, remove from each member of the basis the component parallel to that eigenvector:
B_k := B_k - [ (V_j dot B_k) / norm(V_j)^2 ] V_j (unless B_k was already 0)
C. Replace the basis elements with the smallest norms by the eigenvectors

Thank you for the reply.

It looks pretty similar to the Gram-Schmidt process. Could you please clarify the ranges of your k and j? I can't quite follow your idea.
rpenner
j ranges from 1 to m over your eigenvectors.
k ranges from 1 to n over your work-in-progress basis.

Then you remove the m basis vectors with the least remaining norm, and replace them with the eigenvectors, so you should have n linearly independent vectors.
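The ranges above can be written out as an explicit double loop (a sketch only; the function name is illustrative, and one-at-a-time removal like this is exact only when the V_j are mutually orthogonal, otherwise orthonormalize them first):

```python
import numpy as np

def deflate(B, V):
    """Step B as an explicit double loop: j = 1..m over the eigenvectors,
    k = 1..n over the work-in-progress basis (0-based in code)."""
    B = B.copy()
    for j in range(V.shape[1]):
        v = V[:, j]
        for k in range(B.shape[1]):
            b = B[:, k]
            if np.linalg.norm(b) > 0:          # "unless B_k was already 0"
                B[:, k] = b - (v @ b) / (v @ v) * v
    return B
```

After the loop, the m columns of B with the smallest norms are the ones replaced by the eigenvectors in step C.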
zach
QUOTE (rpenner+Aug 9 2011, 05:54 AM)
j ranges from 1 to m over your eigenvectors.
k ranges from 1 to n over your work-in-progress basis

Then you remove the m basis vectors with the least remaining norm, and replace them with the eigenvectors, so you should have n linearly independent vectors.

Thank you. But I am still not following. The eigenvectors are not necessarily orthogonal, so what do you mean by "remove the component parallel to the eigenvector"?

Could you please give a working example? Let's say

A=[1 1 -1
0 2 1
0 0 3]

whose eigenvalues and eigenvectors are:
lambda1=1, V1=[1 0 0]'
lambda2=2, V2=[1 1 0]'
lambda3=3, V3=[0 1 1]'

If I only know lambda1 and V1, how can I get a basis for span{V2,V3} without calculating V2 and V3? Could you please illustrate your process using this numerical example?

Thanks again and I appreciate your help!
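For what it's worth, here is a sketch of what rpenner's steps produce on this example when only lambda1 and V1 are known (NumPy, variable names illustrative):

```python
import numpy as np

A = np.array([[1., 1., -1.],
              [0., 2.,  1.],
              [0., 0.,  3.]])
V1 = np.array([[1.], [0.], [0.]])
assert np.allclose(A @ V1, 1.0 * V1)   # sanity check: A V1 = lambda1 V1

# Step A: the standard basis.  Step B: remove the V1-component from each
# basis vector.  With V1 = e1 this zeroes out e1 and leaves e2, e3 alone.
Q, _ = np.linalg.qr(V1)
B = np.eye(3) - Q @ (Q.T @ np.eye(3))

# Step C: drop the one vector (m = 1) with the smallest remaining norm.
norms = np.linalg.norm(B, axis=0)
basis = B[:, np.argsort(norms)[1:]]    # columns e2 and e3
```

The result spans the orthogonal complement of span{V1}, here span{e2, e3}. Note that this is not the same as span{V2, V3} (e.g. e2 is not a combination of V2 and V3), because the eigenvectors are not orthogonal; this is exactly the concern raised above, so the procedure as stated would need modification or a proof for this case.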