I come from a physics background and am very interested in the use of PCA/SVD not only for “dimensionality reduction”, but also for selecting the “most important feature vectors”, which in many cases might capture the essence of a model… just as eigenvectors do for the principal axes of rotation of a complex shape, for example.
Is that a different way of looking at PCA/SVD than is traditionally taken in ML?
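To make the analogy concrete, here is a minimal sketch (NumPy only; the rod-shaped point cloud is just an illustrative stand-in, not any particular dataset) showing that the principal components of a point cloud are the eigenvectors of its covariance matrix, playing the same role that the principal axes play for a rigid body’s inertia tensor:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "rod-like" point cloud: long along x, thin along y and z.
points = rng.normal(size=(1000, 3)) * np.array([5.0, 1.0, 0.2])

# Center the data and form the covariance matrix.
centered = points - points.mean(axis=0)
cov = centered.T @ centered / (len(points) - 1)

# Eigenvectors of the covariance matrix = principal components,
# analogous to the principal axes of an inertia tensor.
eigvals, eigvecs = np.linalg.eigh(cov)

# Sort from largest to smallest variance.
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

print("variance along each principal axis:", eigvals)
print("first principal axis (should point along x):", eigvecs[:, 0])
```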
Hey @Brad_Banko,
In my opinion, “selecting the most important features” is pretty much analogous to “reducing the dimensionality of the dataset”, unless, while reducing the dimensionality, you opt for the less important features, which I am sure you won’t do.
However, if you are hinting at the difference that “selecting the most important features” only chooses from the existing features, whereas “dimensionality reduction” may also synthesize new features from the existing ones, then I have always had a soft spot for synthesizing new features. It may well be that selecting the 6 most important features out of 10 original features captures only 85-90% of the variance, while 6 synthesized features may capture up to 95% of the variance.
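As a rough sketch of that point (using NumPy and scikit-learn; the 10-feature toy dataset and the resulting percentages are illustrative assumptions, not numbers from a real dataset):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Toy dataset: 10 correlated features built from 6 underlying factors.
latent = rng.normal(size=(500, 6))
mixing = rng.normal(size=(6, 10))
X = latent @ mixing + 0.1 * rng.normal(size=(500, 10))

total_var = X.var(axis=0).sum()

# Option 1: keep the 6 original features with the largest variance.
top6 = np.argsort(X.var(axis=0))[::-1][:6]
kept_var = X[:, top6].var(axis=0).sum()
print(f"6 original features keep {kept_var / total_var:.1%} of the variance")

# Option 2: keep 6 synthesized features (principal components).
pca = PCA(n_components=6).fit(X)
print(f"6 principal components keep {pca.explained_variance_ratio_.sum():.1%} of the variance")
```

The exact percentages depend on the data, but with correlated features the synthesized components generally retain noticeably more of the variance than any subset of the original features.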
As for your other question:
I am not really sure what answer to give you here, so let me mention some resources which may or may not answer it:
Let me know if these help.
Cheers,
Elemento
Hello @Brad_Banko, Physics here. 
I think it is not very different, because finding the “most important feature vectors” is the necessary logical step to get to “dimensionality reduction”; it is just a matter of whether you state it explicitly. In ML, I think people usually don’t, because “dimensionality reduction” is the actionable part, but if they need to explain PCA, they will all talk about the “most important feature vectors”, i.e. the eigenvectors.
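A minimal sketch of that two-step view (assuming NumPy and scikit-learn; the toy data is arbitrary): the eigenvector step comes first, and the “dimensionality reduction” is just a projection onto those eigenvectors, which is what sklearn’s PCA wraps up in one call.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))

# Step 1: "most important feature vectors" = top eigenvectors of the covariance.
centered = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
top2 = eigvecs[:, np.argsort(eigvals)[::-1][:2]]   # shape (5, 2)

# Step 2: "dimensionality reduction" = project the data onto those vectors.
X_reduced_manual = centered @ top2                  # shape (200, 2)

# sklearn's PCA performs both steps; up to a sign flip per component,
# its projection agrees with the manual eigenvector route.
X_reduced_pca = PCA(n_components=2).fit_transform(X)
print(np.allclose(np.abs(X_reduced_manual), np.abs(X_reduced_pca)))
```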
Raymond
Thank you. Maybe it is mainly a matter of semantics after all. Interesting.
You are welcome @Brad_Banko