Can someone please help me understand why SVD is not one of the correct answers to this question in the practice quiz?
Drift detection techniques in unsupervised settings typically suffer from the curse of dimensionality. Which of the following techniques is an appropriate solution to mitigate the effects of this curse? (Check all that apply)
1. SVD (Singular Value Decomposition)
2. NMF (Non-Negative Matrix Factorization)
3. PCA (Principal Component Analysis)
4. K-means
Hello @chetna
I agree that, as you and @balaji.ambresh have pointed out, SVD can be used for dimensionality reduction.
I think the reason it has been omitted is that SVD rests on a somewhat different underlying principle: it focuses on finding a low-rank approximation of the original matrix, which can be useful for reducing noise and redundancy in the data, but it does not directly tackle the sparsity and distance-related challenges associated with high-dimensional data.
NMF and PCA, by contrast, are applied directly to capture the most relevant features in a lower-dimensional space.
That is why I think PCA and NMF are typically more appropriate and more commonly used for mitigating the effects of the curse of dimensionality in such scenarios.
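If it helps to see the difference in practice, here is a minimal sketch (my own illustration using scikit-learn and synthetic data, not the course's code) that reduces a 200-feature dataset with PCA and NMF before computing a crude drift signal in the reduced space:

```python
# Minimal sketch (assumes scikit-learn is installed): reduce high-dimensional
# data with PCA or NMF before applying any distance-based drift check.
import numpy as np
from sklearn.decomposition import PCA, NMF

rng = np.random.default_rng(0)
X_ref = rng.random((500, 200))        # reference window, 200 features
X_new = rng.random((500, 200)) + 0.1  # current window with a small shift

# PCA: project both windows onto components fitted on the reference data.
pca = PCA(n_components=10).fit(X_ref)
ref_pca, new_pca = pca.transform(X_ref), pca.transform(X_new)

# NMF: requires non-negative inputs, which holds for this synthetic data.
nmf = NMF(n_components=10, init="nndsvda", max_iter=500).fit(X_ref)
ref_nmf, new_nmf = nmf.transform(X_ref), nmf.transform(X_new)

# A crude drift signal: distance between the mean embeddings of the two windows.
print("PCA mean shift:", np.linalg.norm(ref_pca.mean(axis=0) - new_pca.mean(axis=0)))
print("NMF mean shift:", np.linalg.norm(ref_nmf.mean(axis=0) - new_nmf.mean(axis=0)))
```

In the reduced 10-dimensional space, distance-based comparisons like the one above are far less affected by the sparsity issues you would hit with all 200 raw features.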