Why are we learning feature importance again when we've already covered it?

Hi! The whole lab C3_W5_Lab_2_Permutation_Importance was about feature importance, but we already learned and used it in the lab C2_W2_Lab_3_Feature_Selection. So what was the point of this? Did we learn something new about it? Maybe that we can find the best features across several models at once?

In the ungraded lab C2_W2_Lab_3_Feature_Selection, the feature selection methods use statistical tests or greedy approaches to find a subset of features, based on a scoring strategy applied to an estimator. In this approach, the data is not modified when it is fed into the estimator.
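
To make that concrete, here is a minimal sketch (not taken from the lab; it assumes scikit-learn and uses the built-in breast cancer dataset purely for illustration) that contrasts a statistical test with a greedy wrapper method:

```python
# A minimal sketch contrasting a statistical test with a greedy wrapper
# method, both using scikit-learn's public API. Dataset choice is arbitrary.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif, RFE
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)

# Statistical test: score each feature independently with ANOVA F-values.
univariate = SelectKBest(score_func=f_classif, k=5).fit(X, y)

# Greedy approach: recursively drop the weakest feature according to the
# estimator's coefficients until 5 remain.
greedy = RFE(LogisticRegression(max_iter=5000), n_features_to_select=5).fit(X, y)

print("Univariate picks:", univariate.get_support(indices=True))
print("RFE picks:       ", greedy.get_support(indices=True))
```

Note that in both cases the feature values themselves are left untouched; only a subset of columns is selected.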

Permutation importance, by contrast, measures how sensitive an estimator is to the exact values of a feature (as described here): each feature's values are shuffled, and the resulting drop in score indicates that feature's importance. In this approach, the data is modified before it is fed into the estimator. To learn more, do visit the scikit-learn link provided in the linked topic.
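
As a rough sketch (again assuming scikit-learn; the model and dataset here are arbitrary choices, not the lab's):

```python
# A minimal sketch of permutation importance via
# sklearn.inspection.permutation_importance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Each feature is shuffled n_repeats times on the validation data; the drop
# in score relative to the unshuffled baseline is that feature's importance.
result = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=0)

# Print the five most important features with their mean drop in score.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {idx}: {result.importances_mean[idx]:.4f} "
          f"+/- {result.importances_std[idx]:.4f}")
```

The key difference from the selection methods above is that the estimator is already trained and the data itself gets perturbed to probe the model's reliance on each feature.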

As far as the different feature selection techniques are concerned, which method to use depends on the size and type of the underlying dataset (e.g., permutation importance is applicable only to tabular datasets) and, ultimately, on the performance on the validation set.