Convolutions Technique


Are convolution operations considered a refactor analysis technique?

I have no idea. Please define “refactor analysis”. If I google that, all I get are articles about “code refactoring”, which has nothing to do with convolutions.

Yep, I misspelled it. It is about dimensionality reduction. I am not sure if we can consider convolution operations to be that in DL.

(updated link)

Dimensionality reduction is a different thing altogether. The form of that which I am familiar with is PCA (Principal Component Analysis), which Prof Ng covers in his original Stanford Machine Learning course.
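To make the distinction concrete, here is a minimal sketch of PCA-style dimensionality reduction, using made-up random data (the sizes and variable names here are just illustrative assumptions, not from the course):

```python
import numpy as np

# Hypothetical data: 100 samples, each with 5 features.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))

# Center the data, then find the principal directions via SVD.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Project onto the top 2 principal components: 5 features -> 2 features.
X_reduced = Xc @ Vt[:2].T
print(X_reduced.shape)  # (100, 2)
```

Note that this reduces the number of *features* per sample, which is what "dimensionality reduction" usually means, as opposed to changing channel counts in a convolutional layer.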

I did a quick scan of the article that you linked and it does mention PCA and some other things like the KMO test, but I don’t see any mention of anything resembling convolutions.

Updated the link above. Unfortunately, I had many misconceptions about ML and DL topics that I acquired through my online searches and that were corrected by Prof Ng's courses.

Please feel free to disregard if you feel that this is outside of the scope of this course.

Yes, you have to be careful when you just search for articles on the internet. There are a lot of people writing blog posts and articles on Medium who don’t really have any particular expertise.

The second article that you found is completely different from the first one. The second one is showing how 1 x 1 convolutions can be used to reduce (or expand) the number of output channels. That is also a different thing from what is meant by “dimensionality reduction” in general. It is just a very specific type of convolution.
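To illustrate what that article is describing, here is a small sketch (with assumed shapes, not taken from the article) showing that a 1 x 1 convolution is just a linear map over the channel axis, applied independently at every spatial position:

```python
import numpy as np

# Hypothetical input: height 4, width 4, 8 input channels.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4, 8))   # (H, W, C_in)

# A bank of three 1x1 filters: each filter has one weight per input channel,
# so the whole bank is just a (C_in, C_out) matrix.
w = rng.standard_normal((8, 3))      # (C_in, C_out)

# Applying the 1x1 convolution reduces 8 channels to 3 at every pixel,
# leaving the spatial dimensions unchanged.
y = x @ w                            # (H, W, C_out)
print(y.shape)                       # (4, 4, 3)
```

So the "reduction" here is in the number of channels, while the spatial height and width stay the same, which is why it is not the same thing as dimensionality reduction in the PCA sense.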

If you’d like to know more about 1 x 1 (pointwise) convolutions, I suggest you stay tuned for DLS Course 4. Prof Ng covers pointwise and depthwise convolutions in Week 2 of C4.

If you want to know more about dimensionality reduction, please take a look at Prof Ng’s material on PCA from Stanford Machine Learning. You can probably find it on YouTube.