I have read that filter values are LEARNED (they are learned weights), which makes sense. I have also seen in class a discussion of how specific filter values detect different features, such as verticals or horizontals. So my question is: are filter values specified, learned, or both?
The values are learned. They may (or may not) turn out to represent characteristics that the human visual system also detects, such as verticals or horizontals.
@avnish, performing convolution with specified filters has been around for a long time in several different domains. Horizontal and vertical edge detection, contrast enhancement, noise reduction, etc. all have well-defined objectives, and filters that perform them can be designed from their mathematical properties alone, essentially by thought experiment. Those cases are still around, and that type of hand-designed filter is still widely used for them. The problem comes when you want to perform more complicated pattern matches. How could you mathematically discriminate cat from non-cat, tumor from non-tumor, a vigorous plant from one that needs more irrigation, or specify how to locate the corners of a bounding box? I think it is cases like these, where filter parameters cannot be written down analytically but can be learned from large datasets, that make neural nets so interesting.
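To make the "specified filter" case concrete, here is a minimal sketch of applying a classic hand-designed filter, the Sobel vertical-edge detector, to a tiny synthetic image. The `conv2d_valid` helper and the image are illustrative choices, not anything from the course; the sliding-window operation is written as cross-correlation, which is what CNN layers actually compute.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Valid-mode 2-D cross-correlation (the operation used in CNN conv layers)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A classic specified (not learned) filter: Sobel vertical-edge detection.
sobel_vertical = np.array([[1, 0, -1],
                           [2, 0, -2],
                           [1, 0, -1]], dtype=float)

# Synthetic image: dark left half, bright right half -> one vertical edge.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

response = conv2d_valid(image, sobel_vertical)
# The response is zero over flat regions and large in magnitude only in the
# columns that straddle the edge -- the behavior was designed in by hand.
```

No training is involved here: the filter's behavior follows directly from the values we wrote down, which is exactly what stops working for "cat vs. non-cat"-style objectives.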
@TMosh and @AI_curious,
Thanks for both responses. I totally get that filter parameter values are learned; that is the foundation of deep learning. Based on the "known" filters, such as edge detectors, my initial impression was that deep learning somehow rediscovered those same known filters. Now I realize the learned filters have no direct correlation with those "known" filters: the learned values work in that model, and that is it. Bottom line: understanding known filters helps in understanding the convolution model, but learned filters should not be compared with known filters in terms of their general application.
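A toy sketch of the "learned" side, under assumptions I am adding myself: treat a 3x3 filter as a plain parameter vector and fit it by gradient descent on a least-squares objective. Here the training labels happen to be produced by a Sobel filter, so gradient descent rediscovers an edge detector; with a real classification loss (cat vs. non-cat), nothing would force the learned values to resemble any textbook filter, which is the point made above.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_patches(image, k=3):
    """All k x k patches of an image, each flattened into a row vector."""
    h, w = image.shape
    rows = []
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            rows.append(image[i:i + k, j:j + k].ravel())
    return np.array(rows)

# Hypothetical "teacher": the labels come from a Sobel vertical-edge filter,
# but the learner only ever sees (patch, response) pairs.
teacher = np.array([[1, 0, -1],
                    [2, 0, -2],
                    [1, 0, -1]], dtype=float).ravel()

# Training data: patches from random images, labeled by the teacher's responses.
X = np.vstack([extract_patches(rng.normal(size=(8, 8))) for _ in range(20)])
y = X @ teacher

# Learn a 3x3 filter from a random initialization by gradient descent
# on the mean squared error -- the filter is just a learnable weight vector.
w = rng.normal(size=9) * 0.01
lr = 0.01
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)
    w -= lr * grad
```

Because the objective here literally is "reproduce edge responses", `w` converges to the Sobel values; swap in a different objective and the learned filter would be whatever minimizes that loss instead.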