Hi!
A convolution requires one or more filters. Each filter has a pattern. When we use the Conv2D function and specify, say, 32 filters, how are the filter patterns defined by the Conv2D function?
I hope this question makes sense
Thanks,
Juan
Have you solved the 1st assignment in week 1 course 4?
Hi! Yes, I did. I will re-visit it right now. Thank you!
Hi @balaji.ambresh, I've checked the 1st assignment in week 1 course 4. The "mechanics" of the convolution process are clear, but I don't see the answer to my question. Maybe I overlooked something again.
In a Conv2D we define x number of filters. A given filter can be like so:
1-0-1
1-0-1
1-0-1
Other filter can be like so:
1-1-1
0-0-0
1-1-1
And so on. So: If we tell Conv2D to use, say, 32 filters, what patterns of filters is the function using?
Not sure if this is clear and makes sense.
Thanks,
Juan
Filter values are randomly initialized and then updated based on training data.
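For a concrete picture of what that means, here is a minimal sketch (assuming tf.keras, as used in this course; the image shape is just illustrative). A freshly built Conv2D layer already holds 32 random kernels before any training happens:

import tensorflow as tf

# A freshly built Conv2D layer already holds its kernels -- drawn at random
# by the layer's kernel_initializer (Glorot uniform by default), not taken
# from any library of predefined patterns.
layer = tf.keras.layers.Conv2D(filters=32, kernel_size=(3, 3))
layer.build(input_shape=(None, 28, 28, 1))  # e.g. 28x28 grayscale images

kernels, biases = layer.get_weights()
print(kernels.shape)         # (3, 3, 1, 32): one 3x3 kernel per input channel, for 32 filters
print(kernels[:, :, 0, 0])   # the first filter's values -- random until training updates them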
Thank you @balaji.ambresh
I was under the impression that there were a set of pre-defined filters - had not understood that filters could be randomly generated patterns.
I also understand now that, after a random initialization, the filters are updated based on the training data, and I guess they tend to specialize in a particular feature, depending on the image section they are working on.
Thank you for the clarification!
Juan
Yes, the example Prof Ng gives of a predefined filter to detect vertical edges is just that: an example to demonstrate how convolutions work and how they can detect features. But that style of "hard-coded" filters is "old school" and not how things are done anymore. Now you start with randomly initialized filters (for "symmetry breaking") and then run the training and let back propagation learn the filter values that actually work for your particular task.
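To make that contrast concrete, here is a small illustrative sketch of the "old school" approach (the image values are made up): a hand-coded vertical edge filter applied with a plain, non-trainable convolution.

import numpy as np
import tensorflow as tf

# The classic hand-coded vertical edge filter from the lectures, applied
# with a fixed convolution. Nothing here is learned; the values are prescribed.
edge_filter = np.array([[1., 0., -1.],
                        [1., 0., -1.],
                        [1., 0., -1.]], dtype=np.float32).reshape(3, 3, 1, 1)

image = np.zeros((1, 6, 6, 1), dtype=np.float32)
image[0, :, 3:, 0] = 10.0  # right half bright: a vertical edge down the middle

response = tf.nn.conv2d(image, edge_filter, strides=1, padding='VALID')
print(response[0, :, :, 0].numpy())  # nonzero responses mark the edge location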
@paulinpaloalto and @balaji.ambresh, thank you very much, very clear now!
+1 on what @balaji.ambresh and @paulinpaloalto have written above. Note that for certain classes of well-understood image processing tasks, like contrast enhancement, sharpening, and blurring, static, pre-defined filters or kernels are likely still used. The challenge that deep learning addresses is when the objective is to recognize a cat, or a field that needs irrigation, or a flaw in a manufactured part, for which kernels are much more difficult to prescribe. For these types of problems, the ability to learn kernels from large data sets is transformative.
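As a footnote to that point, such classical kernels are simple enough to write down by hand. Two standard textbook examples (not from the course material):

import numpy as np

# Classic fixed kernels from traditional image processing -- their values are
# prescribed by hand, never learned.
box_blur = np.full((3, 3), 1.0 / 9.0)   # averages each 3x3 neighborhood

sharpen = np.array([[ 0., -1.,  0.],
                    [-1.,  5., -1.],
                    [ 0., -1.,  0.]])   # boosts the center pixel relative to its neighbors

# Either could be applied exactly like the edge filter above; deep learning is
# needed only when good kernel values are too hard to prescribe by hand.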