I noticed Robert mentioning the base model in the design phase. Can anyone discuss this model in detail, i.e. how its architecture was planned or designed? I can see the nose has a convolution layer with different filters…
I would appreciate this discussion. I already know the YOLO algorithm was used, and I have some understanding of it.
You can take a look at the model for more details in the
Thank you for replying. Would that give me an understanding of the model architecture, i.e. why each layer was used with a particular choice of filters or units? I have gone through those files and also the optional labs, but I wanted to know how every part of the animal's features was selected with particular units.
I hope you now see my point of interest!
Hmm… I don’t think that’s something that can be easily explained here. You might want to take DLS or MLS to understand how these models operate, how the architecture decisions are made, and so on. I’m not sure how “white box” these models are, but as I understand it, neural networks are “black boxes”: they decide for themselves how to categorise the “features” of animals internally.
If you could point me to the lab and the model you are talking about, I could take a look.
And yes, looking in the files will give you an understanding of the model architecture.
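As a side note on the “black box” point: in a CNN, the filters are not hand-assigned to animal parts like a nose or an ear. Each filter is just a small grid of weights that starts out random and gets adjusted during training. A minimal NumPy sketch (toy numbers, not the lab’s model) of applying one 3×3 filter to an image:

```python
import numpy as np

# A toy 5x5 grayscale "image" (values are illustrative only).
image = np.arange(25, dtype=float).reshape(5, 5)

# One 3x3 filter. In a real CNN these weights are *learned* by
# gradient descent; nobody assigns them to "nose" or "ear".
kernel = np.array([[ 1.0, 0.0, -1.0],
                   [ 1.0, 0.0, -1.0],
                   [ 1.0, 0.0, -1.0]])  # a vertical-edge detector

def conv2d_valid(img, k):
    """Slide the filter over the image (stride 1, no padding)."""
    kh, kw = k.shape
    out_h = img.shape[0] - kh + 1
    out_w = img.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * k)
    return out

feature_map = conv2d_valid(image, kernel)
print(feature_map.shape)  # (3, 3)
```

During training, only the numbers inside `kernel` change; whether a filter ends up responding to edges, textures, or something nose-like is an outcome of optimisation, not a design decision per layer.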
I have taken the DLS and MLS specialisations to understand how model architectures are planned or made. But I was not aware that the models decide for themselves how to categorise the features of animals internally. This is new to me.
Feature selection, based on my understanding, is choosing the best features from the relevant or original data. I think when you say the models decide for themselves, you mean a random selection of relevant features, but even that is done from a subset of the original features. So I wanted to know the thought process behind the model architecture. For instance, as discussed in the video, random shots were taken from a particular place where the animal’s features would be obscured; were those images treated as noise, or included among the original features?
My bad, I misunderstood what you were asking. Yes, you are right about feature selection here.
You were talking about features in data, I was talking about features in an image.
But then again, feature selection, if I’m not wrong, only applies to structured data, i.e. tables and the like. For example, you have 10 columns of patient data and you want to know which ones are the most important in predicting a disease. This does not apply to images, which is what the model is used on here.
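To make that concrete, here is a minimal sketch of univariate feature selection on toy tabular data (the columns and numbers are made up for illustration, not from any lab): score each column by its absolute correlation with the target, then keep the highest-scoring one.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Toy "patient" table: 3 columns, only the first drives the label.
informative = rng.normal(size=n)
noise_a = rng.normal(size=n)
noise_b = rng.normal(size=n)
X = np.column_stack([informative, noise_a, noise_b])
y = (informative > 0).astype(float)  # label depends only on column 0

# Score each feature by |correlation with y|, then pick the best.
scores = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                   for j in range(X.shape[1])])
best = int(np.argmax(scores))
print("scores:", np.round(scores, 2), "best column:", best)
```

This kind of column-wise scoring is what “feature selection” usually means for tables; for images, the network instead learns its own internal features from raw pixels.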
Anyway, if you want to look at the models, you can call
model.summary() to list all of the layers of the model. You can also read the original paper on it, which best describes the architectural decisions the authors made and why.
Similarly, the resources linked at the top of the labs and in the resources section point to the sources the model drew its inspiration from.