Keep_prob for input layer

Hi Team,

Can you please help me understand whether keep_prob, as a dropout parameter, can be used as an alternative to general bagging or boosting models during the training process?

Thanks,
Harsh

Hi Harsh, I was wondering which algorithm you're working with. keep_prob is used to set the dropout rate in neural nets: it is the probability that a unit is kept, so the fraction dropped is 1 - keep_prob. We use dropout to reduce overfitting, and it works somewhat like bagging/ensemble learning, since each training pass effectively trains a different randomly thinned sub-network.
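
In case it helps, here is a minimal numpy sketch (my own illustration, not course code) of the usual "inverted dropout" trick: keep_prob is the probability a unit survives, and surviving activations are scaled up by 1/keep_prob so their expected value is unchanged:

```python
import numpy as np

def dropout_forward(a, keep_prob, rng):
    """Apply inverted dropout to activations `a` during training.

    Each unit is kept with probability `keep_prob`; kept units are
    scaled by 1/keep_prob so the expected activation is unchanged.
    """
    mask = (rng.random(a.shape) < keep_prob).astype(a.dtype)
    return (a * mask) / keep_prob

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 3))                       # activations from some layer
a_train = dropout_forward(a, keep_prob=0.8, rng=rng)  # roughly 20% of units zeroed
# At test time no dropout is applied; the activations are used as-is.
```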


Hello Bishnu,

Thanks for the reply. I'm looking for a general view of keep_prob with respect to bagging and boosting, since I thought it could be used in a similar way. But if it varies from algorithm to algorithm, could you please share more details, or point me to an article or document where it is explained in more depth?

Thanks,
Harsh

I am not sure how DLS Course 2 covers this subject, but the following 2 links may be useful.


“Bagging” and “boosting” are not covered anywhere in DLS. I am not personally familiar with either of those topics, so I can't help there. But, as with everything these days, “Google is your friend”. :nerd_face:

As Bishnu mentioned above, dropout is a regularization technique, primarily useful for reducing overfitting. Prof Ng explains dropout and how to apply it in some detail in the lectures in C2 W1. If you want to know more about dropout, you can read the original paper on the subject from Prof Geoff Hinton's group (Srivastava et al., "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", JMLR 2014).
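
To illustrate the ensemble intuition Bishnu alluded to: every training pass samples a fresh dropout mask, so training effectively trains many weight-sharing sub-networks, and the single no-dropout forward pass at test time approximates averaging their predictions. Here is a toy sketch of that (my own, with a made-up linear layer, where the match is exact in expectation; with nonlinearities it is only approximate):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((5, 3))    # toy weights for a single linear layer
x = rng.standard_normal(5)         # one input example
keep_prob = 0.8

# Average predictions over many randomly "thinned" sub-networks
# (the implicit ensemble that dropout trains).
samples = [((rng.random(x.shape) < keep_prob) / keep_prob * x) @ W
           for _ in range(10_000)]
mc_average = np.mean(samples, axis=0)

# A single forward pass with dropout turned off approximates that average.
print(mc_average)   # ~ equal to the line below, up to Monte Carlo noise
print(x @ W)
```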
