Regularization at test time

Hello,

In the Week 2 Quiz, there was this question:
“The regularization hyperparameter must be set to zero during testing to avoid getting random results,” for which the answer was False.
This has me confused, as I thought that regularization is applied only on the training set, and once the optimal weights are decided there is no need to apply regularization on the test set. I believe this is the case for regression algorithms in Machine Learning. If so, why is it different for Neural Networks?

You’re right that regularization is only applied at training time, not at test time. So my only guess is that you are somehow misinterpreting the options in the question. E.g. if it is dropout regularization they are talking about, you would set keep_prob to 1, not 0, right?
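To make that concrete, here is a minimal numpy sketch (not the course's actual code; the function name and the ReLU activation are just illustrative) of inverted dropout, showing why skipping the mask at test time is the same thing as setting keep_prob = 1:

```python
import numpy as np

def forward_with_dropout(A_prev, W, b, keep_prob, training=True):
    """One forward step with inverted dropout (illustrative only)."""
    Z = W @ A_prev + b
    A = np.maximum(0, Z)  # ReLU activation
    if training and keep_prob < 1.0:
        # Drop units with probability 1 - keep_prob and rescale the
        # survivors so the expected activation stays the same.
        D = np.random.rand(*A.shape) < keep_prob
        A = (A * D) / keep_prob
    return A

# Training: dropout is active
# A = forward_with_dropout(A_prev, W, b, keep_prob=0.8, training=True)
# Testing: no mask at all, which is equivalent to keep_prob = 1
# A = forward_with_dropout(A_prev, W, b, keep_prob=0.8, training=False)
```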


Thanks for the swift response!
The question said ‘regularization hyperparameter’ so I assumed they were talking about lambda and not dropout.
Also, here is the explanation provided with the answer: (see attached screenshot)
The regularization parameter affects how the weights change during training, that is, during backpropagation. It has no effect during forward propagation, which is when the predictions for the test set are made.
This explanation would also make sense when talking about lambda, wouldn't it? In that case it doesn't matter what value you set lambda to, as it plays no role in making predictions on the test set. Correct me if I'm wrong.

Yes, that must mean they are talking about L2 regularization, and in that case the value of lambda has nothing to do with the predictions that are made: it only affects the cost function. But you can argue that the question should be a bit more specific, because in the case of dropout it does actually affect forward propagation, so you would need to set keep_prob = 1 in that case.
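To illustrate the L2 case, here is a rough numpy sketch (my own simplification of a two-layer network, not the assignment code) showing that lambda only appears in the cost, and therefore in the gradients, while the prediction path never touches it:

```python
import numpy as np

def compute_cost_with_l2(AL, Y, weights, lambd):
    """Cross-entropy cost plus the L2 penalty. lambd appears only here
    (and hence in the gradients during backpropagation)."""
    m = Y.shape[1]
    cross_entropy = -np.sum(Y * np.log(AL) + (1 - Y) * np.log(1 - AL)) / m
    l2_term = (lambd / (2 * m)) * sum(np.sum(np.square(W)) for W in weights)
    return cross_entropy + l2_term

def predict(X, W1, b1, W2, b2):
    """Plain forward propagation for test-time predictions.
    Note that lambda is not used anywhere, so its value is irrelevant here."""
    A1 = np.maximum(0, W1 @ X + b1)             # ReLU hidden layer
    A2 = 1 / (1 + np.exp(-(W2 @ A1 + b2)))      # sigmoid output
    return (A2 > 0.5).astype(int)
```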

But at least we have an explanation now …