Regularization - Prog Assign 2

In the programming assignment titled Regularization, in the function backward_propagation_with_dropout, shouldn't the following

dZ2 = np.multiply(dA2, np.int64(A2 > 0))

read

dZ2 = np.multiply(dA2, np.int64(Z2 > 0))

The answer doesn't change: since the activation is ReLU, A2 = max(0, Z2), so the masks A2 > 0 and Z2 > 0 are identical. However, since we are computing the derivative with respect to Z2, I feel it should be np.int64(Z2 > 0).

Is that correct?
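
For what it's worth, here is a quick numerical check (a minimal sketch with made-up shapes, not the actual assignment code) showing that the two masks, and therefore dZ2, come out identical when A2 is a ReLU activation:

```python
import numpy as np

np.random.seed(1)
Z2 = np.random.randn(3, 5)           # pre-activation (hypothetical shape)
A2 = np.maximum(0, Z2)               # ReLU activation
dA2 = np.random.randn(3, 5)          # upstream gradient (hypothetical values)

dZ2_from_A2 = np.multiply(dA2, np.int64(A2 > 0))  # mask as written in the assignment
dZ2_from_Z2 = np.multiply(dA2, np.int64(Z2 > 0))  # mask based on Z2

print(np.array_equal(dZ2_from_A2, dZ2_from_Z2))   # True: the masks agree for ReLU
```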

@aloklal99,
This looks like a question about the Deep Learning Specialization (DLS), not the Machine Learning Specialization (MLS), so I'm moving it to the appropriate category.

I don't have access to the DLS assignments, but from your description, it sounds like you are right. If you are calculating the derivative of the ReLU function with respect to z, the ReLU function is g(z) = max(0, z), and the derivative should be 0 if z < 0 and 1 if z > 0.
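
In code, that derivative is just an element-wise mask (a minimal sketch, not taken from the assignment; the value at z = 0 is taken to be 0 by convention):

```python
import numpy as np

def relu(z):
    """ReLU activation: g(z) = max(0, z), applied element-wise."""
    return np.maximum(0, z)

def relu_derivative(z):
    """Derivative of ReLU w.r.t. z: 1 where z > 0, 0 where z < 0 (and 0 at z = 0)."""
    return np.int64(z > 0)
```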

I'll leave it for a DLS mentor to confirm, and I'll put in a request for the DLS staff to update the assignment.

I agree with @aloklal99.


I've changed the category and week tag, and am leaving it for other mentors' comments and further action.

@mubsi, can you investigate this (i.e. what is the correct partial derivative for backpropagation with ReLU)?
