Dropout regularization programming exercise issue

Let us suppose X is a vector initialized with this command: X = np.random.rand(4,1).

If keep_prob is 0.8, then as per the discussion in the programming exercise, the following code should result in 80% 1's and 20% 0's:

X = (X < keep_prob).astype(int)

But it is not doing exactly that: the number of 1's and 0's varies. How do I get that exact percentage of zeros and ones?

The point is that the behavior is random, right? So you won't necessarily get exactly 80% ones on any given run: the values are statistical and will vary. There may also be “quantization error”, meaning that 80% of the number of neuron output values may not be a whole number. For example, with the 4 values in your X, exactly 80% ones would require 3.2 ones, which is impossible.

Note also that the assignment code sets the random seed just for ease of grading, so you won't actually see that variability there. But it would be easy to construct your own test case and demonstrate the statistical behavior.
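For example, here is a minimal sketch (not part of the assignment) that repeats the mask generation many times and reports the fraction of ones. The individual fractions vary from run to run, but the average converges to keep_prob:

```python
import numpy as np

keep_prob = 0.8
fractions = []
for _ in range(1000):
    X = np.random.rand(4, 1)                 # same shape as in the question
    mask = (X < keep_prob).astype(int)
    fractions.append(mask.mean())            # fraction of ones in this particular mask

print("mean fraction of ones:", np.mean(fractions))          # close to 0.8
print("min/max over runs:", min(fractions), max(fractions))  # individual runs vary widely
```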

A couple of other points to notice here:

  1. The mask is not a vector: it is a matrix with the same shape as A^{[l]}, which means that different samples within the given batch or minibatch are not handled the same way (each sample gets its own random mask).

  2. You can also get the same statistical 80% behavior by writing it this way:

X = (X > (1 - keep_prob)).astype(int)

but the actual neurons being “zapped” will be different with that implementation, as the sketch below illustrates. Note that the grader will not accept that version: you have to use the < form shown above.
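Here is a small illustration (using an assumed 4x4 example matrix, not the assignment data) showing that both formulations keep about 80% of the entries, but disagree on which entries they keep:

```python
import numpy as np

np.random.seed(1)
X = np.random.rand(4, 4)
keep_prob = 0.8

mask_lt = (X < keep_prob).astype(int)        # the version the grader expects
mask_gt = (X > (1 - keep_prob)).astype(int)  # statistically equivalent alternative

print(mask_lt)
print(mask_gt)
print("entries where the two masks disagree:", np.sum(mask_lt != mask_gt))
```

The < version zaps the entries of X above 0.8, while the > version zaps the entries below 0.2, so on average about 40% of the positions differ between the two masks even though each keeps roughly 80% of the neurons.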