In lecture it says that dropout zeroes certain neurons randomly on each iteration, and we use keep_prob for that. That is, if keep_prob is 0.8, 80% of the neurons are retained and 20% are set to zero. But the actual implementation behaved somewhat differently: even with keep_prob = 0.8, 20% of the values in the matrix were not zero. I am adding the code and its output below:

import numpy as np
np.random.seed(1)
D = np.random.rand(10, 1)    # uniform random values in [0, 1)
print(D)
D = (D < 0.8).astype(int)    # 1 where the neuron is kept, 0 where it is dropped
print(D)

Here, after applying keep_prob, all of the values are 1, but shouldn't exactly 2 of them be 0?
Then, only after multiplying D with the A vector, some of the neurons will be zeroed, isn't it?

Given the random nature of the process, that can happen. There is no guarantee that exactly 20% of the neurons are dropped; it could be more or fewer, depending on randomness.
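To see the whole mechanism end to end, here is a minimal sketch of inverted dropout as described in lecture. The variable names A, D, and keep_prob follow the course notation, but the activations here are just illustrative random values, not from an actual network:

```python
import numpy as np

np.random.seed(1)
keep_prob = 0.8

A = np.random.randn(10, 1)                 # stand-in for a layer's activations
D = np.random.rand(*A.shape) < keep_prob   # True with probability keep_prob
A = A * D                                  # zero out the dropped neurons
A = A / keep_prob                          # inverted dropout: rescale survivors

print(f"dropped {np.sum(D == 0)} of {D.size} neurons")
```

Note that the number printed on any single run need not be exactly 2; the mask is random, so only its long-run average matches keep_prob.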

There have been some discussions about this on the forum, so I encourage you to take a look at those. Some posts that could be helpful are below, but I'm sure there are more.

Exactly! It’s all statistical. Try your experiment several times in a row without resetting the random seed between runs and watch what happens. Here’s such an experiment:

import numpy as np

np.random.seed(42)
keep_prob = 0.8
for ii in range(20):
    D = np.random.rand(10, 1)
    D = (D < keep_prob).astype(float)
    print(f"{ii}: mean(D) = {np.mean(D)}")
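A related point worth seeing for yourself: the realized fraction of kept neurons concentrates around keep_prob as the layer gets larger (the law of large numbers). This sketch, assuming the same kind of mask as above, compares a few layer sizes:

```python
import numpy as np

np.random.seed(0)
keep_prob = 0.8

# With only 10 units, the kept fraction can swing widely; with 10,000
# units it lands very close to keep_prob on essentially every run.
for n in (10, 100, 10000):
    D = (np.random.rand(n, 1) < keep_prob).astype(float)
    print(f"n={n:6d}: fraction kept = {np.mean(D):.4f}")
```

So with the tiny 10-unit example from the question, seeing zero dropped neurons on one draw is entirely plausible.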