np.random.rand() vs np.random.randn()

Hey, so in all the previous programming assignments I have been using np.random.randn(shape). However, in the gradient checking programming assignment, when I used np.random.randn() from memory, the test didn’t pass. I then read the instructions carefully and found they asked us to use np.random.rand().
What’s the difference?

Is it just that the test didn’t pass because it was configured with np.random.rand(), but for real-world use np.random.randn() would work just as well, or am I missing something?

Thanks!

Hi @Jaskeerat, the main difference between the two functions is the distribution of the generated random numbers:

numpy.random.rand() produces numbers drawn from a uniform distribution over [0, 1),

and numpy.random.randn() produces numbers drawn from a standard normal distribution (mean 0, standard deviation 1).
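
As a quick illustration (a minimal sketch, not code from any assignment), you can draw a few samples from each and compare their ranges and means:

```python
import numpy as np

u = np.random.rand(100000)    # uniform samples in [0, 1)
n = np.random.randn(100000)   # standard normal samples (mean 0, std 1)

print(f"rand():  min={u.min():.3f}  max={u.max():.3f}  mean={u.mean():.3f}")
print(f"randn(): min={n.min():.3f}  max={n.max():.3f}  mean={n.mean():.3f}  std={n.std():.3f}")
```

rand() values always stay inside [0, 1), while randn() values are centered at 0 and can be negative or larger than 1.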

When used for weight initialization, randn() helps most of the weights avoid being close to the extremes.
An intuitive way to see it is to take the sigmoid() function as an example. You’ll remember that the slope near 0 or near 1 is extremely small, so weights that push the activations toward those extremes will converge much more slowly to the solution.
[Figure: plot of the sigmoid function]
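
Here is a small sketch (just for illustration, not assignment code) showing that sigmoid’s slope, sigmoid(z) * (1 - sigmoid(z)), is largest when the output is 0.5 (at z = 0) and shrinks toward zero as the output approaches 0 or 1:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_slope(z):
    s = sigmoid(z)
    return s * (1.0 - s)  # derivative of sigmoid with respect to z

for z in [0.0, 2.0, 5.0, 10.0]:
    print(f"z = {z:5.1f}   sigmoid = {sigmoid(z):.5f}   slope = {sigmoid_slope(z):.5f}")

# z =   0.0   sigmoid = 0.50000   slope = 0.25000
# z =   2.0   sigmoid = 0.88080   slope = 0.10499
# z =   5.0   sigmoid = 0.99331   slope = 0.00665
# z =  10.0   sigmoid = 0.99995   slope = 0.00005
```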
Do you mean the Regularization exercise? The Gradient Checking exercise does not require using a random function.
If you do mean the Regularization exercise, the rand() function is used there to create the dropout matrix, not for weight initialization (the first time I did the exercise, I also erroneously used randn() :wink:). A rough sketch of the idea follows.
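
For reference, the dropout mask idea looks roughly like this (a minimal sketch with made-up shapes and variable names, not the exercise’s exact code):

```python
import numpy as np

keep_prob = 0.8                              # hypothetical keep probability
A1 = np.random.randn(4, 5)                   # pretend these are hidden-layer activations

# rand() gives uniform values in [0, 1), so comparing against keep_prob
# keeps each unit with probability keep_prob.
D1 = np.random.rand(*A1.shape) < keep_prob
A1 = (A1 * D1) / keep_prob                   # inverted dropout: zero out dropped units, rescale the rest
```

This is exactly why rand() is the right tool here: a uniform draw compared against keep_prob keeps roughly a keep_prob fraction of the units, whereas a standard normal draw from randn() would not give that proportion.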

Hi Javier!
I am totally clear on the importance of randn, thank you for the explainer!

Just one doubt: you mentioned that “You’ll remember that the slope near 0 or near 1 is extremely small.”

However, isn’t the slope maximum at 0?

This helped me; I was using the wrong function for Course 2, Week 1, Dropout Exercise 6.1.
Thanks
