Hi,
I noticed that there is a mismatch between the shape of the W1 matrix in the function description and the shape of the matrix that is used for testing. I used .shape to be able to run the notebook to the end (and get my points), but it might be worth correcting for less experienced coders.
Yes, the “docstring” there is incorrect. Following that will not end well, as you discovered.
But the other point is that the exercise has you apply dropout at two of the layers, and it is never a safe assumption that all layers have the same number of neurons.
Thanks @paulinpaloalto , understood.
“Hard-coding” is never a good idea on general principles, unless you have no other choice. Here there is no reason to do it at all: just use the shape of the given activation matrix.
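As a minimal sketch of that idea (the names A, D, and keep_prob here are just illustrative; the assignment's own variable names may differ):

```python
import numpy as np

def apply_dropout(A, keep_prob):
    # Build the mask from A's own shape rather than hard-coded dimensions,
    # so it works for any layer size.
    D = np.random.rand(A.shape[0], A.shape[1]) < keep_prob
    A = A * D          # zero out the dropped neurons
    A = A / keep_prob  # inverted dropout: rescale to preserve the expected value
    return A, D
```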
Hard coding is learning the hard way; I will remember that! Thanks @paulinpaloalto, I am having fun.