I’ve been debugging a little more to isolate the problem: I’m getting
Z1 = [[-0.00616586 0.00206261 0.0034962 ]
[-0.05229879 0.02726335 -0.02646868]
[-0.0200999 0.00368691 0.02884556]
[ 0.02153008 -0.01385323 0.02600471]]
and in the test code I see the expected Z1 is:
expected_Z1 = np.array([[ 1.7386459 , 1.74687437, 1.74830797],
[-0.81350569, -0.73394355, -0.78767559],
[ 0.29893918, 0.32272601, 0.34788465],
[-0.2278403 , -0.2632236 , -0.22336567]])
So my problem starts right at the beginning. How can the expected Z1 be so big if W1 is initialized with random values * 0.01 and b1 to 0?
Z1 = np.dot(W1, X) + b1 is expected to be small, right? What am I not understanding?
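For intuition, here is a minimal sketch (the seed and shapes are mine, chosen only to match the 4×3 output above, not the assignment’s fixtures) showing why Z1 does stay small under that initialization:

```python
import numpy as np

np.random.seed(2)

# "Small random" initialization as described above (hypothetical shapes:
# 4 hidden units, 2 input features, 3 examples).
n_h, n_x, m = 4, 2, 3
W1 = np.random.randn(n_h, n_x) * 0.01   # entries on the order of 0.01
b1 = np.zeros((n_h, 1))

X = np.random.randn(n_x, m)

Z1 = np.dot(W1, X) + b1
print(np.abs(Z1).max())   # tiny, on the order of 1e-2, like the values above
```

As the replies below point out, the expected Z1 in the test is large because the test case supplies its own hand-built parameters, which need not come from this initialization at all.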
Hi @Edu4rd , if the problem is there from the beginning, can you check whether the initialization of the parameters is correct? It might be that you hard-coded a value, or that there is an error or typo in your parameter initialization function.
Note that in the test case for forward propagation, the inputs are just provided for you as part of the test case. There is no guarantee that they came from the “standard” initialization routine. Your code looks correct, so if even Z1 is already wrong I would suspect you are not using the actual parameter values that are being passed to your function. Worth a look anyway …
Are you maybe calling the “init” routine from inside your forward_propagation code? If so, that would be a mistake and might explain what is going on …
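A schematic illustration of that mistake (the function names echo the assignment’s, but the bodies here are my own simplified stand-ins):

```python
import numpy as np

def initialize_parameters(n_x, n_h, seed=0):
    # Hypothetical stand-in for the assignment's init routine.
    rng = np.random.default_rng(seed)
    return {"W1": rng.standard_normal((n_h, n_x)) * 0.01,
            "b1": np.zeros((n_h, 1))}

# WRONG: re-initializing inside forward propagation throws away the
# parameters the test case passed in, so Z1 no longer matches.
def forward_propagation_wrong(X, parameters):
    parameters = initialize_parameters(X.shape[0], 4)  # overwrites the test's values
    return np.dot(parameters["W1"], X) + parameters["b1"]

# RIGHT: forward propagation only reads the parameters it is given.
def forward_propagation(X, parameters):
    return np.dot(parameters["W1"], X) + parameters["b1"]

# The test harness builds its own parameters and expects you to use them:
test_params = {"W1": np.ones((4, 2)), "b1": np.ones((4, 1))}
X = np.ones((2, 3))
print(forward_propagation(X, test_params))        # uses the test's values
print(forward_propagation_wrong(X, test_params))  # tiny values, wrong answer
```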
I am seeing an unusual error with forward propagation.
I’m using the np.dot function to multiply W1 and X, and the error seems to indicate they are the wrong data type(?)
Here’s the error message:
[TypeError traceback posted as a screenshot, not reproduced here]
Hi @sdidde , the TypeError means your data are strings (U32) instead of numbers (floats). The best way forward is to check and print out W1 and X; that might help you find out where the problem is. Hope this helps you find the solution.
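A quick way to do that check, with hypothetical values standing in for the real W1 and X:

```python
import numpy as np

# If W1 or X somehow ended up holding strings, np.dot fails with a
# dtype-related TypeError. Printing .dtype pinpoints the culprit:
W1 = np.array([["0.01", "-0.02"]])   # accidentally a string array
X = np.array([[1.0], [2.0]])

print(W1.dtype, X.dtype)             # a '<U...' string dtype vs float64

W1 = W1.astype(float)                # convert back to numbers
print(np.dot(W1, X))                 # now works: [[-0.03]]
```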
I got this error too, but could you explain the point about init inside forward prop more clearly? From what I can see, the exercise says nothing about init or anything else.
In one of the previous discussions on this thread, the mistake that the person had made was calling the initialization routine from inside the forward propagation routine. That is a mistake: the initialization is called before you call forward propagation. Of course there are lots of other mistakes that are possible here. If the above doesn’t shed any light, then please show us the actual error output that you are getting.
That all sounds right. There aren’t that many moving parts here. Are you sure you included the bias terms for the computations of Z1 and Z2? And that you used tanh for layer 1 and sigmoid for layer 2? Other than that, there shouldn’t be much to go wrong.
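For reference, this is the structure being described, as a minimal sketch (the sigmoid helper and the cache layout are my own illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_propagation(X, parameters):
    W1, b1 = parameters["W1"], parameters["b1"]
    W2, b2 = parameters["W2"], parameters["b2"]

    Z1 = np.dot(W1, X) + b1    # bias b1 must be included
    A1 = np.tanh(Z1)           # tanh for the hidden layer
    Z2 = np.dot(W2, A1) + b2   # bias b2 must be included
    A2 = sigmoid(Z2)           # sigmoid for the output layer

    cache = {"Z1": Z1, "A1": A1, "Z2": Z2, "A2": A2}
    return A2, cache
```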
I added print statements to show the intermediate values of Z1, A1 and Z2. Here’s what I get for that test case:
[values posted as a screenshot, not reproduced here]
Yes, those are radically different values. Are you sure that you aren’t calling initialize_parameters within your forward_propagation code? Or maybe you are not referencing the actual parameter value passed in for the X and are getting some global variable instead.
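The second pitfall looks like this in miniature (names are mine, purely for illustration):

```python
import numpy as np

X = np.zeros((2, 3))   # a notebook-level global that happens to be named X

def layer_output(X_input, W, b):
    # BUG: referencing the global X instead of the argument X_input.
    # The call below passes ones, but the result is computed from zeros.
    return np.dot(W, X) + b          # should be: np.dot(W, X_input) + b

W = np.ones((4, 2)); b = np.zeros((4, 1))
print(layer_output(np.ones((2, 3)), W, b))   # all zeros -- the global won
```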
Thank you very much for your support, I figured it out: the variable name was nearly the same as in a previous run, so the function was using the previous value instead of the current one.
It’s great news that you found the solution. Yes, they run the same test twice: once directly in that cell and then again when they call the test function from public_tests.py. I’m not sure why they duplicate the same test; better SQA methodology would include more diversity in the testing.