My Unit Test does not work for the ForwardPropagation Method

Hello. This is the output from the unit test for the forward propagation method. Note that the method ran correctly before the unit test.

Test case "change_weights_check". Wrong array A.
Expected:
[[-0.00245289 0.00824937 -0.00665357 0.00132569 -0.00880105 -0.00386336
0.01769506 0.00525675 0.00295275 0.00674929 0.0158391 0.00846196
0.00845636 -0.00871683 -0.01341474 0.00094482 0.00719453 -0.01248855
-0.01124121 -0.00693175 0.00585243 -0.00407756 0.00406077 0.00205962
-0.00448089 -0.00032455 -0.0069261 0.00191725 0.0047034 0.00247886]]
Got:
[[ 0.00570642 -0.01919142 0.01547893 -0.0030841 0.02047485 0.00898776
-0.04116598 -0.01222935 -0.00686931 -0.01570163 -0.03684826 -0.01968599
-0.01967297 0.02027892 0.0312082 -0.00219805 -0.01673744 0.0290535
0.02615168 0.01612611 -0.01361516 0.00948609 -0.00944703 -0.00479152
0.0104244 0.00075505 0.01611297 -0.00446031 -0.01094205 -0.00576685]].

ValueError Traceback (most recent call last)
in
----> 1 w3_unittest.test_forward_propagation(forward_propagation)

~/work/w3_unittest.py in test_forward_propagation(target_forward_propagation)
286
287 for test_case in test_cases:
---> 288 result = target_forward_propagation(test_case["input"]["X"], test_case["input"]["parameters"])
289
290 try:

in forward_propagation(X, Y)
18 # Implement Forward Propagation to calculate Z.
19 ### START CODE HERE ### (~ 2 lines of code)
---> 20 Z = np.matmul(W,X) + b
21 Y_hat = Z
22

ValueError: matmul: Input operand 1 has a mismatch in its core dimension 0, with gufunc signature (n?,k),(k,m?)->(n?,m?) (size 2 is different from 1)


What are the sizes of W and X?
You can instrument this in your code with:

print("W is:",W.shape)
print("X is:",X.shape)

When I run the code before the unit test, it runs successfully and correctly shows that W has shape (1, 1) and X has shape (1, 30). In the unit test, it shows these values, but later it also shows W with shape (1, 1) and X with shape (2, 5).

W is: (1, 1)
X is: (1, 30)
W is: (1, 1)
X is: (1, 30)
Test case "change_weights_check". Wrong array A.
Expected:
[[-0.00245289 0.00824937 -0.00665357 0.00132569 -0.00880105 -0.00386336
0.01769506 0.00525675 0.00295275 0.00674929 0.0158391 0.00846196
0.00845636 -0.00871683 -0.01341474 0.00094482 0.00719453 -0.01248855
-0.01124121 -0.00693175 0.00585243 -0.00407756 0.00406077 0.00205962
-0.00448089 -0.00032455 -0.0069261 0.00191725 0.0047034 0.00247886]]
Got:
[[ 0.00570642 -0.01919142 0.01547893 -0.0030841 0.02047485 0.00898776
-0.04116598 -0.01222935 -0.00686931 -0.01570163 -0.03684826 -0.01968599
-0.01967297 0.02027892 0.0312082 -0.00219805 -0.01673744 0.0290535
0.02615168 0.01612611 -0.01361516 0.00948609 -0.00944703 -0.00479152
0.0104244 0.00075505 0.01611297 -0.00446031 -0.01094205 -0.00576685]].
W is: (1, 1)
X is: (2, 5)

ValueError Traceback (most recent call last)
in
----> 1 w3_unittest.test_forward_propagation(forward_propagation)

~/work/w3_unittest.py in test_forward_propagation(target_forward_propagation)
286
287 for test_case in test_cases:
---> 288 result = target_forward_propagation(test_case["input"]["X"], test_case["input"]["parameters"])
289
290 try:

in forward_propagation(X, Y)
20 print("W is:", W.shape)
21 print("X is:", X.shape)
---> 22 Z = np.matmul(W,X) + b
23 Y_hat = Z
24

ValueError: matmul: Input operand 1 has a mismatch in its core dimension 0, with gufunc signature (n?,k),(k,m?)->(n?,m?) (size 2 is different from 1)

That doesn’t seem possible. There must always be the same number of weights in W as there are features in X.
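For reference, `np.matmul` on 2-D arrays follows the (n, k) @ (k, m) -> (n, m) rule, so the number of columns of W must equal the number of rows of X. A minimal sketch of both the working case and the reported mismatch (the specific values here are illustrative):

```python
import numpy as np

# np.matmul on 2-D arrays follows (n, k) @ (k, m) -> (n, m):
# the column count of W must equal the row count of X.
W = np.array([[0.5, -0.3]])   # shape (1, 2): one unit, two input features
X = np.random.randn(2, 5)     # shape (2, 5): two features, five samples
b = np.zeros((1, 1))

Z = np.matmul(W, X) + b       # inner dimensions agree: 2 == 2
print(Z.shape)                # (1, 5)

# A (1, 1) W against a (2, 5) X reproduces the reported error:
W_bad = np.array([[0.5]])
try:
    np.matmul(W_bad, X)
except ValueError as err:
    print(err)                # the "size 2 is different from 1" mismatch
```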


Yes, exactly. I do not know why that is happening either to be honest. The shape (2,5) is a random shape for X and I am not sure what to change as the previous methods were graded as correct.


Why would the shape be random? That seems unlikely in the design of an assignment.
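One way a shape can look "random" is if W is not being read from the `parameters` argument at all. This is only a guess at the cause, and the names below are illustrative, not the notebook's actual code: if `W` is captured from the notebook's global scope, it keeps its original (1, 1) shape no matter which `parameters` the unit test passes in, which would produce exactly this kind of mismatch.

```python
import numpy as np

# Hypothetical reconstruction (illustrative names, not the notebook's code):
# W defined in an earlier cell, sitting in the global scope.
W = np.random.randn(1, 1)

def forward_prop_buggy(X, parameters):
    # Bug: ignores `parameters` and uses the global W captured above,
    # so W stays (1, 1) no matter what the test passes in.
    return np.matmul(W, X)

def forward_prop_fixed(X, parameters):
    W_local = parameters["W"]      # read W from the argument each call
    return np.matmul(W_local, X)

test_params = {"W": np.random.randn(1, 2)}
X_test = np.random.randn(2, 5)     # the "random-looking" test shape

print(forward_prop_fixed(X_test, test_params).shape)  # (1, 5)
try:
    forward_prop_buggy(X_test, test_params)           # (1, 1) @ (2, 5)
except ValueError as err:
    print(err)
```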


Hello, I faced the same issue and I’m quite sure that this must be a bug in the unit test.
I had the issue you described when using the following code to calculate Z:

Z = W*X + b

This code solved the issue (both produce the exact same output):

Z = np.dot(W, X) + b

Those two expressions only give the same results if W is a scalar.
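A quick check of that: elementwise `*` and `np.dot` coincide only when W is (1, 1), because `*` then just broadcasts the single weight; for any larger W they are different operations. A minimal sketch (shapes chosen to mirror this thread):

```python
import numpy as np

X = np.random.randn(1, 30)

# With a (1, 1) W, elementwise * broadcasts the single weight across X,
# which happens to give the same numbers as a matrix product.
W_scalar = np.array([[2.0]])
print(np.allclose(W_scalar * X, np.dot(W_scalar, X)))  # True

# With a non-scalar W the two are different operations:
W = np.random.randn(1, 2)
X2 = np.random.randn(2, 5)
print(np.dot(W, X2).shape)    # (1, 5): a true matrix product
try:
    W * X2                    # broadcasting (1, 2) against (2, 5) fails
except ValueError as err:
    print(err)
```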


Hello again. I have been stuck on this assignment for two weeks, so I would really appreciate more than just a hint. I have tried using np.matmul and using W.transpose rather than just W. The issue clearly occurs only in the unit test, and until it is solved I cannot move forward in this course. Someone please help. I understand what is going wrong, but I do not know how to fix it, as the unit test is having the issue, not the code that is executed for the example in this problem.


To add on to my last comment, I have tried switching W and X and using transposes. I looked through other people’s comments and we have the same code, and for them everything worked. While the code in previous exercises may be different, I have received full credit for those parts.


I’m not a mentor for that course, so I don’t have first-hand experience with the materials.

See your private messages for a suggestion.
