Single Perceptron Neural Networks for Linear Regression Assignment
I am getting the error below in exercise 4, section 2.3 (the loop):

shapes (1,1) and (30,1) not aligned: 1 (dim 1) != 30 (dim 0)

Please guide me.
Can you please tell me the line where you are getting the error?
Is it when computing Z, or in the assert?
Normally the traceback will indicate the line.
ValueError Traceback (most recent call last)
in <module>
----> 1 Y_hat = forward_propagation(X, parameters)
      2
      3 print(Y_hat)

in forward_propagation(X, parameters)
     19 ### START CODE HERE ### (~ 2 lines of code)
     20 Z = np.dot(w, X) + b
---> 21 Y_hat = forward_propagation(X.T, parameters)
     22 ### END CODE HERE ###
     23

in forward_propagation(X, parameters)
     18 # Implement Forward Propagation to calculate Z.
     19 ### START CODE HERE ### (~ 2 lines of code)
---> 20 Z = np.dot(w, X) + b
     21 Y_hat = forward_propagation(X.T, parameters)
     22 ### END CODE HERE ###

<__array_function__ internals> in dot(*args, **kwargs)

ValueError: shapes (1,1) and (30,1) not aligned: 1 (dim 1) != 30 (dim 0)
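For reference, the mismatch in the traceback can be reproduced in isolation. The shapes below are taken from the error message itself (W of shape (1, 1), and X.T of shape (30, 1) after the recursive call transposes X); the variable names are illustrative:

```python
import numpy as np

W = np.zeros((1, 1))   # weight matrix, shape (1, 1) as reported in the traceback
X = np.zeros((1, 30))  # 30 training examples laid out as columns

Z = np.dot(W, X)       # (1, 1) @ (1, 30) -> (1, 30): inner dimensions match
print(Z.shape)         # (1, 30)

try:
    np.dot(W, X.T)     # (1, 1) @ (30, 1): inner dimensions 1 and 30 differ
except ValueError as e:
    print(e)           # shapes (1,1) and (30,1) not aligned ...
```

This points at line 21: the recursive call `forward_propagation(X.T, parameters)` feeds the transposed X back into the same `np.dot`, which is what raises the error.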
I am getting errors on all exercises when I submit for grading, but the notebook shows all tests passed.

ERROR
There was a problem compiling the code from your notebook. Details:
matmul: Input operand 1 has a mismatch in its core dimension 0, with gufunc signature (n?,k),(k,m?)->(n?,m?) (size 2 is different from 1)

I hope there is a problem with Coursera …
Dear @Somasekhar_Donthu
Please find my comments below. A problem on Coursera's side is unlikely; however, everything is possible.
Can you please try using np.matmul(W,X) + b instead of the dot product and tell me if you are getting the same error?
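As a quick sanity check of that suggestion, here is what the shapes should look like. The concrete shapes are assumptions based on this single-perceptron setup: W of shape (1, 1), a scalar bias b, and X of shape (1, m) with examples as columns:

```python
import numpy as np

W = np.array([[2.0]])                          # (1, 1) weight
b = 0.5                                        # scalar bias, broadcast over all examples
X = np.arange(30, dtype=float).reshape(1, 30)  # (1, 30): m = 30 examples as columns

Z = np.matmul(W, X) + b                        # (1, 1) @ (1, 30) -> (1, 30)
print(Z.shape)                                 # (1, 30)
```

If this shape check fails in your notebook, the problem is how X or W is laid out, not the choice between np.dot and np.matmul.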
The starter code defines W in upper case, but I see you are using it in lower case. Can you please confirm whether you changed the source code? If so, can you please define W and b like this:
W = parameters["W"]
b = parameters["b"]
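Putting the two comments together, a minimal sketch of what the forward pass should look like (the shapes and parameter names are assumptions about this assignment; the key points are that there is no recursive call, and that for linear regression Y_hat is simply Z):

```python
import numpy as np

def forward_propagation(X, parameters):
    """Linear forward pass: Z = W X + b, Y_hat = Z (identity activation)."""
    W = parameters["W"]      # shape (1, n_x)
    b = parameters["b"]      # shape (1, 1), broadcast across the m examples
    Z = np.matmul(W, X) + b  # (1, n_x) @ (n_x, m) -> (1, m)
    Y_hat = Z                # no recursion: the linear-regression output is Z itself
    return Y_hat

# Hypothetical example: one feature, three examples
parameters = {"W": np.array([[3.0]]), "b": np.array([[1.0]])}
X = np.array([[1.0, 2.0, 3.0]])              # (1, 3)
print(forward_propagation(X, parameters))    # [[ 4.  7. 10.]]
```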
Not using np.matmul() was the issue