Final model for Logistic_Regression_with_a_Neural_Network_mindset

Hi, I have this error and I'm waiting for your advice:

--------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-84-9408a3dffbf6> in <module>
      1 from public_tests import *
      2 
----> 3 model_test(model)

~/work/release/W2A2/public_tests.py in model_test(target)
    113     y_test = np.array([0, 1, 0])
    114 
--> 115     d = target(X, Y, x_test, y_test, num_iterations=50, learning_rate=0.01)
    116 
    117     assert type(d['costs']) == list, f"Wrong type for d['costs']. {type(d['costs'])} != list"

<ipython-input-83-2d7f53637f65> in model(X_train, Y_train, X_test, Y_test, num_iterations, learning_rate, print_cost)
     35     # YOUR CODE STARTS HERE
     36     w, b = initialize_with_zeros(X_test.shape[1])
---> 37     params, grads, costs = optimize(X_train, Y_train, X_test, Y_test, num_iterations=2000, learning_rate=0.5, print_cost=False)
     38     w = params["w"]
     39     b = params["b"]

<ipython-input-15-306ca0cc307a> in optimize(w, b, X, Y, num_iterations, learning_rate, print_cost)
     36         # YOUR CODE STARTS HERE
     37 
---> 38         grads, cost = propagate(w, b, X, Y)
     39         # YOUR CODE ENDS HERE
     40 

<ipython-input-68-fe7d4b0b6f25> in propagate(w, b, X, Y)
     29     # cost = ...
     30     # YOUR CODE STARTS HERE
---> 31     A = sigmoid((np.dot(w.T,X)+b))
     32     #cost = -1/m * np.sum( np.dot(np.log(A), Y.T) + np.dot(np.log(1-A), (1-Y.T)))
     33     cost = (- 1 / m) * np.sum(Y * np.log(A) + (1 - Y) * (np.log(1 - A)))

ValueError: operands could not be broadcast together with shapes (7,3) (1,7) 
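The failure can be reproduced in isolation. A minimal sketch (shapes taken from the traceback, data made up) showing that NumPy refuses to broadcast a (7, 3) array against a (1, 7) one:

```python
import numpy as np

# Two arrays with the shapes from the traceback: neither dimension pair
# matches or is 1 on both sides, so broadcasting is impossible.
a = np.random.rand(7, 3)
b = np.random.rand(1, 7)
try:
    a * b
except ValueError as e:
    print(e)  # operands could not be broadcast together with shapes (7,3) (1,7)
```

Shape mismatches like this usually point back to how the arrays were created, not to the line where they collide.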

Why do you initialize using your test set?

Thanks for the reply. Yes, I am confused here; I have tried all the parameters but get the same error. Please advise.

It doesn’t matter in practice whether you use X_train or X_test, because they have the same number of pixels, but usually you take the shapes from X_train. Note also that you want shape[0] (the number of features per example), not shape[1] (the number of examples).
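A minimal sketch of why the dimension matters (hypothetical array sizes; X is laid out one column per example, as in this assignment):

```python
import numpy as np

def initialize_with_zeros(dim):
    # Zero weight vector of shape (dim, 1) and a scalar bias.
    w = np.zeros((dim, 1))
    b = 0.0
    return w, b

X_train = np.random.rand(4, 7)            # 4 features (rows), 7 examples (columns)
w, b = initialize_with_zeros(X_train.shape[0])  # shape[0] = number of features

z = np.dot(w.T, X_train) + b              # (1, 4) @ (4, 7) -> (1, 7), broadcasts fine
print(w.shape, z.shape)                   # (4, 1) (1, 7)
```

Initializing with shape[1] instead would give w the wrong number of rows, and the dot product would fail with exactly the kind of broadcast error above.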

Don’t hardcode the number of iterations, use the num_iterations argument instead. The same goes for learning_rate and print_cost.
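The point about not hardcoding can be sketched like this (optimize() here is a stub standing in for the course's function, just to show the hyperparameters flowing through):

```python
import numpy as np

def optimize(w, b, X, Y, num_iterations=100, learning_rate=0.009, print_cost=False):
    # Stub: just report which hyperparameters actually arrived.
    return {"num_iterations": num_iterations, "learning_rate": learning_rate}

def model(X_train, Y_train, X_test, Y_test, num_iterations=2000,
          learning_rate=0.5, print_cost=False):
    w = np.zeros((X_train.shape[0], 1))
    b = 0.0
    # Forward the caller's arguments instead of hardcoding 2000 / 0.5:
    return optimize(w, b, X_train, Y_train,
                    num_iterations=num_iterations,
                    learning_rate=learning_rate,
                    print_cost=print_cost)

d = model(np.zeros((4, 7)), np.zeros((1, 7)),
          np.zeros((4, 3)), np.zeros((1, 3)),
          num_iterations=50, learning_rate=0.01)
print(d)  # the grader's values (50, 0.01) reach optimize()
```

With hardcoded values, the grader's call with num_iterations=50 would silently run 2000 iterations and fail the assertions.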

How have you implemented this part?
# BACKWARD PROPAGATION (TO FIND GRAD)
#(≈ 2 lines of code)
# dw = …
# db = …


Thanks, yes. Here is my implementation:

    dw = (1 / m) * np.dot(X, (A - Y).T)
    db = (1 / m) * np.sum(A - Y)
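Those gradient formulas can be shape-checked in isolation (small made-up arrays; sigmoid written inline):

```python
import numpy as np

m = 5
X = np.random.rand(3, m)                       # 3 features, m examples
Y = (np.random.rand(1, m) > 0.5).astype(float)  # labels, shape (1, m)
w = np.zeros((3, 1))
b = 0.0
A = 1 / (1 + np.exp(-(np.dot(w.T, X) + b)))    # activations, shape (1, m)

dw = (1 / m) * np.dot(X, (A - Y).T)   # shape (3, 1), matches w
db = (1 / m) * np.sum(A - Y)          # scalar

print(dw.shape)  # (3, 1)
```

So the formulas themselves are fine; dw has the same shape as w, which is what gradient descent needs.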

Download and send me your notebook as a private message. I don’t see the error yet.


Hi, I found the error: I had forgotten to pass w and b in the code below.

Thanks a lot for your help, which led me to find the issue.

    w, b = initialize_with_zeros(X_train.shape[0])
    params, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost=print_cost)
