Hi all,
I had posted this question on the Coursera forums but was asked to post it here, so apologies for the repeated post.
The model() function that we are expected to implement, which combines all the bits of code written earlier into one function, takes X_train and Y_train as inputs. However, the call to optimize() within this function uses X and Y (at least in my workbook).
I changed this to X_train and Y_train. Please correct me if I am wrong.
After doing so, I am getting the following error when running model_test():
I am reusing all the existing functions that passed validation, so I am not sure whether this is caused by the change I made to the optimize() call or by something else.
Hi @dsiyer, welcome to the DLS community! It seems odd that the call to optimize() would use X and Y. Could you indicate where in the code you have made this update, and what version of the notebook you are using (under instructions, Update)?
Hi sjfischer,
In model(), the code below is what I am talking about:
params, grads, costs = optimize(w, b, X_train, Y_train, num_iterations=100, learning_rate=0.009, print_cost=False) # Was originally X and Y
I also noticed that in this exercise, when we have to implement the predict() function, the call to "import numpy as np" is missing, because the following lines of code appear outside the section where I am expected to add code:
m = X.shape[1]
Y_prediction = np.zeros((1, m)) #np has not been imported
w = w.reshape(X.shape[0], 1)
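For reference, here is a minimal runnable sketch of that scaffold with the import added (the toy shapes and values are my own, just for illustration; in the notebook, numpy is imported in an earlier cell, which is why this section omits it):

```python
import numpy as np  # imported in an earlier notebook cell, so the scaffold omits it

# Toy inputs: 3 features, 2 examples (shapes chosen only for illustration)
X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
w = np.array([0.1, -0.2, 0.3])

m = X.shape[1]                    # number of examples
Y_prediction = np.zeros((1, m))   # row vector to hold one prediction per example
w = w.reshape(X.shape[0], 1)      # ensure w is a column vector of weights

print(Y_prediction.shape, w.shape)  # (1, 2) (3, 1)
```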
I don't understand how to get the version number of my Jupyter notebook, and I do not understand what "Instructions, Update" means.
You are hardcoding some of the parameters when calling optimize() (e.g. num_iterations). You should use the values received by the model() function, i.e. num_iterations is an input to model() that you should pass along when calling optimize().
I was getting the same issue, and when I removed the hardcoded values for num_iterations and learning_rate, it was resolved. Thanks for the feedback!
That invocation of optimize() is wrong: you did not specify values for the learning rate, the number of iterations, or the print flag, so you end up with the default values of those parameters as declared in the function definition of optimize().
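As an illustration, here is a minimal sketch of how model() should forward its own arguments to optimize(). The sigmoid and optimize bodies below are simplified stand-ins I wrote for demonstration, not the notebook's graded code; only the argument-forwarding pattern is the point:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def optimize(w, b, X, Y, num_iterations=100, learning_rate=0.009, print_cost=False):
    # Simplified gradient-descent loop (stand-in for the notebook's version)
    m = X.shape[1]
    costs = []
    for i in range(num_iterations):
        A = sigmoid(np.dot(w.T, X) + b)          # forward pass
        dw = np.dot(X, (A - Y).T) / m            # gradient w.r.t. w
        db = np.sum(A - Y) / m                   # gradient w.r.t. b
        w = w - learning_rate * dw
        b = b - learning_rate * db
        if i % 100 == 0:
            cost = -np.mean(Y * np.log(A) + (1 - Y) * np.log(1 - A))
            costs.append(cost)
            if print_cost:
                print(f"Cost after iteration {i}: {cost}")
    return {"w": w, "b": b}, {"dw": dw, "db": db}, costs

def model(X_train, Y_train, num_iterations=2000, learning_rate=0.5, print_cost=False):
    w = np.zeros((X_train.shape[0], 1))
    b = 0.0
    # Key point: forward model()'s own arguments rather than hardcoding them,
    # otherwise the test harness's num_iterations/learning_rate are ignored.
    params, grads, costs = optimize(w, b, X_train, Y_train,
                                    num_iterations=num_iterations,
                                    learning_rate=learning_rate,
                                    print_cost=print_cost)
    return params
```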
I get from a print statement in the optimize function:
num_iterations 50
learning_rate 0.01
But still I get an assertion error:
AssertionError: Wrong values for d['w'].
[[-0.15608203]
 [ 0.15736202]
 [ 0.21618227]
 [-0.22969068]] != [[ 0.08639757]
 [-0.08231268]
 [-0.11798927]
 [ 0.12866053]]
I assume that the correct values for num_iterations and learning_rate are being passed to optimize(), but I still don't see where the mistake is.
That should be the correct call, but you are still hard-coding the value of the print flag. It shouldn't harm your results, but it is still a mistake.
Those are the correct iteration count and learning rate for that model() test case, so something else must be wrong. If your optimize() function passed its test case, then the problem must be in model(). E.g. are you sure you used the returned params dictionary to retrieve the w and b values later in model()?
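For example (a sketch with made-up values standing in for the dict that optimize() returns, not the notebook's real output):

```python
import numpy as np

# `params` stands in for the dict returned by optimize(); values are illustrative only
params = {"w": np.array([[0.1], [0.2]]), "b": 0.5}

# Common mistake: keep using the initial w and b after optimization finishes.
# Fix: retrieve the updated parameters from the returned dict before predicting.
w = params["w"]
b = params["b"]
print(w.shape, b)  # (2, 1) 0.5
```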