I have passed all the tests from the previous steps, so I assume my function definitions are correct. This is the logic I have followed (sketched in code after the list):
1. Initialize w and b with dimension X_train.shape[0].
2. Get params, grads, and costs using the optimize function I defined, with arguments w, b, X_train, and Y_train.
3. Retrieve w and b from the params dictionary.
4. Call the predict function with the updated w and b, on both X_test and X_train.
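In code, those steps look roughly like this (a sketch only; optimize and predict are my functions from the earlier exercises, and I am assuming the notebook's usual model signature and return dictionary):

```python
import numpy as np

def model(X_train, Y_train, X_test, Y_test,
          num_iterations=2000, learning_rate=0.5, print_cost=False):
    # 1. initialize w and b with dimension X_train.shape[0]
    w = np.zeros((X_train.shape[0], 1))
    b = 0.0

    # 2. run gradient descent -- called here exactly as described above,
    #    i.e. without forwarding num_iterations / learning_rate / print_cost,
    #    so optimize falls back to its own defaults
    params, grads, costs = optimize(w, b, X_train, Y_train)

    # 3. retrieve w and b from the params dictionary
    w = params["w"]
    b = params["b"]

    # 4. predict on both the test and the train set
    Y_prediction_test = predict(w, b, X_test)
    Y_prediction_train = predict(w, b, X_train)

    return {"costs": costs,
            "Y_prediction_test": Y_prediction_test,
            "Y_prediction_train": Y_prediction_train,
            "w": w, "b": b,
            "learning_rate": learning_rate,
            "num_iterations": num_iterations}
```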
I think I have performed all the steps properly; however, the w values calculated by gradient descent were not as expected. As I said, I passed all the function tests up to this point.
I faced the exact same error.
It turned out that I was not passing num_iterations, learning_rate, and print_cost when calling the optimize function.
Thank you very much, I had the exact same problem. I thought the default values of the optimize function were the same as the ones in the model definition. Once I passed those parameters when calling the function, all the tests passed.
I have the same errors. I tried to use the same values of num_iterations, learning_rate, and print_cost as in the optimize function, but nothing changed.
I still have the error. Is that the way to do it, or am I doing something else wrong?
Hi @Chinwe, it is not about using the same values but about passing along the parameters the function receives. For example, model receives num_iterations as a parameter, so you have to use num_iterations when calling optimize: not a hardcoded value, but the parameter itself.
In a simpler example, let's assume you have two functions, A and B:
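Something like this (a made-up sketch; the numbers mirror model's 0.5 default and optimize's 0.009 default):

```python
def B(learning_rate=0.009):
    # stands in for optimize, which has its own default learning rate
    return learning_rate

def A(learning_rate=0.5):
    # stands in for model
    b_default = B()                               # wrong: ignores A's parameter, uses 0.009
    b_forwarded = B(learning_rate=learning_rate)  # right: forwards A's parameter
    return b_default, b_forwarded

print(A())  # (0.009, 0.5) -- only the second call respects what A received
```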
In the beginning, learning_rate is defined as 0.5 and the expected answer is based on it, but if you copy and paste your code (like me) from the optimize function in Exercise 6, you force it to use a different learning_rate and you will get different results.
From Exercise 6:
optimize(w, b, X, Y, num_iterations=100, learning_rate=0.009, print_cost=False)
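So inside model the call should forward the parameters it received, something along these lines:

```python
params, grads, costs = optimize(w, b, X_train, Y_train,
                                num_iterations=num_iterations,
                                learning_rate=learning_rate,
                                print_cost=print_cost)
```

That way, whatever num_iterations and learning_rate are passed to model are actually used, instead of optimize's defaults.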