I have read similar threads saying not to “hard code” the function parameters, but I’m not exactly sure what to modify.
The optimize function from exercise 6 is defined by default as: def optimize(w, b, X, Y, num_iterations=100, learning_rate=0.009, print_cost=False):
And the model function from exercise 8 is defined by default as: def model(X_train, Y_train, X_test, Y_test, num_iterations=2000, learning_rate=0.5, print_cost=False):
Also, my code for exercise 8 is:
I got exactly the same error when I was trying this exercise earlier today. I also noticed that the call to optimize() in exercise 8 is using X and Y instead of X_train and Y_train.
I removed the default values from the definition of optimize in both places, but now the error is “model() missing 1 required positional argument: ‘print_cost’”.
Are you sure it was resolved just by removing the default values?
Me too, I’ve got exactly the same issue here. I am looking into it and hope to figure it out soon. It is comforting to see other classmates having the same issue; it makes me feel less on my own with this.
The mistake that @DDigits made in the original post on this thread is that they did not pass any of the “optional” parameters to optimize when they called it from model. That means optimize runs with the default values of the learning rate and number of iterations that we declared in its definition. That’s a bug, because it ignores the actual values that were passed in at the top level to the model function.
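To see why that matters, here is a toy illustration (not the assignment code, just stubs with the same signatures) showing how Python silently falls back to the defaults when the arguments are not forwarded:

```python
# Stub versions with the same signatures as the notebook functions,
# just to demonstrate the default-argument behaviour.
def optimize(w, b, X, Y, num_iterations=100, learning_rate=0.009, print_cost=False):
    print(f"optimize() sees num_iterations={num_iterations}, learning_rate={learning_rate}")

def model(X_train, Y_train, X_test, Y_test, num_iterations=2000, learning_rate=0.5, print_cost=False):
    # BUG: the optional arguments are not forwarded, so optimize() uses its own defaults
    optimize(None, None, X_train, Y_train)

model(None, None, None, None, num_iterations=2000, learning_rate=0.005)
# prints: optimize() sees num_iterations=100, learning_rate=0.009
```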
Of course that is only one of many possible mistakes that can be made here. Another common one is referencing the global variables X and Y when calling optimize, instead of the actual X_train and Y_train values that were passed into model.
Another is “hard-coding” any of the optional parameters on the call to optimize.
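For concreteness, those wrong patterns would look something like this inside model (illustrative sketches only, with X and Y standing in for global test variables):

```python
# Wrong: references the global test variables X and Y instead of the
# X_train / Y_train arguments that were passed into model()
params, grads, costs = optimize(w, b, X, Y, num_iterations, learning_rate, print_cost)

# Wrong: hard-codes the optional parameters, so the values the caller
# actually passed to model() are ignored
params, grads, costs = optimize(w, b, X_train, Y_train,
                                num_iterations=2000, learning_rate=0.5, print_cost=False)
```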
The problem can be solved by deleting the default values assigned to num_iterations and learning_rate in the definition of optimize() in exercise 6, and calling optimize() in exercise 8 with the values that were passed into model() instead.
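For anyone else hitting this, the call inside model() just needs to forward the values it received. A minimal sketch of that one line, assuming optimize() returns params, grads, costs as in exercise 6:

```python
# Forward the arguments that were passed into model() so optimize()
# does not fall back to its own defaults
params, grads, costs = optimize(w, b, X_train, Y_train,
                                num_iterations=num_iterations,
                                learning_rate=learning_rate,
                                print_cost=print_cost)
```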