C1_W1_Lab05: cost_function, gradient_function

Where do the parameters (cost_function, gradient_function) in def gradient_descent() come from?

Hello @cajumago, I am not sure if I understand your question correctly, but when you call the function using this line (as is done in the lab):

w_final, b_final, J_hist, p_hist = gradient_descent(x_train, y_train, w_init, b_init, tmp_alpha,
                                                    iterations, compute_cost, compute_gradient)

you specify compute_cost as the input for cost_function and compute_gradient as the input for gradient_function.

compute_cost and compute_gradient are functions defined in earlier code cells, so you can see them if you scroll up in the notebook.
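To illustrate the idea of passing functions as arguments, here is a minimal, self-contained sketch in the spirit of the lab (the implementations below are simplified stand-ins, not the lab's exact code):

```python
import numpy as np

def compute_cost(x, y, w, b):
    # Mean squared error cost for the linear model f(x) = w*x + b
    m = x.shape[0]
    return np.sum((w * x + b - y) ** 2) / (2 * m)

def compute_gradient(x, y, w, b):
    # Partial derivatives of the cost with respect to w and b
    m = x.shape[0]
    err = w * x + b - y
    return np.sum(err * x) / m, np.sum(err) / m

def gradient_descent(x, y, w, b, alpha, num_iters, cost_function, gradient_function):
    # cost_function and gradient_function are plain function objects:
    # the caller passes them in by name, with no parentheses
    for _ in range(num_iters):
        dj_dw, dj_db = gradient_function(x, y, w, b)
        w -= alpha * dj_dw
        b -= alpha * dj_db
    return w, b, cost_function(x, y, w, b)

# Tiny dataset where y = 2x exactly, so w should approach 2 and b approach 0
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])
w, b, cost = gradient_descent(x, y, 0.0, 0.0, 0.1, 1000,
                              compute_cost, compute_gradient)
```

Because the names are passed without parentheses, `gradient_descent` receives the functions themselves and decides when to call them inside the loop.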

Let me know if you were asking about something else. :slight_smile:


Hola @rmwkwok,

100% clear!
Thank you for answering. Cheers!

Glad to hear that! And you are welcome @cajumago!

Hey Raymond,

I’ve been testing different values for the learning parameters using batch gradient descent, specifically with average home prices from around where I live, and I ran into the following error: RuntimeWarning: overflow encountered in double_scalars

I googled it and found that an alpha of 10**-8 works, since the cost converges to a steady value. Am I right? Is this a good process?

Thanks in advance!

Hello @cajumago, I think your direction is right - lowering alpha can prevent the cost from diverging (and overflowing). Since you are testing learning parameter values, I suggest you print out the cost value, say every 100 iterations. If alpha is too large, you may see the cost keep going up (diverging). Once it becomes too large, you will see the overflow warning.
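Here is a small sketch of that monitoring idea (a hypothetical helper, not from the lab): record the cost every 100 iterations, and stop early once it overflows.

```python
import numpy as np

def monitor_descent(x, y, w, b, alpha, num_iters):
    # Hypothetical helper: run gradient descent on f(x) = w*x + b,
    # recording the cost every 100 iterations to spot divergence early
    m = x.shape[0]
    history = []
    with np.errstate(over='ignore', invalid='ignore'):
        for i in range(num_iters):
            err = w * x + b - y
            w -= alpha * np.sum(err * x) / m
            b -= alpha * np.sum(err) / m
            if i % 100 == 0:
                cost = np.sum((w * x + b - y) ** 2) / (2 * m)
                history.append(cost)
                if not np.isfinite(cost):
                    # Cost overflowed to inf/nan: alpha is too large, stop
                    break
    return w, b, history

# Data following y = 2x + 1
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 5.0, 7.0, 9.0])

# Small alpha: the recorded cost shrinks steadily
_, _, ok_hist = monitor_descent(x, y, 0.0, 0.0, 0.01, 1000)

# Too-large alpha: the recorded cost grows until it overflows
_, _, bad_hist = monitor_descent(x, y, 0.0, 0.0, 1.0, 1000)
```

Watching the printed (or recorded) cost like this tells you whether a given alpha is converging long before the overflow warning appears.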

If you are trying to establish a “safe range” for alpha, then another suggestion of mine is to always normalize all of your features so that they all share very similar scales. You can do this for all ML problems in the future. The reason behind the need for normalization is explained here.
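A minimal sketch of z-score normalization (the housing numbers below are made up for illustration):

```python
import numpy as np

def zscore_normalize(X):
    # Rescale each feature column to zero mean and unit standard deviation
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / sigma, mu, sigma

# Hypothetical housing features: size (sqft) and bedroom count,
# which sit on very different scales before normalization
X = np.array([[2104.0, 3.0],
              [1416.0, 2.0],
              [ 852.0, 1.0]])

X_norm, mu, sigma = zscore_normalize(X)
```

After normalization every column shares a similar scale, so a single alpha works for all features; remember to keep `mu` and `sigma` around to apply the same transform to new inputs at prediction time.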

Lastly, good to know that you are testing the learning parameters. :slight_smile:


Thank you so much for your valuable feedback, @rmwkwok!