Parameters w and b in linear regression

Please, what is the best way of finding the appropriate parameters w and b for linear regression in code? Do I just have to keep picking random numbers? Can anyone answer this? Thanks.

Hi!

The best values for w and b are found by minimizing a cost function with an optimization algorithm such as gradient descent, as covered in the lectures.

I suggest reviewing the videos and lab from Course 1 Week 1’s section “Train the model with gradient descent”.
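
To make this concrete, here is a minimal NumPy sketch of gradient descent for one-variable linear regression. The function name, variable names, learning rate, and data are illustrative, not the exact code from the lab:

```python
import numpy as np

def gradient_descent(x, y, w_init, b_init, alpha, num_iters):
    """Repeatedly step w and b in the direction of the negative gradient
    of the squared-error cost until (approximately) reaching the minimum."""
    w, b = w_init, b_init
    for _ in range(num_iters):
        y_hat = w * x + b                  # model predictions f(x) = w*x + b
        dj_dw = np.mean((y_hat - y) * x)   # partial derivative of the cost w.r.t. w
        dj_db = np.mean(y_hat - y)         # partial derivative of the cost w.r.t. b
        w = w - alpha * dj_dw              # gradient descent update
        b = b - alpha * dj_db
    return w, b

# Tiny synthetic example: points lying on y = 2x + 1
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 5.0, 7.0, 9.0])
w, b = gradient_descent(x, y, w_init=0.0, b_init=0.0, alpha=0.1, num_iters=1000)
print(w, b)  # ends up close to w = 2, b = 1
```

The key point is that w_init and b_init are only a starting point; the update loop, not the person, finds the final values.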

Okay. Thanks, I’ll check again.


Hello SamReiswig. I am still confused. I have checked again, including the optional lab. In the code used to compute the cost and run gradient descent, the parameters w and b were pre-chosen before running the code. My problem is how we arrive at these pre-chosen parameters w and b.

You can see that w_init and b_init were chosen before computing the cost. My issue remains how we choose them and whether there are any rules to consider before pre-choosing the parameters w and b.

I don’t know of any rules for choosing the initial w or b values.

Hi @Abdulraqib_Omotosho, it’s gradient descent’s job to update the parameters to a set of optimal values. There are no strict rules for choosing the initial parameters, but usually we initialize bias terms to zero and the other weights randomly. If there is one rule worth keeping in mind around weight initialization, it is actually not about the weights themselves but about scaling the features (covered in Week 2).
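
For example, a typical initialization following that convention might look like the sketch below (the number of features and the random scale are hypothetical, not taken from the course labs):

```python
import numpy as np

rng = np.random.default_rng(seed=0)  # fixed seed only so the run is reproducible

n_features = 4                                              # hypothetical feature count
b_init = 0.0                                                # bias usually starts at zero
w_init = rng.normal(loc=0.0, scale=0.01, size=n_features)   # small random weights

# Gradient descent then moves these starting values toward the optimum.
# For linear regression the cost is convex, so even w_init = np.zeros(n_features)
# converges to the same solution.
```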

Cheers,
Raymond

Thank you @rmwkwok. I understand much better now.