In Lab 2 with the sigmoid function, `w` is set as `w_in = np.zeros((1))`. Why do we set `w = [0.]`, and how does that work? Because `w * x` will give 0? Or not?

Hi @D1ZER99 ,

Initializing the weight to zero is common when we don’t know what values to use. The parameters don’t stay at zero: they are updated after the gradient calculation at each iteration.

This lab assignment follows the lecture showing how the model obtains a set of parameters where the cost is at the minimum, so setting the initial weight to zero is a good introduction.
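To see why starting at zero is fine, here is a minimal sketch (not the lab’s exact code, and using a plain linear model with squared-error cost rather than the sigmoid, just to keep it short): the weight begins at `0.0`, like `w_in = np.zeros((1))`, and gradient descent moves it away from zero on every iteration.

```python
import numpy as np

def gradient_descent(x, y, alpha=0.05, iters=5000):
    """Fit f(x) = w*x + b by gradient descent on squared error."""
    w, b = 0.0, 0.0                      # zero initialization, like w_in
    m = len(x)
    for _ in range(iters):
        f = w * x + b                    # predictions (all 0 on iteration 1)
        dj_dw = np.sum((f - y) * x) / m  # gradient of cost w.r.t. w
        dj_db = np.sum(f - y) / m        # gradient of cost w.r.t. b
        w -= alpha * dj_dw               # w leaves 0 after the first update
        b -= alpha * dj_db
    return w, b

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0                        # data generated with w=2, b=1
w, b = gradient_descent(x, y)
print(w, b)                              # close to 2 and 1
```

So yes, with `w = 0` the very first prediction is `w * x = 0`, but that just means the first gradient is large, and the updates quickly move `w` toward values that minimize the cost.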

Hi @Kic,

Thank you for your answer. For some reason I had forgotten that `w` is updated by gradient descent after each iteration. Now it is clear to me.

Ty