I am trying to understand, in a neural network model's layers (especially the first one), how the algorithm decides which values to take for w and b in the equation z = w*x + b. Here x is the input, and sigmoid(z) (or another activation function) is the output. But how does the algorithm decide the values of w and b for each neuron?
Welcome to our community.
The algorithm starts with random values for all the weights; b can be initialized to a random value or to 0. From there on, the gradient descent algorithm modifies the values of w and b on each iteration until the convergence criterion is met (or the stipulated number of iterations is completed).
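As a rough sketch of that idea, here is a single neuron with a sigmoid activation trained by plain gradient descent. The dataset, learning rate, and iteration count are all made up for illustration; real frameworks do this for you:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical toy dataset: label is 1 when x > 2, else 0
data = [(0.0, 0), (1.0, 0), (3.0, 1), (4.0, 1)]

random.seed(0)
w = random.uniform(-0.5, 0.5)  # weight starts at a random value
b = 0.0                        # bias can start at zero

lr = 0.5                       # learning rate (arbitrary choice here)
for _ in range(2000):          # each iteration nudges w and b
    dw = db = 0.0
    for x, y in data:
        a = sigmoid(w * x + b)  # forward pass: a = sigmoid(z), z = w*x + b
        dw += (a - y) * x       # gradient of cross-entropy loss w.r.t. w
        db += (a - y)           # gradient of cross-entropy loss w.r.t. b
    w -= lr * dw / len(data)    # gradient descent update for w
    b -= lr * db / len(data)    # gradient descent update for b

print(round(sigmoid(w * 3.0 + b)))  # x=3 is now classified as 1
```

The same update rule, applied layer by layer via the chain rule, is what backpropagation does in a deeper network.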
The weight values are updated while training the NN. The method is called “backpropagation”, if you want to research it online.
This course doesn’t discuss the NN training process in detail; it’s handled for you automatically when you use an ML package like TensorFlow.
Thank you guys for the responses. I understand now.