Hi, I have a general question about setting weights and biases for the hidden layers.
The assignment was no problem, and I’m grateful that Andrew Ng had us write the same code in both NumPy and TensorFlow, but one thing I don’t understand is how the values of the weights and biases in the hidden layers are established.
How did we get the values for the weights and biases to begin with? I know we can see them with `print(model.layers[2].weights)`, but where did those specific values come from?
In general, the question is … when programmers create a neural network, do they have to assign a unique weight and bias to each neuron in each hidden layer at initialization?
TensorFlow does this for us, so we don’t have to. By default, TensorFlow initializes a layer’s weights randomly (Glorot uniform for `Dense` layers) and its biases to zeros.
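To see where those numbers come from, you can build a tiny model and inspect its parameters before any training happens. This is a minimal sketch with made-up layer sizes, not the assignment’s model:

```python
# Sketch (layer sizes are my own, not from the assignment): Keras creates
# and fills the weight/bias arrays when the model is built, before any
# training has happened.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(3, activation="relu"),
])
model.build(input_shape=(None, 4))  # 4 input features -> 3 units

w, b = model.layers[0].get_weights()
print(w.shape)  # (4, 3): one weight per input-feature/unit pair
print(b)        # all zeros: the default bias initializer
```

The kernel values in `w` will differ on every run, because they are drawn randomly from the Glorot uniform distribution; the biases start out as zeros.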
Hello @Rod_Bennett,
To supplement @Mujassim_Jamal’s answer, I just want to draw your attention to the parameters that are used to initialize a TensorFlow layer:
Basically, every layer that carries trainable weights exposes these “initializer” options, and they come with default choices like `'glorot_uniform'` for the kernel and `'zeros'` for the bias; these dictate how the weights are initialized. Since they have defaults, you don’t have to supply those parameters explicitly when creating the layers. However, whenever you wonder how a layer’s trainable weights and biases are initialized, go to that layer’s documentation page and look for the parameters whose names end with “initializer”.
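For example, here is a sketch that spells those parameters out explicitly. The values passed are the same ones `Dense` uses by default, so this layer behaves identically to `tf.keras.layers.Dense(3)`:

```python
# Sketch: passing the initializer parameters explicitly. These are the
# same defaults Dense already uses, shown here only to make them visible.
import tensorflow as tf

layer = tf.keras.layers.Dense(
    3,
    kernel_initializer="glorot_uniform",  # default weight initializer
    bias_initializer="zeros",             # default bias initializer
)
layer.build(input_shape=(None, 4))  # creates the (4, 3) kernel and (3,) bias
print(layer.get_weights()[1])       # bias vector starts as [0. 0. 0.]
```

You could swap in any other initializer here (e.g. `'he_normal'` or a `tf.keras.initializers` instance) to change how the starting values are drawn.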
Cheers,
Raymond