I am very new to TensorFlow and trying to understand the basics. Can someone help me understand the purpose of weights in TensorFlow and how they help a neuron? I saw the following example:
Every connection between units in a NN layer is a weight. The basic equation for a NN unit is that its output is the sum of each input multiplied by that input's weight, plus a bias term, then passed through some activation function - like ReLU or sigmoid.
Weights are inherent to every machine learning method. The weight values are what’s actually being learned.
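For concreteness, that equation can be sketched for a single unit in plain TensorFlow. The values of x, w, and b below are made up purely for illustration, not taken from any real model:

```python
import tensorflow as tf

# One unit: output = activation(sum of (input_i * weight_i) + bias)
x = tf.constant([1.0, 2.0, 3.0])    # inputs to the unit
w = tf.constant([0.4, -0.2, 0.1])   # one weight per input (illustrative values)
b = tf.constant(0.5)                # bias term

z = tf.reduce_sum(w * x) + b        # weighted sum of the inputs, plus bias
output = tf.nn.relu(z)              # passed through an activation (ReLU here)
print(output)                       # tf.Tensor(0.8, shape=(), dtype=float32)
```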
What I understand is that the input layer (with features) acts as input to the next layer. Are the weights/biases initialized randomly, or is there a connection between each input and its weight? And how do the weights assist in learning?
Consider the neural network as a series of successive layers. There will always be at least two - a hidden layer and an output layer.
Each adjacent pair of layers is connected by a matrix of weights. Every pair of units in two adjacent layers has one weight value.
So, for example, if you have 400 inputs and 25 hidden-layer units, the weight matrix that connects them will have 400 * 25 = 10,000 weight values.
The initial values of the weights are small random numbers.
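To make both points concrete, here is a small sketch using the example's numbers with a tf.keras Dense layer. The layer itself is just an illustration; the initializer named in the comment is the Keras default (glorot_uniform):

```python
import tensorflow as tf

# A Dense layer connecting 400 inputs to 25 hidden units holds a
# (400, 25) kernel, i.e. 400 * 25 = 10,000 weight values, plus 25 biases.
layer = tf.keras.layers.Dense(units=25, activation="relu")
layer.build(input_shape=(None, 400))  # creates the weight variables

kernel, bias = layer.weights
print(kernel.shape)  # (400, 25) -> 10,000 weights
print(bias.shape)    # (25,)

# Keras initializes the kernel with small random numbers by default
# (glorot_uniform) and the biases with zeros.
print(kernel.numpy()[:2, :3])  # a peek at the small random initial values
```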
Thanks for the detailed reply. I understand it better now.
One question though. I just read that weights are "the parameters that are adjusted during the training process to minimize the difference between the actual output and the target output". Is it mandatory to have weights? What if we didn't use weights at all?