Can someone explain what weights and biases are?

Weights and biases were not explained in the lecture videos, but I came across this topic while doing the Optional Practice Lab C2_W1_Lab01_Neurons_and_Layers. There I came across this code and markdown:

Now let’s look at the weights and bias. These weights are randomly initialized to small numbers and the bias defaults to being initialized to zero.

In [6]: w, b= linear_layer.get_weights()
print(f"w = {w}, b={b}")

Ou [6] : w = [[-1.03]], b=[0.]

A linear regression model (equation (1) in the lab) with a single input feature will have a single weight and bias. This matches the dimensions of our linear_layer above.
The weights are initialized to random values, so let's set them to some known values.

In [7]: set_w = np.array([[200]])
        set_b = np.array([100])
        # set_weights takes a list of numpy arrays
        linear_layer.set_weights([set_w, set_b])
        print(linear_layer.get_weights())

Out[7]: [array([[200.]], dtype=float32), array([100.], dtype=float32)]

Anyway, please explain: what are they, and why do we need them? And how does the code work here?

I am guessing they might be the parameters w and b, since we are using linear regression, where w = weight and b = bias. If that is the case, then why are we loading / instantiating them like this?

In high school, we studied the equation of a line as y = mx + c, where m is the slope and c is the y-intercept. In machine learning, we have y = wx + b. This w (weight) and b (bias) are the parameters, as you said. Maybe reading this article of mine will help you understand it more.
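
As a tiny worked example of that equation, with made-up numbers:

w, b = 3, 2       # slope and intercept, like m and c
x = 4
y = w * x + b     # 3 * 4 + 2
print(y)          # 14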


In a neural network, both “bias” and “weight” are essential components that play crucial roles in the network’s ability to learn and make predictions.

Bias serves as a type of "baseline activation" that helps the network account for input distribution variations that are not captured by the weights alone. Without a bias, the model's fitted line would be forced to pass through the origin (0, 0), limiting the network's ability to fit data whose intercept is not zero.
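
Here is a minimal numeric sketch of that point, with made-up values:

import numpy as np

w, b = 2.0, 5.0
x = np.array([0.0, 1.0, 2.0])

print(w * x)      # [0. 2. 4.] -- without bias, the line is pinned to the origin
print(w * x + b)  # [5. 7. 9.] -- the bias shifts the line to fit a nonzero intercept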

The weights are crucial because they allow the neural network to learn the underlying patterns and relationships in the data. By adjusting the weights, the network can fit complex nonlinear functions, making it capable of learning from and generalizing to new data.

Bias and weights are used in neural networks for the following reasons:

  1. Modeling Flexibility: The combination of weights and biases enables neural networks to approximate complex functions and capture intricate patterns in the data. This flexibility is one of the main reasons why neural networks can be powerful tools for various machine learning tasks.
  2. Learning from Data: During the training process, the model adjusts the weights and biases to minimize the difference between its predictions and the actual outputs. This learning process allows the network to adapt and improve its performance on the task at hand (see the sketch below).
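
As a rough sketch of point 2, here is plain gradient descent for the single-feature linear model y = wx + b, with toy data and a learning rate chosen purely for illustration:

import numpy as np

# Toy data generated from y = 2x + 1
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2 * x + 1

w, b = 0.0, 0.0  # start from arbitrary parameter values
lr = 0.1         # learning rate
for _ in range(1000):
    error = (w * x + b) - y
    # gradients of the mean squared error with respect to w and b
    w -= lr * 2 * np.mean(error * x)
    b -= lr * 2 * np.mean(error)

print(w, b)  # approaches 2.0 and 1.0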

In summary, biases and weights are fundamental elements in a neural network that allow the model to learn from data, make accurate predictions, and model complex relationships in the data. They contribute to the network’s ability to generalize and perform well on various machine learning tasks.

Hi @Annie_Chakraborty, thank you for your questions.

what are they and why do we need them?

Weights and biases are parameters that are adjusted to minimize the difference between the predictions and the true labels from our training data. Without them, we would have no way to tune the model to learn and properly represent the data.

why are we loading / instantiating them like this?

In TensorFlow/Keras, after defining and training the model, for example (toy data added here for illustration):

import numpy as np
from tensorflow import keras

input_data = np.array([[0.0], [1.0], [2.0], [3.0]])  # toy inputs
output_labels = 2 * input_data + 1                   # labels from y = 2x + 1

model = keras.Sequential()
model.add(keras.layers.Dense(units=1, input_shape=(1,)))
model.compile(optimizer='sgd', loss='mse')
model.fit(input_data, output_labels, epochs=1000, verbose=0)

The following line retrieves the learned weights and biases after training:

weights, biases = model.layers[0].get_weights()

get_weights() returns both weights and biases after the model was trained.
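
To sanity-check the result, you can print the learned parameters and run a prediction. With the toy data above (generated from y = 2x + 1), the learned values should approach w = 2 and b = 1:

print(weights, biases)                    # roughly [[2.]] and [1.]
print(model.predict(np.array([[5.0]])))   # close to [[11.]], i.e. 2 * 5 + 1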