My understanding from the video is that weights are scalars (single values) and one weight corresponds to one neuron. But in the Python implementation, why are the weights vectors?

For example, W1_1 was taken as

[1, 2]

Why? And what does it mean?


There is a weight value for each feature. Multiple features mean multiple weights, and they are represented as a vector.

When writing code, it is best to handle the general case: a weight vector works for any number of features, whereas a single scalar would only cover one.
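A small NumPy sketch of this idea, using the [1, 2] vector from the question (the bias and input values here are made up for illustration):

```python
import numpy as np

# Hypothetical single neuron with two input features.
# Each feature gets its own weight, so the weights form a vector.
w1_1 = np.array([1, 2])   # the weight vector [1, 2] from the question
b1_1 = -1                 # an assumed bias value, just for illustration
x = np.array([3, 4])      # one training example with two features

# Pre-activation: a dot product handles any number of features.
z = np.dot(w1_1, x) + b1_1   # 1*3 + 2*4 - 1 = 10
print(z)
```

The same code works unchanged if the example has 10 features and the neuron has 10 weights, which is why the vector representation is used.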


You may also watch the video in C2 W1 starting from around 1:30 for an example of multiple weights in a neuron.

Any dense layer will have many neurons in it. For example, take a dense layer with 25 neurons, where each neuron in that layer has its own w and b (bias). Suppose layer 1 has 3 neurons; then w1_1, w2_1, and w3_1 together make up a vector. Hence W is a vector.

And internally your input is represented as a 2D array. Hence each w1_1, w2_1, etc. is also converted to 2D (matrix representation) for uniformity.

Please correct me if I am wrong.
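The 2D convention mentioned above can be sketched in NumPy like this (shapes follow the usual framework convention; the values are placeholders):

```python
import numpy as np

# Even a single example and a single neuron are stored as 2D arrays
# so that the same matrix-multiply code covers every case.
x = np.array([[3, 4]])        # shape (1, 2): 1 example, 2 features
W1_1 = np.array([[1], [2]])   # shape (2, 1): 2 features, 1 neuron

z = np.matmul(x, W1_1)        # matrix product, shape (1, 1)
print(z.shape)
```

Keeping everything 2D means the same `matmul` call works whether the layer has 1 neuron or 25, and whether you pass 1 example or a whole batch.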

A few points I think we can clarify:

- Any dense layer can have one or more neurons
- Each neuron in a layer has zero or one bias b, and a set of weights. The number of weights is equal to the number of neurons in the last (previous) layer.
- When the "last" layer has 3 neurons, then each neuron in "this" layer has a 1D vector of weights of length 3. If "this" layer has 25 neurons, then "this" layer has a 2D matrix of weights of size 3 x 25.
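The shapes in the points above can be checked with a short NumPy sketch (the weight values are random placeholders, not real trained weights):

```python
import numpy as np

n_prev = 3    # neurons in the "last" (previous) layer
n_this = 25   # neurons in "this" layer

rng = np.random.default_rng(0)
W = rng.standard_normal((n_prev, n_this))  # each column: one neuron's 3 weights
b = rng.standard_normal((1, n_this))       # one bias per neuron

a_prev = rng.standard_normal((1, n_prev))  # activations from the last layer
z = np.matmul(a_prev, W) + b               # one pre-activation per neuron
print(z.shape)
```

Each of the 25 columns of `W` is one neuron's length-3 weight vector, which is exactly the 3 x 25 matrix described above.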

Raymond