My understanding from the video is that weights are scalars (a single value) and that one weight corresponds to one neuron. But in the Python implementation, why are weights vectors?
For example, W1_1 was taken as
[1
 2]
Why, and what does it mean?
There is a weight value for each input feature, so multiple features mean multiple weights per neuron. They are represented together as a vector.
When writing code, it is best to handle the general case of any number of features, which is why the weights are stored as a vector rather than a single scalar.
You may also watch this video in C2 W1, starting from around 1:30, for an example of multiple weights in a neuron.
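As a minimal sketch of the point above (the numeric values are illustrative; w1_1 = [1, 2] matches the vector quoted in the question), a neuron that receives 2 input features needs one weight per feature, while its bias stays a single scalar:

```python
import numpy as np

x = np.array([3.0, 4.0])      # input with 2 features (illustrative values)
w1_1 = np.array([1.0, 2.0])   # one weight per feature -> a weight vector
b1_1 = 0.5                    # the bias is still a single scalar per neuron

# The neuron computes a dot product over all features, plus the bias:
z = np.dot(w1_1, x) + b1_1    # 1*3 + 2*4 + 0.5 = 11.5
print(z)
```

If there were only one feature, the dot product would collapse back to the single scalar multiplication shown in the video; the vector form simply generalizes it.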
Any dense layer will have many neurons in it. For example, take a dense layer with 25 neurons, where each neuron in that layer has its own w and b (bias). Suppose layer 1 has 3 neurons; then w1_1, w2_1, and w3_1 together make up the layer's weights. Hence each W is a vector.
And internally your input is a 2-D array; hence each of w1_1, w2_1, etc. is also converted to a 2-D (vector) representation for uniformity.
Please correct me if I am wrong.
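To sketch how the per-neuron weight vectors described above combine at the layer level (a hypothetical layer with 2 input features and 3 neurons; all numbers are illustrative), each neuron's weight vector can be stacked as one column of a 2-D weight matrix W:

```python
import numpy as np

x = np.array([3.0, 4.0])            # input: 2 features (illustrative values)

# Column j holds neuron j's weight vector, so W has
# shape (number of features, number of neurons) = (2, 3).
W = np.array([[1.0, 3.0, 5.0],
              [2.0, 4.0, 6.0]])
b = np.array([0.5, 0.5, 0.5])       # one bias per neuron

# One matrix product computes every neuron's dot product at once:
z = np.matmul(x, W) + b             # shape (3,): [11.5, 25.5, 39.5]
print(z)
```

This is why the full W for a layer is 2-D: one axis runs over input features, the other over neurons in the layer.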
A few points I think we can clarify:
Raymond