Let us assume I have an ANN with:

- input layer having 5 neurons
- hidden layer [1] having 3 neurons
- hidden layer [2] having 2 neurons

How do we calculate the dot product between two layers that have different dimensions, i.e., 3 and 2 neurons?


Hello @vink, thanks for your post.

I will do my best to answer your question.

First of all, as you say, let's assume you have two layers:

- Hidden layer [1] with 3 neurons: let's denote the activations of these neurons as **a1**, **a2**, and **a3**.
- Hidden layer [2] with 2 neurons: as with the first layer, denote the activations of these neurons as **b1** and **b2**.

You will have the weight matrices associated with each layer:

- Weight matrix W1: a 2D matrix that connects the input layer (5 neurons) to hidden layer [1] (3 neurons), so its dimensions will be (5 x 3).
- Weight matrix W2: also a 2D matrix, connecting hidden layer [1] (3 neurons) to hidden layer [2] (2 neurons), so its dimensions will be (3 x 2).

So you never take a dot product of the two activation vectors directly: the activations of hidden layer [1] (a 1 x 3 vector) are multiplied by W2 (3 x 2) to produce the pre-activations of hidden layer [2] (a 1 x 2 vector).
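A minimal NumPy sketch of this for a single sample, assuming tanh activations and ignoring bias terms (the variable names and random weights are illustrative, not from the original post):

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=(1, 5))    # one sample with 5 input features
W1 = rng.normal(size=(5, 3))   # connects input layer (5) to hidden layer [1] (3)
W2 = rng.normal(size=(3, 2))   # connects hidden layer [1] (3) to hidden layer [2] (2)

a = np.tanh(x @ W1)            # shape (1, 3): activations a1, a2, a3
b = np.tanh(a @ W2)            # shape (1, 2): activations b1, b2

print(a.shape, b.shape)        # (1, 3) (1, 2)
```

The inner dimensions always match (5 with 5, then 3 with 3), which is why layers of different sizes pose no problem for the matrix product.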

*To learn more about these topics, you can enroll in the Deep Learning Specialization.*

I hope this clears it up; feel free to ask for more clarification.

Best Regards,

Jamal

From "input layer having 5 neurons", you mean 5 features, right? If so, and let's say we have 100 samples, then X is a 5 x 100 (or 100 x 5) matrix, W1 will be a 3 x 5 (or 5 x 3) matrix, and W2 will be a 2 x 3 (or 3 x 2) matrix.
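A quick NumPy sketch of the batched version, using the "features x samples" convention (the 100-sample count, tanh activations, and random weights are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

m = 100                          # assumed number of samples
X = rng.normal(size=(5, m))      # 5 features by m samples
W1 = rng.normal(size=(3, 5))     # maps 5 inputs to 3 hidden units
W2 = rng.normal(size=(2, 3))     # maps 3 hidden units to 2 outputs

A1 = np.tanh(W1 @ X)             # (3, 5) @ (5, m) -> (3, m)
A2 = np.tanh(W2 @ A1)            # (2, 3) @ (3, m) -> (2, m)

print(A1.shape, A2.shape)        # (3, 100) (2, 100)
```

Whether you write W1 as (3 x 5) or (5 x 3) just depends on whether samples are columns or rows of X; the two conventions are transposes of each other.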