Computing A2 after computing A1

In the image below, we compute A1 (a_out) as [[1, 0, 1]]

How is this used to compute A2?
I assume that A1 is the new input vector applied to Layer 2.
Layer 2 has only one neuron, so will there be only one W vector?

Perhaps I’m tired, but I can’t get to the next calculation.
Can you please give me an example calculation?

Hello @JPGuittard

The shape of the weights in layer 2 depends on both its own number of neurons (which is 1, as you have pointed out) and the number of neurons in the previous layer (which is 3). So the shape of the weights in layer 2 will be (3, 1).

On the other hand, the shape of layer 2’s bias term is (1, 1). The first 1 is always 1 no matter what, but the second 1 is 1 because there is only one neuron in layer 2.

The slide does not have an example weight/bias array for layer 2, and it was not meant to demonstrate the computed result of A^{[2]}, so naturally you can’t compute it just from the slide. For example, you might set W^{[2]} to [[1], [2], [3]] and B^{[2]} to [[4]]; then Z^{[2]} = A^{[1]} W^{[2]} + B^{[2]} = [[1·1 + 0·2 + 1·3 + 4]] = [[8]]. With a linear activation A^{[2]} would be [[8]] as well (with a sigmoid activation it would be g(8) ≈ 0.9997).
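In case it helps, here is that example calculation written out in NumPy. The values of W^{[2]} and B^{[2]} are the made-up ones from above, not from the slide:

```python
import numpy as np

# Activation output of layer 1 from the slide, shape (1, 3)
a1 = np.array([[1.0, 0.0, 1.0]])

# Hypothetical layer-2 parameters (not from the slide)
W2 = np.array([[1.0], [2.0], [3.0]])  # shape (3, 1): 3 inputs, 1 neuron
b2 = np.array([[4.0]])                # shape (1, 1): 1 neuron

# Pre-activation value: z2 = a1 @ W2 + b2
z2 = a1 @ W2 + b2
print(z2)  # [[8.]]

# With a sigmoid activation, a2 = g(z2)
a2 = 1.0 / (1.0 + np.exp(-z2))
print(a2)  # roughly [[0.9997]]
```

The matrix product handles the per-neuron dot products for you: each column of W2 is the weight vector of one neuron, so with a single neuron there is a single column.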

Take care :wink: :wink:

Cheers,
Raymond