Sizes of input and output layers

I have already submitted the graded assignment with all test cases passed. However, I'm not sure I understand everything I've done. Specifically, I have two questions.

One question is about exercise 2, where we are asked to calculate the sizes of the input and output layers. I don't understand why they should both be 1 instead of m = 30.

Can someone please help me understand? Thanks

As for my other question, the details are in a separate thread I posted yesterday. Thanks


Hi @plaudev.

This is a good question, because it can really be confusing when you first encounter it.

The explanation is mostly about terminology and the structure of the neural network. In particular, the size of an input or output layer refers to the number of nodes in that layer.

The confusion comes from the power of linear algebra, which lets you feed several values through the network at once as a matrix/array and get multiple output values back at the same time. Conceptually, each node takes in one value per example. In practice, though, we want to speed up computation, so we feed all the values (m = 30 examples) through at once using a single matrix operation.

So remember that when you are asked about the size of the input and output layers of a neural network, the question is about the architecture of the network. Even if you have more data (a larger number of examples m), the architecture does not change.
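To make this concrete, here is a minimal NumPy sketch (the sigmoid activation, m = 30, and the column-per-example layout are illustrative assumptions, not the assignment's exact code). A layer with input size 1 and output size 1 still processes all 30 examples in one matrix multiplication, and the same weights work unchanged if m grows:

```python
import numpy as np

# Illustrative layer: input size = 1, output size = 1.
# These sizes describe the architecture (number of nodes), not the data.
n_in, n_out = 1, 1

W = np.random.randn(n_out, n_in)   # weight matrix: shape (1, 1)
b = np.zeros((n_out, 1))           # bias: shape (1, 1)

def forward(X):
    """Forward pass for one layer; X has shape (n_in, m), one example per column."""
    return 1.0 / (1.0 + np.exp(-(W @ X + b)))  # sigmoid activation (illustrative)

# m = 30 examples stacked as columns -> processed in a single matrix product.
X30 = np.random.randn(n_in, 30)
print(forward(X30).shape)    # (1, 30): one output node, 30 examples

# More data (m = 1000) uses exactly the same W and b -- the architecture is unchanged.
X1000 = np.random.randn(n_in, 1000)
print(forward(X1000).shape)  # (1, 1000)
```

Notice that m only appears in the shape of the data, never in the shape of W or b, which is why the layer sizes stay at 1.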

I hope this helps.

Thank you, jonrico. Much appreciated.
