C2_W1_A1: three-layer neural network description

Greetings!

How do I read “LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID” correctly to understand the structure of the three-layer neural network?

I do understand LINEAR and RELU, but a three-layer neural network should look like “RELU->RELU->SIGMOID”…

Thank you.

For this assignment, look at def forward_propagation(X, parameters): inside init_utils.py, which implements this structure for you.
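To give the idea, here is a minimal numpy sketch of a forward pass with that structure. This is an approximation for illustration, not the exact code in init_utils.py; the helper names relu, sigmoid, and forward_propagation_sketch are mine.

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def forward_propagation_sketch(X, parameters):
    """Illustrative LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID pass."""
    W1, b1 = parameters["W1"], parameters["b1"]
    W2, b2 = parameters["W2"], parameters["b2"]
    W3, b3 = parameters["W3"], parameters["b3"]

    Z1 = np.dot(W1, X) + b1   # layer 1 LINEAR (affine operation)
    A1 = relu(Z1)             # layer 1 RELU
    Z2 = np.dot(W2, A1) + b2  # layer 2 LINEAR
    A2 = relu(Z2)             # layer 2 RELU
    Z3 = np.dot(W3, A2) + b3  # layer 3 LINEAR
    A3 = sigmoid(Z3)          # layer 3 SIGMOID (output probability)
    return A3
```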

More generally, a Dense layer has an activation parameter that applies a transformation function after performing the affine operation (WX + b).
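For example, in Keras the same structure could be written roughly as below; the layer widths are made up for illustration, not taken from the assignment.

```python
from tensorflow import keras

# Purely illustrative layer widths, not from the assignment.
model = keras.Sequential([
    keras.layers.Dense(20, activation="relu"),    # LINEAR (Wx + b), then ReLU
    keras.layers.Dense(7, activation="relu"),     # LINEAR, then ReLU
    keras.layers.Dense(1, activation="sigmoid"),  # LINEAR, then sigmoid
])
```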


The notation is a little bit misleading. I would have written that as:

[Linear with ReLU activation]->
[Linear with ReLU activation]->
[Linear with sigmoid activation]


What does Linear mean here?
Is it the transformation of the previous layer's activations before they are passed to the current layer's activation function?

OK, I see it now.
A bit of forensic investigation always helps :grinning_face:

So the “LINEAR->RELU” layer first performs the affine operation (WX + b) and then applies the ReLU activation function.
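If it helps, that grouping can be written as a single reusable step, roughly like this sketch (the name linear_activation is just illustrative):

```python
import numpy as np

def linear_activation(A_prev, W, b, activation):
    """One "(LINEAR -> activation)" block: affine operation, then nonlinearity."""
    Z = np.dot(W, A_prev) + b  # LINEAR: WX + b
    return activation(Z)       # ReLU or sigmoid, depending on the layer
```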

In this case I would suggest slightly changing the notation to “(LINEAR->RELU)->(LINEAR->RELU)->(LINEAR->SIGMOID)” or “(LINEAR | RELU) → (LINEAR | RELU) → (LINEAR | SIGMOID)”.

Does it make sense?


Correct :100:

Yes…

Thanks for your suggestion.