Week 4 - L_layer_model

Here’s a previous thread that gives the “dimensional analysis” for this test case. That’s always the way to start, so that you understand what should be happening at each layer. Once you’ve worked through that, here’s what we see:

Because you’ve specified “sigmoid” as the activation, that (we hope) means you’ve fallen out of the loop over the hidden layers and are processing the output layer. But at the output layer the dimensions should be:

`W3` is 1 x 3 and `A2` is 3 x 4

But somehow you’ve got the wrong shape for `W`: you’re using the shape of `W2`. So how did that happen? Clue: what is the value of `l` (lower case ell) when you fall out of the loop?
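To see the clue concretely, here is a minimal sketch of the usual L-layer forward-pass loop structure. The names (`parameters`, the `"W" + str(l)` indexing) follow the common convention for this kind of exercise and are assumptions, not your exact code:

```python
import numpy as np

np.random.seed(1)
L = 3  # three-layer net: two relu layers, then the sigmoid output layer
parameters = {
    "W1": np.random.randn(4, 2), "b1": np.zeros((4, 1)),
    "W2": np.random.randn(3, 4), "b2": np.zeros((3, 1)),
    "W3": np.random.randn(1, 3), "b3": np.zeros((1, 1)),
}

A = np.random.randn(2, 4)  # input X: 2 features, 4 examples
for l in range(1, L):      # l takes the values 1 and 2, then the loop ends
    # relu layers (hypothetical inline version of the hidden-layer step)
    A = np.maximum(0, parameters["W" + str(l)] @ A + parameters["b" + str(l)])

# After the loop, l still holds its last loop value, 2 -- so indexing the
# output-layer parameters with l picks up W2 (3 x 4) instead of W3 (1 x 3).
W_wrong = parameters["W" + str(l)]  # shape (3, 4) -- the bug
W_right = parameters["W" + str(L)]  # shape (1, 3) -- what the sigmoid layer needs
```

In other words, Python’s `for l in range(1, L)` leaves `l` equal to `L - 1` when it finishes, so the output-layer step must index with `L`, not `l`.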