I’ve read the threads but still don’t get what I have to do.
Thanks in advance for a clear answer… this notebook is already very confusing.
While defining the L_model_forward function, when you call linear_activation_forward for the last (sigmoid) layer, you have to use the W and b of the last layer, not of a hidden layer, right? Hint: we use l for the hidden layers and L for the last layer.
Best,
Saif.
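To make Saif’s hint concrete, here is a minimal sketch of the loop structure he is describing. This is an illustration, not the graded notebook code: the stand-in linear_activation_forward here is simplified, and the parameter-naming scheme ("W1", "b1", …) is assumed to match the course convention. The point is that the hidden-layer loop runs over l = 1 … L-1, and the final sigmoid call uses index L, not L-1:

```python
import numpy as np

def linear_activation_forward(A_prev, W, b, activation):
    # Simplified stand-in for the course function (a sketch, not the assignment code).
    Z = W @ A_prev + b
    A = 1 / (1 + np.exp(-Z)) if activation == "sigmoid" else np.maximum(0, Z)
    return A

def L_model_forward(X, parameters):
    A = X
    L = len(parameters) // 2          # number of layers: each layer has one W and one b
    # Hidden layers 1 .. L-1 use ReLU and the parameters for layer l.
    for l in range(1, L):
        A = linear_activation_forward(A, parameters["W" + str(l)],
                                      parameters["b" + str(l)], "relu")
    # The output layer uses sigmoid and the parameters for layer L, not a hidden layer.
    AL = linear_activation_forward(A, parameters["W" + str(L)],
                                   parameters["b" + str(L)], "sigmoid")
    return AL
```

The common bug this guards against is reusing the hidden-layer index (or the hidden-layer W and b) in that last call outside the loop.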
If Saif’s suggestions are not enough to get you there, the best way to start diagnosing a case like this is “dimensional analysis”: work out what the shapes should be at each layer and compare them with what your code actually produces. Here’s a thread that walks you through that for this test case.
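As a quick illustration of that dimensional analysis, you can trace the shapes by hand before touching any real code. The layer sizes below are hypothetical, chosen only for illustration: with layer dimensions n_0, n_1, …, n_L, W for layer l is (n_l, n_{l-1}), b is (n_l, 1), and the activations are (n_l, m) for m examples:

```python
# Hypothetical layer sizes for illustration: 3 inputs, 4 hidden units, 1 output.
layer_dims = [3, 4, 1]
m = 5  # number of examples

A_shape = (layer_dims[0], m)  # X is (n_0, m)
for l in range(1, len(layer_dims)):
    W_shape = (layer_dims[l], layer_dims[l - 1])   # W of layer l is (n_l, n_{l-1})
    # (n_l, n_{l-1}) @ (n_{l-1}, m) -> (n_l, m); b of layer l broadcasts from (n_l, 1)
    A_shape = (W_shape[0], A_shape[1])
    print(f"layer {l}: W{W_shape} @ A_prev{(W_shape[1], m)} -> A{A_shape}")
```

If a shape your notebook prints disagrees with this trace at some layer, that layer is where the wrong W or b is being used.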