Hey guys, so I’ve been writing my own deep NN code, without TensorFlow but with numpy, and I stumbled upon the fact that the value of AL is greater than 1 or less than 0. Can I share my code here?
Hey @CourseraFan,
If your code is not related to any of the assignments, then I believe you can share it here. But if you have the slightest doubt, I would suggest just DMing your code. Based on your description, I guess your code is related to one of the assignments, so please DM it only. To do that, you can click on my name and then click “Message”.
Regards,
Elemento
Oh okay. I have actually fixed it now. But I was wondering, what are the formulas for relu_backward and sigmoid_backward? Like, what’s behind them? I can’t understand the code without knowing what relu_backward and sigmoid_backward are doing. Can you tell me? Thanks in advance.
Hey @CourseraFan,
In the lab files for C1 W4 A1, you will find a file by the name of dnn_utils.py. In that file, you can take a look at how relu_backward and sigmoid_backward are implemented. If you face an issue in understanding the implementations, feel free to let us know.
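For reference, the two helpers are implemented roughly along these lines (a minimal numpy sketch; check the actual file for the exact argument names, such as dA and cache):

```python
import numpy as np

def sigmoid_backward(dA, cache):
    """Backward pass for a sigmoid unit: dZ = dA * sigmoid'(Z)."""
    Z = cache
    s = 1 / (1 + np.exp(-Z))      # recompute sigmoid(Z) from the cached Z
    dZ = dA * s * (1 - s)         # sigmoid'(Z) = s * (1 - s)
    return dZ

def relu_backward(dA, cache):
    """Backward pass for a ReLU unit: dZ = dA * relu'(Z)."""
    Z = cache
    dZ = np.array(dA, copy=True)  # upstream gradient passes through where Z > 0
    dZ[Z <= 0] = 0                # relu'(Z) = 0 where Z <= 0
    return dZ
```

In both cases the helper just multiplies the upstream gradient dA elementwise by the derivative of the activation, which is the chain rule applied at that layer.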
Regards,
Elemento
Oh yeah, thank you for your answer!
You might also want to sanity check that implementation against your understanding of what “backward” is all about, that is, first derivatives. For example, compare sigmoid() with sigmoid_backward() and make sure you see the mathematical relationship.
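One quick way to do that sanity check is a finite-difference comparison (a minimal sketch, assuming the sigmoid_backward form shown earlier in the thread):

```python
import numpy as np

def sigmoid(Z):
    return 1 / (1 + np.exp(-Z))

def sigmoid_backward(dA, cache):
    Z = cache
    s = sigmoid(Z)
    return dA * s * (1 - s)

# Compare the analytic derivative against a centered finite difference.
Z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
eps = 1e-6
numeric = (sigmoid(Z + eps) - sigmoid(Z - eps)) / (2 * eps)
analytic = sigmoid_backward(np.ones_like(Z), Z)
print(np.max(np.abs(numeric - analytic)))  # should be tiny, around 1e-10 or less
```

If the two agree to many decimal places, your backward function really is computing the first derivative of the forward function.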
Yeah, after a while I started to get confused about what the code is doing. Can anyone help me understand relu_backward and sigmoid_backward?
That motivates what is happening. The details of actually deriving the derivatives of the activation functions are not in that video, but you can find them on the web if you need to see the calculus step by step. To follow them, you should be familiar with concepts from calculus like the chain rule and substitution.
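For reference, here is the standard chain-rule derivation of the sigmoid derivative (plain calculus, nothing course-specific), plus the ReLU case:

```latex
\sigma(z) = \frac{1}{1 + e^{-z}}
\quad\Rightarrow\quad
\sigma'(z) = \frac{e^{-z}}{(1 + e^{-z})^2}
           = \sigma(z) \cdot \frac{e^{-z}}{1 + e^{-z}}
           = \sigma(z)\bigl(1 - \sigma(z)\bigr)

g(z) = \max(0, z)
\quad\Rightarrow\quad
g'(z) = \begin{cases} 1 & z > 0 \\ 0 & z < 0 \end{cases}
\quad \text{(conventionally } g'(0) = 0\text{)}
```

Those two derivatives are exactly the factors that sigmoid_backward and relu_backward multiply the upstream gradient by.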
This site, for example, seems to have a pretty thorough explanation of sigmoid, including the explicit steps to produce the first derivative: