Hello,
I am curious: why does the forward_propagation function return A2 separately?
Thank you in advance.
Jiali
A2 is the last layer's output, to be used by the next step. (You can see that in nn_model(), A2 is immediately used for the cost calculation.) The other output, cache, is mainly for backward_propagation to calculate the gradients.
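To make that concrete, here is a minimal sketch of the idea (my own simplified version, not the assignment's exact code), assuming a 2-layer network with a tanh hidden layer and a sigmoid output: forward_propagation returns A2 for the cost, and cache for backprop.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_propagation(X, parameters):
    # Hypothetical 2-layer net: tanh hidden layer, sigmoid output.
    W1, b1 = parameters["W1"], parameters["b1"]
    W2, b2 = parameters["W2"], parameters["b2"]
    Z1 = W1 @ X + b1
    A1 = np.tanh(Z1)
    Z2 = W2 @ A1 + b2
    A2 = sigmoid(Z2)
    # cache keeps the intermediates that backward_propagation needs
    cache = {"Z1": Z1, "A1": A1, "Z2": Z2, "A2": A2}
    return A2, cache

def compute_cost(A2, Y):
    # Cross-entropy cost computed directly from A2
    m = Y.shape[1]
    logprobs = Y * np.log(A2) + (1 - Y) * np.log(1 - A2)
    return -np.sum(logprobs) / m
```

So in the training loop, the cost comes from the returned A2, and the gradients come from the cache.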
Yes, thank you.
I thought cache["A2"] is the same as A2 in the forward_propagation function. Could I use cache["A2"] as the input to nn_model()? For Exercise 9 these two inputs give different answers.
Could I use cache["A2"] as the input to nn_model()?
We usually do not do that. If you hardcode the caller to read a particular layer's output like A2 out of a dictionary, then you cannot extend the network with more layers. The last output might be A3, A4, or whatever. (In this sense, using the name A2 in nn_model() may be misleading, though…)
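This is why returning the final activation directly is the more general pattern. A sketch of a hypothetical deeper variant (names and structure are my assumptions, not the assignment's code), where the caller never needs to know which "A" is last:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_propagation_deep(X, parameters):
    # parameters is assumed to hold W1..WL and b1..bL.
    L = len(parameters) // 2
    A = X
    caches = []
    for l in range(1, L + 1):
        Z = parameters["W" + str(l)] @ A + parameters["b" + str(l)]
        # tanh on hidden layers, sigmoid on the output layer
        A = np.tanh(Z) if l < L else sigmoid(Z)
        caches.append((Z, A))
    # Caller receives the final activation, whether it is A2 or A4.
    return A, caches
```

The caller just writes `A_last, caches = forward_propagation_deep(X, parameters)` and passes `A_last` to the cost, regardless of depth.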
For Exercise 9, I may not have caught your point. This exercise is to output 0 or 1 based on the output from forward_propagation. Do you mean that the two outputs of forward_propagation(), i.e., A2 and cache["A2"], have different values? For this particular exercise, which has a fixed number of layers, they should be the same.
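For reference, the "set 0 or 1" step is just a threshold on the sigmoid outputs. A minimal sketch (the 0.5 threshold is the usual convention; the function name here is my own, not necessarily the exercise's):

```python
import numpy as np

def to_predictions(A2, threshold=0.5):
    # A2: sigmoid outputs in (0, 1); map each to a hard 0/1 label.
    return (A2 > threshold).astype(int)
```

For example, `to_predictions(np.array([[0.2, 0.7, 0.5]]))` gives `[[0, 1, 0]]` (exactly 0.5 is not strictly greater than the threshold, so it maps to 0).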
Thank you very much.
I got your point for Exercise 4.
For Exercise 9, I think I had some internet issue yesterday. Today the result is the same.