What is the `linear_cache` in this exercise?
Hello Akshara,
Welcome to the community.
If you go through the notebook, you will find that these details are already explained there.
What happens is:
For every forward function, there is a corresponding backward function. That is why, at every step of your forward module, you store some values in a cache: those cached values are needed later to compute the gradients.
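To make that concrete, here is a minimal NumPy sketch of the idea. The function name follows the notebook's naming, but treat the exact signature and cache layout as an assumption, not the notebook's code:

```python
import numpy as np

def linear_forward(A, W, b):
    """Linear part of a layer's forward pass: Z = W.A + b."""
    Z = np.dot(W, A) + b
    # linear_cache: the inputs we will need again during backprop
    # (assumed layout (A, W, b); check the notebook for the exact order)
    cache = (A, W, b)
    return Z, cache
```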
In the backpropagation module, you can then use the cache to calculate the gradients. Don’t worry, this assignment will show you exactly how to carry out each of these steps!
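For example, a matching sketch of how the backward step might unpack that cache to compute the gradients (again, an illustration of the idea, not the notebook's exact code):

```python
def linear_backward(dZ, cache):
    """Uses the cached (A_prev, W, b) to compute the layer's gradients."""
    A_prev, W, b = cache
    m = A_prev.shape[1]  # number of examples
    dW = np.dot(dZ, A_prev.T) / m
    db = np.sum(dZ, axis=1, keepdims=True) / m
    dA_prev = np.dot(W.T, dZ)
    return dA_prev, dW, db
```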
For your convenience, the two functions where these caches are stored, linear and activation, have been grouped into a single function. In Exercise 4 you are implementing a function that performs the linear forward step followed by the activation forward step, which is why you cannot see a separate linear function at this point.
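Continuing the sketch above, that grouping might look roughly like this. The `activation_cache` holding `Z` is an assumption based on how such assignments are typically structured (in the notebook, the provided sigmoid/relu helpers return it for you):

```python
def linear_activation_forward(A_prev, W, b, activation):
    """LINEAR -> ACTIVATION forward step; both caches are kept together."""
    Z, linear_cache = linear_forward(A_prev, W, b)
    if activation == "sigmoid":
        A = 1 / (1 + np.exp(-Z))
    else:  # "relu"
        A = np.maximum(0, Z)
    activation_cache = Z  # what the activation's backward pass needs
    # the combined cache pairs the two pieces for this layer
    cache = (linear_cache, activation_cache)
    return A, cache
```

So the `linear_cache` you asked about is simply the first element of that combined cache: the inputs to the linear step, saved so backprop can reuse them.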
Yes, as Rashmi says, you can see the logic that creates the various cache values if you look through the forward propagation code. They do that part for you in the “template” code.
Thank you! I actually found it just after I posted the question.