W4_A1_cache's functionality in backpropagation

Greetings Everyone!
I am confused about how “cache” works in these assignments, and its role in backpropagation is also confusing me. Can someone please help me understand this?
Also, what do the functions sigmoid and relu return as the activation cache?

Hi @Gaurav_Kataria ,

Welcome to the community! This is your first post 🙂

‘cache’ is like a storage area, a repository in memory, for variables.

As you already know, the training of a NN is a cycle of forward propagation followed by backward propagation. These propagations traverse the network layer by layer. On each layer, we have some (W)eights and some (b)iases, as well as the (A)ctivations from the previous layer.

While we do forward prop, we have W, b and A in hand for each layer, and these are needed later when doing the backward prop for the same cycle. So we need to store these values somewhere, right? That’s where “cache” comes in handy.

As we move forward layer by layer, we store in ‘cache’ the (W, b, A) values for each layer, as in the sketch below.
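
Here is a minimal sketch of that idea, not the assignment’s exact code (the helper name `linear_activation_forward` mirrors the assignment’s conventions, but treat the details as illustrative):

```python
import numpy as np

def sigmoid(Z):
    A = 1 / (1 + np.exp(-Z))
    return A, Z                          # the activation cache is just Z

def linear_activation_forward(A_prev, W, b):
    Z = W @ A_prev + b                   # linear step
    linear_cache = (A_prev, W, b)        # saved for computing dW, db, dA_prev later
    A, activation_cache = sigmoid(Z)     # activation cache saved for computing dZ later
    return A, (linear_cache, activation_cache)

# Toy forward pass through a 2-layer network, collecting one cache per layer
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 5))                              # 3 features, 5 examples
params = [(rng.standard_normal((4, 3)), np.zeros((4, 1))),   # layer 1: (W, b)
          (rng.standard_normal((1, 4)), np.zeros((1, 1)))]   # layer 2: (W, b)

caches = []
A = X
for W, b in params:
    A, cache = linear_activation_forward(A, W, b)
    caches.append(cache)   # caches[l] = ((A_prev, W, b), Z) for layer l + 1
```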

Then as we move backward in the backprop, we retrieve from ‘cache’ the corresponding (W, b, A) to do the calculations required to update the different weights and biases.
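
Continuing the same sketch, the backward pass walks the layers in reverse and unpacks each layer’s cache to compute the gradients (again a simplified illustration, assuming a sigmoid activation on every layer):

```python
def linear_activation_backward(dA, cache):
    (A_prev, W, b), Z = cache            # unpack exactly what forward prop stored
    m = A_prev.shape[1]
    s, _ = sigmoid(Z)
    dZ = dA * s * (1 - s)                # sigmoid'(Z) = s * (1 - s), needs Z
    dW = (dZ @ A_prev.T) / m             # needs A_prev from the linear cache
    db = dZ.sum(axis=1, keepdims=True) / m
    dA_prev = W.T @ dZ                   # needs W from the linear cache
    return dA_prev, dW, db

# Backward pass: retrieve the caches in reverse order (last layer first)
dA = rng.standard_normal(A.shape)        # placeholder upstream gradient dL/dA
for cache in reversed(caches):
    dA, dW, db = linear_activation_backward(dA, cache)
    # ... use dW, db here to update that layer's parameters
```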

So in summary, cache is a memory repository where we store variables.
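
As for your second question: in these assignments the activation cache returned by `sigmoid` and `relu` is simply their input `Z` (the pre-activation value), because that is exactly what the corresponding backward functions need to compute dZ. The forward sketch above shows this for `sigmoid`; `relu` works the same way, returning `np.maximum(0, Z)` together with `Z` as its cache.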

I hope this sheds light on your question.

Thanks,

Juan