Problems with cache

Course 1, week 4, Lab: Building your Deep Neural Network: Step by Step

Hi everybody!

I'm reworking this lab in my own way. I want to define the sigmoid and ReLU activation functions inside the linear_activation_forward() function. I suppose I shouldn't call activation_cache in the line defining A, but then I'm not sure how to set the activation cache at the end, in the line cache = …

How can I do that?

How about just:
cache = (Z, A)

Then you don’t need “linear_cache” or “activation_cache” at all.
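For example, here is a minimal sketch of one way that could look, assuming the lab's NumPy setup and the signature from the notebook (whether (Z, A) is enough depends on what your back prop will need):

    import numpy as np

    def linear_activation_forward(A_prev, W, b, activation):
        # Linear step: Z = W . A_prev + b
        Z = np.dot(W, A_prev) + b
        # Activations computed inline instead of in separate helper functions
        if activation == "sigmoid":
            A = 1 / (1 + np.exp(-Z))
        elif activation == "relu":
            A = np.maximum(0, Z)
        # One cache per layer, holding the values back prop will need later
        cache = (Z, A)
        return A, cache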

Hi! Thanks for the answer.

Sorry, I don't understand why you say that I don't need linear_cache.

I understood that linear_cache is necessary in the next function, L_model_forward(X, parameters), where all the parameters and the values of A and Z are stored.

Is it not correct to write the following in the last two lines?

[image: screenshot of the code]

I tried it and I didn't get an error, but I am not sure if it makes sense.

I guess that depends on what your linear_transformation() function returns.

The way you were setting A, Z, and activation_cache did not work the way it appeared you were hoping it would.

My definition of linear_transformation() is similar to the one given in the lab, but I am using TensorFlow. Then, inside the linear_activation_forward() function, I want to define the sigmoid and ReLU functions. But I am confused about the cache.

In order to understand the cache, you need to look at the back propagation logic. The purpose of the caches is to save the values that back prop will need later. Take a look at how they are used in the existing logic: they play no role in forward propagation, but are used in back prop.
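For instance, the ReLU backward step consumes the cached Z roughly like this (a sketch of the pattern, not the exact course code):

    import numpy as np

    def relu_backward(dA, activation_cache):
        Z = activation_cache              # the Z stored during forward prop
        dZ = np.array(dA, copy=True)
        dZ[Z <= 0] = 0                    # ReLU gradient is zero wherever Z <= 0
        return dZ

Notice that forward propagation only writes the cache; it never reads it.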


That's not a good way to define a Python function, though. Functions should not be nested inside each other like that.

If you actually mean that you want to implement a sigmoid and ReLU calculation inside the linear_transformation() function, that’s fine.
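Something along these lines, for example (a hypothetical sketch using TensorFlow, since that is what you said you are using; the function name and signature are assumptions):

    import tensorflow as tf

    def linear_transformation(A_prev, W, b, activation="sigmoid"):
        # Linear step and activation computed inside the same function
        Z = tf.matmul(W, A_prev) + b
        A = tf.math.sigmoid(Z) if activation == "sigmoid" else tf.nn.relu(Z)
        # Return Z too, so the caller can still build a cache for back prop
        return A, Z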

I got it! Thank you.

Perfect. Yes, I wanted to implement the sigmoid calculation inside the function. Thanks again.