Course 1, Week 4, Lab: Building your Deep Neural Network: Step by Step
Hi everybody!
I'm reworking this lab in my own way. I want to define the sigmoid and ReLU activation functions inside the linear_activation_forward() function. I suppose I shouldn't call activation_cache in the line defining A, but then I'm not sure how to build the activation cache at the end, in the line cache = …
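Roughly, what I have in mind is something like this (just a sketch, assuming the lab's convention that linear_cache is (A_prev, W, b) and activation_cache is just Z):

```python
import numpy as np

def linear_activation_forward(A_prev, W, b, activation):
    # Linear step: Z = W . A_prev + b; keep its inputs for back prop
    Z = np.dot(W, A_prev) + b
    linear_cache = (A_prev, W, b)

    # Activation step, written inline instead of calling sigmoid()/relu()
    if activation == "sigmoid":
        A = 1 / (1 + np.exp(-Z))
    elif activation == "relu":
        A = np.maximum(0, Z)

    # activation_cache is just Z, so the cache tuple keeps the lab's shape
    activation_cache = Z
    cache = (linear_cache, activation_cache)
    return A, cache
```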
Sorry, I don't understand why you say that I don't need linear_cache.
I understood that linear_cache is necessary in the next function, L_model_forward(X, parameters), where all the parameters and the values of A and Z are stored.
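For reference, this is roughly how I picture the caches being collected there (a sketch following the lab's structure, where parameters holds W1, b1, …, WL, bL):

```python
def L_model_forward(X, parameters):
    caches = []
    A = X
    L = len(parameters) // 2  # number of layers

    # Hidden layers use ReLU; each call returns its own cache
    for l in range(1, L):
        A_prev = A
        A, cache = linear_activation_forward(
            A_prev, parameters["W" + str(l)], parameters["b" + str(l)], "relu")
        caches.append(cache)

    # Output layer uses sigmoid
    AL, cache = linear_activation_forward(
        A, parameters["W" + str(L)], parameters["b" + str(L)], "sigmoid")
    caches.append(cache)

    return AL, caches
```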
Is it not correct to write the following in the last two lines?
I tried it and I didn't get an error, but I am not sure if it makes sense.
My definition of linear_transformation is similar to the one given in the lab, but I am using TensorFlow. Then, inside the linear_activation_forward() function, I want to define the sigmoid and relu functions, but I am confused about the cache.
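Something along these lines is what I mean, as a rough sketch of the TensorFlow version (tf.sigmoid and tf.nn.relu stand in for the lab's NumPy helpers, and the cache keeps the same tuple shape):

```python
import tensorflow as tf

def linear_activation_forward(A_prev, W, b, activation):
    # Linear transformation with TensorFlow ops instead of np.dot
    Z = tf.matmul(W, A_prev) + b
    linear_cache = (A_prev, W, b)

    # Activations defined inline via TensorFlow's built-ins
    if activation == "sigmoid":
        A = tf.sigmoid(Z)
    elif activation == "relu":
        A = tf.nn.relu(Z)

    # Same (linear_cache, activation_cache) convention as the lab
    cache = (linear_cache, Z)
    return A, cache
```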
In order to understand the cache, you need to look at the back propagation logic. The purpose of the caches is to save the values that back prop will need later. Take a look at how they are used in the existing logic: forward propagation only creates them, and back prop is what consumes them.
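For example, here is roughly what the backward step looks like (a sketch following the lab's conventions, where activation_cache is Z and linear_cache is (A_prev, W, b)):

```python
import numpy as np

def linear_activation_backward(dA, cache, activation):
    linear_cache, activation_cache = cache  # saved during forward prop
    Z = activation_cache

    # The activation cache (Z) is needed to turn dA into dZ
    if activation == "sigmoid":
        s = 1 / (1 + np.exp(-Z))
        dZ = dA * s * (1 - s)
    elif activation == "relu":
        dZ = np.where(Z > 0, dA, 0)

    # The linear cache (A_prev, W, b) is needed for dW, db, dA_prev
    A_prev, W, b = linear_cache
    m = A_prev.shape[1]
    dW = (1 / m) * np.dot(dZ, A_prev.T)
    db = (1 / m) * np.sum(dZ, axis=1, keepdims=True)
    dA_prev = np.dot(W.T, dZ)
    return dA_prev, dW, db
```

Notice that both halves of the cache get used: Z drives dZ, and (A_prev, W, b) drives the gradients. That is why the forward pass has to save both, even though it never reads them itself.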