Lambda function when creating the model

When creating the model for the RNN case in C4_W3_Lab1_RNN, the final Lambda layer was "tf.keras.layers.Lambda(lambda x: x * 100)", where we multiplied by 100.

But for the CNN + LSTM case in C4_W4_Lab1_LSTM, it is multiplied by 400:
tf.keras.layers.Lambda(lambda x: x * 400)

So can anyone explain why we use this multiplication Lambda layer, and why the constant is increased for the CNN + LSTM case?

Thanks.

When training a model, all operations in the forward pass are tracked so that gradients of the loss can be computed in the backward pass. Please read about tf.GradientTape if you are interested in learning more about this.
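As a quick illustration (just a toy example, not the lab code), GradientTape records the forward computation and then returns the gradient:

```python
import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x          # forward pass is recorded by the tape
grad = tape.gradient(y, x)  # dy/dx = 2x = 6.0
print(grad.numpy())
```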

Using the final Lambda layer to multiply by a constant is equivalent to dividing all features by that constant before feeding them to the input layer. Read this link to understand the importance of feature scaling.
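To make that concrete, here is a minimal sketch of a model that ends with such a scaling layer. The layer types and sizes are placeholders, not necessarily the exact lab architecture; only the final Lambda layer is the point:

```python
import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.SimpleRNN(40, return_sequences=True, input_shape=[None, 1]),
    tf.keras.layers.SimpleRNN(40),
    tf.keras.layers.Dense(1),
    # Rescale the output back to the magnitude of the raw series.
    # This has the same effect as dividing the inputs by 100 up front.
    tf.keras.layers.Lambda(lambda x: x * 100.0)
])
```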

As far as the constants 100 and 400 are concerned, the staff must have picked these hyperparameters by experimenting with the model loss for different scaling values. My recommendation is to always perform feature scaling before feeding data to the model, as sketched below.
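For example, here is a minimal sketch of explicit min-max scaling, assuming `series` is a 1-D NumPy array of raw values (the data here is synthetic, just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
series = rng.uniform(0, 400, size=1000)  # hypothetical raw series

# Scale the series to [0, 1] before windowing and training.
series_min, series_max = series.min(), series.max()
scaled = (series - series_min) / (series_max - series_min)

# After training on `scaled`, invert the scaling on the predictions:
# predictions = model.predict(windows)
# predictions = predictions * (series_max - series_min) + series_min
```

With this approach, no output-scaling Lambda layer (and no hand-tuned constant like 100 or 400) is needed.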
