My idea was that multiplying by 100 turned the output into some kind of percentage, i.e., a probability. But what is the goal of multiplying by 200?
There is no probability involved in this time series problem.
Guess it’s a demo of a lambda layer.
You don’t need the last lambda layer at all. @DLAI-QA-Team FYI
Dear @Manu ,
Aside from the question of “why 200?”, I want to correct a mistaken presumption. Multiplying by 100 had nothing to do with probability. The purpose of the earlier lambda layer, with its factor of 100, was to rescale the values output by the RNN layer. The RNN output comes from a hyperbolic tangent activation, so all of its values lie between -1 and +1, as described in one of the videos in Course 4, Week 3.
Because the values in our series were positive values much greater than one, the 100 factor rescaled the RNN output to be similar in order of magnitude to our data.
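To make the rescaling concrete, here is a minimal sketch of the idea using only the standard library (the actual course code uses a Keras `Lambda` layer, noted in the comment; the sample inputs below are hypothetical):

```python
import math

# Hypothetical pre-activation values; tanh squashes them into (-1, 1),
# which mimics the range of the RNN layer's output
rnn_outputs = [math.tanh(v) for v in (-3.0, -0.5, 0.0, 0.5, 3.0)]

# The lambda layer simply rescales that output; in Keras this was roughly
#   tf.keras.layers.Lambda(lambda x: x * 100.0)
rescaled = [x * 100.0 for x in rnn_outputs]

# Every rescaled value now lies in (-100, 100), closer in magnitude
# to a series whose values are in the tens or hundreds
```

The factor itself is not special; it just needs to bring the (-1, 1) activations up to roughly the scale of the series values.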
In my own code with the RNN, I tested a lambda function that was (x + 1) * 50, which maps the (-1, 1) range onto (0, 100) so the rescaled values stay positive.
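A quick check of that variant, again with hypothetical tanh-style outputs: shifting by 1 before scaling keeps every value positive, which suits a series of strictly positive data.

```python
import math

# Hypothetical activations in (-1, 1), standing in for the RNN output
outs = [math.tanh(v) for v in (-3.0, -0.5, 0.0, 0.5, 3.0)]

# (x + 1) maps (-1, 1) to (0, 2); * 50 then maps it to (0, 100)
shifted = [(x + 1) * 50 for x in outs]

# Unlike a plain x * 100, no output can be negative
```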
@balaji.ambresh submits that the lambda layer (with its factor of 200) is not needed at all in this exercise, which has LSTMs, not RNNs, and I will certainly believe him. I am ignorant (so far!) of its purpose when used on an LSTM output.