Lambda layers lab (C4_W3_Lab_1_RNN)

In general, it's good to restrict NN inputs to a small range for the sake of faster convergence. See this post for an example.

Let's consider the first exercise in Course 1 (housing price prediction based on the number of rooms). Set the ys to the actual price, i.e. 50000 + num_rooms * 50000, and build the model with a Lambda layer, say tf.keras.layers.Lambda(lambda x: x * 50000). The model loss will become nan during training.
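To see why, here is a minimal NumPy sketch of the numeric mechanism (not the actual Keras lab code): a single linear unit whose output is scaled by 50000, mimicking the Lambda layer, trained by plain gradient descent with a Keras-SGD-like learning rate of 0.01. The chain rule multiplies every gradient by that same 50000, so the updates blow up.

```python
import numpy as np

# Toy data from the housing exercise: price = 50000 + rooms * 50000.
xs = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
ys = 50000.0 + xs * 50000.0

# One linear unit followed by a Lambda-style output scaling of 50000,
# i.e. pred = 50000 * (w * x + b), trained with plain gradient descent.
w, b, lr = 0.0, 0.0, 0.01
history = []
for _ in range(60):
    pred = 50000.0 * (w * xs + b)
    err = pred - ys
    history.append(np.mean(err ** 2))  # MSE loss per step
    # Chain rule: the 50000 output scaling multiplies the gradients too.
    w -= lr * np.mean(2.0 * err * 50000.0 * xs)
    b -= lr * np.mean(2.0 * err * 50000.0)

print(history[0], history[-1])  # the loss diverges instead of converging
```

The first loss is already around 10^10 because the targets are in the hundreds of thousands, and the 50000-scaled gradients then push the weights past overflow, which is exactly the nan you see in the Keras lab.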

I recommend standardizing the features (and targets) instead of using Lambda layers or changing the LSTM's output activation from tanh to relu.
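For the same toy setup, a sketch of the standardization fix (again plain NumPy rather than the lab's Keras code; the learning rate and step count are illustrative choices): scale both the feature and the target to zero mean and unit variance, train, then undo the target scaling when predicting.

```python
import numpy as np

xs = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
ys = 50000.0 + xs * 50000.0

# Standardize both the feature and the target to zero mean, unit variance.
x_mean, x_std = xs.mean(), xs.std()
y_mean, y_std = ys.mean(), ys.std()
xn = (xs - x_mean) / x_std
yn = (ys - y_mean) / y_std

# The same linear unit with plain gradient descent now converges easily.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    err = (w * xn + b) - yn
    w -= lr * np.mean(2.0 * err * xn)
    b -= lr * np.mean(2.0 * err)

# Predict the price for 7 rooms, undoing the target scaling at the end.
price = (w * (7.0 - x_mean) / x_std + b) * y_std + y_mean
print(round(price))  # close to the true value of 400000
```

In Keras terms this corresponds to fitting a scaler on the training targets (e.g. scikit-learn's StandardScaler) before calling fit, and inverse-transforming the predictions, rather than baking a fixed multiplier into the network with a Lambda layer.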