In the lecture notes, the instructor uses this type of architecture, with a Lambda layer that expands the dimensions of the input. Why does he use it?
model = keras.models.Sequential([
    keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1),
                        input_shape=[None]),
    keras.layers.SimpleRNN(20, return_sequences=True),
    keras.layers.SimpleRNN(20),
    keras.layers.Dense(1),
    keras.layers.Lambda(lambda x: x * 100.0)
])
In Week 4, however, he stops using it. I've tried adding the Lambda layer before the Conv1D and it still works.
What exactly is the purpose of this Lambda layer (expand_dims), and when should I add it or leave it out?
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv1D(filters=32, kernel_size=5,
                           strides=1, padding="causal",
                           activation="relu",
                           input_shape=[None, 1]),
    tf.keras.layers.LSTM(32, return_sequences=True),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
    tf.keras.layers.Lambda(lambda x: x * 200)
])