Why do we expand dimensions with a Lambda?

In the lecture notes the instructor uses this kind of architecture, with a Lambda layer that expands the dimensions. Why does he use it?

```python
model = keras.models.Sequential([
    keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1),
                        input_shape=[None]),
    keras.layers.SimpleRNN(20, return_sequences=True),
    keras.layers.Lambda(lambda x: x * 100.0)
])
```

Then in Week 4 he stops using it. I've tried adding the Lambda layer before the Conv1D and it still works.
What exactly is the purpose of this Lambda layer (the `expand_dims`), and when should I add it or leave it out?

```python
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv1D(filters=32, kernel_size=5,
                           strides=1, padding="causal",
                           input_shape=[None, 1]),
    tf.keras.layers.LSTM(32, return_sequences=True),
    tf.keras.layers.Lambda(lambda x: x * 200)
])
```
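To show what I mean by "it still works", here is a minimal sketch (the layer sizes are my own, not from the lecture) with the Lambda placed before the Conv1D, fed a 2-D batch of shape `[batch, timesteps]`:

```python
import tensorflow as tf

# Sketch: Lambda(expand_dims) feeding a Conv1D, so 2-D input is accepted.
model = tf.keras.models.Sequential([
    tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1),
                           input_shape=[None]),   # [batch, steps] -> [batch, steps, 1]
    tf.keras.layers.Conv1D(filters=32, kernel_size=5,
                           strides=1, padding="causal"),
    tf.keras.layers.LSTM(32, return_sequences=True),
])

out = model(tf.zeros([4, 30]))  # 2-D input, no feature axis
print(out.shape)                # (4, 30, 32)
```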

The purpose of a Lambda layer (as far as I remember) is to wrap custom processing as a layer when the framework doesn't already provide one. Where you add it, I think, depends on what you want to do. The `expand_dims` case is about input shapes: recurrent layers expect 3-D input of shape `[batch, timesteps, features]`, so if your dataset pipeline produces 2-D windows of shape `[batch, timesteps]`, the Lambda adds the missing feature axis. If the pipeline already emits `[batch, timesteps, 1]` (as in the Week 4 code, where `input_shape=[None, 1]` assumes it), the Lambda is unnecessary.
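To make the shape point concrete, here is a small sketch (example values are mine, not from the course) of what `tf.expand_dims(x, axis=-1)` does to a 2-D batch:

```python
import numpy as np
import tensorflow as tf

# A windowed univariate series often arrives as 2-D: [batch, timesteps].
batch = tf.constant(np.arange(8.0).reshape(2, 4))

# expand_dims appends the feature axis that RNN/LSTM layers require.
expanded = tf.expand_dims(batch, axis=-1)
print(batch.shape)     # (2, 4)
print(expanded.shape)  # (2, 4, 1)
```

So the rule of thumb is: add the Lambda when the model receives 2-D batches; leave it out when the data already carries a feature dimension. Note that applying it to data that is already 3-D would produce a 4-D tensor and break the downstream layers.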