Order of layers in the sequential model

Greetings!
I am trying to complete the Week 3 assignment of Course 3 of the TensorFlow Developer specialization.

I am unable to figure out the order in which the different layers of a sequential model must be assembled. Obviously, the number of output nodes in the previous layer must match the number of input nodes in the current layer, but it is not clear how to figure this out.
I know I can see each layer's output shape when I print model.summary(). However, the expected input shapes are not shown.

Any advice would be appreciated. Thank you!

There is no such constraint:

the number of output nodes in the previous layer must match the number of input nodes in the current layer

The only requirement is that the number of output nodes in the final layer equals the number of classes, in the case of a multi-class classification problem.
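A small sketch of that constraint (the feature width and class count here are placeholders, not values from the assignment): a Dense output layer accepts any input width, so only the task dictates its number of units.

```python
import tensorflow as tf

# The final Dense layer's units match the task, not the previous layer's size.
binary_head = tf.keras.layers.Dense(1, activation='sigmoid')   # binary classification
multi_head = tf.keras.layers.Dense(5, activation='softmax')    # 5-class classification

x = tf.random.uniform((4, 64))  # any feature width works as input
print(binary_head(x).shape, multi_head(x).shape)
```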

As far as the order of layers is concerned, please go through the lectures and the ungraded labs and notice the placement of the Embedding layer and the final Dense layer. The rest of the layers can appear in any order.

Thank you for your quick response! (This is something I love about DeepLearning.AI courses.)
Yes, I looked at the models in three ungraded labs. They have the structures listed below (copied and pasted for easy reference). As you can see, none of them includes the combination of layers suggested in the assignment. When I add Conv1D and GlobalMaxPooling1D between the pre-listed Embedding layer and a Bidirectional(LSTM) layer, I get the error:

Input 0 of layer "bidirectional_2" is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: (None, 128)

    • Conv1D
    • Dropout
    • GlobalMaxPooling1D
    • MaxPooling1D
    • LSTM
    • Bidirectional(LSTM)
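The ndim error can be reproduced by tracing shapes through these layers (the dimensions below are illustrative placeholders, not the assignment's values). GlobalMaxPooling1D collapses the time axis, producing a 2D tensor, while an LSTM requires a 3D (batch, timesteps, features) input; MaxPooling1D, by contrast, keeps the time axis:

```python
import tensorflow as tf

x = tf.random.uniform((32, 120), maxval=500, dtype=tf.int32)  # (batch, seq_len)

emb = tf.keras.layers.Embedding(500, 16)(x)                    # (32, 120, 16)  ndim=3
conv = tf.keras.layers.Conv1D(128, 5, activation='relu')(emb)  # (32, 116, 128) ndim=3
pooled_seq = tf.keras.layers.MaxPooling1D(4)(conv)             # (32, 29, 128)  still ndim=3
pooled_all = tf.keras.layers.GlobalMaxPooling1D()(conv)        # (32, 128)      ndim=2!

# An LSTM needs ndim=3 input, so it accepts pooled_seq but not pooled_all:
lstm_out = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64))(pooled_seq)
print(pooled_seq.shape, pooled_all.shape, lstm_out.shape)
```

Feeding pooled_all into the Bidirectional layer instead reproduces the "expected ndim=3, found ndim=2" error above.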

First ungraded lab

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(tokenizer.vocab_size, embedding_dim),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(lstm_dim)),
    tf.keras.layers.Dense(dense_dim, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

Second ungraded lab

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(tokenizer.vocab_size, embedding_dim),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(lstm1_dim, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(lstm2_dim)),
    tf.keras.layers.Dense(dense_dim, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

Third ungraded lab

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(tokenizer.vocab_size, embedding_dim),
    tf.keras.layers.Conv1D(filters=filters, kernel_size=kernel_size, activation='relu'),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(dense_dim, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

When I said the layers can be in any order, I meant that you still have to choose layers whose shapes are compatible with one another.

While layers like tf.keras.layers.RepeatVector and tf.keras.layers.Reshape can be used to fix shape mismatches, use them only when the rest of the model still makes sense.
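As a quick sketch of those two layers (the sizes are arbitrary placeholders): RepeatVector turns a 2D (batch, features) tensor back into a 3D (batch, timesteps, features) tensor, and Reshape can do the same by splitting the feature axis, so either can make a 2D output acceptable to an LSTM.

```python
import tensorflow as tf

x = tf.random.uniform((8, 16))                  # (batch, features), ndim=2
repeated = tf.keras.layers.RepeatVector(10)(x)  # (8, 10, 16), ndim=3
reshaped = tf.keras.layers.Reshape((4, 4))(x)   # (8, 4, 4),   ndim=3
print(repeated.shape, reshaped.shape)
```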


Yes, that is the question I have. How do I figure out the input and output shapes of each layer?

Have you tried calling model.summary()? Each layer's output shape there is the input shape of the next layer.
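To make that concrete, here is a minimal sketch (hyperparameter values are placeholders): build the model with an input shape first, then the summary, or the layers themselves, report every output shape, and each one doubles as the next layer's input shape.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(500, 16),
    tf.keras.layers.Conv1D(128, 5, activation='relu'),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.build(input_shape=(None, 120))  # building fixes all the shapes
model.summary()  # each row's "Output Shape" is the next layer's input shape

# The same information is available programmatically once the model is built:
for layer in model.layers:
    print(layer.name, layer.output.shape)
```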