Sequential from Keras Question

Hello. I have a question about the following code:

model = tf.keras.Sequential([
        tf.keras.layers.Dense(units = 2, input_dim = 1, activation = 'sigmoid', name = 'L1'),
        tf.keras.layers.Dense(units = 2, input_dim = 1, activation = 'sigmoid', name = 'L2'),
        tf.keras.layers.Dense(units = 2, input_dim = 1, activation = 'sigmoid', name = 'L3')
])

Does this create:

  • 1 neural network with 3 layers - but in this case, the output size of each layer should equal the input size of the following layer
  • 3 neural networks with 1 layer each - this would mean that we just created 3 separate layers, each independent of the others (i.e. each in a separate neural network)

If the answer is the second option (which I think it is), then why would we create 3 separate layers (i.e. 3 separate neural networks with 1 layer each) using Sequential (i.e. putting those layers in a list)? It would be more logical, more convenient, and less confusing to just create the 3 layers without using the Sequential class.



It’s one NN with three layers of units.

Hi @farees
It’s the first option. A neural network is a series of algorithms that tries to recognize underlying relationships in a set of data through a process that mimics the way the human brain operates. Put simply, the entire model = tf.keras.Sequential(...) is one neural network with 3 Dense layers. input_dim is not the input size of each layer, but the dimension of the network’s input; in this case, your input tensor has 1 feature. units is the output size of each layer, so each layer’s output has shape (None, 2).

I understand that. I am just saying that if we look at Layer 1 and Layer 2 in the code I wrote above, Layer 1 has units = 2 (i.e. 2 outputs) while Layer 2 has input_dim = 1. That makes no sense, but it is still possible to write. I suppose the input_dim of Layer 2 is automatically overridden and assigned the value 2, so in every layer after Layer 1 we don’t need this parameter, since it will be overridden by the number of units of the previous layer. Right?

The thing that makes this confusing is the input_dim = 1 parameter in each Dense layer. You are right that these aren’t needed. In a Sequential model, the input dimension of each subsequent layer is figured out automatically from the output of the previous layer.
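You can verify this shape inference yourself (a quick sketch, assuming TensorFlow is installed) by building the same three-layer network with an explicit Input and inspecting each layer's kernel: the second and third layers end up with a (2, 2) kernel, inferred from the previous layer's 2 units rather than from any input_dim argument.

```python
import tensorflow as tf

# One network, three layers: Sequential chains them, so each layer's
# input size is taken from the previous layer's output size.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),                       # 1 input feature
    tf.keras.layers.Dense(2, activation='sigmoid', name='L1'),
    tf.keras.layers.Dense(2, activation='sigmoid', name='L2'),
    tf.keras.layers.Dense(2, activation='sigmoid', name='L3'),
])

# L1 kernel: (1, 2) -- 1 input feature mapped to 2 units.
# L2 kernel: (2, 2) -- input size inferred from L1's 2 outputs.
# L3 kernel: (2, 2) -- input size inferred from L2's 2 outputs.
for layer in model.layers:
    print(layer.name, tuple(layer.kernel.shape))
```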

The model will need an initial input shape which it can figure out by what is passed into it, or it can be defined as the first step of the model, as an input layer like you see in the Coffee Roasting practice lab and in the graded lab for this week’s assignment, like this:
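The screenshot from the lab isn't reproduced here, but the pattern looks roughly like this (a sketch: the 400-feature input matches the input_dim = 400 mentioned just below, while the hidden-layer sizes are illustrative, not necessarily the lab's exact numbers):

```python
import tensorflow as tf

# Declare the input shape up front with an Input layer; every Dense
# layer's input size is then inferred from the layer before it.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(400,)),               # 400 input features
    tf.keras.layers.Dense(25, activation='sigmoid'),
    tf.keras.layers.Dense(15, activation='sigmoid'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])

model.summary()   # shapes are known immediately, no data needed
```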


Or, you could leave out this Input layer and instead pass input_dim = 400 to the first Dense layer, in the style of your example above. But, as you discovered, this info isn’t needed in subsequent layers and is simply ignored. It’s best to leave the input_dim parameter out completely to avoid confusion.

For more info, you can read the “Specifying the input shape in advance” section of Sequential model