Alternative way of defining model architecture

I am looking at the workbook C2_W3_Lab1 and I am a bit surprised by the alternative way of defining a model's architecture. What I am used to, and what was covered in the course, is defining the layers as a list inside the tf.keras.models.Sequential function.


model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(150, 150, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

In the workbook the model architecture is defined in the following way:

x = layers.Flatten()(last_output)
# Add a fully connected layer with 1,024 hidden units and ReLU activation
x = layers.Dense(1024, activation='relu')(x)
# Add a dropout rate of 0.2
x = layers.Dropout(0.2)(x)
# Add a final sigmoid layer for classification
x = layers.Dense(1, activation='sigmoid')(x)

I understand conceptually that the layers are being added one on top of the other, but I don't understand the syntax. I am confused by the back-to-back pairs of parentheses. Can somebody explain to me how this works?

The first one uses the Sequential API and the second one uses the Functional API. From the Keras website: "The Keras functional API is a way to create models that are more flexible than the keras.Sequential API. The functional API can handle models with non-linear topology, shared layers, and even multiple inputs or outputs."

In case you're curious about how the Sequential and Functional APIs are implemented in Python:

For the Sequential API, we have model = tf.keras.models.Sequential([...]). We pass a list of layers into the constructor of the Sequential object, and the API builds the model by connecting all the layers in the list, in order.

For the Functional API, we have more control since we can manually connect each layer individually. The back-to-back parentheses syntax you see is based on the callable-object feature in Python, i.e. the __call__() method.
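Here is a minimal illustration of the callable-object feature in plain Python (no Keras needed); the class name Doubler is made up for this example:

```python
class Doubler:
    """An object that can be called like a function."""
    def __call__(self, x):
        return x * 2

d = Doubler()   # first set of parentheses: construct the object
print(d(21))    # second set: call the object itself -> 42
print(Doubler()(21))  # both steps on one line, like layers.Dense(...)(x) -> 42
```

The one-liner at the end has exactly the same shape as the Keras code: construct an object, then immediately call it.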

For example, we are connecting a dense layer to an input layer with this line:
dense_layer = layers.Dense(1024, activation='relu')(input_layer)

You can break this down into two equivalent steps:
dense_layer_instance = layers.Dense(1024, activation='relu')
dense_layer = dense_layer_instance(input_layer)

Under the hood, the input_layer and dense_layer variables are actually "specs" of the input/output tensors for that layer (containing metadata such as the shapes of the tensors), and these specs can be used to build the model graph.
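To make the "specs" idea concrete, here is a toy sketch in plain Python of how chained calls can thread shape metadata through layers. This is not how Keras is actually implemented; the names Spec and ToyDense are invented for illustration:

```python
class Spec:
    """Toy stand-in for a tensor spec: just records a shape."""
    def __init__(self, shape):
        self.shape = shape

class ToyDense:
    """Toy stand-in for a Dense layer."""
    def __init__(self, units):
        self.units = units
    def __call__(self, spec):
        # Calling the layer on an input spec returns the output spec:
        # batch dimension is unchanged, last dimension becomes `units`.
        return Spec(shape=(spec.shape[0], self.units))

inputs = Spec(shape=(None, 784))
x = ToyDense(1024)(inputs)   # construct the layer, then call it on a spec
outputs = ToyDense(1)(x)
print(outputs.shape)         # (None, 1)
```

Each call consumes the previous layer's output spec and produces a new one, which is how the Functional API knows how the layers are wired together.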

Thanks for the explanation. Callable objects were the missing link for me. I appreciate it.