Using the Sequential API (C1W4)

Hi everyone. Out of curiosity, instead of using the subclassing API to create a VGG network for the Course 1, Week 4 assignment, I tried recreating it with the Sequential API. I can't figure out why the model doesn't learn anything this way (e.g., accuracy stays at about 50%). Here's my code.

Somewhat surprisingly, the model does learn if I keep only the 64- and 128-filter conv blocks and remove all the other convolution and max-pooling layers (I've pasted a sketch of that truncated version after my code below).

Could someone please explain what is wrong here?

from tensorflow.keras.layers import Conv2D, MaxPool2D, Flatten, Dense
from tensorflow.keras.utils import plot_model
import tensorflow_datasets as tfds
import tensorflow as tf

# VGG-16-style stack of 3x3 convolutions, with 2x2 max pooling between blocks
model = tf.keras.Sequential([
    Conv2D(64, 3, activation='relu', padding='same', input_shape=(224, 224, 3)),
    Conv2D(64, 3, activation='relu', padding='same'),
    MaxPool2D(2, 2),
    Conv2D(128, 3, activation='relu', padding='same'),
    Conv2D(128, 3, activation='relu', padding='same'),
    MaxPool2D(2, 2),
    Conv2D(256, 3, activation='relu', padding='same'),
    Conv2D(256, 3, activation='relu', padding='same'),
    Conv2D(256, 3, activation='relu', padding='same'),
    MaxPool2D(2, 2),
    Conv2D(512, 3, activation='relu', padding='same'),
    Conv2D(512, 3, activation='relu', padding='same'),
    Conv2D(512, 3, activation='relu', padding='same'),
    MaxPool2D(2, 2),
    Conv2D(512, 3, activation='relu', padding='same'),
    Conv2D(512, 3, activation='relu', padding='same'),
    Conv2D(512, 3, activation='relu', padding='same'),
    MaxPool2D(2, 2),
    Flatten(),
    Dense(256, activation='relu'),
    Dense(2, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# model.summary()

dataset = tfds.load('cats_vs_dogs', split=tfds.Split.TRAIN, data_dir='data/')

def preprocess(features):
    # Resize to the network's input size and normalize pixel values to [0, 1]
    image = tf.image.resize(features['image'], (224, 224))
    return tf.cast(image, tf.float32) / 255., features['label']

dataset = dataset.map(preprocess).batch(32)
model.fit(dataset, epochs=10)
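
For reference, this is roughly the truncated variant that does learn for me. It keeps only the 64- and 128-filter conv blocks (with their pooling layers) and reuses the same head, compile settings, and dataset as above; the name small_model is just for illustration, and I'm reconstructing it from memory, so treat it as a sketch rather than my exact code:

# Truncated variant: only the first two conv blocks of the VGG stack (sketch)
small_model = tf.keras.Sequential([
    Conv2D(64, 3, activation='relu', padding='same', input_shape=(224, 224, 3)),
    Conv2D(64, 3, activation='relu', padding='same'),
    MaxPool2D(2, 2),
    Conv2D(128, 3, activation='relu', padding='same'),
    Conv2D(128, 3, activation='relu', padding='same'),
    MaxPool2D(2, 2),
    Flatten(),
    Dense(256, activation='relu'),
    Dense(2, activation='softmax')
])
small_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
small_model.fit(dataset, epochs=10)  # same batched cats_vs_dogs pipeline as above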