Both are from the exercise (Convolution_model_Application), in the happyModel() function:
From exercise 1:
Exercise 1 - happyModel
Implement the happyModel function below to build the following model: ZEROPAD2D → CONV2D → BATCHNORM → RELU → MAXPOOL → FLATTEN → DENSE, using the layers in tf.keras.layers.
Also, plug in the following parameters for all the steps:
ZeroPadding2D: padding 3, input shape 64 x 64 x 3
Conv2D: Use 32 7x7 filters, stride 1
BatchNormalization: for axis 3
ReLU
MaxPool2D: Using default parameters
Flatten the previous output.
Fully-connected (Dense) layer: Apply a fully connected layer with 1 neuron and a sigmoid activation.
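For reference, here is a minimal sketch of how those steps can be assembled with tf.keras.Sequential. This follows the parameters listed above; the grader for the actual exercise may expect a particular structure, so treat this as an illustration rather than the official solution.

```python
import tensorflow as tf

def happyModel():
    """Sequential model: ZeroPadding2D -> Conv2D -> BatchNorm -> ReLU
    -> MaxPool2D -> Flatten -> Dense(1, sigmoid)."""
    model = tf.keras.Sequential([
        # Pad the 64 x 64 x 3 input with 3 pixels on each side
        tf.keras.layers.ZeroPadding2D(padding=3, input_shape=(64, 64, 3)),
        # 32 filters of size 7x7, stride 1
        tf.keras.layers.Conv2D(32, (7, 7), strides=(1, 1)),
        # Normalize along the channels axis (axis 3)
        tf.keras.layers.BatchNormalization(axis=3),
        tf.keras.layers.ReLU(),
        # Default pool size (2, 2) and default strides
        tf.keras.layers.MaxPool2D(),
        tf.keras.layers.Flatten(),
        # Single sigmoid unit for binary (happy / not happy) classification
        tf.keras.layers.Dense(1, activation='sigmoid'),
    ])
    return model
```

With a 64 x 64 x 3 input, padding 3 gives 70 x 70, the 7x7 stride-1 convolution brings it back to 64 x 64 x 32, and the default max pool halves it to 32 x 32 x 32 before flattening.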
Hint:
It is a mistake to use the data_format argument with the Conv2D layer. Use input_shape instead, though even that is probably not required. The input_shape argument isn't described in the documentation for the individual layer functions because it is inherited from the Model class.
Yeah… Once again, thank you very much, mentor. These exercises tend to be frustrating and tiring, but with the right amount of support for us students, we are able to overcome the difficulties and move on to the next exercises.