Course 2 week 2 assignment

Hi Everyone,

I am trying to do the assignment for the Convolutional Neural Networks course (course #2, week #2).

So far I have had little luck actually reaching the required 80% accuracy on the training and validation sets within 15 epochs.

Should the cell with the number of epochs remain unchanged, or can I set a more reasonable number like 100 (and add a callback as well)?

I don’t see a “graded cell” comment, but I also don’t see any instructions that say to modify the cell.

Has anyone managed to reach the result in 15 epochs?
Is it supposed to be 15 epochs?
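
For context, the kind of callback I mean is the accuracy-threshold pattern from the labs. A minimal sketch (the class name and the 0.8 threshold are my own choices, not from the assignment):

```python
import tensorflow as tf

class AccuracyThresholdCallback(tf.keras.callbacks.Callback):
    """Stops training once training accuracy reaches a threshold (0.8 here)."""

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        if logs.get('accuracy', 0.0) >= 0.8:
            print('\nReached 80% accuracy, stopping training.')
            self.model.stop_training = True
```

It would then be passed to `model.fit(..., callbacks=[AccuracyThresholdCallback()])`.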

Please read this topic.

You need neither to change the number of epochs nor to use callbacks. Just play with the architecture of the model. The instructions mention that "you should use at least 3 convolution layers to achieve the desired performance." I just checked a solution I did some time ago: I used 4 conv layers and 4 maxpool layers (alternating conv → maxpool), then 1 flatten layer and 6 dense layers. You can experiment with yours.
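
For illustration only, an alternating conv → maxpool stack in Keras might look like the following. The filter counts and dense-layer size are assumptions, not the graded solution:

```python
import tensorflow as tf

# A sketch of the alternating conv -> maxpool pattern described above.
# Filter counts (32/64/128) and the Dense(512) size are illustrative guesses.
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu',
                           input_shape=(150, 150, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),  # binary cats-vs-dogs output
])

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
```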

Thank you for the tip. Here is what I tried:

conv2d_11 (Conv2D) (None, 148, 148, 128) 3584
max_pooling2d_9 (MaxPooling2D) (None, 74, 74, 128) 0
conv2d_12 (Conv2D) (None, 72, 72, 256) 295168
max_pooling2d_10 (MaxPooling2D) (None, 36, 36, 256) 0
conv2d_13 (Conv2D) (None, 34, 34, 512) 1180160
max_pooling2d_11 (MaxPooling2D) (None, 17, 17, 512) 0
conv2d_14 (Conv2D) (None, 15, 15, 512) 2359808
max_pooling2d_12 (MaxPooling2D) (None, 7, 7, 512) 0
conv2d_15 (Conv2D) (None, 5, 5, 512) 2359808
flatten_2 (Flatten) (None, 12800) 0
dense_14 (Dense) (None, 1024) 13108224
dense_15 (Dense) (None, 1024) 1049600
dense_16 (Dense) (None, 1024) 1049600
dense_17 (Dense) (None, 1024) 1049600
dense_18 (Dense) (None, 1024) 1049600
dense_19 (Dense) (None, 1024) 1049600
dense_20 (Dense) (None, 1) 1025

I also use:

train_datagen = ImageDataGenerator(rescale=1./255,
                                   rotation_range=20,
                                   width_shift_range=0.2,
                                   height_shift_range=0.2,
                                   shear_range=0.1,
                                   zoom_range=0.2,
                                   horizontal_flip=True,
                                   fill_mode='nearest')

train_generator = train_datagen.flow_from_directory(directory=TRAINING_DIR,
                                                    batch_size=4,
                                                    class_mode='binary',
                                                    target_size=(150, 150))

The training results still seem to be lacking…

Every epoch takes forever; I ran out of Colab credits yesterday, so I am trying again today.

Epoch 1/15
5625/5625 [==============================] - 286s 50ms/step - loss: 0.6941 - accuracy: 0.4974 - val_loss: 0.6934 - val_accuracy: 0.5000
Epoch 2/15
5625/5625 [==============================] - 282s 50ms/step - loss: 0.6933 - accuracy: 0.4996 - val_loss: 0.6933 - val_accuracy: 0.5000
Epoch 3/15
5625/5625 [==============================] - 285s 51ms/step - loss: 0.6932 - accuracy: 0.5076 - val_loss: 0.6938 - val_accuracy: 0.5000
Epoch 4/15
5625/5625 [==============================] - 289s 51ms/step - loss: 0.6780 - accuracy: 0.5861 - val_loss: 0.6638 - val_accuracy: 0.6140
Epoch 5/15
5625/5625 [==============================] - 286s 51ms/step - loss: 0.6594 - accuracy: 0.6443 - val_loss: 0.6549 - val_accuracy: 0.5564
Epoch 6/15
5625/5625 [==============================] - 278s 49ms/step - loss: 0.6414 - accuracy: 0.6717 - val_loss: 0.5643 - val_accuracy: 0.7296
Epoch 7/15
5625/5625 [==============================] - 284s 50ms/step - loss: 0.6244 - accuracy: 0.6902 - val_loss: 1.0983 - val_accuracy: 0.7240
Epoch 8/15
5625/5625 [==============================] - 284s 51ms/step - loss: 0.6083 - accuracy: 0.7037 - val_loss: 0.5676 - val_accuracy: 0.7384
Epoch 9/15
5625/5625 [==============================] - 282s 50ms/step - loss: 0.5976 - accuracy: 0.7140 - val_loss: 0.5288 - val_accuracy: 0.7504
Epoch 10/15
5625/5625 [==============================] - 282s 50ms/step - loss: 0.5920 - accuracy: 0.7188 - val_loss: 0.5927 - val_accuracy: 0.7552
Epoch 11/15
5625/5625 [==============================] - 281s 50ms/step - loss: 0.5923 - accuracy: 0.7297 - val_loss: 0.5593 - val_accuracy: 0.7548
Epoch 12/15
5625/5625 [==============================] - 286s 51ms/step - loss: 0.5929 - accuracy: 0.7339 - val_loss: 0.5858 - val_accuracy: 0.6976
Epoch 13/15
5625/5625 [==============================] - 296s 53ms/step - loss: 0.5991 - accuracy: 0.7272 - val_loss: 0.5166 - val_accuracy: 0.7588
Epoch 14/15
5625/5625 [==============================] - 278s 49ms/step - loss: 0.6027 - accuracy: 0.7287 - val_loss: 0.6152 - val_accuracy: 0.6896
Epoch 15/15
5625/5625 [==============================] - 278s 49ms/step - loss: 0.6036 - accuracy: 0.7215 - val_loss: 0.5121 - val_accuracy: 0.7628

So far I can’t see how I can get from 50% accuracy to over 80% accuracy in 15 epochs…

Based on the network structure above, could you provide some more hints I could use?

Just wanted to clarify: this is not for a competition, but rather just to get the checkmark on the assignment. When I tried the machine learning courses 5 years ago, I had the same problem: you just have to guess the “lucky” numbers, which I find very frustrating. You literally have a better chance of winning the jackpot in a “6 out of 52” lottery than of getting a “pass” in an assignment like this, especially when they crank up the accuracy requirement…

I don’t think this code is correct. Have you looked at the ungraded labs?

Also, try different batch sizes.
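
For scale: 5625 steps per epoch at batch size 4 implies roughly 22,500 training images, so a larger batch size cuts the number of gradient steps per epoch proportionally (the candidate sizes below are just examples):

```python
# 5625 steps/epoch at batch_size=4 implies 5625 * 4 = 22500 training images.
num_images = 5625 * 4

# Raising batch_size in flow_from_directory shrinks steps per epoch,
# which is what makes each epoch finish faster.
for batch_size in (4, 20, 64, 128):
    steps = -(-num_images // batch_size)  # ceiling division
    print(f"batch_size={batch_size:>3} -> {steps:>5} steps per epoch")
```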

Somehow, by luck, I managed to guess the right numbers. Thank you for the hints!