I am having trouble reaching the accuracy goal for this quiz.
I have tried a variety of models, especially using three sets of Conv2D + MaxPooling2D layers before two Dense layers. I have tried adding padding to preserve the size of each output layer, but this hasn't helped much. I am also using the Adam optimizer, which gives better results than the others I tried. I tried adding additional sets of these layers (tf.keras.layers.Conv2D(64, (3,3), activation='relu')) without any luck. I'm not sure what to try from here. None of the examples for Week 1 had this much complexity; I have reviewed all of the sample code and started by trying to replicate what was done there. I'm looking for hints or suggestions to point me in the right direction. Currently I am using batch_size=1500 and training for 15 epochs.
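For what it's worth, here is how the spatial size shrinks through three Conv2D + MaxPooling2D sets with and without padding, as a pure-Python sketch. The 150x150 input size is an assumption (it matches the typical course examples), and the helpers below are hypothetical, not the assignment's code:

```python
# Sketch: spatial size through three Conv2D(3x3, stride 1) + MaxPooling2D(2x2)
# stages. The 150x150 input is an assumption, not from the assignment.

def conv2d_out(size, kernel=3, padding="valid"):
    """Output size along one dimension for a stride-1 Conv2D."""
    return size if padding == "same" else size - kernel + 1

def maxpool_out(size, pool=2):
    """Output size for MaxPooling2D(2, 2); Keras floors the division."""
    return size // pool

def trace(size, padding):
    sizes = [size]
    for _ in range(3):  # three Conv2D + MaxPooling2D sets
        size = maxpool_out(conv2d_out(size, padding=padding))
        sizes.append(size)
    return sizes

print(trace(150, "valid"))  # [150, 74, 36, 17]
print(trace(150, "same"))   # [150, 75, 37, 18]
```

As the trace shows, padding="same" only buys you about one extra pixel per stage at this input size, which is consistent with padding not helping much here.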
What about trying a different batch size?
When creating the generators, it says that the expected output is the following:
Found 22498 images belonging to 2 classes.
Found 2500 images belonging to 2 classes.
So I am using batch_size=1500 for training and batch_size=166 for validation in order to get this result. This should be a valid answer, given that it matches what the quiz author expected.
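One thing worth noting: the "Found N images belonging to 2 classes" message comes from flow_from_directory scanning the directories, and it does not depend on batch_size at all. What batch_size changes is how many steps make up an epoch. A quick sanity check of the step counts for the sizes mentioned above:

```python
import math

# Image counts reported by the generators in the expected output.
train_images, val_images = 22498, 2500

for name, n, batch_size in [("train", train_images, 1500),
                            ("val", val_images, 166)]:
    steps = math.ceil(n / batch_size)  # Keras-style steps per epoch
    print(f"{name}: {steps} steps of up to {batch_size} images each")

# train: 15 steps of up to 1500 images each
# val: 16 steps of up to 166 images each
```

So any batch size will reproduce the "Found ... images" lines; the choice of 1500 and 166 is not what makes the output match.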
In the train_val_generators function, you set batch_size, right? Play with it…
I’m just confused because in other comments some people were saying to change the batch size to 30, while others were saying not to modify this value.
Okay, I’m trying new values (batch_size=900, then batch_size=100, and epochs=25).
I don’t think you need to change the epochs; it is set to 15. And 900 is a very big batch size, but let’s see what you get.
This worked, but it needed 27 epochs.
Try modifying batch_size=100 for both train_datagen and validation_datagen.
For me, with the layered architecture I used, it worked in fewer than 15 epochs.
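In case it helps, here is a minimal sketch of what a train_val_generators with batch_size=100 for both generators might look like. The directory arguments, rescaling, and target_size are assumptions based on the standard Keras API, not the assignment's actual code, so adapt them to your notebook:

```python
# Sketch of a generator setup using batch_size=100 for both train_datagen
# and validation_datagen. Paths, rescale, and target_size are assumptions.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def train_val_generators(training_dir, validation_dir):
    train_datagen = ImageDataGenerator(rescale=1.0 / 255)
    validation_datagen = ImageDataGenerator(rescale=1.0 / 255)

    train_generator = train_datagen.flow_from_directory(
        directory=training_dir,
        batch_size=100,          # same batch size for both generators
        class_mode='binary',
        target_size=(150, 150))

    validation_generator = validation_datagen.flow_from_directory(
        directory=validation_dir,
        batch_size=100,
        class_mode='binary',
        target_size=(150, 150))

    return train_generator, validation_generator
```

Calling this on the training and validation directories should still print the same "Found 22498 images ..." and "Found 2500 images ..." lines, since those counts come from the directory scan, not the batch size.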