Learner Error: Grader ran out of memory

Are you creating a really large model, or does this post help?

I’m not creating a large model. My model has only 3 convolution layers and 2 dense layers.

Please click my name and message your notebook as an attachment.

The number of parameters in your model is 339,832,129. You can see this by calling model.summary() and noting the number of trainable parameters.
There’s a dense layer with 339,738,752 parameters, which is likely causing your submission to fail.

There’s room for the model to get smaller.
One tip is to follow every conv layer with a tf.keras.layers.MaxPooling2D layer. When I tried this on your model, the number of parameters went down to 4,828,481, and the dense layer before the prediction layer now has 4,735,104 parameters.
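To see why pooling helps so much, here’s a quick sketch of the parameter arithmetic. The layer sizes (a 150×150×3 input, three 3×3 conv layers with 32/64/128 filters, then Dense(128) and Dense(1)) are my assumption, inferred from the counts quoted above, so treat them as illustrative rather than your exact notebook:

```python
# Parameter-count arithmetic for a Conv2D -> Flatten -> Dense stack.
# Assumed shapes: 150x150x3 input, 3x3 'valid' convs with 32/64/128
# filters, Dense(128) then Dense(1) — inferred from the counts above.

def conv_params(in_ch, filters, k=3):
    # each filter has k*k*in_ch weights plus one bias
    return (k * k * in_ch + 1) * filters

def dense_params(in_units, units):
    # in_units weights per unit plus one bias per unit
    return (in_units + 1) * units

def total_params(with_pooling):
    side, channels, total = 150, 3, 0
    for filters in (32, 64, 128):
        total += conv_params(channels, filters)
        side -= 2                # a 'valid' 3x3 conv shrinks each side by 2
        if with_pooling:
            side //= 2           # MaxPooling2D(2, 2) halves each side
        channels = filters
    flat = side * side * channels  # Flatten() output size
    total += dense_params(flat, 128)   # dense layer before prediction
    total += dense_params(128, 1)      # prediction layer
    return total

print(total_params(with_pooling=False))  # 339832129
print(total_params(with_pooling=True))   # 4828481
```

The Flatten() output drops from 144×144×128 = 2,654,208 units to 17×17×128 = 36,992, and since the first dense layer's weight count is proportional to that, the model shrinks by roughly 70×.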

Now that you know how to reduce model size, please optimize until you pass the grader thresholds.


Thanks for your help.

Hi! I have the same issue, and I’m sure my model is not too big (there’s a pooling layer after each of the three convolution layers), and I also seem to be invoking model.fit without causing a recursive call. Yet the grader runs out of memory.

Please click my name and message your notebook as an attachment.