I think it is common to use the labs for a given week's lesson as the basis for solving the assignment, especially for the new technique being taught.
When I got to the assignment, I started by crafting a solution that reused the labs' prototypes, and I ran into some trouble:
The Input layer defined in the obvious implementation of create_augmentation_model() is not seen by the unit test(s) once that sub-model is stacked as a layer inside the create_model() function. And if you abstract the non-augmentation layers into their own function, following the prototype introduced in lab 1, and stack that sub-model in create_model() as well, the test also errors about not seeing the Rescaling layer. In short, the unit test(s) don't seem to look deep enough into the generated model object to see that these layers are defined. That is a failure of the course materials. A simple workaround is to stack everything explicitly in one model = tf.keras.models.Sequential([...]) statement, but this violates the lesson of modularity demonstrated in the labs. A minimal sketch of the two shapes follows.
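To make the trouble concrete, here is a minimal sketch of the two structures. The 150x150x3 input size and the specific layer choices are my illustrative assumptions, not the assignment's exact code. The point is that a nested Sequential shows up as a single entry in model.layers, which is presumably what trips up the tests:

```python
import tensorflow as tf

def create_augmentation_model():
    # Augmentation-only sub-model, following the labs' modular prototype.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(150, 150, 3)),       # assumed image size
        tf.keras.layers.RandomFlip("horizontal"),
        tf.keras.layers.RandomRotation(0.2),
    ])

# Modular version: the sub-model appears as ONE layer, so a test that scans
# model.layers for the augmentation/Rescaling layers will not find them.
nested = tf.keras.Sequential([
    create_augmentation_model(),
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
print([type(layer).__name__ for layer in nested.layers])
# ['Sequential', 'Rescaling', 'Flatten', 'Dense']

# Workaround the tests accept: stack every layer explicitly in one Sequential.
flat = tf.keras.models.Sequential([
    tf.keras.Input(shape=(150, 150, 3)),
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.2),
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
print([type(layer).__name__ for layer in flat.layers])
# ['RandomFlip', 'RandomRotation', 'Rescaling', 'Flatten', 'Dense']
```

Both models behave identically at train time; only the introspected layer list differs, which is why the flattened version passes.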
I too initially had problems reaching the required 80%/80%. I fumbled around with this and wasted some time, but I also learned new relationships between the layers and, I hope, reinforced the lesson. As a hint: the lesson is about augmentation, so when you don't reach 80%/80% it makes sense to go back to that key point, rather than the non-augmentation layers, to find the problem. That said, examining the details of the optimizer can also be very helpful. The sketch below shows the two dials without giving away values.
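As a non-spoiler illustration, these are the two dials worth turning. The specific factors and the learning rate are mine, not the assignment's answer, and I'm assuming the RandomFlip/RandomRotation/RandomZoom family of layers and an RMSprop optimizer, as I recall the labs used; check your own notebook:

```python
import tensorflow as tf

# Dial 1: augmentation strength. Too aggressive and training accuracy stalls
# below the target; too gentle and validation accuracy does. These ranges
# are illustrative starting points only.
augmentation_layers = [
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.1),
]

# Dial 2: the optimizer. A smaller learning rate trades speed for stability.
optimizer = tf.keras.optimizers.RMSprop(learning_rate=1e-4)  # assumed choice

# model.compile(optimizer=optimizer,
#               loss="binary_crossentropy",  # assumes a binary assignment
#               metrics=["accuracy"])
```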
The training-and-validation graph is really erratic with respect to the validation accuracy and loss. I would really like someone to answer here, or to add to the lesson, why that is, since it is different from what we've seen before.