Hello!
I’m starting my own transfer learning project. I used the course material as my project’s skeleton, but I have encountered this phenomenon:
When I run the same code on my computer (the network is the same as the assignment’s), the accuracy of my fine-tuned network plummets and ends up much worse than it was before fine-tuning.
I have also found that my model has one layer fewer (156) compared to the assignment (157). But the final layers are almost the same: mine are global_average_pooling2d_13 and predictions, while the assignment’s are global_average_pooling2d_7 and predictions.
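For reference, this is roughly how I’m checking the layer count and the final layer names (I’m assuming the MobileNetV2 model from tf.keras.applications here, which is what the names in my summary suggest):

```python
import tensorflow as tf

# Load the pretrained network; MobileNetV2 with the ImageNet
# classification head is my assumption, based on the layer names
base_model = tf.keras.applications.MobileNetV2(input_shape=(160, 160, 3),
                                               include_top=True,
                                               weights='imagenet')

print("Total layers:", len(base_model.layers))
print("Final layers:", [layer.name for layer in base_model.layers[-2:]])
# Prints something like ['global_average_pooling2d', 'predictions'];
# Keras appends a numeric suffix (_7, _13, ...) only when the same
# layer type has already been created earlier in the session
```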
I do not have a GPU currently; could this be the issue, or could there be another cause?
Normally, at the start of fine-tuning the accuracy will get worse, because it takes some time for the model to adapt to the new data. A GPU can accelerate the training phase, but even on a CPU the model will improve its performance if given enough time.
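As a sketch, a typical fine-tuning setup looks something like this (I’m assuming a Keras MobileNetV2 base like the assignment’s; every name here is a placeholder, not your exact code):

```python
import tensorflow as tf

# Sketch of a standard fine-tuning setup; all names are placeholders
base_model = tf.keras.applications.MobileNetV2(input_shape=(160, 160, 3),
                                               include_top=False,
                                               weights='imagenet')

# Unfreeze only the top portion of the pretrained network
base_model.trainable = True
fine_tune_at = 120                       # layers below this stay frozen
for layer in base_model.layers[:fine_tune_at]:
    layer.trainable = False

model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1),            # binary classification head
])

# Recompile with a much lower learning rate than the first training
# phase; a rate that is too large at this point is a common cause of
# a sudden accuracy collapse during fine-tuning
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])
```

The learning rate is the detail worth double-checking: if fine-tuning reuses the same rate as the initial phase, the pretrained weights can get disturbed badly in the first few epochs.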
Please share more of your knowledge about this topic! Sadly, the fine-tuned accuracy was 45%, which is much less than the 78% before fine-tuning. I don’t have a lot of data to train on, so I did my best.
I don’t have anything more to add. If everything is built as it’s supposed to be, just train longer and the performance will improve. When I say train longer, I mean train on the same images for more time.
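Concretely, something like this (again a sketch; `model`, `history`, and the datasets stand for whatever your notebook already defines):

```python
# Continue training on the same images for more epochs; `model`,
# `history`, `train_dataset`, and `validation_dataset` are placeholders
# for objects your notebook already defines
initial_epochs = len(history.epoch)      # epochs already completed
total_epochs = initial_epochs + 10       # train for 10 more

history_fine = model.fit(train_dataset,
                         epochs=total_epochs,
                         initial_epoch=initial_epochs,
                         validation_data=validation_dataset)
```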
This is the problem that I’ve encountered. It still seems very weird: I have trained for 6 more epochs, yet the drop in accuracy is still significant…