This is an important question for me. I am working on the homework “Transfer_learning_with_MobileNet_v1”.
Regardless of the MobileNet we are using here, my question is: how do I know that I should pick
tf.keras.applications.MobileNetV2 for the purpose of alpaca vs. non-alpaca classification? In other words, if my new project were sci-fi alien vs. non-sci-fi alien (assuming it still uses MobileNet), would I still load this same model?
Q1. My understanding is that the pre-trained model I pick must have something similar to my target project, where the similarity means its training data should be somewhat related to mine. (I am not talking about the network architecture.) Yes or no?
Q2. If the answer to Q1 is yes, what is the training data for the pre-trained network
tf.keras.applications.MobileNetV2? The reference talks about the structure and functions, but not about the training data… or is this not important at all? As long as people use MobileNet, do we just load it no matter what my project target looks like?
Of course, if the chosen model was trained on photos of objects that look very similar to the objects you want to detect, then your choice is more likely to be a good one! However, I won’t say you must pick such a model, because sometimes it simply doesn’t exist. Also, the more CNN layers you retrain, the more you can shift the chosen model toward your target dataset. In the end, you decide which model to select among your candidates by their validation scores. So there is no model that you must pick, and you probably want to try more than one candidate if you are very uncertain.
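To make the “retrain more layers to shift the model” point concrete, here is a minimal Keras sketch, assuming the binary alpaca vs. not-alpaca task from the assignment. Note the hedges: `weights=None` is used only so the sketch runs offline (in practice you would pass `weights="imagenet"`, which downloads the pretrained weights), and the choice of unfreezing the last 20 layers is arbitrary, just to illustrate partial fine-tuning.

```python
# Sketch: transfer learning with a pretrained base, then partial fine-tuning.
import tensorflow as tf

IMG_SIZE = (160, 160)

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,),
    include_top=False,   # drop the 1000-class ImageNet head
    weights=None,        # use weights="imagenet" in practice
)
base.trainable = False   # first stage: freeze all pretrained features

# New head for the binary target task (alpaca vs. not-alpaca).
inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = base(inputs, training=False)          # keep BatchNorm in inference mode
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(1)(x)     # one logit for the binary decision
model = tf.keras.Model(inputs, outputs)

# Second stage: unfreeze the top few layers to shift the model toward
# your own dataset. The "last 20 layers" choice is illustrative only.
base.trainable = True
for layer in base.layers[:-20]:
    layer.trainable = False
```

You would then compile and fit this model on each candidate base network and compare their validation scores, as described above.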
If you check out this method, go to the source code, and backtrace from there, you will see at least two things:
- that it is using imagenet_utils (you can google “imagenet” to learn what it is)
- that this is the file which stores the names of the 1000 classes. Even without the images, you can look at the list to make some initial judgements.
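For reference, that file (imagenet_class_index.json) maps each class index to a pair of [WordNet id, class name]. The snippet below parses an illustrative excerpt of it; the three entries shown match the real file, but the full file has entries "0" through "999".

```python
# Inspecting the structure of imagenet_class_index.json via a small excerpt.
import json

excerpt = """{
  "0":   ["n01440764", "tench"],
  "1":   ["n01443537", "goldfish"],
  "999": ["n15075141", "toilet_tissue"]
}"""

class_index = json.loads(excerpt)
names = [name for _, name in class_index.values()]
print(names)  # ['tench', 'goldfish', 'toilet_tissue']
```

Scanning the 1000 names this way is a quick first check of whether your target objects are at least in the neighborhood of what the network was trained on.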
Raymond has really covered your questions, but here are a couple more general points that might be worth adding:
For any of the published models like MobileNet, ResNet, EfficientNet, VGG and so forth that are candidates for transfer learning, you should be able to find documentation about the model, including its architecture and all the details of the training data and how the training was performed. For example, here’s the Keras page about all three of the MobileNet models. Please have a look and you’ll notice it points to lots of other reference material.
Here’s the Keras page one level higher that lists all the pretrained models they have.
There may be other considerations besides accuracy and the similarity of the training data to your target data. E.g. in the specific case of the MobileNet models, they are designed to work well on smaller platforms in “predict only” mode: they take advantage of the depthwise separable convolutions that Prof Ng explains in C4 W2, so the models have far fewer parameters and thus need less memory and compute power to run in “predict” mode while still giving excellent accuracy, making them very good choices for smaller platforms like phones.
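The parameter savings from depthwise separable convolutions can be checked with a quick back-of-the-envelope calculation, using the per-layer cost formulas from C4 W2 (the layer shape below is an arbitrary example, not taken from MobileNet itself):

```python
# Weight count of one conv layer: standard vs. depthwise separable.
k, c_in, c_out = 3, 128, 256   # 3x3 kernel, 128 input -> 256 output channels

# Standard convolution: one k x k x c_in filter per output channel.
standard = k * k * c_in * c_out

# Depthwise separable: a k x k depthwise filter per input channel,
# followed by a 1x1 pointwise convolution mixing channels.
depthwise_separable = k * k * c_in + c_in * c_out

print(standard, depthwise_separable, round(standard / depthwise_separable, 1))
# 294912 33920 8.7
```

Roughly an 8-9x reduction for this layer shape, which is why these models run comfortably on phones.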
Thanks for those details.
I did open
imagenet_class_index.json, and I saw those class names. That’s very good :)
But may I ask where I could directly find out what the training data for those networks looks like? Is there a fast way to see those pictures directly? I know you said to just google
imagenet_utils. I did, but all I found was the code (or do I have to check the details in the code and download the pictures to see them?).
Try just googling “imagenet”. They have a website, which also links to a Kaggle challenge involving imagenet, from which it’s easy to download the dataset.
Thanks, Paul. I saw the data download link.