I have just finished the Week 2 assignment and I am looking for a way to save the trained zombie detector model. For some reason they showed us how to build a custom model and train it, but they did not show us how to export or save the model. Does anybody know how to adapt our training loop and save the model?
The online resources I found always use command line arguments.
Sorry, I was not clear enough. I am practicing with the same model, but now I am using different datasets. The issue is that once I train the model, I do not know how to save it so that I can run inference in a different Jupyter notebook. It also takes a long time to train with the new dataset (25 images), so I do not want to retrain it every time; I would like to just play with the trained model.
Do you want to save the model, or download the zip of zombie images?
If you specifically want to download the zombie model, you will have to run the model locally and then save it.
There is TensorFlow documentation on how to save a model in different formats, which is publicly accessible. That should help if you only want to save the model.
The zombie detector is built with RetinaNet, so it should be fun practice to do locally, but make sure your module versions match the requirements.txt file.
@vivdon, are you just asking how to save the results.data so you can submit it? If so, this code at the end of the assignment will save the file to your computer:
from google.colab import files
files.download('results.data')
Yes, I have already submitted my results; that is not the issue. I would like to save the trained model so that I can play with it in a different notebook, instead of running the same notebook and waiting for it to train every time.
It seems you are asking how to save a model in TensorFlow? This tutorial shows you that, and you can find some example code there. Please remember, after you have saved a model, to try loading it back (into a different variable name so that the original model does not get replaced) to make sure everything is fine.
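For example, a minimal save-and-verify sketch could look like this, assuming model is a plain trained tf.keras.Model with a single output (the file name and the sample batch shape are just placeholders, not assignment code):

import tensorflow as tf

# save the trained model in the Keras format ...
model.save('my_model.keras')

# ... then load it back into a different variable, so the original model is not replaced
restored_model = tf.keras.models.load_model('my_model.keras')

# quick sanity check: the original and restored models should agree on a sample batch
# (the input shape here is only an example -- adjust it to your model)
sample_batch = tf.random.uniform([1, 640, 640, 3])
print(tf.reduce_max(tf.abs(model(sample_batch) - restored_model(sample_batch))))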
Hello Raymond,
Thank you for your response.
Yes, I tried it, but it does not work. I am using the Object Detection API and build the model with model = model_builder.build(model_config=model_config, is_training=True), so model.save('main_model.keras') and tf.keras.Model.save(model, f'my_model.keras') do not work. Maybe I have to save the pipeline_config.config file together with the trained weights? I am not sure how to do that.
BR,
Vivek
If you don’t mind sending me your notebook by replying to the direct message I will send you in a bit, I could look into it. I need to find out what’s not working and see if I can figure out how to make it work.
Sorry for the late reply, but I found a solution after some trial and error. I had tried model.save('main_model.keras') and tf.keras.Model.save(model, f'my_model.keras') to save the trained model, and that did not work, but then I found one cool solution:
import tensorflow as tf

@tf.function(input_signature=[tf.TensorSpec(shape=[None, 640, 640, 3], dtype=tf.float32)])
def detect_fn(input_tensor):
    # run the detection model's preprocess() and predict() steps
    preprocessed_image, shapes = model.preprocess(input_tensor)
    prediction_dict = model.predict(preprocessed_image, shapes)
    # use the detection model's postprocess() method to get the final detections
    detections = model.postprocess(prediction_dict, shapes)
    return detections

# Save with signatures so the exported SavedModel includes pre- and post-processing
tf.saved_model.save(model, f'{OUTPUTS_DIRS}', signatures={"serving_default": detect_fn})
I did not know that we could use signatures to save the pre-processing and post-processing steps along with the model.
We can load the model again with tf.saved_model.load and call the saved serving signature for inference.
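Something like this should work (just a rough sketch; the variable names and the dummy image are placeholders, and OUTPUTS_DIRS is the same export directory used above):

import tensorflow as tf

# load the exported SavedModel and get the serving signature we saved above
loaded = tf.saved_model.load(f'{OUTPUTS_DIRS}')
loaded_detect_fn = loaded.signatures['serving_default']

# run inference on a dummy batch of one 640x640 RGB image (replace with a real image tensor)
dummy_image = tf.zeros([1, 640, 640, 3], dtype=tf.float32)
detections = loaded_detect_fn(input_tensor=dummy_image)

# detections is a dictionary of output tensors (boxes, scores, classes, etc.)
print(detections.keys())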