Overall the plan sounds reasonable to me; I believe a couple of points are not certain to work out 100%, and of course you also have some trade-offs. Here are my comments:
- you should work with a validation (or dev) and test set in addition to your train set to make sure you do not overfit on the train set, see also: How and why do training and cross validations sets wear out in time? - #3 by Christian_Simonis
- depending on how close you are to your performance limits, you can think about model pruning before sending the weights to the edge device, see also: Pruning in Keras example | TensorFlow Model Optimization
- you can change the weights with the set_tensor() function in TFLite if you want to minimize data transfer and exploit prior knowledge of the frozen layers. Alternatively, you can learn the new weights in the fine-tuned model and then convert the whole model to a TFLite model; this is probably the easier way to get your system running as a first step. But if you need to reduce data transfer, you should probably not transfer the whole model but only the new weights, in compressed (zipped) form, also taking the communication protocol into account overall.
- when you change the model architecture (e.g. modifying the trainable layers), you need to think about the consequences with respect to data transfer, to make sure that your Raspberry Pi as edge device has all the information to interpret the new model correctly (besides the correct TF version, also knowledge of the current architecture, e.g. number of neurons, activation functions, …).
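To make the "transfer only the new weights, zipped" idea from the points above more concrete, here is a minimal stdlib-only sketch. It assumes a hypothetical fine-tuned head of 128 float32 weights (the frozen backbone stays on the device and is never re-sent), applies simple magnitude pruning before transfer, and compresses the serialized payload with zlib. The layer size, pruning ratio, and names are all illustrative, not from your project:

```python
import random
import struct
import zlib

# Hypothetical fine-tuned head: 128 weights, stored as float32 the way
# TFLite would hold them (the pack/unpack roundtrip quantizes to float32).
random.seed(0)
head_weights = [
    struct.unpack("f", struct.pack("f", random.uniform(-1.0, 1.0)))[0]
    for _ in range(128)
]

# Magnitude pruning: zero out the smallest 50% of weights by absolute value.
# Zeros both shrink the compressed payload and can speed up sparse inference.
threshold = sorted(abs(w) for w in head_weights)[len(head_weights) // 2]
pruned = [w if abs(w) >= threshold else 0.0 for w in head_weights]

# Serialize only the updated head weights and compress them for transfer.
raw = struct.pack(f"{len(pruned)}f", *pruned)
payload = zlib.compress(raw, level=9)
print(len(raw), len(payload))  # runs of zeroed weights compress well

# On the Raspberry Pi side: decompress, unpack, and write the values back
# into the corresponding layer (e.g. via set_weights() before conversion,
# or by patching the tensor in the TFLite model).
restored = list(struct.unpack(f"{len(pruned)}f", zlib.decompress(payload)))
assert restored == pruned
```

Note that both sides must agree on the layer shapes and byte order for the unpacking to be valid, which is exactly why the architecture metadata mentioned above needs to travel with (or be versioned against) the weight updates.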
As you see, some of your decisions on how to design your system really depend on how much margin you have to reach your non-functional requirements. But overall I believe you have a good rough action plan, which you can refine with new learnings along the journey!
I would be quite interested to hear whether your project works out as planned. Good luck and happy learning!
Best regards
Christian