Additionally, I found answers in the Keras documentation:
The ideal machine learning model is end-to-end
In general, you should seek to do data preprocessing as part of your model as much as possible, not via an external data preprocessing pipeline. That’s because external data preprocessing makes your models less portable when it’s time to use them in production. Consider a model that processes text: it uses a specific tokenization algorithm and a specific vocabulary index. When you want to ship your model to a mobile app or a JavaScript app, you will need to recreate the exact same preprocessing setup in the target language. This can get very tricky: any small discrepancy between the original pipeline and the one you recreate has the potential to completely invalidate your model, or at least severely degrade its performance.
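For the text case, this is what baking preprocessing into the model looks like with Keras preprocessing layers. A minimal sketch (the vocabulary, layer sizes, and training corpus are illustrative, not from the docs): the `TextVectorization` layer learns a vocabulary via `adapt()` and then lives inside the model, so the exported model carries its own tokenization.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Learn a vocabulary from a (toy) corpus, then bake tokenization into the model
vectorizer = layers.TextVectorization(max_tokens=1000, output_sequence_length=8)
vectorizer.adapt(["the cat sat", "the dog ran", "a bird flew"])

inputs = keras.Input(shape=(1,), dtype=tf.string)  # raw utf-8 strings in
x = vectorizer(inputs)                             # tokenization is part of the graph
x = layers.Embedding(1000, 16)(x)
x = layers.GlobalAveragePooling1D()(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs, outputs)

# The consumer passes raw strings; no external tokenizer to recreate
preds = model(tf.constant([["the cat ran"]]))
print(preds.shape)  # (1, 1)
```

Because the vocabulary index is stored inside the layer, exporting this model (e.g. as a SavedModel) ships the exact tokenization along with the weights.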
It would be much easier to simply export an end-to-end model that already includes preprocessing. The ideal model should expect as input something as close as possible to raw data: an image model should expect RGB pixel values in the [0, 255] range, and a text model should accept strings of utf-8 characters. That way, the consumer of the exported model doesn't have to know about the preprocessing pipeline.
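The image case can be sketched the same way. A minimal illustration (the architecture is arbitrary, chosen only to be small): a `Rescaling` layer inside the model converts raw [0, 255] pixel values itself, so callers never normalize anything.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Model that accepts raw [0, 255] RGB pixels; normalization is part of the model
inputs = keras.Input(shape=(32, 32, 3))
x = layers.Rescaling(1.0 / 255)(inputs)   # preprocessing baked into the graph
x = layers.Conv2D(8, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(10, activation="softmax")(x)
model = keras.Model(inputs, outputs)

# The consumer feeds raw pixel values directly
raw = np.random.randint(0, 256, size=(4, 32, 32, 3)).astype("float32")
preds = model(raw)
print(preds.shape)  # (4, 10)
```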