Hi, I have a question about a speech-to-text application. A single training data instance is 16 MB, but a real-time data instance is 1600 MB, and this creates a bottleneck: we can load fewer instances at run time. If the size of the real-time instances could be reduced, we could serve more users.
My question is: can we apply data normalization to the real-time data as well, the way we do for the training data? Or is there some other solution? Can anyone guide me? Thanks in advance.
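For concreteness, here is a minimal sketch of what "the same normalization for real-time data" could look like. The function names, the synthetic arrays, and the mean/std scheme are all assumptions for illustration (the actual pipeline may normalize differently); the point is that statistics are fitted once on training data and then reused unchanged on incoming real-time chunks:

```python
import numpy as np

def fit_normalizer(train_audio: np.ndarray):
    """Compute normalization statistics once, from training data only."""
    return float(train_audio.mean()), float(train_audio.std())

def normalize(audio: np.ndarray, mean: float, std: float) -> np.ndarray:
    """Reuse the training-time statistics on any new (real-time) chunk."""
    return (audio - mean) / (std + 1e-8)

# Synthetic stand-ins for real recordings (hypothetical data)
rng = np.random.default_rng(0)
train = rng.normal(loc=1.0, scale=3.0, size=16_000).astype(np.float32)
mean, std = fit_normalizer(train)

# A "real-time" chunk is normalized with the SAME stats, not its own
live_chunk = rng.normal(loc=1.0, scale=3.0, size=16_000).astype(np.float32)
normalized = normalize(live_chunk, mean, std)
```

Note that this keeps training and inference inputs on one scale, which is the usual reason to normalize both; whether it also shrinks the byte size of each instance depends on what the normalization outputs, so that part of the question still stands.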