Serving ML models in the ungraded lab

I was doing the ungraded lab in week one, and some of the concepts are new to me.

The API is coded using FastAPI, but the serving is done with uvicorn.

For example: what is “serving”, and what is the difference between “deploying” an ML model and “serving” it?

Hi Bassel and welcome to the forum,
When a model is trained and ready for production use, you deploy it to the production environment, where it is served to users. Deployment is the one-time act of packaging the model and putting it into the production environment; serving is the ongoing process of exposing that deployed model (for example behind an HTTP API) so that applications can send it inputs and receive predictions.

While the training phase is mostly in the realm of Data Scientists, serving is in the domain of Data Engineers.