Serving TFX Pipeline

Hi! I guess I have missed some intuition about serving TFX Pipeline.
Here are my questions: How do I deploy a TFX Pipeline with all the tracking steps like ExampleValidator, TFMA, etc.? Is TF Serving only for deploying a pure model, as in MLOps level 0? Because, as I understand it, I can only embed tf.Transform as a layer.
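To make the "embed tf.Transform as a layer" idea concrete, here is a minimal conceptual sketch in plain Python (no TFX/TensorFlow dependencies; all names are illustrative). The point is that TF Serving only hosts the exported model artifact, and preprocessing travels with it only because it is composed into the serving function, the way a `tft.TransformFeaturesLayer` is baked into a SavedModel:

```python
# Conceptual sketch, NOT real TFX code: stand-ins for the pieces
# that end up inside the SavedModel served by TF Serving.

def transform(raw):
    # Stand-in for the tf.Transform preprocessing graph
    # embedded in the exported model.
    return {"x_scaled": raw["x"] / 100.0}

def model(features):
    # Stand-in for the trained model.
    return 2.0 * features["x_scaled"]

def serving_fn(raw_request):
    # TF Serving executes transform + model as one graph per request.
    # Pipeline components (StatisticsGen, ExampleValidator, Evaluator)
    # are NOT part of this request path.
    return model(transform(raw_request))

print(serving_fn({"x": 50.0}))  # 1.0
```

This is why only the Transform step "survives" into serving: it is part of the model graph, while the other components are pipeline-time artifacts.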

Thanks!

Hello,

In the next course of the specialization you will see some of the answers to your questions.

TFX Pipeline with all tracking steps like ExampleValidator, TFMA, etc.? - I think this is dealt with in Course 3, including all the necessary steps.

Is TF Serving only for deploying a pure model like in MLOps level 0? - Not just level 0; it can be used for higher levels as well.

Thanks for the quick answer!

Well, I have actually completed all of the specialization courses (I just wasn't sure which tag to use for my question, sorry). And I have looked through all the labs from courses 3 and 4, but haven't found what you were talking about.

Now I think I see where the misunderstanding is.

Am I correct in saying that it's wrong to run all the pipeline steps (like StatisticsGen, ExampleValidator, etc.) while serving, because it would increase latency? Instead, one should run all the pipeline steps (except Trainer and Pusher, if it's only for monitoring purposes) in offline mode over a batch, using a scheduler or something similar?
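The online/offline split described above can be sketched as follows. This is a toy, stdlib-only illustration (all names are made up, and the "validation" is deliberately trivial); in practice the offline job would be a scheduled TFX pipeline run triggered by cron, Airflow, or a similar orchestrator:

```python
REQUEST_LOG = []

def serve(request):
    """Online path: low latency, model only."""
    REQUEST_LOG.append(request)      # log raw traffic for later analysis
    return 2.0 * request["x"]

def offline_validation_job(batch):
    """Offline path: run on a schedule, never per request."""
    stats = {"count": len(batch),
             "mean_x": sum(r["x"] for r in batch) / len(batch)}
    # Toy stand-in for ExampleValidator: flag out-of-range inputs.
    anomalies = [r for r in batch if r["x"] < 0]
    return stats, anomalies

# Serving handles traffic...
for x in (1.0, 2.0, -3.0):
    serve({"x": x})

# ...and a scheduler later triggers the batch job over logged requests.
stats, anomalies = offline_validation_job(REQUEST_LOG)
print(stats)       # {'count': 3, 'mean_x': 0.0}
print(anomalies)   # [{'x': -3.0}]
```

The key design point is that the request path stays minimal, while statistics and anomaly detection run asynchronously over logged batches.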

I think you are correct in your understanding. You have the schemas and execution graphs for serving ready to go.

Model validation happens once in a while in the background, especially if changes to data or behaviour are detected as time goes on.