One of the things that’s impressed me about this content is that every learner gets a hosted Jupyter notebook. Since I’m interested in cloud architecture, I was wondering if anyone could provide some insight into what technologies are used on the back end to support this. Do users get dedicated instances? Container orchestration? How is the service scaled? This would be an interesting point of discussion for those of us looking to handle ML workloads at scale.
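I don’t know what this course actually runs, but one common pattern for multi-user notebooks is JupyterHub on Kubernetes, where each learner’s server is spawned as its own pod rather than a dedicated VM. Purely as a sketch of that pattern (the image name, resource limits, and idle timeout below are hypothetical choices, and `c` is the config object JupyterHub provides), a `jupyterhub_config.py` might look like:

```python
# jupyterhub_config.py -- minimal sketch of JupyterHub + KubeSpawner
# on Kubernetes. An illustration of one possible architecture, not a
# description of this course's actual backend.

# Spawn each user's notebook server as a separate Kubernetes pod,
# so learners are isolated without dedicated instances.
c.JupyterHub.spawner_class = "kubespawner.KubeSpawner"

# Pin the single-user image and cap per-user resources so the
# cluster autoscaler can pack pods predictably.
c.KubeSpawner.image = "jupyter/scipy-notebook:latest"  # hypothetical image
c.KubeSpawner.cpu_limit = 1
c.KubeSpawner.mem_limit = "2G"

# Cull idle servers to reclaim capacity; node-level scaling is then
# handled by the Kubernetes cluster autoscaler.
c.JupyterHub.services = [
    {
        "name": "idle-culler",
        "command": ["python3", "-m", "jupyterhub_idle_culler", "--timeout=3600"],
    }
]
```

With this kind of setup, "scaling" mostly means the orchestrator adding or removing worker nodes as pods come and go, which is why per-user resource limits matter.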
If it’s any help, all of the major cloud providers offer these kinds of services as far as I know. You can try them; they offer a variety of scalable infrastructures for writing and running code. The MLOps and PDS specializations have some insights into these.
Yeah, I was wondering specifically how these services are configured in this case. I look forward to taking those specializations.