Join our new short course, Efficiently Serving Large Language Models, taught by Travis Addair, CTO at Predibase, to build a ground-up understanding of how to serve LLM applications. Whether you’re ready to launch your own application or just getting started building it, the topics you’ll explore in this course will deepen your foundational knowledge of how LLMs work and help you better understand the performance trade-offs you must consider when building LLM applications that will serve large numbers of users.
It is great that you are adding free courses on hot topics and state-of-the-art information.
It would be awesome if you could add closed captions; even machine-generated captions are a significant improvement for non-native English speakers.
Is this the right place to ask questions about the course? I am on lesson 1. I executed the notebook cell that generates 10 tokens and times the process, and it took more than 90 seconds on the DeepLearning.AI hardware versus roughly 1 second for the instructor. Is that expected? Is the instructor running on hardware that is much more performant than what students have access to? That is fine, of course, but I want to adjust my expectations accordingly. For reference, a rough sketch of the kind of timing loop I ran is below. Thanks for what looks like a great course!
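(This is only a minimal sketch of how one might time token-by-token generation with Hugging Face Transformers; the model name, prompt, and loop are my assumptions, and the actual notebook code may differ.)

```python
# Rough sketch: time greedy generation of 10 tokens, one forward pass per token
# (no KV cache), using an assumed small model for illustration.
import time

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption; the course notebook may use a different model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

inputs = tokenizer("The quick brown fox", return_tensors="pt")
input_ids = inputs["input_ids"]

start = time.time()
with torch.no_grad():
    for _ in range(10):  # generate 10 tokens, one at a time
        outputs = model(input_ids)
        # pick the highest-probability next token (greedy decoding)
        next_token = outputs.logits[:, -1, :].argmax(dim=-1, keepdim=True)
        input_ids = torch.cat([input_ids, next_token], dim=-1)
elapsed = time.time() - start

print(f"Generated 10 tokens in {elapsed:.2f} s ({elapsed / 10:.2f} s per token)")
print(tokenizer.decode(input_ids[0]))
```

On a CPU-only environment a loop like this can easily take tens of seconds, while on a GPU it finishes in about a second, which may explain the gap you are seeing.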
Hi, Chrome has an option to activate Live Caption for videos.