Can I get a transcript of this course?

I need a transcript of this course. Is there any way to download one? I need it to take notes. Any help is appreciated.

Thank you.

Hey there,

You can use the subtitles while watching the video to take your notes. There might be other methods that I'm not aware of.

@108 on the video itself there is a little 'Transcript' button you can hit… But unfortunately this only shows the transcript inside the video window, not as 'raw text', which still requires a lot of scrolling.

I poked at the HTML for a few minutes but could not find any super easy way to extract it.
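If the player happens to serve its captions as a WebVTT track (an assumption on my part; I have not confirmed what this site's player actually uses), you could grab the `.vtt` file from the browser's network tab and flatten it to plain text with a small sketch like this:

```python
import re


def vtt_to_text(vtt: str) -> str:
    """Strip WebVTT header, cue numbers, and timestamp lines,
    keeping only the caption text joined into one string."""
    kept = []
    for line in vtt.splitlines():
        line = line.strip()
        # Skip header, blank lines, numeric cue identifiers, and timing lines
        if not line or line == "WEBVTT" or line.isdigit() or "-->" in line:
            continue
        kept.append(line)
    return " ".join(kept)


sample = """WEBVTT

1
00:00:00.000 --> 00:00:03.000
Welcome to the course.

2
00:00:03.000 --> 00:00:06.000
Let's get started."""

print(vtt_to_text(sample))
# Welcome to the course. Let's get started.
```

This is only a rough filter; real caption files can contain cue settings and styling tags that would need extra handling.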

I realize some would also find a transcript useful for accessibility purposes.

Ha, just for fun (and out of curiosity), I decided to run the first lecture in the series through the Google Live Transcribe app on my Pixel 6 (it is 'AI', and that is what we are studying, after all, isn't it?)

And, hahah, in some places it really f**** up (or it is obvious a lot of the technical terms were not part of the training set). But it at least gives you the gist, and perhaps for study purposes that's even better, because you'll read something, say to yourself 'well, that makes no sense', and go back to that point.

In any case, here is the raw, unedited transcript that came out:

4:50 AM
Welcome to introductions on device, AI built in partnership with Qualcomm and taught by Chris tutor. A modern smartphone may have 10 to 30 teraflops of compute power. When you take a picture, that smartphone may be running, dozens of AI models simultaneously for rotating semantic segmentation and scene understanding. In this course, you learn how to create AI applications that run on device. These techniques are applicable, not only to making your app potentially run on the about 7 billion smartphones out there. But also potentially billions of other devices, including cameras robots, drones AR VR cases and many more Despite the differences in hardware and operating systems, among all these devices, the principles of the key technical steps for deploying on device are actually quite similar. For many of these devices given the model they already trained back to the cloud to deploy it on device. The first step is model conversion, which means converting your model from say a pi total tensorflow framework into a format compatible with the on-device runtime in the set. The model is Frozen into a neural network graph which is then converted into an executable for the device devices such as smartphones and Edge devices often contain a mix of processing units including CPUs gpus and neuroprocessing units or npus knowing the exact devices, your app will run on allows for optimizations that can dramatically enhance performance. Sometimes making models run up to 10 times faster. You also got tools that hope you accomplish this across many different devices. And this is important because the amount of different smartphone Brands and models with your mobile app potentially running on maybe over 300 different smartphone types. It's then also important to ensure that your model performs consistently across these many different devices. 
This might mean validating the on-device numerical correctness, it causes broad range of devices to prevent cases, where model, offers correctly on one device, but not in another future Hardware differences. You learn how to do all this. And then lastly, quantization is also a common step of running on device AI models as you see in the real-time segmentation app. In this course, quantization can make your ad Grant several times faster and while resulting in much smaller model size. In our case about four times faster. With also, four times smaller model size. Our instructor, just a Shader is senior director of engineering and Qualcomm. He's been doing on device. AI for about the Credit card deployment on device infrastructure. That might well be running on your spine field right now. Tristan is directly helped deploy over a thousand models on devices and over a hundred thousand applications have used the tech, he and this team have built. Thanks Andrew. In this course, you'll first learn how to deploy an on-device model in order to reduce latency improve privacy as well as improve efficiency. You will deploy your first model on device with just a few lines of code. The model will do real-time segmentation From your camera stream. You will learn four key Concepts as part of this course. The first one is how to capture your model as a graph that can be portable and runnable on a device. The process of compilation of that graph for a specific device. The hardware acceleration of that model in order to run it efficiently on device as well as the process of validating. That particular model for numerical correctness on device. Finally, you will learn how to quantize a model so you can improve the performance by nearly 4X while also reducing the footprint of that particular model. Finally, we will integrate this particular model in an Android application that you can play around with many people. Have lost situated. 
This course, I'd like to thank from Qualcomm, Corey Watson, Gustav Larson. And silica Africa also ashmall gagari and Jeff Hardware from deep Also contributed to this course on-device, deployments of AI models is taking off and opens up. A lot of exciting capabilities for Builders of AI systems. Let's go on to the next video to get started. 

Best of luck,


*In particular, dropping the phoneme model entirely, at least with regard to somewhat 'obscure' names (i.e. your name is not 'John Smith')… was… perhaps not the best idea.

Yeah, I tried to do something with the HTML too, but I was unsuccessful.

Your idea of using the Google Live Transcribe app isn't really practical for the whole course.

Thanks for the reply though.

Well, I generally give the transcript to ChatGPT or another LLM to pull out the important points for me, which is why I wanted the transcript.
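If you do eventually get the raw text, one practical snag is that a whole lecture may be too long to paste into a chat window at once. A minimal sketch (the sentence-splitting heuristic and the character limit are my own assumptions, not anything the platform or any LLM requires) for splitting a transcript into pasteable chunks:

```python
import re


def chunk_transcript(text: str, max_chars: int = 8000) -> list[str]:
    """Split a transcript into chunks of at most max_chars characters,
    breaking on sentence boundaries where possible."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        # Start a new chunk if adding this sentence would exceed the limit
        if current and len(current) + len(sentence) + 1 > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks


parts = chunk_transcript("First sentence. Second sentence. Third one.", max_chars=20)
print(parts)
```

You can then paste each chunk in turn with a prompt like "summarize the key points".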

Short course transcripts are not available for learners.


For the earlier short courses, the transcripts and the ability to download the videos were available, but more recently those features were apparently removed. :frowning:

OK, thanks.