General Question on CPU/GPU processing

I have been doing some reading on GPU computing, NVIDIA chips, CUDA, parallel processing, etc. When we are doing our exercises in Python in Jupyter notebooks, does the chip hardware on our local machines affect performance in any way? Currently I am using a MacBook Air with an M2 chip, which has 8 CPU cores (4 performance and 4 efficiency - whatever that means) and 8 GPU cores. Are my GPU cores being used at all? Is there an optimal setup for the work we do in our courses?

If you’re using a Jupyter Notebook via Coursera Labs, the computation is not done locally - it’s done on the remote server, and only the results are displayed in your browser.
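
If you’re curious, you can check what hardware the notebook kernel actually sees. A minimal sketch, assuming TensorFlow is installed in whatever environment you run it in:

```python
import os
import tensorflow as tf  # assuming TensorFlow is available in the environment

# CPU cores visible to the kernel - these are the remote server's, not your Mac's
print("CPU cores visible:", os.cpu_count())

# GPU devices TensorFlow can use; an empty list means everything runs on the CPU
print("GPUs visible:", tf.config.list_physical_devices('GPU'))
```

Running this in a Coursera Lab reports the server’s hardware; running it locally reports your own machine’s.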

Yes, thanks, that’s clear now that I think about it. But the real intent behind the question was to find out whether anything at the server end (or in the software, for that matter) is making use of GPU computation and/or parallel computing.

When Andrew and his colleagues are doing their own research, do they use specialized hardware and/or software to speed up the training of their models?

I believe everyone working on a large deep learning solution is using a GPU, or an array of GPUs.

Would the code automatically use the GPU or array of GPUs, or would the code (which in our courses is basic Python, including NumPy, TensorFlow, etc.) need to be different?

There are options in the software tools that let you select the computing device (CPU or GPU).
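
For example, in TensorFlow you can pin an operation to a specific device explicitly. This is just a sketch of what that selection looks like, not something the course notebooks require:

```python
import tensorflow as tf

# Run a matrix multiply on the CPU explicitly
with tf.device('/CPU:0'):
    a = tf.random.normal((1000, 1000))
    b = tf.random.normal((1000, 1000))
    c_cpu = tf.matmul(a, b)

# If a GPU is visible, the same operation can be pinned to it instead
if tf.config.list_physical_devices('GPU'):
    with tf.device('/GPU:0'):
        c_gpu = tf.matmul(a, b)
```

By default TensorFlow places operations on a GPU automatically when one is available, so the code itself often doesn’t need to change. Spreading training across several GPUs, though, typically needs a distribution strategy such as tf.distribute.MirroredStrategy(), and plain NumPy code always runs on the CPU.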

OK. Let me ask the question in a different way, to better illustrate what I am getting at.

If you were an AI researcher and had an unlimited budget, you might get yourself an NVIDIA DGX A100 machine for $200,000. I have no idea how many GPU cores it has, but I’m sure it’s more than my Mac. Whatever software you would be using would undoubtedly exploit CUDA and parallel processing wherever possible. So what would be the minimal software configuration you would need on that platform?

Now let’s suppose you don’t have a spare $200K lying around, but have “only” $5K. What would be your ideal setup then?

If you think the question is worth passing on to Andrew, please do so.

It depends on what sort of systems you want to train, how often you need to re-train, and how large the datasets are.

I do not have any way to contact Andrew. I’m just a community volunteer.