This is my current laptop config. Can I add an external GPU for a faster training process? Will it work? How do I utilize all the GPUs?
@SASWATA_MAITY it all depends on what program/software you are running, or even whether it is one you built/wrote yourself.
For example: Plain vanilla Python won’t just ‘magically’ start using your GPU unless you write a program that tells it to.
Here is some very sketchy background on machine learning and GPUs. I don’t know very much about it.
Some GPUs provide an interface called CUDA.
Some machine learning tools (like TensorFlow and PyTorch) can be configured to use CUDA.
This results in your GPU being used for the machine learning calculations.
So you can start by researching your laptop's GPU to see whether it supports CUDA.
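For instance, here is a minimal sketch of that check, assuming PyTorch is installed (TensorFlow users can do roughly the same thing with tf.config.list_physical_devices('GPU')):

```python
import torch

# True only if this PyTorch build has CUDA support *and* a
# CUDA-capable GPU with a matching driver is visible.
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    print("GPU count:", torch.cuda.device_count())
    print("GPU name: ", torch.cuda.get_device_name(0))
```

If that prints False, the usual culprits are a CPU-only PyTorch build or a missing/outdated NVIDIA driver.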
If it does not support CUDA, and if your laptop has a Thunderbolt (USB-C) interface, there are external GPUs you can connect to your laptop.
Sorry, not to repeat something I said elsewhere, but that is no longer true:
@Nevermnd What do you mean by "but no longer true"?
At this point, ZLUDA is not production-ready, based on this note:
Warning: this is a very incomplete proof of concept. It’s probably not going to work with your application. ZLUDA currently works only with applications which use CUDA Driver API or statically-linked CUDA Runtime API - dynamically-linked CUDA Runtime API is not supported at all
So I have a childhood friend who ended up on the AMD GPU driver team (though he now works for Qualcomm), and I actually have an AMD chip in my desktop, which I got more confident with after the XP series. I mean, yes, they were fast, but they could also readily catch fire. Sorry, scary.
With that said, I run an NVIDIA RTX, so I can't completely confirm… But unless AMD is totally out in left field at this point, this is more of a compiler issue, which I don't see as insurmountable.
Based on the work done in the repository, I don't think anything is going to hold the developer(s) back from publishing a production-ready version for public use. Until then, it's not a great idea, IMO, for enterprises or for learners with little experience to use such packages.
Hi @Nevermnd
He isn't commenting on your knowledge of NVIDIA; @balaji.ambresh's response is based on the learner's image, which shows the driver date as well as the NVIDIA GeForce card. Please avoid thinking everyone here is arguing. As a moderator and mentor, I can say that mentors go through training, and every mentor's main focus is, and should be, addressing the learner's query. Please refer to the FAQ section of the community guidelines for better understanding and communication.
Regards
DP
I was not looking to argue. I mean, this is getting a bit personal, but my 'daily driver' is a T470P, and honestly it is quite fine for 'most' of the things I want to do.
If I wish to go heavy, I go to my desktop.
But even the GeForce 940MX in the laptop supports CUDA…
This information should be helpful to the post creator. Thank you for sharing it.
Regards
DP
@SASWATA_MAITY while talking with a different mentor the other day, I also stumbled upon this:
I… have not had a chance to play with it yet, but this could potentially be an easy way to accelerate your Python apps without having to dive into the whole ‘Numba’ business.
*Not sure if you have to download the CUDA libraries first or if it is self-contained.
I have tried using torch, but the memory isn't enough.
For an external GPU, I don't know what to buy.
I don't have a PC; I have a laptop with Intel graphics and an NVIDIA RTX 1650 GPU. I don't know, maybe it isn't enough.
Let me try. Thanks a lot, everyone.
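As a rough check of whether the card's memory is the bottleneck, a small sketch like this (again assuming a CUDA-enabled PyTorch install; a GeForce 1650 typically has about 4 GB of VRAM) will print what PyTorch sees:

```python
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    # total_memory is reported in bytes; a GeForce 1650 usually shows about 4 GB.
    print(f"{props.name}: {props.total_memory / 1024**3:.1f} GB of VRAM")
    print(f"Currently allocated: {torch.cuda.memory_allocated(0) / 1024**2:.0f} MB")
else:
    print("PyTorch cannot see a CUDA device; check the driver and the PyTorch build.")
```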
Friend, I think (?) you mean GTX, not RTX, but as far as I can tell, yes, that also has CUDA: CUDA-Enabled GeForce 1650? - CUDA Setup and Installation - NVIDIA Developer Forums
*I would say, though, it is one thing if you are writing your own programs. The memory issue can be a problem, although at the cost of speed you could probably work around it with paging to disk. But if you are using an 'out of the box' package, unfortunately you may have fewer options.
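If you are writing your own training loop, a hedged sketch of the usual PyTorch tricks for fitting training into a small GPU looks something like this (the model, sizes, and optimizer below are made up purely for illustration): smaller 'micro' batches with gradient accumulation, plus mixed precision via torch.cuda.amp.

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Toy model and optimizer, purely for illustration.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler(enabled=(device.type == "cuda"))

accum_steps = 4    # accumulate gradients over 4 small batches...
micro_batch = 16   # ...instead of holding one batch of 64 in memory

optimizer.zero_grad()
for step in range(accum_steps):
    # Dummy data standing in for one DataLoader micro-batch.
    x = torch.randn(micro_batch, 784, device=device)
    y = torch.randint(0, 10, (micro_batch,), device=device)

    # Mixed precision (float16 on the GPU) cuts activation memory roughly in half.
    with torch.cuda.amp.autocast(enabled=(device.type == "cuda")):
        loss = loss_fn(model(x), y) / accum_steps

    scaler.scale(loss).backward()  # gradients add up across the micro-batches

scaler.step(optimizer)  # one optimizer step for the whole "virtual" batch of 64
scaler.update()
```

Shrinking the batch and accumulating gradients trades speed for memory much like paging does, but everything stays on the GPU, and mixed precision helps further on cards that support it.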