Have you checked with the NVIDIA System Management Interface (nvidia-smi)? It reports the current status of your GPU devices, including memory usage.
Here is a representative example of the output (the exact driver version, GPU model, and numbers will differ on your system):
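```
$ nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.85.12    Driver Version: 525.85.12    CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla T4            Off  | 00000000:00:04.0 Off |                    0 |
| N/A   45C    P0    27W /  70W | 14950MiB / 15360MiB  |      0%      Default |
+-------------------------------+----------------------+----------------------+
| Processes:                                                                  |
|  GPU        PID   Type   Process name                            GPU Memory |
|=============================================================================|
|    0       1234      C   python                                    14948MiB |
+-----------------------------------------------------------------------------+
```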
You can run this command-line interface from a cell in a Jupyter notebook by adding "!" in front of the nvidia-smi command. If the "Memory-Usage" column shows that someone is already using your resource, one option is to kill the remaining processes, which can be slightly dangerous. Or, of course, you can just wait, or request another instance that does not have any leftover processes.
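For instance, a notebook cell might look like the sketch below. The PID (1234) is a placeholder that you would read off the "Processes" table in the nvidia-smi output; only kill processes you own, since terminating someone else's process will break their job:

```python
# In a Jupyter notebook cell, "!" runs the rest of the line in the shell.
!nvidia-smi

# If a stale process is holding GPU memory, terminate it by its PID
# (taken from the "Processes" table above). 1234 is a placeholder PID.
!kill -9 1234
```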
In any case, knowing the GPU memory usage may help you decide on the next step if you get the same error again.
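If you prefer to check memory usage from Python itself rather than parsing nvidia-smi output, a minimal sketch using the pynvml bindings (assuming the pynvml package is installed) could look like this:

```python
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU (index 0)
info = pynvml.nvmlDeviceGetMemoryInfo(handle)

# Values are reported in bytes; convert to MiB for readability.
print(f"total: {info.total / 1024**2:.0f} MiB")
print(f"used:  {info.used / 1024**2:.0f} MiB")
print(f"free:  {info.free / 1024**2:.0f} MiB")

pynvml.nvmlShutdown()
```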
