Does YOLO transfer learning need a high-performance computer?

Hi all,
Does transfer learning with YOLO on the COCO dataset need a high-performance computer and GPU, or is it easy? I’m asking because I want to retrain YOLO on this dataset for the person class only.

Looking at the model summary in the assignment, there are a lot of parameters (~51 million). Using a GPU is a good choice if you want to fine-tune all layers of the YOLO model.


Thanks for your reply.
But I have an MSI GTX 1070.
I can’t compile OpenCV and its DNN module with CUDA support.
I went through many tutorials on the internet, but I still couldn’t get my deep learning applications and models to run with GPU acceleration.
Is there any advice on how I can solve this problem?

Please click my name and message me your notebook as an attachment, along with the full trace of the compilation error. Before sending it over, please create a cell and run the following:

!nvcc --version # should print information about cuda version on your machine.

import tensorflow as tf
tf.test.is_gpu_available() # should be `True` here

Information about your OS would be helpful as well. Thanks.
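Since `tf.test.is_gpu_available()` is deprecated in recent TensorFlow versions, here is a minimal sketch of the newer equivalent check (nothing assignment-specific is assumed):

```python
import tensorflow as tf

# Newer replacement for the deprecated tf.test.is_gpu_available():
gpus = tf.config.list_physical_devices('GPU')
print(gpus)  # non-empty list when a GPU is visible to TensorFlow
```

If the list is empty, TensorFlow cannot see your GPU at all, which points at a CUDA/driver installation issue rather than a YOLO-specific one.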


Thanks for your reply.
The output of this code is True.
My GPU works fine with TensorFlow,
but when I use it for the YOLO object detector, it’s not working.

This is the output of the code above in my Jupyter notebook:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243
WARNING:tensorflow:From <ipython-input-4-ab2bd6b06785>:4: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
True

My operating system is Ubuntu 20.04.2 LTS.
How can I use my GPU for the YOLO object detector model as well?

I suppose your GPU is being used.
Since YOLO is designed to work even on resource-constrained devices, there are lots of techniques to reduce its computational cost; many people have ported it to hand-held devices or even a Raspberry Pi.
Because the workload for a single inference (detection) is small, it can look as if the GPU is idle. If you repeat predictions, say 100 times, you will see that the GPU is being used.
On the other hand, the GPU is fully loaded during training. I have my own YOLO implementation on the latest TensorFlow/Keras environment that uses the GPU, and training takes such a huge amount of time that a GPU is really necessary. Unfortunately, training is not included in this assignment, so there will be no chance to use your GPU fully here.
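One way to see this effect is to run predictions in a loop while watching GPU utilization in another terminal. Here is a minimal sketch using a small stand-in Keras model (not the actual YOLO network from the assignment); the shapes and layer choices are arbitrary illustrations:

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in for the YOLO model: a tiny conv net.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

batch = np.random.rand(8, 64, 64, 3).astype("float32")

# Repeat inference so the GPU stays busy long enough to show up in a
# monitoring tool; a single prediction finishes too quickly to notice.
for _ in range(100):
    preds = model.predict(batch, verbose=0)

print(preds.shape)
```

While the loop runs, `nvidia-smi` (or nvtop, mentioned below) should show non-zero GPU utilization, assuming TensorFlow detected the GPU.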

You can use tools like nvtop to see GPU usage over time.

I still don’t understand what you meant by

I can’t compile OpenCV and its DNN module with CUDA support.