Image Classification app running slow after upgrading Gradle Plugin

Hello everyone,
I downloaded and built the Image Classification app on my PC (latest version of Android Studio). I ran it on my smartphone, getting predictions in less than 1 second.
Then I noticed a message suggesting that I upgrade the Gradle Plugin, so I did.

Then I launched the app on my smartphone to check it still worked, but now I am getting predictions in more than 3 seconds!
Any idea about the possible cause?

Hello Francesco, welcome to the community.
There could be various reasons for this, mostly related to the Gradle update (Gradle is essentially the build platform): the new version might not be compatible with your dependency library versions, or it might handle old, slow code paths differently than the previous tooling did when compiling the Java files.
For example, since you updated the Gradle plugin and your inference time is now slow, one place I would check in your project is the TensorFlow dependency in your build.gradle.
The GitHub repo pins an old version of TFLite (0.0.0-nightly), while the latest TFLite release is 2.8.0. You can try switching to the latest version in your code and see whether that works better.
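A sketch of that kind of change, shown here in the Gradle Kotlin DSL (the sample's build.gradle uses the Groovy DSL, but the artifact coordinates are the same):

```kotlin
// app/build.gradle.kts -- a sketch of pinning a stable TFLite release.
dependencies {
    // Before: the nightly snapshot shipped with the sample
    // implementation("org.tensorflow:tensorflow-lite:0.0.0-nightly")

    // After: a pinned stable release
    implementation("org.tensorflow:tensorflow-lite:2.8.0")
}
```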

1 Like

Hello Gopi, thanks for your answer.
I changed to the latest release, but unfortunately there is no improvement.

1 Like

Okay. So then you might need to debug this further. You could start by adding logs to your code. Logs around the inference call should give you the time TFLite takes to run the model; you can add similar logs in other places. Once you note down the times, revert your Gradle change and capture the same logs again.
This should point out which part of your Android project is taking longer after the Gradle update.
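For example, a minimal sketch of a timing helper (the tag, the label, and the classifyFrame call in the usage comment are just placeholders for whatever the sample actually uses):

```kotlin
import android.os.SystemClock
import android.util.Log

// Logs how long any block takes, e.g. the TFLite inference call or
// ImageUtils.convertYUV420ToARGB8888(). Tag and label names are arbitrary.
inline fun <T> logTimed(label: String, block: () -> T): T {
    val start = SystemClock.elapsedRealtime()
    val result = block()
    Log.d("Timing", "$label took ${SystemClock.elapsedRealtime() - start} ms")
    return result
}

// Usage, assuming a classifier object with a classifyFrame() method:
// val result = logTimed("inference") { classifier.classifyFrame(bitmap) }
```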

1 Like

The “guilty” function is ImageUtils.convertYUV420ToARGB8888().

Great that you figured it out :+1:

Hi @Francesco_Tessitore and @gopiramena

When you say the following:

Do you mean you managed to use the image classification app with your own custom model? Or just the ready-made app?

In other words, did you manage to complete this assignment from the Coursera course?
Specifically this part:

Then, if you’re brave enough, you can edit the Image Detection app for Rock, Paper and Scissors only, instead of the 1,000 classes it could recognize!

If so, how did you do it? What code must be changed in Android Studio in order for the custom app to work?

Hi @Jaime_Gonzalez,
I meant the ready-made app. However, I did manage to modify the app to work with my model. To do that, please follow these steps (assuming you can run the ready-made app):

  1. Build the .tflite file for your application (be sure it is compatible with the example, i.e. it accepts RGB 224x224 images)
  2. Locate the folder “your_path”\image_classification\app\src\main\assets
  3. Substitute “mobilenet_v1_1.0_224_quant.tflite” with your model file
  4. Edit “labels_mobilenet_quant_v1_224.txt” to match your application and rename it appropriately
  5. Update mModelPath and mLabelPath in “your_path”\image_classification\app\src\main\java\com\google\tflite\imageclassification\sample\camera\Camera2BasicFragment.kt with the names of the files you just created (see the sketch after this list)
  6. Build your app in Android Studio
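A sketch of the step-5 change; the property declarations are an approximation of how the sample defines them, and the new file names are just examples:

```kotlin
// Camera2BasicFragment.kt -- step 5 sketch; exact declarations may differ.
// Before (the sample's defaults):
// private val mModelPath = "mobilenet_v1_1.0_224_quant.tflite"
// private val mLabelPath = "labels_mobilenet_quant_v1_224.txt"

// After, pointing at the files you placed in app/src/main/assets
// (these file names are hypothetical examples):
private val mModelPath = "my_model.tflite"
private val mLabelPath = "my_labels.txt"
```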

Hope this helps!

2 Likes

Hi @Francesco_Tessitore

Though I did manage to make it work eventually by doing all of this (“C2 W2 Optional Exercise: Rock Paper Scissors for Android - #6 by Jaime_Gonzalez”), I found that my app was quite terrible at its job.

Rarely were the predictions right or consistent. (If I was showing paper, I could get varying rock, paper, or scissors predictions.)

Was this the case for you, or were you getting accurate predictions when gesturing rock, paper, and scissors with your hands?

Thanks for your time

Jaime

For future learners:

I decided to search GitHub for a different image classification Android app (different from the one provided by the TF D&D specialisation) that worked well with my model, and eventually I found it: “GitHub - IJ-Apps/Image-Classification-App-with-Teachable-Machine: Android app that uses a TensorFlow Lite model for image classification of common objects, trained through Google's Teachable Machine.”

In this image classification Android app, if you simply replace the provided model (“model.tflite”) with your own (“converted_model.tflite”) and replace the provided labels {“Banana”, “Orange”, “Pen”, “Sticky Notes”} with your own {“Rock”, “Paper”, “Scissors”}, the app works well, i.e. the prediction accuracy is reasonable.
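That project is written in Java, but a rough Kotlin-style sketch of the kind of label change (the variable name is an assumption, not the app's actual identifier):

```kotlin
// A sketch only: swap the bundled labels for your own classes.
// The real IJ-Apps project is Java and its identifier may differ.
// Before:
// val labels = listOf("Banana", "Orange", "Pen", "Sticky Notes")

// After:
val labels = listOf("Rock", "Paper", "Scissors")
```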

1 Like

Hi @Jaime_Gonzalez
actually my model is for a completely different task, but it is not that bad.

As you wrote in the dedicated thread you created, I think that for Rock Paper Scissors the problem is related to data format compatibility. This is something I had to work on a lot (on the model side) as well.

A simple test: take a picture that gives a bad result in the app and verify whether the model running on your PC gives the right prediction for it.

However, there is no need to spend a long time on this investigation, since you found another suitable app.

Thank you for sharing!

1 Like

Hi @gopiramena

actually I was interested in the root cause of the problem: do you have any idea why upgrading the Gradle Plugin made the execution of ImageUtils.convertYUV420ToARGB8888() so much worse?

Difficult to say; it’s not a standard function provided by Android. It seems to be a function implemented for the demo, so I am not sure how it would behave across different environments/build tool versions. To speed it up, maybe you can run it in a background thread if your app logic/UI allows it, or look for an alternative to the function (using OpenCV might be faster).
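A minimal sketch of the background-thread idea using coroutines; the conversion is passed in as a lambda so the exact ImageUtils signature does not matter here, and whether this fits at all depends on your frame pipeline:

```kotlin
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch
import kotlinx.coroutines.withContext

// Sketch: run the demo's YUV -> ARGB conversion off the main thread, then
// hand the result back on the main thread for the inference/UI step.
fun convertFrameAsync(
    scope: CoroutineScope,
    convert: () -> IntArray,   // e.g. { ImageUtils.convertYUV420ToARGB8888(...); rgbBytes }
    onConverted: (IntArray) -> Unit
) {
    scope.launch(Dispatchers.Default) {
        val rgb = convert()                        // heavy pixel-format conversion
        withContext(Dispatchers.Main) { onConverted(rgb) }
    }
}
```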

1 Like

Hi @gopiramena,
after several attempts with different Android Studio releases, I am giving up: this community is about deep learning, not Android. As you highlighted in your last post, the problem is the different behaviour of the same code across different build tool versions, so I am marking that as the solution.

I did the 6 steps that @Francesco_Tessitore mentioned, except that in step 5 the file to change is tracking\DetectorActivity.kt instead of Camera2BasicFragment.kt (I believe there were some slight changes to the project between 2022 and 2023). As far as I can tell, there’s nowhere else that points to the model and label files, and I even did a string search from the command prompt for all files containing the old model name.

I’ve also rebuilt the app (step 6), but somehow it’s still displaying the old classes instead of rock, paper, and scissors.

I did not get the error that @Jaime_Gonzalez faced, although I tried those steps too, but without any difference in the outcome. Any ideas?

1 Like