Why does the data seem different in Google Colab?

Hey guys. Recently I tried to implement Semantic Segmentation. However, since training in Jupyter Notebook takes so long, I implemented it in Google Colab, as recommended by the mentors in this thread: Training takes so long - #11 by Nobu_Asai
But after getting the data sorted out in Google Colab, one thing confuses me: why has the data changed? The first picture was taken in Jupyter Notebook, while the second was taken in Google Colab.
Jupyter Notebook:


Google Colab:

As you can see in the pictures, the segmentation maps are very different. Is there any solution for this type of problem? Thanks ahead!

At a guess, I’d say your Colab code doesn’t work correctly.
Or perhaps you did not train for long enough.

Oh, this came from the actual data. I haven’t trained anything yet. This is a comparison of the data between Jupyter Notebook and Google Colab; the difference is in the segmentation map.

Something is different. Please verify that in your Colab you’re using exactly the same versions of all of the tools and software as are used in Coursera’s labs.

Interesting! In addition to Tom’s “versionitis” point (which is always the first thing to check in a case like this), also check to make sure you didn’t modify any of the rendering logic. Note that those image files are 4 channel (RGBA) PNG files. You need to correctly select the output channel for the mask images. It looks like you are “blending” the real images with the masks somehow.
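For reference, here is a minimal sketch of the kind of mask-loading logic I mean. This is an illustration, not necessarily the lab’s exact code, and the `reduce_max` step assumes the class index can be recovered as a per-pixel maximum over the color channels:

```python
import tensorflow as tf

def load_mask(mask_path):
    """Load a segmentation mask PNG as a single-channel class map."""
    raw = tf.io.read_file(mask_path)
    # decode_png with channels=3 drops the alpha channel of an RGBA file
    mask = tf.image.decode_png(raw, channels=3)
    # Collapse to one value per pixel; assumes the class index is recoverable
    # as the per-pixel max over the color channels (adjust if your data differs)
    mask = tf.math.reduce_max(mask, axis=-1, keepdims=True)
    return mask
```

If you plot that single-channel tensor (e.g. `plt.imshow(mask[:, :, 0])`), it should come out as flat colored regions; if it still looks like a washed-out photograph, the wrong file or the wrong channel is being read.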

Here is mine. Absolutely no problem.

Looks like your data is broken…

Thanks for the advice, guys! I’ll do some debugging and see what caused the problem.

What I did is quite simple.

  1. Download all files from the Coursera platform (Files.zip).
  2. Upload Files.zip to Google Drive.
  3. From Google Colab, mount Google Drive.
  4. Copy Files.zip into the Colab environment.
  5. Unzip Files.zip.
  6. Move the data and the two Python files to the appropriate directories.

That’s all. You do not actually need to touch the data. A rough sketch of the Colab commands for these steps is below.
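In case it helps, this is roughly what steps 3 to 5 look like inside a Colab cell. The Drive path is just an assumption; adjust it to wherever you uploaded Files.zip:

```python
from google.colab import drive
import shutil, zipfile

# Step 3: mount Google Drive into the Colab runtime
drive.mount('/content/drive')

# Step 4: copy Files.zip into the Colab environment
# ('MyDrive/Files.zip' is an assumed location -- change it to match your Drive)
shutil.copy('/content/drive/MyDrive/Files.zip', '/content/Files.zip')

# Step 5: unzip Files.zip
with zipfile.ZipFile('/content/Files.zip') as zf:
    zf.extractall('/content/')
```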

Ah yes, my code is wrong. It was probably the simplest possible mistake: I accidentally set mask_path to the CameraRGB path instead of the CameraMask path. Silly me. Anyway, thanks, guys!
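For anyone who makes the same slip, this is roughly what the fix looks like. The base directory here is only an example; the point is that the mask paths must come from the mask folder, not the RGB folder:

```python
import os

data_dir = './data'                              # example base directory (adjust to yours)
image_dir = os.path.join(data_dir, 'CameraRGB')  # real images
mask_dir = os.path.join(data_dir, 'CameraMask')  # masks -- this was wrongly set to CameraRGB

image_paths = sorted(os.path.join(image_dir, f) for f in os.listdir(image_dir))
mask_paths = sorted(os.path.join(mask_dir, f) for f in os.listdir(mask_dir))
```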

Cool! That makes total sense now when we go back and look at the renders: instead of the mask, you just had one color channel of the real image. It’s always nice when the explanation makes sense. At my last company, we used to call that type of mistake a “copy pasta error”. :laughing: