Hey guys. Recently I tried to implement Semantic Segmentation. However, since training in the Jupyter Notebook takes so long, I implemented it in Google Colab instead, which is recommended by the Mentors in this thread: Training takes so long - #11 by Nobu_Asai
But then, after getting the data sorted in Google Colab, one thing confuses me: why is the data different? As you can see, the first picture was taken in Jupyter Notebook while the second picture was taken in Google Colab.
Jupyter Notebook:
Oh, this came from the actual data. I haven't trained anything yet. This is a comparison of the data between Jupyter Notebook and Google Colab; the difference is in the segmentation map.
Something is different. Please verify that in your Colab environment you're using exactly the same versions of all the tools and software as are used in Coursera's labs.
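For example, a quick way to compare the two environments (assuming the lab uses TensorFlow, NumPy, and Pillow, which is typical for this assignment) is to run the same cell in both places and diff the output:

```python
import sys
import numpy as np
import PIL
import tensorflow as tf

# Run this cell in both the Coursera workspace and Colab, then compare.
print("Python     :", sys.version.split()[0])
print("TensorFlow :", tf.__version__)
print("NumPy      :", np.__version__)
print("Pillow     :", PIL.__version__)
```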
Interesting! In addition to Tom's "versionitis" point (which is always the first thing to check in a case like this), also check that you didn't modify any of the rendering logic. Note that those image files are 4-channel (RGBA) PNG files, so you need to correctly select the output channel for the mask images. It looks like you are "blending" the real images with the masks somehow.
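For reference, here is a minimal sketch of the kind of mask preprocessing I mean, assuming TensorFlow image ops; the function name process_path and the reduce_max reduction are just illustrative, not necessarily what your notebook uses:

```python
import tensorflow as tf

def process_path(image_path, mask_path):
    # Real image: decode the RGB channels and scale pixel values to [0, 1].
    img = tf.io.read_file(image_path)
    img = tf.image.decode_png(img, channels=3)
    img = tf.image.convert_image_dtype(img, tf.float32)

    # Mask: decode, then collapse the color channels down to a single
    # class-label channel (reduce_max is one common way to do this).
    mask = tf.io.read_file(mask_path)
    mask = tf.image.decode_png(mask, channels=3)
    mask = tf.math.reduce_max(mask, axis=-1, keepdims=True)
    return img, mask
```

If the mask channel isn't selected like this, what gets rendered is just raw image data instead of class labels, which would explain a "blended" look.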
Ah yes, my code was wrong. This is probably the simplest kind of mistake: I accidentally pointed mask_path at the CameraRGB directory instead of CameraMask. Silly me. Anyway, thanks guys!
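For anyone else hitting this, a sketch of the corrected path setup (the base path here is hypothetical; only the CameraRGB / CameraMask directory names come from the dataset):

```python
import os

# Hypothetical base path; substitute wherever your copy of the dataset lives.
path = '/content/drive/MyDrive/CARLA/'

image_path = os.path.join(path, 'CameraRGB/')   # real RGB frames
mask_path  = os.path.join(path, 'CameraMask/')  # segmentation masks -- not CameraRGB!

image_list = sorted(os.listdir(image_path))
mask_list  = sorted(os.listdir(mask_path))
print(len(image_list), len(mask_list))  # both counts should match
```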
Cool! That makes total sense now when we go back and look at the renders: instead of the mask, you just had one color channel of the real image. It’s always nice when the explanation makes sense. At my last company, we used to call that type of mistake a “copy pasta error”.