I’m confused by the following question. Since the explanation says that stacking more layers could increase training performance, I take that to also mean it won’t hurt performance.
I’m not a native speaker of English. Does “won’t hurt” have some special meaning here?
My understanding is that, given two inception networks N1 and N2, if N1 is deeper than N2, then N1.performance >= N2.performance. Is that correct? Isn’t the explanation in red saying the same thing as the option?
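To make my understanding concrete, here is a toy numpy sketch (not from the course; all names like `n1`, `n2`, and `extra_layer` are mine). The idea I have in mind is: if the extra layer appended to build the deeper network N1 can represent the identity function, then N1 can reproduce the shallower network N2 exactly, so in the best case N1’s training performance should be at least as good as N2’s.

```python
import numpy as np

rng = np.random.default_rng(0)

def n2(x, W):
    """Shallower network: a single linear layer (toy stand-in)."""
    return x @ W

def extra_layer(h, W_extra, b_extra):
    """Additional layer appended to build the deeper network N1."""
    return h @ W_extra + b_extra

def n1(x, W, W_extra, b_extra):
    """Deeper network: N2 followed by one extra layer."""
    return extra_layer(n2(x, W), W_extra, b_extra)

d = 4
x = rng.normal(size=(3, d))
W = rng.normal(size=(d, d))

# Set the extra layer to the identity mapping: W_extra = I, b_extra = 0.
W_extra = np.eye(d)
b_extra = np.zeros(d)

# With that setting, N1 computes exactly the same function as N2,
# so adding the layer cannot make the best achievable fit worse.
assert np.allclose(n1(x, W, W_extra, b_extra), n2(x, W))
```

Is this the right way to read the “won’t hurt” claim, or does it only hold under extra assumptions (e.g. skip connections that make the identity easy to learn)?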