DLS Course 4: Suspect pool_backward is wrong in Assignment: Convolutional Model, Step by Step

In both the “max” and “average” versions, we have:

 # dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += None

I feel it should be

# dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, :] += None

Using c to index dA_prev seems wrong, because c indexes the channels of the current layer, while dA_prev belongs to the previous layer.

Also, if my claim above is correct, the corresponding validation values must be wrong too, because when I tried what I consider the “correct” method, the validation did not pass.

Looking forward to hearing your insights, or to having my statement proven wrong :). Thank you!

BTW: I did finish the assignment using the current logic, so I’m not blocked, but I really think that logic is wrong, which would mean my “successful” answer is wrong as well!

c refers to the current channel. Given that forward pooling is done on a per-channel basis, why would you recommend cross-channel interaction in the backward pass?
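
Here is a minimal, self-contained sketch of the update for one window and one channel in “max” mode (toy shapes and variable names of my own, not the graded notebook code). Note that the read from A_prev and the write to dA_prev both use the same channel index c:

```python
import numpy as np

# Toy shapes chosen just to illustrate the indexing -- not the assignment's test case.
np.random.seed(1)
A_prev = np.random.randn(1, 4, 4, 3)   # previous layer activations, n_C_prev = 3
dA = np.random.randn(1, 2, 2, 3)       # gradient w.r.t. the pooled output, n_C = 3
dA_prev = np.zeros_like(A_prev)

i, h, w, c = 0, 0, 0, 2                # one example, one output position, one channel
f, stride = 2, 2
vert_start, vert_end = h * stride, h * stride + f
horiz_start, horiz_end = w * stride, w * stride + f

# "max" mode: the gradient flows only to the position that held the max,
# and it stays inside channel c -- the same c on both sides of the assignment.
a_prev_slice = A_prev[i, vert_start:vert_end, horiz_start:horiz_end, c]
mask = (a_prev_slice == np.max(a_prev_slice))
dA_prev[i, vert_start:vert_end, horiz_start:horiz_end, c] += mask * dA[i, h, w, c]

# "average" mode would instead spread dA[i, h, w, c] evenly over the f x f window,
# again writing only into channel c of dA_prev.
```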

Ha, I see what you mean. Since pooling doesn’t change the number of channels, here n_C_prev == n_C, so it is safe to use c in [0, n_C) to index dA_prev as well.
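
Just to convince myself, a quick toy check (shapes of my own choosing, not the notebook’s test case) that 2x2 pooling with stride 2 keeps the channel count, so dA_prev and dA share the same channel axis:

```python
import numpy as np

A_prev = np.random.randn(1, 4, 4, 3)                # n_C_prev = 3
# Quick 2x2 / stride-2 max pool via reshape, just to compare shapes.
A = A_prev.reshape(1, 2, 2, 2, 2, 3).max(axis=(2, 4))
print(A_prev.shape, A.shape)                        # (1, 4, 4, 3) (1, 2, 2, 3)
print(A_prev.shape[-1] == A.shape[-1])              # True: n_C_prev == n_C
```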

Thanks