Why does bias have [1,1,1,nc[l]] dimensions?

Hello,

In the video “One Layer of a Convolutional Network” it is mentioned that it is convenient to represent the bias b as an array with dimensions [1,1,1,nc[l]]. I understand why it should have depth nc[l], but why do we need the other dimensions?

Thanks
Henrikh

Because then you can index the bias array the same way you index the weights array. You will see this in the Week 1 assignment, in the conv_forward(A_prev, W, b, hparameters) implementation. However, this representation is not strictly necessary.
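The 4-D shape also works directly with NumPy broadcasting. Here is a minimal sketch of the idea; the shapes below are made up for illustration and are not taken from the assignment:

```python
import numpy as np

# Illustrative shapes only (not from the assignment):
m, n_H, n_W, n_C = 2, 4, 4, 8           # batch size, height, width, channels
Z = np.zeros((m, n_H, n_W, n_C))        # pre-activation output of a conv layer
b = np.random.randn(1, 1, 1, n_C)       # one bias value per output channel

# Broadcasting adds the per-channel bias at every example and
# every spatial position in one step:
Z = Z + b                               # still shape (m, n_H, n_W, n_C)

# A flat (n_C,) vector would broadcast identically here, which is
# why the 4-D shape is a coding convention, not a requirement.
```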


I see. If I got it right, it is just more convenient for W and b to have the same number of dimensions so that the code is less confusing. Am I right?

Henrikh

Yes, the goal is to make the for loops and indices in the exercise less confusing.
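To make that concrete, here is a sketch of the kind of inner loop the exercise has you write; the shapes and variable names here are hypothetical, not the assignment's exact code:

```python
import numpy as np

# Toy, hypothetical shapes just to show the parallel indexing:
f, n_C_prev, n_C = 3, 4, 8
W = np.random.randn(f, f, n_C_prev, n_C)   # one (f, f, n_C_prev) filter per output channel
b = np.random.randn(1, 1, 1, n_C)          # one scalar bias per output channel

a_slice = np.random.randn(f, f, n_C_prev)  # a window of the previous layer's activations

for c in range(n_C):
    # W and b are both sliced along their last axis, in the same way:
    weights = W[:, :, :, c]                # (f, f, n_C_prev)
    bias = b[0, 0, 0, c]                   # scalar
    z = np.sum(a_slice * weights) + bias   # one output value for channel c
```

Because both arrays put the channel index last, the same `[..., c]` pattern selects the filter and its bias, which keeps the loops uniform.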


Thanks! It’s clear to me now.
