Why add another dimension to data in Wk 4 Classifying Sign Languages

In Week 4’s assignment we are told:

"# In this section you will have to add another dimension to the data

So, for example, if your array is (10000, 28, 28)

You will need to make it (10000, 28, 28, 1)

Hint: np.expand_dims"

Why do we have to expand the dimensions?

Why can’t the input shape of the first layer of the model just be:
input_shape=(28, 28)

?

Thanks

Hi @pablowilks,
Because the first dimension is the batch size, in this case 10000, and the remaining ones are height × width × channels. With that last value you are telling the model that these images have 1 channel, i.e. they are black and white (grayscale).
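As a small sketch of what the assignment's hint is asking for (the array here is a zero-filled stand-in for the real training data, just to show the shapes):

```python
import numpy as np

# Stand-in for the assignment's training array: 10000 grayscale 28x28 images
images = np.zeros((10000, 28, 28))

# Add a trailing channel axis: (10000, 28, 28) -> (10000, 28, 28, 1)
images = np.expand_dims(images, axis=-1)

print(images.shape)  # (10000, 28, 28, 1)
```

`axis=-1` appends the new axis at the end, which is where Keras expects the channel dimension by default (the `channels_last` data format).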

Best

Many thanks! Makes sense that it is bw and so channel dimension would be 1.
Why isn’t the channel dimension implicit though? Isn’t (10000, 28, 28) the same thing as (10000, 28, 28, 1)?

No, it is not. It cannot be implicit because you can also have colour images, with 3 channels instead of 1. In any case, Conv2D expects each image in a (height, width, channels) format, so the channel axis has to be there even when it is 1.

Best

great, thanks. makes sense!