Week 1 Assignment 1: numpy arrays differ from tensor structure?

Hi there. The assignments and videos for week 1 say that the arrays are structured in the following format:
numpy array of shape (n_H_prev, n_W_prev, n_C_prev)

With the number of channels (n_C_prev) being the last dimension.

But when you create:
a_slice_prev = np.random.randn(4, 4, 3)  # as defined in the assignment
you get FOUR layers/channels of 4 (h) x 3 (w) arrays.

array([[[ 1.62434536, -0.61175641, -0.52817175],
        [-1.07296862,  0.86540763, -2.3015387 ],
        [ 1.74481176, -0.7612069 ,  0.3190391 ],
        [-0.24937038,  1.46210794, -2.06014071]],

       [[-0.3224172 , -0.38405435,  1.13376944],
        [-1.09989127, -0.17242821, -0.87785842],
        [ 0.04221375,  0.58281521, -1.10061918],
        [ 1.14472371,  0.90159072,  0.50249434]],

       [[ 0.90085595, -0.68372786, -0.12289023],
        [-0.93576943, -0.26788808,  0.53035547],
        [-0.69166075, -0.39675353, -0.6871727 ],
        [-0.84520564, -0.67124613, -0.0126646 ]],

       [[-1.11731035,  0.2344157 ,  1.65980218],
        [ 0.74204416, -0.19183555, -0.88762896],
        [-0.74715829,  1.6924546 ,  0.05080775],
        [-0.63699565,  0.19091548,  2.10025514]]])

Am I missing something here?

I think it’s just a question of how to interpret the output you are seeing. Note that there are three levels of nested square brackets, which means it’s a 3D array. Its shape is 4 x 4 x 3, so numpy prints it as 4 (the size of the first dimension) arrays of size 4 x 3.
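To make that concrete, here’s a small sketch (seeded so it reproduces the same values as the printout above) showing that the 4 blocks numpy prints are just the slices along the first dimension:

import numpy as np

np.random.seed(1)
a_slice_prev = np.random.randn(4, 4, 3)

print(a_slice_prev.shape)      # (4, 4, 3) -> 4 rows, 4 columns, 3 channels
print(a_slice_prev.ndim)       # 3 -> three levels of brackets in the printout
print(a_slice_prev[0].shape)   # (4, 3) -> the first of the 4 blocks numpy prints
print(a_slice_prev[0])         # matches the first block shown above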

You could just as easily interpret it as a 4 x 4 array of vectors with 3 elements each. It’s just a question of how you look at it. But if you wanted it printed that way, you’d have to write the logic yourself, because numpy has its own fixed convention for printing.
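If you did want to see it channel by channel (three 4 x 4 layers), one way is to slice or move the channel axis yourself. A minimal sketch, assuming the same a_slice_prev as above:

import numpy as np

np.random.seed(1)
a_slice_prev = np.random.randn(4, 4, 3)

# Print each of the 3 channels as a 4 x 4 matrix by slicing the last axis.
for c in range(a_slice_prev.shape[-1]):
    print(f"channel {c}:")
    print(a_slice_prev[:, :, c])

# Equivalent view: move the channel axis to the front, giving shape (3, 4, 4),
# so numpy prints 3 blocks of 4 x 4.
print(np.moveaxis(a_slice_prev, -1, 0).shape)   # (3, 4, 4)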

The question of arrays versus tensors can be viewed as a) just a terminology question or b) a difference between numpy and other packages like TensorFlow and PyTorch.

In numpy, they don’t use the term tensor and they just call them arrays with arbitrarily many dimensions. You have the usual containment relationship:

scalars ⊂ vectors ⊂ matrices ⊂ arrays

Just as every square is a rectangle, but not every rectangle is a square, a matrix is an array with exactly 2 dimensions. So every matrix is an array, but not every array is a matrix.
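Here’s a quick numpy sketch of that containment in terms of the ndim attribute (the variable names are just for illustration):

import numpy as np

scalar = np.array(5.0)           # 0 dimensions
vector = np.array([1.0, 2.0])    # 1 dimension
matrix = np.ones((2, 3))         # exactly 2 dimensions -> a matrix
volume = np.zeros((4, 4, 3))     # 3 dimensions, like the assignment arrays

for name, a in [("scalar", scalar), ("vector", vector),
                ("matrix", matrix), ("volume", volume)]:
    print(name, a.ndim, a.shape)
# scalar 0 ()
# vector 1 (2,)
# matrix 2 (2, 3)
# volume 3 (4, 4, 3)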

In TensorFlow, everything is referred to as a tensor, and a tensor can have any number of dimensions (including 0).

It’s hard to visualize things in more than 3 dimensions, of course. Here’s a thread that shows some relevant examples of how to flatten 3D and 4D image arrays into vectors and matrices that we can deal with using our neural networks.
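As a rough sketch of that kind of flattening (the sizes here are just an example, not taken from any particular assignment), a batch of m images of shape (m, n_H, n_W, n_C) can be reshaped into a matrix with one flattened image per column:

import numpy as np

m, n_H, n_W, n_C = 10, 64, 64, 3         # example sizes only
X = np.random.randn(m, n_H, n_W, n_C)    # 4D batch of images

# Flatten each image into a column: (m, n_H, n_W, n_C) -> (n_H * n_W * n_C, m)
X_flatten = X.reshape(X.shape[0], -1).T

print(X_flatten.shape)   # (12288, 10)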

Of course, here in Course 4 we are learning about neural networks that can take inputs with more than 2 dimensions, without first “flattening” them.
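For example, a convolutional layer consumes the (n_H, n_W, n_C) volume directly. A minimal Keras sketch (the layer sizes are arbitrary, just to show the input shape; this is not code from the assignment):

import tensorflow as tf

# The first layer accepts a 4 x 4 x 3 volume directly; no manual flattening
# of the input is needed before the network sees it.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4, 4, 3)),
    tf.keras.layers.Conv2D(filters=8, kernel_size=3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.summary()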

Ok - Thank you very much! I appreciate the quick reply!