# What is Dimension?

As much as I have learned from the MLS courses so far, today I found myself totally lost on the concept of matrix/array dimension, even though I have revisited the Data in TensorFlow and Building a Neural Network video modules a few times.

Before attending this course, and just from reading internet articles about NumPy, my understanding was: when we define an array of shape (x, y), x automatically determines the dimension of our array. In other words, in the so-called “ND array,” N always represents x (which in turn is driven by the number of rows).

However, at some point in the aforementioned videos, the respected instructor called a 1x1 matrix a 2D array. In another slide I was confused by a 4x2 matrix introduced, again, as a 2D array.

My questions:

1. Is an array in NumPy the same concept as a matrix in TensorFlow, in terms of dimensionality?
2. If the two concepts are different, how exactly should we differentiate them?

I would appreciate any explanation from my fellow classmates reading my post online.

Thanks

Those are interesting thoughts, and I think there are many subtle nuances. The NumPy N-D concept is a generalization of the specific example you are talking about. In this specific case, we are processing a 2-dimensional structure, such as an image: a 1x1 matrix is 1 row and 1 column, and likewise a 4x2 matrix is 4 rows and 2 columns.
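A quick NumPy sketch of that point (the values here are arbitrary):

```python
import numpy as np

a = np.array([[7]])     # 1 row, 1 column: a 1x1 matrix
b = np.ones((4, 2))     # 4 rows, 2 columns: a 4x2 matrix

# Both have two axes (rows and columns), so both are 2-D arrays
print(a.ndim, b.ndim)   # 2 2
```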

I am reading that TensorFlow N-D arrays are similar to those in NumPy: Introduction to Tensors  |  TensorFlow Core

However, I’m not sure about the distinction between the memory layouts of the two.

Let me know what you think!

Yes! You count the number of dimensions by the number of axes.

In NumPy dimensions are called axes.

A “scalar” or “rank-0” tensor contains a single value, and has no “axes”.
A “vector” or “rank-1” tensor is like a list of values. A vector has one axis.
A “matrix” or “rank-2” tensor has two axes.

Therefore, for the N in an N-D array, or the N in a rank-N tensor, both Ns are determined by the number of axes, which is equal to the length of the shape. E.g. if the shape of your N-D array is (a, b, c, d), then it has 4 axes, and it is a 4-D array. If the shape of your tensor is (a, b, c, d, e), then it has 5 axes, and it is a rank-5 tensor.
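A small NumPy sketch of this counting rule (the variable names and values are just illustrative):

```python
import numpy as np

scalar = np.array(4)                 # shape ()            -> 0 axes, rank-0
vector = np.array([1, 2, 1])         # shape (3,)          -> 1 axis,  rank-1
matrix = np.array([[1, 2], [3, 4]])  # shape (2, 2)        -> 2 axes,  rank-2
tensor = np.zeros((2, 3, 4, 5))      # shape (2, 3, 4, 5)  -> 4 axes,  rank-4

# In every case, the number of axes equals the length of the shape
for x in (scalar, vector, matrix, tensor):
    print(x.ndim, len(x.shape))      # ndim == len(shape)
```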

On the other hand, the same numpy doc also says

For example, the array for the coordinates of a point in 3D space, `[1, 2, 1]`, has one axis.

Here, you use a 1-D array to represent a point in 3-D space, because the point has 3 components.
To extend this idea, say we have a dataset X of shape (m, n): here we are using a 2-D array to store our data, because the length of the shape is 2. However, each sample is a data point in an n-dimensional space, because each sample has n components.
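For instance (m and n here are arbitrary illustrative sizes):

```python
import numpy as np

m, n = 5, 3              # 5 samples, each with 3 features
X = np.zeros((m, n))     # the dataset lives in a 2-D array
print(X.ndim)            # 2

# ...but each individual sample is a point in n-dimensional space
print(X[0].shape)        # (3,)
```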

Cheers,
Raymond


As a supplement to the thorough explanation from our mentors:

The primitive way to define an array in NumPy or TensorFlow is to start from a list, and depending on what you want, you can make a list of numbers, a list of lists of numbers, a list of lists of lists of numbers, and so on…

The number of dimensions will be the depth you have to dig before getting to the numbers.

I invite you to apply the .shape attribute to NumPy arrays like these and inspect the output:
[5]
[[2, 8]]
[[[[4, 2, 8],[5, 8, 1]],[[4, 2, 8],[5, 8, 1]]]]

and try to create a (6, 2, 3, 2) tensor
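If it helps, here is one way to check those shapes and build such a tensor (I'm using np.zeros for the (6, 2, 3, 2) case; any constructor works):

```python
import numpy as np

print(np.array([5]).shape)        # (1,)
print(np.array([[2, 8]]).shape)   # (1, 2)

nested = [[[[4, 2, 8], [5, 8, 1]],
           [[4, 2, 8], [5, 8, 1]]]]
print(np.array(nested).shape)     # (1, 2, 2, 3)

# A (6, 2, 3, 2) tensor: dig 4 levels deep before reaching the numbers
t = np.zeros((6, 2, 3, 2))
print(t.shape, t.ndim)            # (6, 2, 3, 2) 4
```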

Hope this helps