Hello!

Could you please help me with understanding why Y contains only one vector (m == 1) and X contains two (m == 2)?

```
X = np.array([[1., -2., -1.], [3., 0.5, -3.2]])
Y = np.array([[1, 1, 0]])
```

Thank you!

Welcome, @Piotr_Wojcik. You are referring to the notebook cell that tests the code for your `propagate` function. As a prelude to the test, the first lines of the cell define a sample dataset on which your code will be run. Note that the weight matrix `w` and the bias unit `b` are also defined. So with those first four lines of the cell, you have some "dummy" data upon which to test your code. These comprise the arguments to `propagate`. Note the "signature" of the function: `propagate(w, b, X, Y)`.

Your question presents a good opportunity to learn the structure of a simple network with no hidden layers. (More on that next week.) The dummy input data `X` is set up as a 2 \times 3 matrix. (The `np.array()` function takes a Python list of lists as its argument to create that.) So you have n_x = 2 data "features" and m = 3 "examples" of each. Note that m \neq 2. Matrix `Y` sets up the test output data. There is only a single output in binary classification; that's why it is a single vector. Either the result is a "positive" (y = 1, i.e. it's a cat) or a "negative" (y = 0, i.e. not a cat).

So, with that information, you can verify that the constituent parts of the vector-valued equation y = \sigma(wx + b) conform to matrix multiplication.
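As a sketch of that shape check (the particular values of `w` and `b` here are illustrative assumptions, chosen only so the dimensions match the `propagate(w, b, X, Y)` signature):

```python
import numpy as np

# Hypothetical weights/bias, shaped to fit n_x = 2 features.
w = np.array([[1.], [2.]])   # shape (2, 1): one weight per feature
b = 2.0                      # scalar bias, broadcast by numpy
X = np.array([[1., -2., -1.], [3., 0.5, -3.2]])  # shape (2, 3)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# w.T is (1, 2) and X is (2, 3), so w.T @ X is (1, 3):
# one activation per example, matching Y's shape.
A = sigmoid(w.T @ X + b)
print(A.shape)  # (1, 3)
```

The key point is that the inner dimensions (2 and 2) agree, and the result has one row, just like `Y`.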


In addition to Ken's excellent explanation, it's also worth calling attention to the significance of the square brackets in those declarations of X and Y. You need to learn to focus on those. You can also run your own experiments and use the "shape" attribute of numpy arrays to see the resulting dimensions.

Here's a way to concretely see what Ken pointed out about the shapes:

```
X = np.array([[1., -2., -1.], [3., 0.5, -3.2]])
Y = np.array([[1, 1, 0]])
print(f"X.shape = {X.shape}")
print(f"Y.shape = {Y.shape}")
```

Running that gives this:

```
X.shape = (2, 3)
Y.shape = (1, 3)
```

Because there are two nested sets of square brackets, both arrays end up having two dimensions. Now watch the difference if I only use one set of brackets to define Y:

```
Y = np.array([1, 1, 0])
print(f"Y.shape = {Y.shape}")
```

Running that gives this:

```
Y.shape = (3,)
```

So the shape ends up having only 1 element, which means that Y in that case has only one dimension. Prof Ng discusses the difference between 1D and 2D arrays briefly in the lectures, but then points out that we only use 2D arrays for everything in these courses.
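If you ever end up with a 1D array like that, you can promote it to the 2D row-vector form the course uses everywhere with `reshape`:

```python
import numpy as np

y_1d = np.array([1, 1, 0])
print(y_1d.shape)            # (3,) -- one dimension

# reshape(1, -1) makes it a 1 x m row vector; -1 tells
# numpy to infer that dimension (here, 3) from the data.
y_2d = y_1d.reshape(1, -1)
print(y_2d.shape)            # (1, 3) -- two dimensions
```

The values are unchanged; only the number of dimensions differs.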

As just one more experiment, let's see what it would look like to create a 3 x 2 array instead:

```
Z = np.array([[1., -2.], [-1., 3.], [0.5, -3.2]])
print(f"Z.shape = {Z.shape}")
```

Running that gives this:

```
Z.shape = (3, 2)
```

So we end up with the same values, but in a different shape. It's all determined by the placement of the brackets.
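One subtlety worth seeing for yourself: `reshape` refills the same values row by row into the new shape, which is not the same thing as the transpose, which swaps the axes. With the `X` from above:

```python
import numpy as np

X = np.array([[1., -2., -1.], [3., 0.5, -3.2]])

# reshape(3, 2) keeps the original row-by-row reading order...
print(X.reshape(3, 2))
# [[ 1.  -2. ]
#  [-1.   3. ]
#  [ 0.5 -3.2]]

# ...whereas the transpose swaps rows and columns:
print(X.T)
# [[ 1.   3. ]
#  [-2.   0.5]
#  [-1.  -3.2]]
```

Both results have shape (3, 2), but the values sit in different places, so the two operations are not interchangeable.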
