# Week 2 Exercise 5 - conceptual question

After some trial and error, I got it working. However, during some of the trial-and-error iterations I wasn't sure what I was doing, so I had to go back to the lecture notes and work through the dimensions (shapes) of the various vectors involved, especially in multiplication (dot product) operations. That seems like a lot of work (though it is useful for learning). Is there a better way than carefully counting the dimensions and checking that they balance/behave according to linear algebra rules?

Thanks

Hi @dds ,
That's a really good question! When working with np.dot(), for instance np.dot(A, B), you can add print(A.shape) and print(B.shape). Then, if np.dot() doesn't work, you can look at the shapes of A and B and figure out why.
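As a minimal sketch of that debugging habit (the sizes and the helper name here are made up for illustration), you can wrap the shape prints in a small helper so every dot product reports its operands:

```python
import numpy as np

def checked_dot(A, B):
    # Print both shapes before multiplying, so a dimension
    # mismatch is easy to spot when np.dot raises an error.
    print("A shape:", A.shape, "| B shape:", B.shape)
    return np.dot(A, B)

w = np.zeros((3, 1))          # hypothetical weights, shape (n_x, 1)
X = np.random.randn(3, 5)     # hypothetical data, shape (n_x, m)
Z = checked_dot(w.T, X)       # inner dims match: (1, 3) @ (3, 5) -> (1, 5)
print("Z shape:", Z.shape)
```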

Hi @Phuc_Kien_Bui
Confusion arises when, say, both A and B have shape (1, m). Suppose you want to take np.dot(A, B). According to the NumPy documentation, it should do an elementwise multiplication and then sum the terms (i.e., the inner product, just as expected from basic Physics 101).
Example:

```python
import numpy as np

A = np.array([1, 2])
print("A shape: " + str(A.shape))

B = np.array([3, 4])
print("B shape: " + str(B.shape))
print(np.dot(A, B))
```

However, in Exercise 5 I had to take the transpose of B to make the dot product work correctly: np.dot(A, B.T). Yet in the example above, np.dot(A, B), np.dot(A, B.T), and np.dot(A.T, B) all give the same result. This is relevant when computing np.dot(w.T, X): both w and X have the same shape (m, 1).
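One way to see why the example above behaves differently from the exercise: np.array([1, 2]) is a 1-D array of shape (2,), not a (1, 2) row vector, and transposing a 1-D array is a no-op. For genuinely 2-D (1, m) arrays the inner dimensions must line up, so the transpose matters:

```python
import numpy as np

# 1-D arrays: .T does nothing, and np.dot returns the inner product.
a = np.array([1, 2])
b = np.array([3, 4])
print(np.dot(a, b))             # 11
print(np.array_equal(a.T, a))   # True: transpose is a no-op on a 1-D array

# 2-D row vectors of shape (1, m): now the inner dimensions must match.
A = np.array([[1, 2]])          # shape (1, 2)
B = np.array([[3, 4]])          # shape (1, 2)
print(np.dot(A, B.T))           # [[11]]  -- (1, 2) @ (2, 1) -> (1, 1)
# np.dot(A, B) would raise ValueError: inner dims (2 and 1) don't match
```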

X does not have shape m x 1. It is n_x x m, where n_x is the number of input features in each “sample” and m is the number of samples.
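To make those shapes concrete (the sizes n_x = 3 and m = 5 below are made up for illustration), here is how the dimensions flow through np.dot(w.T, X):

```python
import numpy as np

n_x, m = 3, 5                   # hypothetical: 3 features, 5 samples
w = np.zeros((n_x, 1))          # one weight per input feature
X = np.random.randn(n_x, m)     # each COLUMN of X is one sample
b = 0.0

Z = np.dot(w.T, X) + b          # (1, n_x) @ (n_x, m) -> (1, m)
print(Z.shape)                  # one output value per sample: (1, 5)
```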

Here’s a thread which talks more about dot product versus elementwise multiply and some other issues like broadcasting.
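As a quick illustration of the dot-product-versus-elementwise distinction (and broadcasting) that the thread covers, with made-up values:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[10, 20], [30, 40]])

print(A * B)         # elementwise: [[10, 40], [90, 160]]
print(np.dot(A, B))  # matrix product: [[70, 100], [150, 220]]

# Broadcasting: adding a (1, 2) row to a (2, 2) array stretches
# the row across every row of A.
print(A + np.array([[100, 200]]))  # [[101, 202], [103, 204]]
```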

And here’s one that talks about transposes and vectors.
