Check for understanding: Week 2 Exercise 2

In Logistic_Regression_with_a_Neural_Network_mindset.ipynb, it says:

> For convenience, you should now reshape images of shape (num_px, num_px, 3) into a numpy array of shape (num_px ∗ num_px ∗ 3, 1). After this, our training (and test) dataset is a numpy array where each column represents a flattened image. There should be m_train (respectively m_test) columns.
>
> Reshape the training and test data sets so that images of size (num_px, num_px, 3) are flattened into single vectors of shape (num_px ∗ num_px ∗ 3, 1).
>
> A trick when you want to flatten a matrix X of shape (a, b, c, d) to a matrix X_flatten of shape (b ∗ c ∗ d, a) is to use:
>
> `X_flatten = X.reshape(X.shape[0], -1).T`

What I understood from Prof. Andrew is that we reshape arrays with `X.reshape(-1, 1)`, so the result is a single array with many rows and only one column. So why does the notebook use `X.reshape(X.shape[0], -1).T` instead?
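
For reference, here is a minimal numpy sketch of what the notebook's trick produces. It uses a hypothetical toy set of 3 random images of 4×4 pixels rather than the assignment's actual data, but the shapes work the same way:

```python
import numpy as np

# Hypothetical toy data: m = 3 images, each 4 x 4 pixels with 3 color channels
train_set_x_orig = np.random.randint(0, 256, size=(3, 4, 4, 3))

# The notebook's trick: keep the first axis (one row per image), let -1 flatten
# the remaining axes, then transpose so each *column* is one flattened image
train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T

print(train_set_x_flatten.shape)  # (48, 3), i.e. (num_px * num_px * 3, m_train)
```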

Please see this thread for a detailed explanation of that reshape command.

Also note, just as a general matter, that I think you must be misinterpreting what Prof. Andrew said if you think all reshape operations end up with column vectors as output. :scream_cat:
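
To illustrate the difference, here is a small sketch on the same kind of hypothetical toy array (not the course data): `reshape(-1, 1)` collapses the entire dataset into one long column, whereas the notebook's version keeps one flattened image per column, which is the layout the vectorized code later in the assignment expects.

```python
import numpy as np

# Hypothetical toy set again: 3 images of 4 x 4 pixels with 3 channels
X = np.random.randint(0, 256, size=(3, 4, 4, 3))

# Flattening everything into a single column mixes all the images together:
print(X.reshape(-1, 1).shape)             # (144, 1)

# The notebook's version keeps one flattened image per column:
print(X.reshape(X.shape[0], -1).T.shape)  # (48, 3)
```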
