In Logistic_Regression_with_a_Neural_Network_mindset.ipynb, it says:
For convenience, you should now reshape images of shape (num_px, num_px, 3) into a numpy array of shape (num_px ∗ num_px ∗ 3, 1). After this, our training (and test) dataset is a numpy array where each column represents a flattened image. There should be m_train (respectively m_test) columns.
Reshape the training and test data sets so that images of size (num_px, num_px, 3) are flattened into single vectors of shape (num_px ∗ num_px ∗ 3, 1).
A trick when you want to flatten a matrix X of shape (a,b,c,d) to a matrix X_flatten of shape (b ∗ c ∗ d, a) is to use:
X_flatten = X.reshape(X.shape[0], -1).T
What I understood from Andrew is that we reshape arrays with X.reshape(-1, 1), so the result is a single array with many rows and only one column. So why X.reshape(X.shape[0], -1).T?
Please see this thread for a detailed explanation of that reshape command.
Also note just as a general matter that I think you must be misinterpreting what Prof Andrew said if you think all reshape operations end up with column vectors as output.
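To make the difference concrete, here is a small NumPy sketch with assumed toy dimensions (5 images of 4×4 pixels, 3 channels) comparing the two reshape calls:

```python
import numpy as np

# Toy "dataset": 5 images of shape (4, 4, 3). Sizes are assumed for illustration.
X = np.arange(5 * 4 * 4 * 3).reshape(5, 4, 4, 3)

# The notebook's trick: keep the examples axis, flatten everything else, transpose.
X_flatten = X.reshape(X.shape[0], -1).T
print(X_flatten.shape)  # (48, 5): one column per image, 4*4*3 = 48 rows

# What reshape(-1, 1) would do instead: ONE column holding all pixels of ALL images,
# losing the boundary between examples.
X_single_col = X.reshape(-1, 1)
print(X_single_col.shape)  # (240, 1)
```

So each *image* becomes a (48, 1) column, but the dataset as a whole stays a matrix with one column per example rather than collapsing into a single column.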
For me, the part of this that is confusing is the instructions for Exercise 2:
“Reshape the training and test data sets so that images of size (num_px, num_px, 3) are flattened into single vectors of shape (num_px ∗ num_px ∗ 3, 1).”
At first glance, it seems to ask us to create a truly flat array (i.e. everything in one column). It would help if it said that each image becomes a column like that, so that the full set ends up as a matrix with one column per example. That is hinted at in the paragraph before the exercise, but it’s easy to miss.
Yes, that’s what they meant. Note the phrase “into single vectors” in the sentence you quoted, which makes it clear they are talking about what happens to each image individually. Did you read the other thread I linked above?
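You can verify the “each image becomes a column” claim directly: with C-ordered reshaping, column i of the flattened matrix equals image i raveled into a single vector. A minimal check, with assumed toy sizes:

```python
import numpy as np

# Assumed toy sizes: 3 images, 2x2 pixels, 3 channels
X = np.random.rand(3, 2, 2, 3)
X_flatten = X.reshape(X.shape[0], -1).T  # shape (12, 3)

# Column i of X_flatten is exactly image i flattened into a single vector
assert np.array_equal(X_flatten[:, 1], X[1].ravel())
print("column 1 matches image 1 flattened")
```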
Also note that this thread has been dead for 4 years at this point, so you just got lucky that I “follow” it. In general it’s fine to use the historical information here, but if you really want to guarantee a response it’s better to start a new thread than to reply on that old a thread.