Hello,
I completed the function model(), and when I run it, three tests fail with the following errors:
Error: Data type mismatch
Error: Wrong shape
Error: Wrong output
Since the test output is too opaque to debug from directly, I re-ran the function step by step in a new cell.
After running initialize_with_zeros(), I ran optimize(), which calls propagate(). At that point there is an error when computing A because of the dot product:
ValueError: shapes (1,12288) and (209,64,64,3) not aligned: 12288 (dim 1) != 64 (dim 2)
Note: since I don’t know what the input X_train is, I used train_set_x_orig instead.
I looked back at how I made it work when first coding propagate(), and I noticed that the X matrix there was of size 2x3, so a 1x2 times 2x3 product does not raise an error.
So, is there some further operation to apply to X_train for the model to work, or is there something else that I am not understanding?
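The mismatch can be reproduced at small scale (the shapes below are made up for illustration: 5 examples of 2x2x3 "images", so 12 features, instead of the real 209 examples of 64x64x3):

```python
import numpy as np

w = np.zeros((12, 1))            # weights: (num_features, 1)
X_orig = np.zeros((5, 2, 2, 3))  # unflattened stack: (m, num_px, num_px, 3)

# np.dot aligns the last axis of the first operand with the
# second-to-last axis of the second operand: 12 != 2, so this fails.
try:
    np.dot(w.T, X_orig)
except ValueError as e:
    print(e)

# After flattening each image into a column, the shapes line up:
X_flat = X_orig.reshape(X_orig.shape[0], -1).T  # (12, 5)
print(np.dot(w.T, X_flat).shape)                # (1, 5)
```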
I will be very glad if someone can help me!
Dear @lachainone, in the propagate() function there is a useful comment with a dimension recap for all the function parameters. It states the following for the weights w and the input X:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
X -- data of size (num_px * num_px * 3, number of examples)
What I understand from what you wrote is that your X matrix is probably not in the flattened format (num_px * num_px * 3, number of examples) but still in the original format of the training set (num_px, num_px, num_channels, number of examples). Hope this helps you look for the problem in earlier cells.
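A minimal sketch of that flattening step, using a random array as a stand-in for the course's train_set_x_orig (the variable names and the /255 standardization follow the assignment's convention, as I recall it):

```python
import numpy as np

# Stand-in for the training images: (m, num_px, num_px, 3)
train_set_x_orig = np.random.rand(209, 64, 64, 3)

# Flatten each image into one column: (num_px * num_px * 3, m)
train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T
print(train_set_x_flatten.shape)  # (12288, 209)

# The assignment also standardizes pixel values before training:
train_set_x = train_set_x_flatten / 255.
```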
Well, the “docstring” there is a bit misleading. You should not assume that the input has 12288 features. We are trying to write “general” code here, so you should not “bake in” any assumptions about the dimensions. Just use the “shape” attribute of the input X matrix to determine the number of features or the number of examples. Another error that fails model_test in exactly that way is referencing global variables like num_px to determine the dimensions. That is wrong for two reasons: the global variables may not be present in the grader context, and it also “hard-wires” the dimensions in the way I just warned about.
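As an illustration of deriving the dimensions from the data itself, here is a sketch (the initialize_with_zeros body and the shapes are my own stand-ins, not the assignment's exact code):

```python
import numpy as np

def initialize_with_zeros(dim):
    """Return a zero weight vector of shape (dim, 1) and a scalar bias 0."""
    return np.zeros((dim, 1)), 0.0

# Inside model(), read the sizes off X_train rather than off globals
# like num_px -- X_train is (num_features, num_examples).
X_train = np.zeros((12288, 209))   # illustrative shapes only
n_features = X_train.shape[0]
m_examples = X_train.shape[1]

w, b = initialize_with_zeros(n_features)
print(w.shape)  # (12288, 1)
```

The same code then works unchanged for any image size, because nothing in it mentions 12288 or num_px directly.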
Thank you for your answer.
Actually, I was trying to flatten it when it was already flattened; I hadn’t realized that.
Yes, you’re right: it was actually just a mistake in how I computed the parameter “dim” of “initialize_with_zeros”.