Random_mini_batches: Wrong shape in 0 mini batch for X

I am having loads of problems with this code and I have gone through several of the previous questions without being able to find the answer. I keep getting the AssertionError: Wrong shape in 0 mini batch for X, regardless of what I change in my code.

AssertionError                            Traceback (most recent call last)
<ipython-input-120-2caeb7a27c84> in <module>
     11 assert n_batches == math.ceil(m / mini_batch_size), f"Wrong number of mini batches. {n_batches} != {math.ceil(m / mini_batch_size)}"
     12 for k in range(n_batches - 1):
---> 13     assert mini_batches[k][0].shape == (nx, mini_batch_size), f"Wrong shape in {k} mini batch for X"
     14     assert mini_batches[k][1].shape == (1, mini_batch_size), f"Wrong shape in {k} mini batch for Y"
     15     assert np.sum(np.sum(mini_batches[k][0] - mini_batches[k][0][0], axis=0)) == ((nx * (nx - 1) / 2 ) * mini_batch_size), "Wrong values. It happens if the order of X rows(features) changes"

AssertionError: Wrong shape in 0 mini batch for X

I have followed the hint for ‘second_mini_batch_X’, but replaced the 2 with k. Then, in the conditional statement if m % mini_batch_size != 0: …, I have tried several different versions of m - mini_batch_size * [m / mini_batch_size], though I am not sure how to use shuffled_X and shuffled_Y with those variables. I imagine this probably has a very simple solution, but I cannot seem to find it.

When you have an incorrect shape, the first question is “Well, what shape is it?” Then work backwards from there.

Also notice that it is the very first minibatch it is complaining about, whereas the conditional bit you highlight is the logic for the last (partial) minibatch.

Thank you for your reply. Unfortunately, I am still very lost. I imagine the shape is (1, 64), but then again, I’m probably wrong. I keep trying different versions of mini_batch_X = shuffled_X[:, size : iteration variable * size] (not exact code). I still have no idea how else to change the shape or what it’s supposed to be. Sorry.

Why “imagine” what the shape is? Why don’t you print it out and see?


And 1 x mini_batch_size would be wrong in any case. The error is on the X component (the first element of each mini-batch tuple), so the shape should be n_x x mini_batch_size, right? That’s what the assertion is checking.
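To see the difference concretely, here is a toy NumPy example (the dimensions are made up, not the assignment’s data). Slicing columns of an (n_x, m) array keeps all n_x rows, whereas slicing rows is the kind of indexing that produces a 1 x mini_batch_size shape:

```python
import numpy as np

# Toy dimensions, purely illustrative
n_x, mini_batch_size = 5, 64
X = np.zeros((n_x, 3 * mini_batch_size))

# Column slice: all rows, a window of columns -> (n_x, mini_batch_size)
good = X[:, 0:mini_batch_size]
print(good.shape)  # (5, 64)

# Row slice: this is what yields a (1, mini_batch_size) shape
bad = X[0:1, 0:mini_batch_size]
print(bad.shape)  # (1, 64)
```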

Debugging is analogous to doing “hand-to-hand” combat with the code. You have to get your hands dirty and actually engage with what’s happening. Start putting print statements in the code to check your assumptions.
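For instance, a throwaway helper like this (hypothetical, not part of the assignment) prints the shape of every batch your function returns, so you can compare them against the expected (n_x, mini_batch_size) and (1, mini_batch_size):

```python
import numpy as np

def batch_shapes(mini_batches):
    """Print and return the (X shape, Y shape) pair of each mini-batch."""
    shapes = [(mb_X.shape, mb_Y.shape) for mb_X, mb_Y in mini_batches]
    for k, (sx, sy) in enumerate(shapes):
        print(f"batch {k}: X {sx}, Y {sy}")
    return shapes

# Toy usage with one fake batch of 2 features and 5 examples
fake = [(np.zeros((2, 5)), np.zeros((1, 5)))]
batch_shapes(fake)  # prints: batch 0: X (2, 5), Y (1, 5)
```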

Of course there are two fundamental ways in which things can go off the rails:

  1. Your code does not actually do what you intended for it to do.
  2. Your understanding of what you’re supposed to do is incorrect. Meaning that even if the code does what you intended it to do, the problem is that your intentions are misguided.

If you have invested a bunch of effort on point 1) to no avail, then maybe it’s time to step back and read the instructions again carefully and reconsider.

Hey @Daniel_S
If you haven’t solved this problem yet:
Use the hint in the notebook and use k in the for loop. You don’t directly substitute 2 with k. Look at the relationship between first_mini_batch_X and second_mini_batch_X when using k and mini_batch_size.
Here’s how I did it.

mini_batch_X = shuffled_X[:, k * mini_batch_size : (k + 1) * mini_batch_size]

Same for mini_batch_Y

And for the last batch, you don’t have to specify indexJump.
Simply write:

mini_batch_X = shuffled_X[:, num_complete_minibatches * mini_batch_size : ]

Same for mini_batch_Y of the last batch.
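Putting the pieces from this thread together, here is a minimal sketch of the partitioning logic (a sketch under the thread’s assumptions: X is (n_x, m), Y is (1, m), and the shuffle uses np.random.permutation as in the notebook; not necessarily the official solution):

```python
import math
import numpy as np

def random_mini_batches(X, Y, mini_batch_size=64, seed=0):
    """Sketch: shuffle the columns of X and Y together, then slice batches."""
    np.random.seed(seed)
    m = X.shape[1]
    mini_batches = []

    # Shuffle the examples (columns) of X and Y with the same permutation
    permutation = list(np.random.permutation(m))
    shuffled_X = X[:, permutation]
    shuffled_Y = Y[:, permutation].reshape((1, m))

    # Complete mini-batches: columns k*size through (k+1)*size
    num_complete_minibatches = math.floor(m / mini_batch_size)
    for k in range(num_complete_minibatches):
        mini_batch_X = shuffled_X[:, k * mini_batch_size : (k + 1) * mini_batch_size]
        mini_batch_Y = shuffled_Y[:, k * mini_batch_size : (k + 1) * mini_batch_size]
        mini_batches.append((mini_batch_X, mini_batch_Y))

    # Last, possibly partial, batch: whatever columns are left
    if m % mini_batch_size != 0:
        mini_batch_X = shuffled_X[:, num_complete_minibatches * mini_batch_size :]
        mini_batch_Y = shuffled_Y[:, num_complete_minibatches * mini_batch_size :]
        mini_batches.append((mini_batch_X, mini_batch_Y))

    return mini_batches
```

With m = 130 and mini_batch_size = 64, this yields three batches: two of shape (n_x, 64) and a final one of shape (n_x, 2), which is exactly what the assertion in the test cell expects.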