6.1 - Mini-Batch Gradient Descent

Under "Run the following code to see how the model does with mini-batch gradient descent" in the Week 2 assignment, all my tests passed previously, but I'm getting the following error.


```
ZeroDivisionError                         Traceback (most recent call last)
<ipython-input-...> in <module>
      1 # train 3-layer model
      2 layers_dims = [train_X.shape[0], 5, 2, 1]
----> 3 parameters = model(train_X, train_Y, layers_dims, optimizer = "gd")
      4
      5 # Predict

<ipython-input-...> in model(X, Y, layers_dims, optimizer, learning_rate, mini_batch_size, beta, beta1, beta2, epsilon, num_epochs, print_cost)
     58
     59         # Backward propagation
---> 60         grads = backward_propagation(minibatch_X, minibatch_Y, caches)
     61
     62         # Update parameters

~/work/release/W2A1/opt_utils_v1a.py in backward_propagation(X, Y, cache)
    156     (z1, a1, W1, b1, z2, a2, W2, b2, z3, a3, W3, b3) = cache
    157
--> 158     dz3 = 1./m * (a3 - Y)
    159     dW3 = np.dot(dz3, a2.T)
    160     db3 = np.sum(dz3, axis=1, keepdims = True)

ZeroDivisionError: float division by zero
```

Please, what could the problem be?

The problem is in the function random_mini_batches. It's possible to write slicing code that works for k=1 and k=2 but fails for k>=3. A wrong slice can also produce an empty mini-batch, which makes m equal to 0 inside backward_propagation and triggers the division-by-zero error. Double-check the indexes used to compute mini_batch_X and mini_batch_Y if you get this error.
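For reference, here is a minimal sketch of the slicing that random_mini_batches is meant to perform. It is not the graded solution, and the exact signature in your notebook may differ; the point is that the slice for mini-batch k must start at k * mini_batch_size and end at (k + 1) * mini_batch_size, and the final partial batch must pick up exactly the leftover columns.

```python
import numpy as np

def random_mini_batches(X, Y, mini_batch_size=64, seed=0):
    """Split (X, Y) into a list of random mini-batches.

    X -- data of shape (n_x, m); Y -- labels of shape (1, m).
    """
    np.random.seed(seed)
    m = X.shape[1]
    mini_batches = []

    # Shuffle X and Y with the same column permutation.
    permutation = list(np.random.permutation(m))
    shuffled_X = X[:, permutation]
    shuffled_Y = Y[:, permutation].reshape((1, m))

    # Full mini-batches: batch k spans columns
    # [k * mini_batch_size, (k + 1) * mini_batch_size).
    num_complete = m // mini_batch_size
    for k in range(num_complete):
        mini_batch_X = shuffled_X[:, k * mini_batch_size:(k + 1) * mini_batch_size]
        mini_batch_Y = shuffled_Y[:, k * mini_batch_size:(k + 1) * mini_batch_size]
        mini_batches.append((mini_batch_X, mini_batch_Y))

    # Final partial mini-batch, if m is not a multiple of mini_batch_size.
    # A wrong slice here (or above) can yield an empty batch, which is what
    # leads to m == 0 and the ZeroDivisionError in backward_propagation.
    if m % mini_batch_size != 0:
        mini_batch_X = shuffled_X[:, num_complete * mini_batch_size:]
        mini_batch_Y = shuffled_Y[:, num_complete * mini_batch_size:]
        mini_batches.append((mini_batch_X, mini_batch_Y))

    return mini_batches
```

A quick sanity check on your own version: the column counts of all returned batches should sum to m, and no batch should be empty.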

Good luck with the rest of the specialization, @Marvin!

Thanks, I fixed it. It was easier than I was making it :slightly_smiling_face: