DL4 Week 1, Assignment 1: ValueError: operands could not be broadcast together with shapes (1,3,3,4) (3,3,4,1)

83 weights=weights.reshape(f, f, n_C_prev,1)
84 bias= b[:,:,:,c].reshape(1,1,1,1)
—> 85 Z[i, h, w, c]=conv_single_step(a_slice_prev, weights, bias)
86
87 # YOUR CODE ENDS HERE

in conv_single_step(a_slice_prev, W, b)
23 # Z = None
24 # YOUR CODE STARTS HERE
—> 25 s=a_slice_prev*W
26 Z=np.sum(s)+float(b)
27 # YOUR CODE ENDS HERE

ValueError: operands could not be broadcast together with shapes (1,3,3,4) (3,3,4,1)
Hello everyone,
I can’t proceed past this step; I feel stuck.
Basically, I don’t see my mistake or how to make the shapes match.
It would be great if anybody could help me.

So that must mean there is something wrong with how you are managing the “slicing” of the various arrays in your conv_forward logic. The bug is not in conv_single_step, right? It’s that you passed mismatched arguments to it.
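For reference, the mismatch is easy to reproduce in isolation. NumPy compares shapes from the trailing dimension backward: 4 vs 1 broadcasts fine, but the next pair, 3 vs 4, does not, which is exactly the error above. A minimal sketch (shapes taken from the error message; array names are illustrative):

```python
import numpy as np

a_slice = np.ones((1, 3, 3, 4))   # shape from the error message
weights = np.ones((3, 3, 4, 1))   # shape produced by the reshape on line 83

# NumPy aligns shapes from the right: 4 vs 1 broadcasts,
# but the next pair, 3 vs 4, does not -> ValueError
try:
    a_slice * weights
except ValueError as e:
    print(e)
```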

I added a bunch of print statements to my conv_forward logic and here’s what I see when I run the test case for that:

stride 2 pad 1
New dimensions = 3 by 4
Shape Z = (2, 3, 4, 8)
Shape A_prev_pad = (2, 7, 9, 4)
Z[0,0,0,0] = -2.651123629553914
Z[1,2,3,7] = 0.4427056509973153
Z's mean =
 0.5511276474566768
Z[0,2,1] =
 [-2.17796037  8.07171329 -0.5772704   3.36286738  4.48113645 -2.89198428
 10.99288867  3.03171932]
cache_conv[0][1][2][3] =
 [-1.1191154   1.9560789  -0.3264995  -1.34267579]
First Test: All tests passed!
stride 1 pad 3
New dimensions = 9 by 11
Shape Z = (2, 9, 11, 8)
Shape A_prev_pad = (2, 11, 13, 4)
Z[0,0,0,0] = 1.4306973717089302
Z[1,8,10,7] = -0.6695027738712113
stride 2 pad 0
New dimensions = 2 by 3
Shape Z = (2, 2, 3, 8)
Shape A_prev_pad = (2, 5, 7, 4)
Z[0,0,0,0] = 8.430161780192094
Z[1,1,2,7] = -0.2674960203423288
stride 1 pad 6
New dimensions = 13 by 15
Shape Z = (2, 13, 15, 8)
Shape A_prev_pad = (2, 17, 19, 4)
Z[0,0,0,0] = 0.5619706599772282
Z[1,12,14,7] = -1.622674822605305
Second Test: All tests passed!

Have a look at the corresponding dimensions you are seeing and maybe that will give you a clue as to where the problem is.
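For what it’s worth, the “New dimensions” lines in that printout follow the standard conv output-size formula, n_out = floor((n_in - f + 2*pad) / stride) + 1. A quick sanity check for the first test case, assuming n_H_prev = 5, n_W_prev = 7, and f = 3 (inferred from the padded shape (2, 7, 9, 4) printed above):

```python
# Standard conv output-size formula (assumed inputs: n_H_prev=5,
# n_W_prev=7, f=3, inferred from the A_prev_pad shape in the printout)
def conv_out(n_in, f, pad, stride):
    return (n_in - f + 2 * pad) // stride + 1

# First test case: stride 2, pad 1 -> "New dimensions = 3 by 4"
print(conv_out(5, 3, 1, 2), conv_out(7, 3, 1, 2))  # 3 4
```

If your printed dimensions disagree with this formula, the bug is in how you compute n_H and n_W before the loops ever run.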

I don’t remember reshape() being necessary in that part of the exercise (see lines 83 and 84 in the OP’s trace above). Has the notebook guidance for that step changed?

Eeeek! You’re right. Sorry, I obviously didn’t examine the full exception trace carefully enough. There should be no “reshapes” required in any of this. All you need to do is index the various arrays appropriately to select the portion you need for a given operation. For the weights and bias values, you only need to select the correct value of the “channels” dimension. For the A_prev values, you will use ranges for the h and w indices and select all elements in the “input channel” dimension.
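To illustrate the indexing only (this is not the assignment solution; the shapes, names, and start positions here are hypothetical): selecting channel c of W and b, and a plain (f, f, n_C_prev) window of the padded activation, already gives matching shapes, so no reshape is needed:

```python
import numpy as np

# Hypothetical shapes for illustration only
f, n_C_prev, n_C = 3, 4, 8
W = np.random.randn(f, f, n_C_prev, n_C)
b = np.random.randn(1, 1, 1, n_C)
a_prev_pad = np.random.randn(7, 9, n_C_prev)   # one padded example

c = 0                          # output channel being computed
vert_start, horiz_start = 0, 0  # illustrative window position

# Ranges on h and w, all input channels -> (f, f, n_C_prev)
a_slice_prev = a_prev_pad[vert_start:vert_start + f,
                          horiz_start:horiz_start + f, :]

weights = W[:, :, :, c]   # (f, f, n_C_prev) -- same shape, no reshape
bias = b[:, :, :, c]      # (1, 1, 1)

Z = np.sum(a_slice_prev * weights) + float(bias)   # scalar
```

Because a_slice_prev and weights come out with identical shapes, the elementwise product in conv_single_step just works; the reshapes on lines 83 and 84 of the trace are what created the incompatible (3,3,4,1) shape.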

Thanks, I just now noticed it!