Error in Identity block in week 2

Here is the relevant part of my `identity_block`:

```python
    Returns:
    X -- output of the identity block, tensor of shape (n_H, n_W, n_C)
    """
    # Retrieve Filters
    F1, F2, F3 = filters

    # Save the input value. You'll need this later to add back to the main path.
    X_shortcut = X

    # First component of main path
    X = Conv2D(filters = F1, kernel_size = 1, strides = (1,1), padding = 'valid', kernel_initializer = initializer(seed=0))(X)
    X = BatchNormalization(axis = 3)(X, training = training) # Default axis
    X = Activation('relu')(X)

    ### START CODE HERE
    ## Second component of main path (≈3 lines)
    X = Conv2D(filters = F2, kernel_size = (f, f), strides = (1,1), padding = 'same', kernel_initializer = initializer(seed=0))(X)
    X = BatchNormalization(axis = 3)(X)
    X = Activation('relu')(X)

    ## Third component of main path (≈2 lines)
    X = Conv2D(filters = F3, kernel_size = (1, 1), strides = (1,1), padding = 'valid', kernel_initializer = initializer(seed=0))(X)
    X = BatchNormalization(axis = 3)(X)

    ## Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
    X = Add()([X, X_shortcut])
    X = Activation('relu')(X)
    ### END CODE HERE

    return X
```

And the test cell (`In [19]`):

```python
np.random.seed(1)
X1 = np.ones((1, 4, 4, 3)) * -1
X2 = np.ones((1, 4, 4, 3)) * 1
X3 = np.ones((1, 4, 4, 3)) * 3

X = np.concatenate((X1, X2, X3), axis = 0).astype(np.float32)

A3 = identity_block(X, f=2, filters=[4, 4, 3],
                    initializer=lambda seed=0: constant(value=1),
                    training=False)
print('\033[1mWith training=False\033[0m\n')
A3np = A3.numpy()
print(np.around(A3.numpy()[:,(0,-1),:,:].mean(axis = 3), 5))
resume = A3np[:,(0,-1),:,:].mean(axis = 3)
print(resume[1, 1, 0])

print('\n\033[1mWith training=True\033[0m\n')
np.random.seed(1)
A4 = identity_block(X, f=2, filters=[3, 3, 3],
                    initializer=lambda seed=0: constant(value=1),
                    training=True)
print(np.around(A4.numpy()[:,(0,-1),:,:].mean(axis = 3), 5))

public_tests.identity_block_test(identity_block)
```

This prints:

With training=False

[[[ 0. 0. 0. 0. ]
[ 0. 0. 0. 0. ]]

[[192.71234 192.71234 192.71234 96.85617]
[ 96.85617 96.85617 96.85617 48.92808]]

[[578.1371 578.1371 578.1371 290.5685 ]
[290.5685 290.5685 290.5685 146.78426]]]
96.85617

With training=True

[[[ 0. 0. 0. 0. ]
[ 0. 0. 0. 0. ]]

[[ 1. 1. 1. 1. ]
[ 1. 1. 1. 1. ]]

[[47.04585 47.04585 47.04585 25.02293]
[25.02293 25.02293 25.02293 14.01146]]]
```
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input> in <module>
     22 print(np.around(A4.numpy()[:,(0,-1),:,:].mean(axis = 3), 5))
     23
---> 24 public_tests.identity_block_test(identity_block)

/tf/W2A1/public_tests.py in identity_block_test(target)
     54     [0.37394285, 0.37394285, 0.37394285, 0.37394285]],
     55     [[3.2379014, 4.1394243, 4.1394243, 3.2379014 ],
---> 56     [3.2379014, 4.1394243, 4.1394243, 3.2379014 ]]]), atol = 1e-5 ), "Wrong values with training=True"
     57
     58     print(colored("All tests passed!", "green"))

AssertionError: Wrong values with training=True
```

I don't understand why I'm getting this error. Can someone please clarify?

There appears to be an error in your code.
But I cannot read your code; the formatting is too confusing.
Maybe you could post a screen capture image instead.

Thanks. While I look at your code, can you please delete it from your message?
We need to comply with the course Honor Code, which says students should not share their code.

Yeah, done. Please let me know if you find any error.

You need to pass (X, training=training) to all the BatchNorm layers. Not just the first one.
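To see why that flag matters, here is a minimal NumPy sketch (not the course code, and not TensorFlow) of what a batch-norm layer does in its two modes: in training mode it normalizes with the current batch's statistics, while in inference mode it uses stored moving averages. The `batch_norm` helper below is hypothetical, written just for illustration; the epsilon of `1e-3` and the moving mean/variance of 0 and 1 match the Keras `BatchNormalization` defaults.

```python
import numpy as np

def batch_norm(x, gamma, beta, moving_mean, moving_var, training, eps=1e-3):
    # Hypothetical helper, for illustration only.
    # Training mode: normalize with the statistics of the current batch.
    # Inference mode: normalize with the stored moving averages instead.
    if training:
        mean = x.mean(axis=0)
        var = x.var(axis=0)
    else:
        mean, var = moving_mean, moving_var
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

# Toy batch: one feature, values 1, 2, 3 (batch mean 2, batch variance 2/3).
x = np.array([[1.0], [2.0], [3.0]])
gamma, beta = 1.0, 0.0

# Moving averages as Keras initializes them: mean 0, variance 1.
out_train = batch_norm(x, gamma, beta, 0.0, 1.0, training=True)
out_infer = batch_norm(x, gamma, beta, 0.0, 1.0, training=False)

print(out_train.ravel())  # ≈ [-1.224, 0., 1.224] — batch statistics
print(out_infer.ravel())  # ≈ [1., 2., 3.] — moving averages, nearly identity
```

Same layer, same input, very different outputs depending on the flag. In the identity block, the second and third `BatchNormalization` calls never receive `training=training`, so they do not behave the way the grader's `training=True` test expects — which is exactly the "Wrong values with training=True" assertion above.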
