Convolutional_block week 2 assignment_1

I am facing the following error while running this code:
X_shortcut = Conv2D(filters = F3, kernel_size = 1, strides = (s, s), padding = 'valid', kernel_initializer = initializer(seed=0))(X_shortcut)
X_shortcut = BatchNormalization(axis = 3)(X, training=training)(X_shortcut)

error:
TypeError Traceback (most recent call last)
in
8 X = np.concatenate((X1, X2, X3), axis = 0).astype(np.float32)
9
---> 10 A = convolutional_block(X, f = 2, filters = [2, 4, 6], training=False)
11
12 assert type(A) == EagerTensor, "Use only tensorflow and keras functions"

in convolutional_block(X, f, filters, s, training, initializer)
47 ##### SHORTCUT PATH ##### (≈2 lines)
48 X_shortcut = Conv2D(filters = F3, kernel_size = 1, strides = (s, s), padding = 'valid', kernel_initializer = initializer(seed=0))(X_shortcut)
---> 49 X_shortcut = BatchNormalization(axis = 3)(X, training=training)(X_shortcut)
50 ### END CODE HERE
51

TypeError: 'tensorflow.python.framework.ops.EagerTensor' object is not callable

Hey, as you can see in the BatchNormalization step, you've passed the tensor X to the layer and then tried to call the result on X_shortcut. Once you pass a tensor through a layer, the output is a tensor, so it's not callable the way the layer is. You'll have to pass the correct tensor there.
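To see the distinction, here is a minimal sketch (the input shape is made up for illustration; only the BatchNormalization call comes from the thread): in the Keras functional API, the layer object is callable on a tensor, but the tensor it returns is not.

```python
import tensorflow as tf
from tensorflow.keras.layers import BatchNormalization

# Hypothetical stand-in for the shortcut tensor in the assignment
X_shortcut = tf.random.normal((1, 2, 2, 6))

bn = BatchNormalization(axis=3)

# Calling the layer on a tensor returns a tensor
out = bn(X_shortcut, training=False)  # shape (1, 2, 2, 6)

# The broken line effectively did bn(X, training=training)(X_shortcut),
# i.e. it tried to call the *output tensor* again, which raises
# TypeError: 'EagerTensor' object is not callable
print(callable(bn), callable(out))  # the layer is callable, its output is not
```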

Thanks, it works now, but I'm getting this error, even though the expected value matches my output.
X_shortcut = BatchNormalization(axis = 3)(X_shortcut)
error:
tf.Tensor(
[[[0. 0.66683817 0. 0. 0.88853896 0.5274254 ]
[0. 0.65053666 0. 0. 0.89592844 0.49965227]]

[[0. 0.6312079 0. 0. 0.8636247 0.47643146]
[0. 0.5688321 0. 0. 0.85534114 0.41709304]]], shape=(2, 2, 6), dtype=float32)

AssertionError Traceback (most recent call last)
in
16
17 B = convolutional_block(X, f = 2, filters = [2, 4, 6], training=True)
---> 18 assert np.allclose(B.numpy(), convolutional_block_output2), "Wrong values when training=True."
19
20 print('\033[92mAll tests passed!')

AssertionError: Wrong values when training=True.

Expected value:

tf.Tensor(
[[[0. 0.66683817 0. 0. 0.88853896 0.5274254 ]
[0. 0.65053666 0. 0. 0.89592844 0.49965227]]

[[0. 0.6312079 0. 0. 0.8636247 0.47643146]
[0. 0.5688321 0. 0. 0.85534114 0.41709304]]], shape=(2, 2, 6), dtype=float32)

You'll need to pass training to the BatchNormalization call in the shortcut path. You've probably specified it earlier in the same function, so look at your previous lines for the pattern.
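To illustrate why the flag matters, here is a minimal sketch with a made-up all-ones input (not the assignment's data): with training=True the layer normalizes using the current batch statistics, while with training=False it uses its moving averages, so the same input produces different outputs.

```python
import tensorflow as tf
from tensorflow.keras.layers import BatchNormalization

bn = BatchNormalization(axis=3)
X = tf.ones((1, 2, 2, 6))  # made-up input for illustration

# training=True: normalize with the batch mean/variance (output ~0 here,
# since every value equals the batch mean)
out_train = bn(X, training=True)

# training=False: normalize with moving_mean=0, moving_variance=1
# (their initial values), so the output stays close to the input (~1)
out_infer = bn(X, training=False)

print(float(out_train[0, 0, 0, 0]), float(out_infer[0, 0, 0, 0]))
```

Forgetting the flag means the layer falls back to its default behavior instead of the mode the test expects, which is why the values only matched for one of the two test cases.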

Thanks a lot, it passed! Once I also passed training there, I got the all-tests-passed signal.