C5 W4: 'KerasTensor' object has no attribute 'numpy'

Hi,

I am working on the call(self, x, training, mask) method within the EncoderLayer class. When running the EncoderLayer_test() function, I receive the following error:

91     assert np.allclose(encoded.numpy(), 

AttributeError: 'KerasTensor' object has no attribute 'numpy'

It is failing at the following line:
assert tuple(tf.shape(encoded).numpy()) == (1, q.shape[1], q.shape[2]), f"Wrong shape. We expected ((1, {q.shape[1]}, {q.shape[2]}))"

The following is the encoded tensor:

encoded KerasTensor: KerasTensor(type_spec=TensorSpec(shape=(None, 1, 3, 4), dtype=tf.float32, name=None), name='layer_normalization_1/batchnorm/add_1:0', description="created by layer 'layer_normalization_1'")

Please give me some pointers on how to resolve this issue.

Thanks so much.
Odon

Please share the full error.

Hi,

The following is the output in Jupyter :slight_smile:

AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>
      1 # UNIT TEST
----> 2 EncoderLayer_test(EncoderLayer)

~/work/W4A1/public_tests.py in EncoderLayer_test(target)
     87
     88     assert tf.is_tensor(encoded), "Wrong type. Output must be a tensor"
---> 89     assert tuple(tf.shape(encoded).numpy()) == (1, q.shape[1], q.shape[2]), f"Wrong shape. We expected ((1, {q.shape[1]}, {q.shape[2]}))"
     90
     91     assert np.allclose(encoded.numpy(),

AttributeError: 'KerasTensor' object has no attribute 'numpy'

Thanks,
Odon

Please use the search bar of this forum to find similar posts, like this and this.

Thanks so much,

One of the links you pointed to told me to use the keyword arguments query, value, key, and attention_mask. I just don't see them documented in the lab instructions.

Thanks again.
Odon

Glad to know you solved it. But note that all the instructions are in the notebook. Sometimes, you need to click on Additional Hints.
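For anyone landing here from search: the keyword arguments in question come from the public Keras API for tf.keras.layers.MultiHeadAttention. Below is a minimal, self-contained sketch of a self-attention call using those keywords. The layer sizes and variable names here are illustrative only, not the assignment's values:

```python
import tensorflow as tf

# Illustrative layer configuration (not the assignment's values).
mha = tf.keras.layers.MultiHeadAttention(num_heads=2, key_dim=4)

# An eager tensor of shape (batch, seq_len, embedding_dim).
x = tf.random.uniform((1, 3, 4))

# Self-attention: query, value, and key are all the same tensor.
# attention_mask is optional and defaults to None.
attn_output = mha(query=x, value=x, key=x, attention_mask=None)

# The output's last dimension matches the query's embedding dimension.
print(attn_output.shape)  # (1, 3, 4)
```

Because x here is an eager tensor (not a symbolic tf.keras.Input), the result is also eager, so calls like attn_output.numpy() work as the unit test expects.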

I guess what is making this exercise particularly difficult is our limited knowledge on Python and Keras. I thought in the beginning I need to create and pass tf.keras.input() with the shape of x into MultiHeadAttention. I came with the idea when I looked at the documentation of MultiHeadAttention.

Thanks,
Odon
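A short note on the misconception above, since it is exactly what produced the original error: a tensor created with tf.keras.Input() is a symbolic KerasTensor, a placeholder with no concrete values, so it has no .numpy() method. Tensors computed eagerly do. A minimal illustration:

```python
import tensorflow as tf

# Symbolic placeholder: describes a shape, holds no values.
symbolic = tf.keras.Input(shape=(3, 4))

# Eager tensor: holds concrete values right away.
eager = tf.random.uniform((1, 3, 4))

print(hasattr(symbolic, "numpy"))  # False - this is the AttributeError source
print(hasattr(eager, "numpy"))     # True
```

Inside call(), the layers should be applied directly to the incoming tensor x; there is no need to construct a tf.keras.Input() there.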