C5 W4 A1: Scaled_dot_product_attention AttributeError: 'tuple' object has no attribute 'rank'

matmul_qk = *************
# scale matmul_qk
dk = **********

print(k)
print(q)
print(v)
print(matmul_qk)
print(dk)


scaled_attention_logits = ***************
print(scaled_attention_logits)
print(mask)

# add the mask to the scaled tensor.
if mask is not None: # Don't replace this None
    scaled_attention_logits += **************
print(scaled_attention_logits)


return

[[1. 1. 0. 1.]
[1. 0. 1. 1.]
[0. 1. 1. 0.]
[0. 0. 0. 1.]]
[[1. 0. 1. 1.]
[0. 1. 1. 1.]
[1. 0. 0. 1.]]
[[0. 0.]
[1. 0.]
[1. 0.]
[1. 1.]]
[[2. 3. 1. 1.]
[2. 2. 2. 1.]
[2. 2. 0. 1.]]
4
[[1. 1.5 0.5 0.5]
[1. 1. 1. 0.5]
[1. 1. 0. 0.5]]
None
[[1. 1.5 0.5 0.5]
[1. 1. 1. 0.5]
[1. 1. 0. 0.5]]

AttributeError Traceback (most recent call last)
in
1 # UNIT TEST
----> 2 scaled_dot_product_attention_test(scaled_dot_product_attention)

~/work/W4A1/public_tests.py in scaled_dot_product_attention_test(target)
55 v = np.array([[0, 0], [1, 0], [1, 0], [1, 1]]).astype(np.float32)
56
---> 57 attention, weights = target(q, k, v, None)
58 assert tf.is_tensor(weights), "Weights must be a tensor"
59 assert tuple(tf.shape(weights).numpy()) == (q.shape[0], k.shape[1]), f"Wrong shape. We expected ({q.shape[0]}, {k.shape[1]})"

in scaled_dot_product_attention(q, k, v, mask)
44 # softmax is normalized on the last axis (seq_len_k) so that the scores
45 # add up to 1.
---> 46 attention_weights = tf.keras.activations.softmax(scaled_attention_logits) # (..., seq_len_q, seq_len_k)
47
48 output = attention_weights*v # (..., seq_len_q, depth_v)

/opt/conda/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs)
199 """Call target, and fall back on dispatchers if there is a TypeError."""
200 try:
--> 201 return target(*args, **kwargs)
202 except (TypeError, ValueError):
203 # Note: convert_to_eager_tensor currently raises a ValueError, not a

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/activations.py in softmax(x, axis)
72 ValueError: In case dim(x) == 1.
73 """
---> 74 rank = x.shape.rank
75 if rank == 2:
76 output = nn.softmax(x)

AttributeError: 'tuple' object has no attribute 'rank'

I don't know what is wrong. Please help me.


There appears to be a problem with your code that computes scaled_attention_logits.

Also, in "output = ...", you should consider using tf.matmul(), as the instructions suggest.
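In case it helps to see why tf.matmul (rather than elementwise `*`) matters there, here is a small NumPy sketch with made-up shapes: matmul contracts over the seq_len_k axis to produce a weighted sum of the value rows, while elementwise multiply just tries to broadcast the two arrays, and for shapes like (seq_len_q, seq_len_k) vs. (seq_len_k, depth_v) that broadcast fails outright.

```python
import numpy as np

# Hypothetical attention weights: (seq_len_q=2, seq_len_k=3), rows sum to 1
weights = np.array([[0.2, 0.3, 0.5],
                    [0.6, 0.1, 0.3]])
# Hypothetical values: (seq_len_k=3, depth_v=2)
v = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

out = weights @ v          # matmul: each output row is a weighted sum of rows of v
print(out.shape)           # (2, 2)

# Elementwise multiply tries to broadcast (2, 3) with (3, 2) and raises:
try:
    weights * v
except ValueError as e:
    print("broadcast error:", e)
```

Even when the shapes happen to be broadcastable, `*` still computes the wrong thing for attention; only the matrix product gives the weighted combination of value vectors.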


Also, I recommend you read this thread:


I changed np.matmul to tf.matmul, and that solved the error.
Thank you!
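For anyone who lands here with the same traceback: the root cause is that a NumPy array's `.shape` is a plain Python tuple, whereas a TensorFlow tensor's `.shape` is a TensorShape object that does have a `.rank` attribute. `tf.keras.activations.softmax` reads `x.shape.rank`, so passing it a NumPy result (e.g. from np.matmul) triggers exactly this AttributeError; using tf.matmul keeps everything as tensors. A quick sketch of the NumPy side of that distinction:

```python
import numpy as np

x = np.ones((3, 4), dtype=np.float32)

# NumPy exposes the shape as a plain tuple...
print(type(x.shape).__name__)   # tuple
print(x.shape)                  # (3, 4)

# ...so the attribute lookup that tf.keras.activations.softmax performs
# (x.shape.rank) fails on an ndarray:
try:
    x.shape.rank
except AttributeError as e:
    print(e)                    # 'tuple' object has no attribute 'rank'
```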
