Hi,
can you help me see why my call is wrong?
Looking at the error message, the problem seems to be in my call to the fully connected layer, but I can't see what's wrong. It seems quite straightforward:
ffn_output = self.fnn(out1) # (batch_size, input_seq_len, fully_connected_dim)
Here is the error log:
UNIT TEST
EncoderLayer_test(EncoderLayer)
AttributeError Traceback (most recent call last)
&lt;ipython-input&gt; in &lt;module&gt;
1 # UNIT TEST
----> 2 EncoderLayer_test(EncoderLayer)
~/work/W4A1/public_tests.py in EncoderLayer_test(target)
84 encoder_layer1 = target(4, 2, 8)
85 tf.random.set_seed(10)
---> 86 encoded = encoder_layer1(q, True, np.array([[1, 0, 1]]))
87
88 assert tf.is_tensor(encoded), "Wrong type. Output must be a tensor"
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in call(self, *args, **kwargs)
1010 with autocast_variable.enable_auto_cast_variables(
1011 self._compute_dtype_object):
-> 1012 outputs = call_fn(inputs, *args, **kwargs)
1013
1014 if self._activity_regularizer:
&lt;ipython-input&gt; in call(self, x, training, mask)
46
47 # pass the output of the multi-head attention layer through a ffn (~1 line)
---> 48 ffn_output = self.fnn(out1) # (batch_size, input_seq_len, fully_connected_dim)
49
50 # apply dropout layer to ffn output during training (~1 line)
AttributeError: 'EncoderLayer' object has no attribute 'fnn'
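
If I understand the error correctly, it means Python can't find any attribute named fnn on the EncoderLayer instance, i.e. the name used inside call has to match an attribute assigned in __init__. Here is a minimal standalone sketch that reproduces the same failure mode (TinyLayer and the Dense sublayer are just illustrative, not the assignment's actual scaffold):

import tensorflow as tf

class TinyLayer(tf.keras.layers.Layer):
    def __init__(self):
        super().__init__()
        # The feed-forward sublayer is assigned under the name `ffn`...
        self.ffn = tf.keras.layers.Dense(8)

    def call(self, x):
        # ...but referenced here as `fnn`, which was never assigned,
        # so attribute lookup on the instance fails at this line.
        return self.fnn(x)

layer = TinyLayer()
layer(tf.ones((1, 3)))  # AttributeError: 'TinyLayer' object has no attribute 'fnn'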