I am not sure whether my LSTM layers are correct. I've dumped the value of X after every layer; the output and the resulting error are below.
Would you mind shedding some light on what the LSTM layer parameters units and return_sequences mean?
From my understanding, each LSTM unit takes one embedding (of a single word) as input, but I am not sure what the dimension of that unit's output is. Is the output dimension equal to the number of hidden units (128 in the first LSTM's case)? If so, how? I am having a hard time understanding LSTM units even after watching the videos, so any explanation on the programming side is greatly appreciated!
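To check my own understanding of units, here is a rough numpy sketch of a single LSTM time step. The weights are random toy values of my own (not what Keras actually learns), but the shape arithmetic is the point: the hidden state h, which is what the layer emits at each time step, has length equal to units, no matter how small the input embedding is.

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. len(h_prev) == units; x is one word embedding."""
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b          # pre-activations, shape (4*units,)
    i = 1 / (1 + np.exp(-z[:n]))        # input gate
    f = 1 / (1 + np.exp(-z[n:2*n]))     # forget gate
    o = 1 / (1 + np.exp(-z[2*n:3*n]))   # output gate
    g = np.tanh(z[3*n:])                # candidate cell state
    c = f * c_prev + i * g              # new cell state
    h = o * np.tanh(c)                  # hidden state = per-timestep output
    return h, c

units, emb_dim = 128, 2                 # same sizes as in my model
rng = np.random.default_rng(0)
W = rng.normal(size=(4 * units, emb_dim))
U = rng.normal(size=(4 * units, units))
b = np.zeros(4 * units)
h = c = np.zeros(units)
x = rng.normal(size=emb_dim)            # one 2-dimensional word embedding

h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)                          # (128,) -- output size equals units
```

So a 2-dimensional embedding going into a 128-unit LSTM comes out 128-dimensional at every time step, which matches the (None, 4, 128) I see after the first layer.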
In regards to return_sequences: we keep all the per-timestep outputs of the first LSTM layer but only the last output of the second LSTM layer, right?
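And for return_sequences, a tiny sketch of what I think the shapes look like. The arrays here are stand-ins for the per-timestep hidden states, not real LSTM outputs: return_sequences=True would hand the next layer one units-sized vector per time step, while False keeps only the final one.

```python
import numpy as np

rng = np.random.default_rng(1)
T, units = 4, 128                       # 4 timesteps (maxLen), 128 hidden units

# Stand-ins for the hidden state the LSTM produces at each time step.
hidden_states = [rng.normal(size=units) for _ in range(T)]

seq_out = np.stack(hidden_states)       # return_sequences=True  -> (4, 128)
last_out = hidden_states[-1]            # return_sequences=False -> (128,)
print(seq_out.shape, last_out.shape)
```

That would explain why my first LSTM (return_sequences=True) prints shape (None, 4, 128) and the second (return_sequences=False) prints (None, 128).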
Output
np.shape(emb_matrix) = (15, 2)
embedding_layer= <tensorflow.python.keras.layers.embeddings.Embedding object at 0x7f494d18e950>
embeddings= Tensor("embedding_13/embedding_lookup/Identity_1:0", shape=(None, 4, 2), dtype=float32)
After first LSTM output X= Tensor("lstm_14/PartitionedCall:1", shape=(None, 4, 128), dtype=float32)
After first Dropout output X= Tensor("dropout_14/cond/Identity:0", shape=(None, 4, 128), dtype=float32)
After second LSTM output X= Tensor("lstm_15/PartitionedCall:0", shape=(None, 128), dtype=float32)
After second Dropout output X= Tensor("dropout_15/cond/Identity:0", shape=(None, 128), dtype=float32)
After Dense output X= Tensor("dense_7/BiasAdd:0", shape=(None, 5), dtype=float32)
NotImplementedError                       Traceback (most recent call last)
<ipython-input> in <module>
     22
     23
---> 24 Emojify_V2_test(Emojify_V2)

<ipython-input> in Emojify_V2_test(target)
     16
     17     maxLen = 4
---> 18     model = target((maxLen,), word_to_vec_map, word_to_index)
     19
     20     expectedModel = [['InputLayer', [(None, 4)], 0], ['Embedding', (None, 4, 2), 30], ['LSTM', (None, 4, 128), 67072, (None, 4, 2), 'tanh', True], ['Dropout', (None, 4, 128), 0, 0.5], ['LSTM', (None, 128), 131584, (None, 4, 128), 'tanh', False], ['Dropout', (None, 128), 0, 0.5], ['Dense', (None, 5), 645, 'linear'], ['Activation', (None, 5), 0]]

<ipython-input> in Emojify_V2(input_shape, word_to_vec_map, word_to_index)
     58
     59     # Add a softmax activation
---> 60     X = softmax(X)
     61
     62     print("After Softmax output X=", X)

~/work/W2A2/emo_utils.py in softmax(x)
     27 def softmax(x):
     28     """Compute softmax values for each sets of scores in x."""
---> 29     e_x = np.exp(x - np.max(x))
     30     return e_x / e_x.sum()
     31

<__array_function__ internals> in amax(*args, **kwargs)

/opt/conda/lib/python3.7/site-packages/numpy/core/fromnumeric.py in amax(a, axis, out, keepdims, initial, where)
   2666     """
   2667     return _wrapreduction(a, np.maximum, 'max', axis, None, out,
-> 2668                           keepdims=keepdims, initial=initial, where=where)
   2669
   2670

/opt/conda/lib/python3.7/site-packages/numpy/core/fromnumeric.py in _wrapreduction(obj, ufunc, method, axis, dtype, out, **kwargs)
     88             return reduction(axis=axis, out=out, **passkwargs)
     89
---> 90     return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
     91
     92

/opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in __array__(self)
    846         "Cannot convert a symbolic Tensor ({}) to a numpy array."
    847         " This error may indicate that you're trying to pass a Tensor to"
--> 848         " a NumPy call, which is not supported".format(self.name))
    849
    850   def __len__(self):

NotImplementedError: Cannot convert a symbolic Tensor (dense_7/BiasAdd:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported
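Update on the error itself: the softmax in the traceback is the numpy helper from emo_utils.py, and calling it on the symbolic Keras tensor X is what triggers the np.max failure. If I am reading that right, the fix is to apply a Keras Activation layer instead of the numpy function. Here is a sketch of the model tail under that assumption, with the layer sizes copied from my debug output above (the layer wiring is my reconstruction, not the official solution):

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(4,), dtype='int32')       # maxLen = 4
X = tf.keras.layers.Embedding(15, 2)(inputs)             # (None, 4, 2)
X = tf.keras.layers.LSTM(128, return_sequences=True)(X)  # (None, 4, 128)
X = tf.keras.layers.Dropout(0.5)(X)
X = tf.keras.layers.LSTM(128, return_sequences=False)(X) # (None, 128)
X = tf.keras.layers.Dropout(0.5)(X)
X = tf.keras.layers.Dense(5)(X)                          # (None, 5)
# Keras softmax layer, NOT the numpy softmax from emo_utils:
X = tf.keras.layers.Activation('softmax')(X)
model = tf.keras.Model(inputs=inputs, outputs=X)
print(model.output_shape)
```

Unlike emo_utils.softmax, the Activation layer builds the softmax into the graph, so it never tries to convert the symbolic tensor to a numpy array.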