Hi all,
According to the suggestions in the notebook, I have tried adding a bidirectional LSTM and changing the number of units in it; however, the accuracy just isn't reaching 80% or more.
Does anyone have any input on this?
Thanks
Hi mugdhagovilkar,
My sense is that in order to get higher accuracy you have to go beyond the structure of the model presented in the assignment. If this interests you, you can have a look here, or at another source you may find, on using bidirectional LSTMs in siamese networks to evaluate question duplicates.
With the following model you can get an accuracy of almost 85%:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, Dense

model = Sequential()
model.add(Embedding(total_words, 100, input_length=max_sequence_len-1))  # 100-dim embeddings over the vocabulary
model.add(Bidirectional(LSTM(128)))  # bidirectional LSTM with 128 units per direction
model.add(Dense(total_words, activation='softmax'))  # probability distribution over the next word
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
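In case it helps, here is a minimal training sketch for that model. It assumes xs and ys were built the same way as in the notebook (padded n-gram input sequences and one-hot next-word labels), and the epoch count is only an example, not a recommendation:

# Train on the padded n-gram sequences (xs) and one-hot labels (ys)
# produced earlier in the notebook; 100 epochs is just an illustrative value.
history = model.fit(xs, ys, epochs=100, verbose=1)

# Inspect the final training accuracy
print(history.history['accuracy'][-1])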
Yes, this does work.
Looking at "C3_W4_Lab_2_irish_lyrics.ipynb", it uses Adam(learning_rate=0.01). If this is used for this assignment, the accuracy can only reach about 0.7. Can I ask why 0.01 was used in "C3_W4_Lab_2_irish_lyrics.ipynb"?
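For anyone experimenting with this, one thing worth trying is simply lowering the learning rate. A small sketch of recompiling the model above with Adam's default rate of 0.001 instead of 0.01 (the 0.7 plateau mentioned above was reported with 0.01; whether 0.001 gets you past 80% will depend on the rest of your setup):

from tensorflow.keras.optimizers import Adam

# Recompile with Adam's default learning rate (0.001) instead of the 0.01
# used in C3_W4_Lab_2_irish_lyrics.ipynb, which reportedly plateaus near 0.7 here.
model.compile(loss='categorical_crossentropy',
              optimizer=Adam(learning_rate=0.001),
              metrics=['accuracy'])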