W1 Writing like Shakespeare: how is the LSTM powerful?

In section 4, "Writing like Shakespeare", this is my output:

```
Forsooth this maketh no sense,
the ibfing owhens my partlove marn cime,
…
and me ans hell your see, aud conse my side.
make, that ho worth which i low then she thou jing,
thy w
```

This means that after training for ~1000 epochs, the LSTM model is basically still outputting gibberish. Is there anything else we can try with the LSTM to show how it can actually be useful? Thanks!


Everything we do here is pretty limited by the resource constraints of the notebook environment. This technique can work really well, but you need a pretty large training corpus and a lot of iterations, i.e., a lot more resources than we have here. Have a look at Andrej Karpathy's famous article on this subject, "The Unreasonable Effectiveness of Recurrent Neural Networks".

Try some more experiments: e.g., bump the number of epochs up to 5000 or 10000 and see whether you notice any improvement.
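To make the "more epochs" experiment concrete, here is a minimal self-contained sketch of a character-level LSTM in Keras. This is not the notebook's exact code: the corpus path `shakespeare.txt`, the sequence length `Tx = 40`, the stride, and the 128-unit LSTM are all assumptions for illustration, and with a full corpus 5000+ epochs will be slow on free hardware.

```python
# Hypothetical sketch of a character-level LSTM, not the assignment's code.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Assumed corpus file; substitute whatever text your notebook loads.
text = open("shakespeare.txt").read().lower()
chars = sorted(set(text))
char_to_ix = {c: i for i, c in enumerate(chars)}

# Cut the corpus into overlapping sequences of Tx characters,
# each labeled with the character that follows it.
Tx, stride = 40, 3
sequences, next_chars = [], []
for i in range(0, len(text) - Tx, stride):
    sequences.append(text[i:i + Tx])
    next_chars.append(text[i + Tx])

# One-hot encode inputs and targets.
x = np.zeros((len(sequences), Tx, len(chars)), dtype=bool)
y = np.zeros((len(sequences), len(chars)), dtype=bool)
for i, seq in enumerate(sequences):
    for t, c in enumerate(seq):
        x[i, t, char_to_ix[c]] = 1
    y[i, char_to_ix[next_chars[i]]] = 1

model = keras.Sequential([
    layers.Input(shape=(Tx, len(chars))),
    layers.LSTM(128),
    layers.Dense(len(chars), activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="adam")

# More epochs (and a bigger corpus) is what usually turns
# gibberish into recognizable verse; 5000 here is the experiment
# suggested above, scaled to whatever your hardware allows.
model.fit(x, y, batch_size=128, epochs=5000)

# Sample a short continuation from a seed to inspect progress.
seed = text[:Tx]
out = seed
for _ in range(200):
    x_pred = np.zeros((1, Tx, len(chars)), dtype=bool)
    for t, c in enumerate(seed):
        x_pred[0, t, char_to_ix[c]] = 1
    probs = model.predict(x_pred, verbose=0)[0]
    probs = probs / probs.sum()  # guard against float round-off
    next_c = chars[np.random.choice(len(chars), p=probs)]
    out += next_c
    seed = seed[1:] + next_c
print(out)
```

A useful way to run this is to checkpoint and sample every few hundred epochs: the progression from random characters to word-shaped tokens to mostly real words is the clearest demonstration that the LSTM is learning, even before the output becomes readable verse.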