Using an LSTM layer in a GAN

Dear all,

I am creating this topic because I would like to discuss the use of LSTM cells inside a GAN.

In the Deep Learning Specialization, we learned about LSTM cells and how to train them to generate nice dinosaur names. This actually works pretty well!

However, that training relies on using the same word as both input and target for the LSTM cell (with the target shifted by one position).
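For context, here is a toy sketch of that shifted input/target construction (the vocabulary and word list are made up for illustration):

```python
import numpy as np

# Toy character vocabulary built from a few made-up dinosaur names.
words = ["trex", "raptor", "ankylosaurus"]
vocab = sorted(set("".join(words)))
char_to_idx = {c: i for i, c in enumerate(vocab)}

def make_pair(word):
    # The input is the word and the target is the same word shifted
    # one position left, so the LSTM predicts the next character.
    idx = [char_to_idx[c] for c in word]
    return np.array(idx[:-1]), np.array(idx[1:])

x, y = make_pair("raptor")
print(x, y)  # y[t] is the character that follows x[t]
```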

This training setup cannot be reproduced in a GAN, because that is not how GANs are trained: the generator never sees the real data directly and only receives feedback through the discriminator.
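To make the contrast concrete, a minimal sketch of one generator update, assuming TensorFlow and that `generator` and `discriminator` are Keras models (the discriminator outputting logits):

```python
import tensorflow as tf

def generator_step(generator, discriminator, opt, batch_size, latent_dim):
    # Sample noise; note the generator never receives a real word here.
    z = tf.random.normal((batch_size, latent_dim))
    with tf.GradientTape() as tape:
        scores = discriminator(generator(z))  # discriminator logits on fakes
        # Non-saturating generator loss: push the scores towards "real".
        loss = tf.reduce_mean(
            tf.nn.sigmoid_cross_entropy_with_logits(
                labels=tf.ones_like(scores), logits=scores))
    grads = tape.gradient(loss, generator.trainable_weights)
    opt.apply_gradients(zip(grads, generator.trainable_weights))
    return loss
```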

I have tried using an LSTM layer in a classic GAN architecture, but I have not obtained satisfactory results (yet).

As a reminder, when working at the character level, each character in a word (or sentence) is one-hot encoded. I worry that this one-hot representation makes it harder for the LSTM cell to learn during GAN training.
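To illustrate the encoding (a toy sketch, with the vocabulary reduced to lowercase letters):

```python
import numpy as np

vocab = "abcdefghijklmnopqrstuvwxyz"
char_to_idx = {c: i for i, c in enumerate(vocab)}

def one_hot(word):
    # Each character becomes a vector of length len(vocab) with a single 1.
    out = np.zeros((len(word), len(vocab)))
    for t, c in enumerate(word):
        out[t, char_to_idx[c]] = 1.0
    return out

print(one_hot("rex").shape)  # (3, 26)
```

Part of my worry: the generator outputs soft softmax distributions while the real sequences are hard one-hot vectors, so the discriminator can potentially separate real from fake on that difference alone.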

To clarify, I have also checked existing implementations, but none of them proved relevant or usable.

Do you have any comments or ideas I should follow?

3 Likes

I can provide the following article as an example: An LSTM Based Generative Adversarial Architecture for Robotic Calligraphy Learning System

LSTM cells serve a similar purpose when used as building blocks in GANs.

1 Like

Thanks for the link @cvetko.tim

I’ve been able to implement both GAN and WGAN with LSTM cells.

I’ve used simple infrastructures for the discriminator and generator.

So far, the plain GAN shows some small signs of learning, while a standard LSTM model (as used for music generation in another specialization) learns much faster.

I am wondering if I should implement a custom layer or something.

1 Like

No problem. What exactly makes you think you need to implement your own layer? Can you tell me what kind of task you are trying to solve?

1 Like

@cvetko.tim I am trying to get a GAN to learn the patterns in a dataset I have.
Each element is a string containing recurring patterns, so a standalone LSTM can learn them fairly well.
However, the GAN is quite unstable, even after implementing a WGAN.
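For reference, this is roughly what I mean by implementing WGAN (TensorFlow assumed; `critic` plays the discriminator's role and has no sigmoid output):

```python
import tensorflow as tf

def critic_loss(critic, real_batch, fake_batch):
    # Wasserstein loss: the critic maximises the score gap
    # between real and generated sequences.
    return tf.reduce_mean(critic(fake_batch)) - tf.reduce_mean(critic(real_batch))

def clip_critic_weights(critic, c=0.01):
    # The original WGAN keeps the critic roughly 1-Lipschitz by clipping weights.
    for w in critic.trainable_weights:
        w.assign(tf.clip_by_value(w, -c, c))
```

I know weight clipping is a blunt instrument; the gradient-penalty variant (WGAN-GP) is often recommended as a more stable alternative, so that may be my next step.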

2 Likes