C5-W1-A3: Idiomatic usage of tf.keras.layers.LSTM vs LSTMCell

I was just wondering if the code given for the Jazz improvisation task is actually idiomatic usage of TensorFlow/Keras. There we loop over t in range(Tx) and apply LSTM_cell over and over. But LSTM_cell is really a tf.keras.layers.LSTM object, not a tf.keras.layers.LSTMCell. My understanding is that LSTM is supposed to receive a whole sequence as input, and if return_sequences is set to True, it outputs the entire sequence of outputs; internally, LSTM holds an LSTMCell object (its .cell attribute) which it uses for the individual steps. So it seems like this is actually an abuse of the LSTM class.
Can someone confirm if this is true, or is there some specific reason why the code was set up the way it is?
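To make the distinction concrete, here is a minimal sketch (the layer sizes and inputs are made up, not from the assignment) contrasting the three usages: an LSTM layer over a full sequence, the assignment's pattern of calling an LSTM layer one timestep at a time with return_state=True, and a bare LSTMCell stepped manually:

```python
import numpy as np
import tensorflow as tf

batch, Tx, n_values, n_a = 2, 3, 4, 8
x = np.random.rand(batch, Tx, n_values).astype("float32")

# 1) The "intended" usage: process the whole sequence in one call.
lstm_seq = tf.keras.layers.LSTM(n_a, return_sequences=True)
out_seq = lstm_seq(x)  # shape (batch, Tx, n_a)

# 2) The assignment's pattern: call the same LSTM layer on one
#    timestep at a time, threading the states back in by hand.
lstm_step = tf.keras.layers.LSTM(n_a, return_state=True)
a = tf.zeros((batch, n_a))
c = tf.zeros((batch, n_a))
for t in range(Tx):
    xt = x[:, t : t + 1, :]  # keep a length-1 time axis: (batch, 1, n_values)
    _, a, c = lstm_step(xt, initial_state=[a, c])

# 3) The "cell" API the question alludes to: LSTMCell takes a single
#    timestep (no time axis) plus the state list, and returns both.
cell = tf.keras.layers.LSTMCell(n_a)
h = tf.zeros((batch, n_a))
c2 = tf.zeros((batch, n_a))
for t in range(Tx):
    h, [h, c2] = cell(x[:, t, :], [h, c2])
```

Pattern 2 works because an LSTM layer happily accepts a sequence of length 1 and, with return_state=True, hands back the hidden and cell states so they can be fed into the next call, which is functionally the same stepping that LSTMCell does in pattern 3.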

I’m not sure I understand your question completely.

Does this thread help?

The problem statement says: “The weights and biases are transferred to the new model using the global shared layers (LSTM_cell, densor, reshaper) described below”

LSTM_cell is a global variable, so the inference function works with the same LSTM_cell object that was trained in the earlier ‘djmodel’ function; LSTM_cell carries its weights and biases across both models.
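A small sketch of that sharing mechanism (the layer names and sizes here are illustrative, not the assignment's actual values): building two separate Keras models from the same globally shared layer objects means both models reference the very same weight variables, so training one updates the other.

```python
import tensorflow as tf

n_a, n_values = 8, 4

# Globally shared layers, standing in for the assignment's
# LSTM_cell and densor. Created once, reused in every model.
LSTM_cell = tf.keras.layers.LSTM(n_a, return_state=True)
densor = tf.keras.layers.Dense(n_values, activation="softmax")

def build_model():
    # Each call wires a fresh graph, but through the SAME layer
    # objects, hence the same underlying weight tensors.
    x = tf.keras.Input(shape=(1, n_values))
    a0 = tf.keras.Input(shape=(n_a,))
    c0 = tf.keras.Input(shape=(n_a,))
    _, a, c = LSTM_cell(x, initial_state=[a0, c0])
    out = densor(a)
    return tf.keras.Model(inputs=[x, a0, c0], outputs=[out, a, c])

train_model = build_model()      # analogous to the model from djmodel
inference_model = build_model()  # analogous to the sampling model

# Both models hold references to identical weight variables.
shared = set(id(w) for w in train_model.weights) == set(
    id(w) for w in inference_model.weights
)
```

This is why no explicit weight copying is needed: after fitting the training model, the inference model already sees the trained weights through the shared LSTM_cell and densor objects.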