C5 W1 A3 Jazz Improvisation assignment

3.1 - Predicting & Sampling part
Each time I call the music_inference_model function, the number of inputs increases by 2, and I have to rerun the whole notebook all over again to fix it.

music_inference_model(LSTM_cell, densor, Ty = 50)

To fix it I need to create another LSTM_cell object (i.e., rerun this cell):

n_values = 90 # number of music values
reshaper = Reshape((1, n_values)) # Used in Step 2.B of djmodel(), below
LSTM_cell = LSTM(n_a, return_state = True) # Used in Step 2.C
densor = Dense(n_values, activation='softmax') # Used in Step 2.D
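For anyone wondering why rerunning that creation cell helps: Keras layers are ordinary Python objects that remember every call made on them, so a shared LSTM_cell accumulates state across repeated model constructions, and only creating a fresh object resets it. Here is a minimal pure-Python analogy of that shared-object behavior (the SharedLayer class and build_model function are hypothetical, just for illustration):

```python
class SharedLayer:
    """Toy stand-in for a Keras layer: it records every tensor it was called on."""
    def __init__(self):
        self.inbound_calls = []

    def __call__(self, x):
        # Each call appends to the layer's internal state, like Keras inbound nodes.
        self.inbound_calls.append(x)
        return x

def build_model(cell):
    # Stand-in for constructing a model that reuses the shared layer object.
    return cell("input_tensor")

cell = SharedLayer()
build_model(cell)
build_model(cell)
print(len(cell.inbound_calls))  # 2 -- state accumulates across repeated builds

cell = SharedLayer()            # rerunning the creation cell gives a fresh object
print(len(cell.inbound_calls))  # 0
```

This is why the notebook tells you to rerun the cell that creates LSTM_cell and densor whenever a model construction fails partway through.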


Why does this happen?

What exactly do you mean by “the number of inputs increases by 2”?
Are you referring to the “n_values” variable?

I tested my code, and it is always 90 regardless of how many times I run the music_inference_model() cell.

Can you provide some more data about this issue?

Sure. For example, when I ran

inference_model = music_inference_model(LSTM_cell, densor, Ty = 50)

twice, here is what happens

Running it for a third time:

I figured out exactly what went wrong in the code: I specified the depth in tf.one_hot as 1 where it should be n_values.

def music_inference_model(LSTM_cell, densor, Ty=100):
    """
    Uses the trained "LSTM_cell" and "densor" from model() to generate a sequence of values.

    Arguments:
    LSTM_cell -- the trained "LSTM_cell" from model(), Keras layer object
    densor -- the trained "densor" from model(), Keras layer object
    Ty -- integer, number of time steps to generate

    Returns:
    inference_model -- Keras model instance
    """
    # Get the shape of input values
    n_values = densor.units
    # Get the number of units in the hidden state vector
    n_a = LSTM_cell.units

    # Define the input of your model with a shape
    x0 = Input(shape=(1, n_values))

    # Define s0, initial hidden state for the decoder LSTM
    a0 = Input(shape=(n_a,), name='a0')
    c0 = Input(shape=(n_a,), name='c0')
    a = a0
    c = c0
    x = x0

    # Step 1: Create an empty list of "outputs" to later store your predicted values (≈1 line)
    outputs = []

    # Step 2: Loop over Ty and generate a value at every time step
    for t in range(Ty):
        # Step 2.A: Perform one step of LSTM_cell. Use "x", not "x0" (≈1 line)
        a, _, c = LSTM_cell(x, initial_state=[a, c])
        # Step 2.B: Apply Dense layer to the hidden state output of the LSTM_cell (≈1 line)
        out = densor(a)
        # Step 2.C: Append the prediction "out" to "outputs". out.shape = (None, 90) (≈1 line)
        outputs.append(out)
        # Step 2.D:
        # Select the next value according to "out",
        # Set "x" to be the one-hot representation of the selected value
        # See instructions above.
        x = tf.math.argmax(out, axis=-1)
        x = tf.one_hot(x, 1)   ####### Here the depth should have been n_values
        # Step 2.E:
        # Use RepeatVector(1) to convert x into a tensor with shape=(None, 1, 90)
        x = RepeatVector(1)(x)

    # Step 3: Create model instance with the correct "inputs" and "outputs" (≈1 line)
    inference_model = Model(inputs=[x0, a0, c0], outputs=outputs)

    return inference_model

However, I do not understand the behavior itself. To reproduce the same error, just change the depth to 1 instead of n_values

and then call the function twice:

inference_model = music_inference_model(LSTM_cell, densor, Ty = 50)
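To see why the depth matters, here is a minimal sketch using a hand-rolled one_hot in place of tf.one_hot (so the resulting widths are easy to print): with depth=n_values, the next input x has width 90, matching the feature size LSTM_cell was built for; with depth=1 it has width 1, so the shapes no longer line up and Keras raises a mismatch error.

```python
def one_hot(index, depth):
    # Mimics tf.one_hot for a single integer index:
    # returns a vector of length `depth` with a 1.0 at position `index`
    # (all zeros if the index is out of range, as with depth=1 here).
    return [1.0 if i == index else 0.0 for i in range(depth)]

n_values = 90
picked = 42  # the index returned by argmax over the densor output

x_good = one_hot(picked, n_values)
x_bad = one_hot(picked, 1)
print(len(x_good))  # 90 -- matches the width LSTM_cell expects
print(len(x_bad))   # 1  -- wrong width, hence the shape error downstream
```

This is only an illustration of the tensor widths involved, not the course's reference implementation.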

Your tf.one_hot layer is not correct. The instructions tell you to use depth=n_values, not 1.

I have run the function many times - the number of inputs is always the same.

Yes, I understand it should be n_values; I was just asking about the behavior itself when it is set incorrectly. Many thanks!