Course 5: Week 1 - Assignment 3 (Jazz) - Exercise 1

Hello, I’m on Exercise 1 of Course 5: Week 1 - Assignment 3 (Jazz Music). Here’s what I have:

{moderator edit - solution code removed}

When I ran this code, I got this odd error:

ValueError: as_list() is not defined on an unknown TensorShape. Any idea?

Why are you ignoring the parameters that were passed into your function? It is a mistake to invoke Reshape() and Dense() directly: those are “layer” functions, which return functions rather than tensors. What you want are calls that return tensors, and you were passed instantiations of those layers for exactly that purpose, right?

1 Like

Thanks Paul. I see that the instantiations of Reshape() and Dense() were passed into the djmodel() function. So, reshaper and densor are tensor objects. I can’t see what the Reshape class looks like, because I would like to do something like this for reshaping:

x = reshaper.shape(x)

But shape() is not a method of the Reshape class. Perhaps the method for reshaping is something else, but I don’t know what it is and can’t find it.

Reshape is a layer function: it takes parameters and returns an instance of a reshape function configured with the parameters you requested. That return value is itself a function: it takes a tensor as an argument and returns a reshaped tensor as its output.

Look at how reshaper was defined:

reshaper = Reshape((1, n_values))

So reshaper is a function, not a tensor. Of course it doesn’t have a shape attribute. You invoke it with an input and get the reshaped value as the output.
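
For concreteness, here’s a minimal standalone sketch (made-up shapes, not the assignment’s actual values) of the pattern: instantiate the layer once, then call that instance on a tensor to get a new tensor back.

import tensorflow as tf
from tensorflow.keras.layers import Reshape

reshaper = Reshape((1, 4))     # hypothetical target shape; this only configures the layer
x = tf.ones((2, 4))            # hypothetical batch of 2 vectors of length 4
y = reshaper(x)                # calling the instance is what produces the reshaped tensor
print(y.shape)                 # (2, 1, 4)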

1 Like

This is exactly how the “Layer” functions worked back in Course 4, right? Remember happyModel and convolutional_model in the C4 Week 1 Assignment 2? Maybe it would also help to read this thread from ai_curious.

1 Like

Thanks Paul! I have corrected the code so that

x = reshaper(x)
out = densor(a)

and got this error: AssertionError: Error in test “Test failed at index 2”.

Hmm, what else is wrong here?

Yes, it looks like you’ve got those two steps right now. But that’s only one possible bug. Looking back at your original post, another mistake I didn’t notice is that you always start from [a0, c0] in LSTM_cell. That kind of defeats the purpose, right? That’s not how successive “time steps” are intended to work.
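
To make that concrete, here’s a minimal standalone sketch (made-up sizes, and deliberately not the full djmodel): only the very first step uses a0 and c0; after that, each call is fed the states returned by the previous call.

import tensorflow as tf
from tensorflow.keras.layers import LSTM

n_a, n_values, Tx = 8, 5, 3                  # hypothetical sizes
LSTM_cell = LSTM(n_a, return_state=True)     # one shared layer

X = tf.random.normal((2, Tx, n_values))      # hypothetical batch of 2 sequences
a = tf.zeros((2, n_a))                       # a0
c = tf.zeros((2, n_a))                       # c0

for t in range(Tx):
    x = tf.expand_dims(X[:, t, :], axis=1)   # (batch, 1, n_values) for one time step
    # Feed the states produced by the previous step, not [a0, c0] every time.
    a, _, c = LSTM_cell(inputs=x, initial_state=[a, c])

In the actual exercise, LSTM_cell is passed in already instantiated, so the loop only needs to call it.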

1 Like

Thanks Paul. I fixed the code so that

a, _, c = LSTM_cell(inputs=x, initial_state=[a, c])

and it worked for Exercise 1. But in Exercise 2, I did the same

a, _, c = LSTM_cell(inputs=x, initial_state=[a, c])

but it prompted this error:

ValueError: Layer lstm expects 21 inputs, but it received 3 input tensors. Inputs received: [<tf.Tensor ‘input_13:0’ shape=(None, 1, 90) dtype=float32>, <tf.Tensor ‘a0_12:0’ shape=(None, 64) dtype=float32>, <tf.Tensor ‘c0_12:0’ shape=(None, 64) dtype=float32>]

Have you considered that maybe the problem is that your a and/or c values are the wrong shape? What shape are they? Add print statements. What shapes should they be? You should be able to figure that out by reading the instructions again.

Hi,
I’m having an issue with this. I’m not sure what the shape of a0 and c0 should be. I’m seeing the shapes of a0 and c0 as (None, 90), and the shape of X is (None, 30, 90). Do I need to include n_a in the input shape?

I’m getting the error: ValueError: An initial_state was passed that is not compatible with cell.state_size. Received state_spec=ListWrapper([InputSpec(shape=(None, 90), ndim=2), InputSpec(shape=(None, 90), ndim=2)]); however cell.state_size is [64, 64]

I figured it out. I did have to use n_a for the a0 and c0 shape.
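
For anyone else who hits this, here’s a rough sketch of the idea (placeholder code, not the graded cell): the initial-state Inputs must match the LSTM’s hidden size n_a, not the feature size n_values.

from tensorflow.keras.layers import Input

n_values = 90   # feature size per time step (90 in the errors above)
n_a = 64        # LSTM hidden size (64 in the errors above)
Tx = 30         # sequence length

X  = Input(shape=(Tx, n_values))       # data: (batch, Tx, n_values)
a0 = Input(shape=(n_a,), name="a0")    # hidden state: (batch, n_a), not (batch, n_values)
c0 = Input(shape=(n_a,), name="c0")    # cell state:   (batch, n_a)

The 90 and 64 come from the error messages quoted above; the graded code should of course use the n_values and n_a variables rather than hard-coded numbers.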

I had this same error. I found that recompiling the LSTM and retraining, then changing the one_hot call to take no arguments other than ‘indices’ and ‘depth’, solves the problem:

tf.one_hot(indices, depth)

This error starts when running the cell below:

“### YOU CANNOT EDIT THIS CELL
inference_model = music_inference_model(LSTM_cell, densor, Ty = 50)”

The initial error is:
“Input ‘b’ of ‘MatMul’ Op has type float32 that does not match type int32 of argument ‘a’”

This occurs if you call the function as:

one_hot(indices, depth, on_value=1, off_value=0)

because those values have dtype ‘int32’.
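
A small standalone check (hypothetical indices) shows why: tf.one_hot infers its output dtype from on_value/off_value, so integer values produce an int32 tensor that MatMul can’t multiply against float32 weights.

import tensorflow as tf

idx = tf.constant([2, 0, 1])   # hypothetical indices

# Default on/off values give a float32 result, which works fine downstream.
print(tf.one_hot(idx, depth=4).dtype)                           # float32

# Integer on_value/off_value make the result int32, triggering the MatMul error.
print(tf.one_hot(idx, depth=4, on_value=1, off_value=0).dtype)  # int32

So leaving one_hot with just indices and depth keeps everything in float32.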

After that, the error interestingly becomes:

ValueError: Layer lstm expects 5 inputs, but it received 3 input tensors. Inputs received: [<tf.Tensor ‘input_13:0’ shape=(None, 1, 90) dtype=float32>, <tf.Tensor ‘a0_12:0’ shape=(None, 64) dtype=float32>, <tf.Tensor ‘c0_12:0’ shape=(None, 64) dtype=float32>]

The number of inputs the LSTM expects increases by 2 every time you run the cell. So you pass in 3 tensors, but the LSTM initially expects 5, then 7, and so on.

I am not sure why this happens, but based on the fact that the LSTM must be recompiled to work after the edits, I assume the layer accumulates two more expected inputs every time the cell is run: the ‘for’ loop adds a required input after initialization and then another when the error occurs. I don’t understand the internals well enough to know why.

1 Like

It is always the case that after you change a function cell, you must execute that cell explicitly in order for the new code to be incorporated into the runtime image. If you just call the function again without doing that, it runs the old code. You can easily demonstrate this effect with a little experiment: take an existing function that works and introduce an obvious bug in it. Then call it again: it still works. Now press “Shift + Enter” on the modified cell and call the function again. Kaboom!

But beyond that, there is something clearly wrong here. I don’t have a theory based on the behavior you describe, but I can testify that in my code the LSTM inputs and outputs are the same shape on every iteration.

Actually here’s a theory: are you sure that you are calling the LSTM_cell function that is passed in as an argument to music_inference_model? As opposed to directly calling the TF function LSTM in your code? LSTM is a TF “layer function”, so calling it with arguments returns another function.
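
In other words, the distinction looks roughly like this (a sketch with made-up shapes, not the graded code):

import tensorflow as tf
from tensorflow.keras.layers import LSTM

n_a, n_values = 64, 90                 # hypothetical sizes
x = tf.random.normal((1, 1, n_values))
a = tf.zeros((1, n_a))
c = tf.zeros((1, n_a))

# Shared layer: created (and trained) once; every call reuses the same weights.
LSTM_cell = LSTM(n_a, return_state=True)
a1, _, c1 = LSTM_cell(inputs=x, initial_state=[a, c])

# Calling LSTM(...) directly inside the loop builds a brand-new, untrained layer
# on every call, so the weights learned during training are never used.
a2, _, c2 = LSTM(n_a, return_state=True)(inputs=x, initial_state=[a, c])

The second form is why the training would “go to waste”: the new layer starts from freshly initialized weights.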

So, I did check to make sure I called LSTM_cell, which is necessary since all that training would have gone to waste if I had only called the LSTM function! Bad Jazz!

But I should have been clearer in my comment about recompiling. If I called the trained LSTM_cell layer in any cell within the code, including the djmodel that had already passed all tests, then the above error asking for more inputs would be thrown. So, to fix the bug, it is necessary to go back to the start of section 2 and redefine ‘LSTM_cell’, then retrain the djmodel and LSTM_cell layer, then rebuild the music_inference_model with the above-mentioned fix. After that, the model works and LSTM_cell does not change.

1 Like

AttributeError                            Traceback (most recent call last)
<ipython-input-29-2ad560a25ebc> in <module>
      2 
      3 # UNIT TEST
----> 4 output = summary(model)
      5 comparator(output, djmodel_out)

~/work/W1A3/test_utils.py in summary(model)
     34     result = []
     35     for layer in model.layers:
---> 36         descriptors = [layer.__class__.__name__, layer.output_shape, layer.count_params()]
     37         if (type(layer) == Conv2D):
     38             descriptors.append(layer.padding)

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in output_shape(self)
   2190                            'ill-defined for the layer. '
   2191                            'Use `get_output_shape_at(node_index)` '
-> 2192                            'instead.' % self.name)
   2193 
   2194   @property

AttributeError: The layer "reshape" has multiple inbound nodes, with different output shapes. Hence the notion of "output shape" is ill-defined for the layer. Use `get_output_shape_at(node_index)` instead.

I get this error. I have done x = reshaper(x), where x = X[:,t,:].

Please check your DMs for a message from me about how to proceed.

The comment from “Winston_Elliott” helped me out. In fact, even once you have the djmodel function corrected and you run the next cell, “model = djmodel(…)”, you may end up with an error. You will need to do as indicated in that comment: “it is necessary to go back to the start of section 2 and redefine ‘LSTM_cell’, then retrain the djmodel and LSTM_cell layer, …”

2 Likes

Hi @paulinpaloalto, I have exactly the same issue as the one Bhavika_Rakesh experienced before. Would you help me fix this?

Just to close the loop on the public thread: I had a DM conversation with Masaya, and the problem was solved simply by restarting the kernel. So the theory is that the code was already correct, but the runtime state of the notebook was out of sync. It’s always a worthwhile experiment to try:

  1. Kernel → Restart and Clear Output
  2. Save
  3. Cell → Run All

That will ensure that “what you see is what you get”, meaning that it’s really running your current code.

2 Likes