Course 5 Week 1 Project 3

Made my way through this assignment. It seemed pretty self-explanatory until I had finished it and realized there were a few things imported that I had not used, and I am a bit unsure if I am using t, the iterable value from the for loop, correctly. First off, I did not use Lambda. I have a hunch that is my issue with the djmodel function. For Step 2.A I had this:

Step 2.A: select the "t"th time step vector from X. 
x = X[t]

I am getting this error:

AttributeError: The layer "lstm" has multiple inbound nodes, with different output shapes. Hence the notion of "output shape" is ill-defined for the layer. Use `get_output_shape_at(node_index)` instead.

considering the LSTM takes the output from selecting the t'th time step vector from X, and get_output_shape_at is not imported from tensorflow.keras.layers.

For the second function, music_inference_model, I am a bit unsure of my code for the one_hot lines. This is where I am unsure about the iterable value t:

x = tf.math.argmax(out, axis=-1)
x = tf.one_hot(x, t)

I am getting errors there, and I can't tell if my third function is working correctly. I am half posting this to write down where my issues are so I can hopefully see where my silly mistakes are, but also so that if anyone else gets stuck with this, it can be documented somewhere. Please help if you have any advice. I am very, very new to the deep learning side of ML, and really pretty new to ML in general. I remember we used TensorFlow at a previous job back in 2016, but I was too ignorant to learn anything about it because it was pretty new technology then, even though it was being very heavily used for map creation and object detection.

You don’t need to use lambda.

But you do need to slice the X matrix to get just the t-th slice along the 2nd of its 3 dimensions.

So that’s x = X[:,t,:]
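A quick sanity check on that indexing, sketched here with NumPy since the slicing rule is the same for TensorFlow tensors (the shapes are just toy stand-ins for the assignment's (m, Tx, n_values)):

```python
import numpy as np

# Toy stand-in for X with shape (m, Tx, n_values) = (8, 30, 90)
X = np.zeros((8, 30, 90))

t = 5
x = X[:, t, :]       # select time step t from the middle (time) axis
print(x.shape)       # (8, 90): one vector per example, time axis gone

# By contrast, X[t] indexes the *first* (batch) axis:
print(X[t].shape)    # (30, 90)
```

So X[t] hands the LSTM a single example's full sequence instead of one time step across the batch, which is why the downstream layer shapes go sideways.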

There are some tips on the forum for the djmodel and music_inference_model functions, you can search for them by name.


Ahhhh, thank you, I wasn't even looking at it from the right perspective. I'm pretty out of practice, and I am just getting back into Python for the first time since last August. I need to take a refresher course in the basics on Codecademy or something lol

Can anyone provide a hint on how the one-hot line is supposed to look? I am pretty sure mine is right. I am getting errors still on that block, but I don't think it's related to the one-hot encoding; then again, I guess there is a chance it's not correct. Like, do we use x and t, or x and another integer like Ty? I am guessing it is t since it is not used anywhere else in my block. If it is supposed to be somewhere else, please correct me.

This is my error:

ValueError: Input 0 is incompatible with layer lstm: expected shape=(None, None, 90), found shape=[None, 1, 0]

but the hint in the code states:

# Use RepeatVector(1) to convert x into a tensor with shape=(None, 1, 90)

So should the first shape be,

   (None, 1, 90)

or should it be what it says in the error:

(None, None, 90)

Considering it is a one-hot and then a RepeatVector function, I am assuming it needs the 1 in the middle instead of None. But then why is it throwing an error like that? My issue is that I am missing the '90' at the end of the tuple.

OK, so I re-read the instructions. It looks like it is supposed to be (None, 1, 90) instead of what the error states it should be. I also added four print statements to my code. The one before argmax = (None, 90). Then the one after argmax = (None,). So I changed the axis from -1 to +1 in the argmax. I get the same result though. I have my prints before argmax, between argmax and one-hot, after one-hot, and after RepeatVector. I am getting:

(None, 90)
(None, 0)
(None, 1, 0)

So first I need to figure out why I am losing the 90 with the argmax function. Second, why does it add a zero after the one-hot encoding?

I changed the axis to 0 on the argmax function and the output has changed, but it's backwards now:

(None, 90)
(90, 0)
(90, 1, 0)

Now I am thinking that the argmax function should be axis=-1. It keeps the 'None' at the beginning of the tuple. I think my issue is with the one-hot not adding a 90 to the tuple; it is adding a 0 instead.
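For anyone chasing the same shapes: the second argument to tf.one_hot is the depth of the encoding (the number of classes), not the loop index t. Passing t as the depth is exactly what produces the 0 in (None, 0) when t is 0. A minimal sketch, assuming n_values = 90 as in the assignment:

```python
import tensorflow as tf
from tensorflow.keras.layers import RepeatVector

n_values = 90                            # size of the one-hot vocabulary
out = tf.random.uniform((2, n_values))   # stand-in for the softmax output, shape (batch, 90)

x = tf.math.argmax(out, axis=-1)    # shape (2,): index of the most likely value
x = tf.one_hot(x, depth=n_values)   # shape (2, 90): depth puts the 90 back
x = RepeatVector(1)(x)              # shape (2, 1, 90): ready to feed back into the LSTM
print(x.shape)
```

With tf.one_hot(x, t) instead, the output shape is (batch, t), which matches the (None, 0) and (None, 1, 0) prints in this thread.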

Okay, I figured out my issue there. It starts running well, then stops with this error now:

ValueError: Graph disconnected: cannot obtain value for tensor Tensor("input_2:0", shape=(None, 1, 90), dtype=float32) at layer "lstm". The following previous layers were accessed without issue: []

Going to look into what happened and post here what I figure out. If anyone has advice, please reply.

Here is what prints out before the error message:

Model Inputs [<tf.Tensor 'repeat_vector_49/Tile:0' shape=(None, 1, 90) dtype=float32>, <tf.Tensor 'input_3:0' shape=(None, 64) dtype=float32>, <tf.Tensor 'input_4:0' shape=(None, 64) dtype=float32>]
WARNING:tensorflow:Functional inputs must come from `tf.keras.Input` (thus holding past layer metadata), they cannot be the output of a previous non-Input layer. Here, a tensor specified as input to "functional_2" was not an Input tensor, it was generated by layer repeat_vector_49.
Note that input tensors are instantiated via `tensor = tf.keras.Input(shape)`.
The tensor that caused the issue was: repeat_vector_49/Tile:0

This tensor looks correct for the last one. It is supposed to have repeat_vectors 0-49.
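The warning is the key clue: Model(inputs=..., outputs=...) must be given the original tf.keras.Input tensors, not the x that comes out of RepeatVector at the end of the generation loop. A minimal sketch of the pattern (my own shortened loop and layer names, not the assignment solution; in older TF versions the raw tf ops inside the loop had to be wrapped in a Lambda layer, which is likely why Lambda was imported):

```python
import tensorflow as tf
from tensorflow.keras.layers import Input, LSTM, Dense, RepeatVector
from tensorflow.keras.models import Model

n_values, n_a = 90, 64
LSTM_cell = LSTM(n_a, return_state=True)
densor = Dense(n_values, activation="softmax")

x0 = Input(shape=(1, n_values))   # these Input tensors are what Model() needs
a0 = Input(shape=(n_a,))
c0 = Input(shape=(n_a,))

x, a, c = x0, a0, c0
outputs = []
for t in range(3):                # shortened loop; the assignment uses Ty = 50
    a, _, c = LSTM_cell(x, initial_state=[a, c])
    out = densor(a)
    outputs.append(out)
    x = tf.math.argmax(out, axis=-1)
    x = tf.one_hot(x, depth=n_values)
    x = RepeatVector(1)(x)        # x is only reused inside the loop

# Passing the loop's final x (a RepeatVector output) here instead of x0
# is what triggers "Graph disconnected".
model = Model(inputs=[x0, a0, c0], outputs=outputs)
```

If any of the three inputs handed to Model is a layer output rather than an Input, Keras cannot trace the graph back to a source tensor, and you get exactly that "cannot obtain value for tensor ... at layer lstm" error.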

Here is the error i am getting from the autograder:

Cell #15. Can't compile the student's code. Error: ValueError('Graph disconnected: cannot obtain value for tensor Tensor("input_7:0", shape=(None, 1, 90), dtype=float32) at layer "lstm_2". The following previous layers were accessed without issue: []')

Does anybody have an idea where I should start looking? I am not seeing what is wrong, and I don't know what to start tracing down: whether it's an error in my code, my code failing a unit test, or my code only failing the autograder.

Got my problem solved. I accidentally hit a key on my keyboard in the middle of a variable name.