C5 [Week 1] Problem slicing Tensor in "Jazz Solo with an LSTM Network"

I cannot seem to figure out the correct way to slice the input tensor X in the djmodel function. I have tried two options:

        # Step 2.A: select the "t"th time step vector from X.
        x = Lambda(lambda z: z[:, t, :])(X)
        #x = X[:, t, :]  # also tried slicing it directly

        # Step 2.B: Use reshaper to reshape x to be (1, n_values) (≈1 line)
        x = reshaper(x)

But I get this error:

The layer “reshape” has multiple inbound nodes, with different output shapes. Hence the notion of “output shape” is ill-defined for the layer. Use get_output_shape_at(node_index) instead.

The size of the slice is x.shape = (None, 90), but the description says that the size should be (n_values,) = (90,). Is that the issue?

5 Likes

There’s a much easier way to select a slice of X.

Can you please give me more details? I’ve been stuck here for the past hour :frowning: … I tried doing X[:,t,:] too.

I used x = X[:,t,:] and it works for me.
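For anyone wondering what that slice actually produces, here is a quick NumPy sketch (the sizes m, Tx, and n_values below are assumed for illustration, mirroring the assignment's usual dimensions). The batch dimension survives the slice, which is why Keras reports x.shape = (None, 90) rather than (90,) — None is just the symbolic batch size:

```python
import numpy as np

# Assumed sizes for illustration: m examples, Tx time steps, n_values features
m, Tx, n_values = 60, 30, 90
X = np.zeros((m, Tx, n_values))

t = 5
x = X[:, t, :]      # pick time step t for every example in the batch
print(x.shape)      # (60, 90): the batch axis is kept, the time axis is dropped
```

The reshaper layer in the notebook then turns that (batch, 90) tensor into (batch, 1, 90) so the LSTM cell sees a single time step.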

1 Like

Restarting the kernel fixed it… :expressionless: Something must’ve been messed up before…

8 Likes

I had the same issue, and restarting the kernel fixed it for me, as well! Thanks for posting this or I would have been chasing my tail for hours.

5 Likes

I just had the same issue as Roland and Marcus. Thanks all for posting.

1 Like

Based on the guidance, I tried using tf.slice but couldn’t get it to work properly.

The dimensions of tf.slice(X,[0,t,0],[-1,t,-1]) become [0, 90], which leads to the following:

ValueError: total size of new array must be unchanged, input_shape = [0, 90], output_shape = [1, 90]

Am I using this wrong, or does it not really apply in the first place?

Thanks,

Tom
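One likely source of the problem: the third argument to tf.slice is a size, not an end index, so [-1, t, -1] asks for t elements along the time axis (zero elements when t == 0). A size of 1 along that axis, i.e. tf.slice(X, [0, t, 0], [-1, 1, -1]), would select a single time step, and is equivalent to X[:, t:t+1, :]. A NumPy sketch of the same begin/size arithmetic (array sizes assumed for illustration):

```python
import numpy as np

X = np.zeros((60, 30, 90))
t = 5

# tf.slice(X, begin, size) takes `size` elements starting at `begin` on each
# axis, with -1 meaning "to the end". So tf.slice(X, [0, t, 0], [-1, 1, -1])
# corresponds to:
x = X[0:, t:t + 1, 0:]
print(x.shape)          # (60, 1, 90): one time step, time axis kept

# whereas a size of t along the time axis takes t elements starting at t,
# not "everything up to index t":
wrong = X[0:, t:t + t, 0:]
print(wrong.shape)      # (60, 5, 90)
```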

X[:,t,:] seemed like the obvious choice and was the first thing I tried too. I went around in circles for hours, including restarting the kernel with other methods, before reading this thread, restarting, and retrying this method.

Does anybody have any ideas for why the kernel seems to have trouble here? Is there some kind of caching going on with the global variables that make it keep some bad state from failed attempts?

Yes, restarting the kernel worked for me as well. Of course, I wasted a bunch of time chasing my tail first. Frustrating. I want to spend my time learning, not getting frustrated.
I think it’s pretty poor that whoever maintains this course hasn’t corrected this by now, or at least put instructions in the notebook itself about how to deal with this issue if it occurs.
Chasing this solution down teaches students absolutely nothing about the subject matter. I’m grateful for this forum and all who have posted in it.

The issue that trips up most students is not that the lab has errors.

It is that almost all of the Course 5 notebooks rely on the LSTM object as a global variable.

So, any time that you modify anything about the LSTM through debugging your code, you have to go back and restart the kernel and create the LSTM object again. This is because if you run a cell, then modify it and run it again, you’re applying multiple incompatible updates to the same LSTM object.

Restarting the kernel is the cleanest way to handle this.
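Here is a toy sketch of the failure mode, in plain Python with invented names (not actual Keras internals): every call to a shared layer records another inbound node, so re-running a model-building cell without restarting leaves the global layer with several nodes, and "the output shape of this layer" is no longer a single well-defined answer — which is exactly what the error message at the top of the thread complains about.

```python
# Toy model of a Keras layer: every call records an inbound node.
class ToyLayer:
    def __init__(self):
        self.inbound_nodes = []

    def __call__(self, x):
        self.inbound_nodes.append(x)
        return x

reshaper = ToyLayer()        # a global object, like reshaper / LSTM_cell in the notebook

# First run of the model-building cell:
reshaper("input from run 1")
# Re-running the same cell without restarting the kernel:
reshaper("input from run 2")

print(len(reshaper.inbound_nodes))   # 2: the layer now has multiple inbound nodes
```

Restarting the kernel recreates the object from scratch, which is why it clears the error.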