C5W3 modelf

Hello community 🙂
For the exercise where we have to code modelf, I am getting an error message.

I don’t see where my error is. I have used the global layers without modifying them.

  • for getting the context, I used one_step_attention with a and s
  • for the post-activation LSTM cell, I used context as the input and [s, c] as the initial state
  • for out, I used output_layer with inputs=s
  • for model, I used Model with inputs=[X, s0, c0] and outputs=outputs
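
For reference, here is a minimal, self-contained sketch of that wiring. The shared layer names (repeator, concatenator, densor1, densor2, activator, dotor, post_activation_LSTM_cell, output_layer) and the toy sizes are assumed to mirror the notebook's globals; treat it as a sketch, not the graded solution:

```python
from tensorflow.keras.layers import (Bidirectional, Concatenate, Dense, Dot,
                                     Input, LSTM, RepeatVector, Softmax)
from tensorflow.keras.models import Model

# Toy sizes, assumed for illustration (the notebook supplies the real ones).
Tx, Ty = 30, 10        # input / output sequence lengths
n_a, n_s = 32, 64      # pre- and post-attention hidden state sizes
human_vocab_size, machine_vocab_size = 37, 11

# Shared ("global") layers, defined once and reused at every timestep.
repeator = RepeatVector(Tx)
concatenator = Concatenate(axis=-1)
densor1 = Dense(10, activation="tanh")
densor2 = Dense(1, activation="relu")
activator = Softmax(axis=1, name="attention_weights")  # softmax over the Tx axis
dotor = Dot(axes=1)
post_activation_LSTM_cell = LSTM(n_s, return_state=True)
output_layer = Dense(machine_vocab_size, activation="softmax")

def one_step_attention(a, s_prev):
    """One context vector from the Bi-LSTM outputs a and the previous state s_prev."""
    s_prev = repeator(s_prev)              # (m, n_s) -> (m, Tx, n_s)
    concat = concatenator([a, s_prev])     # (m, Tx, 2*n_a + n_s)
    e = densor1(concat)
    energies = densor2(e)                  # (m, Tx, 1)
    alphas = activator(energies)           # attention weights over the Tx timesteps
    context = dotor([alphas, a])           # (m, 1, 2*n_a)
    return context

def modelf(Tx, Ty, n_a, n_s, human_vocab_size, machine_vocab_size):
    X = Input(shape=(Tx, human_vocab_size))
    s0 = Input(shape=(n_s,), name="s0")
    c0 = Input(shape=(n_s,), name="c0")
    s, c = s0, c0
    outputs = []

    # Step 1: pre-attention Bi-LSTM, returning the full sequence of hidden states.
    a = Bidirectional(LSTM(n_a, return_sequences=True))(X)

    for t in range(Ty):
        context = one_step_attention(a, s)                                  # bullet 1
        s, _, c = post_activation_LSTM_cell(context, initial_state=[s, c])  # bullet 2
        out = output_layer(s)                                               # bullet 3
        outputs.append(out)

    return Model(inputs=[X, s0, c0], outputs=outputs)                       # bullet 4

model = modelf(Tx, Ty, n_a, n_s, human_vocab_size, machine_vocab_size)
```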

Thanks a lot for your help!

I suspect the issue is with Step 1: "Define your pre-attention Bi-LSTM." Double-check how you are coding that. I think you need to specify the input shape in Bidirectional.

I just checked my assignment; it is not necessary to pass the input shape to Bidirectional.
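
In the functional API, the wrapper infers the shape from the tensor it is called on. A minimal sketch (the sizes are assumed toy values):

```python
from tensorflow.keras.layers import Bidirectional, Input, LSTM

Tx, n_a, vocab_size = 30, 32, 37   # assumed toy sizes

X = Input(shape=(Tx, vocab_size))
# No input_shape argument needed: the wrapper picks it up from X.
a = Bidirectional(LSTM(n_a, return_sequences=True))(X)
print(a.shape)   # (None, 30, 64) -- 2 * n_a, from the forward/backward concat
```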

Have you set return_sequences to True in Step 1? If so, did you pass the previous test for one_step_attention?

Yes, I have set return_sequences to True.
And yes, I passed the test for one_step_attention.

I guess the problem is either in Step 1 of modelf(), or in one_step_attention(), where you compute the value for s_prev.

Or you may have added some unnecessary code somewhere.

Maybe don’t use the “inputs=” tag there.

Hi,

I tried removing the "inputs=" keyword argument, but I get the same error.
In Step 1 of modelf, I use Bidirectional(LSTM(units=Tx, return_sequences=True))(X).
I passed all the tests for one_step_attention; I used s_prev = repeator(s_prev).
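
As a sanity check, RepeatVector(Tx) just copies the previous hidden state once per input timestep so it can be concatenated with every a<t'>. A minimal sketch with assumed toy sizes:

```python
import numpy as np
from tensorflow.keras.layers import RepeatVector

Tx, n_s = 30, 64                               # assumed toy sizes
repeator = RepeatVector(Tx)

s_prev = np.zeros((1, n_s), dtype="float32")   # one hidden state, shape (m, n_s)
print(repeator(s_prev).shape)                  # (1, 30, 64): one copy per input timestep
```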

There you go. Tx is the length of the input sequence, and it is not the same as units. Think about it. Hint: what is the difference between the length of the input sequence and the hidden state size?
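
Concretely, a minimal before/after sketch (the sizes are assumed toy values; the notebook defines the real n_a):

```python
from tensorflow.keras.layers import Bidirectional, Input, LSTM

Tx, n_a, vocab_size = 30, 32, 37   # Tx = sequence length, n_a = hidden state size

X = Input(shape=(Tx, vocab_size))

# Wrong: units=Tx confuses the sequence length with the hidden size.
a_wrong = Bidirectional(LSTM(units=Tx, return_sequences=True))(X)
print(a_wrong.shape)   # (None, 30, 60): last dim is 2*Tx, so downstream shapes break

# Right: units is the hidden state size n_a.
a = Bidirectional(LSTM(units=n_a, return_sequences=True))(X)
print(a.shape)         # (None, 30, 64): last dim is 2*n_a, as the tests expect
```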

Thank you! Indeed, it was my mistake.