C5 Week 4 Exercise 7: AssertionError: Wrong values in outd

Hi everyone,

I am stuck on the final assignment of the specialization. Exercises 1-6 are passing, but Exercise 7 is failing as follows:

# UNIT TEST
Decoder_test(Decoder, create_look_ahead_mask, create_padding_mask)

---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-63-dd4b65c051b4> in <module>
      1 # UNIT TEST
----> 2 Decoder_test(Decoder, create_look_ahead_mask, create_padding_mask)

~/work/W4A1/public_tests.py in Decoder_test(target, create_look_ahead_mask, create_padding_mask)
    221     assert tf.is_tensor(outd), "Wrong type for outd. It must be a dict"
    222     assert np.allclose(tf.shape(outd), tf.shape(encoderq_output)), f"Wrong shape. We expected { tf.shape(encoderq_output)}"
--> 223     assert np.allclose(outd[1, 1], [-0.2715261, -0.5606001, -0.861783, 1.69390933]), "Wrong values in outd"
    224 
    225     keys = list(att_weights.keys())

AssertionError: Wrong values in outd

Any suggestions? I have read a lot of forum posts and double-checked Exercises 1-6 as well (I found some small issues and corrected them), but I don't see how to debug this…

Thanks!
Gary

One more thing: I tried printing the output x and got this, which looks very close to the expected values, but not exact:

x= tf.Tensor(
[[[-1.3821416   1.4325931   0.10909806 -0.15954943]
  [-0.36006218 -0.5374867  -0.81191266  1.7094616 ]
  [-0.3749532  -0.8820826  -0.44182518  1.6988611 ]]

 [[-1.158294    1.5947365  -0.11840409 -0.3180383 ]
  [-0.27348393 -0.5665973  -0.8549699   1.6950512 ]
  [-0.31467515 -0.88652277 -0.49338835  1.6945863 ]]], shape=(2, 3, 4), dtype=float32)

I added the lines of code below at the relevant places:

print(f'x after word embeddings:{x}')
print(f'x after scale embeddings:{x}')
print(f'x after positional encodings:{x}')
print(f'x after dropout:{x}')
print(f'final x:{x}')

Here is the output:

x after word embeddings:[[[-0.02472718 -0.03895496  0.01122528 -0.03709315]
  [ 0.00860578  0.00740315 -0.0409526   0.00755553]
  [ 0.01270969  0.04936013 -0.04764051 -0.04633161]]

 [[ 0.00860578  0.00740315 -0.0409526   0.00755553]
  [ 0.01270969  0.04936013 -0.04764051 -0.04633161]
  [ 0.0144151   0.03082472  0.03976548  0.01368902]]]
x after scale embeddings:[[[-0.04945436 -0.07790992  0.02245057 -0.0741863 ]
  [ 0.01721156  0.01480629 -0.0819052   0.01511107]
  [ 0.02541938  0.09872026 -0.09528103 -0.09266322]]

 [[ 0.01721156  0.01480629 -0.0819052   0.01511107]
  [ 0.02541938  0.09872026 -0.09528103 -0.09266322]
  [ 0.02883019  0.06164945  0.07953096  0.02737803]]]
x after positional encodings:[[[-0.04945436  0.92209005  0.02245057  0.9258137 ]
  [ 0.8586825   0.55510855 -0.07190537  1.015061  ]
  [ 0.93471676 -0.3174266  -0.07528237  0.9071368 ]]

 [[ 0.01721156  1.0148063  -0.0819052   1.0151111 ]
  [ 0.8668903   0.6390225  -0.08528119  0.90728676]
  [ 0.9381276  -0.3544974   0.09952962  1.027178  ]]]
x after dropout:[[[-0.04945436  0.92209005  0.02245057  0.9258137 ]
  [ 0.8586825   0.55510855 -0.07190537  1.015061  ]
  [ 0.93471676 -0.3174266  -0.07528237  0.9071368 ]]

 [[ 0.01721156  1.0148063  -0.0819052   1.0151111 ]
  [ 0.8668903   0.6390225  -0.08528119  0.90728676]
  [ 0.9381276  -0.3544974   0.09952962  1.027178  ]]]
final x:[[[-1.3821421   1.4325927   0.10909846 -0.159549  ]
  [-0.35824597 -0.53128856 -0.8189088   1.7084434 ]
  [-0.37440884 -0.882805   -0.4414815   1.6986953 ]]

 [[-1.158295    1.5947357  -0.11840305 -0.31803778]
  [-0.27152616 -0.5606005  -0.86178285  1.6939094 ]
  [-0.31307012 -0.8865212  -0.49486318  1.6944547 ]]]

You can try this and see where your values stop matching mine; that is the place to look. But make sure you add these statements in the correct places, and do not add them inside the loop over the decoder layers.
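For reference, the preprocessing steps before the decoder-layer loop usually run: embed, scale by sqrt(d_model), add positional encodings, then dropout. Below is a framework-free NumPy sketch of that arithmetic with the print statements in their intended spots. The function and argument names here are placeholders for illustration, not the assignment's exact code:

```python
import numpy as np

def decoder_preprocess(token_ids, embedding_table, pos_encoding,
                       dropout_rate=0.1, training=False, seed=0):
    """Hypothetical NumPy-only sketch of the decoder input pipeline:
    embed -> scale by sqrt(d_model) -> add positional encoding -> dropout."""
    d_model = embedding_table.shape[1]

    x = embedding_table[token_ids]            # word embeddings (lookup)
    print(f'x after word embeddings:{x}')

    x *= np.sqrt(d_model)                     # scale embeddings
    print(f'x after scale embeddings:{x}')

    seq_len = token_ids.shape[-1]
    x = x + pos_encoding[:seq_len, :]         # add positional encodings
    print(f'x after positional encodings:{x}')

    if training:                              # dropout -- note plain `=`, not `+=`
        rng = np.random.default_rng(seed)
        keep = rng.random(x.shape) >= dropout_rate
        x = np.where(keep, x / (1.0 - dropout_rate), 0.0)
    print(f'x after dropout:{x}')
    return x
```

Note that with d_model = 4 the scale factor is sqrt(4) = 2, which is exactly the doubling you can see between the "word embeddings" and "scale embeddings" outputs above.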

Thanks for the test values!

I inserted the print statements, and the first mismatch occurred at “x after dropout”. I had a look at my code for the dropout layer and compared it with the version from an earlier function. I made a tiny change and just got “All tests passed”: I replaced “+=” with “=”.

The starter code looks like this:

        # apply a dropout layer to x
        # use `training=training`
        x += None

So maybe “+=” is incorrect there?

Gary

PS: Just got this message: "Congratulations on completing Deep Learning from DeepLearning.AI. " Thanks again!

Whether += is correct depends on how you initialize the x value.
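To spell out why `+=` breaks here: the unit test calls the decoder with `training=False`, and at inference time a dropout layer passes its input through unchanged. So `x += dropout(x)` silently doubles every activation, and the layer normalization inside the decoder layers then largely washes out a pure scale factor, which would explain final values that are close but not exact. A tiny illustration in plain Python, with `dropout` standing in for the real Keras layer:

```python
import random

def dropout(x, rate=0.1, training=False):
    """Stand-in for a dropout layer: identity at inference time,
    random masking scaled by 1/(1-rate) at training time."""
    if not training:
        return list(x)
    return [0.0 if random.random() < rate else v / (1.0 - rate) for v in x]

x = [0.5, -1.0, 2.0]

# Correct: `=` replaces x with the dropout output (unchanged at inference).
correct = dropout(x, training=False)

# Buggy: a `+=`-style update adds the dropout output to the existing x,
# doubling every activation when dropout acts as the identity.
buggy = [a + b for a, b in zip(x, dropout(x, training=False))]

print(correct)  # [0.5, -1.0, 2.0]
print(buggy)    # [1.0, -2.0, 4.0]
```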

My starter code is this:

# apply a dropout layer to x
# use `training=training`
x = None

I don’t know how you ended up with a different version. Anyway, congratulations on finishing the course.

I rebooted the lab and checked the starter code. Yes, you are right: it is x = None. I must have accidentally carried the += over from an earlier line of the starter code.

Hmm. In the future, if you need a fresh copy of an assignment, you can do this.