C5_W1_A2_Dino names wrong output

Hi everyone,

I’m getting very close to the expected output in the final model, and I’m not sure what I’m doing wrong.
I’m doing the assignment locally in VS Code, so I hope that’s not what’s causing the ever-so-slight discrepancy.

I’m getting

j =  0 idx =  0
single_example = turiasaurus
single_example_chars ['t', 'u', 'r', 'i', 'a', 's', 'a', 'u', 'r', 'u', 's']
single_example_ix [20, 21, 18, 9, 1, 19, 1, 21, 18, 21, 19]
 X =  [None, 20, 21, 18, 9, 1, 19, 1, 21, 18, 21, 19] 
 Y =        [20, 21, 18, 9, 1, 19, 1, 21, 18, 21, 19, 0] 
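
For anyone comparing their own trace, the X and Y above follow directly from the index mapping. Here is a minimal sketch (it assumes the usual mapping where '\n' is 0 and 'a'–'z' are 1–26, as in the notebook):

```python
# Sketch of how one (X, Y) training pair is built from a single name.
# Assumes char_to_ix maps '\n' -> 0 and 'a'..'z' -> 1..26.
char_to_ix = {chr(ord('a') + i): i + 1 for i in range(26)}
char_to_ix['\n'] = 0

single_example = "turiasaurus"
single_example_ix = [char_to_ix[ch] for ch in single_example]

X = [None] + single_example_ix   # None stands for the zero input at t=0
Y = X[1:] + [char_to_ix['\n']]   # Y is X shifted left, ending with newline (0)

print(X)  # [None, 20, 21, 18, 9, 1, 19, 1, 21, 18, 21, 19]
print(Y)  # [20, 21, 18, 9, 1, 19, 1, 21, 18, 21, 19, 0]
```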


Iteration: 22000, Loss: 22.784517


where the expected is:

Iteration: 22000, Loss: 22.728886


Any help would be really appreciated. Thanks!

I uploaded the notebook to the Coursera Jupyter environment and tried it there, and it worked. I guess it was due to a mismatch between the environments, maybe the Python version, library versions, or something similar?

The course authors assume you’re going to use the provided Coursera notebook environment.

Hello all,

I am getting a wrong-expected-output error message, but after reviewing the various threads on the topic I still cannot find a solution. After many print statements, I can confirm that my single-example characters and indexes, and my X and Y variables, are correct. After using the hints to get the idx and to sample into the dataset “data_x”, I believe those are correct as well. So now I am drawing a blank on what the problem in my code could be. I have passed all test cases in the optimize function. Any help is appreciated, thank you!

My final output is:


Hi Tyler -

I got the same answer as you for a while. In my case the first name pulled was “Aachenosaurus”, which was not correct.


I forgot and posted this in the forum when I should have posted it here; same issue, I think. I hope we will see a helpful response, because it is a nagging issue. The controls that keep the training environment predictable and well tested seem impressive, so did something slip through with this one? Environment differences seem plausible, but I had a bigger difference in the loss, which doesn’t seem like it could be explained that way.

Below is the last output step and the traceback. I realize the difference in loss from the expected value is a clue, but I can’t find what has gone wrong. Optimize passed its tests and seems straightforward. My use of the input arguments and naming of the return values also seems to be what is needed.

Assistance requested

Iteration: 22000, Loss: 0.294620

Iavesaqr Esitoriasaurus Esitoriasaurus Iaeaurus Urus Andoravenator Saurur

AssertionError                            Traceback (most recent call last)
in
      1 parameters, last_name = model(data.split("\n"), ix_to_char, char_to_ix, 22001, verbose = True)
      2
----> 3 assert last_name == 'Trodonosaurus\n', "Wrong expected output"
      4 print("\033[92mAll tests passed!")

AssertionError: Wrong expected output

I had the same problem, though I had different answers than you. When I changed my modulo (%) statement for calculating idx, I got your answers. So my guess is we have some sort of indexing problem here.
Hope you’ve made some progress!

Corrected the modulo statement. Think carefully about how many dinosaur names you are starting with!
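
To make that hint concrete: the index should wrap around the number of names in the dataset, nothing else. A toy sketch (the variable names are my assumptions, not the notebook’s exact code):

```python
# Toy example: idx should cycle through the examples list as j grows.
examples = ["aachenosaurus", "aardonyx", "turiasaurus"]  # stand-in for the full list
num_examples = len(examples)  # the count the hint asks you to think about

for j in range(7):
    idx = j % num_examples    # wraps: 0, 1, 2, 0, 1, 2, 0
    print(j, idx, examples[idx])
```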

I got the same output: when idx = 0, the first name pulled is ‘aachenosaurus’, which is not the correct one.
Check whether you are indexing into the shuffled list or into the input list itself.
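
To illustrate the shuffled-list point, here is a sketch. It assumes the notebook shuffles the examples in place with a fixed NumPy seed before the training loop; the variable names are guesses:

```python
import numpy as np

# The raw file is alphabetical, so examples[0] would be 'aachenosaurus'...
examples = ["aachenosaurus", "aardonyx", "turiasaurus"]

np.random.seed(0)
np.random.shuffle(examples)   # shuffles IN PLACE: index into this list afterwards

# ...but after shuffling, examples[0] is generally no longer the alphabetical
# first name. Sampling 'aachenosaurus' at idx = 0 is a sign you are indexing
# the original, unshuffled list.
print(examples)
```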

Thank you so much! I didn’t realise there was a shuffled pile for some reason and I was going insane over this. It makes a lot of sense. I don’t know why I thought it was okay to go in alphabetical order…

I also had wrong values for the names, but they were different from the names reported in this thread. In addition I noticed two strange things: 1. my loss goes from ~23 at iteration 0 to ~3 at iteration 2000; 2. my first print statement should be “j: 0, idx: 0” with the X, Y arrays at idx 0, but mine was “j: 0, idx: 511”. Obviously something was wrong with my indexing, and then I noticed that in the modulo operation I was using num_iterations instead of j. That mistake makes the model iterate 22001 times over a single training example.
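
For anyone else hitting idx = 511: that value is exactly what this bug produces. A quick check (the dataset size of 1535 here is an assumption inferred from the arithmetic, since 22001 % 1535 == 511):

```python
num_examples = 1535       # assumed dataset size; it reproduces the idx = 511 above
num_iterations = 22001

# Wrong: using num_iterations in the modulo gives a CONSTANT index,
# so the model trains on the same single name every iteration.
wrong_idx = num_iterations % num_examples
print(wrong_idx)          # 511

# Right: using the loop counter j makes idx walk through the dataset.
right_idx = [j % num_examples for j in range(5)]
print(right_idx)          # [0, 1, 2, 3, 4]
```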

I hope someone can benefit from my mistake.