NLP C4_W1 UNQ_C10 Oddly slow runtime

I’ve successfully completed the Week 1 assignment for this course, and all the graded functions are marked as correct in the notebook (not so in the grader, but that’s a separate issue).
The final cell, which assesses mbr_decode(), runs correctly and returns All tests passed! almost immediately. Using print statements inside my function, I can see that this assessment translates around 3-4 sentences. However, when I try out mbr_decode() by running any of the suggested cells, such as

mbr_decode(your_sentence, 4, weighted_avg_overlap, jaccard_similarity, model, TEMPERATURE, vocab_file=VOCAB_FILE, vocab_dir=VOCAB_DIR)[0]

(having run the previous cell where your_sentence is defined), the kernel keeps running and never returns anything. This happens even when I try a one-word sentence.

Does anyone have any idea why this could be? I thought my code might be inefficient and doing unnecessary work, but then you would expect the assessment cell to be just as slow, so I doubt that’s the reason. I also tried restarting the runtime, but that only seemed to make it worse.

Lab ID for mods: awboxxrb

Hi nohstns,

The tests in the notebook do not test everything, so if you get error notifications from the grader, that would be the first thing to look at.

The issue with the grader was something completely different, which I have already solved, and not really what I was asking about. I’m mostly wondering why the example cells after question 10 take so much time to run (I never saw any of them output anything, to be honest) compared to the assessment cell.

Ok. I just tested the cell you posted.

The first time I ran it, it took around 2 minutes to get a result; the second time, 30 seconds. The next cell (‘Congratulations’) took some 10 seconds, and ‘Das ist die Aufgabe!’ appeared after 25 seconds.

None of that is too bad, although the unit test took even less time. The test uses very simple sentences, which you can find in the w1_unittest.py file.

From this it’s hard to tell what is causing the issue on your side. It may be something in your notebook, although that seems unlikely since you passed the grader. I could have a look at your notebook if you want; just send it to me as an attachment in a direct message.

Thank you for the quick reply!
So the time the model needs to translate a sentence depends on the sentence’s complexity? There is no need to look at the notebook specifically; it seems you were able to replicate the same behavior I had, and I am mostly interested in the theoretical aspect: why does the model seem to need much more time to translate some sentences and significantly less for others? At first I thought this could be because it is a seq2seq model, but that doesn’t explain why a one-word sentence takes more time than a longer one; or am I missing something here?

Hi nohstns,

A few things to note:

  • The first time I ran the first cell, it took some 2 minutes; the second time, 30 seconds. This means that something seemingly unrelated to the model code and the complexity of the sentence influences the time the model takes to arrive at a result. This could be due to hardware load, connectivity, or something else behind the scenes; it’s hard to tell.

  • I do get results from running the cells, whereas you do not. This again points in the direction of connectivity or hardware load on your side.

  • It is true that the sentences from w1_unittest.py take less time. If you have a look at w1_unittest.py (go to File → Open and find it in the directory), you can see that the test code uses different specifications from the code in the assignment, including a mock function that avoids recreating the full behavior of the model (this is described here, for example). So instead of requiring a full run through the model, certain values are checked directly, which allows for a fast correctness test. A minimal sketch of the idea follows after this list.
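
To make the mocking idea concrete, here is a minimal, self-contained sketch in plain Python. It is not the actual w1_unittest.py code; the helpers sample_translations, toy_mbr_decode, and jaccard_like are made up for illustration. The point is only that the slow sampling step can be patched with canned outputs, so the test exercises just the selection logic and finishes almost instantly.

```python
from unittest.mock import patch

def sample_translations(sentence, n_samples):
    """Stand-in for the slow step: in the assignment this would run the full
    model n_samples times with temperature sampling."""
    raise RuntimeError("too slow to call in a unit test")

def toy_mbr_decode(sentence, n_samples, similarity_fn):
    """Toy MBR-style decode: sample candidates, keep the one most similar to the rest."""
    candidates = sample_translations(sentence, n_samples)
    def consensus(cand):
        return sum(similarity_fn(cand, other) for other in candidates if other is not cand)
    return max(candidates, key=consensus)

def jaccard_like(a, b):
    """Rough token-overlap score between two whitespace-tokenized sentences."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb)

# The "test" patches the slow sampler with canned outputs, so only the selection
# logic runs and the cell finishes almost instantly. (Assumes this is run as the
# main script or in a notebook cell, hence the "__main__" patch target.)
fake_candidates = ["Das ist ein Test .", "Das ist der Test .", "Dies ist ein Test ."]
with patch("__main__.sample_translations", return_value=fake_candidates):
    best = toy_mbr_decode("This is a test.", 3, jaccard_like)
    print("mocked result:", best)  # -> "Das ist ein Test ."
```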

This makes total sense, thank you for the thorough explanation! That’s my bad for not having gone through the w1_unittest.py file; it would have clarified many things, such as the fact that the test does not recreate the full behavior of the model. I think the reason I wasn’t getting results was simply that I was too impatient and didn’t give it the time it needed; I doubt I waited for more than 1.5 minutes.