C2_W3_Assignment - UNQ_C10 - calculate_perplexity()

Hi - I’m having trouble finding the right way to work through C2_W3_Assignment - UNQ_C10 - calculate_perplexity(), any guidance is VERY much appreciated.

I’m not able to produce the correct perplexity output for the calculate_perplexity assignment:

{Moderator’s Edit: Solution Code Removed}

[Screenshots of the failing unit-test output attached]

All the previous unit tests have passed, and after reading through other comments/posts I have also checked my estimate_probability function, yet I’m not able to find anything that gets me past this one. I’ve tried slicing the n_gram every possible way I can think of, but I’m not sure what to try next.

As such, any information or assistance is greatly appreciated.

@gent.spah | @Elemento - I saw you provided some guidance for this problem previously, if you’re able, any thoughts here?

Hey @Javid_Ferguson,
Welcome, and we are glad that you could become a part of our community :partying_face:

There are 2 issues in your implementation:

  • The first issue lies in the for loop you have defined. Note that both n and N-1 need to be included. Does your range take care of that?
  • The second issue lies in how you are computing product_pi. I suggest you read the comment once again; that will help you find what you are missing. You can also check the formulation in the markdown again to figure out your error.
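To make both hints concrete without giving away the assignment code, here is a tiny sketch with made-up numbers (the probs list and the values of n and N below are illustrative, not the notebook’s data):

```python
# Hint 1: Python's range(n, N) visits t = n, n+1, ..., N-1, so a plain
# range already covers both endpoints n and N-1.
n, N = 2, 5
assert list(range(n, N)) == [2, 3, 4]

# Hint 2: perplexity is the N-th root of a *product* of inverse
# probabilities, so the accumulator must be multiplied into, not
# overwritten. probs is a made-up list of per-word model probabilities.
probs = [0.2, 0.5, 0.1, 0.25]
product_pi = 1.0
for p in probs:
    product_pi *= 1.0 / p          # multiply into the running product

perplexity = product_pi ** (1.0 / len(probs))
print(perplexity)                  # N-th root of the product, ~4.472
```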

Let us know if this resolves your issue.

P.S. - Since this is your first time in the community, I am assuming you are unaware of this: posting solution code publicly is strictly against the community guidelines. Instead, please post the error stack.



Hi @Elemento -

Thank you so much for the helpful information!

As well, apologies for posting the code - I will make sure not to make this mistake again.


First time here. I am also having trouble with UNQ_C10. All previous cells pass, and I take care of the range from n to N-1 (both inclusive), but I get a wildly different answer for perplexity. Is there a list comprehension in the probability calculation inside the for loop? I am confused. On the surface, it seems as simple as getting the word at t (sentence[t]) and the n-gram before it (sentence[t-n:t]), then calling estimate_probability() with those parameters (and the other parameters that are just passed on). Clearly, I am missing something here. Any help is appreciated.


Figured it out. Staring at the probability formula and comment by Javid helped. It was a silly oversight. Thanks

Hey @Bharath_Mukundakrish,
Welcome, and we are glad that you could become a part of our community :partying_face: Thanks for letting us know that your issue has been resolved.



It’s my first time posting, so I’m not sure if I should tag someone specifically, but I need help with my calculate_perplexity function. I passed all earlier tests in the notebook, but the problem seems to come from my estimate_probability function. After the first output (t = 0), all subsequent outputs are identical, leading to totally incorrect answers for my perplexity calculations. If anyone has time to look over my functions, I would really appreciate the help (let me know and I can give you my lab ID). Thank you in advance!

@Elemento Hello again - maybe I’m posting this incorrectly, but I haven’t received any feedback or help with my problem yet, and I would really like to finish this course. This is the last thing holding me up, and I can’t pass the homework assignment without getting it fixed. If there is a different way I’m supposed to ask for help, please let me know. Thank you!

Forgive me if this does not help. Previously, when I had a similar problem, I found that one of the earlier functions that seemingly worked fine had an input parameter I did not pass on to the function I was calling, because it had a default value.

@Bharath_Mukundakrish Thank you for offering help! I think I’m assigning the input parameters (i.e., vocabulary_size, k, etc.) appropriately within the function - I had that problem in an earlier assignment as well.

Hi @Veronica_Scerra

One possible mistake, if the outcome is always the same, is that product_pi is not being updated - make sure you update the product_pi variable on each word iteration through the sentence (in other words, product_pi *= ... or product_pi = product_pi * ..., but not product_pi = ...).

Also, make sure you are using estimate_probability(...) and not estimate_probabilities(...) inside the calculate_perplexity(...) function.
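For what it’s worth, a running product and the mathematically equivalent log-sum form give the same perplexity. This toy check (the probs list below is made up, not the notebook’s data) shows both, since mixing the two forms is a common source of wildly wrong answers:

```python
import math

# Made-up per-word probabilities, purely to compare the two forms.
probs = [0.2, 0.5, 0.1, 0.25]
N = len(probs)

# Product form: the accumulator is updated (*=) on every iteration.
product_pi = 1.0
for p in probs:
    product_pi *= 1.0 / p
pp_product = product_pi ** (1.0 / N)

# Log form: sum log(1/p) and exponentiate the average. This is
# numerically safer for long sentences, but mixing the two forms
# (e.g. summing logs and then taking an N-th root) is incorrect.
pp_log = math.exp(sum(math.log(1.0 / p) for p in probs) / N)

assert abs(pp_product - pp_log) < 1e-12
```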

Let me know if that helps.



Hello, thank you @arvyzukai for engaging with my question and helping out! I was using prod += math.log(1/p) instead of prod *= 1/p, and changing that solved my problem! I really appreciate the help.
