So I’m coming up against a deadline (without really understanding what the penalty would be if I miss it), and I’m a bit stuck: my unit tests for estimate_probability in C2_W3_Assignment are passing, but the provided estimate_probabilities function is not producing what the notebook says it should. Here’s the output I’m getting:
{'like': 0.09090909090909091,
'dog': 0.1,
'a': 0.09090909090909091,
'this': 0.1,
'i': 0.1,
'is': 0.1,
'cat': 0.2727272727272727,
'': 0.09090909090909091,
'': 0.1111111111111111}
(Note: the forum seems to have stripped the angle-bracket tokens from my last two keys; they should presumably be '<e>' and '<unk>'.) The notebook says I should be getting:
{'cat': 0.2727272727272727,
'i': 0.09090909090909091,
'this': 0.09090909090909091,
'a': 0.09090909090909091,
'is': 0.09090909090909091,
'like': 0.09090909090909091,
'dog': 0.09090909090909091,
'<e>': 0.09090909090909091,
'<unk>': 0.09090909090909091}
which is close, but different. I’m not aware of having edited the provided estimate_probabilities, which in my notebook is currently:
# moderator edit: code removed
My perplexity calculation later on runs, but it also gives slightly different answers. So presumably something is slightly askew somewhere: maybe in my estimate_probability function (something not being caught by the unit tests), or I’ve accidentally edited some intermediate code. One thing I notice is that my values imply different denominators (0.0909… = 1/11, 0.1 = 1/10, 0.1111… = 1/9), whereas the expected output is all elevenths, so perhaps the vocabulary size isn’t staying constant across words.
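For reference, here’s a minimal self-contained sketch of what I understand the add-k smoothed estimate should be doing. The names and signatures are my own paraphrase, not the notebook’s actual code, so treat the details as assumptions:

```python
def estimate_probability(word, previous_n_gram, n_gram_counts,
                         n_plus1_gram_counts, vocabulary_size, k=1.0):
    """Add-k smoothed estimate of P(word | previous_n_gram)."""
    previous_n_gram = tuple(previous_n_gram)
    # Count of the context n-gram; 0 if unseen
    previous_count = n_gram_counts.get(previous_n_gram, 0)
    # Count of the context followed by `word`; 0 if unseen
    n_plus1_count = n_plus1_gram_counts.get(previous_n_gram + (word,), 0)
    return (n_plus1_count + k) / (previous_count + k * vocabulary_size)


def estimate_probabilities(previous_n_gram, n_gram_counts,
                           n_plus1_gram_counts, vocabulary, k=1.0):
    """Probability of each word in the extended vocabulary after the context.

    Appending the end and unknown tokens here seems consistent with the
    expected output including '<e>' and '<unk>'.
    """
    vocabulary = list(vocabulary) + ["<e>", "<unk>"]
    vocabulary_size = len(vocabulary)  # fixed for every word in the loop
    return {word: estimate_probability(word, previous_n_gram, n_gram_counts,
                                       n_plus1_gram_counts, vocabulary_size,
                                       k=k)
            for word in vocabulary}
```

If I’m reading it right, the key point is that the same vocabulary_size appears in the denominator for every word; if that were somehow recomputed per word, the denominators would drift, which looks roughly like the symptom in my output above.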
Has anyone encountered the same thing, or can anyone just check for me that my estimate_probabilities is correct?
Many thanks in advance
Best, Sam