C3_W1 - Understanding the Calculating Perplexity lab

I am trying to understand how the calculations presented in this lab (link) represent the perplexity.

Specifically, I don’t understand this statement: “By calculating the product of the predictions and the reshaped targets and summing across the last dimension, the total log probability of each observed element within the sequences can be computed:”

log_p = np.sum(predictions * reshaped_targets, axis=-1)

I would appreciate any clarification.

Hi @Omer_Tzuk,

Here is a beautiful explanation of perplexity by @jyadav202:

Feel free to ask if you still have any questions.

Basically, `predictions` contains the model’s log probabilities over the vocabulary for every position in the sequence, and `reshaped_targets` is the one-hot encoding of the true target words, so each target row has a 1 at the index of the correct word and 0 everywhere else. Multiplying the two element-wise zeroes out every log probability except the one assigned to the actual target word, and summing over the last axis (the vocabulary dimension) extracts that single value for each position. The result `log_p` is therefore the log probability the model gave to each observed word, which is what gets averaged (excluding padding) to compute the perplexity.
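To make this concrete, here is a small self-contained numpy sketch with made-up numbers (not the lab’s data) showing why the product-and-sum picks out exactly one log probability per position, and how perplexity follows from it:

```python
import numpy as np

# Hypothetical tiny example: one sentence of 3 tokens, vocabulary of 4 words.
# `predictions` holds the model's log probabilities for each position.
predictions = np.log(np.array([
    [0.10, 0.60, 0.20, 0.10],   # position 0
    [0.30, 0.10, 0.50, 0.10],   # position 1
    [0.25, 0.25, 0.25, 0.25],   # position 2
]))

# True word ids for each position, one-hot encoded so each row has a single 1
# at the index of the correct word (this plays the role of reshaped_targets).
targets = np.array([1, 2, 0])
reshaped_targets = np.eye(predictions.shape[-1])[targets]

# The element-wise product keeps only the log probability of the true word;
# summing over the vocabulary axis collapses each row to that single value.
log_p = np.sum(predictions * reshaped_targets, axis=-1)
print(log_p)  # [log(0.6), log(0.5), log(0.25)]

# Perplexity is the exponential of the negative mean log probability.
perplexity = np.exp(-np.mean(log_p))
print(perplexity)
```

The lab has an extra step that masks out padding tokens before averaging, but the core idea is the same: the one-hot multiplication is just an indexing trick to pull out the log probability of each observed word.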