Hi,
It seems the “Embedding Matrix” video is accidentally cut off, as it does not go over all of the annotations on the annotated slides.
Edit: I also believe the “GloVe Word Vectors” video was edited down and is missing the explanation of b_i and b_j'. They just come out of nowhere…
Hello @Youze_Zheng,
At what time mark is the video “Embedding Matrix” cut off?
At what time mark in the video “GloVe Word Vectors” do you find the symbols come out of nowhere?
Please share the above so that we can start from the same page.
Hi Raymond,
Thanks for your reply! The video is cut off at the end, where some additional annotated notes are left unexplained.
As for the “GloVe Word Vectors” video, b_i and b_j' suddenly appear at the 5:51 mark.
Hi @Youze_Zheng,
I have watched the last 20 seconds of “Embedding Matrix” and I don’t see that; maybe I missed it. Please state the time mark, and tell us which annotated notes were not explained (perhaps via a screenshot with the notes circled?)
As for the two b's, they are the trainable bias terms. The subscripts i, j and the letters t, c above them should have been explained. It’s interesting that they just pop up without being introduced, but they are not strangers - we have seen bias terms in many neural network examples.
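For context, this is the objective being minimized at that point in the video, as best I recall the lecture’s notation (so treat the exact symbols as approximate):

$$\min \sum_{i=1}^{V} \sum_{j=1}^{V} f(X_{ij}) \left( \theta_i^{T} e_j + b_i + b_j' - \log X_{ij} \right)^2$$

Here V is the vocabulary size, X_{ij} counts how often word i appears in the context of word j, and f(X_{ij}) is the weighting function that, among other things, zeroes out the term when X_{ij} = 0. The biases b_i (for the target word) and b_j' (for the context word) are just extra scalar parameters learned by gradient descent alongside θ_i and e_j.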
Cheers,
Raymond
Hi Raymond,
Thanks again for your reply. For the “Embedding Matrix” video, the bottom “In Practice…” part is not explained, though it’s fairly trivial.
For the bias terms, yes, they do appear a lot in NN examples and also in classic recommender-system models. Since they weren’t explained, I thought it might be a video-editing error.
Thanks!