Programming Assignment: Named Entity Recognition (NER)

I have some conceptual doubts:
1: How are embeddings made using tl.Embedding? In CBOW we studied that embeddings are learned from context and center words, but here we pass only the vocab indices and labels, so how are the embeddings learned? Also, the embeddings here are for full sentences.
2: With tl.LSTM we are not iterating, like in the Deep N-gram assignment, over several parallel inputs.
Is NER a one-to-one sequential model?
Can we do many-to-many here, i.e. learn a label for every word?

Hi kroshan_20,

In many-to-many learning, a sequence of inputs maps to a sequence of outputs. (One input producing multiple outputs would be one-to-many, e.g. image captioning.) NER is in fact a many-to-many sequential model: the input is the sequence of words in a sentence, and the output is a sequence of labels, one entity tag per word. The model classifies every word in the sentence, which is exactly the "label for every word" setup you describe.

As for tl.Embedding: unlike CBOW, it does not use a context/center-word objective. It is simply a trainable lookup table; each vocab index selects one row, and those rows are updated by backpropagation from the NER tagging loss, end to end with the rest of the model. The sentence is represented as the sequence of its word embeddings, not as a single sentence embedding.
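To make the shapes concrete, here is a minimal NumPy sketch (not Trax itself) of the many-to-many setup: a trainable embedding table indexed by token ids, followed by a per-token classifier, so the output has one row of tag scores per word. All sizes and token ids here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

vocab_size, d_model, n_tags = 100, 8, 5  # toy sizes, chosen for illustration
seq_len = 4                              # one sentence of 4 tokens

# tl.Embedding is conceptually a trainable lookup table like this one;
# its rows are updated by backprop from the NER loss, not by a CBOW-style objective.
embedding = rng.normal(size=(vocab_size, d_model))

token_ids = np.array([12, 7, 42, 3])     # vocab indices for one sentence
x = embedding[token_ids]                 # shape (seq_len, d_model): one vector per word

# Per-token classifier (stands in for the final dense + softmax layers):
W, b = rng.normal(size=(d_model, n_tags)), np.zeros(n_tags)
logits = x @ W + b                       # shape (seq_len, n_tags)

# Many-to-many: one row of tag scores per input token.
print(logits.shape)                      # (4, 5)
```

The key point is the output shape: one prediction per time step, not one prediction per sentence.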


In NER, tl.LSTM processes the words in a sentence one time step at a time. The output of the LSTM at each time step is a vector that represents the word in the context of the words that came before it.

You don't iterate over the LSTM yourself the way the Deep N-gram assignment fed several parallel inputs: tl.LSTM runs the loop over time steps internally, and parallelism comes from batching padded sentences together. The task is inherently sequential, because the meaning of a word depends on the words that come before it, so each step's hidden state feeds into the next.
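The recurrence can be sketched in NumPy. This is a simplified tanh RNN cell rather than a full LSTM with gates, and all sizes are made up, but it shows the loop over time steps that tl.LSTM runs for you: each hidden state is computed from the current word vector and the previous hidden state.

```python
import numpy as np

rng = np.random.default_rng(1)
d_model, seq_len = 8, 4

# A toy sequence of word vectors (in the real model these come from tl.Embedding).
xs = rng.normal(size=(seq_len, d_model))

# Simplified recurrent cell weights (tanh RNN, not a full gated LSTM):
Wx = rng.normal(size=(d_model, d_model))
Wh = rng.normal(size=(d_model, d_model))
b = np.zeros(d_model)

h = np.zeros(d_model)                    # initial hidden state
hiddens = []
for x_t in xs:                           # the loop tl.LSTM performs internally
    h = np.tanh(x_t @ Wx + h @ Wh + b)   # depends on the current word AND the previous state
    hiddens.append(h)

hiddens = np.stack(hiddens)              # shape (seq_len, d_model): one vector per time step
print(hiddens.shape)                     # (4, 8)
```

Because `h` is threaded through the loop, the vector at step t summarizes the words up to and including word t, which is what makes the task sequential.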
