My thoughts on Language Models

Judging by how good the predictive text on my mobile phone keyboard is, I suspect a lot of work must have already been done on predictive text. While watching the Week 1 lectures, I kept thinking of ways to add rules to an RNN.

Has anyone done this? Maybe it will come up in subsequent lectures.

I am thinking of structuring a corpus dictionary as a dataframe with eight columns for the parts of speech (noun, verb, adjective, etc.) and a ninth column for phrases and idioms, so that a predictive text algorithm can predict whole phrases instead of going word by word. I am also thinking about how to teach the algorithm the sentence structure of the language (something like “subject - verb - predicate”), because this is how humans learn languages.
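To make the idea more concrete, here is a rough sketch in pandas; the column names and example entries are just illustrative placeholders, not a real corpus:

```python
import pandas as pd

# Rough sketch of the proposed corpus "dictionary": one column per part of
# speech, plus a ninth column for phrases and idioms. Entries are placeholders.
corpus_dict = pd.DataFrame({
    "noun":         ["dog", "house", "teacher"],
    "pronoun":      ["she", "they", "it"],
    "verb":         ["run", "teach", "predict"],
    "adjective":    ["quick", "bright", "novel"],
    "adverb":       ["quickly", "often", "well"],
    "preposition":  ["on", "under", "between"],
    "conjunction":  ["and", "but", "because"],
    "interjection": ["wow", "oops", "hey"],
    "phrase":       ["piece of cake", "once in a while", "by and large"],
})

# A predictor could then fill a sentence template such as
# "subject - verb - predicate" by drawing from the relevant columns.
template = ["pronoun", "verb", "noun"]
sentence = " ".join(corpus_dict[col].sample(1).iloc[0] for col in template)
print(sentence)   # e.g. "they teach house" -- grammar rules would refine this
```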

If you are a researcher at a university reading this, or you are working on NLP, publishing a paper, etc., I would love to join you.

If you know of something similar elsewhere, or later in the course, I would love to know about it.

Thank you.

Hey @Chuck,

Nice out-of-the-box thinking. I am not sure whether something like this has been done already, but feel free to skim through some of the top Google Scholar results you find by searching for your idea. I am sure you will find at least something related to it, which might help you hone your intuition further and advance your understanding.

Although the course doesn’t discuss anything along your line of thought, it does delve much deeper into sequence models as you progress. I hope you enjoy those concepts :nerd_face:

Cheers,
Elemento

Wonderful!
I just got to the topic called “word embeddings”, so I had to come back to this post I made in August. Word embeddings were exactly what I had in mind. I did not know the idea had already been implemented under the name “word embedding”.

I feel good about having conceived this idea on my own. If I follow through on all my wild thoughts while taking this course, I might come out with something really novel. But I am not alone; these researchers are already doing the unthinkable.

Hey @Chuck,
It’s really nice to see you connecting your ideas with those discussed in the course. I would like to highlight one difference between what you proposed and the concept of “word embeddings”.

As per my understanding, you want different parts of speech (and phrases and idioms) to be represented in different ways, but word embeddings don’t make that distinction. They learn the same form of representation for every part of speech: whether a word is a verb, an adjective, or an adverb, a word embedding may encode it as, say, a 300-dimensional vector.
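Just to illustrate the point, here is a toy sketch with random numbers standing in for a trained embedding (a real model such as word2vec or GloVe would learn meaningful values):

```python
import numpy as np

# Toy vocabulary mixing parts of speech; the vectors are random placeholders.
vocab = ["run", "quick", "quickly", "dog"]   # verb, adjective, adverb, noun
embedding_dim = 300
rng = np.random.default_rng(0)
embedding_matrix = rng.normal(size=(len(vocab), embedding_dim))

# Every word, regardless of its part of speech, gets a vector of the same shape.
for i, word in enumerate(vocab):
    print(word, embedding_matrix[i].shape)   # -> (300,) for each word
```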

However, the following is where your idea aligns with word embeddings: both learn these representations with the help of some context. In your case, the context is clearly interpretable by humans, such as verbs, adjectives, adverbs, and other grammatical structures and rules. In the case of word embeddings, the context is more abstract, but it does take the nearby words into account, and since the rules of grammar within a language such as English are largely consistent, that abstract context ends up reflecting much of the grammatical structure anyway.
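For instance, a skip-gram-style model collects its context from a window of nearby words, roughly like this toy sketch (the window size of 2 is arbitrary):

```python
# Toy sketch of how "context" is collected for word embeddings:
# each word is paired with the words appearing within a fixed window around it.
sentence = "the quick brown fox jumps over the lazy dog".split()
window = 2

pairs = []
for i, center in enumerate(sentence):
    for j in range(max(0, i - window), min(len(sentence), i + window + 1)):
        if j != i:
            pairs.append((center, sentence[j]))

print(pairs[:6])
# e.g. [('the', 'quick'), ('the', 'brown'), ('quick', 'the'), ...]
# A model like word2vec learns vectors so that words sharing contexts end up close.
```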

I hope this helps.

Cheers,
Elemento
