Need clarity on Generative AI

Hello Everyone,

Since this community has many great minds, I wanted to check here.

I have worked on computer vision and NLP projects and am new to Generative AI. While taking the Generative AI with LLMs course, I noticed that many of the techniques build on classic RNN ideas. So I'm confused: what is the difference between typical RNN projects and Generative AI, or is there any difference at all?

Your answers would help me a lot…

Thanks

In Generative AI, loosely speaking, the objective is to generate content. For example, generating a sequence of text (as a response to a question), or completing a sentence given its first few words. This is called generative modelling. In vision, it is generating an image. In all cases, generating means sampling from the learned distribution.
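To make "sampling from the learned distribution" concrete, here is a minimal toy sketch of autoregressive text generation (the vocabulary and probabilities are made up; a trained RNN or Transformer would compute the distribution instead):

```python
import numpy as np

# Toy autoregressive "language model": given the context so far, return a
# probability distribution over the next token. Hard-coded here; a trained
# model would condition on `context` to compute these probabilities.
vocab = ["the", "cat", "sat", "on", "mat", "<eos>"]

def next_token_probs(context):
    p = np.ones(len(vocab))
    p[vocab.index("<eos>")] = 0.5  # make stopping a bit less likely
    return p / p.sum()

rng = np.random.default_rng(0)
context = ["the"]
while context[-1] != "<eos>" and len(context) < 10:
    probs = next_token_probs(context)
    # "Generating" = drawing a sample from the model's distribution
    context.append(vocab[rng.choice(len(vocab), p=probs)])

print(" ".join(context))
```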

In general, ML models are grouped into two categories: (i) discriminative and (ii) generative. In generative models, the objective is to learn the distribution of the input data (text, images, video, …) and then use it to predict or generate new data points. However, learning the true distribution is often impossible (and not necessary for most use cases), so we use various techniques to approximate it. Therefore, the architecture does not matter: one could use RNNs or Transformers to achieve this. Hope this helps.
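Here is a minimal sketch of the two categories side by side, using scikit-learn (the dataset and model choices are just illustrative): logistic regression fits p(y | x) directly, while Gaussian naive Bayes fits p(x | y) and p(y), and its fitted distribution can be sampled to generate new points.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

X, y = make_blobs(n_samples=200, centers=2, random_state=0)

# Discriminative: models p(y | x) directly; can classify, cannot generate x.
disc = LogisticRegression().fit(X, y)

# Generative: models p(x | y) and p(y); classifies via Bayes' rule, but the
# fitted class-conditional Gaussians can also be *sampled* to generate new x.
gen = GaussianNB().fit(X, y)

rng = np.random.default_rng(0)
c = 0  # pick a class and draw a new point from its fitted Gaussian
x_new = rng.normal(gen.theta_[c], np.sqrt(gen.var_[c]))  # var_ needs sklearn >= 1.0
print("generated point for class", c, ":", x_new)
```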

Thanks so much, Arun.
Appreciate your help :slight_smile:

This is an interesting thought, @Arun_Prakash_A.

Can you provide a literature source or context where this is stated?

The reason why I ask: frankly, I am not sure this is true; in particular, a logical "XOR" (either discriminative or generative, but never both) seems difficult here:

  • E.g. take a variational autoencoder: you can use it for both discriminative tasks (e.g. anomaly detection) and generative tasks (sampling from the latent space)

  • in which category would an autoencoder or a PCA fall if you used them only for dimensionality reduction?

Best regards
Christian

Great question! I recommend taking a look at an excellent blog post by Sebastian Raschka, where he provides a really good overview of this topic along with links for the deeper details. Feel free to check it out, @Aravind.S:

Please let me know if this answers your question.

Best regards
Christian


I should have written that all (not to be pedantic) ML "classification models" fall into these categories. I am not sure where exactly I studied this. However, I found this page helpful: Generative model - Wikipedia, and these lecture slides: https://cedar.buffalo.edu/~srihari/CSE574/Discriminative-Generative.pdf
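For reference, the standard framing in both sources: a discriminative model learns the conditional $p(y \mid x)$ directly, while a generative model learns the joint

$$p(x, y) = p(x \mid y)\, p(y)$$

and then classifies via Bayes' rule:

$$p(y \mid x) = \frac{p(x \mid y)\, p(y)}{\sum_{y'} p(x \mid y')\, p(y')}.$$

Because the generative model has $p(x \mid y)$, it can also sample new data points; a discriminative model cannot.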

Why is it so? Could you elaborate on this?

Sure:

In the first example, the VAE as an ML model can be used for both discriminative and generative tasks; see the examples mentioned above.
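To illustrate the dual use, here is a minimal PyTorch sketch with a toy (untrained) VAE; the class and method names are made up for the example:

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    # Untrained toy VAE; in practice you would train it with the usual
    # reconstruction + KL objective.
    def __init__(self, d_in=16, d_z=2):
        super().__init__()
        self.enc = nn.Linear(d_in, 2 * d_z)  # outputs mean and log-variance
        self.dec = nn.Linear(d_z, d_in)
        self.d_z = d_z

    def encode(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        return mu, logvar

    def decode(self, z):
        return self.dec(z)

vae = TinyVAE()
x = torch.randn(1, 16)

# Discriminative-style use: score anomalies by reconstruction error.
mu, _ = vae.encode(x)
score = ((vae.decode(mu) - x) ** 2).mean()
print("anomaly score:", score.item())

# Generative use: sample z from the prior N(0, I) and decode it.
z = torch.randn(1, vae.d_z)
x_new = vae.decode(z)
print("generated sample shape:", x_new.shape)
```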

In the second example (dimensionality reduction with a PCA or an AE), it's actually neither a discriminative nor a generative task in my understanding, but rather a transformation task, or let's say compression.
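For instance, PCA just maps the data to a lower-dimensional representation and (approximately) back, without modelling p(y | x) or sampling anything (scikit-learn, purely illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)

pca = PCA(n_components=2).fit(X)
X_low = pca.transform(X)               # compress: 4 features -> 2
X_back = pca.inverse_transform(X_low)  # approximate reconstruction

print(X.shape, "->", X_low.shape)
```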

So neither case fits a strict "either generative or discriminative" split, I guess…

Anyhow: interesting thought, and thanks for your reply!

Best regards
Christian

I have almost completed the Generative AI with LLMs course, and here is my understanding.

One thing I understood is that RNN models perform well on a single task, like text-to-text translation or NER.
But LLMs are able to perform multiple tasks with a single model file, and they also handle much more advanced tasks that would require far more training with classical RNNs.
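For instance, here is a small sketch of what I mean, assuming the Hugging Face transformers library and the flan-t5-small checkpoint (both just illustrative choices): one model, different tasks, selected purely by the prompt.

```python
from transformers import pipeline

# One instruction-tuned model handles many tasks via the prompt alone.
llm = pipeline("text2text-generation", model="google/flan-t5-small")

print(llm("Translate English to German: The house is wonderful.")[0]["generated_text"])
print(llm("Is the following review positive or negative? I loved this movie!")[0]["generated_text"])
```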

Also, there are many fine-tuning techniques available in GenAI that can be applied to pre-trained LLMs. In that way we can produce socially responsible AI.
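For example, here is a rough sketch of one such technique, LoRA, using the Hugging Face peft library (the model name and hyperparameters are just illustrative; please correct me if I set this up wrong):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSeq2SeqLM

# Parameter-efficient fine-tuning (PEFT/LoRA): freeze the pre-trained LLM
# and train only small low-rank adapter matrices inside the attention layers.
base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")
config = LoraConfig(task_type="SEQ_2_SEQ_LM", r=8, lora_alpha=16,
                    target_modules=["q", "v"], lora_dropout=0.05)
model = get_peft_model(base, config)

model.print_trainable_parameters()  # only a small fraction is trainable
```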

Am I on the right track?
Please correct me if I'm wrong…

They are able to do this because they understand the language. There are RNN-based language models as well, for example, ELMo. However, those require a task-specific model for downstream tasks.

Training RNN-based language models at scale is prohibitive, as they can't be trained in parallel across time steps (loosely speaking).
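A rough sketch of the contrast in PyTorch (purely illustrative):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 128, 64)  # (batch, sequence length, features)

# RNN: the hidden state at step t depends on step t-1, so during training
# the 128 time steps must be processed one after another.
rnn = nn.RNN(64, 64, batch_first=True)
h = torch.zeros(1, 1, 64)
for t in range(x.size(1)):
    _, h = rnn(x[:, t:t+1, :], h)

# Self-attention: every position attends to every other position in a few
# matrix multiplies, so all 128 positions are computed in parallel.
attn = nn.MultiheadAttention(64, num_heads=4, batch_first=True)
out, _ = attn(x, x, x)
print(out.shape)
```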

This is true to some extent.
