The generative model predicts the class labels (positive, negative, neutral) from the input prompt, but I need the probability of each class, similar to what a BERT model gives. If I fine-tune a BERT model, it is easy to get the probability of each class: a softmax over the logits of the classification head on the last layer returns a score per class. However, BERT's performance is not good in my scenario, while a generative model performs well. The problem is that I don't know how to get per-class probabilities from these generative models, which output a probability distribution over the vocabulary, not over the classes. Basically, for classification, how can I provide a probability with each class prediction, so users can set their own classification or confidence thresholds? Any ideas?
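For reference, the BERT setup described above looks roughly like this. A minimal sketch: the model is randomly initialized (so it runs without downloading a checkpoint) and the toy token ids are made up; in practice you would load your fine-tuned weights with `from_pretrained(...)` and tokenize real text:

```python
import torch
from transformers import BertConfig, BertForSequenceClassification

# Randomly initialized 3-class model, just to show the API shape.
# In practice: BertForSequenceClassification.from_pretrained("your-finetuned-checkpoint")
config = BertConfig(num_labels=3)
model = BertForSequenceClassification(config)
model.eval()

input_ids = torch.tensor([[101, 2023, 2003, 2307, 102]])  # toy token ids for illustration
with torch.no_grad():
    logits = model(input_ids=input_ids).logits  # shape: (1, 3), one logit per class
probs = torch.softmax(logits, dim=-1)           # class probabilities, rows sum to 1
```

The classification head already produces one logit per class, which is exactly why thresholding is straightforward in the BERT case.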
I have been thinking about this as well. If we think about what the model is doing, it generates tokens with associated probabilities. If we can make each class label a single word/token, we should be able to extract the probability of that token from the vocabulary distribution. However, this still needs to be generalized to multi-token classes. See for example the discussion here: python - How to output the list of probabilities on each token via model.generate? - Stack Overflow
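One way to sketch the single-token idea (the prompt, label strings, and choice of GPT-2 are illustrative assumptions): take the next-token logits at the end of the prompt, keep only the logits of the class tokens, and softmax over just those to get a distribution over the classes:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Review: The service was great!\nSentiment:"
# Leading space so each label maps to a single GPT-2 token; if a label
# tokenizes to multiple tokens, this sketch only scores its first token.
classes = [" positive", " negative", " neutral"]
class_ids = [tokenizer.encode(c)[0] for c in classes]

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # next-token logits over the vocabulary

# Restrict to the class tokens and renormalize: a distribution over the classes.
class_probs = torch.softmax(logits[class_ids], dim=-1)
```

Renormalizing over just the class tokens gives calibrated-looking scores that sum to 1, which is what you need for user-set confidence thresholds; for multi-token labels you would instead sum the log-probabilities of each label's full token sequence and softmax those.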