Although the procedure seems correct, the grader tells me it is wrong. Is there a way to reset and delete everything I did, so I can check whether I accidentally changed something in the base code?
I appreciate your prompt response.
Note that this is NLP Course 1, not NLP Course 2, so I changed the title category for you.
If you still have the same problem after starting again with a fresh notebook using Tom’s instructions, then please let us know.
There are a number of common mistakes here, but I have not previously seen the logprior value being wrong, so it may be worth looking at your code. We can’t do that on a public thread, but there are private ways to do so. Please check your DMs (personal messages) for a message from me about how to proceed.
Note that the most probable cause here is that your code is actually wrong, so a good first step would be to read through the instructions carefully. Pay particular attention to the purpose and meaning of the logprior value. The result should be 0.0 here, because the number of positive and negative input tweets is the same, meaning there should be no natural positive or negative bias in the training data. You need to figure out why that value is negative in your code.
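As a quick illustration (using a hypothetical helper and made-up counts, not the assignment’s actual variable names), the logprior is just the log of the ratio of positive to negative training examples, so a balanced training set gives exactly 0.0:

```python
import math

def compute_logprior(num_positive, num_negative):
    # logprior = log(D_pos / D_neg) = log(D_pos) - log(D_neg)
    return math.log(num_positive) - math.log(num_negative)

# Balanced training set: no class bias, so the logprior is 0.0.
print(compute_logprior(4000, 4000))  # → 0.0

# More negative than positive tweets would make it negative,
# which is one way to end up with a negative logprior.
print(compute_logprior(3000, 4000) < 0)  # → True
```

If your training set really is balanced and you still get a negative value, check whether you are counting documents (tweets) rather than words, and whether the subtraction is in the right order.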