Hello there,

I really don’t understand this question:

True/False: If you are training an RNN model, and find that your weights and activations are all taking on the value of NaN (“Not a Number”) then you have an exploding gradient problem.

I don’t have the right to say what the correct answer is, but on the other hand I’m obligated to ask for your opinion, which might give it away: if anyone who implements an RNN gets `NaN` values, can they really never have a bug or mistake in their program? Is it always necessarily an exploding gradient?

I feel like this question makes a basic logic error: from `A => B` it concludes `B => A`. Yes, Prof. Ng warned that an exploding gradient can give you `NaN`s, but he never said there’s literally no other way to get them…
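To make the point concrete, here is a minimal sketch (my own illustration, not from the course) of getting `NaN` from an ordinary implementation bug, with no exploding gradient anywhere: an unclipped cross-entropy loss where a one-hot target meets a predicted probability of exactly zero, so the code evaluates `0 * log(0) = 0 * (-inf) = NaN`.

```python
import numpy as np

# Hypothetical buggy cross-entropy loss: the probabilities are never
# clipped away from 0, so a one-hot target multiplied by log(0)
# evaluates to 0 * (-inf), which is NaN in IEEE floating point.
def buggy_cross_entropy(y_true, y_pred):
    with np.errstate(divide="ignore", invalid="ignore"):
        return -np.sum(y_true * np.log(y_pred))

y_true = np.array([1.0, 0.0])  # one-hot label
y_pred = np.array([1.0, 0.0])  # model outputs an exact 0 for the other class

print(buggy_cross_entropy(y_true, y_pred))  # nan -- no exploding gradient involved
```

So a single forward pass on the very first iteration can already produce `NaN`, before any gradient has been computed at all.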

I’m posting here because I find that lots of questions in the quizzes are dubious and require sheer luck to get right on the first try. You just have to be in the evaluator’s head. At least that’s my feeling.

Thanks for reading, and hopefully you can help me understand what I’m missing.

Cheers!