Week 1 - Quiz 6 coinflip?

Hello there,

I really don’t understand this question:

True/False: If you are training an RNN model, and find that your weights and activations are all taking on the value of NaN (“Not a Number”) then you have an exploding gradient problem.

I’m not allowed to post the correct answer here, but asking for your opinion might well reveal it anyway: if someone implementing an RNN gets NaN values, can they really never have a bug or any other mistake in their program? Is it always supposed to be an exploding gradient?

I feel like this question commits the basic logic error of concluding B => A from A => B. Yes, Prof. Ng warned that exploding gradients can produce NaNs, but he never said there is literally no other way to get them…
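Just to make the point concrete, here is a minimal sketch (my own toy example, not course code) of how NaN can show up with no exploding gradient at all: a perfectly confident prediction of exactly 0 or 1 fed into a cross-entropy loss with no epsilon clipping gives 0 * log(0), which is NaN, and from there it poisons every later update.

```python
import numpy as np

# Toy illustration (assumed bug, not from the course): cross-entropy with no
# epsilon clipping. A prediction of exactly 0.0 or 1.0 yields 0 * log(0) = NaN,
# with no exploding gradient involved.
y_true = np.array([1.0, 0.0])
y_pred = np.array([1.0, 0.0])   # exact 0/1 outputs, e.g. from a saturated sigmoid

with np.errstate(divide="ignore", invalid="ignore"):
    loss = -np.mean(y_true * np.log(y_pred) +
                    (1.0 - y_true) * np.log(1.0 - y_pred))

print(loss)  # nan
```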

I’m posting here because I find that many of the quiz questions are dubious and require sheer luck to get right on the first try. You basically have to guess what the question writer had in mind. At least that’s my feeling.

Thanks for reading, and hopefully you can help me understand what I’m missing.

Cheers!


You posted this against Course 4, but RNNs are Course 5, right?

It is true that a lot of the quiz questions from Course 3 onward seem to have a “reading the tea leaves” quality to them. You’re right, of course, that there can be other ways to get NaN, in particular if your “back prop” logic is simply broken. But perhaps the person who formulated the question was assuming that you are using TF/Keras or some other package, meaning the code is correct, which eliminates the “incorrect code” possibility and leaves you with a model that is prone to exploding gradients given your training data.
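And if exploding gradients really are the cause, the usual remedy Prof. Ng describes is gradient clipping, which Keras exposes directly on the optimizer. A minimal sketch follows; the layer sizes and hyperparameters are arbitrary assumptions, not anything prescribed by the course.

```python
import tensorflow as tf

# Minimal sketch (assumed sizes): a simple character-level-style RNN whose
# optimizer clips the global gradient norm, the standard fix for exploding
# gradients.
model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(64, input_shape=(None, 27)),   # e.g. one-hot characters
    tf.keras.layers.Dense(27, activation="softmax"),
])

# clipnorm rescales gradients whose norm exceeds 1.0 before the update is applied.
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3, clipnorm=1.0)
model.compile(optimizer=optimizer, loss="categorical_crossentropy")
```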

The only solace is that you can take the quizzes multiple times. But also note that some of them don’t give you exactly the same questions every time. Sometimes the difference is as minor as reordering the choices on a multiple choice question, so that you can’t just mechanically copy the answer from a previous version.

So can the course creators simply fix or elaborate on the question, instead of having us make assumptions about the questions, or assumptions about the assumptions the question’s author was making? It really looks as if the course creators lack a sense of responsibility for the quality of their course. And indeed, the question and its “correct” answer rest on a logic error, and in many other quiz questions in the DL specialization it was exactly such small errors that made the difference between a wrong and a truly correct answer.