Ambiguously worded questions in this course's quizzes

Hi,
Overall these courses have been great. However, in the two quizzes for this particular course, I've found many of the multiple-choice questions to be vague, ambiguous, and/or self-contradictory.

For example,
“An end-to-end approach doesn’t require that we hand-design useful features, it only requires a large enough model. True/False?”

The “correct” answer was “True”. However, it’s clearly not true to say that “it only requires a large enough model”, as it often needs a lot of data too. Indeed, one of the other questions in the same quiz asks exactly about E2E learning and data volume.

Since this course is evaluated entirely by ~30 MCQs, over 20% of which have vague or ambiguous wording, I’m finding that you can’t pass the tests until you’ve “guessed” the vague questions correctly.

I feel your pain in general with the quiz questions here in C3. I had to take multiple tries on quite a number of them. The good news is that you can retake them as many times as you like with no penalties, although the limit of 3 tries in any 8-hour period can be a pain in the neck in that scenario.

But I disagree about the particular example you've chosen. I think you're just "over-reading" the question. Of course that's not the only thing you need. Yes, you also need a lot of data, but that's basically always true, and the question is not asking about data. You also need to know Python, you need a powerful enough computer to run the training, you need the power grid to stay up, and yadda yadda... But the question is: do you need hand-designed features? No, you don't.

Hi - I agree that I've over-read that Q. However, I felt a bit 'conditioned' to, having been caught out by other questions on this course. For example, there was one that asked us to select the 'statement which is correct', but you could select more than one statement, and there were 2 that seemed correct. Do you select just one, as the question asks? Or the 2 that look right?

Anyway, I’ve passed it now, thanks for your reply!

Glad to hear that you have passed everything. Yes, maybe the wording could be clearer if they meant "pick all that apply", but note that (for future reference) you can tell whether it is a single answer or "pick all" by looking at the shape of the selection controls: square boxes (checkboxes) mean you can select several, while round ones (radio buttons) allow only one answer. Or you can just experimentally click a second one and see what happens. You can change it back before you hit "Submit", of course.

Another example is in C3W1's case study quiz. One question asks for the order of accuracy from worst to best.

But is it possible to compare the learning algorithm's performance with human-level performance? I believe the order varies across different scenarios.