Has there been an update to a lecture?

In the “Bird Recognition in the City of Peacetopia” quiz,
Question 6 has the following option.
"The 1,000,000 … (similar to the New York City/Detroit housing prices example from the lecture)."
I can’t find any mention of such an example in any of the lectures in course 3 or course 2, or any reference to Detroit / New York City.
Is this an error or am I missing something simple here?

1 Like

The quizzes have been updated quite recently, but there are no updates to the lectures in course 3 that I’m aware of. Maybe it’s a mistaken “forward reference”. Was the quiz the week 1 or week 2 quiz?

Week 1 quiz - it may well be a forward reference. I’ll check that out tomorrow (it is 02:00 here).
As an extra point though, if the quizzes have been updated, could you ask the team to review some of the answers / responses to other questions, please? Different iterations of the same questions seem to come up with conflicting answers / recommendations.
I can’t give you specifics because the differences pop up when you redo the quiz, but the effect is especially noticeable on questions 5 & 6 relating to citizen images, although it is not limited to those questions.
There also appear to be logical inconsistencies between consecutive questions (as iterated) where, for example, a correct answer to question 5 would be 100% at variance with a correct answer to question 6.
I do appreciate that questions in a quiz ‘stand alone’, but when the correct answer to one question contradicts the correct answer to the next, it gives us mere students headaches.

Yes, there are lots of bug reports filed against the new quizzes.

Thanks for the response Paul.
So what now?
Do we submit ‘wrong’ responses knowingly to get the marks?
Do we submit ‘correct’ responses knowing they will be marked down?
Do we pause the course awaiting a further update?
PS I could not find any reference to Detroit / New York in the second week’s lectures, although I just skimmed through the text.

It would be bad strategy to plan anything based on the expectation of fixes on any particular schedule. Their behavior in that regard is not predictable: sometimes they respond quickly, but mostly they don’t.

If you need the grade to pass, then you just have to keep trying until you can figure out what pleases the grader. But note that you don’t always get the same version of the quiz on a retake, so a given question may require a wrong answer in one case, while a very similar-sounding question requires the correct one. Frustrating, I know. Sorry, but that’s the best I can say at this point …

Thanks Paul
It’s not a case of passing the grade (I already have a passing score) but of understanding what really is the best option for testing my knowledge against the training received.
I like to do these quizzes multiple times as the questions vary, and I do enjoy them (perhaps perversely).
The issue arose because I was getting scores that seemed almost random and it took me a little while to understand that the quiz was badly structured.
I guess the second week’s quiz will be in a similar state?

I have not had time to spend on checking out the new quizzes yet, so I don’t have any direct evidence on whether Week 2 will be in any better or different shape than Week 1.

Week 2 quiz has one odd question for me … I have no idea what Approach A/B referred to below are. They were never mentioned prior to question 15 in the quiz.

Looks like the quiz is generated and perhaps it’s not always generating a cogent set of questions from the various possibilities.


Question 15
Approach A (in the question above) tends to be more promising than approach B if you have a ________ (fill in the blank).

1 point

Large training set
Large bias problem
Multi-task learning problem.
Problem with a high Bayes error.

1 Like

Some of the wordings on the quizzes are quite ambiguous too.

E.g.: Because this is a multi-task learning problem, when an image is not fully labeled (for example: (0, ?, ?, 1, 0)), we can use it if we ignore those entries when calculating the loss function. True/False? - what does it mean to ignore “entries”? Does it mean “missing labels”?

And: An end-to-end approach doesn’t require that we hand-design useful features, it only requires a large enough model. True/False? - does large enough model mean there is sufficient data?

Please correct me if I’m wrong, thanks!

For the first one, I think it does mean ignore the ‘missing labels’ in a given training example.

For the second one, I think ‘large enough model’ refers to the size of the NN. Not the size of training data.


I guess it is difficult setting up these quizzes.
The ‘team’ need to make us think not just about the topic as taught in Andrew’s lectures but also about the ambiguities that will hit us in real life when we try to set up a project.
However there is a line between creating questions that challenge students and making mistakes in the language or logic of the question / answers

> Because this is a multi-task learning problem, when an image is not fully labeled (for example: (0, ?, ?, 1, 0)), we can use it if we ignore those entries when calculating the loss function. True/False? - what does it mean to ignore “entries”? Does it mean “missing labels”?

Andrew explains this in one of his videos.
It simply means that when calculating the loss function you should only average over those label entries that are present (i.e., not a question mark ‘?’).
My problem with that is along the lines of: if the labels are binary 1/0, how do you specify the third option (the ‘?’)?
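To make the masking idea concrete, here is a minimal NumPy sketch. The use of `NaN` to mark a ‘?’ entry is my assumption (one common convention for storing a missing label alongside binary 0/1 labels); the lectures don’t prescribe a storage format, and the function name `masked_logistic_loss` is just illustrative:

```python
import numpy as np

def masked_logistic_loss(y_true, y_pred):
    """Multi-task logistic loss that skips missing labels.

    y_true: (m, c) array of 0/1 labels, with np.nan marking a '?'
            (missing) entry.
    y_pred: (m, c) array of predicted probabilities in (0, 1).
    """
    mask = ~np.isnan(y_true)          # True wherever a label is present
    y = y_true[mask]
    p = y_pred[mask]
    # Standard cross-entropy, averaged only over the labeled entries
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Example: the second and third labels of the first image are unknown,
# as in the quiz's (0, ?, ?, 1, 0) example
y_true = np.array([[0.0, np.nan, np.nan, 1.0, 0.0],
                   [1.0, 0.0,    1.0,    0.0, 1.0]])
y_pred = np.full_like(y_true, 0.5)
print(masked_logistic_loss(y_true, y_pred))  # log(2) ≈ 0.6931
```

With every prediction at 0.5, each labeled entry contributes -log(0.5), so the average is log 2 regardless of how many entries are masked out; the ‘?’ entries never enter the sum.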

Thanks Nidhi and Ian!

I do understand Andrew’s point, it’s just that when the question says “ignore entries”, I wasn’t sure what we are “ignoring” - just the missing label? The entire data point? There are some trick questions in these quizzes, after all.

Also, for end-to-end learning, doesn’t it require sufficient data too? I’m pretty sure this requirement was stressed repeatedly by Andrew and in other questions in the quiz.

Maybe I missed the part where the “?” marks are called missing entries.

Yes, you are correct, it’s mentioned several times that it requires sufficient data.

Also, it seems that when they said ‘large model’, they meant ‘large data’ (judging by the way the answer was graded in the quiz). Huh?

I hope that is a bug in the quiz, or I’m confused.