No explanation for wrong answer in C3 W1 quiz question 14

I was unable to correctly answer question 14 of the W1 quiz. For other quizzes, and for other questions within this quiz, we get an explanation after we submit, but this question didn’t come with one. Right now I can’t figure out why my answer is wrong, and I don’t know how to improve. Would it be possible to add an explanation to this question as well?

1 Like

Hey @Sara,

I’ve also found the last three questions quite ambiguous.
What was your answer to the 14th question?

1 Like

Hi,

The problem is that I don’t know if I can discuss the wrong answer here. I’m afraid it would go against the course conduct requirements, as it would imply the correct answers…

1 Like

You can ask a related question and I will do my best to answer it.

In fact, you have already asked a closely related question here. In the quiz, they say that the true data distribution has changed. That means you need to correct how you evaluate your algorithm. The target has shifted, so you need to aim at the new position :slight_smile:
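
To make the “aim at the new position” idea a bit more concrete, here is a minimal Python sketch of what re-aiming the evaluation could look like. The model object and dataset names are hypothetical placeholders, not anything from the course notebooks, and this is not meant to reveal the quiz answer.

```python
# Minimal sketch of "aiming at the new position": evaluate on examples drawn
# from the distribution you now care about. All names here are hypothetical.
import numpy as np

def error_rate(model, x, y):
    """Fraction of misclassified examples on a given evaluation set."""
    return float(np.mean(model.predict(x) != y))

# old_dev reflects the old distribution; new_dev reflects the changed one.
# Tracking both makes the shift visible, but decisions should be driven by
# the metric on the distribution you actually want to do well on.
# old_err = error_rate(model, old_dev_x, old_dev_y)
# new_err = error_rate(model, new_dev_x, new_dev_y)
```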

1 Like

I also had 2 failed attempts at answering this one. I’m still unclear on how “aiming at a new position” would ultimately improve the model if it wasn’t previously trained on the new data. Is there any explanation of why that would be? If this is getting too detailed, I can remove it. Feel free to DM me if you don’t want to make it easy for people who haven’t taken the quiz. Cheers

1 Like

Hi @billcrook and welcome to the DLS community!

Aiming at the new position helps us understand how well our learning algorithm actually performs on the data we now care about. That’s the first step toward improving it.

I hope that makes sense.

1 Like

By the way, explanations for this question should be available in the quiz by now.

1 Like

Right. How can we improve the model on new data if it was previously trained without that data? Wouldn’t the improvements be only marginal, considering the model never saw the new data during training?

1 Like

In the question, they tell you the amount of new data, and we know how much data we already have. If you consider the possible impact of each option, only one really makes sense. We can’t estimate the size of that impact, or even tell whether there will be any improvement at all, but only one option allows us to use the new data wisely.
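
To make the point about amounts concrete, here is a quick back-of-the-envelope calculation in Python. The 10,000,000 and 1,000 figures are the ones quoted later in this thread; the 98/1/1 split is only an illustrative assumption, not something taken from the quiz.

```python
# Back-of-the-envelope: how much weight 1,000 new images carry depending on
# where they end up. The dataset sizes come from the question as quoted later
# in this thread; the 98/1/1 split is only an illustrative assumption.
existing = 10_000_000
new = 1_000

train_size = int(existing * 0.98)   # ~9,800,000
dev_size = int(existing * 0.01)     # ~100,000
test_size = int(existing * 0.01)    # ~100,000

print(f"Share of training set if all new data goes there: {new / train_size:.4%}")
print(f"Share of dev set if all new data goes there:      {new / dev_size:.2%}")
```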

1 Like

Ok, I think I got you. Thanks

2 Likes

When you get the answer wrong, you see an explanation that seems to contradict the answer to question 5, no?

But I guess the number of new samples is the key thing to consider?

Edit: Section removed due to being a hint to the answer

The 5th question relates to data augmentation (the set of bird classes doesn’t change), while the 14th describes a situation where the true data distribution has changed (new classes of birds were introduced). They are different.
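
A tiny sketch of that distinction, with hypothetical class names (nothing here is from the quiz): augmentation produces new examples of classes you already have, whereas the situation in question 14 means a class appears that wasn’t in the label space before.

```python
import numpy as np

def augment(image, label):
    """Augmentation keeps the label: a flipped sparrow is still a sparrow."""
    return np.fliplr(image), label

old_classes = {"sparrow", "robin", "crow"}       # hypothetical class names
new_classes = old_classes | {"new_species"}      # the label space itself changed
```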

Please remove the hint to the quiz answer from your post, @Yousif.

I also got this explanation when my answer was marked as incorrect.
But I don’t quite see the point here. The idea of data augmentation in this context is not to add data only to the training set; it is to add the augmented data to the full set (in such a way that the new distribution is correctly represented) and then split the full set into train/dev/test. Thanks for clarifying!

1 Like

Hey @alex38,

We evaluate the model on the dev and test sets, and we want their examples to be as close to the true data distribution as possible. Why would one augment the data in the dev or test set?
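
For what it’s worth, the usual convention looks something like the following tf.keras sketch: augmentation is applied only to training batches, and the dev/test data are left untouched so they stay representative of the true distribution. The dataset variables are hypothetical placeholders.

```python
import tensorflow as tf

# Random transformations applied on the fly to training images only.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
])

# train_ds, dev_ds, test_ds are assumed tf.data.Dataset objects of (image, label).
# train_ds = train_ds.map(lambda x, y: (augment(x, training=True), y))
# dev_ds and test_ds are used as-is for evaluation.
```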

I am a bit late to the party, but I am just taking this course now in 2022. I read this complete thread and it mostly makes sense to me, but not completely. I’ll keep this general so as not to give quiz answers away.
Given:

  • Dataset of 10,000,000 bird images.
  • New dataset of 1,000 images of a new species of bird.

Would it be best to augment the new data, add it to the training set, and retrain the model, then add a shuffled 2% to each of the dev and test sets and iterate on the dev set?

Or is it better to augment the new data and add it only to the dev and test sets, since the new bird species would make up a more balanced share of those sets, given that they are much smaller than the training set?

2 Likes

Even if you do not change the training data, there are ways of tuning the performance of the model being trained, e.g. via regularization techniques.

In this particular example, this might mean tuning the regularization hyperparameters so that the NN performs well on both old and new images of the two different species of birds.

Say the previous model has been fitted only to the first species of birds. If a second species now appears, the previous model might suddenly turn out to be overfitted: it may be overly sensitive to characteristics specific to the first species, even though there are many commonalities between the two kinds of birds. Unfortunately, the more general aspects common to both kinds of birds may then have been de-emphasized.
Adding stronger regularization and re-training could then lead to better performance on the altered dev and test sets.
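
As a rough illustration of that idea (the architecture and the L2 strength below are just illustrative assumptions, not values from the course), one could rebuild the classifier with a stronger penalty and re-train:

```python
import tensorflow as tf

def build_classifier(num_classes, l2_strength=1e-3):
    """Small image classifier with L2 regularization on the weight matrices."""
    reg = tf.keras.regularizers.l2(l2_strength)
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64, 64, 3)),
        tf.keras.layers.Conv2D(32, 3, activation="relu", kernel_regularizer=reg),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax",
                              kernel_regularizer=reg),
    ])

# Re-train with a stronger l2_strength and compare the error on the updated
# dev set against the previous model's error.
```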

Hope that makes sense, it’s just my personal interpretation.

1 Like