A huge dip in the accuracy history for image segmentation?

Dear all,

I saw a very weird dip in my accuracy curve around epochs 19/20:

[Image: training accuracy curve with a sharp dip around epochs 19–20]

The homework scored 100/100, so the code should be right. I know that batch norm can make the curve non-monotonic, but isn't this a bit too crazy?

Shawn

No, it isn’t too crazy. It totally depends on the data set and the characteristics of each batch.


Yes, the solution surfaces here are pretty crazy, so there are no guarantees of monotonic convergence: you can go off a cliff at any point. In the instance you show, it looks like a strong case for “early stopping” at 18 epochs. :nerd_face:
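If you want to experiment with that, here is a rough sketch of early stopping via a Keras callback. This is just an illustration, assuming the usual tf.keras setup in this assignment; names like `unet` and `train_dataset` are placeholders, not necessarily what the notebook actually uses.

```python
import tensorflow as tf

# Sketch only: "unet" and "train_dataset" are placeholder names for the
# compiled model and training dataset built in the notebook.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="accuracy",          # use "val_accuracy" if a dev set is provided
    patience=3,                  # tolerate a few noisy epochs before stopping
    restore_best_weights=True,   # roll back to the weights of the best epoch
)

history = unet.fit(
    train_dataset,
    epochs=40,
    callbacks=[early_stop],
)
```

With `restore_best_weights=True`, even if training runs a few epochs past the dip, the model you end up with is the one from the best monitored epoch.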


Thank you both, and @paulinpaloalto for mentioning early stopping! I hadn't thought of that, and it's a really nice learning for me. Just to make sure: is this curve the accuracy for the training set rather than the dev set? And if I plotted the same curve for the dev set (and saw the same dip), would that be a good indication for early stopping? I'm wondering whether the model might already be overfitting the training set before the dip…

You can expect entirely different curves for the training set vs. the dev set, since the variance comes from the particular sequence of examples seen in each training batch.

Right. @TMosh, do you know whether the code in the assignment plots the train or dev set accuracy?

Is this for Course 4, Week 3, Assignment 2? You didn’t specify this in the thread title.

If that's from section 4.2 of the notebook, it's the result of training the model, so it's the training set accuracy. There doesn't appear to be a dev set in this assignment.
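If you wanted a dev curve anyway, one option is to carve a small validation split out of the data and pass it to model.fit. Here is a rough sketch; the names `processed_image_ds` and `unet` are placeholders for whatever dataset and model objects the notebook actually builds, and it assumes the model was compiled with metrics=["accuracy"].

```python
import matplotlib.pyplot as plt

# Sketch only: "processed_image_ds" and "unet" are placeholder names.
VAL_SIZE = 100
val_dataset = processed_image_ds.take(VAL_SIZE).batch(32)
train_dataset = processed_image_ds.skip(VAL_SIZE).batch(32)

history = unet.fit(train_dataset, epochs=40, validation_data=val_dataset)

# Both curves come from the same History object.
plt.plot(history.history["accuracy"], label="train")
plt.plot(history.history["val_accuracy"], label="dev")
plt.xlabel("epoch")
plt.ylabel("accuracy")
plt.legend()
plt.show()
```

Plotting both curves together would also show whether the pre-dip epochs are overfitting, which was the earlier question in this thread.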