Cost fluctuation in Assignment 2

While training the deep NN in Assignment 2 (last exercise), I noticed that after 1100 iterations the cost suddenly tripled (see attached image). I’ve passed the assignment, but I’m just curious why this is happening.

All else being equal (e.g., learning hyperparameters such as the learning rate), neural networks are highly nonlinear beasts, and one cannot count on monotonically decreasing (everywhere non-increasing) convergence to a solution. There will be much more on this in the second course.
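To see why a fixed learning rate alone can make the cost jump, here is a toy sketch (not the assignment's network): plain gradient descent on f(x) = x², where a small step size shrinks the cost every step, but a step size above 1 overshoots the minimum and the cost grows instead. The function name `descend` and the specific rates are illustrative choices, not anything from the course code.

```python
def descend(lr, x0=1.0, steps=5):
    """Run `steps` gradient descent updates on f(x) = x**2,
    returning the cost after each update."""
    x = x0
    costs = []
    for _ in range(steps):
        grad = 2 * x          # f'(x) = 2x
        x = x - lr * grad     # standard gradient descent update
        costs.append(x ** 2)  # cost after the update
    return costs

print(descend(0.1))  # cost shrinks toward 0
print(descend(1.1))  # cost grows: each step overshoots the minimum
```

A real deep network is far from this convex toy, but the same mechanism applies locally: in steep regions of a nonlinear loss surface, a step that was fine elsewhere can suddenly overshoot and send the cost up.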

That said, your convergence path is different from mine (and I am a little bit curious as to how your solution passed). Did you change the learning rate?

I used the default learning rate (0.0075).