In the Week 3 practice lab, the simple model generalizes better than the complex model. What is the takeaway from this? What did we do wrong by using the complex model?

Your screenshot shows the decision boundaries for the simple model; can you also share those for the complex model? By comparing the two sets of boundaries, is there anything you can conclude?

Raymond

I suppose my question is: when creating a network, should you always use regularization, or should you go for the simple model for computational efficiency?

I’m just confused about how to decide how big a network to use and whether to use regularization or not.

In a word: Overfitting.

Hello @Amit_Misra1, simpler is better. I would go for the simple model.

You can only make a decision based on what you have: if a simple and a complex model perform equally well, then I would go for the simple one. I would not say with any confidence that the simple model will work better in the FUTURE than the complex one, but the bottom line is that you are completely free to keep track of both models’ performance over a period of time, while only one of the two is in production use.

Regularization helps with the overfitting problem, so you use it when there is a sign of overfitting.
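To make that concrete, here is a minimal sketch (not from the lab; the dataset and the use of closed-form ridge regression are my own illustration) of how an L2 penalty shrinks the learned weights, which is what damps overfitting:

```python
import numpy as np

# Hypothetical tiny dataset: many features but few samples invites overfitting.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 10))
y = X[:, 0] + 0.1 * rng.normal(size=20)  # only the first feature truly matters

def ridge_fit(X, y, lam):
    """Closed-form L2-regularized least squares: w = (X^T X + lam*I)^-1 X^T y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

w_unreg = ridge_fit(X, y, lam=0.0)   # no regularization
w_reg = ridge_fit(X, y, lam=10.0)    # L2 penalty shrinks the weights

# The regularized weight vector has a smaller norm, i.e. a "simpler" model.
print(np.linalg.norm(w_reg) < np.linalg.norm(w_unreg))  # True
```

The same idea carries over to neural networks (e.g. `kernel_regularizer` in Keras layers): the penalty discourages large weights, flattening overly wiggly decision boundaries.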

A bigger network helps with the underfitting problem, so you go bigger when there is a sign of underfitting.
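How do you spot those signs? The usual diagnostic is to compare training error against cross-validation error. As a rough sketch (my own toy example, using polynomial degree as a stand-in for model capacity): high error on both sets suggests underfitting, while low training error with much higher validation error suggests overfitting.

```python
import numpy as np

# Hypothetical noisy 1-D dataset, split into train and cross-validation sets.
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=40)
y = np.sin(2 * x) + 0.1 * rng.normal(size=40)
x_tr, y_tr = x[:30], y[:30]
x_cv, y_cv = x[30:], y[30:]

def poly_mse(degree):
    """Fit a polynomial of the given degree; return (train MSE, cv MSE)."""
    coeffs = np.polyfit(x_tr, y_tr, degree)
    mse = lambda xs, ys: np.mean((np.polyval(coeffs, xs) - ys) ** 2)
    return mse(x_tr, y_tr), mse(x_cv, y_cv)

# Capacity sweep: training error can only go down as the model grows,
# but the cv error reveals whether the extra capacity actually generalizes.
for degree in (1, 3, 15):
    tr, cv = poly_mse(degree)
    print(f"degree {degree:2d}: train MSE = {tr:.4f}, cv MSE = {cv:.4f}")
```

A low-degree fit with high error on both sets is the underfitting signature (go bigger); a high-degree fit whose training error is tiny while cv error blows up is the overfitting signature (regularize or simplify).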

@Amit_Misra1, as much as I want there to be some hard-and-fast rules to tell us what to choose, when, and for which dataset, there are no such rules, so learning ML is really about practical experience rather than just theory. The lectures can at most give you some directions, but they will never solve your problem for you. If you are confused about what to do, I suggest you start building some models and exercise every skill the lectures have taught.

Cheers,

Raymond

Thanks Raymond, I appreciate it.