1- I divided my data set into 60% training data, 20% test data, and 20% cross-validation data. I trained the model, passing the cross-validation set to validation_data as appropriate, and all of the accuracy values came out as 1. When I tested the model afterwards, it predicted all of the CV, test, and training data 100% correctly without any problems. But then, when I also trained the model on the cross-validation and test sets (my goal was to train on this data so the model would generalize better), I started getting errors in my subsequent prediction tests. Why do I start getting prediction errors once I train the model on the CV and test data as well?
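For concreteness, here is a minimal, self-contained sketch of the setup described above, assuming scikit-learn and Keras; the synthetic data and tiny network are placeholders, not the actual model from the question:

```python
# Minimal sketch of a 60/20/20 split, assuming scikit-learn and Keras;
# X, y, and the network are illustrative placeholders.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

X = np.random.rand(1000, 4)                    # placeholder features
y = (X.sum(axis=1) > 2.0).astype(np.float32)   # placeholder binary labels

# 60% for training, then split the remaining 40% evenly into CV and test.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, train_size=0.6, random_state=1)
X_cv, X_test, y_cv, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=1)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Fit on the training set only; validation_data makes Keras report CV
# metrics each epoch without ever training on the CV set.
model.fit(X_train, y_train, validation_data=(X_cv, y_cv), epochs=20, verbose=0)
```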
2- In addition, I would like to ask about the logic behind J_cv and J_test. I understand it to a certain extent, but I'm not sure I have it right.
3- I know that in linear regression the model's capacity is scaled up to fit the data, but what exactly provides this in neural networks: the number of layers or the number of neurons? How can I understand this?
See, the training data, which is 60% of your data, is used to train your model. Then the cross-validation data is used to calculate J_cv (the cross-validation error), and J_cv is used to identify bias and variance in the models you have trained on the training data.
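Continuing the sketch above, computing J_train and J_cv might look like this, assuming the squared-error cost used in the course's regression examples; the diagnosis rule in the comments is the usual heuristic, not something the code checks for you:

```python
# Sketch of computing J_train and J_cv as squared-error costs, reusing
# model, X_train, y_train, X_cv, and y_cv from the split above.
import numpy as np

def cost(model, X, y):
    # Squared-error cost: J = (1 / (2m)) * sum((f(x) - y)^2)
    preds = model.predict(X, verbose=0).reshape(-1)
    return np.mean((preds - y) ** 2) / 2.0

J_train = cost(model, X_train, y_train)
J_cv = cost(model, X_cv, y_cv)

# Rough diagnosis: a high J_train suggests high bias (underfitting);
# a low J_train with a much higher J_cv suggests high variance (overfitting).
print(f"J_train = {J_train:.4f}, J_cv = {J_cv:.4f}")
```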
With this you can fix the bias/variance in your model (there are several methods for that covered in the video) and then use the test data to report its performance. It is important that you don't mix up this order, because otherwise your model will become familiar with this data and you will not be able to know its true performance.
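That order could be expressed as a sketch like the following, where the candidate layer sizes are purely illustrative: J_cv picks the model, and the test set is touched exactly once at the very end:

```python
# Model selection by J_cv, reusing cost() and the splits from above.
best_model, best_jcv = None, float("inf")
for hidden_units in [2, 8, 32]:   # illustrative candidate architectures
    candidate = tf.keras.Sequential([
        tf.keras.layers.Dense(hidden_units, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    candidate.compile(optimizer="adam", loss="binary_crossentropy")
    candidate.fit(X_train, y_train, epochs=20, verbose=0)
    j_cv = cost(candidate, X_cv, y_cv)
    if j_cv < best_jcv:
        best_model, best_jcv = candidate, j_cv

# The test set is used exactly once, on the final model, so J_test stays
# an unbiased estimate of generalization error.
J_test = cost(best_model, X_test, y_test)
```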
The logic of J_cv is that it is the cross-validation error: it is used to detect bias and variance in the models you have made and helps you fix the model. Whereas J_test is used once you have the best-performing model, to report its performance.
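For reference, assuming the squared-error form from the course's regression examples, the two errors are computed the same way, just over different sets:

$$ J_{cv}(\vec{w},b) = \frac{1}{2m_{cv}} \sum_{i=1}^{m_{cv}} \left( f_{\vec{w},b}\big(\vec{x}_{cv}^{(i)}\big) - y_{cv}^{(i)} \right)^2, \qquad J_{test}(\vec{w},b) = \frac{1}{2m_{test}} \sum_{i=1}^{m_{test}} \left( f_{\vec{w},b}\big(\vec{x}_{test}^{(i)}\big) - y_{test}^{(i)} \right)^2 $$

Neither sum includes a regularization term, since these quantities measure performance rather than guide training.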