Validation vs Training Graphs

The following graphs were obtained for the network I implemented in the Week exercise:
[Attached images: training/validation accuracy and loss plots]

Validation accuracy stays in roughly the same range (0.92 vs. 0.94).
Validation loss seems to increase with every epoch.

My questions:

  1. Why is the validation loss increasing from epoch 1 itself? Does this mean the minimally trained network, with just one epoch, is already good enough?
  2. Is there any defined linear relationship between accuracy and loss?
  3. Given the two graphs, what would be the ideal point to stop training, and why?

Hello @theLifter
Your graphs show that the training error became roughly constant after some epochs while the validation loss kept rising. That is a classic sign of overfitting: the model has started memorizing the training examples, so it keeps doing well on them while performing worse on the unseen data in the validation set. The number of training epochs is one of the hyperparameters to be tuned during the training process.

As for accuracy and loss: accuracy measures how often the model's hard predictions are correct, while loss is the penalty for bad predictions, computed from the predicted probabilities. There is no linear relationship between the two; accuracy can stay flat while loss rises if the model becomes increasingly confident in its wrong answers, which matches the pattern in your plots.

The ideal point to stop training is when the model still generalizes well, i.e., it performs well not only on the training data but also on the unseen data in the validation and test sets. In practice that means stopping near the epoch where the validation loss is lowest, before it starts to climb.
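To make the accuracy-vs-loss point concrete, here is a small NumPy sketch (the labels and probabilities are toy numbers of my own, not from your exercise). It shows two classifiers with identical accuracy but very different cross-entropy loss, which is exactly how validation loss can rise while validation accuracy holds steady:

```python
import numpy as np

def cross_entropy(y_true, p_pred):
    # Mean binary cross-entropy; clip probabilities to avoid log(0).
    p_pred = np.clip(p_pred, 1e-7, 1 - 1e-7)
    return -np.mean(y_true * np.log(p_pred) + (1 - y_true) * np.log(1 - p_pred))

y = np.array([1, 1, 1, 0])  # ground-truth labels

# Both classifiers get the same 3 of 4 examples right at a 0.5 threshold,
# but the second one is far more confident about its one mistake.
mild = np.array([0.95, 0.90, 0.40, 0.10])
overconfident = np.array([0.95, 0.90, 0.01, 0.10])

for p in (mild, overconfident):
    accuracy = np.mean((p > 0.5) == y)
    print(f"accuracy={accuracy:.2f}  loss={cross_entropy(y, p):.2f}")
# accuracy=0.75  loss=0.29
# accuracy=0.75  loss=1.22
```

Same accuracy, roughly 4x the loss, purely because of overconfidence on the misclassified example.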
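And for the stopping point: if you are using Keras, the usual way to automate it is an `EarlyStopping` callback that monitors validation loss. This is only a minimal sketch, with a toy model and random data standing in for your network:

```python
import numpy as np
import tensorflow as tf

# Toy stand-ins for the exercise's dataset (hypothetical, for illustration only).
x_train, y_train = np.random.rand(800, 20), np.random.randint(0, 2, 800)
x_val, y_val = np.random.rand(200, 20), np.random.randint(0, 2, 200)

# Toy stand-in for your network.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Stop once validation loss stops improving, and keep the best weights.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",           # watch validation loss, not training loss
    patience=3,                   # tolerate 3 stagnant epochs before stopping
    restore_best_weights=True,    # roll back to the best epoch's weights
)

history = model.fit(
    x_train, y_train,
    validation_data=(x_val, y_val),
    epochs=50,                    # upper bound; early stopping usually ends sooner
    callbacks=[early_stop],
    verbose=0,
)
print(f"Stopped after {len(history.history['val_loss'])} epochs")
```

With `restore_best_weights=True` you effectively get the model from the epoch with the lowest validation loss, even though training ran a few epochs past it.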