Training loss and validation loss

Dear Community, I am working on a landmark classification problem, training a convolutional neural network from scratch: about 8 CNN layers in the backbone and a 4-layer MLP head (3 hidden layers and 1 output layer). During training, my training loss drops until it gets stuck at 3.9, and my validation loss starts at 3.9 and stays there. I am using the SGD optimizer, but it seems I am trapped in a local minimum. Any ideas to improve this network?

Please check your loss function and the number of neurons in the output layer.
Also check the Deep Learning Specialization lectures on optimizers and the related assignment.
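On the optimizer point: plain SGD often stalls on loss plateaus like this. A common first fix is adding momentum and a schedule that lowers the learning rate when validation loss stops improving. Here is a minimal sketch; the model and all hyperparameters are placeholder assumptions, not the poster's actual setup.

```python
import torch
import torch.nn as nn

# Hypothetical tiny model standing in for the 8-conv + 4-MLP network.
model = nn.Sequential(nn.Flatten(), nn.Linear(16, 50))

# SGD with momentum instead of plain SGD.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# Cut the learning rate by 10x if validation loss has not improved
# for 3 epochs in a row.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=3)

# Inside the training loop, after computing val_loss each epoch:
val_loss = 3.9  # placeholder value
scheduler.step(val_loss)
```

If the loss is stuck exactly at ln(num_classes) from the first epoch, that usually points to the network predicting a uniform distribution, which is worth checking before tuning the optimizer.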

OK. I used nn.CrossEntropyLoss, and I know it applies a softmax before computing the loss, so I made my network not compute a softmax in the output layer of the MLP; it only does the linear sum. Also, this is not from any of the Specialization courses; it is a self project, sir.
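For reference, the setup described above looks like this. The layer sizes and class count here are hypothetical; the point is that the head ends with a plain `nn.Linear` and no `Softmax`, because `nn.CrossEntropyLoss` applies log-softmax internally.

```python
import torch
import torch.nn as nn

num_classes = 50  # hypothetical class count

# MLP head ending in a plain Linear layer that outputs raw logits.
head = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, num_classes),  # no Softmax here
)

criterion = nn.CrossEntropyLoss()

features = torch.randn(4, 512)                     # fake backbone output
logits = head(features)                            # shape (4, num_classes)
targets = torch.randint(0, num_classes, (4,))      # fake labels
loss = criterion(logits, targets)                  # softmax happens inside
```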

No worries. Tips on model optimization and approaches to data augmentation apply to this problem and are independent of PyTorch or TensorFlow.

You are correct that the model should output just logits for CrossEntropyLoss.

Unfortunately, it’s impossible for me to repeat courses 2 and 3 of the Deep Learning Specialization here. Please see the lectures for pointers on dealing with bias and variance issues.

So what do you suggest, sir,
that could help me with my problem?

I already gave you my suggestion: