The cost is going down

While training a deep neural network with layer sizes [784, 44, 10] to classify the MNIST data set, I keep getting the cost as follows:

Kindly advise on how I can resolve this problem.

Hey @M_jAd1 ,

Please check these:

  1. Change Learning Rate (see the sketch after this list)
  2. Check Initialization
  3. Normalize Data
  4. Apply Regularization
  5. Experiment with Architecture
  6. Change Batch Size
  7. Try Different Optimization Algorithms
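
To make point 1 concrete, here is a minimal sketch of a learning-rate sweep in plain NumPy. Since I don't have your code, it uses a tiny synthetic problem and a toy logistic-regression loop (train below is a stand-in, not your implementation):

    import numpy as np

    rng = np.random.default_rng(0)

    # Tiny synthetic binary task standing in for the real training problem
    X = rng.normal(size=(200, 784))        # 200 examples, 784 "pixels"
    true_w = rng.normal(size=784) * 0.1
    y = (X @ true_w > 0).astype(float)     # labels from a hidden linear rule

    def train(X, y, lr, steps=200):
        """Logistic regression via batch gradient descent; returns final cost."""
        w = np.zeros(X.shape[1])
        b = 0.0
        m = len(y)
        for _ in range(steps):
            z = np.clip(X @ w + b, -30, 30)      # clip to keep exp() stable
            p = 1.0 / (1.0 + np.exp(-z))         # sigmoid
            cost = -np.mean(y * np.log(p + 1e-12)
                            + (1 - y) * np.log(1 - p + 1e-12))
            dw = X.T @ (p - y) / m               # cross-entropy gradients
            db = np.mean(p - y)
            w -= lr * dw
            b -= lr * db
        return cost

    # Sweep the learning rate on a log scale and compare the final costs
    for lr in [1.0, 0.1, 0.01, 0.001]:
        print(f"lr={lr}: final cost={train(X, y, lr):.4f}")

If the cost barely moves for every learning rate, the issue is more likely in the gradients or the data than in the hyperparameters.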

Feel free to ask if you need further help!

If this is for one of the course practice labs, maybe there is an error in your code.

Thank you for your reply.
The data is normalized.
How do I choose the best initialization technique? And what do you mean by optimization algorithms? How can I use them in this case?

You can create the weights randomly, for example by drawing from a standard normal distribution, and then multiplying by 0.01:

np.random.randn(*shape) * 0.01  # small Gaussian values break symmetry
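
For your [784, 44, 10] network, that could look like the sketch below. I'm assuming weight matrices shaped (n_out, n_in) with column-vector activations, the usual course convention. On choosing a technique: the fixed 0.01 scale is fine for small networks, while He initialization, which scales by sqrt(2 / n_in), is the common choice for ReLU layers:

    import numpy as np

    layer_sizes = [784, 44, 10]
    params = {}
    for l in range(1, len(layer_sizes)):
        # Small Gaussian weights break symmetry; biases can start at zero
        params[f"W{l}"] = np.random.randn(layer_sizes[l], layer_sizes[l - 1]) * 0.01
        # He initialization (common for ReLU layers) would use this scale instead:
        # params[f"W{l}"] = np.random.randn(layer_sizes[l], layer_sizes[l - 1]) * np.sqrt(2 / layer_sizes[l - 1])
        params[f"b{l}"] = np.zeros((layer_sizes[l], 1))

    print({k: v.shape for k, v in params.items()})
    # {'W1': (44, 784), 'b1': (44, 1), 'W2': (10, 44), 'b2': (10, 1)}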

For optimization algorithms, we have stochastic gradient descent (SGD), RMSProp, and Adam. You can implement SGD yourself, since it is straightforward; RMSProp and Adam are more complicated and are best used through libraries (TensorFlow or PyTorch).
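
As a sketch, the core SGD update is only a few lines; this is the generic rule, not any particular library's API. Adam and RMSProp keep running averages of (squared) gradients on top of this, which is why a library implementation is the safer choice:

    import numpy as np

    def sgd_step(params, grads, learning_rate=0.01):
        """One vanilla SGD update: parameter <- parameter - lr * gradient."""
        for key in params:
            params[key] -= learning_rate * grads[key]
        return params

    # "Stochastic" means grads come from a small random mini-batch rather
    # than the full training set, so each update is cheap but noisy.

    # With a library, the optimizer is a one-line choice, e.g. in Keras:
    #   model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    #                 loss="sparse_categorical_crossentropy")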
