Difference between loss function and cost function?

Why do we use both a loss function and a cost function in a model? Is it better to use both, or can you prefer one of them depending on the needs of your model?

Hi there,

Often, the terms loss function and cost function are used interchangeably, even though they are not 100% synonymous.

More specifically:

  • The cost function is more associated with optimization. Sometimes, "cost" can also refer to evaluating the optimization (resp. the model).
  • The loss usually refers to training a model, typically an ML model. Here, of course, an optimization problem is also solved with the loss as its criterion (= the cost, see the previous point: we want to minimize the cost by fitting the model parameters during training). See also this thread: How does a Deep Neural Network work? - #4 by Christian_Simonis

Hope that helps, @Ahmad_Fareed_Khan!

Best regards


I agree that there is some ambiguity in the use of the two terms, and in many contexts they are essentially equivalent. But I think Prof Ng has a definite pattern to how he differentiates the two: in his usage, a “loss function” is a vector-valued function giving the loss per sample, and a “cost function” is a scalar-valued function that is the average of the loss values across all the samples in the current batch.
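
To make that distinction concrete, here is a minimal sketch in plain Python using binary cross-entropy as the per-sample loss. The function names `loss` and `cost` are my own choices for illustration, not from any particular course notebook:

```python
import math

def loss(y_hat, y, eps=1e-12):
    """Loss function: returns one value per sample (a vector of
    binary cross-entropy losses), matching the 'loss' usage above."""
    out = []
    for p, t in zip(y_hat, y):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        out.append(-(t * math.log(p) + (1 - t) * math.log(1 - p)))
    return out

def cost(y_hat, y):
    """Cost function: the scalar average of the per-sample losses
    over the current batch, matching the 'cost' usage above."""
    losses = loss(y_hat, y)
    return sum(losses) / len(losses)

# Toy batch: labels and predicted probabilities
y = [1, 0, 1, 1]
y_hat = [0.9, 0.2, 0.8, 0.6]

print(loss(y_hat, y))  # a list with one loss value per sample
print(cost(y_hat, y))  # a single scalar for the whole batch
```

The optimizer only ever needs the scalar `cost` to take a gradient step, but keeping the per-sample `loss` values around is handy for inspecting which examples the model gets wrong.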