Cost function vs loss function vs error?

I am a little bit confused about the difference between these terms.

Can we say that these three are synonyms?

Hi @ABTJ!

In the context of deep learning models, cost function, loss function, and error are related concepts, but they have distinct meanings and roles. Here’s a breakdown:

  1. Loss Function

    • Definition: The loss function calculates the error for a single instance (or example) in the dataset.
    • Purpose: It measures how far off the model’s prediction is from the actual target for an individual data point.
    • Example: For regression, a common loss is the squared error for a single data point (the Mean Squared Error averages this over the dataset).


    • Example: For classification, a common loss is Cross-Entropy Loss.

    • Scope: Applies to one data point or observation.
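As a minimal Python sketch of "loss applies to one data point", here are the two losses mentioned above computed for a single example (the input values are made up for illustration):

```python
import math

def squared_error_loss(y, y_hat):
    """Squared-error loss for ONE regression example."""
    return (y - y_hat) ** 2

def cross_entropy_loss(y, y_hat):
    """Binary cross-entropy loss for ONE classification example
    (y is the true label 0/1, y_hat the predicted probability of class 1)."""
    return -(y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat))

print(squared_error_loss(3.0, 2.5))  # loss for a single regression example: 0.25
print(cross_entropy_loss(1, 0.9))    # loss for a single classification example
```

Note that each call takes exactly one `(y, y_hat)` pair: the loss is defined per example.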

  2. Cost Function

    • Definition: The cost function is the average or total loss over the entire training dataset. It aggregates the individual losses across all training examples.
    • Purpose: It provides a single scalar value to evaluate the model’s performance on the dataset as a whole.
    • Example: The cost function is often the mean of the loss function across all data points.

    • Scope: Applies to the entire dataset and is often used during optimization (e.g., minimizing the cost function using gradient descent).
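Continuing the sketch above, the cost is then just the mean of the per-example losses over the whole dataset (example values are again made up):

```python
def cost(ys, y_hats):
    """Cost = mean of per-example squared-error losses over the dataset."""
    losses = [(y - y_hat) ** 2 for y, y_hat in zip(ys, y_hats)]
    return sum(losses) / len(losses)

# Three training examples; per-example losses are 0.25, 0.0, 1.0,
# so the cost is their mean: 1.25 / 3 ≈ 0.4167.
print(cost([3.0, 1.0, 2.0], [2.5, 1.0, 3.0]))
```

This single scalar is what gradient descent minimizes during training.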

  3. Error

    • Definition: Error generally refers to the difference between the predicted value and the actual target value.
    • Types of Error:
    • Training Error: The error calculated on the training dataset.
    • Validation/Testing Error: The error calculated on unseen data (validation or test set).
    • Purpose: It provides a more intuitive notion of how “wrong” a prediction is, without necessarily being tied to a specific mathematical formulation.
    • Example: For regression, the error for a single prediction is the true value minus the predicted value, y − ŷ.

Let me know if you have any questions. Happy learning!


Thanks a lot for your courteous and detailed response. I understand now that the cost function is the mean/average of the loss function across the entire dataset.

But I am not able to understand the difference between the loss function and the error, because their mathematical expressions/formulas seem almost the same.

“error” is the true value ‘y’ minus the predicted value ‘y_hat’.

It is used to compute the cost or loss, for example by squaring the error for every example and adding them all together.
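That relationship (error → squared per-example loss → cost) can be sketched in a few lines of Python; the function name and values here are just for illustration:

```python
def mse_cost(ys, y_hats):
    """Builds the cost from raw errors: square each example's error,
    sum them, and average over the dataset."""
    total = 0.0
    for y, y_hat in zip(ys, y_hats):
        error = y - y_hat       # the signed error for one example
        total += error ** 2     # squaring it gives that example's loss
    return total / len(ys)     # averaging all losses gives the cost

# Errors are +1 and -1; squared losses are 1 and 1; cost = 1.0.
print(mse_cost([2.0, 0.0], [1.0, 1.0]))
```

So the error itself is just the raw difference; the loss is a function applied to that error for one example, and the cost aggregates the losses.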
