Naming convention in Tensorflow: Loss vs. Cost

Hello to everybody,

I have a rather useless question that is nevertheless intriguing me.

Throughout this course, we used the term “cost” function for the final value of the model over a batch of samples, and the term “loss” for the value over a single example. However, when one calls … in TensorFlow, it prints the word “loss”. Is it referring to what we earlier called “cost”? Is the cost vs. loss convention established in this course common, or does it apply only within the scope of these lectures?

Thank you in advance and sorry for being so picky over this detail :sweat_smile:.
Best regards,

Hello Manuel @Manuel_Sanchez,

Welcome to our community!

Yes - when we compile a model, we provide a “loss” argument which is one of the tf.keras.losses.Loss objects, such as tf.keras.losses.BinaryCrossentropy for binary classification tasks. The “loss” here corresponds to what we called “cost”, since it measures the cost over a batch of samples, though it reduces to the loss of a single sample when the batch size is one. I would simply say the “loss” here is just the “cost” that we know from the lecture.
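To make the distinction concrete, here is a minimal sketch in plain Python (no TensorFlow required, and the formula is written out by hand as an illustration, not taken from the Keras source): the per-example binary cross-entropy is the “loss” of one sample, and the value a Keras progress bar reports as “loss” is, with the default mean reduction, the average of those per-example values over the batch - the lecture’s “cost”.

```python
import math

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    """Per-example binary cross-entropy: the 'loss' of a single sample."""
    p = min(max(y_pred, eps), 1 - eps)  # clip to avoid log(0)
    return -(y_true * math.log(p) + (1 - y_true) * math.log(1 - p))

# A small batch of labels and predicted probabilities (made-up numbers).
y_true = [1, 0, 1, 1]
y_pred = [0.9, 0.2, 0.6, 0.8]

# One "loss" value per example ...
per_example = [binary_crossentropy(t, p) for t, p in zip(y_true, y_pred)]

# ... and the batch-level value, i.e. the lecture's "cost": the mean of
# the per-example losses (Keras's default reduction averages this way).
cost = sum(per_example) / len(per_example)

print("per-example losses:", [round(l, 4) for l in per_example])
print("batch cost (what Keras prints as 'loss'):", round(cost, 4))
```

With a batch size of one, the mean over the batch is just the single example’s loss, which is why the two notions coincide in that special case.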

However, such differentiation between “loss” and “cost” isn’t unique to these lectures, though it is not surprising to see people use the two terms interchangeably. After all, there is no strong standard in the naming. :wink:

Thank you Raymond! This completely clears up my doubt.

You are welcome Manuel!