A simpler, different logistic loss function?

Not sure if this ever occurred to anyone. Just wanted to post it here for some brainstorming and a fun discussion.

The loss function for logistic regression introduces the log idea. Its formula is nicely laid out as:

$$\mathrm{loss}(f_{\mathbf{w},b}(\mathbf{x}^{(i)}), y^{(i)}) = -y^{(i)}\log\left(f_{\mathbf{w},b}(\mathbf{x}^{(i)})\right) - \left(1-y^{(i)}\right)\log\left(1-f_{\mathbf{w},b}(\mathbf{x}^{(i)})\right)$$

I am just wondering whether it is possible to gauge the loss in a simpler way with this formula:

$$\mathrm{loss}(f_{\mathbf{w},b}(\mathbf{x}^{(i)}), y^{(i)}) = \left|f_{\mathbf{w},b}(\mathbf{x}^{(i)}) - y^{(i)}\right|$$

That is, we take the absolute difference of $f_{\mathbf{w},b}(\mathbf{x}^{(i)})$ and $y^{(i)}$. Say $f_{\mathbf{w},b}(\mathbf{x}^{(i)}) = 0.8$: if $y^{(i)} = 1$, the loss is 0.2; if $y^{(i)} = 0$, the loss is 0.8.
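For concreteness, here is a small Python sketch (my own illustration, not from the course) comparing the proposed absolute-difference loss with the standard log loss on exactly this example:

```python
import math

def log_loss(f, y):
    # Standard logistic (cross-entropy) loss for a single example
    return -y * math.log(f) - (1 - y) * math.log(1 - f)

def abs_loss(f, y):
    # The proposed absolute-difference loss
    return abs(f - y)

f = 0.8  # model output f_wb(x^(i))
for y in (1, 0):
    print(f"y={y}: abs_loss={abs_loss(f, y):.3f}, log_loss={log_loss(f, y):.3f}")
```

One thing this already shows: for the confident mistake ($y = 0$, $f = 0.8$) the log loss is about 1.609 while the absolute loss caps out at 0.8, so the log loss punishes confident wrong predictions much more heavily.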

It just seems like an easier way to estimate the loss. But of course, I might have overlooked some advantages of using log over my idea. I'd appreciate your feedback!

Hi @Paige_Yang

Why do we use log in logistic regression? It makes the cost function convex. If we don't use the log, the cost curve looks like the image below, and gradient descent may fall into a local minimum, so we must use the log.

[image: non-convex cost curve with local minima]
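We can even check this numerically. The sketch below (my own, assuming a single training example $x = 1$, $y = 1$ and a sigmoid model) evaluates both losses on a grid of weights $w$ and tests convexity via second differences:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def is_convex(loss_fn, ws):
    # A function is convex on a grid iff all second differences are non-negative
    vals = [loss_fn(w) for w in ws]
    return all(vals[i - 1] - 2 * vals[i] + vals[i + 1] >= -1e-9
               for i in range(1, len(vals) - 1))

# One training example: x = 1, y = 1, so f(w) = sigmoid(w)
ws = [i / 10 for i in range(-60, 61)]
cross_entropy = lambda w: -math.log(sigmoid(w))   # log loss
absolute = lambda w: abs(sigmoid(w) - 1.0)        # proposed loss

print(is_convex(cross_entropy, ws))  # True
print(is_convex(absolute, ws))       # False
```

The absolute loss composed with the sigmoid is not convex (it flattens out and bends the wrong way for large negative $w$), while the log loss stays convex, which is exactly why gradient descent behaves well on it.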


Another important factor is not how we compute the loss, but that the gradient of the loss (the partial derivatives) is always real, defined, and continuous.
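To illustrate that point (my own sketch, using the usual convention $f = \sigma(z)$ with $z = \mathbf{w}\cdot\mathbf{x} + b$): the gradient of the log loss with respect to $z$ simplifies to just $f - y$, which is defined and smooth for every $z$. A quick finite-difference check:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def log_loss_at(z, y):
    # Log loss as a function of the pre-activation z
    f = sigmoid(z)
    return -y * math.log(f) - (1 - y) * math.log(1 - f)

# Analytic gradient d(loss)/dz = f - y, checked against central differences
h = 1e-6
for z in (-3.0, 0.0, 2.5):
    for y in (0, 1):
        numeric = (log_loss_at(z + h, y) - log_loss_at(z - h, y)) / (2 * h)
        analytic = sigmoid(z) - y
        assert abs(numeric - analytic) < 1e-5
print("d(loss)/dz == f - y everywhere we checked")
```

That clean $f - y$ form is a big part of why the log loss is so pleasant to optimize: the gradient never blows up or becomes undefined anywhere on the real line.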