Is the Logistic Loss Function based on probability and an inverse exponent?

The cross entropy loss function $-\log(\hat{y})$ is not the inverse function of the sigmoid; the actual inverse of the sigmoid is the logit function, $\log\left(\frac{p}{1-p}\right)$. I'm also not sure what you mean by avoiding Euler's constant here: note that in Machine Learning, $\log$ means the natural logarithm, so $e$ is very much involved.
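To make the distinction concrete, here is a minimal Python sketch (the function names `sigmoid`, `logit`, and `cross_entropy_loss` are my own illustrative choices, not anything from the course). It shows that the logit recovers $z$ from the sigmoid's output, while the cross entropy loss is a different quantity altogether:

```python
import numpy as np

def sigmoid(z):
    # Maps any real z to a probability in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def logit(p):
    # The actual inverse of the sigmoid: logit(sigmoid(z)) == z
    return np.log(p / (1.0 - p))

def cross_entropy_loss(y_hat, y):
    # Binary cross entropy; reduces to -log(y_hat) when y == 1
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

z = 2.0
p = sigmoid(z)                      # ~0.8808
print(logit(p))                     # recovers z = 2.0, so logit inverts sigmoid
print(cross_entropy_loss(p, 1.0))   # ~0.1269, the loss: a different function
```

Note that both `logit` and `cross_entropy_loss` use `np.log`, the natural logarithm, which is exactly why $e$ cannot be avoided.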

Here is a thread from mentor Raymond that explains how the loss function works and why it is defined the way it is.

Here’s another thread that includes a graph of $\log(z)$, which will help with the intuition.
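If you want a quick version of that intuition without following the link, a small sketch like the one below (using matplotlib, assumed available) plots both branches of the loss: $-\log(\hat{y})$ blows up as $\hat{y} \to 0$ when the true label is 1, and $-\log(1-\hat{y})$ does the same as $\hat{y} \to 1$ when the true label is 0.

```python
import numpy as np
import matplotlib.pyplot as plt

# Predicted probabilities, keeping away from 0 and 1 where the log diverges
y_hat = np.linspace(0.001, 0.999, 500)

plt.plot(y_hat, -np.log(y_hat), label=r"$-\log(\hat{y})$ (loss when $y=1$)")
plt.plot(y_hat, -np.log(1 - y_hat), label=r"$-\log(1-\hat{y})$ (loss when $y=0$)")
plt.xlabel(r"$\hat{y}$")
plt.ylabel("loss")
plt.legend()
plt.show()
```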
