Confusion about the loss function for logistic regression

Hello, I’m going through the 2nd week and had a quick question about why Andrew mentions that log(y_hat) should be large: https://www.coursera.org/learn/neural-networks-deep-learning/lecture/yWaRd/logistic-regression-cost-function

Shouldn’t we want the loss function to be as close to zero as possible? Then why do we want log(y_hat) to be large?
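For reference, here is the per-example loss from that lecture, written out:

```latex
% Per-example loss for logistic regression (cross-entropy):
\[
  \mathcal{L}(\hat{y}, y) = -\bigl( y \log \hat{y} + (1 - y) \log (1 - \hat{y}) \bigr)
\]
% For y = 1 this reduces to \mathcal{L}(\hat{y}, 1) = -\log \hat{y}.
```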

Thanks

Remember that the logs of numbers between 0 and 1 are all negative, right? So making a negative number larger (further to the right on the number line) moves it closer to 0, but from “underneath”. And since the loss in the y = 1 case is -log(y_hat), minimizing the loss is exactly the same as making log(y_hat) as large as possible.
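If it helps to see it numerically, here’s a minimal sketch (plain NumPy, with the y_hat values picked just for illustration):

```python
import numpy as np

# For y = 1 the loss is L = -log(y_hat).
# log(y_hat) is negative for y_hat in (0, 1), so -log(y_hat) is positive,
# and both move toward 0 as y_hat approaches 1.
for y_hat in [0.1, 0.5, 0.9, 0.99]:
    log_y_hat = np.log(y_hat)   # negative, approaches 0 from underneath
    loss = -log_y_hat           # positive, approaches 0 from above
    print(f"y_hat = {y_hat:.2f}   log(y_hat) = {log_y_hat:+.4f}   loss = {loss:.4f}")
```

Making log(y_hat) larger and making the loss smaller are literally the same move.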

Here’s a thread with a graph and some more discussion of these points.

Oh right, that makes sense.

Thanks Paul! Great name btw :slight_smile:
