Like, what does it mean when the loss function is negative? When it's zero I understand that the expected and actual output are the same, but what about a negative loss function?
In regression and classification problems, loss functions are defined to be non-negative, so the minimum value of the loss is zero. You filed this question in DLS Course 1 and I don't recall Prof Ng ever discussing negative-valued loss functions there, but it has been several years since I watched all these lectures in detail. If you have a reference to where you heard this in the lectures, please give the name of the lecture and the time offset.
The loss function is L = -(y log(a) + (1-y) log(1-a)), so if y = 1 and a = 0.8 then we get loss = -(1 * log(0.8)) = -0.223, which is negative. Or are we supposed to disregard the negative sign? @paulinpaloalto
The point is that logarithms of numbers between 0 and 1 are negative, right? So multiplying by -1 is necessary to produce a positive value. If y = 1 and a = 0.8, then we have:
L = -1 * 1 * log(0.8) = -1 * -0.22314 = 0.22314
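Here is a minimal numpy sketch of that computation, just to make the sign behavior concrete (the names y, a, and binary_cross_entropy are only illustrative, not from the course code):

```python
import numpy as np

def binary_cross_entropy(y, a, eps=1e-12):
    """Loss for a single example: L = -(y*log(a) + (1-y)*log(1-a)).

    a is clipped away from 0 and 1 so the log never blows up.
    """
    a = np.clip(a, eps, 1 - eps)
    return -(y * np.log(a) + (1 - y) * np.log(1 - a))

# y = 1, a = 0.8: log(0.8) is negative (about -0.22314),
# so the leading minus sign makes the loss positive.
print(binary_cross_entropy(1, 0.8))  # ~0.22314
```

The loss only approaches zero when a is close to the true label, and it grows without bound as a moves toward the wrong label, but it never goes negative.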
Yes, you are right. I see my mistake.