Assigning different weights to type 1 and type 2 errors?

Hi, I just started learning logistic regression and neural networks, and I have a question about a practical issue I faced. Typically the loss function is
-y*ln(f) - (1-y)*ln(1-f)

As you can see, it does not discriminate between y=0 and y=1. But I want to train my neural network (or run logistic regression, if applicable) to discriminate between Type 1 and Type 2 errors, i.e. I want false positives to weigh more than false negatives. Something like
-y*ln(f) - D*(1-y)*ln(1-f), where D is the weight assigned to false positives.

I’ve looked into tensorflow.keras.losses and there doesn’t seem to be an appropriate loss function for this. Does anyone have experience with a similar problem? Can it be solved by playing with the loss function, or does it call for an entirely different approach?

Hi @Ivan_Iurchenko,

Usually a fitting algorithm allows you to assign weights to individual samples. What about assigning a larger weight to all of your negative samples? That way, a false positive will cause more loss than an equally confident false negative.
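For example, Keras lets you do this through the `class_weight` argument of `fit`. A minimal sketch, where the toy data, the tiny model, and the value D = 5.0 are all made up for illustration:

```python
import numpy as np
import tensorflow as tf

# Toy binary-classification data, purely for illustration
X = np.random.rand(100, 4).astype("float32")
y = np.random.randint(0, 2, size=(100,)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Make every negative (y=0) sample count D times as much in the loss,
# so false positives become more expensive. D = 5.0 is an arbitrary choice.
D = 5.0
model.fit(X, y, epochs=5, class_weight={0: D, 1: 1.0})
```

If the weighting needs to vary per example rather than per class, `fit` also accepts a `sample_weight` array of the same length as the training data.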

Alternatively, you can write a custom loss function.
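Here is a minimal sketch of such a loss, implementing -y*ln(f) - D*(1-y)*ln(1-f) directly; the clipping constant and the one-layer model are my own additions to keep the snippet self-contained:

```python
import tensorflow as tf

def weighted_bce(D):
    """Binary cross-entropy with the y=0 term scaled by D, so a false
    positive costs D times as much as an equally confident false negative."""
    def loss(y_true, y_pred):
        # Match shapes/dtypes, then clip predictions so ln() stays finite
        y_true = tf.reshape(tf.cast(y_true, y_pred.dtype), tf.shape(y_pred))
        f = tf.clip_by_value(y_pred, 1e-7, 1.0 - 1e-7)
        return -tf.reduce_mean(
            y_true * tf.math.log(f)
            + D * (1.0 - y_true) * tf.math.log(1.0 - f)
        )
    return loss

# Hypothetical model, just to show where the loss plugs in
model = tf.keras.Sequential(
    [tf.keras.layers.Dense(1, activation="sigmoid", input_shape=(4,))]
)
model.compile(optimizer="adam", loss=weighted_bce(5.0))
```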

Raymond


Thanks, I appreciate your quick reply.