Hi, I just started learning logistic regression and neural networks, and I have a question about a practical issue I ran into: typically the loss function is

-y*ln(f) - (1-y)*ln(1-f)

As you can see, it does not discriminate between y=0 and y=1. But I want to train my neural network (or run logistic regression, if applicable) to discriminate between Type I and Type II errors, i.e. I want false positives to weigh more than false negatives. Something like

-y*ln(f) - D*(1-y)*ln(1-f) where D is a weight assigned to false positives
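To make concrete what I mean, here's a plain-NumPy sketch of that weighted loss (the function name, the clipping epsilon, and D=5.0 are just illustrative choices of mine; in Keras this would be wrapped as a custom loss operating on y_true/y_pred tensors):

```python
import numpy as np

def weighted_bce(y_true, y_pred, D=5.0, eps=1e-7):
    """Binary cross-entropy with the false-positive term scaled by D."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)  # avoid log(0)
    return -np.mean(
        y_true * np.log(y_pred)
        + D * (1.0 - y_true) * np.log(1.0 - y_pred)
    )

# A confident false positive (y=0, f=0.9) costs D times more
# than under the standard, unweighted loss:
y = np.array([0.0])
f = np.array([0.9])
print(weighted_bce(y, f, D=1.0))  # ≈ 2.303 (standard BCE: -ln(0.1))
print(weighted_bce(y, f, D=5.0))  # ≈ 11.513 (5x the penalty)
```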

I’ve looked into tensorflow.keras.losses and there doesn’t seem to be an appropriate loss function for this. Does anyone have experience with a similar problem? Can it be solved by playing with the loss function, or is an entirely different approach needed?