def result(self):
    '''Computes and returns the metric value tensor.'''
    # Calculate precision
    if (self.tp + self.fp == 0):
        precision = 1.0
    else:
        precision = self.tp / (self.tp + self.fp)
    # Calculate recall
    if (self.tp + self.fn == 0):
        recall = 1.0
    else:
        recall = self.tp / (self.tp + self.fn)
    # Return F1 Score
    ### START CODE HERE ###
    f1_score = ####################
    ### END CODE HERE ###
    return f1_score
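(The blank between the START/END markers is the graded part of the assignment, so it is left as-is above. For discussion purposes only, here is a minimal plain-Python sketch of the same computation as a standalone function, assuming the F1 score is the usual harmonic mean of precision and recall; `f1_from_counts` is a hypothetical name, not part of the course code.)

```python
def f1_from_counts(tp, fp, fn):
    # Precision/recall using the course's convention: 1.0 when the denominator is 0
    precision = 1.0 if (tp + fp) == 0 else tp / (tp + fp)
    recall = 1.0 if (tp + fn) == 0 else tp / (tp + fn)
    # F1 is the harmonic mean of precision and recall; guard against 0/0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(f1_from_counts(8, 8, 8))  # precision = recall = 0.5, so F1 = 0.5
```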

This is the method used in the F1 score class, but I think it is logically incorrect. When the if conditions for both precision and recall are true, the return value should be 0 instead of 1. If the true positive count (tp) is zero, then:

Precision should be 0, because there are no correct positive predictions, meaning all positive predictions are wrong.

Recall should also be 0, because no actual positive instances have been correctly predicted.

For tp + fp == 0 to be true, both tp and fp must be zero, since counts can never be negative in real-world scenarios. One cannot be positive while the other is negative, because that would imply an invalid situation where a count is less than zero.

True, in a real-world scenario the chance of both the true positive and false positive counts being 0 would be rare when checking precision in a case analysis, because why would anyone fund such a study? That said, what happens when 0/0, i.e. tp / (tp + fp), is evaluated? The assignment sets precision to 1 here by convention, reflecting the metric's role of detecting true positive cases among the predicted positives. But precision alone is not a true measure of capturing the actual positives; a combination is always better, and the F1 score is that combination of precision and recall.

Assigning precision a value of 1 doesn't mean the calculation actually produced 1. Since there were no true positives or false positives, precision is simply assigned the value 1 here.

I think the condition if (self.tp + self.fp == 0): was likely intended to avoid division by zero in the calculation, as this could result in NaN values and potentially interrupt training.

In this case, the correct approach would be:

# Calculate precision
if (self.tp + self.fp == 0):
    precision = 0  # Set to 0 instead of 1
else:
    precision = self.tp / (self.tp + self.fp)
# Calculate recall
if (self.tp + self.fn == 0):
    recall = 0  # Set to 0 instead of 1
else:
    recall = self.tp / (self.tp + self.fn)

This adjustment ensures that both precision and recall are set to 0 when there are no true positives or relevant predictions, preventing division by zero without falsely inflating the precision or recall values.
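To see how much the choice of convention matters, here is a small sketch that parameterizes the fallback value used when a denominator is zero (`empty_value` is a name introduced here for illustration, not from the assignment). With no positives at all, the two conventions report opposite extremes:

```python
def f1(tp, fp, fn, empty_value):
    # empty_value is what precision/recall fall back to when their denominator is 0
    precision = empty_value if (tp + fp) == 0 else tp / (tp + fp)
    recall = empty_value if (tp + fn) == 0 else tp / (tp + fn)
    # Guard the harmonic mean against 0/0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# No positives at all: the chosen convention alone decides the reported score
print(f1(0, 0, 0, empty_value=1.0))  # -> 1.0
print(f1(0, 0, 0, empty_value=0.0))  # -> 0.0
```

This is the same disagreement as in the thread: the assignment's convention yields a perfect score on a batch with no positive examples, while the 0 convention yields the worst score.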

hi @Ansil_M_B
I am trying to say that the assigned calculation states: if tp + fp == 0, then precision is set to 1. That doesn't mean the calculation actually evaluates to 1.

Another reason, as I understand it: if precision and recall both come out to 0, the F1 score has no value, since it is the harmonic mean of precision and recall, which gives a better measure of incorrectly diagnosed cases.
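To illustrate why the harmonic mean is a stricter summary than a simple average (a small numeric sketch, not from the assignment): when precision and recall are very unbalanced, the harmonic mean collapses toward the smaller value, while the arithmetic mean does not.

```python
precision, recall = 1.0, 0.01

# Arithmetic mean barely penalizes the tiny recall
arithmetic = (precision + recall) / 2   # about 0.505

# Harmonic mean (F1) is dragged down toward the smaller value
harmonic = 2 * precision * recall / (precision + recall)  # about 0.0198

print(arithmetic, harmonic)
```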

So assigning precision and recall a value of 1 here reflects what each metric tests: precision tests for true positives among the positively predicted cases (tp and fp), and recall tests for true positives among all actually positive cases (tp and fn).