C5W3 Low precision, F1, and recall of the model

Hello everyone,

I wonder why the model’s precision, recall, and F1 are low even though the accuracy is considerably high.

The course mentions the following:
“We could define more useful metrics such as F1 score or Precision/Recall”

Can someone explain the reason for the results below?

```
              precision    recall  f1-score   support

         0.0       0.94      0.98      0.96     32013
         1.0       0.40      0.22      0.28      2362

    accuracy                           0.92     34375
   macro avg       0.67      0.60      0.62     34375
weighted avg       0.91      0.92      0.91     34375
```
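For reference, the class-1 row can be read back into approximate counts. This is a rough back-of-envelope sketch reconstructed from the rounded report values, not exact model output:

```python
# Approximate reading of the class-1 (minority) row of the report above.
# Values are the rounded precision/recall/support from the report.
support_1 = 2362    # actual positives in the evaluation set
recall_1 = 0.22     # fraction of actual positives the model caught
precision_1 = 0.40  # fraction of predicted positives that were correct

tp = recall_1 * support_1   # true positives actually caught
fn = support_1 - tp         # positives the model missed
fp = tp / precision_1 - tp  # false alarms among predicted positives

print(f"~{tp:.0f} caught, ~{fn:.0f} missed, ~{fp:.0f} false alarms")
```

In other words, roughly four out of five positives are missed, which the 0.92 accuracy completely hides.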

The markdown mentions this:
Since the labels are heavily skewed to 0's, a neural network that just outputs 0's would get slightly over 90% accuracy.

Precision, recall and F1-score are used as metrics for imbalanced datasets. Here’s a classification example on imbalanced data.
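The quoted claim about a trivial all-zeros predictor can be checked with a quick sketch, using the same class counts as the report above (32013 negatives, 2362 positives):

```python
import numpy as np

# Labels with the same class balance as the report above.
y_true = np.array([0] * 32013 + [1] * 2362)

# A "model" that always predicts the majority class.
y_pred = np.zeros_like(y_true)

accuracy = (y_true == y_pred).mean()
print(f"all-zeros accuracy: {accuracy:.3f}")  # ~0.931 despite learning nothing
```

So an accuracy of 0.92 is actually *below* the do-nothing baseline, which is exactly why precision/recall/F1 on the minority class are the metrics to watch here.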

Thank you for your reply,

I understand that, but why are the values of precision, recall, and F1 so low, and are they acceptable? The model appears to be making good predictions, but how can I say the predictions are good if the precision is that low?
Are there any possible ways to increase those values?

The acceptable level of performance is determined by the consumer of the model, i.e., your client.

Please look online for methods of dealing with imbalanced data, and at AUC (in the link shared previously).
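One common approach is class weighting: make mistakes on the rare class cost more during training. Below is a minimal sketch using scikit-learn's `class_weight="balanced"` on synthetic data with a similar imbalance; it is illustrative only, not the course's method, and the dataset and model here are assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Synthetic data with roughly the same skew as the thread (~7% positives).
X, y = make_classification(n_samples=10_000, weights=[0.93, 0.07],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Unweighted model vs. one that upweights minority-class errors.
plain = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
weighted = LogisticRegression(class_weight="balanced",
                              max_iter=1000).fit(X_tr, y_tr)

print("recall (plain):   ", recall_score(y_te, plain.predict(X_te)))
print("recall (weighted):", recall_score(y_te, weighted.predict(X_te)))
```

Other options worth searching for include resampling (over/undersampling, SMOTE) and lowering the decision threshold on the predicted probability; each trades some precision for recall, so the right balance again depends on what your client needs.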