Hello everyone, I hope you are having a good day!
I’m not sure how useful this is, but I noticed the function get_sensitivity_specificity_test() prints
“Test Predictions: [1 0 0 1 1]”
when it should print
“Test Predictions: [0.8 0.8 0.4 0.6 0.3]”
The error is on line 201 of the file public_tests.py, which reads
print("Test Predictions: ", y_test)
but it should be
print("Test Predictions: ", preds_test)
Best,
Tomas.
Can you share an image of the error you are mentioning?
Test Predictions is y_test because the cell is testing get_sensitivity_specificity_test(), meaning it is computing
sensitivity (float): the probability that our test outputs positive given that the case is actually positive
specificity (float): the probability that the test outputs negative given that the case is actually negative
So it needs the true values of the cases that are actually positive and negative, as well as the predicted values.
Please check that you have not mixed up the code.
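In formula terms, those two definitions are the standard textbook ones (not specific to this assignment):

$$\text{sensitivity} = P(\text{test}+ \mid \text{case}+) = \frac{TP}{TP + FN}, \qquad \text{specificity} = P(\text{test}- \mid \text{case}-) = \frac{TN}{TN + FP}$$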
Regards
DP
Sure, here is the image:
What I mean is that where it displays Test Predictions: [1 0 0 1 1], it should really display Test Predictions: [0.8 0.8 0.4 0.6 0.3]. I think it is just a typo in the code, because the computed sensitivity (0.666…) and specificity (0.5) are correct when using [0.8 0.8 0.4 0.6 0.3]. But maybe there is something I am not understanding.
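To double-check those numbers, here is a quick sketch using the values from the test output (numpy and a 0.5 threshold assumed; for these particular values it does not matter whether the comparison is > or >=):

```python
import numpy as np

y_test = np.array([1, 0, 0, 1, 1])                 # true labels, as printed
preds_test = np.array([0.8, 0.8, 0.4, 0.6, 0.3])   # model probabilities, as printed

hard = (preds_test >= 0.5).astype(int)             # -> [1 1 0 1 0]

tp = np.sum((y_test == 1) & (hard == 1))           # 2 true positives
fn = np.sum((y_test == 1) & (hard == 0))           # 1 false negative
tn = np.sum((y_test == 0) & (hard == 0))           # 1 true negative
fp = np.sum((y_test == 0) & (hard == 1))           # 1 false positive

print("sensitivity:", tp / (tp + fn))              # 0.666...
print("specificity:", tn / (tn + fp))              # 0.5
```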
OK, so you got the answer.
Great, that's a good doubt actually, Tomas. The reason it is y_test is that this cell is finding the true values for each case, while 0.8, which comes from preds_test, is the predicted probability used to judge whether a case is actually positive (having the disease) or actually negative (not having the disease).
y_test is chosen for Test Predictions in this cell because it is identifying the cases that actually have the disease and the cases that actually do not; the test cell clearly refers to get_sensitivity_specificity_test() and not to the predictive part of the case.
If you look at the get_accuracy cell, which determines the accuracy of predictions for all cases, its test does use preds_test for Test Predictions, since it determines accuracy based on a given threshold to derive TP, FP, FN and TN.
Check the test prediction values for that cell: they come out as Test Predictions: [0.8 0.8 0.4 0.6 0.3].
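For comparison, a minimal sketch of an accuracy check along those lines; the name get_accuracy comes from the notebook, but the signature and body here are my assumption:

```python
import numpy as np

def get_accuracy(y, preds, threshold=0.5):
    # Threshold the probabilities into hard 0/1 labels, then compare to the truth.
    predicted_labels = (preds >= threshold).astype(int)
    return np.mean(predicted_labels == y)   # (TP + TN) / total cases

y_test = np.array([1, 0, 0, 1, 1])
preds_test = np.array([0.8, 0.8, 0.4, 0.6, 0.3])
print(get_accuracy(y_test, preds_test))     # 0.6 -> 3 of 5 cases classified correctly
```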
Regards
DP
I have not taken this course or done this programming exercise, but just looking at the information that Tomas has given us, why would the specificity be 0.5? If the correct data is shown there, notice that the predictions exactly match the labels, so both sensitivity and specificity should be 100%, right? There are no false positives and no false negatives.
But if you look at the actual \hat{y} values, as Tomas shows us, then you can see that in fact the model has one false positive and one false negative, right? I believe that is the point being made here: the test code is just printing the wrong thing in that position (the labels instead of the actual model output values before rounding to 0 or 1).
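To make that concrete: if the printed [1 0 0 1 1] really were the model's predictions, both metrics would come out as 100% (a quick sketch, not course code):

```python
import numpy as np

y_test = np.array([1, 0, 0, 1, 1])
preds = y_test.astype(float)              # pretend the printed labels were the outputs

hard = (preds >= 0.5).astype(int)         # identical to y_test
fp = np.sum((y_test == 0) & (hard == 1))  # 0 false positives
fn = np.sum((y_test == 1) & (hard == 0))  # 0 false negatives
tp = np.sum((y_test == 1) & (hard == 1))  # 3
tn = np.sum((y_test == 0) & (hard == 0))  # 2

print(tp / (tp + fn))                     # 1.0 -> sensitivity 100%
print(tn / (tn + fp))                     # 1.0 -> specificity 100%
```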
Hello Paul,
The Test Predictions for the cell being discussed here are specific to get_sensitivity_specificity_test(),
so here the test labels are the true labels for the cases that were actually positive and negative.
The test predictions are then used to detect the cases that were actually positive and negative, with the given threshold of 0.5.
Hence the test labels and the test predictions match.
As for your next point, that there are no false positives and false negatives: kindly refer to the image below, which shows how many of the cases are false positives and false negatives.
The computed specificity is 0.5 because it is not just the count of true negatives; it is calculated as TN / (TN + FP). So, calculating from the shared image, 3 / (3 + 4) = 3/7 ≈ 0.43, which is roughly 0.5.
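Checking that arithmetic with the counts read from the image (TN = 3, FP = 4):

```python
tn, fp = 3, 4            # counts as read from the shared image
print(tn / (tn + fp))    # 0.42857...
```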
I agree these exercises are a bit confusing, since it is easy to mix up the accuracy predictions and the sensitivity/specificity test predictions. We used to have a tough time with the same thing back in our college days.
Regards
DP