I think this line in the display_images function (and consequently the coloring) in the C1_W1_Lab_3_siamese-network notebook is quite misleading:
if predictions[i] > 0.5: t.set_color('red') # bad predictions in red
These predictions are not bad (i.e. incorrect); they are simply pairs predicted to be dissimilar (according to the set threshold of 0.5).
I propose improving the prediction labelling to show:
- predicted distance
- true label
- predicted label
and to color in red those pairs where the true label does not match the predicted label.
We can achieve this by replacing the corresponding code in the display_images function with the following:
lab_titles = ['dissimilar', 'similar']
tick_labels = [f'distance = {p:.3f}\ntrue:{lab_titles[int(l)]}\npred:{lab_titles[int(p < 0.5)]}'
               for (p, l) in zip(predictions, labels)]
plt.xticks([28 * x + 14 for x in range(n)], tick_labels)
for i, t in enumerate(plt.gca().xaxis.get_ticklabels()):
    if int(predictions[i] < 0.5) != labels[i]:
        t.set_color('red')  # red only for genuinely incorrect predictions
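For anyone who wants to sanity-check the labelling logic outside the notebook, here is a minimal self-contained sketch. The predictions and labels below are made up for illustration, and the blank 28x(28*n) strip stands in for the notebook's image row; the variable names and the 28-pixel image width follow the notebook's conventions:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt

# made-up example data: predicted distances and true labels (1 = similar, 0 = dissimilar)
predictions = np.array([0.12, 0.87, 0.40, 0.65])
labels = np.array([1, 0, 0, 1])
n = len(predictions)

lab_titles = ['dissimilar', 'similar']
tick_labels = [f'distance = {p:.3f}\ntrue:{lab_titles[int(l)]}\npred:{lab_titles[int(p < 0.5)]}'
               for (p, l) in zip(predictions, labels)]

plt.imshow(np.zeros((28, 28 * n)), cmap='gray')  # placeholder for the row of n 28x28 images
plt.xticks([28 * x + 14 for x in range(n)], tick_labels)
for i, t in enumerate(plt.gca().xaxis.get_ticklabels()):
    if int(predictions[i] < 0.5) != labels[i]:
        t.set_color('red')  # red only for genuinely incorrect predictions

colors = [t.get_color() for t in plt.gca().xaxis.get_ticklabels()]
print(colors)
```

With this dummy data the first two pairs are classified correctly and keep the default tick color, while the last two (distance 0.40 for a dissimilar pair, 0.65 for a similar one) are misclassified and turn red.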
This results in the following final picture (example):
