C4:W4:A1 triplet_loss scalar

Just for grins, I tried returning loss.numpy(), and it fails the following assertion in that test cell:

loss = triplet_loss(y_true, y_pred)

assert type(loss) == tf.python.framework.ops.EagerTensor, "Use tensorflow functions"
print("loss = " + str(loss))

So you have to return it as a Tensor; otherwise the test fails.
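
You can see the type difference directly in eager mode. This is just my own quick check, not part of the course code, and the printed types are what I'd expect from TF 2.x:

loss = triplet_loss(y_true, y_pred)
print(type(loss))           # <class 'tensorflow.python.framework.ops.EagerTensor'>
print(type(loss.numpy()))   # <class 'numpy.float32'>, not an EagerTensor, so the assert fails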

The reason this matters: if you want to train a model using TF's automatic gradient computation, you can't have any numpy calls in the middle of the computation, because numpy operations aren't recorded on the gradient tape, so gradients can't flow through them. That's a general point and doesn't really apply here, since we don't run any training in this assignment and only use a pretrained model, but it's something to keep in mind.

If you want to see an example of the kind of failure that intermixing numpy calls causes, go back to Course 2 Week 3 and, in the compute_cost function, use logits.numpy().T to do the transpose instead of tf.transpose. It passes the unit test for that function, but when you run the training later, things explode and catch fire.
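
If you'd rather not dig out the C2W3 code, here's a self-contained toy version of the same failure. This is my own minimal sketch, not anything from the course; a GradientTape makes it easy to see exactly where the numpy round trip cuts the chain:

import tensorflow as tf

x = tf.Variable(3.0)

# Pure TF path: the tape can trace every op back to x
with tf.GradientTape() as tape:
    z = 2.0 * tf.square(x)
print(tape.gradient(z, x))  # tf.Tensor(12.0, shape=(), dtype=float32)

# numpy detour: .numpy() drops out of the graph, so the tape loses track
with tf.GradientTape() as tape:
    y = tf.square(x)
    z = 2.0 * tf.constant(y.numpy())  # gradient chain is cut here
print(tape.gradient(z, x))  # None, because no gradient flows through numpy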
