So you have to return it as a Tensor, otherwise the test fails.
The reason that this matters is that if you want to train a model using TF's automatic gradient computations, you can't have any numpy functions in the computation graph, because they don't get the automatic gradients calculated. That point is a general one and doesn't really apply here, since we don't actually run any training and only use a pretrained model in this assignment. But it's something to keep in mind in general. If you want to see an example of the type of failure intermixing numpy calls causes, go back to Course 2 Week 3 and in the compute_cost function, use logits.numpy().T to do the transpose instead of tf.transpose. It passes the unit test for the function, but when you run the training later things explode and catch fire.
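Here is a minimal sketch of what that failure looks like under the hood (variable names like `logits` are just stand-ins, not the actual course variables): once a value takes a round trip through numpy, the gradient tape loses track of it and returns None instead of a gradient.

```python
import tensorflow as tf

logits = tf.Variable([[2.0, 1.0], [0.5, 3.0]])

# Pure TF ops: the tape can trace the computation back to `logits`.
with tf.GradientTape() as tape:
    loss = tf.reduce_sum(tf.square(tf.transpose(logits)))
print(tape.gradient(loss, logits))   # a real gradient Tensor

# numpy detour: .numpy() leaves the graph, so the connection is severed.
with tf.GradientTape() as tape:
    transposed = tf.constant(logits.numpy().T)
    loss = tf.reduce_sum(tf.square(transposed))
print(tape.gradient(loss, logits))   # None -- no gradient flows back
```

Both versions compute the same loss value, which is exactly why the unit test passes and the explosion only happens once training actually needs the gradients.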
I think this is literally true, and might be useful in some situations (e.g. printing, or extracting just the numeric value of a 0-rank Tensor). However, @paulinpaloalto's experiment suggests this might not be one of those situations, since the return type is expected to be tf.python.framework.ops.EagerTensor.
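A quick sketch of the distinction (the variable name `loss` here is just for illustration): a 0-rank result stays an EagerTensor until you explicitly pull the Python number out of it, so calling .numpy() is fine for printing but changes the type the test checks for.

```python
import tensorflow as tf

loss = tf.reduce_sum(tf.constant([0.5, 1.5]))  # 0-rank EagerTensor
print(type(loss))     # <class '...EagerTensor'>
print(loss.numpy())   # 2.0 -- a plain float, no longer a Tensor
```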
There was no error, but since the exercise specified it to be a scalar, I thought maybe it would trickle down later. But since that's the only use of the triplet_loss function, there is no such issue.
Sorry for confusing everyone.