C4:W4:A1 triplet_loss scalar

I am able to get the correct value of the loss, but I am unable to get it as a scalar, as asked in the question.
This is what I get:

loss = tf.Tensor(527.2598, shape=(), dtype=float32)

I have tried to get it from .shape[0], but that doesn't work. I don't understand how to convert it to a scalar.

Try this: loss.numpy()
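
For what it's worth, here is a minimal sketch (using a made-up constant in place of the computed loss) of what .numpy() gives you for a rank-0 tensor:

import tensorflow as tf

# A rank-0 ("scalar") tensor, like the loss value printed above.
loss = tf.constant(527.2598)
print(loss)          # tf.Tensor(527.2598, shape=(), dtype=float32)

# .numpy() extracts the underlying value as a NumPy scalar.
print(loss.numpy())  # 527.2598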

It should be fine to leave the loss value as a Tensor of shape (). That works for me. What indication do you get that this is a problem?


Just for grins, I tried returning loss.numpy() and it fails the following assertion in that test cell:

loss = triplet_loss(y_true, y_pred)

assert type(loss) == tf.python.framework.ops.EagerTensor, "Use tensorflow functions"
print("loss = " + str(loss))

So you have to return it as a Tensor; otherwise the test fails.
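
To make the type difference concrete, here's a quick sketch (the type name is the one that assert references):

import tensorflow as tf

loss = tf.constant(527.2598)
print(type(loss))          # <class 'tensorflow.python.framework.ops.EagerTensor'>
print(type(loss.numpy()))  # <class 'numpy.float32'>

The assert checks for the first type, so returning the NumPy value fails it.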

The reason that this matters is that if you want to train a model using TF’s automatic gradient computations, you can’t have any numpy functions in the computation graph, because they don’t get automatic gradients calculated. That point is a general one and doesn’t really apply here, since we don’t actually run any training and only use a pretrained model in this assignment. But it’s something to keep in mind in general. If you want to see an example of the type of failure that intermixing numpy calls causes, go back to Course 2 Week 3 and, in the compute_cost function, use logits.numpy().T to do the transpose instead of tf.transpose. It passes the unit test for the function, but when you run the training later things explode and catch fire.
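
Here's a minimal sketch of that failure mode, using a toy function under tf.GradientTape rather than the actual compute_cost from that assignment:

import tensorflow as tf

x = tf.Variable(3.0)

# Gradients flow through pure-TF ops.
with tf.GradientTape() as tape:
    y = x ** 2
print(tape.gradient(y, x))  # tf.Tensor(6.0, shape=(), dtype=float32)

# Round-tripping through numpy breaks the chain: x.numpy() is just a
# plain number, so the tape has nothing to differentiate through.
with tf.GradientTape() as tape:
    y = tf.constant(x.numpy()) ** 2
print(tape.gradient(y, x))  # None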


According to the TensorFlow doc, that is exactly what a TensorFlow scalar looks like. See Introduction to Tensors | TensorFlow Core
Specifically this section…

Here is a “scalar” or “rank-0” tensor. A scalar contains a single value, and no “axes”.

import tensorflow as tf

# This will be an int32 tensor by default; see "dtypes" below.
rank_0_tensor = tf.constant(4)
print(rank_0_tensor)

tf.Tensor(4, shape=(), dtype=int32)

I think this is literally true, and it might be useful in some situations (e.g., printing, or extracting just the numeric value of the rank-0 Tensor). However, @paulinpaloalto’s experiment suggests this might not be one of those situations, since the return value is expected to be of type tf.python.framework.ops.EagerTensor.

There was no error, but since the exercise specified that the result should be a scalar, I thought the issue might trickle down later. But since that’s the only use of the triplet_loss function, there is no such problem.
Sorry for confusing everyone.