I’m running into an issue on Week 4’s programming assignment Face Recognition.

### Exercise 1 - triplet_loss

The error I’m getting is:

ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

My code is below; I’ll remove it once I’ve solved it. Thanks in advance!

```
# YOUR CODE STARTS HERE
pos_dist = tf.square(tf.subtract(anchor, positive))
print("shape of pos distance", pos_dist.shape)
neg_dist = tf.square(tf.subtract(anchor, negative))
print("shape of neg distance", neg_dist.shape)
basic_loss = pos_dist - neg_dist + alpha
print("shape of basic loss", basic_loss.shape)
loss = tf.maximum(basic_loss, 0.0)
print("shape of loss", loss.shape)
# YOUR CODE ENDS HERE
```

Please have another look at the instructions: you need to add some *reduce_sum* calls there, both for *pos_dist* and *neg_dist* and for the final *loss*.
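To see what the *reduce_sum* does to the shapes, here is a toy illustration (made-up numbers, not the notebook's data): summing over the last axis collapses the per-component squared differences into one squared distance per sample.

```python
import tensorflow as tf

# Toy "embeddings": 3 samples, 4 components each (hypothetical values)
anchor = tf.constant([[1., 2., 3., 4.],
                      [0., 1., 0., 1.],
                      [2., 2., 2., 2.]])
positive = tf.ones_like(anchor)

diff_sq = tf.square(tf.subtract(anchor, positive))  # shape (3, 4)
pos_dist = tf.reduce_sum(diff_sq, axis=-1)          # shape (3,): values 14., 2., 4.
print(pos_dist.shape)
```

Without the *reduce_sum*, *pos_dist* stays at shape (3, 4), and comparing a multi-element tensor against a scalar in the test's assert is typically what produces that ambiguous-truth-value error.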

I added your print statements to my code which passes the test cases and the grader and here’s what I get for the output of that test cell:

```
type(anchor) <class 'tensorflow.python.framework.ops.EagerTensor'>
shape of pos distance (3,)
shape of neg distance (3,)
shape of basic loss (3,)
loss = tf.Tensor(527.2598, shape=(), dtype=float32)
type(anchor) <class 'list'>
shape of pos distance ()
shape of neg distance ()
shape of basic loss ()
type(anchor) <class 'list'>
shape of pos distance ()
shape of neg distance ()
shape of basic loss ()
type(anchor) <class 'list'>
shape of pos distance ()
shape of neg distance ()
shape of basic loss ()
```

So you can see that for the first test case, my values are 1D tensors of length 3.

I’m still struggling with the shapes.

Here’s what I’m trying now, but the values are off. I tried different combinations, but they kept throwing errors. I’m not 100% sure whether I should be summing before squaring the values, or vice versa. Any tips?

(code removed)

```
shape of pos distance (3, 1)
shape of neg distance (3, 1)
shape of basic loss (3, 1)
shape of loss (3, 1)
loss = tf.Tensor(
[[155365.67]
 [301919.9 ]
 [266513.62]], shape=(3, 1), dtype=float32)
shape of pos distance (1,)
shape of neg distance (1,)
shape of basic loss (1,)
shape of loss (1,)
shape of pos distance (1,)
shape of neg distance (1,)
shape of basic loss (1,)
shape of loss (1,)

AssertionError                            Traceback (most recent call last)
in
     14 y_pred_perfect = ([1., 1.],[1., 1.], [0., 0.,])
     15 loss = triplet_loss(y_true, y_pred_perfect, 3)
---> 16 assert loss == 1., "Wrong value. Check that pos_dist = 0 and neg_dist = 2 in this example"
     17 y_pred_perfect = ([1., 1.],[0., 0.], [1., 1.,])
     18 loss = triplet_loss(y_true, y_pred_perfect, 0)

AssertionError: Wrong value. Check that pos_dist = 0 and neg_dist = 2 in this example
```

It is a mistake to use the *axis* parameter and *keepdims=True* on the final *reduce_sum* that computes the actual loss value: that result is supposed to be a scalar.

You don’t really need the *keepdims = True* on the earlier *reduce_sum* calls for *pos_dist* and *neg_dist*, but it shouldn’t cause any real harm.
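As a quick shape check (a toy tensor, nothing from the assignment):

```python
import tensorflow as tf

x = tf.constant([[1., 2.], [3., 4.], [5., 6.]])        # shape (3, 2)

print(tf.reduce_sum(x, axis=-1).shape)                 # (3,)  - one value per row
print(tf.reduce_sum(x, axis=-1, keepdims=True).shape)  # (3, 1) - summed axis kept as length 1
print(tf.reduce_sum(x).shape)                          # ()    - scalar sum of everything
```

The (3, 1) shapes in your pasted output are the signature of *keepdims=True*; the final loss should come out looking like the () case.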

That said, I tried adding the *axis = -1* and *keepdims = True* and I get very different answers than you show.

Then I looked more carefully at your code: you have the computations fundamentally wrong for *pos_dist* and *neg_dist*. Please compare the code you actually wrote to the math formulas for those values. What you wrote does not faithfully express what the math formulas say.
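For reference, the formula in question is the standard triplet loss (written here for a single triplet, with the hinge at zero):

```latex
\mathcal{L}(A, P, N) = \max\Big( \big\lVert f(A) - f(P) \big\rVert_2^2 \;-\; \big\lVert f(A) - f(N) \big\rVert_2^2 \;+\; \alpha,\; 0 \Big)
```

Compare each term of that expression against what your code actually computes for *pos_dist* and *neg_dist*.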

Once you get this debugged, please edit your previous posts to remove the solution source code: we are not supposed to leave that sitting around on the forums.

Apologies Paul, I cannot edit the initial post. The system says it has been up for too long, so I can neither edit nor delete it. Do you have elevated privileges to remove it?

I was able to remove the more recent entry.

So I looked at the formula again, and I’m still missing something.

I tried summing up Anchor and Positive before subtracting them. Then I squared the result. No joy.

Then I looked at the subscript 2 and thought maybe I was missing something there, but it’s still a little hazy.

Do I need to run it through an absolute value function? Maybe I’m missing what f(A) and f(P) are?

JB

You have the “order of operations” wrong. Read the math formula again. Here’s the order in which that formula tells you to do things:

- Subtract the appropriate tensors
- Square the differences
- Take the difference of the squares and add alpha
- Add up all the terms

Do you see why that is different than what you did?
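On the subscript 2 you asked about: that is the Euclidean (L2) norm, and since it gets squared, no absolute value is needed. The squared norm is just a sum of squared components, where f(A) and f(P) are the network’s encodings of the anchor and positive images:

```latex
\big\lVert f(A) - f(P) \big\rVert_2^2 = \sum_{i} \big( f(A)_i - f(P)_i \big)^2
```

As a sanity check against the failing test case: with f(A) = (1, 1), f(P) = (1, 1), f(N) = (0, 0) and alpha = 3, you get pos_dist = 0, neg_dist = 1 + 1 = 2, basic_loss = 0 - 2 + 3 = 1, and loss = max(1, 0) = 1, which is exactly what the assert expects.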

I removed the reduce_sum from pos_dist so that it only subtracted positive from the anchor. It then squared that difference.

I did the same for neg_dist. The only difference is that I now subtract negative instead of positive.

I took the difference of the squares and added alpha.

Finally I did a maximum with basic_loss and 0.0 before putting that through reduce_sum.

I get the following error.

```
loss = tf.Tensor(4882.205, shape=(), dtype=float32)

AssertionError                            Traceback (most recent call last)
in
     11 y_pred_perfect = ([1., 1.], [1., 1.], [1., 1.,])
     12 loss = triplet_loss(y_true, y_pred_perfect, 5)
---> 13 assert loss == 5, "Wrong value. Did you add the alpha to basic_loss?"
     14 y_pred_perfect = ([1., 1.],[1., 1.], [0., 0.,])
     15 loss = triplet_loss(y_true, y_pred_perfect, 3)

AssertionError: Wrong value. Did you add the alpha to basic_loss?
```

I’m really lost on this one.

You do need the *reduce_sum* calls when you compute *pos_dist* and *neg_dist*. You sum over the last axis, so that you end up with a 1D vector with one value per sample.
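To illustrate with the unit test’s own style of input (plain Python lists rather than the notebook’s batch tensors, with hypothetical values mirroring the test cases):

```python
import tensorflow as tf

a = [1., 1.]   # stands in for f(A)
n = [0., 0.]   # stands in for f(N)

# Lists have only one axis, so summing over axis=-1 yields a scalar
neg_dist = tf.reduce_sum(tf.square(tf.subtract(a, n)), axis=-1)
print(float(neg_dist))  # 2.0 - the neg_dist the test expects

# With a batch of shape (3, 2), the same axis=-1 call gives one value per sample
batch_a = tf.zeros((3, 2))
batch_n = tf.ones((3, 2))
print(tf.reduce_sum(tf.square(batch_a - batch_n), axis=-1).shape)  # (3,)
```

That is why the passing output earlier in the thread shows shape (3,) for the tensor test case and () for the list test cases: *axis=-1* always sums over the embedding components, whatever batch dimensions sit in front.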

Hi Paul,

I reincorporated the *reduce_sum* calls on anchor, positive and negative. I then find the distances and square them. The shapes seem to follow what you are seeing, but the values and losses are off.

Any ideas?

##
anchor is type <class ‘tensorflow.python.framework.ops.EagerTensor’>

pos_dist tf.Tensor([386717.22 414744.97 415558.94], shape=(3,), dtype=float32)

pos_dist shape (3,)

neg_dist tf.Tensor([231351.72 112825.22 149045.44], shape=(3,), dtype=float32)

neg_dist shape (3,)

basic_loss tf.Tensor([155365.7 301919.94 266513.7 ], shape=(3,), dtype=float32)

basic loss shape (3,)

loss tf.Tensor(723799.3, shape=(), dtype=float32)

loss = tf.Tensor(723799.3, shape=(), dtype=float32)

anchor is type <class ‘list’>

pos_dist tf.Tensor(0.0, shape=(), dtype=float32)

pos_dist shape ()

neg_dist tf.Tensor(0.0, shape=(), dtype=float32)

neg_dist shape ()

basic_loss tf.Tensor(5.0, shape=(), dtype=float32)

basic loss shape ()

loss tf.Tensor(5.0, shape=(), dtype=float32)

anchor is type <class ‘list’>

pos_dist tf.Tensor(0.0, shape=(), dtype=float32)

pos_dist shape ()

neg_dist tf.Tensor(4.0, shape=(), dtype=float32)

neg_dist shape ()

basic_loss tf.Tensor(-1.0, shape=(), dtype=float32)

basic loss shape ()

loss tf.Tensor(0.0, shape=(), dtype=float32)

AssertionError Traceback (most recent call last)

in

15 y_pred_perfect = ([1., 1.],[1., 1.], [0., 0.,])

16 loss = triplet_loss(y_true, y_pred_perfect, 3)

—> 17 assert loss == 1., “Wrong value. Check that pos_dist = 0 and neg_dist = 2 in this example”

18 y_pred_perfect = ([1., 1.],[0., 0.], [1., 1.,])

19 loss = triplet_loss(y_true, y_pred_perfect, 0)

AssertionError: Wrong value. Check that pos_dist = 0 and neg_dist = 2 in this example

Disregard the last one Paul, I figured it out. Thanks for all your help! I kept at the order of operations as you had suggested and that eventually did the trick!


It’s great news that you got it to work! Thanks for confirming.
