I am on the first exercise of the assignment, but I can’t seem to get the correct output. I am wondering if my reduce_sum axis is incorrect. Can someone help me with the problem? Thanks!

As you see in “Additional Hints”, there are 3 lines where you use tf.reduce_sum. The first two calculate the L2 distance between the “anchor” and the “positive”/“negative” encodings. And, as described, the shape is (m, 128), which means that 128 elements will be summed over. So, which dimension do we need to reduce? The last one, by specifying axis=-1, as instructed.

For the last one, we can use the default setting. No need to specify an axis.
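A minimal sketch of the three reductions, using made-up constant encodings (the shapes match the assignment; the values are purely illustrative):

```python
import tensorflow as tf

# Illustrative encodings: m = 3 images, 128 features each.
anchor = tf.ones([3, 128])
positive = tf.zeros([3, 128])
negative = tf.fill([3, 128], 2.0)

# First two reduce_sum calls: collapse the feature axis, (3, 128) -> (3,)
pos_dist = tf.reduce_sum(tf.square(anchor - positive), axis=-1)
neg_dist = tf.reduce_sum(tf.square(anchor - negative), axis=-1)
print(pos_dist.shape, neg_dist.shape)  # (3,) (3,)

# Last reduce_sum: default axis=None collapses everything to a scalar
total = tf.reduce_sum(pos_dist)
print(float(total))  # 384.0, i.e. 3 samples * 128 ones
```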

I am reducing the 2nd dimension, but I get the error:

“InvalidArgumentError: Invalid reduction dimension (1 for input with 1 dimension(s) [Op:Sum]”

This happens on the second test case; it gives the expected output on the case with shape (m, 128).

I think this is slightly tricky. The test program passes [3, 128] data at first, then starts to pass some different data. The challenge is that those are just plain lists, which become 1-D tensors. So, if you want to sum over a specific axis, say axis=1, it cannot do anything. And, if we think about more general use of this routine, specifying a fixed positive axis may not be practical, because we may get higher-dimensional data. So, I suppose setting axis=-1 is safer if we consider such a case.
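To see the difference concretely, here is a small sketch (the values are made up) showing that axis=1 fails on a 1-D tensor while axis=-1 works at any rank:

```python
import tensorflow as tf

v = tf.constant([1., 2., 3.])   # a plain Python list becomes a 1-D tensor

# axis=1 does not exist for a 1-D tensor, so this raises the error quoted above:
try:
    tf.reduce_sum(v, axis=1)
except tf.errors.InvalidArgumentError as e:
    print("axis=1 failed:", type(e).__name__)

# axis=-1 always means "the last dimension", whatever the rank:
print(float(tf.reduce_sum(v, axis=-1)))         # 6.0
print(tf.reduce_sum(tf.ones([2, 3]), axis=-1))  # [3. 3.]
```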

Ah, I see. I just fixed the problem, thank you for the help!

I have some issue in this exercise:

`TypeError: unsupported operand type(s) for -: 'list' and 'list'`

I can’t understand it…

ID Labo: raxubbfrdehp

That means you are trying to do a subtraction (an arithmetic operation) and the two operands you have given it are “lists”, which is not an arithmetic type in Python. So why did that happen? The operands most likely need to be tensors or arrays. If that is not enough to get you to an understanding of your problem, then please show us the complete exception trace that you are getting.
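A quick illustration of what goes wrong and one way to fix it (the values here are made up; `tf.subtract` is just one option — converting either operand with `tf.convert_to_tensor` also works):

```python
import tensorflow as tf

a = [1., 1.]   # plain Python lists
b = [0., 2.]

# a - b would raise:
#   TypeError: unsupported operand type(s) for -: 'list' and 'list'

# A TF op converts its list arguments to tensors, so this works:
diff = tf.subtract(a, b)
print(diff.numpy())   # [ 1. -1.]

# Equivalently, convert one side explicitly and use the - operator:
diff2 = tf.convert_to_tensor(a) - b
print(diff2.numpy())  # [ 1. -1.]
```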

This doesn’t work either. `tf.reduce_sum()` doesn’t work with scalars:

```
Invalid reduction dimension (-1 for input with 0 dimension(s) [Op:Sum]
```

(Maybe it did in some older version?)

Well, perhaps your mistake is that the input shouldn’t be a scalar?

Please either create a new thread about your problem or show us the full exception trace you are getting.

Is it OK to post parts of the solution here?

The problem here is that the triplet loss function is supposed to call tf.reduce_sum() twice: first to collapse the 128 feature-vector elements into one value per sample (going from [m, 128] to [m]), then to collapse the m samples into a scalar (going from [m] to a scalar).

This works fine when the input given is ([m, 128], [m, 128], [m, 128]):

```
y_pred = (tf.keras.backend.random_normal([3, 128], mean=6, stddev=0.1, seed = 1),
tf.keras.backend.random_normal([3, 128], mean=1, stddev=1, seed = 1),
tf.keras.backend.random_normal([3, 128], mean=3, stddev=4, seed = 1))
loss = triplet_loss(y_true, y_pred)
```

But it doesn’t work if the input is ([2], [2], [2]):

```
y_pred_perfect = ([1., 1.], [1., 1.], [1., 1.,])
loss = triplet_loss(y_true, y_pred_perfect, 5)
```

This can of course be worked around in code, but it makes it quite cluttered (and I wonder if the autograder will trip on it).

What am I missing?

It’s actually easy if you take the hint that was given earlier in this thread: in the first application of `reduce_sum`, you want to reduce on the last dimension, right? But you don’t know if the rank will be 1 or 2. If you take advantage of the Python syntax of using -1 to say “the last dimension”, then it works in either case.
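That hint can be sketched as a small helper (a hypothetical name, not from the notebook) that works unchanged on both the batched and the single-sample test inputs:

```python
import tensorflow as tf

def squared_dist(x, y):
    # Reduce on the last dimension, whatever the rank of the inputs.
    return tf.reduce_sum(tf.square(tf.subtract(x, y)), axis=-1)

# 2-D case: (3, 128) -> (3,)
d2 = squared_dist(tf.ones([3, 128]), tf.zeros([3, 128]))
print(d2.shape)   # (3,)

# 1-D case, like the later test values: (2,) -> scalar
d1 = squared_dist([1., 1.], [0., 0.])
print(d1.shape)   # ()
print(float(d1))  # 2.0
```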

OK, here we go:

*{moderator edit - solution code removed}*

Gives:

```
loss = tf.Tensor(527.2598, shape=(), dtype=float32)
---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
<ipython-input-69-26a10cad58d9> in <module>
13
14 y_pred_perfect = ([1., 1.], [1., 1.], [1., 1.,])
---> 15 loss = triplet_loss(y_true, y_pred_perfect, 5)
16 assert loss == 5, "Wrong value. Did you add the alpha to basic_loss?"
17 y_pred_perfect = ([1., 1.],[1., 1.], [0., 0.,])
<ipython-input-68-72be2eca1873> in triplet_loss(y_true, y_pred, alpha)
28 basic_loss = pos_dist - neg_dist + alpha
29 # Step 4: Take the maximum of basic_loss and 0.0. Sum over the training examples.
---> 30 loss = tf.reduce_sum(tf.maximum(basic_loss, 0.0), axis=-1)
31 ### END CODE HERE
32
/opt/conda/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs)
199 """Call target, and fall back on dispatchers if there is a TypeError."""
200 try:
--> 201 return target(*args, **kwargs)
202 except (TypeError, ValueError):
203 # Note: convert_to_eager_tensor currently raises a ValueError, not a
/opt/conda/lib/python3.7/site-packages/tensorflow/python/ops/math_ops.py in reduce_sum(input_tensor, axis, keepdims, name)
1982
1983 return reduce_sum_with_dims(input_tensor, axis, keepdims, name,
-> 1984 _ReductionDims(input_tensor, axis))
1985
1986
/opt/conda/lib/python3.7/site-packages/tensorflow/python/ops/math_ops.py in reduce_sum_with_dims(input_tensor, axis, keepdims, name, dims)
1993 return _may_reduce_to_scalar(
1994 keepdims, axis,
-> 1995 gen_math_ops._sum(input_tensor, dims, keepdims, name=name))
1996
1997
/opt/conda/lib/python3.7/site-packages/tensorflow/python/ops/gen_math_ops.py in _sum(input, axis, keep_dims, name)
10521 return _result
10522 except _core._NotOkStatusException as e:
> 10523 _ops.raise_from_not_ok_status(e, name)
10524 except _core._FallbackException:
10525 pass
/opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in raise_from_not_ok_status(e, name)
6841 message = e.message + (" name: " + name if name is not None else "")
6842 # pylint: disable=protected-access
-> 6843 six.raise_from(core._status_to_exception(e.code, message), None)
6844 # pylint: enable=protected-access
6845
/opt/conda/lib/python3.7/site-packages/six.py in raise_from(value, from_value)
InvalidArgumentError: Invalid reduction dimension (-1 for input with 0 dimension(s) [Op:Sum]
```

You don’t want the axis on the last `reduce_sum`, right?

Ah, it was right there: “For `tf.reduce_sum` to sum across all axes, keep the default value axis=None.”

Thanks!
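For reference, a tiny demonstration of that default (made-up values): axis=None sums over every axis, and it also handles the 0-D case where axis=-1 raises the error quoted earlier.

```python
import tensorflow as tf

m = tf.constant([[1., 2.], [3., 4.]])
print(float(tf.reduce_sum(m)))                # 10.0 -- axis=None sums all axes

# It even works on a 0-D (scalar) tensor, where axis=-1 would fail:
print(float(tf.reduce_sum(tf.constant(5.))))  # 5.0
```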

And now that I read the notebook again, they also gave you the -1 hint in the instructions as well. The “meta” lesson here is that saving a couple of minutes by not reading the instructions very carefully is not always a net savings of time.

I wonder if the template code can be improved too. The comments in the `triplet_loss` function are clearly confusing:

```
y_pred -- python list containing three objects:
anchor -- the encodings for the anchor images, of shape (None, 128)
positive -- the encodings for the positive images, of shape (None, 128)
negative -- the encodings for the negative images, of shape (None, 128)
```

I think the real problem is in how they constructed the test cases. Some of the `anchor`, `positive` and `negative` values end up as type “list”, and they handle the single-sample ones differently than the multi-sample cases. The result is that you end up with 1-D arrays in some cases and 2-D in others.

Let me run some experiments and then I will file a bug and suggest a better way to construct the test cases.

Hi experts,

I am stuck on this one, failing on the last assert test: I keep getting the value 3.

I’ve tried quite a few different things for troubleshooting, but I just can’t get my head around what is going on… yet.

Both distances (L2 norms) use reduce_sum with axis=-1. The final loss is computed with the default, no axis.

I am printing out the interim output of the last two test cases before the end of the triplet loss, so hopefully you may see what my blind spot is and point me in the right direction. And I am using all tf math operators, by the way.

```
pos_dist: [0. 0.]
neg_dist: [2. 2.]
basic_loss before adding alpha: [-2. -2.]
basic_loss after adding alpha, before tf.reduce_sum: [1. 1.]
Loss: 2.0, reduced_sum: 2.0, alpha: 3
<<<
pos_dist: [5. 2.]
neg_dist: [1. 5.]
basic_loss before adding alpha: [ 4. -3.]
basic_loss after adding alpha, before tf.reduce_sum: [ 5. -2.]
Loss: 3.0, reduced_sum: 3.0, alpha: 1
<<<
```

```
AssertionError                            Traceback (most recent call last)
in
     31 if (loss == 4.):
     32     raise Exception('Perhaps you are not using axis=-1 in reduce_sum?')
---> 33 assert loss == 5, "Wrong value. Check your implementation"
     34 # END UNIT TEST

AssertionError: Wrong value. Check your implementation
```

Many thanks in advance.

Cheers,

MCW

Hmmmm, figured it out now. I’d put the last reduce_sum in the wrong spot.
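For anyone else hitting the same 3-versus-5 result, here is a sketch of why the ordering matters, using the per-example values from the printout above:

```python
import tensorflow as tf

basic_loss = tf.constant([5., -2.])   # per-example basic_loss from the test case

# Summing before the maximum lets the negative example cancel the positive one,
# and the clamp then has nothing to do: total is 3.0 (the wrong value).
wrong = tf.maximum(tf.reduce_sum(basic_loss), 0.0)
print(float(wrong))   # 3.0

# Clamping each example to 0 first, then summing, gives the expected 5.0.
right = tf.reduce_sum(tf.maximum(basic_loss, 0.0))
print(float(right))   # 5.0
```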

I am glad you found it on your own!