Course 4 Week 4 project 1

I don't know what the problem is, but I am getting an error in the Face Recognition project on the first function, triplet_loss. I'm getting this error from the test: "AssertionError: Wrong value. Are you applying tf.reduce_sum to get the loss?"

Here I printed pos_dist, neg_dist, and then loss. This is the last block printed out:
pos tf.Tensor(0.0, shape=(), dtype=float32)
neg tf.Tensor(4.0, shape=(), dtype=float32)
loss tf.Tensor(0.0, shape=(), dtype=float32)

It is supposed to equal 2, and if I am right, the values printed for pos and neg should give a loss of 2. Am I missing something in my code?

{moderator edit - solution code removed}

Even if I cast the loss to dtype=tf.int64, I get this error:
TypeError: Cannot convert 1.0 to EagerTensor of dtype int64

This is the last print block I get:

New Line
pos tf.Tensor(0.0, shape=(), dtype=float32)
neg tf.Tensor(2.0, shape=(), dtype=float32)
loss tf.Tensor(1, shape=(), dtype=int64)

My loss is showing as an int of 1, so why is it throwing an error about converting the float 1.0?

So I submitted it as-is for grading and scored 67/100, but I need at least 70 points to pass this assignment. The version that produced the second error is what scored the 67.

Note that they specifically tell you to sum over an axis for pos_dist and neg_dist, so the results should be vectors. You're getting scalar tensors, which means the sum is collapsing everything.
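To make that concrete, here is a rough illustration with made-up shapes (this is not the notebook's solution code) of how the axis argument changes what tf.reduce_sum returns:

```python
import tensorflow as tf

# Toy "squared differences": 2 examples with 3 encoding values each
# (the shape is invented purely to show what the axis argument does).
diff_sq = tf.constant([[1., 4., 9.],
                       [0., 1., 4.]])

# No axis: sums every element, collapsing the result to a single scalar tensor.
print(tf.reduce_sum(diff_sq))           # tf.Tensor(19.0, shape=(), dtype=float32)

# axis=-1: sums along the last axis only, keeping one value per example.
print(tf.reduce_sum(diff_sq, axis=-1))  # tf.Tensor([14.  5.], shape=(2,), dtype=float32)
```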


OK, I removed the casting, but I am still getting an error on the last test. It prints out all 6 checks, and it's the 6th test that fails with this message:

AssertionError Traceback (most recent call last)
in
23 y_pred_perfect = ([[1., 0.], [1., 0.]],[[1., 0.], [1., 0.]], [[0., 1.], [0., 1.]])
24 loss = triplet_loss(y_true, y_pred_perfect, 3)
---> 25 assert loss == 2., "Wrong value. Are you applying tf.reduce_sum to get the loss?"

AssertionError: Wrong value. Are you applying tf.reduce_sum to get the loss?

Here is everything I print out:
New Line
pos tf.Tensor(9865.781, shape=(), dtype=float32)
neg tf.Tensor(10190.6, shape=(), dtype=float32)
loss tf.Tensor(0.0, shape=(), dtype=float32)
loss = tf.Tensor(0.0, shape=(), dtype=float32)
New Line
pos tf.Tensor(0.0, shape=(), dtype=float32)
neg tf.Tensor(0.0, shape=(), dtype=float32)
loss tf.Tensor(5.0, shape=(), dtype=float32)
New Line
pos tf.Tensor(0.0, shape=(), dtype=float32)
neg tf.Tensor(2.0, shape=(), dtype=float32)
loss tf.Tensor(1.0, shape=(), dtype=float32)
New Line
pos tf.Tensor(2.0, shape=(), dtype=float32)
neg tf.Tensor(0.0, shape=(), dtype=float32)
loss tf.Tensor(2.0, shape=(), dtype=float32)
New Line
pos tf.Tensor(0.0, shape=(), dtype=float32)
neg tf.Tensor(0.0, shape=(), dtype=float32)
loss tf.Tensor(0.0, shape=(), dtype=float32)
New Line
pos tf.Tensor(0.0, shape=(), dtype=float32)
neg tf.Tensor(4.0, shape=(), dtype=float32)
loss tf.Tensor(0.0, shape=(), dtype=float32)

Why on earth would you want a cost value to be an Integer? You must have done some coercion in your code to cause that.

Here's my function:

{moderator edit - solution code removed}

It's just 4 lines that I added.

Did you read my previous comment about the “axis” on the reduce_sum calls? The instructions in the comments are quite clear on that point. Also where does it say anything about using a function decorator for triplet_loss?

Sorry, I am honestly not sure when to use that decorator, so I threw it on to see if it changed anything. Unfortunately I think it did, and it led me down a lot of wrong forum threads chasing that error. I will read over that decorator again. I just tried axis=0 and got a different result. I didn't realize that the default axis was None; I thought it was 0. But I explicitly called it out and got different results. Unfortunately it did not change the grade at all. The error I am getting now is:

[ValidateApp | WARNING] For autograder tests, expecting output to indicate
partial credit and be single value between 0.0 and max_points.
Currently treating other output as full credit, but future releases
may treat as error.
[ValidateApp | WARNING] For autograder tests, expecting output to indicate
partial credit and be single value between 0.0 and max_points.
Currently treating other output as full credit, but future releases
may treat as error.
Tests failed on 1 cell(s)! These tests could be hidden. Please check your submission.

But when I run validation in the notebook, it comes back with zero errors: "Success! Your notebook passes all the tests."

Is there a reason that the notebook is telling me everything is correct when the autograder is telling me it is not and failing my grade?

Apparently the test cases in the notebook are not as restrictive as the grader test cases.

You are still misinterpreting what to do with the "axis" parameter. In the instructions, here's what they say (a small sketch of what that means in practice follows the quote):

* For steps 1 and 2, maintain the number of *m* training examples and sum along the 128 values of each encoding. `tf.reduce_sum` has an axis parameter. This chooses along which axis the sums are applied.
* Note that one way to choose the last axis in a tensor is to use negative indexing (axis=-1).
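
As a rough sketch of what those two bullets describe (the tensors, shapes, and variable names here are invented for illustration and are not the notebook's code):

```python
import tensorflow as tf

# Tiny dummy encodings: m = 2 examples with encoding length 4 instead of 128,
# purely to show the shapes involved.
anchor   = tf.random.normal((2, 4))
positive = tf.random.normal((2, 4))
negative = tf.random.normal((2, 4))

# Sum the squared differences along the last (encoding) axis, so each distance
# keeps one entry per training example: shape (m,), not a scalar.
pos_dist = tf.reduce_sum(tf.square(anchor - positive), axis=-1)
neg_dist = tf.reduce_sum(tf.square(anchor - negative), axis=-1)
print(pos_dist.shape, neg_dist.shape)   # (2,) (2,)
```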

{moderator edit - solution code removed}

Here is a picture of my code using axis=-1. It returns Success, but it still grades as 67/100.

You have the logic for the sum and the maximum “inside out” on the final loss value. You take the maximum with 0 first and then take the sum. The max of the sums is not the same as the sum of the maxes.

But I take your point that this is a bug in the test cases if it doesn’t catch that mistake. I will file a bug about that.
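
If it helps to see why the ordering matters, here is a tiny invented example (these are not the notebook's test values): whenever any triplet already satisfies the margin and has a negative basic loss, summing first lets that negative value cancel out loss from other triplets.

```python
import tensorflow as tf

# Invented per-example basic losses: one triplet violates the margin (+5.0)
# and one already satisfies it comfortably (-3.0).
basic_loss = tf.constant([5., -3.])

# Correct order: clip each example at 0 first, then sum over the batch.
sum_of_maxes = tf.reduce_sum(tf.maximum(basic_loss, 0.0))   # 5.0

# "Inside out" order: sum over the batch first, then clip the single total at 0.
max_of_sum = tf.maximum(tf.reduce_sum(basic_loss), 0.0)     # 2.0

print(sum_of_maxes.numpy(), max_of_sum.numpy())             # 5.0 2.0
```

That cancellation is why the inside-out version misses the 527.xxx expected value on the first test case even though it happens to match on some of the smaller checks.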


Ah crap, I did not see that. These are the little oversights that always get me and drive me crazy. I switched the two and it came out 100/100. Thank you for clarifying that for me. I was going crazy squinting at the screen trying to read the text and read my code. I lost my glasses a couple of months ago and don't start my job till the 14th, so I will hopefully be able to read clearly again soon. Thank you again so much; this one was really driving me crazy. Done with Course 4!! Got my certificate, and on to the last class of the specialization.

But yeah, the notebook will say Success with those functions switched, with axis=0, and with axis=-1.

I’m curious what you did to get that “popup” message about passing all the tests. I’ve never seen that, so you must have done something I’m not aware of.

With your “inside out” max bug, you get the wrong loss for the first test case and it does not match the expected value of 527.xxx. But the tests don’t actually fail for that. I’ll file a bug about it …

It's been on all of them for other certs I did. For these courses it is hit or miss whether it's there, but it's just a button in the toolbar that says Validate. It's so much easier having that to check than running through the whole notebook worrying whether I made a change somewhere and didn't run that cell. So many headaches are possible without it.

Thanks for pointing me to that. As I mentioned I had obviously never used that before. It may be a useful shortcut, but maybe if you’d actually run the test cell for triplet_loss manually, you would have noticed that the “Expected Value” for the cost did not match your value. Of course I still consider it a bug that the test doesn’t throw an error because of that and will file that in a few minutes (as soon as I can play with the various combinations to probe the limits of what passes).

I tried going through it manually and things were not adding up correctly either, lol. It was just an issue with me not reading the instructions correctly. I make stoopid oversights like that pretty often, unfortunately, but I do it far less than a couple of years ago when I first learned Python.

For the bugs, can you not just apply the hidden tests to the notebook? I'm pretty sure I have taken classes before where they used hidden cells to run the hidden tests.

Hi, Dennis.

Sorry, I have no visibility into how any of this stuff is actually implemented. All I can do is report things to the Course Staff. They’ve just switched to a whole new platform from Coursera for all the grading in this new version of the courses and I get the general impression that everyone is kind of stumbling around in the dark stubbing their toes on things as they try to figure out what is possible and what the limitations of the new platform are.

I’ve filed a bug describing the various problems we’ve covered on this thread, so thanks for your help on documenting/discovering the various landmines here. The good news is that I think there’s at least one really easy improvement, which is to add an “assert” on the 527.xxxx “expected value” for the first test case for triplet_loss. At least with that in place, the user of the “Validate” button could not miss the fact that their value for that test case was incorrect.