Hi, my masked accuracy function returns NaN. I don't know what I missed.

Go through them all, and use the search engine.

Let me know if you are not able to understand any of those.

Understand that a value shows NaN when somewhere in your code a division by zero happens: a nonzero value divided by zero gives infinity or -infinity, and 0/0 gives NaN.

NaN represents missing or undefined data in Python. It typically appears when a mathematical operation has an undefined or nonsensical result. NaN is a floating-point value, created in Python with float('nan').
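To see this behavior concretely, here is a minimal, standard-library-only sketch of how NaN arises and why it is tricky to test for (plain Python raises on 0/0, so an undefined infinity subtraction is used to produce it):

```python
import math

# 0 / 0 in pure Python raises ZeroDivisionError, but floating-point
# operations with no defined result produce NaN instead
nan = float('inf') - float('inf')   # undefined -> nan

# NaN is the only float that is not equal to itself,
# so use math.isnan() to detect it, never ==
print(nan == nan)        # False
print(math.isnan(nan))   # True
```

Note that NumPy and TensorFlow follow IEEE 754 arithmetic, so dividing by zero there silently yields inf or NaN rather than raising an error, which is exactly why a bad mask can make an accuracy metric come out as NaN.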

So go through your code again and check which part might be causing this issue.

Keep learning and debugging!
All the best!!
Regards
DP

I don't understand how to mask. Do I mask with 0 and then cast that mask to np.float32? I think I masked with 0, so the result would be NaN. I am confused about the logic used to mask.

Hello @skinx.learning

Nowhere does it mention masking with 0. Lucas meant that if you do not pass an axis argument, it takes the default value of 0, so check whether you have chosen the correct value for axis.
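To illustrate why the axis argument matters here, a small sketch (the values are made up for illustration):

```python
import tensorflow as tf

logits = tf.constant([[0.1, 2.0, 0.3],
                      [3.0, 0.1, 0.2]])

# With no axis argument, tf.math.argmax defaults to axis 0,
# i.e. it takes the argmax down each column
default_axis = tf.math.argmax(logits)        # -> [1 0 0]

# For per-example class predictions you want the last axis,
# the one that runs over the classes
last_axis = tf.math.argmax(logits, axis=-1)  # -> [1 0]
```

Picking the wrong axis silently gives a tensor of the wrong shape and meaning, which then propagates into the comparison and the final division.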

I will explain step by step how the code should look.

First, I hope you have gone through the instructions below.

1. The first line of code finds the loss for each item in the batch.
You must always cast the tensors to the same type in order to use them in training. Since you will be doing divisions, it is safe to use the tf.float32 data type.
So you should apply tf.cast to the true labels with the data type mentioned in the instructions.

2. Next, to create the mask, we need to ignore certain values. This step is divided in two:
first, build the mask with tf.not_equal, comparing the true labels against the value -1;
then, tf.cast that mask using the same tf.float32 data type.

3. Now, to get the predicted values, you again work in two steps.
First, apply tf.math.argmax to the prediction logits (y_pred) with the axis being -1.
With that first step stored as y_pred_class, you then tf.cast y_pred_class to the tf.float32 data type.

4. Now, compare the true values with the predicted ones (again in two steps):
first, check whether y_true is tf.equal to y_pred_class, storing the result as matches_true_pred;
then, apply the same tf.cast to matches_true_pred with the tf.float32 data type.

5. Now multiply matches_true_pred from the previous line by mask.

6. The last step computes masked_acc (here, keeping the numerator and denominator separate is important):
divide the tf.reduce_sum of matches_true_pred by the tf.reduce_sum of mask.
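Putting the six steps together, a sketch of how the function could look (variable names follow the steps above; this is an illustration, not the graded solution):

```python
import tensorflow as tf

def masked_acc(y_true, y_pred):
    # Step 1: cast the true labels to tf.float32 so later divisions work
    y_true = tf.cast(y_true, tf.float32)
    # Step 2: build the mask -- ignore positions where the label is -1
    mask = tf.not_equal(y_true, -1)
    mask = tf.cast(mask, tf.float32)
    # Step 3: predicted class = argmax over the last axis of the logits
    y_pred_class = tf.math.argmax(y_pred, axis=-1)
    y_pred_class = tf.cast(y_pred_class, tf.float32)
    # Step 4: compare true labels with predictions
    matches_true_pred = tf.equal(y_true, y_pred_class)
    matches_true_pred = tf.cast(matches_true_pred, tf.float32)
    # Step 5: zero out the masked positions
    matches_true_pred *= mask
    # Step 6: accuracy = correct unmasked predictions / unmasked positions
    return tf.reduce_sum(matches_true_pred) / tf.reduce_sum(mask)
```

Note how this also explains the NaN from the original post: if the mask ends up all zeros (for example, by masking against the wrong value), tf.reduce_sum(mask) is 0 and the final division returns 0/0 = NaN.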

Regards
DP


Many thanks! The masking with 0 that I mentioned was not from Lucas; I confused it with another lecture. I really appreciate your help with every question I ask.


Omg, really? Can you find `not_equal` with Ctrl + F anywhere in the whole assignment? Or any hint that methods you didn't mention can be used?

Some of the hints are not provided directly in the assignment. I originally got that correction through the search tool on the Discourse community, but later I found it used in one of the non-graded exercise cells.

I was also stuck when doing that assignment because of the same issue.

Regards
DP

Hi,

For me the tests pass but the grader fails.

This is the message from the grader.

I also get this error when trying to train the model a few cells below.