Hi, the result from my masked_accuracy function is NaN. I don't know what I missed.
Thanks for your help in advance.
Did you go through all the existing threads about masked_accuracy? Read through @arvyzukai's responses; most questions are resolved by his comments.
There are almost 45 threads on this topic. I am also sharing one more thread by @lucas.coutinho, since, judging by your output, I suspect you have the same issue.
Go through them all, using the search feature.
Let me know if there is anything in them you are not able to understand.
Understand that a value shows NaN when one of your lines of code performs a division by zero, which yields an undefined result.
NaN represents missing or undefined data in Python. It is typically encountered when a mathematical operation produces an undefined or nonsensical value. NaN is a floating-point value, represented by float('nan') in Python.
So go through your code again and check which line might be causing this issue.
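The divide-by-zero point is easy to reproduce. Below is a minimal sketch (plain NumPy, variable names are mine) showing how a 0/0 division, which is exactly what happens when a mask sums to zero, produces NaN:

```python
import numpy as np

# If every position is masked out, the accuracy becomes 0.0 / 0.0,
# which is undefined and evaluates to NaN (not a number).
numerator = np.float32(0.0)    # e.g. sum of matched predictions
denominator = np.float32(0.0)  # e.g. sum of the mask
with np.errstate(invalid="ignore"):  # silence the runtime warning
    result = numerator / denominator

print(result)                  # nan
print(np.isnan(result))        # True
print(np.isnan(float("nan")))  # True
```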
Keep learning and debugging!
All the best!!
Regards
DP
I don't understand how to mask. Do I mask with 0 and then cast that mask to np.float32? I think I masked with 0, so the result is NaN. I am confused about the logic used to mask.
Hello @skinx.learning
Nowhere does it say to mask with 0. Lucas meant that if you do not pass an axis argument, it takes the default value of 0, so check whether you have chosen the correct value for axis.
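To see why the axis choice matters, here is a tiny illustration (in NumPy, with made-up toy logits; tf.math.argmax behaves the same once you pass the axis explicitly):

```python
import numpy as np

# Toy logits for a batch of 2 examples and 3 classes.
logits = np.array([[0.1, 0.7, 0.2],
                   [0.6, 0.3, 0.1]])

# argmax over the class axis: one prediction per example.
print(np.argmax(logits, axis=-1))  # [1 0]

# argmax over the batch axis instead: a different (wrong) answer.
print(np.argmax(logits, axis=0))   # [1 0 0]
```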
I will explain step by step how the code should look.
First, I am hoping you have gone through the instructions below.
Then, in the masked_accuracy graded cell:

The first step prepares each item in the batch. You must always cast the tensors to the same type in order to use them in training. Since you will perform divisions, it is safe to use the tf.float32 data type.
So you should apply tf.cast to the true labels with the data type mentioned in the instructions.
Next, to create the mask, we need to ignore certain values. This step is divided in two:
first, build the mask by applying tf.not_equal to the true labels with the value 1;
then tf.cast the resulting mask to the same tf.float32 data type.
Now, to get the predicted values, you again do it in two steps:
first, apply tf.math.argmax to the prediction logits (y_pred) with the axis being 1;
since that first step is stored as y_pred_class, you then tf.cast y_pred_class from the previous line to the tf.float32 data type.
Now compare the true values with the predicted ones (again in two steps):
first, check whether the y_true values are tf.equal to y_pred_class, storing the result as matches_true_pred;
then apply the same tf.cast to matches_true_pred with the tf.float32 data type.
Now multiply matches_true_pred from the previous line by the mask.

The last step is to compute masked_acc (here, computing the numerator and denominator separately is important):
you divide the tf.reduce_sum of matches_true_pred by the tf.reduce_sum of mask.
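The steps above can be sketched end to end. This is a NumPy sketch (so you can check the arithmetic by hand), not the graded solution: the function shape, the IGNORE_VALUE of 1, and the class axis are taken from the steps above, so double-check them against your own assignment's instructions, and replace each NumPy call with its TensorFlow counterpart (tf.cast, tf.not_equal, tf.math.argmax, tf.equal, tf.reduce_sum) in your actual code:

```python
import numpy as np

IGNORE_VALUE = 1  # per the steps above; check your assignment

def masked_accuracy(y_true, y_pred):
    # Step 1: cast true labels to float32 (safe for the division later).
    y_true = y_true.astype(np.float32)
    # Step 2: build the mask, ignoring entries equal to IGNORE_VALUE,
    # then cast it to float32 (tf.not_equal + tf.cast).
    mask = (y_true != IGNORE_VALUE).astype(np.float32)
    # Step 3: predicted class = argmax over the class axis,
    # then cast to float32 (tf.math.argmax + tf.cast).
    y_pred_class = np.argmax(y_pred, axis=-1).astype(np.float32)
    # Step 4: compare true vs predicted, cast to float32 (tf.equal + tf.cast).
    matches_true_pred = (y_true == y_pred_class).astype(np.float32)
    # Step 5: zero out the ignored positions.
    matches_true_pred *= mask
    # Step 6: accuracy = sum of matches / sum of mask (tf.reduce_sum).
    return matches_true_pred.sum() / mask.sum()

# Tiny made-up example: 4 positions, 3 classes, label 1 is ignored.
y_true = np.array([0, 2, 1, 2])
y_pred = np.array([[0.9, 0.0, 0.1],   # predicts 0 (correct)
                   [0.1, 0.1, 0.8],   # predicts 2 (correct)
                   [0.3, 0.4, 0.3],   # ignored position
                   [0.7, 0.2, 0.1]])  # predicts 0 (wrong)
print(masked_accuracy(y_true, y_pred))  # 2 of 3 counted positions, ~0.667
```

If the mask sums to zero (every position ignored), this division is 0/0 and you get NaN, which ties back to the debugging advice earlier in the thread.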
Regards
DP
Many thanks. The idea to mask with 0 did not come from Lucas; I confused it with another lecture. I really appreciate your help with every question I ask.
Omg, really? Can you find not_equal with Ctrl + F anywhere in the whole assignment? Or any hint that methods you didn't mention can be used?
Some of the hints are not directly provided in the assignment. I originally found that correction through the search tool on the Discourse community, but later I noticed it was used in one of the non-graded cells of the exercises.
I was stuck on the same issue when I was doing that assignment.
Regards
DP
Hi,
For me the tests pass but the grader fails.
This is the message from the grader.
I also get this error when trying to train the model a few cells below.
Any advice?
Best regards,
Sorin
I think you may have solved your issue already?
Based on the error, you are probably calling one of the functions with an incorrect argument.
Regards
DP