C2W2 Assignment: partial error in the grader for preprocessing function

It seems like the code is correct and the manually examined outputs look fine, but the grader gave me the following text that’s not 100% clear to me:

Under processing_fn I got 5/10 points.

Failed test case: transformed_data incorrectly defined (showing changed values with respect to correct answer).
Expected:
{},
but got:
{"root[0]['traffic_volume_xf']": {'new_value': 1, 'old_value': 0}, "root[3]['traffic_volume_xf']": {'new_value': 1, 'old_value': 0}, "root[4]['traffic_volume_xf']": {'new_value': 1, 'old_value': 0}}.

It’s clear that it’s not happy with the traffic_volume_xf values but the code looks good to me.
I don’t want to post my answer here, so please let me know if there is a good way to resolve this issue / understand the problem the notebook/grader is having - or maybe it’s me? :sweat_smile:

Edit: what are new_value vs old_value? One of them seems to be the “expected” value, but it’s not clear which is which, and all three printed examples here have the same pair of 1 vs 0.

Thanks in advance!
Tigran


Hi Tigran! Based on another learner’s submission, this problem came up when the comparison was mean > traffic volume. Please check that your solution computes traffic volume > mean when you use tf.greater(). Hope this helps!
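To illustrate why the argument order matters: tf.greater(a, b) evaluates a > b elementwise, so swapping the arguments inverts every label where the two values differ. Here’s a minimal plain-Python sketch of that behavior (the traffic volumes below are made-up numbers, not the course data):

```python
# Hypothetical traffic_volume values standing in for the tensors
# inside the graded preprocessing_fn.
traffic_volume = [100, 5000, 300, 7000]
mean = sum(traffic_volume) / len(traffic_volume)  # 3100.0

# Correct order: tf.greater(traffic_volume, mean) -> is volume > mean?
correct = [int(v > mean) for v in traffic_volume]   # [0, 1, 0, 1]

# Swapped order: tf.greater(mean, traffic_volume) -> is mean > volume?
# Every label where the values differ is flipped.
swapped = [int(mean > v) for v in traffic_volume]   # [1, 0, 1, 0]
```

This matches the grader output pattern above, where every reported mismatch is a 1-vs-0 pair.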

Thanks Chris,

I checked, and the order of the arguments seems to be correct.
I’ve replaced the casting code provided in the template on that line with the traffic_volume computed above. Though I didn’t expect this to change anything, I couldn’t rerun the grader because I already have a submission above the passing grade.

Thanks again.

@tigran not sure if it is applicable to you, but this helped me: C2W2 - Assignment Score Issue - Machine Learning Engineering for Production(MLOps) / MLEP Course 2 - DeepLearning.AI

Thank you @hengzc - I also used reduce_mean, so that might have been the issue, though I don’t see what the difference would be. While looking through the docs I also didn’t find/notice any mean other than reduce_mean.

I can no longer run the grader to check, but this seemed to have helped others.

For completeness, here is the final conclusion from the post above: use tft.mean where needed, and don’t use tf.reduce_mean!
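The difference does matter in a Transform preprocessing_fn: tf.reduce_mean only averages over the tensors in the current batch, while the tft.mean analyzer computes one mean over the full dataset during the analysis phase and applies it everywhere. A plain-Python sketch with made-up numbers (not the course data) showing how a per-batch mean can flip a label:

```python
# Made-up traffic_volume values split across two hypothetical batches.
batches = [[100, 200], [5000, 7000]]

# tf.reduce_mean-style: a separate mean per batch, because reduce_mean
# only sees the values in the batch it is given.
per_batch_means = [sum(b) / len(b) for b in batches]  # [150.0, 6000.0]

# tft.mean-style: a single mean over the full dataset, computed once
# in the analysis phase and then applied to every batch.
all_values = [v for batch in batches for v in batch]
global_mean = sum(all_values) / len(all_values)       # 3075.0

# The same value can land on different sides of the two thresholds:
value = batches[0][1]                                 # 200
label_per_batch = int(value > per_batch_means[0])     # 1 (above its batch mean)
label_global = int(value > global_mean)               # 0 (below the global mean)
```

That mismatch would show up in the grader exactly as 1-vs-0 differences on individual rows, even when the comparison logic itself is written correctly.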

That’s where I generally run into issues as well. My shaky knowledge of which functions live in which Python module slows me down; if a function is nested more than one layer below where I’m looking, I’m in for a lot of document searching/reading…heh.