Bug in Test (comparator)

The test function used for checking answers is buggy: it allows incomplete answers to pass. (Concretely, Week 1 & 2 of Course 4 - CNNs, potentially others.)

It will output “all tests passed” even though the answer is incomplete, because it only checks the layers that exist (based on the input tuples). This is due to the zip() function used: if the passed iterables have different lengths, the one with the fewest items determines the length of the resulting iterator. Therefore, if I create e.g. only the first 2 layers of a desired 10-layer network, the tests will still pass!
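A minimal sketch of the failure mode described above (the layer summaries here are hypothetical stand-ins, not the actual notebook's data): zip() silently stops at the end of the shorter iterable, so a truncated answer compares clean.

```python
# Hypothetical (layer_type, output_shape) summaries, as a comparator might use.
learner = [
    ("Conv2D", (None, 15, 15, 5)),
    ("BatchNormalization", (None, 15, 15, 5)),
]
expected = learner + [
    ("Activation", (None, 15, 15, 5)),
    ("MaxPooling2D", (None, 7, 7, 5)),
]  # ...the learner is missing these layers entirely

# zip() yields only len(learner) == 2 pairs; the missing layers are never compared.
for got, want in zip(learner, expected):
    assert got == want
print("All tests passed!")  # prints despite two missing layers
```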

Fix: Add a check that the length of the learner's layer list matches the instructor's expectation (assert len(learner) == len(instructor)).
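The suggested fix could look like the following sketch (this is an illustrative reimplementation, not the actual comparator from the course repository): assert the lengths match before iterating with zip().

```python
def comparator(learner, instructor):
    """Compare learner layer summaries against the instructor's expected ones."""
    # Fix: reject answers whose layer count differs from the expected model,
    # so zip() truncation can no longer hide missing layers.
    assert len(learner) == len(instructor), (
        f"Layer count mismatch: got {len(learner)}, expected {len(instructor)}"
    )
    for got, want in zip(learner, instructor):
        assert got == want, f"Mismatched layer: got {got}, expected {want}"
    print("All tests passed")
```

An alternative would be itertools.zip_longest with a sentinel fill value, which makes any missing layer show up as a mismatch instead of being skipped.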


@mubsi, your attention is needed here.

Hey @TMosh, can you please make an issue out of it? Thanks.

I can take care of converting this into a GitIssue. @marcmuc has done us a great service here by actually debugging the problem! I’ve seen several recent threads from students who pass the tests in the notebook for IdentityBlock, but fail the grader. I’m guessing that this is exactly the cause …

Indeed he has. Thank you @marcmuc for that. And thank you @paulinpaloalto.

Can you tag me into those threads? I want to replicate the issue. Thanks, Paul

@Mubsi: Sure, here’s one from earlier today, but I should warn you that we haven’t yet confirmed the theory that the bug Marc pointed out is the cause here.

Here’s one from yesterday, but there again I have not yet been able to confirm the diagnosis.


@Mubsi: Having looked at this a little bit, I’m not really convinced this particular bug is what caused the identity_block grader problems. What @marcmuc has pointed out is a legitimate bug in the comparator function, but I can only see it being used to test the full ResidualNet50 model at the very end. The nature of the bug is that the learner’s model has to be truncated at the end, not have missing layers in the middle. So I can get the “all tests passed” message on the full model with the last few layers dropped, which means the Flatten and Dense layers that are part of the template. And then the training crashes and burns because the output shapes are wrong. So there is a remaining mystery here about the two threads I linked above.

Just to close the loop here, I filed a GitIssue about the issues on this thread. @Mubsi, you’ve probably already seen it, but just in case :nerd_face:
