Programming Assignment: Conditional GAN "Try Again · 0/5 points earned"

I finished the “Conditional GAN” programming assignment for Week 4. All the tests passed, but I received “Try Again · 0/5 points earned”. I’m pretty sure I removed all the print statements and other code I had added for testing purposes. Could someone please help me figure out what happened?

Hi @logos_masters,

Did you accidentally remove any necessary comments? Be sure to keep all the required comments unchanged, and write your code within the indicated comment sections. If you’re still having issues, try backing up your code, reverting the notebook to the initial version, and then writing your code in again!

Hope this helps!

I don’t believe so, but I’ll try that. Maybe I accidentally deleted something without realizing it.

Oh ok, let me know if you need further help! I usually revert my notebook to the initial version and rewrite the code to be sure everything is fine!

Unfortunately, I have the same problem with a fresh notebook. Under “My submissions”, it says:
Cell #7. Can't compile the student's code. Error: RuntimeError("Expected object of scalar type Long but got scalar type Float for sequence element 1 in sequence argument at position #1 'tensors'",)

However, according to the tests in my notebook, Cell #7 succeeded.

Oh! This issue isn’t due to a grading mistake; it looks like a data type mismatch in your code. Make sure the data types are consistent, especially where a specific type is expected. Try printing the data types at key points to spot where the mismatch occurs (the error says the sequence elements should all be long, but at least one of yours is currently float).
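
For example, a quick check like this can reveal the mismatch (just a sketch; I’m guessing at your variable names, so use whatever your code actually passes to torch.cat):

```python
import torch

# Hypothetical stand-ins for the tensors being concatenated; adjust the
# names to match the tensors your code actually passes to torch.cat.
one_hot_labels = torch.zeros(4, 5, dtype=torch.long)
noise = torch.randn(4, 10)

print(one_hot_labels.dtype, noise.dtype)  # torch.int64 torch.float32

# If these dtypes differ, an older PyTorch raises exactly the RuntimeError
# above when both tensors are passed together to torch.cat.
```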


I think this is another instance of the issue described on this thread.


Thanks. I’m running it again with the type conversion applied to each input in combine instead of to the final output; we’ll see if that takes care of it.
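
Roughly this, for reference (a sketch; the exact function signature and dim in the notebook may differ):

```python
import torch

def combine(x, y):
    # Cast each input to float *before* concatenating, so the grader's
    # older PyTorch never sees mixed dtypes inside torch.cat.
    return torch.cat((x.float(), y.float()), dim=1)

# What I had before: cast only the final output. The notebook's newer
# PyTorch tolerates the mixed-dtype cat, but the grader's version does not:
# return torch.cat((x, y), dim=1).float()
```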

Is there any real-world context in which it might be important to do it in this order, or is it just a quirk of the grading script?

As explained in that other thread, it’s a quirk: the grader uses an older version of PyTorch than the one in the notebook. So it’s a “versionitis” problem, which happens pretty frequently in the Python/ML space: the APIs don’t “hold still” for very long, and sometimes they change in incompatible ways.
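
To make the incompatibility concrete, here’s a minimal sketch (assuming a long/float mix like the one in your error message):

```python
import torch

x = torch.zeros(2, 3, dtype=torch.long)   # e.g. one-hot class labels
y = torch.randn(2, 3)                     # e.g. noise vector, float32

# On the notebook's newer PyTorch this succeeds via type promotion; on the
# grader's older PyTorch it raises the RuntimeError quoted earlier.
z = torch.cat((x, y), dim=1)

# Casting each input first behaves the same on both versions.
z = torch.cat((x.float(), y), dim=1)
```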

That thread also contains a link to yet another way the version mismatch can cause problems with the grader. This was reported to the course staff more than a year ago, but I guess it’s too much of a hassle to reissue the graders. Sorry!
