Module 3 Programming Assignment issue

The assignment_part_1/unittests.py asserts that the fixed code should generate the value 43:

n,seed, expected = 30,10,43

However, the original Python 2.7 code generates 50:

(python27) jovyan@59a1a9a3f958:~/work/assignment_part_1$ python --version
Python 2.7.15
(python27) jovyan@59a1a9a3f958:~/work/assignment_part_1$ python magic_summation_python27.py 30 10
Magic summation is equal to: 50.
(python27) jovyan@59a1a9a3f958:~/work/assignment_part_1$ cd backup_data/
(python27) jovyan@59a1a9a3f958:~/work/assignment_part_1/backup_data$ python magic_summation.py 30 10
Magic summation is equal to: 50.
(python27) jovyan@59a1a9a3f958:~/work/assignment_part_1/backup_data$ 

Are we expected to fix the code to match the unittest's value of 43, or should we preserve the behaviour of the original pre-upgrade function (which returns 50)?

Additionally, for assignment_part_2

There seems to be an issue with the conda setup: none of the file:/// dependencies in requirements.txt can be found or loaded properly, yet the instructions specifically indicate that

  1. YOU MAY ONLY CHANGE PANDAS AND/OR NUMPY LIBRARY, ANY OTHER CHANGE MAY CAUSE A ZERO SCORE.

But how can we get working dependencies if the packages can't be accessed? For example, these entries in requirements.txt point at local build paths:

python-dateutil @ file:///home/conda/feedstock_root/build_artifacts/python-dateutil_1626286286081/work
pytz @ file:///home/conda/feedstock_root/build_artifacts/pytz_1693930252784/work
PyYAML==6.0.1
pyzmq==25.1.2
six @ file:///home/conda/feedstock_root/build_artifacts/six_1620240208055/work
toml==0.10.2
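(For context: a file:/// requirement points pip at a local path, and those /home/conda/feedstock_root/... directories are conda-forge build artifacts that only exist inside the image the requirements file was exported from, which is why they can't be resolved here. The change the instructions do allow would just be ordinary version pins for the two permitted libraries, roughly like the lines below, where the version numbers are placeholders of mine and not values from the assignment:)

numpy==1.26.4
pandas==2.1.4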

Interestingly, 50 passed the grading … so the unittest is just wrong…


However, for assignment_part_2, I'm not sure why the grading is failing, as this error should only appear if the pandas version is too new:

There was an error grading your submission. Details:
type object ‘DataFrame’ has no attribute ‘from_items’

The code works fine in a Python 3 venv and in the conda environment in the terminal…
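(For reference, DataFrame.from_items was deprecated in pandas 0.23 and removed in 1.0, so that error is exactly what an overly new pandas on the grader would produce. A rough sketch of the usual replacement, with made-up column data purely for illustration:)

import pandas as pd

items = [("col_a", [1, 2, 3]), ("col_b", [4, 5, 6])]

# Old API, removed in pandas >= 1.0:
# df = pd.DataFrame.from_items(items)

# Equivalent on modern pandas; plain dicts keep insertion order on Python 3.7+:
df = pd.DataFrame.from_dict(dict(items))
print(df)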

Are you doing the assignment in your own environment?

No, it's online in the quiz interface's Jupyter terminal, lab ID: [edited]

Hi @BlueFox

I have fixed the unittest in part 1. Regarding part 2, I am investigating this issue. Thanks for bringing it up. I have removed your lab ID as it is sensitive information.

I’ll let you know once the issue is fixed.

Best,
Lucas

Hi all, the issue is now fixed. Please try to submit your solution again.

Hi, thanks for the update!
Assignment 2 is now much simpler, but it still doesn’t seem to work properly.

I got the following error:

  • Grader Error: Grader feedback not found

Visit the Discussion forum to see if your peers are experiencing or have found resolutions for similar errors. If the error isn’t resolved in 24 hours, please reach out to Coursera through our Help Center.

Actually, I forgot to re-run bash submit_solution.sh (since requirements.txt is now part of the submission).

Now it works fine, thanks!


@BlueFox @lucas.coutinho … I am getting 43 after making the code compatible with Python 3 today. Is there any issue with assignment 1?

Initial magic_list: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30]
Indices to remove: {0, 2, 4, 5, 6, 8, 11, 14, 15, 16, 18, 19, 20, 21, 22, 24, 27, 28}
Filtered magic_list: [2, 4, 8, 10, 11, 13, 14, 18, 24, 26, 27, 30]
Transformed list: [2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 30]
Magic summation is equal to: 43.

I have the same problem as @SUJITHA

Hi all,
I have the same problem: my code produces 47 or 53, but never 50.

Here’s my code for verification:

# mentor edit: code removed

Note that I tried to run n=30 and seed=10 and I keep getting 47 or 53.
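(My suspicion is that the rewritten code makes different random calls than the original: with a fixed seed, NumPy's generator and Python's built-in random produce unrelated streams, and even within NumPy the draws only reproduce if the same functions are called in the same order and the same number of times. A tiny illustration, not the assignment code:)

import random
import numpy as np

# Same seed, different generators: the two streams have nothing in common.
np.random.seed(10)
print(np.random.randint(0, 30, size=10))           # NumPy's stream

random.seed(10)
print([random.randint(0, 29) for _ in range(10)])  # stdlib random's stream

# Even staying inside NumPy, drawing the indices with a different function
# changes which elements get removed, and therefore the final sum.
np.random.seed(10)
print(np.random.permutation(30)[:10])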

Hi all, I have figured out the bug. It's the LLM I was using, ChatGPT 4o. You must use the ChatGPT provided by the course, which runs GPT-3.5; with that, the answer comes out correct.


I'd say that relying on a specific LLM version is quite a brittle strategy. The real answer is that we always need to compare the LLM's output carefully against what was there before replacing the old code.
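A cheap way to do that comparison is to treat the original Python 2.7 output as a golden value and re-check it after every LLM-assisted edit. A rough sketch, assuming the migrated function can be imported as magic_summation (the module path and signature are my guesses, not taken from the assignment):

# Minimal regression check against the pre-migration behaviour (hypothetical names).
from magic_summation import magic_summation

EXPECTED_PY27 = 50  # value printed by the original Python 2.7 script for n=30, seed=10

result = magic_summation(30, 10)
assert result == EXPECTED_PY27, "got {}, expected {}".format(result, EXPECTED_PY27)
print("Migration preserves the original behaviour.")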

Hi Bernard,

When you launched the lab's GPT-3.5, did you notice this?

I cannot get the expected answer at all:

Failed test case: magic_summation executed properly, but output is incorrect for parameters n = 30 and seed = 10.
Expected: 50
Got: 52.56272099972978
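(A non-integer result like that usually points at the classic Python 2-to-3 division change: in Python 2, / between two ints floors the result, while in Python 3 it returns a float, so a faithful migration has to turn those spots into //. A tiny illustration, not the assignment code:)

# Python 2: 7 / 2 == 3 (ints floor-divide); Python 3: 7 / 2 == 3.5
print(7 / 2)    # 3.5 on Python 3
print(7 // 2)   # 3, matching the old Python 2 behaviour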

Was the GPT upgraded recently? Yes or no?

Yes, the GPT is upgraded. When you use the GPT-4o with Vision in the lab, it works and you get 50. If you use an external LLM like I did (ChatGPT 4o in ChatGPT Plus, or Claude 3 on Claude AI), the answer does not come out to 50, which surprises me. It would be best to pass our feedback on to the people behind the course.


No, it's not upgraded. It says it is, but it's not.

I'm running into the same issues as above. It is unfortunate that one cannot progress to the next parts of the course and come back to this one when it hits technical problems; until I can progress, I might not have time to finish it. The next course would be AI-Powered Software and System Design, which is disabled until you pass this one. Maybe during the hypercare phase of a course like this it would be a good idea to let participants progress further until robustness is achieved, and only then revert to sequential enablement of modules?


Did you try selecting GPT-3.5 from the drop-down list?

Thank you TMosh for your prompt reply! I do not have that option: the only entry in the dropdown list is GPT-4o with Vision, while the one actually running is GPT-3.5. As a result I get the same "47 or 53, but never 50" problem as above.

Concretely, here is the output when running python unittests.py:
Original list (Count: 30): [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30]
Random indices (Count: 10): [20 7 5 2 3 21 13 27 12 1]
New list (Count: 20): [1, 5, 7, 9, 10, 11, 12, 15, 16, 17, 18, 19, 20, 23, 24, 25, 26, 27, 29, 30]
Modified list (Count: 20): [5, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 30]
Magic summation is equal to: 53

Failed test case: magic_summation executed properly, but output is incorrect for parameters n = 30 and seed = 10.
Expected: 50
Got: 53