Programming Assignment: Solving Versioning and Dependency Conflicts with an LLM

Hello, I am currently taking the Team Software Engineering with AI course, Module 3, and I'm stuck on Programming Part 1. I've spent 11+ hours chatting with three different chatbots about converting code from Python 2.7 to Python 3. I need help. Below is the code I am stuck on:

[code removed by moderator]

1 Like

Same here, but I fixed the code. The issue is a compatibility difference between Python 2 and Python 3 in the division operator, which the LLM does not fully address by default unless you explicitly ask it to.
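For anyone else hitting this: the core difference (a generic illustration, not the assignment's actual code) is that in Python 2 the `/` operator floor-divides when both operands are ints, while in Python 3 `/` always performs true division and returns a float; `//` is the explicit floor-division operator in both versions.

```python
# Python 3 semantics shown below.
print(7 / 2)    # 3.5  -- true division; Python 2 would print 3 for int operands
print(7 // 2)   # 3    -- floor division; same result in Python 2 and Python 3
print(-7 // 2)  # -4   -- floors toward negative infinity, not toward zero
```

So a mechanical Python 2 to 3 port has to decide, for every `/` on integers, whether the old floor behaviour was intended (use `//`) or true division is fine (keep `/`).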

1 Like

Hello there,

You had not posted under the related specialization/course, so I have moved your post to the right place.

Second, you are not allowed to post code publicly, according to the rules of this forum.

Third, try the above advice and see if it resolves your issue; otherwise, send me your solution in a private message and I will have a look at it!

1 Like

Thank you, this helped. Switching chatbots from the various versions of ChatGPT to the class's GPT did the trick.

1 Like

Thanks for the hints. I spent a good bit of time with other LLMs as well. The following worked:

  1. Using the provided ChatGPT
  2. Focusing the LLM on the division operator difference between Python 2.7 and Python 3.

I’ll tell you what: that was a pain. No LLM was able to solve the code adaptation. Since I work with Python, I had to fix the code MANUALLY in order to pass the tests; without Python knowledge it cannot be done. I also did the second exercise without an LLM. I’m reflecting now that LLMs still have strong limitations and weaknesses when it comes to code assistance.

1 Like

This assignment was totally annoying. I used ChatGPT 4o for an hour and assessed both the division-handling difference and the numpy versioning behaviour. Then I switched to ChatGPT o1-preview and got it in 20 minutes or so, feeding it the examples analyzed by the unit-test function. It eventually produced a working version.
Definitely the most annoying test of the unit.
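For others checking the numpy side (a generic sketch, not the grader's code): under Python 3, numpy array division follows the same rule as scalar division, so `/` produces a float array and `//` must be used wherever the old Python 2 code relied on integer results.

```python
import numpy as np

arr = np.array([1, 2, 3])
print(arr / 2)    # true division  -> float array  [0.5 1.  1.5]
print(arr // 2)   # floor division -> integer array [0 1 1]
```

If downstream code indexes with or compares against these values, the silent int-to-float change is exactly the kind of bug the tests catch.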

1 Like

I concur.

Or maybe this task isn’t one that chat tools are optimized for.

They do have limitations; think about natural vs. artificial intelligence! But the point is that you can use them to help and assist you more efficiently. That is what any technology is for, not to do the intelligent part for you!

1 Like

Might I get some help in private? I’m stuck, and from my point of view I don’t know what other hints/suggestions to follow.

1 Like

Yes, send me a private message; I will reply if I can be of help!

No need anymore. I fixed the main code, but the grader and the unit tests are flawed. When I run the Python 2.7 code completely unmodified (from the backup folder) with the inputs where I get grader/test errors, I still get the wrong result.

I’m going to try to cover all of these cases by hand, since my Python 3 code behaves identically to the given Python 2.7 code.

Can someone please share the Python 2.7 version of the code? I lost the code and the backup after some retries.

Also, I'm not sure if I'm the only one stuck here, but after 3 hours I was not able to complete the first part, migrating from Python 2 to 3.

I’m having the same problem

Here is the magic_summation.py file stored in the backup_data folder. It is written in Python 2.
magic_summation.py (1.6 KB)

3 Likes

Is there a way for the creator to check the lab about migrating from Python 2.7 to Python 3?

I have been spending a lot of time on that test. I created a new method from scratch and checked the division operator with Python 3, but it is still failing. Can someone please check it?

I’m not sure if I’m the only one, but something makes me think there is a problem with that challenge.

With Python 3, replace the division operator (/) with the double slash (//) wherever an integer result is expected.

I tried that, and I also tried without an LLM, and the test is still failing. The same goes for the grader; I don't know what is missing there.

The magic_summation.py in backup_data gives a result of 50 when run with n=30 and seed=10, but the unit test expects a result of 46 for the same input.

I am confused, because this is the original file, which I have not modified, and it is still giving the wrong results.
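One possible source of a mismatch like this (an assumption on my part, not something confirmed by the assignment): Python 3.2 changed the algorithm behind `random`'s integer methods (`randrange`, `randint`, `choice`, `shuffle`, `sample`), so with the same seed they can produce different sequences than Python 2.7, even though `random.random()` itself is stable across versions. A quick way to check which paths diverge on your own setup is to print a few draws from each path under both interpreters and diff the output:

```python
import random

def sample_after_seed(seed, n):
    # Reseed, then draw n values from the integer path (randrange)
    # and the float path (random) so the two can be compared
    # across interpreter versions.
    random.seed(seed)
    ints = [random.randrange(100) for _ in range(n)]
    floats = [random.random() for _ in range(n)]
    return ints, floats

if __name__ == "__main__":
    ints, floats = sample_after_seed(10, 5)
    print(ints)
    print(floats)
```

If the float draws match between Python 2.7 and Python 3 but the integer draws differ, the 50-vs-46 gap may come from the random sequence, not from your ported arithmetic.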