Course 2 Week 1
I’m seeing really odd things happening in the coursera interface.
I’m unable to report problems with sections; the “report a problem” checkboxes will not check.
When I run cells, the output matches the expected output; later, those same cells have their output hidden and, what a coincidence, the output no longer matches. (Of course, this is not realized until after the notebook is submitted!)
The grader marked sections wrong. In two sections that check for anomalies, the output was “There are no anomalies”, yet the grader marked these sections as incorrect. It seems the grader was checking against items not listed in the requirements.
Whether the invisible mischief is coming from the user’s device or the app side, this is not the way to compete, agreed?
Individuals and companies are more vulnerable these days. How can we identify and properly categorize these issues?
I wanted to add that my browser is up to date, with a version only a few weeks old. I realize the helpdesk cannot troubleshoot, but I thought they might be able to pass along screenshots and/or categorize the data so that repeated or severe issues get floated up for proper review and analysis. However, they continue to repeat tips about browsers and clearing the cache, so I’m left without confirmation that they understand.
I see many others posting about their Coursera work here; if there is a better place to bring awareness, please let me know. I hope this is helpful.
Hi! I understand your frustration about the possible incorrect grader feedback. A few learners have reported it as well and I’ve forwarded the concern to Coursera for checking. In the meantime, there is a workaround that might resolve your issue. In a nutshell, you’ll need to get a fresh copy of the notebook and paste your solutions there. Please take a look at this thread for details: C2W1 Assignment - grader output showed error - #4 by chris.favila .
And here are additional things you may need to watch out for:
- There should not be any cell in the notebook that throws an error. Otherwise, the grader will halt and not give a partial grade. This usually stems from submitting an assignment without completing all the exercises (e.g. doing Exercise 1 then pressing Submit immediately).
- You may have renamed the notebook and pressed the Submit button there. The grader expects the default filename when grading; that is the file opened when you launch the notebook from the Coursera classroom. For example, if the default filename is C2W1_Assignment.ipynb, then the grader will grade that notebook even if you pressed the Submit button from C2W1_Assignment_2021_05_11.ipynb.
Hope this helps! If not, feel free to update here. Thanks!
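To catch the first tip before submitting, one option is to scan the saved .ipynb for cells whose stored outputs contain an error. A minimal standard-library sketch (the demo notebook and its filename are made up for illustration; a real .ipynb has more fields, but error outputs are recorded the same way):

```python
import json

def cells_with_errors(nb_path):
    """Return indices of code cells whose saved outputs contain an error."""
    with open(nb_path) as f:
        nb = json.load(f)
    bad = []
    for i, cell in enumerate(nb.get("cells", [])):
        if cell.get("cell_type") != "code":
            continue
        if any(out.get("output_type") == "error" for out in cell.get("outputs", [])):
            bad.append(i)
    return bad

# Tiny on-disk notebook for demonstration: one clean cell, one errored cell.
demo = {"cells": [
    {"cell_type": "code", "outputs": []},
    {"cell_type": "code", "outputs": [{"output_type": "error", "ename": "NameError"}]},
]}
with open("demo.ipynb", "w") as f:
    json.dump(demo, f)

print(cells_with_errors("demo.ipynb"))  # [1]
```

If this prints any indices, re-run the notebook top to bottom before pressing Submit.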
Hi Chris, thanks for these tips. However, neither of these happened in my submission. You can see below these two sections with no anomalies. The grader marked these sections as wrong: one was marked 0/10 and the other 5/10. I resubmitted just minutes later, without changes or cell reruns in these sections, and both sections passed 10/10. If others are having similar experiences, it might be worth documenting and looking into further on the Coursera side.


I think this should be all of the information that could be helpful for Coursera. I will add that the helpdesk seems to be trolling the customers: no fewer than 32 emails on this chain, most replies making no sense (every response comes from a different individual who ignores the previous replies and information), and almost none acknowledging the issues. I think most customers would rather have an acknowledgement of the issue, and a good workaround if possible, knowing that the issues are correctly documented for later review by the experts.
Hi! Thank you very much for this information. I will append your observation about the sudden correctness of the grader to my previous report to Coursera. That is indeed very strange!
I will also forward your concern re: the help desk. To clarify, when you say help desk, do you mean the Coursera Help Center? What address did you send the email to? Or are you referring to the Chat function on the lower right? If you have a ticket number or any reference I can use, just let me know so I can attach it as well.
In any case, it is best to post here in Discourse if you will have additional technical concerns about the course materials. I think the Help Center is mainly for account or enrolment concerns so the agents may not have known how to handle it.
Thanks again for the feedback!
Thank you for the follow up, Chris. I will remember these tips for future reference.
Tina
Chris, I see that I replied to an email, and the reply is automatically being posted publicly here? If that is the case, it is deceptive and very unethical.
Hmm. I see that I can delete a reply, but not the original post, in this forum. Well, I just wanted to inform, and I feel I’ve done that, so the topic is no longer needed (unless you would like to see if others are experiencing the same?). Feel free to have an admin delete it.
Hi Tina! Please check your inbox. I just sent you a message re: Coursera’s feedback about this issue. Thanks!
Thank you, Chris. I’m glad to see the feedback and I hope that the information I provided has been helpful. Logging and analyzing the data about issues can be very helpful for running the organization effectively and efficiently. I’ll add that so far, I think the content for the ML Ops course has been very good. I look forward to continuing to be a part of this community.
Today I have experienced changing the code, saving the file, rerunning the cell, and receiving the same syntax error as before I changed it.
So, I had an unmatched closing parenthesis (two of them), removed one (leaving one), saved the file, and reran the cell, yet the same error appears, referencing the old code (two closing parentheses).
This was in the coursera lab, although I feel I’ve seen this happen many times in other environments as well, so I thought I’d mention it here.
This could be why there is so much difficulty troubleshooting. For example, I cannot get the greater-than comparison to work, even though I’ve tried just about every workaround and my code looks much like the others I’ve seen. If the environment is not recognizing the updated code, that could be why.
I’ve noticed that the autosave does not seem to be working, and ctrl+s seems not to be working. If I click the save icon at the top of the file, and then rerun the cell, it references updated code.
Hi Tina!
Thank you for flagging this! I think this is related to Python importing a module file only once per session (search for Module Reloads here for related reading). That means your previous transform module is not reimported even after you’ve made the correction.
I think TFX gets around this by providing an enable_cache argument in the context’s run method. With that, you can re-run your Transform component this way: context.run(transform, enable_cache=False). I realize this detail was not shown in the ungraded lab or the official documentation, so it’s best if we make it part of the starter code for this assignment. This was actually done in the Week 3 assignment, but we did not notice that it was missing from Week 2.
We will fix this ASAP. Thanks again! As for the autosave not working, I hope it is just a glitch. I just used CTRL+S recently and it seems to work okay. Will keep an eye on it just in case.
Hope you found this useful!
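To make the module-caching point concrete, here is a minimal sketch you can run anywhere. The module name mymod is a throwaway chosen for illustration, standing in for the transform module:

```python
import importlib
import sys
from pathlib import Path

sys.dont_write_bytecode = True  # keep the demo free of stale .pyc files

# Write a throwaway module and import it.
Path("mymod.py").write_text("VALUE = 1\n")
import mymod
print(mymod.VALUE)  # 1

# Edit the file on disk, then import again: no effect, because Python
# returns the cached module object from sys.modules.
Path("mymod.py").write_text("VALUE = 2\n")
import mymod
print(mymod.VALUE)  # still 1

# reload() re-executes the updated source in the same module object.
importlib.reload(mymod)
print(mymod.VALUE)  # 2
```

This is why editing a module file in the lab has no visible effect until the cache is bypassed, whether via importlib.reload or an option like enable_cache=False.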
Thanks for the suggestion, Chris, I will certainly give it a try.
The saving issues seem to come and go, I’ve noticed. Ctrl+S started working again in a new session.
This suggestion does not seem to work, unfortunately. I’m still receiving the same error. Per our last discussion, I’ll still need to verify that the ‘traffic_transform.py’ content is overwriting/being imported and referenced correctly. Any additional suggestions for verifying this are appreciated.
Perhaps I will also need to look at how the inputs are created and handled (all with the given code). All the code up to this point works fine. If I piece apart the code line by line, it works, with the exception of being able to handle the inputs.
The code in the tutorial labs is structured differently, and nothing is jumping out at me as a source of disconnect there.
Also I’m still not sure what is going on with autosave - last checkpoint was 25 minutes ago. This source says default autosave interval is 120 seconds, so this seems like another thing I’ll need to look into.
I’ve run %autosave 120, which output “Autosaving every 120 seconds”, but it doesn’t seem to be working, or at least it doesn’t indicate when it is.
Also I was able to read lines back from the transform file and verified that my changes are there. So the issue may be with the inputs creation and passing to the transform argument. I think all of that is given code? It doesn’t match the structure from the tutorials.
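For anyone else verifying the same thing, a small sketch of the check: after saving, confirm both the file’s modification time and its on-disk text. The filename here is illustrative, standing in for traffic_transform.py:

```python
import os
import time
from pathlib import Path

# Illustrative filename; in the lab this would be the transform module file.
path = Path("transform_demo.py")
path.write_text("# edited version\n")

# If the editor's save reached disk, the modification time is recent
# and the on-disk text matches what you typed.
age_seconds = time.time() - os.path.getmtime(path)
print(f"last written {age_seconds:.1f}s ago")
print(path.read_text())
```

A checkpoint timestamp far in the past (like the 25 minutes above) is a sign the save never reached disk.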
Hi Tina! Please check your inbox. It seems there’s a deeper issue here that might involve the lab environment and I’d like to replicate it.
But before that, please also double check first your solution at the last part of the transform module. The starter code (last part of that file) is written this way:
# Create a feature that shows if the traffic volume is greater than the mean and cast to an int
outputs[_transformed_name(_VOLUME_KEY)] = tf.cast(
    # Use `tf.greater` to check if the traffic volume in a row is greater
    # than the mean of the entire traffic volume column
    tf.greater(None, None(tf.cast(inputs[_VOLUME_KEY], tf.float32))),
    tf.int64)
Make sure that you’re only replacing the None keywords here. There are only two, and one of these will need a method from the tft module. If that is not the issue, then we can proceed to further troubleshooting. Thanks!
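For intuition only, and deliberately not the assignment solution, the completed line boils down to “flag rows whose volume is strictly greater than the column mean, as an integer”. In plain Python:

```python
# Plain-Python sketch of "greater than the column mean, cast to int".
volumes = [100.0, 250.0, 400.0]           # toy traffic-volume column
mean = sum(volumes) / len(volumes)        # 250.0
flags = [int(v > mean) for v in volumes]  # strictly greater, like tf.greater
print(flags)  # [0, 0, 1]
```

tf.greater performs the same strictly-greater comparison element-wise, and tf.cast(..., tf.int64) does the integer conversion.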
Per my previous message, this ‘corrected itself’ with no changes to the code. Just wanted to reply here as well. Thanks again.