Cannot submit W1A2 in sequential networks

I get the following error:
Cell #10. Can't compile the student's code. Error: IndentationError('unexpected indent', ('/tmp/student_solution_cells/cell_10.py', 12, 1, ' np.random.seed(24)\n'))

1 Like

Are you sure everything runs correctly in the notebook? If so, then the theory must be that the notebook environment is more forgiving than the grader about mixing spaces and tabs in the indentation. Indentation is a key part of Python syntax, so it should be pretty easy to find the failing line and redo its indentation with consistent whitespace.
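If it helps, here is a minimal sketch of how you could scan a cell's source for lines that mix tabs and spaces in their leading indentation (the usual cause of this IndentationError). The function name and file layout are just illustrative:

```python
# Scan a Python source file for lines whose leading indentation mixes
# tabs and spaces, which the grader may reject even if the notebook runs.
def find_mixed_indentation(path):
    bad_lines = []
    with open(path) as f:
        for lineno, line in enumerate(f, start=1):
            # leading whitespace = everything before the first non-indent char
            indent = line[:len(line) - len(line.lstrip(" \t"))]
            if " " in indent and "\t" in indent:
                bad_lines.append(lineno)
    return bad_lines
```

Running it on a saved copy of the offending cell should point straight at the line to re-indent.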

In my copy of that notebook, that line is in the sample_test cell. There should have been no need for you to modify that cell, and the indentation there is pretty straightforward: a single level.

The other thing to check is whether what you see is what gets graded: if you are working in a renamed copy of the notebook, hitting Submit does not submit that copy. It submits the "standard" notebook, meaning the one opened by the "Work in Browser" or "Launch Lab" link.

1 Like

If you used any of the hint code, and used copy-and-paste to insert it into a graded function, that can cause problems.

Sometimes the formatting of the hint code contains markup formatting that will work in the notebook, but will not work with the grader.

Usually this can be detected if you see keywords in your code cells that are highlighted in red font.

So, check your notebook and see if you see any programming syntax that is highlighted in red for no good reason.
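One way to check for that mechanically is to scan the pasted code for non-ASCII characters such as "smart" quotes, which the notebook may render but the grader rejects. This is just a quick sketch:

```python
# Flag any non-ASCII character (e.g. curly quotes from copy-pasted hint
# code) in a code string, returning (index, character) pairs.
def find_non_ascii(code):
    return [(i, ch) for i, ch in enumerate(code) if ord(ch) > 127]
```

For example, `find_non_ascii("x = \u2018hello\u2019")` flags the two curly quotes, while plain ASCII code returns an empty list.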

2 Likes

Hi.
I am sure that everything is OK in the notebook. I double-checked it.
This time I get 75% due to a problem in Exercise 3:

Code Cell UNQ_C3: Unexpected error (KeyError('da_next')) occurred during function check. We expected function optimize to return Test 3 failed. Please check that this function is defined properly.

though when I run it in the notebook, it passes the check:

1 Like

That means your mistake is in the clip function, but neither the unit tests nor the grader tests detect the error as a problem in clip: it gets reported as a problem in optimize. Here's a thread that explains what is going on there. Sorry, the error messages are not helpful here.

We have obviously seen this before and I thought I had filed a bug report about it, but I can’t find it now. Given that it happens every few months, this really should be fixed and shouldn’t even be that hard to do: just add a test case in the notebook for the clip function to specifically catch this “error”. Although in the bigger picture, I’m not getting why they make such a big deal of this. What would be the harm of supplying an extra gradient other than a little wasted compute and memory?

1 Like

Actually please read this post on another thread that talks in more detail about this. You might need to read more of the thread, but that one post contains the key points. You really have to go out of your way to step on this landmine. They tell you in the comment what to do and even if you disobey or misunderstand the intent of those instructions, they still provide you with the safety net to catch the error: that last line before the return that computes the gradients dictionary to return. But you also apparently deleted that. So they were really trying to help you out here to avoid this issue.
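To make the failure mode concrete, here is an illustrative sketch (not the course code; all names and values are made up) of how an extra key returned by clip turns into a KeyError in a later check:

```python
import numpy as np

# A clip that faithfully clips every entry it is given, including
# extras such as 'da_next' that backprop leaves in the dictionary.
def clip_all(gradients, maxValue):
    return {k: np.clip(v, -maxValue, maxValue) for k, v in gradients.items()}

expected_keys = {"dWaa", "dWax", "dWya", "db", "dby"}
grads = {k: np.full(2, 10.0) for k in expected_keys}
grads["da_next"] = np.full(2, 10.0)      # extra gradient from backprop

clipped = clip_all(grads, 5)
extra = set(clipped) - expected_keys     # the leftover key

# A checker that indexes its expected-values table with the returned
# keys blows up on the extra one, far away from the real mistake:
reference = {k: np.full(2, 5.0) for k in expected_keys}
try:
    diffs = {k: clipped[k] - reference[k] for k in clipped}
except KeyError as e:
    failing_key = e.args[0]              # 'da_next'
```

The clipping itself is correct; the check simply never expected the sixth key, which is why the error surfaces under optimize rather than clip.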

I will still file a bug about this, because this whole thing just seems like a waste of time and mental energy to me. What is the real conceptual point they are making here with the test that catches this misbehavior? I’m guessing that your code is simpler and cleaner and what harm is really done by returning an extra gradient?

1 Like

In addition to Paul’s comments, keep in mind that passing the unit tests in the notebook never proves that your code is perfect.

1 Like

Hi Paul.
I looked at the thread you pointed out. Not sure what was the exact code used there.

Anyway, I am not sure what exactly I should do with my clip code:

for gradient in gradients.keys():
    np.clip(gradients[gradient], -maxValue, maxValue, out=gradients[gradient])
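For what it's worth, the loop above relies on np.clip's out= argument, which writes the clipped values back into the same array object in place. A minimal check of that behavior, with a made-up gradient:

```python
import numpy as np

# np.clip with out= mutates the array in place, so every name bound to
# that array sees the clipped values afterwards.
gradients = {"dW": np.array([10.0, -10.0, 0.5])}
alias = gradients["dW"]          # a second name bound to the same array

for key in gradients.keys():
    np.clip(gradients[key], -1.0, 1.0, out=gradients[key])
```

After the loop, both `gradients["dW"]` and `alias` hold the clipped values, because they are the same array.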

But frankly it all seems like semantics, and not that important. I'll settle for the current grade, as long as it lets me continue.

Thx, Gilad

1 Like

Your code is working too hard.
You already have “gradient” from the for-loop.
Just use it where appropriate.

1 Like

I must admit that something is fishy here:
My original code was:
dWaa, dWax, dWya, db, dby = gradients['dWaa'], gradients['dWax'], gradients['dWya'], gradients['db'], gradients['dby']

### START CODE HERE ###
# Clip to mitigate exploding gradients, loop over [dWax, dWaa, dWya, db, dby]. (≈2 lines)
for gradient in gradients.keys():
    np.clip(gradients[gradient], -maxValue, maxValue, out=gradients[gradient])
### END CODE HERE ###

gradients = {"dWaa": dWaa, "dWax": dWax, "dWya": dWya, "db": db, "dby": dby}

return gradients

And it passed the test, although it shouldn't have, as the last line, gradients = {"dWaa": dWaa, "dWax": dWax, "dWya": dWya, "db": db, "dby": dby}, overrides the clipping done directly on the dictionary.

When I comment out the last line, it passes too, and my grade is 100!
When I tried your suggestion:
for gradient in gradients.keys():
    np.clip(gradient, -maxValue, maxValue, out=gradient)
It fails with:

TypeError: return arrays must be of ArrayType

And this is because I pass the key (a string) to np.clip instead of the array.

So my take is:

  1. You need to pass gradients[gradient] to np.clip, not the bare key.
  2. You should remove the dWaa, dWax, dWya, db, dby = gradients['dWaa'], gradients['dWax'], gradients['dWya'], gradients['db'], gradients['dby'] line from the code!
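The TypeError above is easy to reproduce; the exact message varies across NumPy versions, but the cause is the same. A small sketch:

```python
import numpy as np

# The loop variable from gradients.keys() is the key (a string), and
# np.clip cannot write its result into a string, so it raises TypeError.
gradient = "dWaa"
try:
    np.clip(gradient, -1, 1, out=gradient)
    raised = False
except TypeError:
    raised = True
```

Indexing with `gradients[gradient]` hands np.clip the actual array, which is what the out= argument requires.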

Thx for your support.
Gilad

1 Like

I disagree with your solution.

Because you’re given this:
dWaa, dWax, dWya, db, dby = gradients['dWaa'], gradients['dWax'], gradients['dWya'], gradients['db'], gradients['dby']

Then you can use this in the for-loop:
for gradient in [dWax, dWaa, dWya, db, dby]:

I have not tried your method of using gradients.keys().
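A sketch of why the loop over the unpacked array names also works (the numbers here are made up): np.clip with out= mutates each array in place, so rebuilding the dictionary from those same names afterwards reflects the clipped values.

```python
import numpy as np

# Clipping via the unpacked names mutates the underlying arrays in place,
# so the rebuilt dictionary contains the clipped values.
dWaa, dWax = np.array([7.0, -7.0]), np.array([0.2, 3.0])
maxValue = 1.0

for gradient in [dWaa, dWax]:
    np.clip(gradient, -maxValue, maxValue, out=gradient)

gradients = {"dWaa": dWaa, "dWax": dWax}
```

This version also has the side effect the assignment intends: only the five named gradients end up in the returned dictionary.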

1 Like

Update:

I tried your code for the clip() function for-loop, and it worked correctly and passed the grader.

1 Like

Thx


1 Like