Using in-place operators `+=` and `-=` appears to interfere with the autograder in Assignment 3

In the function update_parameters, where we are asked to update the neural network parameters via gradient descent, I found that using the in-place operator -= caused the autograder's tests to fail.

For example, my solution using this syntax:

    W1 -= <my_code>
    W2 -= <my_code>
    b1 -= <my_code>
    b2 -= <my_code>

fails, whereas this code passes:

    W1 = <my_code>
    W2 = <my_code>
    b1 = <my_code>
    b2 = <my_code>

I found this surprising because, treated as a black box, the function implemented with in-place operators produces the same return value. Yet using them seems to break the internal tests run against the function.
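One plausible cause (this is my guess about the grader's internals, not something I can confirm): with NumPy arrays, -= mutates the array in place, and that array is usually the same object stored in the parameters dictionary passed into the function. So the caller's dictionary gets modified as a side effect, which a grader comparing against the original inputs could detect. Plain = only rebinds a local name and leaves the input untouched. A minimal sketch of the difference, using a toy dict shaped like the assignment's parameters:

```python
import numpy as np

# Hypothetical setup mirroring the assignment's conventions:
# `params` stands in for the parameters dict the grader passes in.
params = {"W1": np.array([1.0, 2.0])}
grad = np.array([0.5, 0.5])
learning_rate = 0.1

# In-place update: W1 aliases params["W1"], so -= mutates
# the array inside the caller's dict as a side effect.
W1 = params["W1"]
W1 -= learning_rate * grad
print(params["W1"])  # mutated: [0.95 1.95]

# Rebinding: `-` creates a new array and = rebinds the local
# name, so the caller's dict is left untouched.
params2 = {"W1": np.array([1.0, 2.0])}
W1 = params2["W1"]
W1 = W1 - learning_rate * grad
print(params2["W1"])  # unchanged: [1. 2.]
```

So the two versions are only equivalent from the outside if nobody looks at the input dictionary again afterwards.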

I wanted to post this here to potentially save someone some time.

If this is in any way against the forum's code of conduct, I ask a moderator to please remove this post.

Hey @ekiefl, which course and assignment is this?

Neural Networks and Deep Learning, Assignment 3